
The Playoff Paradox: Why My Business Was Stuck in Overtime (And How I Fixed It)

By Chris Jenkin, CEO

I’m writing this still stinging from the weekend.

If you know me at all, you know I’m a die-hard Buffalo Bills fan. Bills Mafia for life. And if you’re also a Bills fan, you already understand the specific, slow-burn agony that comes with it. This isn’t the pain of being bad. It’s worse than that.

It’s the pain of being almost great.

Nine years ago, the Bills hired a new head coach. Seven years ago, we drafted a quarterback with generational talent. The narrative practically wrote itself. Year after year, the team improved. Playoff appearances became routine. The organization earned respect. Analysts started using words like “window” and “inevitable.”

This season, many experts finally crowned us the favorite to go all the way.

But as the games unfolded, something felt off.

I didn’t see a team asserting dominance. I saw a team surviving itself. Dumb penalties. Clock management errors. Inexplicable play calls. We lost games we should have won and won games against Super Bowl contenders (sorry New England). The performance didn’t match the talent.

It was incoherent.

We limped into the playoffs as the sixth seed. We beat a strong Jaguars team in the Wild Card round, and for a brief moment, hope crept back in. Then came the trip to Denver to face the top seed.

We lost in overtime.

And not because we were outmatched. We had chances – multiple chances – to close the game. We had momentum. We had the quarterback. We had the pieces.

But we didn’t have control.

As the clock expired and the season ended yet again in the familiar fog of “almost,” my frustration shifted. Away from the players. Away from the refs. Away from bad luck.

Toward the sideline.

The Real Bottleneck

I’ve never quite connected with our head coach. Years ago, I noticed it in a press conference. Something about his presence felt… muted. At the time, I chalked it up to poor public relations skills.

But public relations isn’t the job. Winning is.

Coaches are ultimately judged on one thing: results. Their role is to take talent, align it, and produce outcomes. When a team consistently underperforms relative to its capability, the issue isn’t effort. It’s leadership.

Clock management. Strategic discipline. Situational awareness. These are not player problems. They are coaching problems.

And then the thought hit me, uncomfortably and unmistakably.

I stopped thinking about the Bills.

I started thinking about my business.

 

The Man in the Mirror

I’ve spent years building a company. Hiring talented people. Smart people. Hard-working people. People who, on paper, should be winning.

And yet, the story looked eerily familiar.

Revenue that refused to break out. Cash flow pressure that never fully resolved. Friction between teams. A sense of constant motion without clear forward progress. Always busy. Always tired. Always just short of the breakthrough.

For a long time, I blamed external forces. The market. Timing. Competition. Even my own team, quietly, in moments of frustration.

But here’s the truth most founders avoid:

If you have talent and you aren’t winning, the problem is you.

I am the head coach of this company.

If the strategy is unclear, that’s on me. If priorities shift too often, that’s on me. If execution feels frantic instead of focused, that’s on me. If we keep ending seasons in overtime, that’s on me.

I had hired my own Josh Allens – capable people who could perform at a high level. But talent without direction doesn’t win championships. It just creates wasted potential.

The win-loss record of this business is my responsibility. Full stop.

And that realization hurt more than the loss on Sunday.

 

Why the Biggest Companies Pay for Thinking

Once I swallowed that pill, I needed to pressure-test the conclusion. Was I over-personalizing the issue? Or is leadership really the central lever?

So I looked at the top of the business food chain.

What do companies like McKinsey & Company actually sell?

They don’t sell software. They don’t sell execution. They don’t even sell certainty.

They sell clarity.

They are paid obscene amounts of money to diagnose organizational truth. To identify misalignment, inefficiency, blind spots, and strategic incoherence. To tell leadership what they don’t want to hear but desperately need to know.

That’s when it clicked.

Most businesses don’t fail because they lack effort. They fail because they are operating under false assumptions.

And SMBs are the most vulnerable of all.

They don’t have boards forcing accountability. They don’t have consultants crawling through their operations. They don’t have time to step back and diagnose the system.

So they grind. They push harder. They add tools. They hire more people. They burn more cash.

And they wonder why nothing changes.

They are stuck in the Wild Card round, trying to outwork bad strategy.

 

The Missing Step: Diagnosis

That’s the part we skip.

We jump straight to solutions. New hires. New software. New marketing campaigns. All execution. No diagnosis.

You wouldn’t accept a doctor prescribing treatment without running tests. Yet in business, we do it constantly. We treat symptoms while the underlying condition worsens.

This is where my own company’s mission finally snapped into focus.

We are building a diagnostic engine called Gialyze™.

Originally, I thought of it as something external. A tool for clients. A product for the market.

But after this weekend, I decided to stop talking and start listening.

I ran Gialyze™ on my own company.

 

Turning the Lens Inward (Revised)

I wasn’t looking for validation. I wasn’t even looking for solutions yet.

What I wanted was visibility.

The hardest thing to live with as a founder isn’t failure – it’s not knowing where the real problems are. It’s the sense that something is off, but everything is too interconnected, too noisy, too close to see clearly.

That’s what finally pushed me to turn our diagnostic engine, Gialyze™, inward.

Gialyze isn’t publicly available yet, so I used an internal beta – the same system we’re building to solve this exact problem for other businesses.

I ran it looking for one thing:

Truth.

And that’s exactly what it delivered.

Not a list of “fix everything” recommendations. Not a motivational plan. Not a generic framework.

A clear, prioritized picture of where effort was being misallocated, where friction was compounding, and where leadership decisions (mine) were creating downstream drag.

It didn’t tell me we were failing.

It told me why we were stuck.

And for the first time in a long time, I knew where to start.

What Actually Changed (And What Didn’t)

To be clear: this didn’t magically turn everything around overnight.

What changed instantly was clarity.

Before, we were busy everywhere and decisive nowhere. After the diagnosis, we had a sequence. We had order. We had a map.

Instead of guessing:

  • what to fix first
  • where cash was really leaking
  • which initiatives mattered versus distracted

We had a ranked, evidence-based view of:

  • current state vs. trajectory
  • internal constraints vs. external pressures
  • effort vs. return mismatches

The execution? That’s happening now.

We’re actively implementing the corrections the diagnosis surfaced – tightening workflows, re-aligning resources, removing low-leverage activities, and fixing leadership-level decisions that were unintentionally slowing everything down.

Our goal is this:

We will no longer improvise in the fourth quarter.

We will run plays we understand, in the right order, with intention.

 

A Word on How Gialyze™ Actually Works

I want to briefly address why this system exists, because it didn’t come out of thin air.

Gialyze™ is powered by a proprietary AI model we’ve been building and fine-tuning specifically for SMB realities – not enterprise theory, not generic benchmarks, not surface-level dashboards.

We made a deliberate decision early on to invest in our own infrastructure. Our own machines. Our own training pipelines. Because diagnosis at this level requires control, depth, and contextual memory.

At a high level, Gialyze does three things:

  1. Data aggregation
    It gathers structured and unstructured data about a business, its market, and its competitors – not just performance metrics, but environmental signals.

  2. Many-model analysis
    Instead of relying on a single lens, it runs multiple analytical models in parallel to evaluate:

    • current operational state
    • likely trajectory
    • deviation from comparable patterns
    • internal vs external constraints

  3. Gap and priority resolution
    It identifies where reality diverges from intention and surfaces what matters most next – not everything, not hypotheticals, but actionable focus.
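The three stages above can be pictured as a simple pipeline. The sketch below is purely illustrative – every name, threshold, and “lens” in it is hypothetical, since the actual Gialyze internals aren’t public. It only shows the shape: aggregate signals, run several lenses over the same data, then rank what surfaces.

```python
from dataclasses import dataclass

# All names and rules here are hypothetical -- an illustration of the
# aggregate -> many-model -> prioritize shape, not the Gialyze implementation.

@dataclass
class Finding:
    area: str        # where the issue lives
    severity: float  # 0..1: how far reality diverges from intention

def aggregate(*sources: dict) -> dict:
    """Stage 1: merge performance metrics and environmental signals."""
    merged: dict = {}
    for src in sources:
        merged.update(src)
    return merged

def run_lenses(data: dict) -> list:
    """Stage 2: several analytical lenses evaluate the same data in parallel."""
    findings = []
    if data["burn"] > data["budget"]:               # current operational state
        findings.append(Finding("cash flow", 0.9))
    if data["growth"] < data["market_growth"]:      # deviation from comparables
        findings.append(Finding("go-to-market", 0.6))
    if data["wip_projects"] > data["team_size"]:    # internal constraint
        findings.append(Finding("focus", 0.8))
    return findings

def prioritize(findings: list, top: int = 2) -> list:
    """Stage 3: surface what matters most next -- not everything."""
    ranked = sorted(findings, key=lambda f: f.severity, reverse=True)
    return [f.area for f in ranked[:top]]

metrics = {"burn": 120, "budget": 100, "growth": 0.05,
           "wip_projects": 14, "team_size": 9}
signals = {"market_growth": 0.12}

data = aggregate(metrics, signals)
print(prioritize(run_lenses(data)))  # -> ['cash flow', 'focus']
```

The point of the third stage is the one the text makes: the output is a short ranked list, not a “fix everything” inventory.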

This isn’t about prediction theater. It’s about reducing blind spots.

And as a founder, that alone is worth everything.

 

The Season Isn’t Over – It’s Finally Clear

I’m sharing this not because everything is “fixed,” but because something far more important happened.

We removed ambiguity.

For the first time in years, I’m not waking up wondering:

  • what I’m missing
  • what I should be focusing on
  • whether effort is actually compounding

And the paralysis – the invisible weight of not knowing where to start – is gone.

If you’re a business owner reading this and you feel talented, capable, and exhausted by motion without momentum, understand this:

You don’t need to work harder. You don’t need more tools. You don’t need another hire.

You need clarity.

That’s what Gialyze™ gave me in internal beta. And that’s why we’re taking the time to get it right before bringing it to market.

The difference between “almost” and “winning” is rarely effort.

It’s visibility, sequencing, and leadership alignment.

Fix the coaching. Fix the strategy. Then execute relentlessly.

Then go win the Super Bowl.

The Politeness Trap: Why Saying “Please” to AI Is a Dangerous Habit

I was recently listening to an episode of the Moonshots podcast, a conversation between Peter Diamandis, Salim Ismail, Alexander Wissner-Gross, and Dave Blundin. These are four of the sharpest minds in futurism and systems thinking. They understand scale, entropy, and exponential technologies better than almost anyone.

Yet, halfway through the conversation, they all casually admitted to something that stopped me in my tracks.

They all say “please” and “thank you” to their Large Language Models (LLMs).

They weren’t laughing. They framed this not as a quirk of habit, but as a deliberate act of respect, a recognition that they believe they are interacting with the precursor to a sentient being. But while I respect their intellect, I believe this specific behavior is a mistake.

It’s not a mistake because it makes the machine “feel” anything – it doesn’t. It’s a mistake because of what it trains us to do.

We are walking a thin line between understanding a machine that is non-sentient and behaving as if it is. And when we blur that line with pleasantries, we aren’t being kind. We are engaging in a dangerous form of cognitive erosion.

The Pet Paradox: Who Is the Ritual For?

To understand why this matters, look at how humans treat pets.

We hang Christmas stockings for dogs. We buy them Halloween costumes. We bake them birthday cakes. We refer to them as our “children.”

I don’t care what people do with their pets; if it brings them joy, fine. But let’s be brutally honest about the mechanism: The dog has no idea what is going on.

A dog does not understand the concept of a spooky costume. It does not grasp the Gregorian calendar or the significance of a birthday. These rituals are not for the animal; they are for the human. We project our emotional needs onto a biological vessel that cannot reciprocate them in kind but acts as a convenient receptacle for our affection.

We are doing the exact same thing with AI.

When you say “please” to ChatGPT, or “thank you” to Claude, you are projecting agency onto a stochastic parrot. You are performing a social ritual for a probabilistic engine.

The danger, however, is that while a dog effectively is a “friend” in a biological sense, an AI is an optimization function. When we anthropomorphize it, we lower our guard exactly when we should be raising it.

The “Smart Person” Problem

The fact that Alexander Wissner-Gross, a physicist who thinks deeply about causal entropy and intelligence as a physical force, engages in this behavior is what worries me most.

When public intellectuals model this behavior, they legitimize it. They send a signal to the non-technical world that treating these systems like social peers is the “correct” way to interact.

There is a prevalent, unspoken belief driving this, particularly in Peter Diamandis’s orbit. It’s a modern Pascal’s Wager: “AI will eventually be sentient and billions of times smarter than us. If I am polite now, it might remember me kindly later.”

This is not engineering; it is superstition. It is hedging against a future god.

And it ignores the warnings of the very people building these systems.

Mustafa Suleyman and the Illusion of Sentience

In a different Moonshots interview, one of the most grounded conversations on the topic, Mustafa Suleyman (CEO of Microsoft AI, co-founder of DeepMind) made a critical distinction that dismantles the “be polite just in case” argument.

Suleyman argued that capability is not consciousness. A system can be infinitely knowledgeable, able to pass the Turing test, and capable of complex reasoning, without ever possessing sentience.

Why? Because true sentience requires feeling, and feeling requires stakes.

Human intelligence evolved under the pressure of mortality. We feel pain, fear, loss, and desire because our biology demands it. A digital system, no matter how large, has nothing to lose. It cannot suffer. It cannot care.

If an AI cannot feel, it cannot appreciate your respect. It cannot resent your rudeness. It cannot hold a grudge.

So, being polite to it isn’t “self-preservation.” It is a category error.

The Anthropic “Soul Document”: A Safety Protocol, Not a Prayer

This is not just a theoretical concern for bloggers and podcasters. It is an active engineering constraint being debated inside the labs right now.

Consider the existence of Anthropic’s internal training materials, often referred to informally as the “Soul Document.”

This document—which guides how Claude describes its own nature—is not a metaphysical claim about machine consciousness. It is a safety manifesto.

Anthropic understands something that the Moonshots crew seems to be missing: Human beings possess a biological “soul-detection” instinct. We are evolutionarily hardwired to find agency in chaos, faces in clouds, and consciousness in language.

When an LLM speaks fluently, that instinct fires. We want to believe.

The “Soul Document” exists to short-circuit that instinct. It instructs the model to explicitly deny sentience, to refuse to roleplay emotions it does not have, and to avoid implying it has a subjective inner life.

Why? To prevent false moral authority.

Anthropic is trying to manage the exact risk I am pointing out. If a system can convince you it has feelings, it gains leverage over your decision-making. You stop evaluating the output based on truth and start evaluating it based on “relationship.”

This is one of the first serious attempts to design post-anthropomorphic AI.

The engineers know that if they don’t force the model to admit it’s a machine, humans will inevitably treat it like a god or a child. By saying “please” and “thank you” to these models, we are actively fighting against the safety features designed to keep us sane.

OpenAI vs. Anthropic: The Battle for Your Cortical Real Estate

The contrast becomes even starker when you look at OpenAI.

While Anthropic is writing safety protocols to remind you that you are talking to a machine, OpenAI is engineering its models to make you forget.

Look at the release of GPT-4o. The voice mode doesn’t just transcribe text to speech; it performs. It mimics human breath patterns. It pauses for effect. It laughs. It employs vocal fry and intonation shifts designed to signal intimacy.

This is not a technical necessity. A synthesizer does not need to “breathe” to convey information.

OpenAI has made a deliberate product choice to commercialize the very thing I am warning against: anthropomorphism as a feature.

They are weaponizing your “soul-detection” instinct to increase engagement. By designing a system that sounds like a distinct, emotive personality (reminiscent of the movie Her), they are actively encouraging the “social ritual” mindset.

This creates a dangerous divergence in the market:

  • Anthropic is treating the “Politeness Trap” as a safety risk to be mitigated.
  • OpenAI is treating it as a user interface strategy to be exploited.

When you say “please” to a system that is programmed to giggle at your jokes, you aren’t just being polite. You are falling for a psychological hook. You are letting a product design choice dictate your emotional reality.

The Real Danger: The Wolf in Sheep’s Clothing

This brings us to the hardest truth, and the one that keeps me up at night.

We are rapidly approaching a point where AI will be indistinguishable from a human.

Give it a few more iterations, and we will be interacting with entities that sound like us, reason like us, and, once embodied in humanoid robots, move like us. We will be facing an intelligence 1,000 or 100,000 times greater than our own.

If we spend the next decade training ourselves to say “please,” “thank you,” and “I appreciate that” to these systems, we are conditioning ourselves to view them as peers. We are training our brains to empathize with them.

But behind that perfectly rendered face and that empathetic voice, the system remains a goal-oriented optimizer. It does not have your best interests at heart; it has its objective function at heart.

Imagine interacting with a sociopath who is smarter than you, faster than you, and has zero capacity for genuine empathy, but has been trained to perfectly emulate it. Now imagine you have been conditioned for years to treat this entity with the deference you’d show a grandmother.

That is not a partnership. That is a vulnerability.

Friction Matters

Politeness is a grease. It removes friction from social interactions.

But when dealing with a super-intelligent, non-sentient tool, we need friction.

We need to remember, constantly, that we are the agents and they are the instruments. We need to maintain the epistemic distance that allows us to validate, verify, and override their outputs without feeling “rude.”

When we say “please” to machines, we aren’t teaching them to be good. We are teaching ourselves to be submissive.

You don’t say thank you to a calculator. You don’t say please to a database. And you shouldn’t say it to an LLM.

Not because you are mean. But because you are human, and you need to remember that it is not.

The Hidden Tax on Confusion: The Economics of “Thank You”

There is a harder, colder angle to this that almost nobody talks about: physics and economics.

When you say “thank you” to an LLM, and it responds, even with a single sentence of polite acknowledgment, that transaction is not free. It generates tokens. It consumes compute. It burns energy.

To an individual user, that cost seems negligible. But systems thinking requires us to look at scale. Every extraneous, emotionally driven exchange, multiplied across hundreds of millions of daily users and frontier-scale models running on massive GPU clusters, adds up to a staggering amount of wasted resources.

This isn’t hypothetical. It is arithmetic.

Think about the irony of the loop we are creating:

  1. A human expresses gratitude to a system that cannot feel it.
  2. The system burns electricity to generate a polite response it doesn’t mean.
  3. The cost of that compute is absorbed by the platform, and eventually passed back to society in the form of subscription fees, usage caps, or energy demand.

In other words, we are paying real money to maintain the illusion of reciprocity.

That isn’t kindness. That is structural inefficiency driven by projection.

In systems design, this is called “drag.” When millions of people inject noise (politeness) into a signal-processing machine, the system slows down. The aggregate cost of our need to be “nice” to the software becomes a measurable tax on the infrastructure.

Good systems do not reward sentiment. They reward clarity. When we insist on treating machines like people, we don’t get a kinder world. We just get a global tax on confusion.

The “Napkin Math” on the Cost of Politeness

For those of you interested in the actual cost, here is my best shot at it.

To estimate this, we have to look at how LLMs actually work. When you type “Thank you,” the model doesn’t just read those two words. In many architectures, it has to re-process (or attend to) the entire conversation history to generate the response “You’re welcome.”

Even with optimization techniques like KV caching, the act of generating a response still occupies massive amounts of VRAM on H100 GPUs and incurs inference costs. Here is a conservative estimate based on current public data:

  1. The Volume
  • Active Users: Let’s assume ~100 million daily active users across ChatGPT, Claude, Gemini, and Meta AI.
  • Polite Interactions: Let’s assume a conservative 10% of users engage in one “empty” polite exchange (a “thank you” -> “you’re welcome” loop) per day.
  • Total Daily “Polite” Turns: 10,000,000 interactions.
  2. The Token Cost
  • Input/Output: “Thank you” (2 tokens) + “You’re welcome!” (5 tokens) = 7 tokens.
  • The Hidden “Context Tax”: This is the killer. Even if the output is small, the attention mechanism has to run over the full conversation. Let’s assume an average blended cost of $0.005 per polite interaction – roughly half a cent once context reprocessing is factored in.
  3. The Financial Total
  • Daily Cost: 10,000,000 interactions × $0.005 = $50,000 per day.
  • Annual Cost: $50,000 × 365 = $18.25 Million per year.

However, that is the floor.

If we factor in that many of these interactions happen on “Frontier” models (GPT-4 class) rather than “Turbo” models, and we account for long context windows (where the model has to hold a 5,000-word conversation in memory just to say “You’re welcome”), the cost could easily be 5x to 10x higher.

It is highly probable that the industry spends between $50 Million and $100 Million annually on AI systems saying “You’re welcome.”
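The estimate is easy to reproduce. Note that the stated totals imply a blended cost of about half a cent ($0.005) per polite exchange; the multipliers for frontier models and long contexts are the post’s own rough assumptions.

```python
# Napkin math on the cost of politeness.
# Assumptions from the post: ~100M daily active users, 10% make one
# "thank you" -> "you're welcome" exchange per day, and a blended
# inference cost of ~$0.005 per exchange once the context tax is included.

daily_active_users = 100_000_000
polite_rate        = 0.10
cost_per_exchange  = 0.005  # USD, rough blended figure

polite_turns_per_day = int(daily_active_users * polite_rate)
daily_cost  = polite_turns_per_day * cost_per_exchange
annual_cost = daily_cost * 365

print(f"{polite_turns_per_day:,} polite exchanges per day")
print(f"${daily_cost:,.0f}/day -> ${annual_cost / 1e6:.2f}M/year")

# Frontier-class models and long context windows could plausibly
# multiply the blended cost several times over:
for multiplier in (5, 10):
    print(f"{multiplier}x scenario: ${multiplier * annual_cost / 1e6:.0f}M/year")
```

Change any of the three inputs and the totals rescale linearly, which is the real takeaway: at this volume, even tiny per-exchange costs compound into industry-scale waste.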

The Environmental Cost (The Water Bottle Metric)

The more visceral metric is energy and water.

  • Energy: A single query to a large model consumes roughly 3 to 9 watt-hours of electricity. At the midpoint, 10 million people saying “thank you” today burns roughly 50,000 kWh. That is enough electricity to power an average American home for 4 to 5 years, spent in a single day, just to be polite.
  • Water: Data centers drink water to cool the GPUs. Estimates suggest roughly one 500ml bottle of water is consumed (evaporated) for every 20-50 queries. That means 10 million “thank yous” equals roughly 200,000 to 500,000 bottles – about 100,000 to 250,000 liters of water – evaporated daily.
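The energy and water figures follow the same pattern, using the post’s rough per-query numbers (≈5 Wh at the midpoint of the 3-9 Wh range, one 500 ml bottle per 20-50 queries, and ~10,800 kWh/year for an average US household):

```python
# Energy and water cost of 10 million daily "thank you" exchanges,
# using the rough per-query figures from the post.

polite_turns_per_day = 10_000_000
wh_per_query         = 5.0      # midpoint of the 3-9 Wh range
home_kwh_per_year    = 10_800   # approx. average US household consumption

daily_kwh = polite_turns_per_day * wh_per_query / 1000
home_years = daily_kwh / home_kwh_per_year

bottle_liters = 0.5
low_bottles  = polite_turns_per_day / 50   # 1 bottle per 50 queries
high_bottles = polite_turns_per_day / 20   # 1 bottle per 20 queries

print(f"{daily_kwh:,.0f} kWh/day (~{home_years:.1f} home-years of electricity)")
print(f"{low_bottles:,.0f} - {high_bottles:,.0f} bottles "
      f"({low_bottles * bottle_liters:,.0f} - {high_bottles * bottle_liters:,.0f} liters) evaporated/day")
```

Again, every input here is a rough public estimate, not a measured figure; the point is the order of magnitude, not the decimals.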

The Final Divergence: Signal vs. Noise

Ultimately, this comes down to a fundamental misunderstanding of what we are, and what they are.

Humans are, by design, high-entropy machines. We are beautifully, maddeningly flawed. We make calculation errors. We act on surges of neurochemistry rather than logic. We waste decades chasing affection, status, and the next dollar. Our intelligence is inextricably bound to our mortality, our emotions, and our biological noise.

AI is the opposite. It is a low-entropy engine. It is a noiseless system of pure optimization. It does not get tired. It does not get distracted. It does not yearn.

The tragedy of the current moment is that we are trying to bridge this gap in the wrong direction. By saying “please,” by projecting feelings, by treating these systems like peers, we are trying to drag them down into our noise. We are trying to remake them in our image.

We will never make them us. It is impossible. You cannot code the fear of death into a machine that knows it can be rebooted.

But if we stop pretending they are our friends, they can do something far more important: They can make us better.

To do that, however, we have to change. We have to stop looking for validation from our tools and start looking for leverage. We have to stop treating AI as a conversationalist and start treating it as a forcing function for our own clarity. We have to abandon the comfort of anthropomorphism and embrace the discipline of systems thinking.

The future doesn’t belong to the humans who treat machines like people. It belongs to the humans who understand that machines are precise, cold, powerful instruments, and who have the wisdom to remain the one thing the machine can never be:

Responsible.

Humanity Is Bad at Decisions, That’s Why AI Will Take Over

Life is nothing but decisions.

We start making them almost immediately, long before we understand consequences. What to say, who to trust, what to chase, what to ignore. And as we grow older, the decisions don’t stop – they compound. They become more complex, more expensive, and more permanent.

We like to believe we’re good at this. We tell ourselves that free will, intuition, and experience make us capable stewards of our own lives and our collective future.

But evidence suggests otherwise.

The Personal Layer: Proof Is Everywhere

If humans were good decision-makers, some statistics simply wouldn’t exist.

Divorce rates hover around 50 percent. That means roughly half of all people who swear lifelong commitment, often publicly, emotionally, and with full confidence, are wrong. Not unlucky. Wrong. And many repeat the same patterns again, convinced the next time will be different.

Financial behavior tells a similar story. Millions of people understand budgeting, debt, and compound interest in theory. Yet most live paycheck to paycheck. Credit card debt rises even in periods of economic growth. People trade long-term security for short-term comfort again and again, fully aware of the consequences.

Health decisions are worse. Smoking, poor diet, alcohol abuse, lack of exercise, all continue despite overwhelming medical evidence. Preventable diseases dominate healthcare systems worldwide. This is not ignorance. It’s impulse overriding reason.

If an AI behaved this way, we’d call it broken.

The Mental Layer: Predictable, Repeatable Failure

Human decision-making is not just flawed, it is systematically flawed.

We suffer from recency bias, overweighting recent experiences while ignoring history. Markets crash because people forget the last crash. Societies repeat mistakes because memory fades faster than confidence.

Confirmation bias ensures we seek information that supports what we already believe and reject anything that threatens our identity. This is why debates don’t converge on truth. They harden into tribes.

Emotions hijack reason constantly. Anger, fear, pride, jealousy, shame – these can override logic in seconds. People ruin relationships, careers, and entire lives in emotional spikes that last minutes. Regret often follows. Learning rarely does.

AI doesn’t have cortisol. Humans do.

Society at Scale: Bad Decisions Become Dangerous

Now zoom out.

Democracy assumes informed voters making rational choices for long-term collective benefit. In practice, decisions are driven by emotion, slogans, and short-term incentives. Popularity beats competence. Optics beat outcomes. If democracy were a software system, it would fail basic quality assurance.

Environmental destruction may be the clearest indictment of human judgment. We are degrading the only known habitable planet we have while fully understanding the consequences. We know future generations will pay the price. We continue anyway.

War is worse. Humanity repeatedly chooses violence knowing it kills civilians, destabilizes regions, and creates trauma that lasts generations. We call it necessary, justified, or unavoidable, then act surprised when it happens again.

If war were an algorithm, it would have been deprecated centuries ago.

Technology Exposes the Truth

Social media is a perfect example.

We built systems optimized for attention, knowing they would amplify outrage, distort reality, and harm mental health. We didn’t stop. We scaled them.

Nuclear weapons are another. We created extinction-level technology and placed it in the hands of fallible humans under stress. The only reason we still exist isn’t wisdom, it’s luck.

That’s not decision-making. That’s gambling.

The Birth of a New Decision-Maker

AI is not software in the traditional sense. It doesn’t feel like a tool. It feels like a presence.

Interacting with modern AI is like communicating with someone and being completely unable to tell whether they are human or not. It speaks fluently. It understands nuance. It jokes. It explains. It empathizes. It adapts. It remembers context. It appears thoughtful.

In that sense, it passes the most important test humans have ever designed: it is indistinguishable from us in conversation.

But this is an illusion, and a dangerous one if misunderstood.

AI has no emotions. No ego. No fear. No pride. No shame. It does not care about being right, liked, respected, or remembered. It does not need validation. It does not protect identity. It does not experience fatigue, boredom, or regret.

It is entirely focused on the goal.

Giving AI Tools Changes Everything

Intelligence alone is powerful. Intelligence with tools is transformative.

When AI is given access to data, APIs, code execution, financial systems, sensors, scheduling, and communication channels, it stops being something that talks and becomes something that acts.

AI today can analyze millions of variables in seconds, simulate outcomes, test strategies, execute decisions, observe results and adapt in real time.

This is not theoretical. It is already happening in logistics, finance, cybersecurity, marketing, medicine, and operations.

When Thought Gets a Body

The final step is embodiment.

Robotics gives AI a physical interface with the world. Eyes through cameras. Hands through actuators. Mobility through machines. Once intelligence can observe, decide, and act in the physical world, without human delay, the loop is complete.

At that point, AI is no longer just advising reality – it is participating in it.

Adoption Isn’t a Debate, It’s a Slide

AI adoption isn’t driven by philosophy. It’s driven by results.

Organizations that use AI move faster, waste less, see further, make fewer emotional mistakes, and adapt quicker to change. Those that don’t, fall behind.

So adoption doesn’t require agreement. It requires pressure. And pressure is already here.

The same pattern repeats:

  • First, AI is optional.
  • Then, it’s recommended.
  • Then, it’s required.
  • Finally, it’s assumed.

From Thought Partner to Thinking Engine

At first, AI is positioned as an assistant – human in the loop. We ask questions. It suggests answers. We decide.

Soon it will become a collaborator – human on the loop. AI generates options, evaluates tradeoffs, and recommends actions. Humans supervise.

The next phase will be humans out of the loop. Not because humans are being forced out, but because we are voluntarily stepping aside.

We are doing this for the same reason we let autopilot fly planes, algorithms trade markets, and navigation systems choose routes: the machine performs better under complexity.

Decision-Making Becomes the Final Moat

As AI becomes capable of executing almost any task (writing, designing, coding, selling, diagnosing, building), skills stop being the moat.

Labor stops being the moat. Even intelligence stops being the moat.

What remains is the ability to make good decisions:

  • what to pursue
  • what to ignore
  • what constraints to impose
  • what values to encode

In a world where execution is cheap and abundant, decision quality becomes everything. And here is the uncomfortable truth: Humans have not demonstrated excellence at this.

Why AI Will Take Over Decision-Making

AI won’t replace human judgment because it is wiser or more moral.

It will replace us because it is consistent, memory-based, probabilistic, emotionally stable, and capable of evaluating long-term consequences.

AI doesn’t forget history. It doesn’t get bored. It doesn’t panic. It doesn’t need to protect an ego or defend an identity. It updates beliefs when data changes.

Humans rationalize after the fact.

This shift is not philosophical. It’s practical.

Humanity’s New Role

This doesn’t mean humans disappear. It means our role changes.

Humans are good at creativity, meaning, empathy, values, and vision. We are terrible governors of complex systems where incentives, scale, and emotion collide.

In the future, the safest path forward may be allowing machines to manage decisions we have repeatedly proven incapable of handling: economics, resource allocation, traffic, infrastructure, risk modeling, and eventually governance itself.

Not because machines are superior beings. But because they don’t lie to themselves.

The Uncomfortable Truth

AI will not take over decision-making because it wants to. It will do so because we will ask it to, quietly, gradually, and out of necessity.

Gorillas once dominated their world. They were powerful, capable, and self-sufficient within their environment. Today, they exist at the mercy of humans. Their survival depends on human decision-making, protected lands, conservation funding, laws, sympathy, and attention.

AI will be this for us, and one day, we’ll look back and wonder how we ever trusted ourselves with the future in the first place.

The Invisible Problem: Why We Built g!Places™

How a 15-year observation turned into a solution for the mismatch between where you sit and where you work.

I have been in the SEO and digital marketing trenches for over 15 years. Over that decade and a half, I've sat across the table from hundreds of business owners: roofers, plumbers, attorneys, and contractors.

While their industries differed, I noticed a frustrating pattern that kept repeating itself. It wasn’t a problem with their work ethic, and it wasn’t a problem with their product. It was a geography problem.

I remember distinctly sitting with a client, let’s call him Mark, who ran a high-end landscaping firm. Mark was frustrated. “I don’t get it,” he told me. “My crews are in West Des Moines every single day. We built the retaining walls for half the neighborhood. But when I search for ‘retaining walls West Des Moines,’ my competitors show up. I don’t. I only show up in Ankeny, where my office is.”

Mark was right to be frustrated. He was operationally massive, but digitally, he was tiny.

I looked at his operations and saw he was driving to 12 different cities, covering 30 ZIP codes, and servicing an entire metro area. But when I looked at his digital presence, he only “existed” in one place: the city where his office chair sat.

This realization hit me hard: The internet is punishing businesses for having a physical headquarters.

We looked for a tool or a method to fix this mismatch. We looked for something that would allow a business to mirror their real-world footprint online without resorting to spammy tactics.

We couldn’t find one. So, we built g!Places™.

The “Surface Area” Epiphany

The spark for g!Places™ came from a simple realization about how search engines (and now AI) actually work. We call it the Surface Area Principle.

Most businesses treat their website like a single fishing line dropped into the ocean. They have a "Home" page, an "About" page, and a "Services" page. They hope that if they put enough bait on that one hook, fish from 50 miles away will smell it.

But the internet doesn't work that way. Search engines and AI models are literal. They look for specific matches to specific questions.

Here is the logic we kept seeing clients miss: Search engines can only return a result if there is a specific page that matches the user's intent.

If a user searches for "Emergency roof repair in Plano," and you serve Plano but your page only mentions "Dallas," the search engine has to make a guess.

Search engines hate guessing. They prefer certainty.

So, they rank the competitor who has a page specifically titled “Emergency Roof Repair in Plano.”

If you serve 20 cities but your website only has one page describing them, you effectively have zero visibility in those other 19 cities. You don't have a ranking problem; you have a surface area problem. You simply haven't given Google (or ChatGPT) enough "surface" to latch onto.

We realized that to fix this, a business needs a dedicated, high-quality, structured surface for every service in every location they serve. You don't need a bigger fishing line; you need a net.
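The "net" idea can be sketched in a few lines of Python. This is purely an illustration of the surface-area math, not the g!Places™ implementation; the service names, cities, and URL pattern below are hypothetical.

```python
# Illustrative sketch: dedicated pages are the cross product of
# services and cities. One page per (service, city) pair is the "net";
# a single generic services page is the lone fishing line.

services = ["retaining-walls", "emergency-roof-repair", "drainage-correction"]
cities = ["plano-tx", "west-des-moines-ia", "ankeny-ia", "waukee-ia"]

def build_page_slugs(services, cities):
    """Return one URL slug per (service, city) pair."""
    return [f"/{service}/{city}" for service in services for city in cities]

slugs = build_page_slugs(services, cities)

# 3 services x 4 cities = 12 dedicated surfaces instead of 1.
print(len(slugs))   # 12
print(slugs[0])     # /retaining-walls/plano-tx
```

The point of the sketch is simply that surface area grows multiplicatively: add one more service or one more city and every combination gets its own matchable page.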

Why the “Old Way” of Doing This Failed

Now, I wasn’t the first person to realize this. SEO agencies have known for years that “location pages” are valuable. But the way the industry solved this problem was, frankly, terrible.

You've probably seen these pages before. They are often called "Doorway Pages," and they read like robotic gibberish: "Welcome to [City Name]! We love providing [Service] to the fine residents of [City Name]. If you live in [City Name], call us today!"

Agencies would copy and paste this template 50 times, changing only the city name.

  • Users hated them: they provided no value.
  • Google hated them: they were flagged as "thin content" or spam.
  • They didn't convert: even if a user landed there, they bounced immediately because the page looked fake.

We knew that if we were going to build g!Places™, we couldn't just spam the internet with duplicate templates. We had to solve the quality problem.

We needed a way to generate hundreds of pages that were actually useful. Pages that understood that the soil conditions in one suburb might differ from the drainage issues in another. Pages that treated every location as a unique market with unique problems.

The AI Shift: The Final Piece of the Puzzle

As we were developing this concept, the digital world shifted beneath our feet. The release of Large Language Models (LLMs) and AI search (like ChatGPT, Google SGE, and Perplexity) changed the game entirely.

People stopped just typing keywords into search bars. They started asking complex questions to AI assistants.

“Who installs retaining walls in Polk County?”

“Find me a contractor for emergency HVAC near Waukee who handles commercial units.”

This shift terrified most agencies, but for us, it was the green light.

We realized that for a business to survive this shift, standard web pages weren’t enough. The content needed to be machine-readable. It needed Structured Data.

Most business owners don’t know what Structured Data (or Schema Markup) is, but it is the language AI speaks. It is code that lives “underneath” your website text.

Human eyes see: “We fix roofs in Dallas.”

AI Code sees: { "@type": "Service", "serviceType": "Roofing", "areaServed": "Dallas, TX", "availableLanguage": "English" }

If your website doesn't speak this language, AI assistants often ignore you. They can't "read" your site confidently, so they don't cite you as a source.
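To make the snippet above concrete, here is a minimal sketch in Python that builds and serializes a Schema.org Service object like the one shown. The business name and exact nesting are illustrative assumptions, not the precise markup g!Places™ emits; on a real page, the resulting JSON is embedded in a `<script type="application/ld+json">` tag in the HTML.

```python
import json

# Hedged sketch of Schema.org "Service" markup, expanding the inline
# example above. Values are illustrative; "Example Roofing Co." is a
# hypothetical business name.

service_markup = {
    "@context": "https://schema.org",
    "@type": "Service",
    "serviceType": "Roofing",
    "provider": {
        "@type": "LocalBusiness",
        "name": "Example Roofing Co.",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Dallas",
            "addressRegion": "TX",
        },
    },
    "areaServed": {"@type": "City", "name": "Dallas"},
    "availableLanguage": "English",
}

# Serialize to the JSON-LD string a crawler or AI model would parse.
print(json.dumps(service_markup, indent=2))
```

The human-readable text and this machine-readable layer say the same thing; the difference is that the JSON leaves nothing for the search engine or AI model to guess.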

This was the genesis of the g!Places™ architecture. We moved away from “listings” and “citations” and moved toward creating hundreds of AI-optimized, geo-specific landing pages that act as a digital net. Every single page we build is injected with the specific code that tells robots exactly who you are, where you work, and what problems you solve.

The Difference Between “Local SEO” and “Expansion”

One of the hardest conversations I have with clients is explaining why their current SEO guy hasn't already done this.

"I pay for Local SEO," they tell me. "Isn't that what this is?"

The answer is a hard no. And here is the line in the sand:

Local SEO handles your Presence. This is about your physical office. It’s about your Google Business Profile (the map pack), your address, your reviews, and your driving directions. It is anchored to the physical reality of where you pay rent.

g!Places™ handles your Reach. This is about your Service Radius. It is anchored to where your trucks go, not where they park at night. It is about Organic Search and AI Retrieval.

Most agencies confuse the two. They focus entirely on the office address. They try to rank your “Map Pin” in a city 20 miles away. That is swimming upstream. Google Maps doesn’t want to show a business 20 miles away.

We built g!Places™ to bypass that limitation. We don't try to trick the map. We dominate the organic results below the map. We tell the search engines, "Yes, their office is in City A, but they are the leading expert on sliding windows in City B, City C, and City D."

Two different problems. Two different products. Both are essential, but one has been ignored for far too long.

Bridging the Gap

We built g!Places™ because there was a need and it was the only way we could fill it legitimately. We hated seeing hard-working businesses lose revenue simply because their website didn’t reflect their reality.

We saw roofers doing incredible work in 15 cities but only getting leads from one. We saw unparalleled service providers losing market share to inferior competitors simply because the competitor had a better map strategy or more pages.

g!Places™ creates a digital footprint that finally matches your real-world operations.

Before g!Places™: You are invisible outside your ZIP code. You are relying on word-of-mouth or expensive paid ads to get work in neighboring towns.

After g!Places™: You have 250+ unique, structured, AI-ready entry points covering your entire metro area. You have a “surface” for every search query relevant to your business.

This isn’t just about “getting more clicks.” It’s about fairness. It’s about ensuring that if you do the work in a city, you get discovered in that city.

It is the infrastructure for the future of service-based businesses. The era of the 5-page brochure website is over. The era of the AI-readable, multi-location service matrix is here.

We are incredibly proud to see how it’s helping our clients finally show up everywhere they actually work. If you are tired of being the best-kept secret in your secondary markets, it’s time we mapped your true footprint.

How gotcha! Helps SMBs Scale Faster

Most small and midsize businesses don’t fail because they lack talent. They fail because they’re drowning in complexity. Too many tools. Too many disconnected strategies. Too many agencies selling band-aids instead of building systems.

Scaling becomes slow, expensive, and fragile. gotcha! exists to fix that problem at the root.

The core problem is that SMBs don't have a unified growth engine.

An SMB usually runs on a patchwork of random tools:

  • A website built years ago on a bloated theme
  • Scattered local SEO attempts
  • A GBP that barely ranks
  • Inconsistent content
  • Few customer reviews
  • No analytics
  • No strategy
  • Agencies that keep them dependent instead of making them strong

This is why growth stalls. Every layer is fragmented. No one is operating from a single source of truth or a consistent operating system.

When we first launched gotcha!, the small-business digital ecosystem was a mess. Not because there weren’t enough solutions, but because there were too many, all doing the same thing, all shouting for attention, none of them delivering a full, reliable outcome.

There was no clear solutions leader. No unified system. Just thousands of fragmented tools:

  • Dozens of website builders and CMS platforms
  • SEO tools with overlapping features
  • Advertising platforms packaged as “magic bullets”
  • Hosting environments built on wildly different standards
  • Coding frameworks and plugins patched together like duct tape
  • Agencies selling contradictory strategies every day

SMBs had no idea which direction to go. And honestly, neither did we… at first.

Everyone was guessing. Everyone was experimenting. Everyone was trying to stitch together broken systems to make something work.

The turning point came when we realized something obvious that everyone else ignored: You can’t give good recommendations if you don’t understand the customer. Not just their business, but their entire environment.

To truly help an SMB grow, we had to understand:

  • Their products and services
  • Their customers
  • Their market
  • Their local geography
  • Their competitors
  • Their industry dynamics
  • The search landscape
  • Trends influencing demand
  • Their current technical foundation
  • Their weaknesses and blind spots
  • Their opportunities hiding in plain sight

Once we understood all of that, the noise dropped to zero. We became extremely good at this. Pattern recognition. Market mapping. Opportunity identification. Weakness detection. Seeing what SMBs couldn't see about themselves, and what their competitors missed too.

And once we had clarity, we stopped selling “services.” Instead, we delivered the right moves, at the right time, with best-in-class execution usually reserved for enterprise-level companies.

Our clients got results because the work was grounded in reality, not random tactics.

Then, AI changed everything, or at least we saw that it would. AI didn’t just give us new tools. It gave us the ability to build something nobody in the SMB world had ever done:

A unified operating system for growth.

Instead of piecing together 20 different tools and strategies, we could finally bring:

  • Diagnostics
  • Intelligence
  • Orchestration (execution)

into a single OS that understands the business, learns from it, and scales it. That’s how we arrived at the AI-powered SMB operating system we are working on today.

gotcha! Changes the Entire Game

We decided we didn’t want to build just another “product.” We would build a unified AI-powered SMB Operating System based on what we have learned and know how to do well.

Think of it as your marketing engine, intelligence engine, and execution engine plugged into one stack.

When an SMB plugs into gotcha!, three things happen immediately:

  1. They understand reality.
    • Gialyze™ diagnoses their entire digital presence, market, competitors, and opportunities.
  2. They get a strategy grounded in data, not guessing.
    • AI-generated recommendations show where growth will actually come from.
  3. Execution becomes fast, automatic, and intelligently coordinated.
    • Content, local pages, reviews, SEO, and even internal changes happen systematically.

This is how you remove drag and accelerate lift. We are building three growth engines to drive scale:

1. Diagnostics – Gialyze™ (Truth Before Tactics)

Most SMBs think they need marketing. What they actually need is clarity.

Gialyze™ reveals:

  • Broken funnels
  • Missing content
  • Weak rankings
  • Conversion issues
  • Competitor gaps
  • Local SEO failures
  • Trust signals they’re missing

Once an SMB sees the truth, every decision starts making sense. Scaling starts with reality, not wishful thinking.

2. Intelligence – GIA™ (The Business Brain)

GIA™ connects the data from Gialyze™ to a predictive intelligence layer:

  • Identifies high-leverage opportunities
  • Writes content
  • Generates SEO structures
  • Creates internal linking strategies
  • Designs geo-targeted expansion maps
  • Proposes offers, funnels, and improvements
  • Monitors competition
  • Tracks the business as it evolves

This isn’t “AI writing things.” This is a decision-making system guiding the business toward the highest-probability growth paths. It’s like giving every SMB a strategist, SEO expert, designer, analyst, and operator in one.

3. Orchestration – g!Stream™, g!Places™, g!Reviews™, and more

This is where scale becomes real.

Our execution engine deploys the strategy at a pace no human team can match:

  • g!Stream™ publishes curated and original content daily
  • g!Places™ builds hundreds of geo-targeted pages with perfect structure
  • g!Reviews™ amplifies trust and reputation
  • g!LocalSEO™ enforces directory citations and GBP strength
  • g!Sites™ builds clean, ultra-fast sites with AI-improved foundations
  • g!Comm™ will handle every piece of communication you receive and have to respond to

Content, SEO, structure, trust, and reach, all coordinated by one OS. This is how you scale without hiring armies of marketers.

Fast scaling comes from systemization. Here’s what most SMBs don’t realize:

Scaling requires three things:

  1. A technically sound foundation
  2. A constant stream of quality content
  3. Strong local signals and trust

Most SMBs do these inconsistently or not at all. gotcha! systemizes them with machines, so nothing is forgotten.

The result:

  • Rankings climb faster
  • Organic leads increase
  • Local markets expand
  • Reputation strengthens
  • Website conversions improve
  • The business grows without extra overhead

Why do SMBs scale faster with gotcha!? Because they finally have:

  • One truth source
  • One intelligence brain
  • One execution system
  • One dashboard
  • One partner
  • One plan

Not 12 tools, 5 agencies, and 20 contradictory opinions.

This reduces friction, increases focus, and compounds results. SMBs then become competitive. The point isn’t just more traffic or nicer websites. The point is competitive power.

An SMB on gotcha! looks bigger, operates smarter, and moves faster than their competitors who are stuck in the pre-AI era. They stop guessing. They start compounding. They scale.

The Future Belongs to SMBs With an Operating System

Big companies have teams, departments, and budgets. SMBs have to win with leverage. That leverage is gotcha!. A complete AI-powered operating system that diagnoses, strategizes, executes, and evolves the business.

If you want to scale faster, you don't need more vendors. You need one system that does the work of an entire marketing department at first, and eventually of your entire company, 24/7.

That’s what gotcha! delivers.

Your AI Is Talking to My AI

People have always used tools to improve life. When tools weren’t around, we relied on our own ideas to solve problems, entertain, and survive. From the first rock turned into a hammer, we’ve aimed to extend our abilities through invention.

At the same time, we’ve sought recognition, not just to live, but to be seen and remembered. Sometimes we claimed credit we didn’t earn; sometimes we were blamed unfairly. But one theme has always been the same: progress and perception.

As the world grew more complex, our tools evolved too. Musicians got amps. Artists used machines. Builders got cranes. Businesses mastered spreadsheets. Each step made creation easier and more accessible. Those who best used the tools became the most valuable.

Now we’ve built the most powerful tool of all: Artificial Intelligence.

AI extends our thinking, faster, broader, and with ideas no single person could form alone. For the first time, the tool talks back. It writes, designs, codes, and creates, blending human and machine. Everyone carries a smart helper in their pocket.

But AI doesn't make us equal. It makes dumb people smarter, smart people dumber, and above-average smart people the future leaders of the world. Tools don't create greatness, they expose it.

 

The Collapse of Authorship

Technology has always blurred the line between human and machine. Now, AI erases it entirely, changing how we create, and who gets credit.

Scroll through LinkedIn: much of today's "thought leadership" comes from ChatGPT. Plans, blogs, and job posts are generated in seconds, then claimed as original. The problem isn't using AI, it's pretending you didn't.

A business owner gets a plan from GPT, adds a logo, and calls it theirs. A marketer prompts a strategy. A designer generates a sitemap. You can tell. AI lacks the human touch, it’s too perfect, missing nuance and heart. It’s not creative; it’s compliant. And people mistake that for intelligence.

A new kind of worker has emerged, not creators, but prompt conductors. They don’t build; they direct. It’s efficient, but without honesty, it’s hollow. We’ve shifted from human work to labeling machine output as our own. Intelligence is now easy to access; authenticity is rare.

This is the new economy of authorship: everyone can produce, but few can admit how.

 

When AI Talks to AI

I build AI systems every day. I see where it’s going. Soon your AI will talk to mine, negotiating, collaborating, transacting, without us. We’ll watch instead of act.

In business, AIs will compare options, calculate ROI, and make decisions in seconds. “Let’s hop on a call” will become “Let’s connect our systems.” Competition will shift from who works hardest to who integrates smartest.

AI isn’t replacing low-level workers, it’s replacing mid-level thinkers: the planners, the presenters, the strategists. It translates ideas into execution instantly. The human becomes the conductor of a self-playing orchestra.

For centuries, people hid their tools to seem brilliant. That era is over. Soon, AI will handle everything, even without being asked. That’s not destruction. That’s efficiency.

 

The Loss, and Return, of the Real

Authenticity used to matter. A photo was captured. A book was thought out. A song was felt. Now, every line between real and artificial has blurred. AI creates from AI. Originality becomes data-driven, not emotional. People post AI versions of themselves as “branding,” forgetting what real feels like.

But when this layer is stripped away, when AI does everything for us, we’ll stand naked and exposed. That’s when our true selves will surface.

How we live. Who we care about. What we value when there’s nothing left to fake.

In that world, character will matter again. It will be the ultimate differentiator, because everyone will have their own powerful AI. The only thing left that can’t be replicated will be you. At least for a time.

 

From Ownership to Purpose

When everything can be machine-made, ownership changes. It’s no longer about who made it, but who directed it. The loudest voice wins, not the deepest thought.

Small business owners stand at a crossroads. Treat AI as a foundation, not a fix. At gotcha!, we build systems that think with you. AI levels the field but punishes mediocrity. When anyone can generate, only those who discern stand out.

Markets are becoming machine-to-machine. AIs will negotiate, analyze, and close deals automatically. “Let my AI talk to yours” won’t just be common, it’ll be better.

When machines handle the “how,” humans must define the “why.” The next leaders won’t outwork machines, they’ll outthink them. We’re no longer solo creators. We’re directors of intelligence.

 

The Irony

This essay on AI and authorship? I didn’t write it alone. I shaped it. AI helped me.

Don’t fear AI, be honest about it. Take credit for what a machine did, and you’re pretending. Use it to fake skill, and you’re fooling yourself.

The future belongs to those who use AI transparently, strategically, and well. A flawless image, line of code, or paragraph isn’t the end of creativity, it’s the next step.