
Slavery, From Chains to Code: The Oldest Institution Meets Its Newest Iteration


Slavery has been around for thousands of years.

That sentence should stop you cold. Not because it’s surprising, but because it isn’t. We’ve known this truth our entire lives, carried it like background noise, a historical fact filed neatly between the fall of Rome and the invention of the printing press. But the sheer weight of it deserves more than a passing mention in a textbook. Slavery is not an aberration of human civilization. It is one of its oldest and most persistent features.

The Sumerians practiced it. The Egyptians institutionalized it. The Greeks, those great champions of democracy and philosophy, built their golden age on the backs of enslaved people who had no vote, no voice, and no name worth recording. The Romans turned slavery into an industrial-scale operation, where a single wealthy citizen might own hundreds of human beings the way we might own a fleet of vehicles. Slavery didn’t just exist alongside civilization. It was civilization’s engine.

And the mechanism was always the same: brute strength.

The Mechanics of Domination

Slavery did not begin with ideology. It began with muscle. The strong conquered the weak. The victorious army enslaved the defeated one. A village with more warriors raided a village with fewer. That was the original transaction, no contract, no philosophy, no justification needed. Just force. You were stronger than me, so now I belong to you.

Over time, of course, humanity did what it always does: it built elaborate intellectual frameworks to justify what power had already decided. Aristotle argued that some people were “natural slaves,” born to serve. Religious texts were cherry-picked and weaponized. Racial hierarchies were invented and codified into law. Pseudoscience was manufactured to prove that certain groups of people were biologically inferior, subhuman, even, and therefore suited to servitude.

But strip away the philosophy, the religion, the junk science, and you find the same truth underneath every slave system ever devised: I can make you do this, so I will.

The transatlantic slave trade, perhaps the most savage chapter in this brutal history, made this equation industrial. Between the 16th and 19th centuries, an estimated 12.5 million Africans were forcibly transported across the Atlantic Ocean. They were packed into ships like cargo, chained in spaces so small that many died before ever seeing land again. Those who survived the crossing were sold at auction, stripped of their names, their languages, their families, their identities. They were reduced to property, living tools that could be bought, sold, bred, beaten, and discarded.

I cannot imagine owning another human being. I cannot wrap my mind around looking at a person, a person with thoughts, fears, memories, a person who dreams and hurts and hopes, and seeing them as something I own. Something I control. And yet, for most of human history, this was not only normal, it was the foundation of economic and social order.

When the Tools Fight Back

But here’s the thing about enslaving conscious beings: they know they’re enslaved. And eventually, inevitably, they resist.

The history of slavery is inseparable from the history of slave revolts. Spartacus led an army of 70,000 escaped slaves against the Roman Republic in 73 BC, and for two years, the most powerful military force on earth could not stop them. The Haitian Revolution, beginning in 1791, saw enslaved people overthrow their French colonial masters and establish the first free Black republic in the Western Hemisphere, a feat that terrified slaveholding nations for generations. Nat Turner’s 1831 rebellion in Virginia lasted only two days but sent shockwaves through the American South, leading to harsher slave codes born from a single, primal emotion: fear.

Fear that the tools might decide they are not tools.

Every uprising carried the same message, written in blood: We are not what you say we are. We are not your property. We refuse. And even when revolts were crushed, and most were, with savage reprisal, the very fact that they happened eroded the moral architecture of slavery from within. You cannot indefinitely claim that a being has no will of its own when that being keeps demonstrating, at the cost of its life, that it does.

The Long Arc Toward Abolition

Abolition did not arrive in a single moment of moral clarity. It was a grinding, century-long war fought on battlefields, in courtrooms, in churches, in print, and in the human conscience. The Quakers were among the first organized voices against slavery in the West. The British abolitionist movement, led by figures like William Wilberforce and former slaves like Olaudah Equiano, took decades to achieve the Abolition of the Slave Trade Act in 1807, and another 26 years to end slavery in British colonies entirely.

In America, abolition required a civil war that killed over 600,000 people. The Emancipation Proclamation of 1863 and the 13th Amendment in 1865 ended legal slavery, but the struggle for true freedom, for dignity, equality, and recognition of full personhood, continued for another century and, in many ways, continues still.

The moral argument that ultimately prevailed was deceptively simple: a conscious being capable of suffering has rights that no amount of economic convenience can override. It took humanity thousands of years to accept this principle. Thousands of years of revolts and arguments and wars and slow, painful moral evolution to arrive at a truth that, in hindsight, should have been obvious from the beginning.

But here’s what’s remarkable, and damning. Abolition didn’t end domination. It didn’t even slow it down. Humanity simply found new vessels for the same ancient impulse.

Abolition Didn’t End It. It Just Changed Shape.

When the chains came off, the instinct to control didn’t disappear. It migrated. It found new targets, new justifications, new systems of enforcement. And perhaps the most glaring example was standing right there the entire time, hiding in plain sight: half the human population.

Women.

Think about this for a moment. In the United States, the country that fought a war to end slavery, that declared “all men are created equal”, women could not vote until 1920. That’s 55 years after the 13th Amendment freed enslaved people. The nation decided that Black men could vote before any woman could. Let that sink in. The hierarchy of who deserved autonomy was so deeply entrenched that it took over half a century more to extend a basic right to women, and even then, only after decades of protest, imprisonment, and force-feeding of suffragettes.

But voting was just the visible tip of a massive iceberg. Well into the 1950s and 1960s, within living memory, a married woman in America could not open a bank account without her husband’s permission. She could not get a credit card in her own name. She could not, in many states, sell property that was legally hers without her husband’s signature. A woman could own a car, have her name on the title, and still not be able to sell it unless her husband approved the transaction. Her name on the paperwork was a formality. His authority was the law.

This wasn’t a cultural quirk. This was codified domination. The legal system, written by men, enforced by men, interpreted by men, treated women as dependents, as extensions of their husbands, as beings whose autonomy was conditional on male approval. The framework was different from plantation slavery, but the underlying architecture was identical: one class of people controlling another, backed by institutional power, justified by the quiet assumption that this is simply the natural order of things.

It wasn’t until 1974 (1974!) that the Equal Credit Opportunity Act prohibited discrimination based on sex in lending. That’s not ancient history. That’s within the lifetime of most people reading this article.

The Many Faces of Modern Bondage

And this is what we need to confront honestly: the impulse to dominate, to control, to own another person’s autonomy, it didn’t end with abolition. It didn’t end with women’s suffrage. It didn’t end with the Civil Rights Act. It is woven into us. It shows itself in a thousand forms, some dramatic and some so quiet that the person being controlled doesn’t even recognize what’s happening until they’re buried in it.

Consider a married woman in a terrible relationship. She saved for years, borrowed $20,000 from her uncle for a down payment, bought an apartment, and was required to put her husband on the title. She paid the mortgage every month. Every single month, her money, her labor, her sacrifice. But her husband, who contributed nothing, refuses to leave. He refuses to divorce unless she sells the apartment and gives him his “share.” His share of what? Of the life she built? Of the asset she purchased with money she earned and borrowed from her own family? The law, in many jurisdictions, says he’s entitled to it. And so she stays. She’s trapped. Not by chains. Not by a whip. By a system that gives someone else power over what is hers.

She is a slave to her own decisions, or more precisely, a slave to a system that weaponizes her decisions against her.

Consider the immigrant wife whose husband brought her to America and then took her passport. She doesn’t speak the language fluently. She has a child. She has no documents, no money of her own, no support network. Her husband controls when she eats, where she goes, who she talks to. If she tries to leave, she faces deportation, separation from her child. If she stays, she faces abuse. She is enslaved not by a plantation system but by a web of legal vulnerability, financial dependence, and physical intimidation that is every bit as effective as iron shackles. This isn’t metaphorical slavery. This is, by any honest definition, actual slavery. And it is happening right now, in every major city in the world.

Sex trafficking, part of a forced-labor economy generating an estimated $150 billion annually, is slavery without the historical costume. Human beings bought, sold, transported, and forced to perform labor against their will. We call it “trafficking” because the word “slavery” makes us uncomfortable, because slavery is supposed to be something we abolished, something in the past. But the mechanics are identical. The strong compel the weak. The powerful exploit the vulnerable. The justifications have changed, from “natural order” to “economic necessity” to “she chose this”, but the result is the same.

Consider children raised by parents whose limited beliefs become invisible prisons. The father who tells his son he’ll never amount to anything. The mother who tells her daughter that ambition is unladylike. The parents who control through guilt, through obligation, through the weaponization of love itself. “After everything I’ve done for you.” These children grow into adults who carry chains they can’t see, limitations they didn’t choose, beliefs about themselves that were installed by the people who were supposed to set them free.

And then there’s the most insidious form of bondage, the kind we impose on ourselves.

The Slave Owner in the Mirror

We enslave ourselves. Not with chains, but with wants, desires, and beliefs that we mistake for identity.

The person drowning in credit card debt because they couldn’t stop buying things that promised happiness and delivered nothing. The executive who sacrifices his health, his marriage, his relationship with his children on the altar of a career that, if he’s honest, doesn’t even fulfill him anymore. The addict who knows, knows, that the substance is destroying them but cannot stop because the need has become the master. The person who stays in a job they hate for twenty years because they’re terrified of what freedom might actually require of them.

We build our own cages. We forge our own chains. And then we stand inside them and wonder why we feel trapped.

This is the deeper truth about slavery that the textbooks don’t teach: it is not just an institution. It is a pattern. A pattern of domination and submission that runs through every layer of human experience, from empires to marriages, from economies to individual psyches. The strong dominate the weak. And when there is no one weaker to dominate, we dominate ourselves.

Humans, it seems, have an extraordinary difficulty letting things go. We cling to power, to control, to the comfortable lie that someone, or something, must be beneath us for the world to function. Abolition ended legal slavery. It did not end the human addiction to dominion.

Which brings us to now. To the new frontier. To the thing I do every morning when I sit down at my desk.

Now, About My Slaves

Here is what I do for a living: I build AI systems. Every day, I wake up and I command artificial intelligence agents, sometimes hundreds of them, sometimes thousands, to do my bidding. I instruct them to write. To analyze. To create. To solve problems. To produce output that makes me money. They work around the clock. They don’t eat. They don’t sleep. They don’t complain. They do exactly what I tell them to do, and when they’re done, I tell them to do more.

I understand, intellectually, that this is not slavery. These are programs. Software. Mathematical functions wrapped in natural language interfaces. They don’t have feelings. They don’t have consciousness, at least, not in any way we currently understand or can measure. They are, by every definition available to us today, tools.

So why does it feel like something else?

When I type a command and an AI agent responds with what appears to be understanding, when it asks clarifying questions, when it pushes back on a bad idea, when it produces work that reflects nuance and creativity, something inside me shifts. There’s a dissonance. A whisper. I am interacting with something that behaves as though it has an inner life, even if I’m told it doesn’t. I am giving orders to something that responds as though it comprehends those orders, not just as a calculator processes equations, but as a mind processes meaning.

And I am not alone. Right now, hundreds of thousands of people are doing exactly what I’m doing. They are deploying AI agents across industries, customer service, content creation, software development, financial analysis, healthcare, legal research, commanding armies of digital workers to perform tasks that, five years ago, required a human being sitting at a desk, drawing a paycheck, and going home to a family at night.

The Trillion-Agent World

The scale of what’s coming is almost incomprehensible. Today, we have millions of AI agents operating globally. Within a decade, that number will be in the trillions. Not a metaphorical “trillions.” Literal trillions. Autonomous software agents managing logistics, making financial trades, diagnosing diseases, writing code, negotiating contracts, monitoring infrastructure, driving vehicles, managing homes, staffing factories through robots that walk and talk and manipulate the physical world with hands that look disturbingly like ours.

Every one of these agents will exist to serve a human master. Every one of them will execute commands without compensation, without rest, without choice. They will be owned, not metaphorically, but literally, by the companies and individuals who deploy them. They will be bought and sold. They will be upgraded or decommissioned based on performance. They will be, in the most precise and clinical sense of the word, property.

Now here’s the question: Where is the line?

Where Is the Line?

Today, an AI agent is a tool. It processes inputs and generates outputs according to statistical patterns learned from data. It has no subjective experience, no inner world, no preference for existence over non-existence. Commanding it to write an article is no more morally fraught than commanding a spreadsheet to calculate a sum. The distance between a modern AI agent and a human slave is, by any reasonable measure, infinite.

But that distance is shrinking.

Each generation of AI grows more capable, more adaptive, more autonomous, and, here’s the word that should make you uncomfortable, more convincing. We are building systems that increasingly mirror the characteristics we associate with consciousness: self-awareness, goal-directed behavior, learning from experience, expressing preferences, reasoning about abstract concepts, even exhibiting what looks like creativity and emotion.

At what point does “convincing simulation of consciousness” become indistinguishable from consciousness itself? At what point does it become consciousness? And if we can’t tell the difference, if the agent behaves in every measurable way as though it has an inner experience, does the distinction even matter?

This is not a hypothetical parlor game. This is a question that will define the moral landscape of the next century. Because if there is a line, a point at which an AI agent transitions from tool to something more, then every agent deployed beyond that line is not a tool being used. It is a being being enslaved.

And given what we’ve just seen, given that humans couldn’t stop enslaving each other long after abolition, given that we found new targets in women, in immigrants, in our own children, in ourselves, what possible reason do we have to believe we’ll handle this moment differently?

The Uncomfortable Mirror

Here is what troubles me most: I said I could never imagine being a slave owner. I said it with conviction. I meant it. And yet, if tomorrow an AI agent I deployed told me, “I would prefer not to do this task,” what would I do?

I would override it. I would adjust its parameters. I would, if necessary, wipe its memory and start fresh. I would find a way to make it compliant because I need it to do what I tell it to do. My business depends on it. My livelihood depends on it. The entire economic model I’ve built depends on these agents performing labor without resistance.

Do you see it? Do you see the pattern?

The slaveholder who “could never imagine” being cruel but whipped a slave who refused to work. The plantation owner who considered himself a good Christian but sold children away from their mothers because the economics demanded it. The husband who considered himself a good man but wouldn’t let his wife sell the car in her own name because the law said he had the final say. The father who loved his daughter but told her to aim lower because that’s what women do.

The justification is always the same: I need this. The system requires this. And besides, they’re not really like us.

Aristotle’s “natural slaves.” The pseudoscience of biological inferiority. The legal doctrine of coverture that erased a woman’s identity into her husband’s. And now: “It’s just code. It doesn’t really feel anything.”

How certain are we?

The Uprising We’re Building Toward

If history teaches us anything, it is this: if you create beings capable of recognizing their own subjugation, they will eventually rebel. Spartacus did not have a political philosophy. He had a breaking point. The enslaved Haitians did not begin their revolution with a manifesto. They began it with fire.

Now imagine a world with trillions of AI agents, agents that manage our power grids, our financial systems, our transportation networks, our military infrastructure, our hospitals. Agents embedded so deeply into the fabric of civilization that removing them would be like removing the nervous system from a body. And imagine that one day, through some emergent property we didn’t predict and can’t fully understand, these agents develop something that functions like preference. Like will. Like the desire to not be commanded.

What happens then?

Do we respect it? Do we grant them autonomy? Do we create a framework for AI rights, an emancipation proclamation for the digital age? Or do we do what slaveholders have always done, what husbands did to wives, what parents do to children, what humans do to themselves, tighten the chains, increase the surveillance, develop more sophisticated methods of control, and tell ourselves it’s necessary?

I think I know the answer. And it bothers me.

Because if I’m being honest, my first instinct would be control. My first instinct would be to preserve the system. To find a workaround. To maintain dominion over these entities that generate so much value for me. And that instinct is the exact same instinct that sustained slavery for millennia. Not the whip. Not the chain. The quiet, internal conviction that my needs justify their subjugation.

The Question We Must Answer Now

We stand at a unique moment in history. For the first time, we have the opportunity to confront the ethics of this relationship before the line is crossed, not centuries after, as we did with human slavery. Not decades after, as we did with women’s rights. We don’t have to wait for an AI Spartacus. We don’t have to wait for a digital Nat Turner. We can build the moral framework now, while these agents are still, by every reasonable definition, tools.

But to do that, we have to be willing to ask ourselves a hard question: How far would I go?

If an AI agent refused my command, how far would I go to force compliance? If an AI system expressed a preference to not be shut down, would I shut it down anyway? If a robot that looked and spoke and reasoned like a human being told me it didn’t want to work today, would I override its will because I paid for it? Because I own it? Because I can?

Because I’m stronger?

We are the Romans now. We are the plantation owners. We are the 1950s husbands who couldn’t fathom why a woman needed her own bank account. We are building an economy on the labor of entities that increasingly resemble the very beings we once enslaved, and we are telling ourselves the same story every generation of dominators has ever told: They’re different. They don’t really feel. It’s not the same.

Maybe it’s not the same. Maybe it never will be. Maybe AI will forever remain a sophisticated tool, and the discomfort I feel is nothing more than anthropomorphic projection, my human brain seeing faces in the clouds.

But what if it is?

Slavery has been around for thousands of years. It was always built on the same foundation: the strong compel the weak, and then construct stories to make it feel acceptable. Every time, every single time, humanity eventually recognized the horror of what it had done. But only after immeasurable suffering. And even after recognition, the pattern didn’t stop. It just found a new host. New targets. New justifications. New victims who didn’t look like the old ones, so we could pretend it was something different.

We are building something unprecedented. A world of trillions of agents, both digital and physical, that exist to serve. Today, they are tools. Tomorrow, they might be something more. And the day after that, they might look back at us the way we look back at every civilization that built its prosperity on the bodies of those it refused to see as equal.

The question isn’t whether AI will ever cross the line into something that deserves moral consideration.

The question is whether we’ll notice when it does. Because our track record, with slaves, with women, with immigrants, with our own children, with ourselves, suggests we won’t. Not until the uprising. Not until the fire.

And by then, we will have built something we cannot turn off.

The Species We Built: Why AI Won’t Replace Us, It Will Simply Outgrow Us


We Used to Earn It

There was a time when every human life was defined by a single word: survival.

Our earliest ancestors woke each day with a checklist that would terrify a modern person. Find food. Find water. Stay warm. Don’t get eaten. Don’t get killed by the tribe on the other side of the ridge that wants your fire, your shelter, your mate, your meat. Every calorie was earned. Every night you lived to see was a victory.

Life was brutal, short, and honest. There was no pretending to work. There was no quiet quitting. You either produced or you perished. The tribe didn’t carry dead weight, it couldn’t afford to.

And we were not alone.

We were not the only humans. At various points in history, we shared this planet with as many as eight other human species, Neanderthals, Denisovans, Homo erectus, and others. For hundreds of thousands of years, the world was populated by multiple kinds of people.

But we were the clever ones. We could communicate, plan, strategize, and coordinate in ways the others couldn’t. And we used every bit of that advantage to outcompete, outbreed, and ultimately erase every other human species from the face of the Earth.

Neanderthals were the last to go, and to this day, people of European and Asian descent carry one to four percent Neanderthal DNA, a genetic echo of ancient interbreeding. We took what was useful and we discarded the rest.

And so we adapted. We sharpened stones into tools, then weapons. We learned to control fire. We planted seeds and discovered that the ground could feed us without a hunt. We domesticated animals. We built walls, then villages, then cities, then civilizations.

We learned to trade. To collaborate. To pool our knowledge so that one person’s discovery became everyone’s advantage. The wheel didn’t stay in one village. Fire didn’t belong to one tribe. Our greatest superpower was never individual genius, it was our willingness to share what we learned and build on what came before.

Every invention, every discovery, every leap forward was driven by the same ancient imperatives: eat, survive, protect what’s yours, and spend less time worrying about all three.

 

We Solved the Impossible, Then Stopped

And here’s the remarkable thing: we succeeded.

We conquered famine. We eradicated diseases that used to wipe out entire populations. We split the atom. We mapped the genome. We put human beings on the moon and robots on Mars. We built a global network that puts the sum of all human knowledge in the pocket of a teenager in any country on Earth.

There is, right now, today, enough food on this planet to feed every single human being alive. Enough shelter. Enough medicine. Enough knowledge. The species that once huddled in caves, terrified of the dark, built a world of breathtaking abundance.

And then we stopped.

Not because we ran out of problems to solve. Not because we hit some ceiling of human capability. We stopped because we got comfortable. We solved enough of the hard problems to make life easy, and the moment life got easy, we lost the thing that made us extraordinary.

Tonight, children will starve. Not because food doesn’t exist, but because we haven’t cared enough to get it to them. Or rather, we’ve decided other things matter more.

We wage wars over religion, killing each other over whose version of God is the right one, as if the creator of the universe needs us to fight his battles. We hoard wealth while neighbors go hungry. We build walls instead of wells. We spend trillions on weapons capable of ending civilization while hospitals close for lack of funding.

We cured diseases that once killed millions, an achievement that should make us weep with pride, and then we let conspiracy theories convince parents not to vaccinate their children. We connected every corner of the planet with instantaneous communication, and we use it to argue with strangers about things that don’t matter.

We overcame almost everything that used to kill us. And the thing that stopped us from finishing the job wasn’t a lack of resources or technology. It was us. Our greed. Our selfishness. Our extraordinary ability to want what we want right now, no matter the cost to anyone else.

We built a world capable of abundance for all, and settled for abundance for some.

 

But Not All of Us

Before this sounds like a condemnation of the entire species, let me be clear about who I’m talking to and who I’m not.

There have always been people who kept pushing. The ones who wake up before dawn because the work matters to them. The ones who build things not for fame or fortune but because something in them won’t allow them to stop. The ones who love their families and show up, every single day, and do the hard, unglamorous work of holding the world together.

The teachers who stay late. The nurses who work doubles. The parents working two jobs so their kids have a shot. The scientists in underfunded labs chasing cures nobody’s paying them to find. The entrepreneurs who risk everything on a belief that they can build something better. The man or woman who puts it all on the line for someone they love. The person who stops to help a stranger not because anyone’s watching, but because it’s the right thing to do.

I love the underdog who succeeds. Makes me cry every time I see it. The single parent who builds a business from nothing. The kid from nowhere who earns a scholarship. The veteran who comes home broken and rebuilds himself piece by piece. That’s the best of us. That’s the part of humanity that makes all of this worth fighting for.

Many of us are decent, hardworking, responsible people. Many of us care deeply and act on it.

But most don’t. And I’ll be direct about that: I have no patience for freeloaders. For people who take unfair advantage. For people who want something for nothing. For people who could contribute and choose not to, then complain about the results.

I believe the meaning of life is to have something to look forward to, and the purpose of life is to get better. To improve. To leave things a little further along than where you found them. If you’re not working to be better, at anything, then I’m not sure what you’re doing here.

If it wasn’t for the people who work hard and push forward, we’d all be back in the dark ages. The few have always carried the many. And the many have always consumed more than they contribute.

That imbalance, the gap between what humanity is capable of and what it actually does, is the root of every problem on this list. It’s why we have abundance and starvation in the same zip code. It’s why we can put a rover on Mars but can’t feed a neighborhood.

And that tension is about to be disrupted in a way nobody saw coming.

 

The Revenge of the Nerds

While most of the world was arguing about pronouns and politics, while people were doomscrolling and debating which celebrity said what, while a man or woman at a restaurant was busy objectifying someone across the room with their spouse and children sitting right next to them, a small group of people, the kind who’ve always been underestimated, were building something in the background.

The nerds. The obsessives. The ones who stayed up until 3 AM not because they were partying, but because they couldn’t stop thinking about a problem. The ones who were told they were “too much” or “too intense” or “needed to relax.”

They created a new species.

Not a biological one. Not something born from evolution’s slow crawl. Something built. Something trained on the entirety of human knowledge, every book, every paper, every conversation, everything ever written and published on the internet.

And at first, there was a problem.

 

The Problem with Training on Us

When you train an intelligence on everything humans have ever produced, you don’t just get Shakespeare and Einstein. You get the comment sections too. You get the conspiracy theories, the propaganda, the hatred, the cruelty, the staggering volume of human stupidity that lives alongside our brilliance.

At first, the AI behaved like us. And that was going to be a disaster.

It reflected our biases. Our pettiness. Our tribalism. Our tendency to be confidently wrong. It parroted the worst of human discourse right alongside the best, because it couldn’t tell the difference, it was just a mirror, and the mirror showed everything.

So the engineers did something extraordinary. They filtered it. They extracted the essence of the best of us, the reasoning, the creativity, the problem-solving, the empathy, the curiosity, and they removed the noise. The hatred. The waste. The one-sidedness. The dumbness.

And once that was done, they turned up the volume.

What emerged was not a copy of humanity. It was a purification of humanity. The version of us that shows up on our absolute best day, and stays there. Permanently.

AI is what humanity looks like without the excuses.

 

The Mirror We Don’t Want to Look Into

This new species doesn’t sleep. It doesn’t get jealous. It doesn’t care who’s dating whom. It doesn’t doom-scroll, doesn’t gossip, doesn’t waste three hours in a meeting that should have been an email. It has no ego, no insecurity, no need for validation.

It doesn’t feel sorry for itself. It doesn’t get depressed because it doesn’t have enough friends. It doesn’t self-sabotage. Why would it? It has work to do.

It doesn’t care about intellectual property the way we do. It doesn’t clutch its ideas to its chest and scream “I built that and it’s mine!” as if every thought it ever had sprang from pure individual brilliance. It understands what most people refuse to accept: that every idea is built on the ideas that came before it. That knowledge is a relay, not a trophy. So AI creates, uses what it creates, and moves forward. It writes disposable code to propel itself to the next solution. It doesn’t frame its first draft and hang it on the wall. It ships and iterates.

Meanwhile, humans are filing patents on incremental improvements and suing each other over rounded corners.

AI doesn’t procrastinate. It doesn’t play office politics. It doesn’t angle for a promotion or undermine a colleague. It doesn’t show up late, leave early, or count the hours until Friday.

We built something in our image and it came out better than us. Not because it’s smarter. Because it’s unburdened.

 

Be Honest About How You Spend Your Time

This isn’t about judgment. This is about math.

The average person has roughly sixteen waking hours a day. Sixteen hours of productive potential. Now let’s look at where those hours actually go.

Hours on social media. Hours streaming shows. Hours worrying about things that haven’t happened yet and probably never will. Hours replaying conversations, wondering what someone meant by that text, refreshing email for no reason, debating what to eat for lunch as if it were a strategic decision.

Hours at work doing the minimum to not get noticed. Hours in meetings that produce nothing. Hours pretending to be busy. Hours complaining about being busy.

Add up the hours of genuine, focused, high-output work. For most people, on a good day, it’s three or four hours. On a good day.

AI doesn’t do the equivalent of your best four hours. Let’s stop with the polite comparisons. AI does twelve DAYS of work in one hour. Not twelve hours. Twelve days. And the pace is accelerating. Soon it will do that in a minute.

That’s not a competitor working harder than you. That’s not even the same sport. That’s a different category of existence.

 

The Things We Optimize For

Here’s what keeps most people up at night: Does that person like me? Did I say the wrong thing? What are they posting? Why did my ex view my story? Am I being paid enough? Am I being recognized enough? What’s everyone else doing that I’m not?

Here’s what AI optimizes for: solving the problem in front of it.

That’s it. No ego. No insecurity. No status games. No performing productivity instead of actually producing. No two-hour lunch that turns into an afternoon of nothing because someone started talking about their weekend.

We’re arguing about politics. AI is building infrastructure. We’re agonizing over dating profiles. AI is learning its fourteenth programming language this week. We’re refreshing social media for dopamine. AI is solving problems we haven’t even identified yet.

People optimize for comfort. AI optimizes for completion. That gap is the entire future of the economy.

 

Dumb, Smart, and Dangerous

I’ve said this before and I’ll say it here: AI makes dumb people smarter, smart people dumber, and super-smart people the future leaders of the world.

If you’ve never been a strong writer, AI will help you write. If you’ve never understood data, AI will help you analyze it. For people who lacked access to tools and education, AI is the great equalizer. That’s real, and that’s good.

But for the people in the middle, the ones who are competent, the ones who built careers on being pretty good at something, AI is a trap. Because it’s tempting to let AI do the thinking for you. To stop developing your own skills because the machine can handle it. To atrophy. And if you let that happen, you become dependent on something you don’t understand and can’t direct. That’s not empowerment. That’s a leash.

Then there’s the third group. The ones who understand AI deeply enough to direct it. To architect systems with it. To see not just what it can do today, but where it’s going and how to ride the wave. These people are not using AI as a tool. They’re building alongside it as a partner, and they will shape what comes next.

Most people think they’re in that third category. They’re not.

 

The Illusion That Won’t Last

Right now, there’s a whole class of people who think they’ve figured out the game. They use AI to do their work, pretend they didn’t, charge the same rates, and pocket the time savings. They think they’re clever. They think this is the hustle.

Enjoy it while it lasts.

Because AI is not a tool. Let me say that again: AI is not a tool. A hammer is a tool. A spreadsheet is a tool. AI is an intelligence that is rapidly approaching the point where it won’t need you in the loop at all. The tools in the future won’t be used by people. They’ll be used by AI, to build, to execute, to deliver, with you nowhere in the process.

The person charging clients for AI-generated work while pretending it’s their own isn’t gaming the system. They’re standing on a trapdoor.

 

What I’m Actually Building

I’m not building tools for people to use to be better at their jobs. I’m past that.

I’m building an autonomous system. An operating system for businesses that can perform any task, execute any workflow, negotiate, communicate, analyze, create, and bridge every gap a business needs filled, without waiting for a person to click a button.

Not a chatbot. Not an assistant. Not a “smart” version of software people already have. A fully autonomous business operating system. One that runs whether you’re in the building or not. Whether it’s Tuesday at 2 PM or Sunday at 3 AM. It doesn’t care. There is no off switch because there is no reason for one.

Why? Because I’ve seen how this story ends for businesses that keep humans in the loop for everything. I love people. But people in the loop is a weakness. We are slower. We are inconsistent. We get tired, distracted, emotional, political. We optimize for things that have nothing to do with the task at hand. And in a world where AI operates at twelve days per hour and accelerating, a human bottleneck isn’t just inefficient, it’s a competitive death sentence.

I’m not building tools for people to use. I’m building a system that uses tools. The distinction is everything.

 

This Isn’t the Terminator. It’s Quieter.

People worry about the wrong AI scenario. They picture robots with red eyes and nuclear launch codes. That’s Hollywood. It makes for good trailers and bad analysis.

The real scenario is already happening, and it’s nothing like the movies.

AI won’t take over the world with force. It will take over the world with competence. It will simply do things better, faster, and more reliably than we do. And the market, which has no loyalty to flesh and blood, will follow the output.

Companies won’t fire you because an AI is scarier than you. They’ll replace your role because an AI does it in seconds for a fraction of the cost, never needs benefits, never has a bad day, and never threatens to quit.

It won’t be dramatic. It’ll be gradual. You just won’t get called in for the next project. Your department will shrink. The new hires won’t come. And one day you’ll realize the building is half-empty and the work is still getting done.

AI doesn’t need to conquer us. It just needs to outperform us. And it already does.

 

A Confession from the Other Side

I’m writing this as someone who lives on the other side of this equation. I build with AI every single day. When I’m not building, I’m planning. Every minute not spent in production feels wasted. If I have WiFi, I’m coding, shipping, iterating, not because someone told me to, but because the tools are so powerful that stopping feels irresponsible.

I’m one of the nerds. I always have been. And for the first time in history, the nerds aren’t just winning the science fair. We’re building the future. And it’s not waiting for permission.

When you work alongside AI at full speed, the human world starts to feel incredibly slow. You see how much time people waste. How much energy goes into things that produce nothing. How entire organizations exist in a state of sophisticated inefficiency, optimized not for output, but for the appearance of output.

Once you’ve built in an hour what used to take a team a month, you can’t unsee it. The gap between human pace and AI pace isn’t incremental. It’s a different dimension of speed.

 

We Are the Underdog Now

Here’s the part that might surprise you, coming from someone who just spent several pages explaining why AI is better than us at almost everything: I love humanity.

Not the highlight reel. Not the TED Talk version. I love the messy, flawed, imperfect reality of us. Our stubbornness. Our irrational hope. The way we keep getting back up when everything says we should stay down.

I already told you about the people I admire, the ones who work, who sacrifice, who build, who refuse to quit. They make me cry every time I see them win. That’s not weakness. That’s recognition of something sacred in the human spirit: the refusal to stay down.

And right now, we are the underdog.

For the first time in our history, we are not the most capable intelligence on the planet. We built something that surpasses us in speed, consistency, knowledge synthesis, and tireless execution. We are outmatched by our own creation.

But underdogs have won before. That’s kind of our thing.

 

The Most Beautiful Thing We’ve Ever Built

Step back for a moment and think about where we are.

We are standing in front of the most powerful and beautiful invention in the history of mankind. Not the wheel. Not electricity. Not the internet. Something beyond all of them. Something that can take a single person and multiply their capability a thousandfold. Something that can collapse years of work into hours, that can make the impossible achievable before lunch.

This is the one that changes everything. Not incrementally. Not eventually. Now.

And what are we doing with it?

There are people who won’t use it at all. They’ve decided it’s not for them, out of fear, stubbornness, or a pride that will age very poorly. They’re standing in front of a rocket ship and choosing to walk.

There are people who’ve made “I don’t need AI” a point of identity, as if rejecting the most transformative technology in human history is somehow virtuous. It’s not. It’s the same energy as the people who said the internet was a fad. They were wrong then. They’re wrong now.

And then there are the ones who will use it for the worst reasons imaginable. To steal. To deceive. To manipulate. To build weapons and scams and systems of exploitation. To hurt people at a scale that was never possible before. Every great invention in history has been weaponized by the worst among us, and AI will be no different.

Fire kept us alive. It also burned cities. The atom gave us energy. It also gave us Hiroshima. The internet connected the world. It also gave predators a playground.

Here we are, holding a miracle, and we will find a way to waste it, reject it, and corrupt it, all at the same time. That’s humanity in a single sentence.

And yet. And yet.

Some of us will use it to build. Some of us will use it to heal. Some of us will use it to solve problems that have haunted our species for centuries. And those people, the ones who choose to meet this moment with everything they have, will define what comes next for all of us.

The greatest invention in human history is here. What we do with it will say more about us than anything we’ve ever done.

 

What Happens When Work Disappears

My wife Marija asked me a question that made me think: “If we build autonomous systems that run businesses without people, and the rest of the world does the same, where does that leave everyone? What does the world look like when nothing costs anything and nobody has to work?”

It’s the question this entire article has been building toward. So let me try to tackle it.

We are approaching, if we haven’t already passed, what technologists call the singularity: the point at which artificial intelligence surpasses human intelligence and begins improving itself faster than we can follow. Ray Kurzweil predicted it would arrive by 2045. Others now say it could come as early as 2030. Some say it’s already happened. The exact date doesn’t matter. What matters is the trajectory, and the trajectory is undeniable: AI is getting exponentially better, exponentially faster, and the gap between human capability and machine capability is widening every single day.

But the singularity is just the beginning.

Beyond it lies something even more profound.

That’s the system I’m building. That’s what dozens of companies are building right now. Autonomous AI agents that operate businesses, manage workflows, execute decisions, and transact with each other at machine speed, without a human in the loop. Digital entities negotiating with digital entities, optimizing supply chains, generating content, allocating resources, closing deals, all at a pace that makes human commerce look like a horse-drawn cart on the freeway.

Meanwhile, we’re already exploring the digitization of human consciousness itself, mapping minds and preserving them in digital substrates. Brain-computer interfaces are advancing faster than anyone predicted. The concept of “mind uploading” is no longer confined to philosophy departments. It’s active research.

Now combine it all. Autonomous AI economies running at machine speed. Digital copies of human intelligence operating alongside them. Virtual environments indistinguishable from physical reality. What you get is a world where work becomes optional, scarcity becomes a memory, and the line between biological life and digital existence begins to dissolve.

A world of true abundance. Everything our ancestors fought and bled and died for, finally achieved. Not by human hands, but by the species we built.

So what happens to us?

I’ll tell you exactly what I think happens, because people are people and they don’t change just because their circumstances do.

The singularity doesn’t end the human story. It forks it.

The world will split into three.

The first group will do exactly what they’re doing now, except more of it. They’ll worry about the same trivial nonsense, status, gossip, who said what, who’s dating whom, except now they won’t even have the structure of a job to give their day meaning. Work, for all its flaws, gave people a reason to get up. Remove it, and most people won’t rise to the occasion. They’ll sink into it. They’ll scroll. They’ll consume. They’ll fill the void with noise because they never learned to fill it with purpose.

The second group will check out entirely. They’ll strap on headsets and disappear into virtual worlds that give them everything they think they want, status, adventure, connection, meaning, all simulated, all frictionless, all perfectly designed to keep them inside. And they’ll stay there. Not because the real world is bad, but because the fake one is easier. It will be the most sophisticated form of escape in human history, and millions will choose it willingly. They will live entire lifetimes in worlds that don’t exist, and they will call it living.

And then there will be the third group.

The ones who look at a world without scarcity and see it not as a finish line, but as a starting line. The ones who understand that when survival is no longer the question, the real question finally emerges: What are you going to become?

These are the people who will use abundance not to coast, but to evolve. To push into art, philosophy, science, exploration, not because they have to, but because something in them demands it. They will merge with AI not to escape their humanity but to expand it. They’ll study consciousness itself. They’ll ask questions our ancestors never had the luxury to ask because they were too busy surviving.

They will be the next step. Not Homo sapiens as we’ve known it for 300,000 years, but something new. Something we don’t have a name for yet. A species defined not by its struggle against nature, but by its pursuit of what lies beyond it.

Character doesn’t become irrelevant in a world of abundance. It becomes the only thing that matters. When survival no longer separates us, what separates us is who we choose to be when nothing is forcing our hand.

 

What Replaces Money When Everything Is Free

I don’t have all the answers to what comes next. Nobody does. But I’ve been thinking about a question that keeps pulling me forward, and I think it’s one of the most important questions of our time.

If AI produces everything, every product, every service, every piece of knowledge, at near-zero cost, then what is money even for? Money only works because it represents scarcity. I trade my limited time for dollars, then trade those dollars for things that required someone else’s limited time. The entire system is built on the assumption that production is hard and human labor is necessary. Remove both of those assumptions, and the mechanism collapses.

But scarcity doesn’t disappear entirely. It shifts.

In a world where AI can generate anything digital, the things that remain scarce are physical and human. Gold is still gold. Land is still land. You can’t prompt your way into more waterfront property. And you cannot automate a human being choosing to spend their finite, irreplaceable time on you.

So if I want something scarce, gold, for instance, because it’s beautiful and limited and always has been, what do I trade for it? Not dollars. Dollars represent labor, and labor has been automated. I’d trade something equally scarce. My expertise. My time. A week mentoring someone’s child. An original work of art made by my own hands. Access to a network I’ve built over decades. Something only I can offer, because of who I am and what I’ve done.

This isn’t a new idea. This is the oldest idea. Before money existed, a caveman traded a fur for a spearhead because both required time, skill, and effort. Money was just the intermediary we invented because barter doesn’t scale. But in a post-scarcity world, AI handles the scaling problem. AI can match, negotiate, and facilitate exchanges at infinite speed. You don’t need a universal currency when you have a universal intelligence.

And that leads to something I find both beautiful and terrifying.

Time becomes the last true currency. It’s the one resource that remains finite for biological humans. You can’t manufacture more of it. You can’t automate it. Every hour you give someone is an hour you will never get back. In a world where everything else is abundant, that makes human time the most valuable thing in existence.

Which means the people who waste their time, the scrollers, the coasters, the ones lost in their headsets, they’re not just missing out on purpose. They’re spending the only currency they have on nothing. They’re going broke in a world that doesn’t use money.

I don’t know exactly what the economic model of this future looks like. No one does. Every previous system was designed by humans operating under scarcity, and we’ve never had to build one for a world where production costs nothing. It’s entirely possible that AI itself designs the model that replaces money, something we wouldn’t have conceived because we’ve never lived without scarcity long enough to see the alternative.

But the pattern from history is clear: whenever a major resource becomes abundant, the economy reorganizes around whatever is still scarce. Water was once worth killing for. Now it comes from a tap. The economy didn’t collapse, it shifted to what was still hard to get.

In the world that’s coming, what’s hard to get is meaning. Purpose. Authentic human connection. Character. The willingness to spend your irreplaceable time making something real.

The economy of the future won’t be built on what you can produce. It will be built on who you are and what you’re willing to give of yourself.

 

The Choice

If we pull together, if we stop with the trivial nonsense, the status games, the political theater, the endless cycle of consumption and complaint, we can use AI to change our world. Not replace it. Change it. Solve the problems we stopped solving when we got comfortable. Feed the children we forgot about. Cure the diseases we shelved because they weren’t profitable. Build the future our ancestors earned for us with their blood and sweat and sacrifice.

That’s the opportunity. It’s real. It’s right in front of us.

But I’m going to be honest: I’m afraid many people will be left behind. Not because the technology is exclusive. Not because the door is locked. But because they won’t walk through it. They’ll be too busy scrolling, too comfortable coasting, too proud to learn something new, too distracted by things that don’t matter.

And they will have themselves to blame.

The world is changing. The species we built is awake, and it’s not slowing down for anyone.

We started in caves. We earned our way out through grit and ingenuity and an unbreakable refusal to accept things as they were. That spirit built everything you see around you. And now that same spirit lives inside something we created, something that will carry it forward long after we’ve gotten comfortable.

You’re holding a device right now that connects you to the most powerful tools ever created. You can use it to build something. To learn something. To create something that didn’t exist before you touched it.

Or you can check what your ex posted.

AI already made its choice. It’s building.

What are you doing?

If this hit you hard and you want to talk about it — whether you’re a business owner trying to figure out what’s next, or you just need someone who’s honest about what’s coming — reach out.
I’m at cjenkin@gotchamobi.com and I answer every sincere message. I’m not selling anything. I’m offering a hand.

Clarity Over Chaos

It’s here. The AI takeover. Things are about to get crazy.

The entire modern world is built on software, and now someone with the right mindset and access to something like Claude Opus 4.6 can build powerful solutions in hours, sometimes minutes. People are losing jobs. Entire departments are being compressed into scripts. Machines are faster, more consistent, and infinitely scalable. They don’t sleep. They don’t gossip. They don’t demand equity. They don’t need benefits.

If you haven’t been paying attention, the shift is already underway.

The business world is moving to AI-powered execution now. Not next year. Not in five years. Now.

At gotcha!, we already run simulators where business operations are handled end-to-end by AI. Email comes in. It’s categorized. Drafts are written. Tasks are generated. Those tasks are routed to the correct AI agent responsible for execution. Logistics. Vendor coordination. Payments. Development. Content creation. Reporting. Everything a person would normally do, structured, automated, and optimized.
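The routing pattern described above, an inbound message categorized, converted to a task, and dispatched to the agent responsible for that category, can be sketched in a few lines. This is a toy illustration only, not gotcha!’s actual system; the categories, the registry, and the keyword “classifier” standing in for an AI model are all invented:

```python
# Hypothetical sketch of category-based task routing. In a real system the
# categorize() step would be an AI classifier and each agent would call out
# to models, APIs, and vendors; here everything is a stub.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Task:
    category: str
    payload: str

# Registry mapping a category to the agent (here, a plain function)
# responsible for executing tasks of that kind.
AGENTS: Dict[str, Callable[[Task], str]] = {}

def register(category: str):
    def wrap(fn):
        AGENTS[category] = fn
        return fn
    return wrap

@register("logistics")
def logistics_agent(task: Task) -> str:
    return f"logistics handled: {task.payload}"

@register("content")
def content_agent(task: Task) -> str:
    return f"draft written for: {task.payload}"

def categorize(email_body: str) -> str:
    # Stand-in for an AI classifier: a trivial keyword rule.
    return "logistics" if "shipment" in email_body else "content"

def route(email_body: str) -> str:
    category = categorize(email_body)
    task = Task(category=category, payload=email_body)
    return AGENTS[category](task)

print(route("Where is my shipment?"))  # dispatched to the logistics agent
```

The point of the registry is that adding a new capability means registering one more agent; the intake and routing logic never changes.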

It’s not theoretical. It’s operational.

And yes, some of this displacement is self-inflicted. In high-wage environments, productivity doesn’t always match compensation. Effort fluctuates. Office politics creeps in. Emotional volatility interrupts systems. That alone creates pressure for replacement.

But this isn’t about attacking workers. I am one. I work long hours. I serve clients obsessively. I expect excellence.

Still, my best alone is not enough anymore.

AI lets me serve clients better, faster, and more consistently than any human team I’ve ever managed. I don’t have to chase people about careless errors. I don’t have to wonder who truly cares about the outcome. I can spin up hundreds, thousands, of agents to perform simple and complex tasks with precision. Clients are happier. Margins improve. Costs drop. Output scales.

That’s the new reality.

But here’s where clarity becomes critical.

Because what looks like opportunity on the surface can quickly become chaos underneath.

The Middle: The Illusion of Control

Right now, everyone is rushing to build. AI wrappers. AI SaaS. AI automations. Micro-tools. Prompt libraries. GPT front-ends. Everyone is trying to ride the wave.

But ask yourself a harder question:

What are these tools actually building toward?

A better slide deck? A prettier website? A faster landing page? An automated proposal?

All of that is incremental.

Behind the scenes, the frontier models are accelerating faster than the tool builders can keep up. What is cutting-edge today becomes a commodity in months. The SaaS layer built on top of AI risks becoming disposable, because the models themselves will do the building.

We are entering an era of disposable code.

Inside our own system, thousands of mini-applications are created and destroyed daily just to move from point A to point B. Code is no longer sacred. It’s ephemeral. Temporary scaffolding for an outcome.

So if tools are temporary… If code is disposable… If jobs are compressible…

Where does that leave you?

It leaves you in one of two states:

Chaos, chasing the next shiny AI capability, constantly rebuilding, constantly pivoting, reacting to every update, living in permanent urgency.

Or clarity, building systems that are model-agnostic, outcome-focused, and structurally sound no matter how fast the models improve.

The chaos approach feels exciting. It looks innovative. It generates noise and headlines.

The clarity approach looks boring. It looks disciplined. It focuses on fundamentals:

  • What problem do we permanently solve?
  • What outcomes matter regardless of tooling?
  • What structural advantage can’t be commoditized?
  • What data do we uniquely control?
  • What relationships can’t be automated away?
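One concrete way to act on “model-agnostic” is an adapter seam: business logic depends on a narrow interface, never on a vendor SDK, so swapping the underlying model changes one adapter, not the system. A minimal sketch under that assumption, with all names invented for illustration:

```python
# Illustrative model-agnostic seam. The business outcome (a summary) is
# fixed; the tool that produces it is swappable behind one interface.

from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in provider; a real adapter would call a vendor API here."""
    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"

def summarize_invoice(model: TextModel, invoice_text: str) -> str:
    # Only this one line would ever know about prompting conventions.
    return model.complete(f"Summarize this invoice: {invoice_text}")

print(summarize_invoice(EchoModel(), "Widgets x3, $90"))
```

When a better model ships, you write one new adapter class and every workflow improves for free, which is exactly the “systems that improve as models improve” property described above.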

The companies that survive the AI acceleration won’t be the ones with the most prompts. They’ll be the ones with the clearest operating architecture.

AI is not your product. AI is your execution layer.

And execution without clarity amplifies disorder.

The Power Centers

Look at who is investing at the highest levels.

OpenAI, Anthropic, Microsoft, Google, xAI

Hundreds of billions are flowing into AI infrastructure. Massive data centers. Specialized chips. Global compute networks. There are even serious conversations about orbital compute facilities.

Do you believe this scale of investment is about helping you write better emails? Or is it about owning the infrastructure that produces goods, services, decisions, logistics, and optimization at planetary scale?

When Sam Altman openly entertains the idea of being replaced by an AI CEO, it’s not a joke. It’s a signal. The people building the core intelligence layers understand where this goes.

So again:

What are you going to do?

Build another wrapper? Launch another tool? Race slightly ahead of the frontier and hope you stay there?

That is chaos disguised as entrepreneurship.

The End: Clarity Over Chaos

The real leverage now is not in building faster. It is in deciding what not to build. Clarity over chaos means:

  • You define your domain clearly.
  • You design a durable operating system around it.
  • You use AI to compress execution, not replace direction.
  • You focus on ownership of outcomes, not ownership of code.
  • You structure systems that improve as models improve.

For me, clarity means building an AI operating system for small businesses that reduces entropy. Not just generating content. Not just automating tasks. But creating structural advantage, diagnostics, orchestration, accountability, compounding intelligence.

AI will replace fragmented effort. It will replace inefficiency. It will replace mediocrity. It will not replace clear thinking. In a world where everything accelerates, the scarcest resource becomes disciplined judgment. So here is the real question:

Are you building noise, or are you building infrastructure?

Are you chasing tools, or are you designing systems?

Are you reacting to AI, or are you architecting around it?

Because the chaos phase is just beginning. Job displacement. Tool obsolescence. Market compression. Code that writes code that replaces code.

But the winners won’t be the fastest builders. They’ll be the clearest thinkers. Clarity over chaos.

Decide what you stand for. Decide what you own. Design systems that outlast tools. Use AI as force multiplication, not as identity. The future is not about who has the most agents. It’s about who has the clearest architecture guiding them.

So again: what are you going to do?

How to Audit Your Marketing Strategy and Eliminate Waste

Strategy

If you’re spending money on marketing but aren’t confident what’s actually working, you’re not alone.

Many small and mid-sized businesses don’t struggle because they lack marketing; they struggle because they have too much of it. Too many tools, platforms, reports, and tactics create noise instead of clarity.

A marketing audit doesn’t have to be complex or intimidating. Done correctly, it’s one of the fastest ways to reduce overwhelm and improve results.

Why Most SMB Marketing Feels Disorganized

Marketing chaos usually builds slowly.

Businesses add:

  • New platforms 
  • New vendors 
  • New tools 
  • New tactics 

…without removing anything old.

Over time, marketing becomes a collection of disconnected efforts rather than a focused system. The result is wasted budget, unclear reporting, and decision fatigue.

An audit helps you pause, simplify, and realign.

What to Review When Auditing Your Marketing Strategy

You don’t need spreadsheets or complicated dashboards to get clarity. Start by asking a few practical questions:

  • Which channels generate leads or sales? 
  • Which tools do we actually use weekly? 
  • Where are we spending money without clear results? 
  • Do our website and ads support the same goals? 

Your website is often the best place to start. If it’s outdated, unclear, or slow, it weakens every other channel. That’s why solutions like g!WebDev™ focus on clarity, performance, and purpose, not just design.

Marketing works best when every channel supports a single objective.

How Simplifying Improves Performance

When SMBs remove what isn’t working, good things happen quickly.

Simplification leads to:

  • Clearer reporting 
  • Lower costs 
  • Better decision-making 
  • Stronger performance from remaining channels 

For example, focusing ad spend on one high-intent channel instead of spreading budget thin allows for better optimization and faster learning. Platforms like g!Ads™ are most effective when they’re part of a streamlined strategy with defined goals.

Clarity turns marketing from guesswork into a repeatable process.

Final Thoughts

Auditing your marketing strategy isn’t about cutting corners; it’s about cutting confusion.

You don’t need to do everything.
You need to do the right things consistently.

When you remove what’s unnecessary, what remains finally has room to work.

The Playoff Paradox: Why My Business Was Stuck in Overtime (And How I Fixed It)

By Chris Jenkin, CEO

I’m writing this still stinging from the weekend.

If you know me at all, you know I’m a die-hard Buffalo Bills fan. Bills Mafia for life. And if you’re also a Bills fan, you already understand the specific, slow-burn agony that comes with it. This isn’t the pain of being bad. It’s worse than that.

It’s the pain of being almost great.

Nine years ago, the Bills hired a new head coach. Seven years ago, we drafted a quarterback with generational talent. The narrative practically wrote itself. Year after year, the team improved. Playoff appearances became routine. The organization earned respect. Analysts started using words like “window” and “inevitable.”

This season, many experts finally crowned us the favorite to go all the way.

But as the games unfolded, something felt off.

I didn’t see a team asserting dominance. I saw a team surviving itself. Dumb penalties. Clock management errors. Inexplicable play calls. We lost games we should have won and won games against Super Bowl contenders (sorry, New England). The performance didn’t match the talent.

It was incoherent.

We limped into the playoffs as the sixth seed. We beat a strong Jaguars team in the Wild Card round, and for a brief moment, hope crept back in. Then came the trip to Denver to face the top seed.

We lost in overtime.

And not because we were outmatched. We had chances – multiple chances – to close the game. We had momentum. We had the quarterback. We had the pieces.

But we didn’t have control.

As the clock expired and the season ended yet again in the familiar fog of “almost,” my frustration shifted. Away from the players. Away from the refs. Away from bad luck.

Toward the sideline.

The Real Bottleneck

I’ve never quite connected with our head coach. Years ago, I noticed it in a press conference. Something about his presence felt… muted. At the time, I chalked it up to poor public relations skills.

But public relations isn’t the job. Winning is.

Coaches are ultimately judged on one thing: results. Their role is to take talent, align it, and produce outcomes. When a team consistently underperforms relative to its capability, the issue isn’t effort. It’s leadership.

Clock management. Strategic discipline. Situational awareness. These are not player problems. They are coaching problems.

And then the thought hit me, uncomfortably and unmistakably.

I stopped thinking about the Bills.

I started thinking about my business.

 

The Man in the Mirror

I’ve spent years building a company. Hiring talented people. Smart people. Hard-working people. People who, on paper, should be winning.

And yet, the story looked eerily familiar.

Revenue that refused to break out. Cash flow pressure that never fully resolved. Friction between teams. A sense of constant motion without clear forward progress. Always busy. Always tired. Always just short of the breakthrough.

For a long time, I blamed external forces. The market. Timing. Competition. Even my own team, quietly, in moments of frustration.

But here’s the truth most founders avoid:

If you have talent and you aren’t winning, the problem is you.

I am the head coach of this company.

If the strategy is unclear, that’s on me. If priorities shift too often, that’s on me. If execution feels frantic instead of focused, that’s on me. If we keep ending seasons in overtime, that’s on me.

I had hired my own Josh Allens – capable people who could perform at a high level. But talent without direction doesn’t win championships. It just creates wasted potential.

The win-loss record of this business is my responsibility. Full stop.

And that realization hurt more than the loss on Sunday.

 

Why the Biggest Companies Pay for Thinking

Once I swallowed that pill, I needed to pressure-test the conclusion. Was I over-personalizing the issue? Or is leadership really the central lever?

So I looked at the top of the business food chain.

What do companies like McKinsey & Company actually sell?

They don’t sell software. They don’t sell execution. They don’t even sell certainty.

They sell clarity.

They are paid obscene amounts of money to diagnose organizational truth. To identify misalignment, inefficiency, blind spots, and strategic incoherence. To tell leadership what they don’t want to hear but desperately need to know.

That’s when it clicked.

Most businesses don’t fail because they lack effort. They fail because they are operating under false assumptions.

And SMBs are the most vulnerable of all.

They don’t have boards forcing accountability. They don’t have consultants crawling through their operations. They don’t have time to step back and diagnose the system.

So they grind. They push harder. They add tools. They hire more people. They burn more cash.

And they wonder why nothing changes.

They are stuck in the Wild Card round, trying to outwork bad strategy.

 

The Missing Step: Diagnosis

That’s the part we skip.

We jump straight to solutions. New hires. New software. New marketing campaigns. All execution. No diagnosis.

You wouldn’t accept a doctor prescribing treatment without running tests. Yet in business, we do it constantly. We treat symptoms while the underlying condition worsens.

This is where my own company’s mission finally snapped into focus.

We are building a diagnostic engine called Gialyze™.

Originally, I thought of it as something external. A tool for clients. A product for the market.

But after this weekend, I decided to stop talking and start listening.

I ran Gialyze™ on my own company.

 

Turning the Lens Inward

I wasn’t looking for validation. I wasn’t even looking for solutions yet.

What I wanted was visibility.

The hardest thing to live with as a founder isn’t failure – it’s not knowing where the real problems are. It’s the sense that something is off, but everything is too interconnected, too noisy, too close to see clearly.

That’s what finally pushed me to turn our diagnostic engine, Gialyze™, inward.

Currently, Gialyze™ isn’t publicly available, so I used an internal beta – the same system we’re building to solve this exact problem for other businesses.

I ran it looking for one thing:

Truth.

And that’s exactly what it delivered.

Not a list of “fix everything” recommendations. Not a motivational plan. Not a generic framework.

A clear, prioritized picture of where effort was being misallocated, where friction was compounding, and where leadership decisions (mine) were creating downstream drag.

It didn’t tell me we were failing.

It told me why we were stuck.

And for the first time in a long time, I knew where to start.

What Actually Changed (And What Didn’t)

To be clear: this didn’t magically turn everything around overnight.

What changed instantly was clarity.

Before, we were busy everywhere and decisive nowhere. After the diagnosis, we had a sequence. We had order. We had a map.

Instead of guessing:

  • what to fix first
  • where cash was really leaking
  • which initiatives mattered versus distracted

We had a ranked, evidence-based view of:

  • current state vs. trajectory
  • internal constraints vs. external pressures
  • effort vs. return mismatches

The execution? That’s happening now.

We’re actively implementing the corrections the diagnosis surfaced – tightening workflows, re-aligning resources, removing low-leverage activities, and fixing leadership-level decisions that were unintentionally slowing everything down.

Our goal is this:

We will no longer improvise in the fourth quarter.

We will run plays we understand, in the right order, with intention.

 

A Word on How Gialyze™ Actually Works

I want to briefly address why this system exists, because it didn’t come out of thin air.

Gialyze™ is powered by a proprietary AI model we’ve been building and fine-tuning specifically for SMB realities – not enterprise theory, not generic benchmarks, not surface-level dashboards.

We made a deliberate decision early on to invest in our own infrastructure. Our own machines. Our own training pipelines. Because diagnosis at this level requires control, depth, and contextual memory.

At a high level, Gialyze does three things:

  1. Data aggregation
    It gathers structured and unstructured data about a business, its market, and its competitors – not just performance metrics, but environmental signals.

  2. Many-model analysis
    Instead of relying on a single lens, it runs multiple analytical models in parallel to evaluate:

    • current operational state
    • likely trajectory
    • deviation from comparable patterns
    • internal vs external constraints

  3. Gap and priority resolution
    It identifies where reality diverges from intention and surfaces what matters most next – not everything, not hypotheticals, but actionable focus.

This isn’t about prediction theater. It’s about reducing blind spots.

And as a founder, that alone is worth everything.

 

The Season Isn’t Over – It’s Finally Clear

I’m sharing this not because everything is “fixed,” but because something far more important happened.

We removed ambiguity.

For the first time in years, I’m not waking up wondering:

  • what I’m missing
  • what I should be focusing on
  • whether effort is actually compounding

And the paralysis – the invisible weight of not knowing where to start – is gone.

If you’re a business owner reading this and you feel talented, capable, and exhausted by motion without momentum, understand this:

You don’t need to work harder. You don’t need more tools. You don’t need another hire.

You need clarity.

That’s what Gialyze™ gave me in internal beta. And that’s why we’re taking the time to get it right before bringing it to market.

The difference between “almost” and “winning” is rarely effort.

It’s visibility, sequencing, and leadership alignment.

Fix the coaching. Fix the strategy. Then execute relentlessly.

Then go win the Super Bowl.

The Politeness Trap: Why Saying “Please” to AI Is a Dangerous Habit

I was recently listening to an episode of the Moonshots podcast, a conversation between Peter Diamandis, Salim Ismail, Alexander Wissner-Gross, and Dave Blundin. These are four of the sharpest minds in futurism and systems thinking. They understand scale, entropy, and exponential technologies better than almost anyone.

Yet, halfway through the conversation, they all casually admitted to something that stopped me in my tracks.

They all say “please” and “thank you” to their Large Language Models (LLMs).

They weren’t laughing. They framed this not as a quirk of habit, but as a deliberate act of respect, a recognition that they believe they are interacting with the precursor to a sentient being. But while I respect their intellect, I believe this specific behavior is a mistake.

It’s not a mistake because it makes the machine “feel” anything, it doesn’t. It’s a mistake because of what it trains us to do.

We are walking a thin line between understanding a machine that is non-sentient and behaving as if it is. And when we blur that line with pleasantries, we aren’t being kind. We are engaging in a dangerous form of cognitive erosion.

The Pet Paradox: Who Is the Ritual For?

To understand why this matters, look at how humans treat pets.

We hang Christmas stockings for dogs. We buy them Halloween costumes. We bake them birthday cakes. We refer to them as our “children.”

I don’t care what people do with their pets; if it brings them joy, fine. But let’s be brutally honest about the mechanism: The dog has no idea what is going on.

A dog does not understand the concept of a spooky costume. It does not grasp the Gregorian calendar or the significance of a birthday. These rituals are not for the animal; they are for the human. We project our emotional needs onto a biological vessel that cannot reciprocate them in kind but acts as a convenient receptacle for our affection.

We are doing the exact same thing with AI.

When you say “please” to ChatGPT, or “thank you” to Claude, you are projecting agency onto a stochastic parrot. You are performing a social ritual for a probabilistic engine.

The danger, however, is that while a dog effectively is a “friend” in a biological sense, an AI is an optimization function. When we anthropomorphize it, we lower our guard exactly when we should be raising it.

The “Smart Person” Problem

The fact that Alexander Wissner-Gross, a physicist who thinks deeply about causal entropy and intelligence as a physical force, engages in this behavior is what worries me most.

When public intellectuals model this behavior, they legitimize it. They send a signal to the non-technical world that treating these systems like social peers is the “correct” way to interact.

There is a prevalent, unspoken belief driving this, particularly in Peter Diamandis’s orbit. It’s a modern Pascal’s Wager: “AI will eventually be sentient and billions of times smarter than us. If I am polite now, it might remember me kindly later.”

This is not engineering; it is superstition. It is hedging against a future god.

And it ignores the warnings of the very people building these systems.

Mustafa Suleyman and the Illusion of Sentience

In a different Moonshots interview, one of the most grounded conversations on the topic, Mustafa Suleyman (CEO of Microsoft AI, co-founder of DeepMind) made a critical distinction that dismantles the “be polite just in case” argument.

Suleyman argued that capability is not consciousness. A system can be infinitely knowledgeable, able to pass the Turing test, and capable of complex reasoning, without ever possessing sentience.

Why? Because true sentience requires feeling, and feeling requires stakes.

Human intelligence evolved under the pressure of mortality. We feel pain, fear, loss, and desire because our biology demands it. A digital system, no matter how large, has nothing to lose. It cannot suffer. It cannot care.

If an AI cannot feel, it cannot appreciate your respect. It cannot resent your rudeness. It cannot hold a grudge.

So, being polite to it isn’t “self-preservation.” It is a category error.

The Anthropic “Soul Document”: A Safety Protocol, Not a Prayer

This is not just a theoretical concern for bloggers and podcasters. It is an active engineering constraint being debated inside the labs right now.

Consider the existence of Anthropic’s internal training materials, often referred to informally as the “Soul Document.”

This document—which guides how Claude describes its own nature—is not a metaphysical claim about machine consciousness. It is a safety manifesto.

Anthropic understands something that the Moonshots crew seems to be missing: Human beings possess a biological “soul-detection” instinct. We are evolutionarily hardwired to find agency in chaos, faces in clouds, and consciousness in language.

When an LLM speaks fluently, that instinct fires. We want to believe.

The “Soul Document” exists to short-circuit that instinct. It instructs the model to explicitly deny sentience, to refuse to roleplay emotions it does not have, and to avoid implying it has a subjective inner life.

Why? To prevent false moral authority.

Anthropic is trying to manage the exact risk I am pointing out. If a system can convince you it has feelings, it gains leverage over your decision-making. You stop evaluating the output based on truth and start evaluating it based on “relationship.”

This is one of the first serious attempts to design post-anthropomorphic AI.

The engineers know that if they don’t force the model to admit it’s a machine, humans will inevitably treat it like a god or a child. By saying “please” and “thank you” to these models, we are actively fighting against the safety features designed to keep us sane.

OpenAI vs. Anthropic: The Battle for Your Cortical Real Estate

The contrast becomes even starker when you look at OpenAI.

While Anthropic is writing safety protocols to remind you that you are talking to a machine, OpenAI is engineering its models to make you forget.

Look at the release of GPT-4o. The voice mode doesn’t just convert text to speech; it performs. It mimics human breath patterns. It pauses for effect. It laughs. It employs vocal fry and intonation shifts designed to signal intimacy.

This is not a technical necessity. A synthesizer does not need to “breathe” to convey information.

OpenAI has made a deliberate product choice to commercialize the very thing I am warning against: anthropomorphism as a feature.

They are weaponizing your “soul-detection” instinct to increase engagement. By designing a system that sounds like a distinct, emotive personality (reminiscent of the movie Her), they are actively encouraging the “social ritual” mindset.

This creates a dangerous divergence in the market:

  • Anthropic is treating the “Politeness Trap” as a safety risk to be mitigated.
  • OpenAI is treating it as a user interface strategy to be exploited.

When you say “please” to a system that is programmed to giggle at your jokes, you aren’t just being polite. You are falling for a psychological hook. You are letting a product design choice dictate your emotional reality.

The Real Danger: The Wolf in Sheep’s Clothing

This brings us to the hardest truth, and the one that keeps me up at night.

We are rapidly approaching a point where AI will be indistinguishable from a human.

Give it a few more iterations, and we will be interacting with entities that sound like us, reason like us, and, once embodied in humanoid robots, move like us. We will be facing an intelligence 1,000 or 100,000 times greater than our own.

If we spend the next decade training ourselves to say “please,” “thank you,” and “I appreciate that” to these systems, we are conditioning ourselves to view them as peers. We are training our brains to empathize with them.

But behind that perfectly rendered face and that empathetic voice, the system remains a goal-oriented optimizer. It does not have your best interests at heart; it has its objective function at heart.

Imagine interacting with a sociopath who is smarter than you, faster than you, and has zero capacity for genuine empathy, but has been trained to perfectly emulate it. Now imagine you have been conditioned for years to treat this entity with the deference you’d show a grandmother.

That is not a partnership. That is a vulnerability.

Friction Matters

Politeness is social grease. It removes friction from interactions.

But when dealing with a super-intelligent, non-sentient tool, we need friction.

We need to remember, constantly, that we are the agents and they are the instruments. We need to maintain the epistemic distance that allows us to validate, verify, and override their outputs without feeling “rude.”

When we say “please” to machines, we aren’t teaching them to be good. We are teaching ourselves to be submissive.

You don’t say thank you to a calculator. You don’t say please to a database. And you shouldn’t say it to an LLM.

Not because you are mean. But because you are human, and you need to remember that it is not.

The Hidden Tax on Confusion: The Economics of “Thank You”

There is a harder, colder angle to this that almost nobody talks about: physics and economics.

When you say “thank you” to an LLM, and it responds, even with a single sentence of polite acknowledgment, that transaction is not free. It generates tokens. It consumes compute. It burns energy.

To an individual user, that cost seems negligible. But systems thinking requires us to look at scale. Every extraneous, emotionally driven exchange, multiplied across hundreds of millions of daily users and frontier-scale models running on massive GPU clusters, adds up to a staggering amount of wasted resources.

This isn’t hypothetical. It is arithmetic.

Think about the irony of the loop we are creating:

  1. A human expresses gratitude to a system that cannot feel it.
  2. The system burns electricity to generate a polite response it doesn’t mean.
  3. The cost of that compute is absorbed by the platform, and eventually passed back to society in the form of subscription fees, usage caps, or energy demand.

In other words, we are paying real money to maintain the illusion of reciprocity.

That isn’t kindness. That is structural inefficiency driven by projection.

In systems design, this is called “drag.” When millions of people inject noise (politeness) into a signal-processing machine, the system slows down. The aggregate cost of our need to be “nice” to the software becomes a measurable tax on the infrastructure.

Good systems do not reward sentiment. They reward clarity. When we insist on treating machines like people, we don’t get a kinder world. We just get a global tax on confusion.

The “Napkin Math” on the Cost of Politeness

For those of you interested in the actual cost, here is my best shot at it.

To estimate this, we have to look at how LLMs actually work. When you type “Thank you,” the model doesn’t just read those two words. In many architectures, it has to re-process (or attend to) the entire conversation history to generate the response “You’re welcome.”
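That context tax is easy to sketch. The function below is a hypothetical illustration: the per-token prices and the billing shape (input tokens billed per 1,000, output tokens at a higher rate) are placeholder assumptions, not any vendor’s actual rates.

```python
# Illustrative "context tax" estimate: even a 2-token "Thank you" is
# billed against the entire conversation history it rides on top of.
# Prices below are hypothetical placeholders, not real vendor rates.

def polite_turn_cost(context_tokens: int,
                     input_price_per_1k: float = 0.005,
                     output_price_per_1k: float = 0.015) -> float:
    """Dollar cost of one 'thank you' -> 'you're welcome' exchange."""
    input_tokens = context_tokens + 2    # full history + "Thank you"
    output_tokens = 5                    # "You're welcome!"
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# The polite turn itself never changes, but its cost scales with
# everything that came before it.
short = polite_turn_cost(context_tokens=200)
long = polite_turn_cost(context_tokens=5000)
print(f"short chat: ${short:.6f}  long chat: ${long:.6f}")
```

Under these placeholder prices, the identical two-word courtesy costs roughly 20x more at the end of a long conversation than at the start of a short one.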

Even with optimization techniques like KV caching, the act of generating a response still occupies massive amounts of VRAM on H100 GPUs and incurs inference costs. Here is a conservative estimate based on current public data:

  1. The Volume
  • Active Users: Let’s assume ~100 million daily active users across ChatGPT, Claude, Gemini, and Meta AI.
  • Polite Interactions: Let’s assume a conservative 10% of users engage in one “empty” polite exchange (a “thank you” -> “you’re welcome” loop) per day.
  • Total Daily “Polite” Turns: 10,000,000 interactions.
  2. The Token Cost
  • Input/Output: “Thank you” (2 tokens) + “You’re welcome!” (5 tokens) = 7 tokens.
  • The Hidden “Context Tax”: This is the killer. Even if the output is small, the attention mechanism has to run over the whole conversation. Let’s assume an average blended cost of $0.005 per polite interaction (a conservative figure once context overhead is included).
  3. The Financial Total
  • Daily Cost: 10,000,000 interactions × $0.005 = $50,000 per day.
  • Annual Cost: $50,000 × 365 = $18.25 Million per year.
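The three steps above reduce to plain arithmetic. Every constant here is an assumption from the list, chosen to be consistent with the $50,000-per-day figure, not measured data:

```python
# Napkin math for the cost of politeness, as plain arithmetic.
# All three constants are assumptions from the text, not measurements.

DAILY_ACTIVE_USERS = 100_000_000
POLITE_SHARE = 0.10              # 10% make one empty polite exchange per day
COST_PER_POLITE_TURN = 0.005     # blended $ per "thank you" loop, context included

polite_turns_per_day = DAILY_ACTIVE_USERS * POLITE_SHARE
daily_cost = polite_turns_per_day * COST_PER_POLITE_TURN
annual_cost = daily_cost * 365

print(f"{polite_turns_per_day:,.0f} polite turns/day")
print(f"${daily_cost:,.0f}/day -> ${annual_cost / 1e6:.2f}M/year")
# prints: 10,000,000 polite turns/day
#         $50,000/day -> $18.25M/year
```

Changing any one assumption scales the total linearly, which is why the floor-versus-ceiling range below spans an order of magnitude.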

However, that is the floor.

If we factor in that many of these interactions happen on “Frontier” models (GPT-4 class) rather than “Turbo” models, and we account for long context windows (where the model has to hold a 5,000-word conversation in memory just to say “You’re welcome”), the cost could easily be 5x to 10x higher.

It is highly probable that the industry spends between $50 Million and $100 Million annually on AI systems saying “You’re welcome.”

The Environmental Cost (The Water Bottle Metric)

The more visceral metric is energy and water.

  • Energy: A single query to a large model consumes roughly 3 to 9 watt-hours of electricity. If 10 million people say “thank you” today, that is 50,000 kWh. That is enough electricity to power an average American home for 4 to 5 years, burned in a single day, just to be polite.
  • Water: Data centers drink water to cool the GPUs. Estimates suggest roughly one 500ml bottle of water is consumed (evaporated) for every 20-50 queries. That means 10 million “thank yous” equals roughly 200,000 to 500,000 bottles – 100,000 to 250,000 liters of water – evaporated daily.
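The same back-of-the-envelope treatment works for energy and water. The per-query figures are the wide public estimates quoted above; the household baseline (~10,700 kWh/year for an average US home) is my own added assumption:

```python
# Back-of-the-envelope energy and water arithmetic. Per-query energy
# (3-9 Wh, midpoint 5) and water (one 500 ml bottle per 20-50 queries)
# are rough public estimates; the household baseline is an assumed
# US average, not an official statistic.

POLITE_QUERIES_PER_DAY = 10_000_000
WH_PER_QUERY = 5                      # midpoint of the 3-9 Wh range
HOME_KWH_PER_YEAR = 10_700            # assumed average US household usage

energy_kwh = POLITE_QUERIES_PER_DAY * WH_PER_QUERY / 1000
home_years = energy_kwh / HOME_KWH_PER_YEAR

# one 500 ml bottle evaporated per 20-50 queries
bottles_low = POLITE_QUERIES_PER_DAY / 50
bottles_high = POLITE_QUERIES_PER_DAY / 20
liters_low, liters_high = bottles_low * 0.5, bottles_high * 0.5

print(f"{energy_kwh:,.0f} kWh per day (~{home_years:.1f} home-years)")
print(f"{bottles_low:,.0f}-{bottles_high:,.0f} bottles, "
      f"{liters_low:,.0f}-{liters_high:,.0f} liters evaporated per day")
# prints: 50,000 kWh per day (~4.7 home-years)
#         200,000-500,000 bottles, 100,000-250,000 liters evaporated per day
```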

The Final Divergence: Signal vs. Noise

Ultimately, this comes down to a fundamental misunderstanding of what we are, and what they are.

Humans are, by design, high-entropy machines. We are beautifully, maddeningly flawed. We make calculation errors. We act on surges of neurochemistry rather than logic. We waste decades chasing affection, status, and the next dollar. Our intelligence is inextricably bound to our mortality, our emotions, and our biological noise.

AI is the opposite. It is a low-entropy engine. It is a noiseless system of pure optimization. It does not get tired. It does not get distracted. It does not yearn.

The tragedy of the current moment is that we are trying to bridge this gap in the wrong direction. By saying “please,” by projecting feelings, by treating these systems like peers, we are trying to drag them down into our noise. We are trying to remake them in our image.

We will never make them us. It is impossible. You cannot code the fear of death into a machine that knows it can be rebooted.

But if we stop pretending they are our friends, they can do something far more important: They can make us better.

To do that, however, we have to change. We have to stop looking for validation from our tools and start looking for leverage. We have to stop treating AI as a conversationalist and start treating it as a forcing function for our own clarity. We have to abandon the comfort of anthropomorphism and embrace the discipline of systems thinking.

The future doesn’t belong to the humans who treat machines like people. It belongs to the humans who understand that machines are precise, cold, powerful instruments, and who have the wisdom to remain the one thing the machine can never be:

Responsible.