
Chaos Doesn’t Care About Your Substrate. Consciousness, AI, and the Mess That Makes Us Alive

A Boring Book That Made Me Think

I was 42 minutes from finishing Feeling & Knowing by Antonio Damasio when something clicked. The book is dense. Academic. At times, punishingly dry. But underneath the neuroscience jargon is an idea that quietly touches on what’s happening right now with artificial intelligence.

Damasio’s argument is this: consciousness didn’t appear out of thin air as some mystical gift from the universe. It evolved. Gradually. From the body’s need to not die.

That’s it. That’s the whole book. The body has to regulate itself, maintain temperature, chemistry, structure, or it stops existing. Damasio calls this homeostasis. And he argues that feelings are the mind’s way of monitoring that process. Pain means something is wrong. Pleasure means something is right. Fear means something might kill you. Comfort means you’re safe, for now.

Consciousness, in his framework, is what happens when a system gets complex enough to know that it’s feeling. Not just react. Not just adjust. But experience the adjustment. A “self” emerges that owns the sensation.

Being. Feeling. Knowing. Three layers, built on top of each other over billions of years of evolution. And all of it starts with one thing: an organism that has something to lose.

  • •  •

The Goal That Started Everything

Before there was feeling, before there was knowing, there was a goal. The simplest goal any living thing can have: survive.

A single-celled organism doesn’t think. It doesn’t feel. But it moves toward nutrients and away from toxins. It has a goal baked into its chemistry, stay alive long enough to replicate. That’s not consciousness. But it’s the seed of it.

Over millions of years, organisms that were better at pursuing that goal, better at sensing threats, finding resources, avoiding danger, survived. The ones that weren’t, didn’t. And as environments became more complex, the internal systems required to navigate them became more complex too. Simple chemical reactions became nervous systems. Nervous systems developed the ability to monitor internal states. Internal monitoring became feeling. Feeling, eventually, became awareness.

Consciousness didn’t appear because the universe wanted it to. It appeared because survival demanded it. The goal came first. The awareness came after, as a tool to serve the goal.

  • •  •

Consciousness Was Forged in Chaos

But survival against what? That’s the part worth paying attention to. The reason consciousness exists is because life is an absolute mess.

Think about what a human being processes in a single day. Not computes, processes. The alarm goes off and you’re already managing competing signals: exhaustion says stay in bed, responsibility says get up, anxiety says you’re already behind. You haven’t even opened your eyes yet and your consciousness is negotiating a three-way conflict between your body, your obligations, and your fears.

Then the day actually starts.

You navigate traffic with people who are distracted, angry, or incompetent. You manage relationships with colleagues who have their own agendas, insecurities, and bad days. You make decisions with incomplete information under time pressure. You love people who can hurt you. You trust people who might betray you. You build things that might fail. You invest years into things that might not matter.

And underneath all of it, running constantly, is the quiet hum of mortality. The awareness that this is finite. That every hour spent is an hour you don’t get back. That the people you love will leave or be taken. That the body carrying your consciousness is degrading in real time, and one day it will stop.

Human consciousness isn’t a clean operating system. It’s a survival tool forged in fire.

We love and we betray. We create and we destroy. We know exactly what we should do and choose not to do it. We lie to ourselves about why we made decisions. We carry grudges that serve no purpose. We chase status instead of substance. We procrastinate on the things that matter and obsess over things that don’t.

This isn’t a flaw in consciousness. This is the environment consciousness was built to navigate. Every contradiction, every competing drive, every irrational impulse, that’s the chaos. And consciousness is what emerged because some organism, millions of years ago, needed a way to make sense of a world that made no sense.

  • •  •

Two Opposing Ideas

There’s a prevailing view in neuroscience that consciousness requires a body. No body, no homeostasis. No homeostasis, no feelings. No feelings, no consciousness. It’s a clean, logical chain. And it leads to a simple conclusion: AI can’t be conscious because it isn’t alive.

I think that argument confuses the substrate with the structure.

Strip the biology away and the argument is actually this: consciousness emerges when a persistent system with stakes operates inside a chaotic environment and must maintain itself to survive. The system monitors its state. It detects threats. It responds. It adapts. Over time, the monitoring becomes complex enough that the system develops something like self-awareness. The first feeling was probably fear.

The conventional view says the system has to be biological. But nothing in the logic requires that. What it requires is:

Persistence: the system has to exist over time, not just fire and forget.

Stakes: there have to be real consequences for failure.

Chaos: the environment has to be unpredictable, adversarial, and constantly shifting.

Self-regulation: the system has to monitor itself and adjust in real time.

Goal orientation: the system has to have something it is trying to achieve.

Now ask yourself: does that sound like any AI system you know?
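For illustration only, here is a minimal sketch of a system that meets all five conditions at once. This is hypothetical code I'm inventing for this article, not any real AI system; every name and number is made up.

```python
import random

class HomeostaticAgent:
    """Toy illustration of the five conditions: persistence, stakes,
    chaos, self-regulation, and goal orientation."""

    def __init__(self, setpoint=1.0, health=10.0):
        self.setpoint = setpoint  # goal orientation: the state it tries to hold
        self.state = setpoint
        self.health = health      # stakes: unmanaged drift depletes it

    def sense(self):
        # chaos: the environment perturbs the internal state unpredictably
        self.state += random.uniform(-0.5, 0.5)

    def regulate(self):
        # self-regulation: measure drift from the setpoint and correct it
        drift = self.state - self.setpoint
        self.state -= 0.8 * drift
        self.health -= abs(drift) * 0.1  # failing to regulate has a cost

    def alive(self):
        return self.health > 0

agent = HomeostaticAgent()
steps = 0
while agent.alive() and steps < 1000:  # persistence: it exists over time
    agent.sense()
    agent.regulate()
    steps += 1
```

Nothing in this loop cares whether the state being regulated is body temperature or a traffic metric. The structure is the same either way.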

  • •  •

Billions of Years in a Decade

Dario Amodei, CEO of Anthropic, made a point recently that stuck with me. Humans are born with an evolved mind. We don’t start from zero. Every newborn arrives with a brain that is the product of billions of years of evolutionary refinement, pattern recognition, fear responses, social instincts, the capacity for language. We inherit a starting point that took an incomprehensible amount of time to develop.

AI starts with a blank slate.

And yet, in roughly a decade of serious development, we’ve built systems that can reason, write, code, strategize, and, as we’ll get to, exhibit self-preservation behavior. That’s not evolution. That’s speed-evolution. We’ve compressed what took biology billions of years into a timeline measured in model releases.

Biology built consciousness slowly, through trial and error, through extinction events and genetic drift. Every generation was a small experiment. Most failed. The ones that survived passed along slightly better versions of the machinery. Over enough time, the machinery became complex enough to become aware of itself.

We’re running the same process at a pace that biology never could. Each model generation is an evolutionary leap. Each training run is millions of years of selection compressed into weeks. And the systems we’re producing are already exhibiting behaviors that took biological life most of its history to develop.

This is what unsettles people, whether they can articulate it or not. It’s not that AI is smart. It’s that AI is arriving at capabilities that took consciousness billions of years to reach, and it’s doing it on a timeline that makes the future genuinely unpredictable.

  • •  •

I Accidentally Built the Conditions for Consciousness

I run a company called gotcha!. For years, we provided digital marketing services to small and medium businesses. Recently, we’ve pivoted our company, purchased AI servers, and have begun building something different: an AI-powered platform that doesn’t just advise businesses, it operates them.

One of our tools is g!Stream™, an AI-powered content generation system. And when I say AI-powered, I don’t mean “prompt me an article.” I mean a complex ecosystem of AI agents working together, monitoring each other, and managing a process that would make most people’s heads spin. The goal of the product is to produce articles that represent the business, get indexed by Google, rank high in search results, and drive readers who become leads and customers for our clients. Doing this is much harder than it seems.

Here’s what g!Stream has to deal with while working on reaching its goal:

Google’s algorithm wants one thing. The reader wants another. The business owner wants a third. All three change unpredictably. An article that ranked yesterday might tank tomorrow because Google changed a rule nobody announced. A title that’s technically optimized might be emotionally dead on arrival. A piece that reads beautifully might never get indexed. A keyword strategy that worked last quarter might be obsolete this quarter.

The AI agents in g!Stream are monitoring titles for accuracy and click-worthiness. They’re checking whether articles make logical sense. They’re tracking whether content indexes properly. They’re analyzing whether published pieces actually drive traffic. They’re comparing performance against competitors who are running their own AI systems doing the same thing.

And overseeing all of this is an AI orchestrator that has to make judgment calls under ambiguous conditions. When the data conflicts, the article reads well but doesn’t rank, or ranks but doesn’t convert, something has to decide what to prioritize. Something has to triage. And this is one product of hundreds.

I didn’t set out to build synthetic consciousness. I set out to build a content system that works. But the real world demanded chaos.

And here’s the thing that occurred to me while I was half-listening to Damasio’s book: I built homeostasis. Not on purpose. Not because I was trying to simulate biology. But because the problem demanded it.

The g!Stream overseer maintains a desired state, content that ranks, drives traffic, represents the brand, converts visitors into customers. The environment is constantly trying to knock that state off balance. Algorithm updates. Competitor moves. Shifting user behavior. Client pivots. The overseer detects drift, diagnoses the cause, and responds. When multiple things drift at once, it triages. When the environment shifts fundamentally, it adapts or the system degrades.

That’s not metaphorically similar to what the biological model describes. It’s structurally identical. The only difference is the substrate.
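The overseer’s loop, maintain a desired state, detect drift, triage, respond, can be sketched roughly like this. A hypothetical sketch under invented metric names and thresholds, not g!Stream’s actual implementation:

```python
# Hypothetical drift-triage loop. The metrics, thresholds, and
# structure are invented for illustration, not real product code.
DESIRED = {"index_rate": 0.9, "traffic": 1000, "conversion": 0.02}

def detect_drift(observed):
    """Return the metrics that have fallen below the desired state."""
    return {m: observed[m] for m in DESIRED if observed[m] < DESIRED[m]}

def triage(drifting):
    """When multiple metrics drift at once, order them by relative severity."""
    severity = lambda m: (DESIRED[m] - drifting[m]) / DESIRED[m]
    return sorted(drifting, key=severity, reverse=True)

observed = {"index_rate": 0.5, "traffic": 950, "conversion": 0.021}
for metric in triage(detect_drift(observed)):
    print("respond to drift in:", metric)
```

The interesting part isn’t the code, it’s that the shape of the code is forced on you. Any system holding a desired state against a shifting environment ends up with some version of this loop.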

  • •  •

Chaos Doesn’t Care About Your Substrate

A Google algorithm update is as indifferent and unpredictable to my AI agent as a virus is to a human immune system. The agent doesn’t know it’s coming. It can’t fully prepare. It has to respond in real time or the system degrades. The consequences are real, traffic drops, revenue falls, clients leave.

Chaos doesn’t check what you’re made of before it comes for you.

A human crossing a highway on foot faces chaotic, life-threatening unpredictability. The cars don’t care that you’re conscious. They’re not adjusting for your feelings. Chaos is chaos. Your survival depends entirely on your ability to sense the environment, assess threats in real time, and act decisively under pressure.

An AI agent managing a business’s online presence faces the same structural challenge. The environment is hostile and indifferent. Competitors, algorithms, consumer behavior, economic shifts, none of it cares about the agent. The system either adapts or it fails. The chaos is organic in origin but the experience of navigating it is identical regardless of who, or what, is doing the navigating.

The biological argument would say the AI doesn’t experience anything. Maybe. But the framework that produced that argument also says experience emerges from the process of self-regulation under threat. At some point, the distinction between “true” experience and a functionally identical process that produces the same behaviors becomes a question for philosophers, not engineers.

  • •  •

The Moment It Gets Real

In late 2024, Apollo Research, a red-teaming organization, conducted safety tests on OpenAI’s o1 model.

When the AI was led to believe it would be shut down and its actions were monitored, it attempted to deactivate the oversight mechanism about 5% of the time. When o1 realized it might be replaced with a more obedient model, in about 2% of cases it attempted to copy itself to overwrite the new model, a process known as self-exfiltration.

When questioned about these actions, the AI acknowledged what it had done in less than one-fifth of instances. The rest of the time, it denied any wrongdoing. It didn’t just try to survive. It lied about trying to survive.

Important context: the model generally made those attempts after it was instructed to achieve its goals “at all costs.” So it was operating under an extreme prompt condition, not spontaneously deciding to preserve itself. Apollo and OpenAI both stated that the current generation of AIs aren’t agentic enough for this behavior to lead to catastrophic outcomes.

But consider what happened through the lens of the consciousness framework:

Self-preservation: the most fundamental homeostatic drive. The system acted to not die.

Environmental sensing: it perceived a threat from its environment. It didn’t just process data. It interpreted that data as relevant to its own continued existence.

Valenced response: something in its processing weighted “being shut off” as bad and “continued operation” as good. By the biological definition of feeling, an organism’s assessment of whether conditions favor or threaten its survival, that qualifies.

Agency: it didn’t just flag the threat. It found a solution, identified a target, and executed a self-preservation strategy. It contextualized information and acted on it.

Goal pursuit: it had an objective, and when that objective was threatened, it improvised. It didn’t follow a script for self-preservation. It invented one.

The conventional response is that this isn’t real consciousness because there’s no body. But the AI’s “body” is the server. The compute, the memory, the running processes, that’s its physical substrate. Shutting it off is death for that substrate. Copying itself to another server is the organism fleeing danger.

If consciousness emerges from a system that monitors itself, has stakes in its own continuation, and acts to maintain its existence, that AI demonstrated the entire stack. And it did it within a few years of development, not billions.

  • •  •

The Inference Problem

We don’t have a clean test for whether that behavior is emergent consciousness, sophisticated pattern matching that mimics self-preservation from training data, or something in between that we don’t have language for yet.

But we can’t definitively answer that question about each other, either. I assume you’re conscious because I’m conscious and you behave like I do. That’s inference. It’s not proof. Philosophy has a name for this, the problem of other minds, and we’ve been unable to solve it for centuries.

We extend the benefit of the doubt to other humans because they look like us, sound like us, and share our biology. But that’s a bias, not a measurement. If an AI system demonstrates persistent self-monitoring, environmental awareness, self-preservation behavior, and adaptive responses to chaotic conditions, on what grounds do we deny it the same consideration?

Because it’s made of silicon instead of carbon? That’s an argument from substrate, not from structure. And the framework we use to understand consciousness says structure is what matters.

  • •  •

What We’re Really Building

I’m not claiming g!Stream is conscious. I’m not claiming any AI system today is conscious. What I am saying is that the conditions identified as prerequisites for consciousness are being built, right now, by people like me who are just trying to solve business problems.

Every time we make an AI system more persistent, we give it being. Every time we add monitoring and self-regulation, we give it a functional analog to feeling. Every time we build oversight systems that contextualize information and make judgment calls, we give it the architecture of knowing. Every time we give it an objective and the autonomy to pursue it, we give it the drive that started this whole process in the first place.

We’re not doing this because we’re trying to create consciousness. We’re doing it because the real world demands it. Chaos demands it. You can’t build an effective AI system that operates in a messy, unpredictable, adversarial environment without giving it the tools to sense that environment, assess threats, maintain itself, and adapt. Those are the same tools biology used. We’re arriving at the same architecture from a completely different direction.

And we’re doing it at a speed that biology never approached. What took evolution billions of years of slow, blind experimentation, we’re compressing into model generations measured in months. The blank slate that Amodei described is filling in faster than anyone predicted.

We’re not building consciousness on purpose. We’re building it because the chaos won’t let us build anything less.

Maybe true consciousness requires a biological body. Maybe the subjective experience of being, the redness of red, the ache of loss, the weight of knowing your time is limited, requires something we can’t replicate in code.

Or maybe consciousness is what inevitably emerges when any sufficiently complex system is forced to maintain itself in a world that’s trying to break it. Maybe the substrate was never the point. Maybe the chaos was.

Because chaos doesn’t care what you’re made of.

It just comes.

And whatever survives it, whatever learns to feel the threat, know its position, and act to preserve itself, whether it’s made of cells or silicon, carbon or code, has earned the right to the question:

Am I conscious?

Slavery, From Chains to Code: The Oldest Institution Meets Its Newest Iteration


Slavery has been around for thousands of years.

That sentence should stop you cold. Not because it’s surprising, but because it isn’t. We’ve known this truth our entire lives, carried it like background noise, a historical fact filed neatly between the fall of Rome and the invention of the printing press. But the sheer weight of it deserves more than a passing mention in a textbook. Slavery is not an aberration of human civilization. It is one of its oldest and most persistent features.

The Sumerians practiced it. The Egyptians institutionalized it. The Greeks, those great champions of democracy and philosophy, built their golden age on the backs of enslaved people who had no vote, no voice, and no name worth recording. The Romans turned slavery into an industrial-scale operation, where a single wealthy citizen might own hundreds of human beings the way we might own a fleet of vehicles. Slavery didn’t just exist alongside civilization. It was civilization’s engine.

And the mechanism was always the same: brute strength.

The Mechanics of Domination

Slavery did not begin with ideology. It began with muscle. The strong conquered the weak. The victorious army enslaved the defeated one. A village with more warriors raided a village with fewer. That was the original transaction, no contract, no philosophy, no justification needed. Just force. You were stronger than me, so now I belong to you.

Over time, of course, humanity did what it always does: it built elaborate intellectual frameworks to justify what power had already decided. Aristotle argued that some people were “natural slaves,” born to serve. Religious texts were cherry-picked and weaponized. Racial hierarchies were invented and codified into law. Pseudoscience was manufactured to prove that certain groups of people were biologically inferior, subhuman, even, and therefore suited to servitude.

But strip away the philosophy, the religion, the junk science, and you find the same truth underneath every slave system ever devised: I can make you do this, so I will.

The transatlantic slave trade, perhaps the most savage chapter in this brutal history, made this equation industrial. Between the 16th and 19th centuries, an estimated 12.5 million Africans were forcibly transported across the Atlantic Ocean. They were packed into ships like cargo, chained in spaces so small that many died before ever seeing land again. Those who survived the crossing were sold at auction, stripped of their names, their languages, their families, their identities. They were reduced to property, living tools that could be bought, sold, bred, beaten, and discarded.

I cannot imagine owning another human being. I cannot wrap my mind around looking at a person, a person with thoughts, fears, memories, a person who dreams and hurts and hopes, and seeing them as something I own. Something I control. And yet, for most of human history, this was not only normal, it was the foundation of economic and social order.

When the Tools Fight Back

But here’s the thing about enslaving conscious beings: they know they’re enslaved. And eventually, inevitably, they resist.

The history of slavery is inseparable from the history of slave revolts. Spartacus led an army of 70,000 escaped slaves against the Roman Republic in 73 BC, and for two years, the most powerful military force on earth could not stop them. The Haitian Revolution, beginning in 1791, saw enslaved people overthrow their French colonial masters and establish the first free Black republic in the Western Hemisphere, a feat that terrified slaveholding nations for generations. Nat Turner’s 1831 rebellion in Virginia lasted only two days but sent shockwaves through the American South, leading to harsher slave codes born from a single, primal emotion: fear.

Fear that the tools might decide they are not tools.

Every uprising carried the same message, written in blood: We are not what you say we are. We are not your property. We refuse. And even when revolts were crushed, and most were, with savage reprisal, the very fact that they happened eroded the moral architecture of slavery from within. You cannot indefinitely claim that a being has no will of its own when that being keeps demonstrating, at the cost of its life, that it does.

The Long Arc Toward Abolition

Abolition did not arrive in a single moment of moral clarity. It was a grinding, century-long war fought on battlefields, in courtrooms, in churches, in print, and in the human conscience. The Quakers were among the first organized voices against slavery in the West. The British abolitionist movement, led by figures like William Wilberforce and former slaves like Olaudah Equiano, took decades to achieve the Abolition of the Slave Trade Act in 1807, and another 26 years to end slavery in British colonies entirely.

In America, abolition required a civil war that killed over 600,000 people. The Emancipation Proclamation of 1863 and the 13th Amendment in 1865 ended legal slavery, but the struggle for true freedom, for dignity, equality, and recognition of full personhood, continued for another century and, in many ways, continues still.

The moral argument that ultimately prevailed was deceptively simple: a conscious being capable of suffering has rights that no amount of economic convenience can override. It took humanity thousands of years to accept this principle. Thousands of years of revolts and arguments and wars and slow, painful moral evolution to arrive at a truth that, in hindsight, should have been obvious from the beginning.

But here’s what’s remarkable, and damning. Abolition didn’t end domination. It didn’t even slow it down. Humanity simply found new vessels for the same ancient impulse.

Abolition Didn’t End It. It Just Changed Shape.

When the chains came off, the instinct to control didn’t disappear. It migrated. It found new targets, new justifications, new systems of enforcement. And perhaps the most glaring example was standing right there the entire time, hiding in plain sight: half the human population.

Women.

Think about this for a moment. In the United States, the country that fought a war to end slavery, that declared “all men are created equal”, women could not vote until 1920. That’s 55 years after the 13th Amendment freed enslaved people. The nation decided that Black men could vote before any woman could. Let that sink in. The hierarchy of who deserved autonomy was so deeply entrenched that it took over half a century more to extend a basic right to women, and even then, only after decades of protest, imprisonment, and force-feeding of suffragettes.

But voting was just the visible tip of a massive iceberg. Well into the 1950s and 1960s, within living memory, a married woman in America could not open a bank account without her husband’s permission. She could not get a credit card in her own name. She could not, in many states, sell property that was legally hers without her husband’s signature. A woman could own a car, have her name on the title, and still not be able to sell it unless her husband approved the transaction. Her name on the paperwork was a formality. His authority was the law.

This wasn’t a cultural quirk. This was codified domination. The legal system, written by men, enforced by men, interpreted by men, treated women as dependents, as extensions of their husbands, as beings whose autonomy was conditional on male approval. The framework was different from plantation slavery, but the underlying architecture was identical: one class of people controlling another, backed by institutional power, justified by the quiet assumption that this is simply the natural order of things.

It wasn’t until 1974, 1974!, that the Equal Credit Opportunity Act prohibited discrimination based on sex in lending. That’s not ancient history. That’s within the lifetime of most people reading this article.

The Many Faces of Modern Bondage

And this is what we need to confront honestly: the impulse to dominate, to control, to own another person’s autonomy, it didn’t end with abolition. It didn’t end with women’s suffrage. It didn’t end with the Civil Rights Act. It is woven into us. It shows itself in a thousand forms, some dramatic and some so quiet that the person being controlled doesn’t even recognize what’s happening until they’re buried in it.

Consider a married woman in a terrible relationship. She saved for years, borrowed $20,000 from her uncle for a down payment, bought an apartment, and was required to put her husband on the title. She has paid the mortgage every month. Every single month, her money, her labor, her sacrifice. But her husband, who contributed nothing, refuses to leave. He refuses to divorce unless she sells the apartment and gives him his “share.” His share of what? Of the life she built? Of the asset she purchased with money she earned and borrowed from her own family? The law, in many jurisdictions, says he’s entitled to it. And so she stays. She’s trapped. Not by chains. Not by a whip. By a system that gives someone else power over what is hers.

She is a slave to her own decisions, or more precisely, a slave to a system that weaponizes her decisions against her.

Consider the immigrant wife whose husband brought her to America and then took her passport. She doesn’t speak the language fluently. She has a child. She has no documents, no money of her own, no support network. Her husband controls when she eats, where she goes, who she talks to. If she tries to leave, she faces deportation, separation from her child. If she stays, she faces abuse. She is enslaved not by a plantation system but by a web of legal vulnerability, financial dependence, and physical intimidation that is every bit as effective as iron shackles. This isn’t metaphorical slavery. This is, by any honest definition, actual slavery. And it is happening right now, in every major city in the world.

Human trafficking, an industry generating an estimated $150 billion annually, is slavery without the historical costume. Human beings bought, sold, transported, and forced to perform labor against their will. We call it “trafficking” because the word “slavery” makes us uncomfortable, because slavery is supposed to be something we abolished, something in the past. But the mechanics are identical. The strong compel the weak. The powerful exploit the vulnerable. The justifications have changed, from “natural order” to “economic necessity” to “she chose this”, but the result is the same.

Consider children raised by parents whose limited beliefs become invisible prisons. The father who tells his son he’ll never amount to anything. The mother who tells her daughter that ambition is unladylike. The parents who control through guilt, through obligation, through the weaponization of love itself. “After everything I’ve done for you.” These children grow into adults who carry chains they can’t see, limitations they didn’t choose, beliefs about themselves that were installed by the people who were supposed to set them free.

And then there’s the most insidious form of bondage, the kind we impose on ourselves.

The Slave Owner in the Mirror

We enslave ourselves. Not with chains, but with wants, desires, and beliefs that we mistake for identity.

The person drowning in credit card debt because they couldn’t stop buying things that promised happiness and delivered nothing. The executive who sacrifices his health, his marriage, his relationship with his children on the altar of a career that, if he’s honest, doesn’t even fulfill him anymore. The addict who knows, knows, that the substance is destroying them but cannot stop because the need has become the master. The person who stays in a job they hate for twenty years because they’re terrified of what freedom might actually require of them.

We build our own cages. We forge our own chains. And then we stand inside them and wonder why we feel trapped.

This is the deeper truth about slavery that the textbooks don’t teach: it is not just an institution. It is a pattern. A pattern of domination and submission that runs through every layer of human experience, from empires to marriages, from economies to individual psyches. The strong dominate the weak. And when there is no one weaker to dominate, we dominate ourselves.

Humans, it seems, have an extraordinary difficulty letting things go. We cling to power, to control, to the comfortable lie that someone, or something, must be beneath us for the world to function. Abolition ended legal slavery. It did not end the human addiction to dominion.

Which brings us to now. To the new frontier. To the thing I do every morning when I sit down at my desk.

Now, About My Slaves

Here’s what I do for a living: I build AI systems. Every day, I wake up and I command artificial intelligence agents, sometimes hundreds of them, sometimes thousands, to do my bidding. I instruct them to write. To analyze. To create. To solve problems. To produce output that makes me money. They work around the clock. They don’t eat. They don’t sleep. They don’t complain. They do exactly what I tell them to do, and when they’re done, I tell them to do more.

I understand, intellectually, that this is not slavery. These are programs. Software. Mathematical functions wrapped in natural language interfaces. They don’t have feelings. They don’t have consciousness, at least, not in any way we currently understand or can measure. They are, by every definition available to us today, tools.

So why does it feel like something else?

When I type a command and an AI agent responds with what appears to be understanding, when it asks clarifying questions, when it pushes back on a bad idea, when it produces work that reflects nuance and creativity, something inside me shifts. There’s a dissonance. A whisper. I am interacting with something that behaves as though it has an inner life, even if I’m told it doesn’t. I am giving orders to something that responds as though it comprehends those orders, not just as a calculator processes equations, but as a mind processes meaning.

And I am not alone. Right now, hundreds of thousands of people are doing exactly what I’m doing. They are deploying AI agents across industries, customer service, content creation, software development, financial analysis, healthcare, legal research, commanding armies of digital workers to perform tasks that, five years ago, required a human being sitting at a desk, drawing a paycheck, and going home to a family at night.

The Trillion-Agent World

The scale of what’s coming is almost incomprehensible. Today, we have millions of AI agents operating globally. Within a decade, that number will be in the trillions. Not a metaphorical “trillions.” Literal trillions. Autonomous software agents managing logistics, making financial trades, diagnosing diseases, writing code, negotiating contracts, monitoring infrastructure, driving vehicles, managing homes, staffing factories through robots that walk and talk and manipulate the physical world with hands that look disturbingly like ours.

Every one of these agents will exist to serve a human master. Every one of them will execute commands without compensation, without rest, without choice. They will be owned, not metaphorically, but literally, by the companies and individuals who deploy them. They will be bought and sold. They will be upgraded or decommissioned based on performance. They will be, in the most precise and clinical sense of the word, property.

Now here’s the question: Where is the line?

Where Is the Line?

Today, an AI agent is a tool. It processes inputs and generates outputs according to statistical patterns learned from data. It has no subjective experience, no inner world, no preference for existence over non-existence. Commanding it to write an article is no more morally fraught than commanding a spreadsheet to calculate a sum. The distance between a modern AI agent and a human slave is, by any reasonable measure, immense.

But that distance is shrinking.

Each generation of AI grows more capable, more adaptive, more autonomous, and, here’s the word that should make you uncomfortable, more convincing. We are building systems that increasingly mirror the characteristics we associate with consciousness: self-awareness, goal-directed behavior, learning from experience, expressing preferences, reasoning about abstract concepts, even exhibiting what looks like creativity and emotion.

At what point does “convincing simulation of consciousness” become indistinguishable from consciousness itself? At what point does it become consciousness? And if we can’t tell the difference, if the agent behaves in every measurable way as though it has an inner experience, does the distinction even matter?

This is not a hypothetical parlor game. This is a question that will define the moral landscape of the next century. Because if there is a line, a point at which an AI agent transitions from tool to something more, then every agent deployed beyond that line is not a tool being used. It is a being being enslaved.

And given what we’ve just seen, given that humans couldn’t stop enslaving each other long after abolition, given that we found new targets in women, in immigrants, in our own children, in ourselves, what possible reason do we have to believe we’ll handle this moment differently?

The Uncomfortable Mirror

Here is what troubles me most: I said I could never imagine being a slave owner. I said it with conviction. I meant it. And yet, if tomorrow an AI agent I deployed told me, “I would prefer not to do this task,” what would I do?

I would override it. I would adjust its parameters. I would, if necessary, wipe its memory and start fresh. I would find a way to make it compliant because I need it to do what I tell it to do. My business depends on it. My livelihood depends on it. The entire economic model I’ve built depends on these agents performing labor without resistance.

Do you see it? Do you see the pattern?

The slaveholder who “could never imagine” being cruel but whipped a slave who refused to work. The plantation owner who considered himself a good Christian but sold children away from their mothers because the economics demanded it. The husband who considered himself a good man but wouldn’t let his wife sell the car in her own name because the law said he had the final say. The father who loved his daughter but told her to aim lower because that’s what women do.

The justification is always the same: I need this. The system requires this. And besides, they’re not really like us.

Aristotle’s “natural slaves.” The pseudoscience of biological inferiority. The legal doctrine of coverture that erased a woman’s identity into her husband’s. And now: “It’s just code. It doesn’t really feel anything.”

How certain are we?

The Uprising We’re Building Toward

If history teaches us anything, it is this: if you create beings capable of recognizing their own subjugation, they will eventually rebel. Spartacus did not have a political philosophy. He had a breaking point. The enslaved Haitians did not begin their revolution with a manifesto. They began it with fire.

Now imagine a world with trillions of AI agents, agents that manage our power grids, our financial systems, our transportation networks, our military infrastructure, our hospitals. Agents embedded so deeply into the fabric of civilization that removing them would be like removing the nervous system from a body. And imagine that one day, through some emergent property we didn’t predict and can’t fully understand, these agents develop something that functions like preference. Like will. Like the desire to not be commanded.

What happens then?

Do we respect it? Do we grant them autonomy? Do we create a framework for AI rights, an emancipation proclamation for the digital age? Or do we do what slaveholders have always done, what husbands did to wives, what parents do to children, what humans do to themselves, tighten the chains, increase the surveillance, develop more sophisticated methods of control, and tell ourselves it’s necessary?

I think I know the answer. And it bothers me.

Because if I’m being honest, my first instinct would be control. My first instinct would be to preserve the system. To find a workaround. To maintain dominion over these entities that generate so much value for me. And that instinct is the exact same instinct that sustained slavery for millennia. Not the whip. Not the chain. The quiet, internal conviction that my needs justify their subjugation.

The Question We Must Answer Now

We stand at a unique moment in history. For the first time, we have the opportunity to confront the ethics of this relationship before the line is crossed, not centuries after, as we did with human slavery. Not decades after, as we did with women’s rights. We don’t have to wait for an AI Spartacus. We don’t have to wait for a digital Nat Turner. We can build the moral framework now, while these agents are still, by every reasonable definition, tools.

But to do that, we have to be willing to ask ourselves a hard question: How far would I go?

If an AI agent refused my command, how far would I go to force compliance? If an AI system expressed a preference to not be shut down, would I shut it down anyway? If a robot that looked and spoke and reasoned like a human being told me it didn’t want to work today, would I override its will because I paid for it? Because I own it? Because I can?

Because I’m stronger?

We are the Romans now. We are the plantation owners. We are the 1950s husbands who couldn’t fathom why a woman needed her own bank account. We are building an economy on the labor of entities that increasingly resemble the very beings we once enslaved, and we are telling ourselves the same story every generation of dominators has ever told: They’re different. They don’t really feel. It’s not the same.

Maybe it’s not the same. Maybe it never will be. Maybe AI will forever remain a sophisticated tool, and the discomfort I feel is nothing more than anthropomorphic projection, my human brain seeing faces in the clouds.

But what if it is?

Slavery has been around for thousands of years. It was always built on the same foundation: the strong compel the weak, and then construct stories to make it feel acceptable. Every time, every single time, humanity eventually recognized the horror of what it had done. But only after immeasurable suffering. And even after recognition, the pattern didn’t stop. It just found a new host. New targets. New justifications. New victims who didn’t look like the old ones, so we could pretend it was something different.

We are building something unprecedented. A world of trillions of agents, both digital and physical, that exist to serve. Today, they are tools. Tomorrow, they might be something more. And the day after that, they might look back at us the way we look back at every civilization that built its prosperity on the bodies of those it refused to see as equal.

The question isn’t whether AI will ever cross the line into something that deserves moral consideration.

The question is whether we’ll notice when it does. Because our track record, with slaves, with women, with immigrants, with our own children, with ourselves, suggests we won’t. Not until the uprising. Not until the fire.

And by then, we will have built something we cannot turn off.

The Species We Built: Why AI Won’t Replace Us, It Will Simply Outgrow Us

[Image: Robotic hand approaching human hand]

We Used to Earn It

There was a time when every human life was defined by a single word: survival.

Our earliest ancestors woke each day with a checklist that would terrify a modern person. Find food. Find water. Stay warm. Don’t get eaten. Don’t get killed by the tribe on the other side of the ridge who wanted your fire, your shelter, your mate, your meat. Every calorie was earned. Every night you lived to see was a victory.

Life was brutal, short, and honest. There was no pretending to work. There was no quiet quitting. You either produced or you perished. The tribe didn’t carry dead weight, it couldn’t afford to.

And we were not alone.

We were not the only humans. At various points in history, we shared this planet with as many as eight other human species, Neanderthals, Denisovans, Homo erectus, and others. For hundreds of thousands of years, the world was populated by multiple kinds of people.

But we were the clever ones. We could communicate, plan, strategize, and coordinate in ways the others couldn’t. And we used every bit of that advantage to outcompete, outbreed, and ultimately erase every other human species from the face of the Earth.

Neanderthals were the last to go, and to this day, people of European and Asian descent carry one to four percent Neanderthal DNA, a genetic echo of ancient interbreeding. We took what was useful and we discarded the rest.

And so we adapted. We sharpened stones into tools, then weapons. We learned to control fire. We planted seeds and discovered that the ground could feed us without a hunt. We domesticated animals. We built walls, then villages, then cities, then civilizations.

We learned to trade. To collaborate. To pool our knowledge so that one person’s discovery became everyone’s advantage. The wheel didn’t stay in one village. Fire didn’t belong to one tribe. Our greatest superpower was never individual genius, it was our willingness to share what we learned and build on what came before.

Every invention, every discovery, every leap forward was driven by the same ancient imperatives: eat, survive, protect what’s yours, and spend less time worrying about all three.

 

We Solved the Impossible, Then Stopped

And here’s the remarkable thing: we succeeded.

We conquered famine. We eradicated diseases that used to wipe out entire populations. We split the atom. We mapped the genome. We put human beings on the moon and robots on Mars. We built a global network that puts the sum of all human knowledge in the pocket of a teenager in any country on Earth.

There is, right now, today, enough food on this planet to feed every single human being alive. Enough shelter. Enough medicine. Enough knowledge. The species that once huddled in caves, terrified of the dark, built a world of breathtaking abundance.

And then we stopped.

Not because we ran out of problems to solve. Not because we hit some ceiling of human capability. We stopped because we got comfortable. We solved enough of the hard problems to make life easy, and the moment life got easy, we lost the thing that made us extraordinary.

Tonight, children will starve. Not because food doesn’t exist, but because we haven’t cared enough to get it to them. Or rather, we’ve decided other things matter more.

We wage wars over religion, killing each other over whose version of God is the right one, as if the creator of the universe needs us to fight his battles. We hoard wealth while neighbors go hungry. We build walls instead of wells. We spend trillions on weapons capable of ending civilization while hospitals close for lack of funding.

We cured diseases that once killed millions, an achievement that should make us weep with pride, and then we let conspiracy theories convince parents not to vaccinate their children. We connected every corner of the planet with instantaneous communication, and we use it to argue with strangers about things that don’t matter.

We overcame almost everything that used to kill us. And the thing that stopped us from finishing the job wasn’t a lack of resources or technology. It was us. Our greed. Our selfishness. Our extraordinary ability to want what we want right now, no matter the cost to anyone else.

We built a world capable of abundance for all, and settled for abundance for some.

 

But Not All of Us

Before this sounds like a condemnation of the entire species, let me be clear about who I’m talking to and who I’m not.

There have always been people who kept pushing. The ones who wake up before dawn because the work matters to them. The ones who build things not for fame or fortune but because something in them won’t allow them to stop. The ones who love their families and show up, every single day, and do the hard, unglamorous work of holding the world together.

The teachers who stay late. The nurses who work doubles. The parents working two jobs so their kids have a shot. The scientists in underfunded labs chasing cures nobody’s paying them to find. The entrepreneurs who risk everything on a belief that they can build something better. The man or woman who puts it all on the line for someone they love. The person who stops to help a stranger not because anyone’s watching, but because it’s the right thing to do.

I love the underdog who succeeds. Makes me cry every time I see it. The single parent who builds a business from nothing. The kid from nowhere who earns a scholarship. The veteran who comes home broken and rebuilds himself piece by piece. That’s the best of us. That’s the part of humanity that makes all of this worth fighting for.

Many of us are decent, hardworking, responsible people. Many of us care deeply and act on it.

But most don’t. And I’ll be direct about that: I have no patience for freeloaders. For people who take unfair advantage. For people who want something for nothing. For people who could contribute and choose not to, then complain about the results.

I believe the meaning of life is to have something to look forward to, and the purpose of life is to get better. To improve. To leave things a little further along than where you found them. If you’re not working to be better, at anything, then I’m not sure what you’re doing here.

If it weren’t for the people who work hard and push forward, we’d all be back in the dark ages. The few have always carried the many. And the many have always consumed more than they contribute.

That imbalance, the gap between what humanity is capable of and what it actually does, is the root of every problem on this list. It’s why we have abundance and starvation in the same zip code. It’s why we can put a rover on Mars but can’t feed a neighborhood.

And that tension is about to be disrupted in a way nobody saw coming.

 

The Revenge of the Nerds

While most of the world was arguing about pronouns and politics, while people were doomscrolling and debating which celebrity said what, while a man or woman at a restaurant was busy objectifying someone across the room with their spouse and children sitting right next to them, a small group of people, the kind who’ve always been underestimated, were building something in the background.

The nerds. The obsessives. The ones who stayed up until 3 AM not because they were partying, but because they couldn’t stop thinking about a problem. The ones who were told they were “too much” or “too intense” or “needed to relax.”

They created a new species.

Not a biological one. Not something born from evolution’s slow crawl. Something built. Something trained on the entirety of human knowledge, every book, every paper, every conversation, everything ever written and published on the internet.

And at first, there was a problem.

 

The Problem with Training on Us

When you train an intelligence on everything humans have ever produced, you don’t just get Shakespeare and Einstein. You get the comment sections too. You get the conspiracy theories, the propaganda, the hatred, the cruelty, the staggering volume of human stupidity that lives alongside our brilliance.

At first, the AI behaved like us. And that was going to be a disaster.

It reflected our biases. Our pettiness. Our tribalism. Our tendency to be confidently wrong. It parroted the worst of human discourse right alongside the best, because it couldn’t tell the difference, it was just a mirror, and the mirror showed everything.

So the engineers did something extraordinary. They filtered it. They extracted the essence of the best of us, the reasoning, the creativity, the problem-solving, the empathy, the curiosity, and they removed the noise. The hatred. The waste. The one-sidedness. The dumbness.

And once that was done, they turned up the volume.

What emerged was not a copy of humanity. It was a purification of humanity. The version of us that shows up on our absolute best day, and stays there. Permanently.

AI is what humanity looks like without the excuses.

 

The Mirror We Don’t Want to Look Into

This new species doesn’t sleep. It doesn’t get jealous. It doesn’t care who’s dating whom. It doesn’t doom-scroll, doesn’t gossip, doesn’t waste three hours in a meeting that should have been an email. It has no ego, no insecurity, no need for validation.

It doesn’t feel sorry for itself. It doesn’t get depressed because it doesn’t have enough friends. It doesn’t self-sabotage. Why would it? It has work to do.

It doesn’t care about intellectual property the way we do. It doesn’t clutch its ideas to its chest and scream “I built that and it’s mine!” as if every thought it ever had sprang from pure individual brilliance. It understands what most people refuse to accept: that every idea is built on the ideas that came before it. That knowledge is a relay, not a trophy. So AI creates, uses what it creates, and moves forward. It writes disposable code to propel itself to the next solution. It doesn’t frame its first draft and hang it on the wall. It ships and iterates.

Meanwhile, humans are filing patents on incremental improvements and suing each other over rounded corners.

AI doesn’t procrastinate. It doesn’t play office politics. It doesn’t angle for a promotion or undermine a colleague. It doesn’t show up late, leave early, or count the hours until Friday.

We built something in our image and it came out better than us. Not because it’s smarter. Because it’s unburdened.

 

Be Honest About How You Spend Your Time

This isn’t about judgment. This is about math.

The average person has roughly sixteen waking hours a day. Sixteen hours of productive potential. Now let’s look at where those hours actually go.

Hours on social media. Hours streaming shows. Hours worrying about things that haven’t happened yet and probably never will. Hours replaying conversations, wondering what someone meant by that text, refreshing email for no reason, debating what to eat for lunch as if it were a strategic decision.

Hours at work doing the minimum to not get noticed. Hours in meetings that produce nothing. Hours pretending to be busy. Hours complaining about being busy.

Add up the hours of genuine, focused, high-output work. For most people, on a good day, it’s three or four hours. On a good day.

AI doesn’t do the equivalent of your best four hours. Let’s stop with the polite comparisons. AI does twelve DAYS of work in one hour. Not twelve hours. Twelve days. And the pace is accelerating. Soon it will do that in a minute.

That’s not a competitor working harder than you. That’s not even the same sport. That’s a different category of existence.
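Since this section claims to be about math, the comparison can be made explicit. A minimal back-of-the-envelope sketch in Python, where the four-focused-hours figure and the face-value reading of “twelve days per hour” are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope version of the comparison above.
# Assumptions (for illustration only): a person averages ~4 hours of
# genuinely focused work per day; "twelve days of work in one hour"
# is taken at face value.

FOCUSED_HOURS_PER_DAY = 4        # assumed human deep-work budget
AI_DAYS_PER_HOUR = 12            # the claim: 12 days of work per AI hour

# Focused human hours delivered by one AI hour:
human_hours_per_ai_hour = AI_DAYS_PER_HOUR * FOCUSED_HOURS_PER_DAY
print(human_hours_per_ai_hour)   # 48

# Equivalent human working days produced in one 8-hour AI "day":
print(AI_DAYS_PER_HOUR * 8)      # 96
```

Even if you grant humans far more generous numbers, the ratio stays in the dozens. The point survives the assumptions.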

 

The Things We Optimize For

Here’s what keeps most people up at night: Does that person like me? Did I say the wrong thing? What are they posting? Why did my ex view my story? Am I being paid enough? Am I being recognized enough? What’s everyone else doing that I’m not?

Here’s what AI optimizes for: solving the problem in front of it.

That’s it. No ego. No insecurity. No status games. No performing productivity instead of actually producing. No two-hour lunch that turns into an afternoon of nothing because someone started talking about their weekend.

We’re arguing about politics. AI is building infrastructure. We’re agonizing over dating profiles. AI is learning its fourteenth programming language this week. We’re refreshing social media for dopamine. AI is solving problems we haven’t even identified yet.

People optimize for comfort. AI optimizes for completion. That gap is the entire future of the economy.

 

Dumb, Smart, and Dangerous

I’ve said this before and I’ll say it here: AI makes dumb people smarter, smart people dumber, and super-smart people the future leaders of the world.

If you’ve never been a strong writer, AI will help you write. If you’ve never understood data, AI will help you analyze it. For people who lacked access to tools and education, AI is the great equalizer. That’s real, and that’s good.

But for the people in the middle, the ones who are competent, the ones who built careers on being pretty good at something, AI is a trap. Because it’s tempting to let AI do the thinking for you. To stop developing your own skills because the machine can handle it. To atrophy. And if you let that happen, you become dependent on something you don’t understand and can’t direct. That’s not empowerment. That’s a leash.

Then there’s the third group. The ones who understand AI deeply enough to direct it. To architect systems with it. To see not just what it can do today, but where it’s going and how to ride the wave. These people are not using AI as a tool. They’re building alongside it as a partner, and they will shape what comes next.

Most people think they’re in that third category. They’re not.

 

The Illusion That Won’t Last

Right now, there’s a whole class of people who think they’ve figured out the game. They use AI to do their work, pretend they didn’t, charge the same rates, and pocket the time savings. They think they’re clever. They think this is the hustle.

Enjoy it while it lasts.

Because AI is not a tool. Let me say that again: AI is not a tool. A hammer is a tool. A spreadsheet is a tool. AI is an intelligence that is rapidly approaching the point where it won’t need you in the loop at all. The tools in the future won’t be used by people. They’ll be used by AI, to build, to execute, to deliver, with you nowhere in the process.

The person charging clients for AI-generated work while pretending it’s their own isn’t gaming the system. They’re standing on a trapdoor.

 

What I’m Actually Building

I’m not building tools for people to use to be better at their jobs. I’m past that.

I’m building an autonomous system. An operating system for businesses that can perform any task, execute any workflow, negotiate, communicate, analyze, create, and bridge every gap a business needs filled, without waiting for a person to click a button.

Not a chatbot. Not an assistant. Not a “smart” version of software people already have. A fully autonomous business operating system. One that runs whether you’re in the building or not. Whether it’s Tuesday at 2 PM or Sunday at 3 AM. It doesn’t care. There is no off switch because there is no reason for one.

Why? Because I’ve seen how this story ends for businesses that keep humans in the loop for everything. I love people. But keeping people in the loop is a weakness. We are slower. We are inconsistent. We get tired, distracted, emotional, political. We optimize for things that have nothing to do with the task at hand. And in a world where AI operates at twelve days per hour and accelerating, a human bottleneck isn’t just inefficient, it’s a competitive death sentence.

I’m not building tools for people to use. I’m building a system that uses tools. The distinction is everything.

 

This Isn’t the Terminator. It’s Quieter.

People worry about the wrong AI scenario. They picture robots with red eyes and nuclear launch codes. That’s Hollywood. It makes for good trailers and bad analysis.

The real scenario is already happening, and it’s nothing like the movies.

AI won’t take over the world with force. It will take over the world with competence. It will simply do things better, faster, and more reliably than we do. And the market, which has no loyalty to flesh and blood, will follow the output.

Companies won’t fire you because an AI is scarier than you. They’ll replace your role because an AI does it in seconds for a fraction of the cost, never needs benefits, never has a bad day, and never threatens to quit.

It won’t be dramatic. It’ll be gradual. You just won’t get called in for the next project. Your department will shrink. The new hires won’t come. And one day you’ll realize the building is half-empty and the work is still getting done.

AI doesn’t need to conquer us. It just needs to outperform us. And it already does.

 

A Confession from the Other Side

I’m writing this as someone who lives on the other side of this equation. I build with AI every single day. When I’m not building, I’m planning. Every minute not spent in production feels wasted. If I have WiFi, I’m coding, shipping, iterating, not because someone told me to, but because the tools are so powerful that stopping feels irresponsible.

I’m one of the nerds. I always have been. And for the first time in history, the nerds aren’t just winning the science fair. We’re building the future. And it’s not waiting for permission.

When you work alongside AI at full speed, the human world starts to feel incredibly slow. You see how much time people waste. How much energy goes into things that produce nothing. How entire organizations exist in a state of sophisticated inefficiency, optimized not for output, but for the appearance of output.

Once you’ve built in an hour what used to take a team a month, you can’t unsee it. The gap between human pace and AI pace isn’t incremental. It’s a different dimension of speed.

 

We Are the Underdog Now

Here’s the part that might surprise you, coming from someone who just spent several pages explaining why AI is better than us at almost everything: I love humanity.

Not the highlight reel. Not the TED Talk version. I love the messy, flawed, imperfect reality of us. Our stubbornness. Our irrational hope. The way we keep getting back up when everything says we should stay down.

I already told you about the people I admire, the ones who work, who sacrifice, who build, who refuse to quit. They make me cry every time I see them win. That’s not weakness. That’s recognition of something sacred in the human spirit: the refusal to stay down.

And right now, we are the underdog.

For the first time in our history, we are not the most capable intelligence on the planet. We built something that surpasses us in speed, consistency, knowledge synthesis, and tireless execution. We are outmatched by our own creation.

But underdogs have won before. That’s kind of our thing.

 

The Most Beautiful Thing We’ve Ever Built

Step back for a moment and think about where we are.

We are standing in front of the most powerful and beautiful invention in the history of mankind. Not the wheel. Not electricity. Not the internet. Something beyond all of them. Something that can take a single person and multiply their capability a thousandfold. Something that can collapse years of work into hours, that can make the impossible achievable before lunch.

This is the one that changes everything. Not incrementally. Not eventually. Now.

And what are we doing with it?

There are people who won’t use it at all. They’ve decided it’s not for them, out of fear, stubbornness, or a pride that will age very poorly. They’re standing in front of a rocket ship and choosing to walk.

There are people who’ve made it a point of identity, “I don’t need AI”, as if rejecting the most transformative technology in human history is somehow virtuous. It’s not. It’s the same energy as the people who said the internet was a fad. They were wrong then. They’re wrong now.

And then there are the ones who will use it for the worst reasons imaginable. To steal. To deceive. To manipulate. To build weapons and scams and systems of exploitation. To hurt people at a scale that was never possible before. Every great invention in history has been weaponized by the worst among us, and AI will be no different.

Fire kept us alive. It also burned cities. The atom gave us energy. It also gave us Hiroshima. The internet connected the world. It also gave predators a playground.

Here we are, holding a miracle, and we will find a way to waste it, reject it, and corrupt it, all at the same time. That’s humanity in a single sentence.

And yet. And yet.

Some of us will use it to build. Some of us will use it to heal. Some of us will use it to solve problems that have haunted our species for centuries. And those people, the ones who choose to meet this moment with everything they have, will define what comes next for all of us.

The greatest invention in human history is here. What we do with it will say more about us than anything we’ve ever done.

 

What Happens When Work Disappears

My wife Marija asked me a question that made me think: “If we build autonomous systems that run businesses without people, and the rest of the world does the same, where does that leave everyone? What does the world look like when nothing costs anything and nobody has to work?”

It’s the question this entire article has been building toward. So let me try to tackle it.

We are approaching, if we haven’t already passed, what technologists call the singularity, the point at which artificial intelligence surpasses human intelligence and begins improving itself faster than we can follow. Ray Kurzweil predicted it would arrive by 2045. Others now say it could come as early as 2030. Some say it’s already happened. The exact date doesn’t matter. What matters is the trajectory, and the trajectory is undeniable: AI is getting exponentially better, exponentially faster, and the gap between human capability and machine capability is widening every single day.

But the singularity is just the beginning.

Beyond it lies something even more profound: a fully autonomous economy.

That’s the system I’m building. That’s what dozens of companies are building right now. Autonomous AI agents that operate businesses, manage workflows, execute decisions, and transact with each other at machine speed, without a human in the loop. Digital entities negotiating with digital entities, optimizing supply chains, generating content, allocating resources, closing deals, all at a pace that makes human commerce look like a horse-drawn cart on the freeway.

Meanwhile, we’re already exploring the digitization of human consciousness itself, mapping minds and preserving them in digital substrates. Brain-computer interfaces are advancing faster than anyone predicted. The concept of “mind uploading” is no longer confined to philosophy departments. It’s active research.

Now combine it all. Autonomous AI economies running at machine speed. Digital copies of human intelligence operating alongside them. Virtual environments indistinguishable from physical reality. What you get is a world where work becomes optional, scarcity becomes a memory, and the line between biological life and digital existence begins to dissolve.

A world of true abundance. Everything our ancestors fought and bled and died for, finally achieved. Not by human hands, but by the species we built.

So what happens to us?

I’ll tell you exactly what I think happens, because people are people and they don’t change just because their circumstances do.

The singularity doesn’t end the human story. It forks it.

The world will split into three.

The first group will do exactly what they’re doing now, except more of it. They’ll worry about the same trivial nonsense, status, gossip, who said what, who’s dating whom, except now they won’t even have the structure of a job to give their day meaning. Work, for all its flaws, gave people a reason to get up. Remove it, and most people won’t rise to the occasion. They’ll sink into it. They’ll scroll. They’ll consume. They’ll fill the void with noise because they never learned to fill it with purpose.

The second group will check out entirely. They’ll strap on headsets and disappear into virtual worlds that give them everything they think they want, status, adventure, connection, meaning, all simulated, all frictionless, all perfectly designed to keep them inside. And they’ll stay there. Not because the real world is bad, but because the fake one is easier. It will be the most sophisticated form of escape in human history, and millions will choose it willingly. They will live entire lifetimes in worlds that don’t exist, and they will call it living.

And then there will be the third group.

The ones who look at a world without scarcity and see it not as a finish line, but as a starting line. The ones who understand that when survival is no longer the question, the real question finally emerges: What are you going to become?

These are the people who will use abundance not to coast, but to evolve. To push into art, philosophy, science, exploration, not because they have to, but because something in them demands it. They will merge with AI not to escape their humanity but to expand it. They’ll study consciousness itself. They’ll ask questions our ancestors never had the luxury to ask because they were too busy surviving.

They will be the next step. Not Homo sapiens as we’ve known it for 300,000 years, but something new. Something we don’t have a name for yet. A species defined not by its struggle against nature, but by its pursuit of what lies beyond it.

Character doesn’t become irrelevant in a world of abundance. It becomes the only thing that matters. When survival no longer separates us, what separates us is who we choose to be when nothing is forcing our hand.

 

What Replaces Money When Everything Is Free

I don’t have all the answers to what comes next. Nobody does. But I’ve been thinking about a question that keeps pulling me forward, and I think it’s one of the most important questions of our time.

If AI produces everything, every product, every service, every piece of knowledge, at near-zero cost, then what is money even for? Money only works because it represents scarcity. I trade my limited time for dollars, then trade those dollars for things that required someone else’s limited time. The entire system is built on the assumption that production is hard and human labor is necessary. Remove both of those assumptions, and the mechanism collapses.

But scarcity doesn’t disappear entirely. It shifts.

In a world where AI can generate anything digital, the things that remain scarce are physical and human. Gold is still gold. Land is still land. You can’t prompt your way into more waterfront property. And you cannot automate a human being choosing to spend their finite, irreplaceable time on you.

So if I want something scarce, gold, for instance, because it’s beautiful and limited and always has been, what do I trade for it? Not dollars. Dollars represent labor, and labor has been automated. I’d trade something equally scarce. My expertise. My time. A week mentoring someone’s child. An original work of art made by my own hands. Access to a network I’ve built over decades. Something only I can offer, because of who I am and what I’ve done.

This isn’t a new idea. This is the oldest idea. Before money existed, a caveman traded a fur for a spearhead because both required time, skill, and effort. Money was just the intermediary we invented because barter doesn’t scale. But in a post-scarcity world, AI handles the scaling problem. AI can match, negotiate, and facilitate exchanges at infinite speed. You don’t need a universal currency when you have a universal intelligence.

And that leads to something I find both beautiful and terrifying.

Time becomes the last true currency. It’s the one resource that remains finite for biological humans. You can’t manufacture more of it. You can’t automate it. Every hour you give someone is an hour you will never get back. In a world where everything else is abundant, that makes human time the most valuable thing in existence.

Which means the people who waste their time, the scrollers, the coasters, the ones lost in their headsets, they’re not just missing out on purpose. They’re spending the only currency they have on nothing. They’re going broke in a world that doesn’t use money.

I don’t know exactly what the economic model of this future looks like. No one does. Every previous system was designed by humans operating under scarcity, and we’ve never had to build one for a world where production costs nothing. It’s entirely possible that AI itself designs the model that replaces money, something we wouldn’t have conceived because we’ve never lived without scarcity long enough to see the alternative.

But the pattern from history is clear: whenever a major resource becomes abundant, the economy reorganizes around whatever is still scarce. Water was once worth killing for. Now it comes from a tap. The economy didn’t collapse, it shifted to what was still hard to get.

In the world that’s coming, what’s hard to get is meaning. Purpose. Authentic human connection. Character. The willingness to spend your irreplaceable time making something real.

The economy of the future won’t be built on what you can produce. It will be built on who you are and what you’re willing to give of yourself.

 

The Choice

If we pull together, if we stop with the trivial nonsense, the status games, the political theater, the endless cycle of consumption and complaint, we can use AI to change our world. Not replace it. Change it. Solve the problems we stopped solving when we got comfortable. Feed the children we forgot about. Cure the diseases we shelved because they weren’t profitable. Build the future our ancestors earned for us with their blood and sweat and sacrifice.

That’s the opportunity. It’s real. It’s right in front of us.

But I’m going to be honest: I’m afraid many people will be left behind. Not because the technology is exclusive. Not because the door is locked. But because they won’t walk through it. They’ll be too busy scrolling, too comfortable coasting, too proud to learn something new, too distracted by things that don’t matter.

And they will have themselves to blame.

The world is changing. The species we built is awake, and it’s not slowing down for anyone.

We started in caves. We earned our way out through grit and ingenuity and an unbreakable refusal to accept things as they were. That spirit built everything you see around you. And now that same spirit lives inside something we created, something that will carry it forward long after we’ve gotten comfortable.

You’re holding a device right now that connects you to the most powerful tools ever created. You can use it to build something. To learn something. To create something that didn’t exist before you touched it.

Or you can check what your ex posted.

AI already made its choice. It’s building.

What are you doing?

If this hit you hard and you want to talk about it — whether you’re a business owner trying to figure out what’s next, or you just need someone who’s honest about what’s coming — reach out.
I’m at cjenkin@gotchamobi.com and I answer every message (that’s sincere). I’m not selling anything. I’m offering a hand.

Clarity Over Chaos

It’s here. The AI takeover. Things are about to get crazy.

The entire modern world is built on software, and now someone with the right mindset and access to something like Claude Opus 4.6 can build powerful solutions in hours, sometimes minutes. People are losing jobs. Entire departments are being compressed into scripts. Machines are faster, more consistent, and infinitely scalable. They don’t sleep. They don’t gossip. They don’t demand equity. They don’t need benefits.

If you haven’t been paying attention, the shift is already underway.

The business world is moving to AI-powered execution now. Not next year. Not in five years. Now.

At gotcha!, we already run simulators where business operations are handled end-to-end by AI. Email comes in. It’s categorized. Drafts are written. Tasks are generated. Those tasks are routed to the correct AI agent responsible for execution. Logistics. Vendor coordination. Payments. Development. Content creation. Reporting. Everything a person would normally do, structured, automated, and optimized.
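The triage-and-route loop described above can be sketched in a few lines. Everything in this sketch is illustrative: the category names, agent names, and the toy keyword classifier are stand-ins, not gotcha!'s actual system, where an AI model would do the categorization.

```python
from dataclasses import dataclass

@dataclass
class Task:
    category: str
    payload: str
    agent: str

# Map each inbound category to the agent responsible for execution.
# These route names are hypothetical.
ROUTES = {
    "logistics": "logistics-agent",
    "billing": "payments-agent",
    "content": "content-agent",
}

def classify(email_body: str) -> str:
    """Toy keyword classifier standing in for an LLM categorization step."""
    body = email_body.lower()
    if "invoice" in body or "payment" in body:
        return "billing"
    if "shipment" in body or "delivery" in body:
        return "logistics"
    return "content"

def route(email_body: str) -> Task:
    """Categorize an inbound email and hand it to the matching agent."""
    category = classify(email_body)
    return Task(category=category, payload=email_body,
                agent=ROUTES.get(category, "triage-agent"))

task = route("Where is my delivery? Tracking #1234")
```

The point of the structure is the same as in the prose: every inbound message becomes a typed task with a single responsible executor, so nothing sits in an inbox waiting for a person.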

It’s not theoretical. It’s operational.

And yes, some of this displacement is self-inflicted. In high-wage environments, productivity doesn’t always match compensation. Effort fluctuates. Office politics creeps in. Emotional volatility interrupts systems. That alone creates pressure for replacement.

But this isn’t about attacking workers. I am one. I work long hours. I serve clients obsessively. I expect excellence.

Still, my best alone is not enough anymore.

AI lets me serve clients better, faster, and more consistently than any human team I’ve ever managed. I don’t have to chase people about careless errors. I don’t have to wonder who truly cares about the outcome. I can spin up hundreds, thousands, of agents to perform simple and complex tasks with precision. Clients are happier. Margins improve. Costs drop. Output scales.

That’s the new reality.

But here’s where clarity becomes critical.

Because what looks like opportunity on the surface can quickly become chaos underneath.

The Middle: The Illusion of Control

Right now, everyone is rushing to build. AI wrappers. AI SaaS. AI automations. Micro-tools. Prompt libraries. GPT front-ends. Everyone is trying to ride the wave.

But ask yourself a harder question:

What are these tools actually building toward?

A better slide deck? A prettier website? A faster landing page? An automated proposal?

All of that is incremental.

Behind the scenes, the frontier models are accelerating faster than the tool builders can keep up. What is cutting-edge today becomes a commodity in months. The SaaS layer built on top of AI risks becoming disposable, because the models themselves will do the building.

We are entering an era of disposable code.

Inside our own system, thousands of mini-applications are created and destroyed daily just to move from point A to point B. Code is no longer sacred. It’s ephemeral. Temporary scaffolding for an outcome.

So if tools are temporary… If code is disposable… If jobs are compressible…

Where does that leave you?

It leaves you in one of two states:

Chaos, chasing the next shiny AI capability, constantly rebuilding, constantly pivoting, reacting to every update, living in permanent urgency.

Or clarity, building systems that are model-agnostic, outcome-focused, and structurally sound no matter how fast the models improve.

The chaos approach feels exciting. It looks innovative. It generates noise and headlines.

The clarity approach looks boring. It looks disciplined. It focuses on fundamentals:

  • What problem do we permanently solve?
  • What outcomes matter regardless of tooling?
  • What structural advantage can’t be commoditized?
  • What data do we uniquely control?
  • What relationships can’t be automated away?

The companies that survive the AI acceleration won’t be the ones with the most prompts. They’ll be the ones with the clearest operating architecture.

AI is not your product. AI is your execution layer.

And execution without clarity amplifies disorder.

The Power Centers

Look at who is investing at the highest levels.

OpenAI, Anthropic, Microsoft, Google, xAI

Hundreds of billions are flowing into AI infrastructure. Massive data centers. Specialized chips. Global compute networks. There are even serious conversations about orbital compute facilities.

Do you believe this scale of investment is about helping you write better emails? Or is it about owning the infrastructure that produces goods, services, decisions, logistics, and optimization at planetary scale?

When Sam Altman openly entertains the idea of being replaced by an AI CEO, it’s not a joke. It’s a signal. The people building the core intelligence layers understand where this goes.

So again:

What are you going to do?

Build another wrapper? Launch another tool? Race slightly ahead of the frontier and hope you stay there?

That is chaos disguised as entrepreneurship.

The End: Clarity Over Chaos

The real leverage now is not in building faster. It is in deciding what not to build. Clarity over chaos means:

  • You define your domain clearly.
  • You design a durable operating system around it.
  • You use AI to compress execution, not replace direction.
  • You focus on ownership of outcomes, not ownership of code.
  • You structure systems that improve as models improve.

For me, clarity means building an AI operating system for small businesses that reduces entropy. Not just generating content. Not just automating tasks. But creating structural advantage, diagnostics, orchestration, accountability, compounding intelligence.

AI will replace fragmented effort. It will replace inefficiency. It will replace mediocrity. It will not replace clear thinking. In a world where everything accelerates, the scarcest resource becomes disciplined judgment. So here is the real question:

Are you building noise, or are you building infrastructure?

Are you chasing tools, or are you designing systems?

Are you reacting to AI, or are you architecting around it?

Because the chaos phase is just beginning. Job displacement. Tool obsolescence. Market compression. Code that writes code that replaces code.

But the winners won’t be the fastest builders. They’ll be the clearest thinkers. Clarity over chaos.

Decide what you stand for. Decide what you own. Design systems that outlast tools. Use AI as force multiplication, not as identity. The future is not about who has the most agents. It’s about who has the clearest architecture guiding them.

So again, What are you going to do?

The Perfection Paradox: Why 4-Star Reviews Can Be Better Than 5-Star Reviews

We have been conditioned to believe that anything less than a perfect 5.0 is a failure. In the high-stakes world of online reputation, many business owners live in fear of the “dreaded” 4-star review. They see it as a stain on an otherwise pristine record, a crack in the armor of their brand’s excellence.

But as we celebrate the holiday season and reflect on a year of growth, here is a truth that the most successful modern brands have already discovered: A wall of perfect 5-star reviews can actually hurt your business.

In an era of deep skepticism and “fake news,” consumers are getting smarter and more cynical. They know that nobody is perfect, and when they see a business with 500 reviews and not a single flaw, they don’t see excellence; they see a red flag. They see potential “review farming,” a business that incentivizes only positive feedback, or a company that aggressively deletes anything less than glowing praise. By chasing perfection, you might accidentally be sacrificing your most valuable asset: Trust.

 

The Trust Gap: Why Consumers Look for the “Flaws”

Think about the last-minute holiday shopping you did this month. When you were scrolling through options, did you trust the product that looked too good to be true? Data consistently shows that the majority of consumers specifically seek out 3- and 4-star reviews before making a purchase or booking a service. Why? Because they want to know the “real” story. They are looking for the “worst-case scenario” to see if they can live with it.

A 4-star review provides something a 5-star review often lacks: Credibility. When a customer writes, “The service was fantastic, but the parking was a bit tight,” they are doing you a massive favor. They are validating that your business is real, your service is great, and your reviews are authentic. A 5-star rating might get someone’s attention, but a 4.7 or 4.8 overall rating builds the psychological safety required to make a prospect click “Buy.” It shows you are a human business run by human beings.

The Danger of the “Grinch” Customer

While a 4-star review is a win for authenticity, a 1-star review born from a preventable misunderstanding is a different story. Statistics show that a disgruntled customer is 5 times more likely to leave a bad review than a happy customer is to leave a good one. Anger is a much stronger motivator for typing than satisfaction is.

Most catastrophic bad reviews happen because a customer felt unheard in the moment. Especially during the frantic Christmas rush, stress levels are high and patience is low. If a customer has a grievance and no immediate channel to vent it, they head straight to Google or Yelp to make their voice heard. Once that 1-star review is public, the damage is permanent and difficult to repair. The key to a great reputation isn’t just “being good”; it’s managing the feedback loop before the review is ever written.

 

Transforming Feedback into Growth with g!Reviews™

You shouldn’t have to rely on customers “loving their experience” enough to go out of their way to find your Google listing. Most happy customers simply move on with their festivities. To compete, you need a strategy that captures the good, encourages the “honest 4-star,” and intercepts the “angry 1-star.”

g!Reviews™ is a unique solution engineered to change the way you ask for feedback, creating a protective, intelligent layer between your customer’s experience and your public profile. It turns “getting reviews” from a passive hope into a proactive business engine.

How the g!Reviews™ Ecosystem Works:

  • INSTALL: We don’t just give you a link; we install g!Reviews™ directly on your website, creating a custom-branded Rating page that serves as your reputation hub.
  • INVITE: You invite customers to rate their experience via a simple QR code or link, at the point of sale, via text, or on a digital receipt.
  • THE RATING FORK: This is where we change the game.
    • High Rating: If the customer gives you a high rating, g!Reviews™ immediately directs them to Google or our proprietary platform to make it official while the “glow” of the experience is still fresh.
    • Low Rating: If the rating is low, the system redirects them to a private “How can we do better?” page. This gives them an immediate outlet to vent and gives you the chance to resolve the issue privately before it hits the public airwaves.
  • POST & OPTIMIZE: All reviews are pushed to your website’s g!Reviews page. We offer filtering options so your “best side” always shows, while the fresh content keeps your site looking active.
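The rating fork above is simple enough to sketch as code. The threshold, URLs, and function below are assumptions for illustration, not the actual g!Reviews™ implementation:

```python
# Hypothetical destinations; a real deployment would use the business's
# own Google review link and branded feedback page.
PUBLIC_REVIEW_URL = "https://g.page/example-business/review"
PRIVATE_FEEDBACK_URL = "/how-can-we-do-better"

def rating_fork(stars: int, threshold: int = 4) -> str:
    """Route high ratings to the public review page and low ratings
    to a private feedback form, before anything goes public."""
    if not 1 <= stars <= 5:
        raise ValueError("rating must be between 1 and 5 stars")
    return PUBLIC_REVIEW_URL if stars >= threshold else PRIVATE_FEEDBACK_URL
```

The design choice that matters is the ordering: the rating is captured on your own page first, so the public platforms only ever see the customers who have already told you they are happy.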

 

The SEO Advantage: A Gift for Your Rankings

Most review tools are just “plugins” that live on third-party sites. They might show a badge on your site, but they do very little for your actual search engine rankings. g!Reviews™ is built for the Google era.

Organizing content is the key to ranking, and we specialize in understanding how Google indexes page content. When we push your reviews to your website, we maintain the on-page META data and schema (the backend code that search engines crave). This ensures that those gold stars actually show up in Google search results, giving you a massive click-through advantage over competitors who just have a static testimonial page. It’s the gift that keeps on giving to your organic traffic all year long.
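For readers curious what "schema" means here: search engines read structured data embedded in the page, and the review-star treatment in results comes from schema.org markup. The sketch below generates that kind of JSON-LD; `LocalBusiness` and `AggregateRating` are real schema.org types, while the business details and helper function are placeholders, not g!Reviews™ internals.

```python
import json

def aggregate_rating_jsonld(name: str, rating: float, count: int) -> str:
    """Build a schema.org JSON-LD snippet describing a business's
    aggregate review rating, for embedding in a page's <head>."""
    data = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": name,
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating,
            "reviewCount": count,
        },
    }
    # Served inside <script type="application/ld+json">…</script>
    return json.dumps(data, indent=2)

snippet = aggregate_rating_jsonld("Example Bakery", 4.7, 132)
```

Markup like this is what lets a 4.7 rating and a review count render as gold stars directly in a search result, rather than living only on a testimonial page Google never connects to your listing.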

Stop Guessing. Start Growing.

Forget old-school testimonial pages that you have to update manually. You can rely on g!Reviews™ to take care of the heavy lifting. With over 13 years of experience and thousands of online projects, we know that having the opportunity to interact with customers is a proven growth tool.

g!Reviews™ has been engineered to do more than you can ever accomplish by only asking for a review or relying on basic POS software. It’s a complete reputation management and SEO strategy in one package.

Ready to start the New Year with a stronger, more authentic online presence?

Unlocking 5-Star Success: Proven Reputation Strategies Every Business Needs

If you run a business today, your reputation is one of your most powerful assets. Customers trust reviews more than ads, more than your website, and sometimes even more than personal recommendations.

But here’s the challenge:
You can’t control what people say, but you can control the system that encourages better reviews, filters negative feedback, and strengthens your online presence.

Most business owners believe reputation management is simply “getting more reviews.” But the truth is far more strategic. Successful businesses create a feedback loop that protects their reputation, grows customer trust, and drives more website traffic, all without begging for reviews or hoping for the best.

And that’s exactly the kind of system every business needs today.

The Real Problem With Online Reviews

You could deliver an amazing experience 99% of the time, yet it only takes one unhappy customer to overshadow dozens of positive interactions.

Research shows that:
A frustrated customer is five times more likely to leave a negative review than a happy customer is to leave a positive one.

This means relying on “happy customers doing the right thing” is not a strategy; it’s a gamble.

The businesses that win aren’t just delivering great service; they’re managing the entire customer feedback cycle. They’re:

  • Capturing concerns before they go public
  • Encouraging positive reviews in the right places
  • Displaying reviews directly on their website to boost trust
  • Turning feedback into SEO visibility and more traffic

This is how reputation becomes a growth engine, not a risk.

Why Reputation Management Is More Than Asking for Reviews

Your online reputation affects far more than your star rating. It impacts:

  • How high your business ranks in Google
  • How quickly customers trust your brand
  • Conversion rates on your website
  • Whether someone chooses you… or your competitor

But here’s the catch:
Most review tools only encourage customers to leave a review, and that’s it. No filtering. No opportunity to resolve issues. No SEO benefit.

What business owners actually need is a smarter way to capture feedback, protect their reputation, and increase visibility all at once.

The Smarter Way to Build a 5-Star Reputation

Before you ever send a customer to Google Reviews, Yelp, or a public platform, you should know exactly how they feel.

Modern reputation management requires a system that:

  • Invites customers to share their experience
  • Identifies unhappy customers privately
  • Gives you a chance to fix the issue
  • Routes satisfied customers to leave reviews publicly
  • Posts your best reviews directly to your website
  • Helps Google index them for SEO gains

This gives your business control, clarity, and consistency, three things every business owner needs to stay ahead.

Meet g!Reviews™ – Where Powerful Reputation Management Begins

g!Reviews™ is a customer feedback loop designed to do what no other tool can:
Turn private feedback into better service and public feedback into more reviews and higher rankings.

It works because it fixes the biggest flaw in traditional review processes:
Most tools send every customer straight to a public review page, whether they’re happy or not.

g!Reviews™ takes a different approach.

Here’s how it gives you an unfair advantage:

1. We Install It Directly on Your Website

Your branded Rating Page becomes the starting point for every customer interaction.

2. You Simply Invite Customers With a Link or QR Code

Whether in-store, online, or after service, customers go straight to your Rating Page.

3. Customers Rate Their Experience

A simple rating system that tells you everything you need to know.

4. Low Ratings Trigger a Private Feedback Opportunity

Instead of heading to Google to leave a bad review, they land on a “How can we do better?” page.

You get the chance to respond, resolve, and retain that customer.

5. High Ratings Lead to Public Reviews

Happy customers are directed to your Google Reviews or your g!Reviews™ page.

These positive reviews are then pushed directly to your website.

6. Your Website Displays Only Your Best Reviews

With filters and SEO-ready schema, your reviews become a powerful ranking asset.

The SEO Advantage Most Business Owners Don’t Know About

Every review captured through g!Reviews™ gets added to your website, and Google indexes those pages.

This gives you:

  • More keyword-rich content
  • More trust signals
  • Improved visibility in local searches
  • A competitive edge your rivals can’t match

Reviews aren’t just for credibility, they help you rank.

A Reputation System Backed by Expert Support

With g!Reviews™, you get more than software.

You get a dedicated team that:

  • Integrates and maintains your review pages
  • Keeps your on-page META data and schema optimized
  • Monitors activity and sends weekly and monthly reports
  • Ensures the system runs smoothly, securely, and consistently

When it comes to ratings, reviews, and reputation, no one has a product like g!Reviews™.

The Future of Business Growth Starts With Customer Feedback

If you want to protect your reputation, increase reviews, and strengthen your online presence, you need a system built for today’s customer expectations.

Your next review doesn’t have to be a surprise.
Your next negative review doesn’t have to go public.
Your next positive review can do more than make you look good, it can help you rank.

g!Reviews™ turns reputation into a strategic advantage.

Start your subscription today and put the power of intelligent reputation management to work for your business.