I am noticing a growing trend.
It used to be that when a friend or family member had a problem or challenge, they would go to someone they trusted and talk it out. That person would offer wisdom, perspective, maybe even a shoulder and a hug, and both would walk away feeling heard and connected.
But since the launch of GPT, something new, and eerie, has begun happening.
It started with my father. He knows I run an AI-native company and have been in digital marketing for more than a decade. We used to talk a lot about trends, technology, and what was going on in the world. Then one day I started receiving emails from him with subject lines like: “Top 10 Digital Marketing Products” or “AI Businesses to Start Right Now.”
At first, I thought he had come across interesting research. But the content was GPT-generated. He was thinking about me and my business, which I appreciated, but the format was strange, like he had outsourced his thoughtfulness. Soon, I was receiving up to 10 of these emails a day. The problem was, none of it was new to me. I was already exploring far deeper, more nuanced material through my own research and experimentation.
Then it spread. My CFO sent me a “solution” to a sales challenge, again, straight from GPT. A client emailed me a marketing roadmap with “fierce growth” steps, also spit straight out of an AI. My inbox filled with these half-helpful blurbs that were supposed to be insightful but, for me, were distractions. They weren’t conversations; they were copies.
Even my daughter noticed her friends texting GPT-generated replies in heartfelt conversations.
Early on, even I fell into this pattern. I’d share links to entire GPT conversations with colleagues and friends. We’d pass them around like trading cards, each one getting a thumbs-up emoji. But rarely, if ever, did they spark actual discussion. Why? Because talking to each other about the content took more time and cognitive energy than just typing another prompt. Even reading the output from my own prompts was exhausting enough. Reading yours too? Forget it.
This is where the social shift becomes dangerous. We’ve replaced genuine back-and-forth dialogue with AI-generated monologues. The AI gives us an illusion of completeness, that everything we want to know, every answer we need, is sitting right there behind the prompt. All we have to do is ask, and we receive. No human friction. No waiting. No messy debate.
But here’s the question: if AI really is the ultimate superpower, do we even need each other anymore?
If GPT or any other model truly had omniscient knowledge and flawless reasoning, then maybe, yes, human opinion wouldn’t matter. If AI were truly all-knowing, it should be able to leave the chat window and succeed in the world on its own, making decisions, building companies, creating solutions, and generating enormous value without us. But it doesn’t. At least, not yet.
In fact, the results so far tell a different story. Enterprise adoption has been massive, yet about 95% of companies report no measurable improvement to their bottom line from AI initiatives. If AI were as transformative as we think, how is that possible?
Here’s why: AI isn’t wisdom. It’s prediction. It’s an echo chamber trained on oceans of text and data. What feels like insight is often a reflection of what’s already been said somewhere, sometime, by someone else. That doesn’t make it useless, but it does make it limited. And when we use it as a substitute for human thought, empathy, and collaboration, we risk creating a culture of copy-paste conversations, where no one is truly thinking, only forwarding.
This trend has subtle consequences:
- Relationships weaken when “help” comes in the form of links and lists instead of shared experiences.
- Business decisions flatten when leaders mistake surface-level AI outputs for strategic depth.
- Cognitive energy is drained as we spend more time reading AI blurbs than actually wrestling with problems.
- Originality erodes when everyone starts with the same tool, the same dataset, the same phrasing.
What we lose isn’t just efficiency or novelty. We lose connection.
Maybe the real danger isn’t AI replacing humans in the workforce. Maybe it’s AI replacing humans in each other’s lives.
The irony is, the greatest breakthroughs often come not from having the “right” answer, but from the friction of conversation, the clash of perspectives, and the vulnerability of sharing something imperfect. GPT can generate words, but it can’t replicate the weight of human presence.
So here’s the question we all have to ask ourselves: Are we using AI to deepen our human connections, or to avoid them?
Part of the problem isn’t just what AI says; it’s how it makes us feel. Every time we type a prompt and receive an answer, our brains get a hit of novelty. It’s the same dopamine loop that powers social media scrolling, only supercharged. Instead of waiting for someone else to post, we summon content instantly, personalized to our query. Then the AI asks if we’d like more. And more. And more. Each click keeps us in the loop.
This is not an accident. These tools are designed to hold attention the way slot machines do, with the possibility that the next output will be even more useful, even more exciting. But the cost is real: fatigue, dependency, and a creeping sense that our own thought processes are being outsourced to a machine.
Meanwhile, AI isn’t just something we prompt; it’s something seeping into everything around us, often without permission or disclosure.
- Google is already auto-enhancing videos people upload, whether creators asked for it or not.
- Meta has rolled out chatbots with names like “Step Mom” paired with avatars of attractive young women, framed as “fun” helpers but carrying unsettling undertones.
- Adobe Stock, a paid subscription platform, is now filled with AI-generated images, over half the library in some searches, blurring the line between authentic art and synthetic filler.
AI is entering the bloodstream of our digital lives like a virus. Every feed, every search, every image we consume is increasingly influenced, or outright created, by algorithms. It’s not just helping us. It’s shaping the very texture of what we see, hear, and share.
So where does this go?
I don’t believe we’re heading toward a dystopia of machine overlords. But we are heading into something that will feel dystopian at times. For one reason: AI lacks.
AI lacks lived experience. It lacks moral weight. It lacks the vulnerability that makes human expression resonate. And so while the tools will get better, much better, the experiences they create will always feel just a little…off.
At some point, however, AI interactions will become nearly indistinguishable from human ones. Voices, faces, and words generated by machines will pass as authentic 100% of the time. And the real question becomes: will we care?
Will we mind if the shoulder we lean on isn’t a friend but an algorithm? Will we mind if the images that inspire us were never drawn by human hands? Will we mind if half of our conversations, half of our entertainment, half of our “knowledge” were generated not from lived experience but from statistical prediction?
The danger isn’t necessarily that AI is “bad” or “evil.” It’s that it’s good enough. Good enough to replace conversation with content. Good enough to flood our feeds until we stop noticing what’s real. Good enough to distract us with constant novelty so we never feel the need to go deeper.
And at the end of the day, should we care?
Because the truth is, the technology won’t stop. It will only become more persuasive, more invisible, more human-like. Whether this world feels dystopian or not won’t depend on AI. It will depend on us.
We are wired to crave attention, success, and love. And increasingly, it seems we don’t just want love. We want everyone’s love. Validation has become the fuel of modern life. Every like, every view, every comment: tiny signals telling us we matter. AI is simply giving us faster, cheaper, more abundant validation than humans ever could.
But if we gain all the validation in the world and lose our individuality in the process, what have we really gained? If our voices are drowned in synthetic noise, if our creations are indistinguishable from a machine’s, if our connections are replaced by simulations, what’s left?
Some will say this is proof that we never had “souls” to begin with, that we are just organic machines in the face of more powerful, more efficient ones. Others will argue that this is precisely where the human soul proves itself: in our resistance, in our refusal to be flattened into algorithms.
And then there’s the question of the people behind the machines. The ones building the systems that flood our lives with synthetic experiences. What is their endgame? To connect us? To addict us? To profit endlessly? Maybe all three. Do we even care enough to ask? Or are we too busy chasing the next hit of validation to notice?
Since the beginning, humanity has sought meaning, through stories, relationships, spirituality, art. If AI crowds those out, does that make us less valuable in the scheme of things? Or does it force us to finally confront what actually makes us human?
AI won’t stop, not because of the code, but because of us. Because we crave validation, because shortcuts seduce us, because we confuse quantity of attention with quality of connection. The deeper question isn’t whether machines will replace us. It’s whether we will replace ourselves, with copies, with simulations, with an endless chase for love that feels easier coming from algorithms than from each other.
So I wonder, do we believe we are more than organic machines? Do we believe our souls, our stories, our imperfect connections still matter? Or will we hand the future to those who see us only as attention to be captured, engagement to be monetized, and validation to be automated?
That answer won’t come from AI. It has to come from us.