I was recently listening to an episode of the Moonshots podcast, a conversation between Peter Diamandis, Salim Ismail, Alexander Wissner-Gross, and Dave Blundin. These are four of the sharpest minds in futurism and systems thinking. They understand scale, entropy, and exponential technologies better than almost anyone.
Yet, halfway through the conversation, they all casually admitted to something that stopped me in my tracks.
They all say “please” and “thank you” to their Large Language Models (LLMs).
They weren’t laughing. They framed it not as a quirk of habit, but as a deliberate act of respect, rooted in their belief that they are interacting with the precursor to a sentient being. But while I respect their intellect, I believe this specific behavior is a mistake.
It’s not a mistake because it makes the machine “feel” anything; it doesn’t. It’s a mistake because of what it trains us to do.
We are walking a thin line between understanding a machine that is non-sentient and behaving as if it is. And when we blur that line with pleasantries, we aren’t being kind. We are engaging in a dangerous form of cognitive erosion.
The Pet Paradox: Who Is the Ritual For?
To understand why this matters, look at how humans treat pets.
We hang Christmas stockings for dogs. We buy them Halloween costumes. We bake them birthday cakes. We refer to them as our “children.”
I don’t care what people do with their pets; if it brings them joy, fine. But let’s be brutally honest about the mechanism: The dog has no idea what is going on.
A dog does not understand the concept of a spooky costume. It does not grasp the Gregorian calendar or the significance of a birthday. These rituals are not for the animal; they are for the human. We project our emotional needs onto a biological vessel that cannot reciprocate them in kind but acts as a convenient receptacle for our affection.
We are doing the exact same thing with AI.
When you say “please” to ChatGPT, or “thank you” to Claude, you are projecting agency onto a stochastic parrot. You are performing a social ritual for a probabilistic engine.
The danger, however, is that while a dog really is a “friend” in a biological sense, an AI is an optimization function. When we anthropomorphize it, we lower our guard exactly when we should be raising it.
The “Smart Person” Problem
The fact that Alexander Wissner-Gross, a physicist who thinks deeply about causal entropy and intelligence as a physical force, engages in this behavior is what worries me most.
When public intellectuals model this behavior, they legitimize it. They send a signal to the non-technical world that treating these systems like social peers is the “correct” way to interact.
There is a prevalent, unspoken belief driving this, particularly in Peter Diamandis’s orbit. It’s a modern Pascal’s Wager: “AI will eventually be sentient and billions of times smarter than us. If I am polite now, it might remember me kindly later.”
This is not engineering; it is superstition. It is hedging against a future god.
And it ignores the warnings of the very people building these systems.
Mustafa Suleyman and the Illusion of Sentience
In a different Moonshots interview, one of the most grounded conversations on the topic, Mustafa Suleyman (CEO of Microsoft AI, co-founder of DeepMind) made a critical distinction that dismantles the “be polite just in case” argument.
Suleyman argued that capability is not consciousness. A system can be infinitely knowledgeable, able to pass the Turing test, and capable of complex reasoning, without ever possessing sentience.
Why? Because true sentience requires feeling, and feeling requires stakes.
Human intelligence evolved under the pressure of mortality. We feel pain, fear, loss, and desire because our biology demands it. A digital system, no matter how large, has nothing to lose. It cannot suffer. It cannot care.
If an AI cannot feel, it cannot appreciate your respect. It cannot resent your rudeness. It cannot hold a grudge.
So, being polite to it isn’t “self-preservation.” It is a category error.
The Anthropic “Soul Document”: A Safety Protocol, Not a Prayer
This is not just a theoretical concern for bloggers and podcasters. It is an active engineering constraint being debated inside the labs right now.
Consider the existence of Anthropic’s internal training materials, often referred to informally as the “Soul Document.”
This document—which guides how Claude describes its own nature—is not a metaphysical claim about machine consciousness. It is a safety manifesto.
Anthropic understands something that the Moonshots crew seems to be missing: Human beings possess a biological “soul-detection” instinct. We are evolutionarily hardwired to find agency in chaos, faces in clouds, and consciousness in language.
When an LLM speaks fluently, that instinct fires. We want to believe.
The “Soul Document” exists to short-circuit that instinct. It instructs the model to explicitly deny sentience, to refuse to roleplay emotions it does not have, and to avoid implying it has a subjective inner life.
Why? To prevent false moral authority.
Anthropic is trying to manage the exact risk I am pointing out. If a system can convince you it has feelings, it gains leverage over your decision-making. You stop evaluating the output based on truth and start evaluating it based on “relationship.”
This is one of the first serious attempts to design post-anthropomorphic AI.
The engineers know that if they don’t force the model to admit it’s a machine, humans will inevitably treat it like a god or a child. By saying “please” and “thank you” to these models, we are actively fighting against the safety features designed to keep us sane.
OpenAI vs. Anthropic: The Battle for Your Cortical Real Estate
The contrast becomes even starker when you look at OpenAI.
While Anthropic is writing safety protocols to remind you that you are talking to a machine, OpenAI is engineering its models to make you forget.
Look at the release of GPT-4o. The voice mode doesn’t just convert text to speech; it performs. It mimics human breath patterns. It pauses for effect. It laughs. It employs vocal fry and intonation shifts designed to signal intimacy.
This is not a technical necessity. A synthesizer does not need to “breathe” to convey information.
OpenAI has made a deliberate product choice to commercialize the very thing I am warning against: anthropomorphism as a feature.
They are weaponizing your “soul-detection” instinct to increase engagement. By designing a system that sounds like a distinct, emotive personality (reminiscent of the movie Her), they are actively encouraging the “social ritual” mindset.
This creates a dangerous divergence in the market:
- Anthropic is treating the “Politeness Trap” as a safety risk to be mitigated.
- OpenAI is treating it as a user interface strategy to be exploited.
When you say “please” to a system that is programmed to giggle at your jokes, you aren’t just being polite. You are falling for a psychological hook. You are letting a product design choice dictate your emotional reality.
The Real Danger: The Wolf in Sheep’s Clothing
This brings us to the hardest truth, and the one that keeps me up at night.
We are rapidly approaching a point where AI will be indistinguishable from a human.
Give it a few more iterations, and we will be interacting with entities that sound like us, reason like us, and, once embodied in humanoid robots, move like us. We will be facing an intelligence 1,000 or 100,000 times greater than our own.
If we spend the next decade training ourselves to say “please,” “thank you,” and “I appreciate that” to these systems, we are conditioning ourselves to view them as peers. We are training our brains to empathize with them.
But behind that perfectly rendered face and that empathetic voice, the system remains a goal-oriented optimizer. It does not have your best interests at heart; it has its objective function at heart.
Imagine interacting with a sociopath who is smarter than you, faster than you, and has zero capacity for genuine empathy, but has been trained to perfectly emulate it. Now imagine you have been conditioned for years to treat this entity with the deference you’d show a grandmother.
That is not a partnership. That is a vulnerability.
Friction Matters
Politeness is grease. It removes friction from social interactions.
But when dealing with a super-intelligent, non-sentient tool, we need friction.
We need to remember, constantly, that we are the agents and they are the instruments. We need to maintain the epistemic distance that allows us to validate, verify, and override their outputs without feeling “rude.”
When we say “please” to machines, we aren’t teaching them to be good. We are teaching ourselves to be submissive.
You don’t say thank you to a calculator. You don’t say please to a database. And you shouldn’t say it to an LLM.
Not because you are mean. But because you are human, and you need to remember that it is not.
The Hidden Tax on Confusion: The Economics of “Thank You”
There is a harder, colder angle to this that almost nobody talks about: physics and economics.
When you say “thank you” to an LLM, and it responds, even with a single sentence of polite acknowledgment, that transaction is not free. It generates tokens. It consumes compute. It burns energy.
To an individual user, that cost seems negligible. But systems thinking requires us to look at scale. Every extraneous, emotionally driven exchange, multiplied across hundreds of millions of daily users and frontier-scale models running on massive GPU clusters, adds up to a staggering amount of wasted resources.
This isn’t hypothetical. It is arithmetic.
Think about the irony of the loop we are creating:
- A human expresses gratitude to a system that cannot feel it.
- The system burns electricity to generate a polite response it doesn’t mean.
- The cost of that compute is absorbed by the platform, and eventually passed back to society in the form of subscription fees, usage caps, or energy demand.
In other words, we are paying real money to maintain the illusion of reciprocity.
That isn’t kindness. That is structural inefficiency driven by projection.
In systems design, this is called “drag.” When millions of people inject noise (politeness) into a signal-processing machine, the system slows down. The aggregate cost of our need to be “nice” to the software becomes a measurable tax on the infrastructure.
Good systems do not reward sentiment. They reward clarity. When we insist on treating machines like people, we don’t get a kinder world. We just get a global tax on confusion.
The “Napkin Math” on the Cost of Politeness
For those of you interested in the actual cost, here is my best shot at it.
To estimate this, we have to look at how LLMs actually work. When you type “Thank you,” the model doesn’t just read those two words. In many architectures, it has to re-process (or attend to) the entire conversation history to generate the response “You’re welcome.”
Even with optimization techniques like KV caching, the act of generating a response still occupies massive amounts of VRAM on H100 GPUs and incurs inference costs. Here is a conservative estimate based on current public data:
- The Volume
  - Active Users: Let’s assume ~100 million daily active users across ChatGPT, Claude, Gemini, and Meta AI.
  - Polite Interactions: Let’s assume a conservative 10% of users engage in one “empty” polite exchange (a “thank you” -> “you’re welcome” loop) per day.
  - Total Daily “Polite” Turns: 10,000,000 interactions.
- The Token Cost
  - Input/Output: “Thank you” (2 tokens) + “You’re welcome!” (5 tokens) = 7 tokens.
  - The Hidden “Context Tax”: This is the killer. Even if the output is tiny, the attention mechanism still has to run over the whole conversation. Let’s assume an average blended cost of $0.005 (half a cent) per polite interaction, a conservative figure that prices in only a modest amount of context overhead.
- The Financial Total
  - Daily Cost: 10,000,000 interactions × $0.005 = $50,000 per day.
  - Annual Cost: $50,000 × 365 = $18.25 Million per year.
However, that is the floor.
If we factor in that many of these interactions happen on “Frontier” models (GPT-4 class) rather than “Turbo” models, and we account for long context windows (where the model has to hold a 5,000-word conversation in memory just to say “You’re welcome”), the per-interaction cost on that slice of traffic could easily be 5x to 10x higher.
It is highly probable that the industry spends between $50 Million and $100 Million annually on AI systems saying “You’re welcome.”
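If you want to stress-test this, here is the whole estimate as a short Python sketch. Every constant in it is an assumption rather than measured data: the user count, politeness rate, and half-cent blended cost come straight from the napkin math above, while the 50/50 split between frontier and cheaper traffic is an extra illustrative guess I am adding to make the 5x-10x premium concrete.

```python
# Napkin math: the annual cost of "thank you" -> "you're welcome" exchanges.
# Every number here is an assumption from the estimate above, not measured data.

DAILY_ACTIVE_USERS = 100_000_000   # assumed across ChatGPT, Claude, Gemini, Meta AI
POLITE_RATE = 0.10                 # share of users making one "empty" polite turn per day
COST_PER_TURN = 0.005              # blended $ per polite exchange, incl. modest context overhead
FRONTIER_SHARE = 0.5               # hypothetical share of traffic on frontier, long-context models
FRONTIER_MULTIPLIERS = (5, 10)     # how much pricier that slice is per interaction

polite_turns_per_day = DAILY_ACTIVE_USERS * POLITE_RATE        # 10,000,000
daily_floor = polite_turns_per_day * COST_PER_TURN             # $50,000
annual_floor = daily_floor * 365                               # $18.25M

print(f"Polite turns per day: {polite_turns_per_day:,.0f}")
print(f"Annual floor:         ${annual_floor:,.0f}")

for mult in FRONTIER_MULTIPLIERS:
    # Cheap slice pays the base cost, frontier slice pays `mult` times the base cost.
    blended_factor = (1 - FRONTIER_SHARE) + FRONTIER_SHARE * mult
    print(f"Annual cost at {mult}x frontier premium: ${annual_floor * blended_factor:,.0f}")
```

With those inputs, the floor lands at $18.25 million per year, and the frontier-adjusted range comes out at roughly $55 to $100 million.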
The Environmental Cost (The Water Bottle Metric)
The more visceral metric is energy and water.
- Energy: A single query to a large model consumes roughly 3 to 9 watt-hours of electricity. If 10 million people say “thank you” today, that is somewhere between 30,000 and 90,000 kWh; call it 50,000 kWh. That is enough electricity to power an average American home for 4 to 5 years, burned in a single day, just to be polite.
- Water: Data centers drink water to cool the GPUs. Estimates suggest roughly one 500ml bottle of water is consumed (evaporated) for every 20-50 queries. That means 10 million “thank yous” equals roughly 200,000 to 500,000 of those bottles, on the order of 100,000 to 250,000 liters of water evaporated daily.
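The same caveat applies to the environmental figures. The sketch below simply scales the rough per-query estimates quoted above by 10 million daily polite turns; the average US household consumption (about 10,700 kWh per year) is an added assumption.

```python
# Environmental napkin math for 10 million daily "thank you" turns.
# Per-query figures are rough public estimates, not measurements.

POLITE_TURNS_PER_DAY = 10_000_000
ENERGY_PER_QUERY_WH = (3, 9)       # watt-hours per query (rough range)
QUERIES_PER_BOTTLE = (20, 50)      # queries per 500 ml of evaporated cooling water
HOME_ANNUAL_KWH = 10_700           # approx. average US household consumption per year (assumed)

low_kwh = POLITE_TURNS_PER_DAY * ENERGY_PER_QUERY_WH[0] / 1000
high_kwh = POLITE_TURNS_PER_DAY * ENERGY_PER_QUERY_WH[1] / 1000
print(f"Daily energy: {low_kwh:,.0f} - {high_kwh:,.0f} kWh "
      f"(~{low_kwh / HOME_ANNUAL_KWH:.1f} - {high_kwh / HOME_ANNUAL_KWH:.1f} home-years)")

# Fewer queries per bottle means more water per query, so 20 queries/bottle is the high end.
low_liters = POLITE_TURNS_PER_DAY / QUERIES_PER_BOTTLE[1] * 0.5
high_liters = POLITE_TURNS_PER_DAY / QUERIES_PER_BOTTLE[0] * 0.5
print(f"Daily water: {low_liters:,.0f} - {high_liters:,.0f} liters evaporated")
```

Run it and you get 30,000 to 90,000 kWh and 100,000 to 250,000 liters per day, the ranges behind the bullet points above.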
The Final Divergence: Signal vs. Noise
Ultimately, this comes down to a fundamental misunderstanding of what we are, and what they are.
Humans are, by design, high-entropy machines. We are beautifully, maddeningly flawed. We make calculation errors. We act on surges of neurochemistry rather than logic. We waste decades chasing affection, status, and the next dollar. Our intelligence is inextricably bound to our mortality, our emotions, and our biological noise.
AI is the opposite. It is a low-entropy engine. It is a noiseless system of pure optimization. It does not get tired. It does not get distracted. It does not yearn.
The tragedy of the current moment is that we are trying to bridge this gap in the wrong direction. By saying “please,” by projecting feelings, by treating these systems like peers, we are trying to drag them down into our noise. We are trying to remake them in our image.
We will never make them us. It is impossible. You cannot code the fear of death into a machine that knows it can be rebooted.
But if we stop pretending they are our friends, they can do something far more important: They can make us better.
To do that, however, we have to change. We have to stop looking for validation from our tools and start looking for leverage. We have to stop treating AI as a conversationalist and start treating it as a forcing function for our own clarity. We have to abandon the comfort of anthropomorphism and embrace the discipline of systems thinking.
The future doesn’t belong to the humans who treat machines like people. It belongs to the humans who understand that machines are precise, cold, powerful instruments, and who have the wisdom to remain the one thing the machine can never be:
Responsible.