Pick up almost any piece of writing on LinkedIn, Medium, or anywhere, really. Odds are better than even that it begins with a correction: It’s not about strategy. It’s about trust. Or: Leadership isn’t a title — it’s a practice. Or, in its aggressive form: The future of work isn’t AI replacing humans. It’s humans who use AI replacing those who don’t.
You have read all these. You may have written one. (You may even write in short sentences. Or let AI do those sentences for you.)
But this structure, the negated first clause, the affirmative reframe, has a name. Researchers and writing analysts call it contrastive negation, and it has become synonymous with AI-generated prose. LinkedIn posts, newsletters, and strategy documents now feel like they were written by the same person.
Because in a meaningful sense, they were.1 2
But to sound like an AI for a moment (how crazy is that), this is not the story of AI writing for us. It is the strangest possible bedtime story for our intellect, an uncomfortable story of us writing like AI, and not knowing we have started.
We built a loop. AI was trained on human writing, mine, yours, and billions of others’: billions of words scraped from the internet and shaped into statistical patterns that predict which word should follow another. Yes, very cool, because when it was done, the output had learned to look like us. Kind of.
Then we read the output. Then we wrote, subconsciously modeling some of our writing on the AI output. Then the AI was trained on more output. Then we read more of that.
Not so cool.
Researchers at MIT found that when writers relied heavily on ChatGPT, their essays not only became more generic — they converged. Different people, different backgrounds, different days, asked to write about entirely different ideas, produced work that skewed in the same direction.
Here is what AI says about it: “The vocabulary clustered. The concepts narrowed. The personality left the room.”3
A 2025 Cornell study showed the same effect across cultures: when Indian and American writers both used an AI writing assistant, their styles grew more similar, at the expense of Indian stylistic norms, not American ones.
If Arundhati Roy, who won the Booker Prize for The God of Small Things, had used AI, we would never have heard her unique blend of political hope and poetic personification, set in lush, lyrical, politically charged prose. We would not have: “Another world is not only possible, she is on her way. On a quiet day, I can hear her breathing.” Notice that quiet is used properly here, not in some weird sentence about something that quietly shifted (I see that everywhere and for the life of me cannot understand what it means).
Back to the point, AI is not neutral. It compresses toward the Western median. What looks like help is, in part, erasure.4
Then in early 2026, a joint study by Google and leading universities named the mechanism with particular precision: blandification. The AI was steering essays “away from anything a human might have ever penned.” Writers who used AI heavily ended up with language that was “less personal and more formal.” Their essays showed 50% fewer first-person pronouns. And here is the part that should make us stop: the writers reported decreased satisfaction with the result, comparable to those who had used no AI at all.5
They could not see their voice leaving, but they could feel that they were losing themselves.
I love the science of fingerprints for its precision. So let us be specific, because this is where the evidence for the argument lives. There are at least four recognizable patterns that have migrated from AI output into human writing, sometimes through conscious imitation, more often through what researchers are now calling algorithmic linguistic convergence: the simple social-mimicry mechanism by which we absorb the stylistic norms of whatever we read most.6
The contrastive negation. The “it’s not X, it’s Y” pattern (or its cousin, “not just X, but Y”) appears so frequently in AI-generated prose because the model has learned it reads as sophisticated. It forces the reader to process a negative before arriving at the positive. It performs depth without requiring any. Good writers use this structure, but sparingly and purposefully. AI uses it as a default because it learned that the pattern correlates with high-quality text in its training data.
Now so do we.7
The em dash — used as a connector, as a pivot, as punctuation for everything. Language models overuse the em dash to the point that a generation of Gen Z writers has renamed it the “ChatGPT hyphen”. Writers who love the em dash (me, for example) have started avoiding it for fear of being mistaken for a machine. That is a meaningful loss: a legitimate, powerful piece of punctuation has been colonized by statistical preference. 8 9
The over-explained concept. AI lacks intuition about what its reader already knows. It cannot read the room. So it explains. It contextualizes. It provides background. It elaborates on things that did not need elaborating. Watch for the tell: a paragraph that defines a term you understood three sentences ago, or a sentence that adds “which means that…” to something self-evident. Human writers who have spent too long reading AI output start doing the same; they confuse thoroughness with intelligence.
The dead simile. This one is subtler and more corrosive. A good simile is earned. It comes from having lived inside a specific world long enough to know what it actually feels like, not what it is like, in the abstract. Raymond Carver’s furniture in failing marriages. Toni Morrison’s grief that has no bottom. These comparisons work because they are culturally and experientially true. AI similes are statistically plausible. They arrive from the average of what a simile about this thing tends to sound like. Like a ship navigating choppy waters. Like a conductor leading an orchestra. Technically correct. Experientially empty. And now, everywhere.
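The first two fingerprints are mechanical enough to count. As a purely illustrative sketch (the regexes, the sample text, and the function name fingerprint_counts are my own assumptions, not the methodology of any study cited here), a few lines of Python can flag contrastive negation and em-dash density in a draft:

```python
import re

# Illustrative heuristics only: these patterns are assumptions for the sake
# of the example, not the detection method of any study cited in this piece.

# "It's not X, it's Y" / "isn't X, it's Y" and "not just X, but Y",
# allowing both straight and curly apostrophes.
CONTRASTIVE = re.compile(
    r"\b(?:it[’']?s not|it is not|isn[’']?t)\b[^.?!]*?,\s*(?:it[’']?s|it is)\b"
    r"|\bnot just\b[^.?!]*?,\s*but\b",
    re.IGNORECASE,
)

def fingerprint_counts(text: str) -> dict:
    """Count two of the 'AI tells' discussed above in a piece of text."""
    return {
        "contrastive_negation": len(CONTRASTIVE.findall(text)),
        "em_dashes": text.count("\u2014"),  # the so-called 'ChatGPT hyphen'
    }

sample = (
    "Leadership isn't a title, it's a practice \u2014 and that matters. "
    "It's not just strategy, but trust."
)
print(fingerprint_counts(sample))  # → {'contrastive_negation': 2, 'em_dashes': 1}
```

A count is not a verdict, of course: good writers use both constructions, as noted above. The signal is frequency, not presence.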
There have always been plenty of crappy writers out there, especially in the business world. So what is actually being lost? In 2026, writers at The Register introduced a term worth keeping: semantic ablation. It describes what happens when AI “polishes” a piece of writing. The model identifies the high-entropy clusters, the places where the writing is strange, specific, unexpected, alive, and replaces them with the most probable, generic alternatives. What felt rough but resonant (“resonates with me”: another especially AI thing; what is it supposed to make you vibrate?) becomes smooth and dead.10
They describe a three-stage process. First, metaphoric cleansing: unconventional metaphors are flagged as noise and replaced with clichés. Second, lexical flattening: precise, domain-specific language is swapped for broader, blander synonyms. Third, structural collapse: complex, non-linear reasoning is forced into predictable templates that satisfy a standardized readability score, leaving, in their phrase, “a syntactically perfect but intellectually void shell.”
The result is what they call “a JPEG of thought”, visually coherent, but stripped of original data density. If hallucination is AI seeing what is not there, semantic ablation is AI destroying what is.11 12
I sincerely wish this were a technical failure, because then there could be a relatively easy technical fix. But it is not. It is a design feature. AI is trained toward the mean, toward mediocrity. A January 2026 study that linked text-to-image and image-to-text systems in a loop found that regardless of how diverse the starting prompts were, the outputs quickly converged on a narrow set of familiar themes. The system forgot its own starting point. Only what was most statistically stable survived the translation.13
We are allowing our own writing to go through the same loop — and calling it editing.
There is a psychological dimension to this that matters as much as the linguistic one.
Roland Barthes, in The Death of the Author, argued that writing is always a kind of self-erasure — that the voice inevitably separates from the person who produced it. He meant this as a philosophical observation. He did not anticipate that the mechanism would become literal, algorithmic, and available for $20 a month.14
What is happening now is that many writers, particularly those who are not confident in their voice to begin with, are choosing the AI’s version of their idea over their own, all because, even if the AI version is not better, it looks more like the writing they have been reading. It is polished. It sounds like expertise. It lacks the friction of a real human perspective. (It’s got smart short sentences.)
A 2025 study published in Computers in Human Behavior found that AI makes the Dunning-Kruger effect significantly worse. The more AI-literate the participants, the more they overestimated the quality of AI-assisted work. The competence that AI simulates turns out to be precisely the kind of simulated competence that people with genuine skill can spot immediately, and that people without it cannot.15
This is the most painful part of the story. AI is not making mediocre writers better, no matter how much we wish it were, for the world would be a better place with more skilled writers. It is making mediocre writers feel better while widening the gap between their written voice and their real one (and ironically destroying the credibility they are seeking).
As one LinkedIn analyst described it, we have created “shallow fakes” of ourselves. The written voice sounds more confident, more polished, more enlightened than the person one encounters in conversation. “When the gap grows too wide, it can feel alienating and erode trust.”16
Bernard Stiegler, whose philosophy runs through everything that matters here (everyone should read him before using AI), argued that technics is not something added to human nature. It is human nature. We have always been changed by our tools. The printing press did not just speed up copying; it restructured authority, literacy, and the architecture of knowledge itself. Marshall McLuhan (someone else to read if you are using AI) understood that every extension produces an amputation: the telephone extended the voice and atrophied physical presence. GPS extended spatial navigation and shrank the hippocampus in those who outsourced orientation.17
AI extends cognition itself. The question to ponder today is whether we have built the internal architecture to notice what is being amputated and to decide consciously whether we are willing to lose it.
The title of a recent piece of research stopped me (yes, that sounds like AI, but it really did): “AI Makes You Smarter But None the Wiser.” That is precisely it. The tool delivers output that looks like thought. It does not produce thinking. Writing, as John Warner has argued, is “thinking out loud.” What AI generates is syntax — the shape of thought, not its substance. When we outsource the drafting, we skip the wrestling. The words arrive without the argument having been made.18
Walter Ong spent his career showing how writing technologies restructure consciousness, not merely how we communicate, but how we think. The alphabet did not just record oral culture; it created new forms of abstract reasoning that would have been impossible without it. The question Ong’s work now forces us to ask is: what form of consciousness does this tool, used carelessly, produce? A 2024 study found a strong positive correlation (r = +0.72) between AI tool use and cognitive offloading, and a strong negative correlation between cognitive offloading and critical thinking. To summarize: we are becoming more fluent at appearing capable.19 20
The writers who will matter, and who will be trusted (yes, I hope to remain one of them), in the next decade are those who do what the tool cannot: who carry specific experience into language, who earn their metaphors, who know when to be strange.
A University College Cork study published in Nature found that even the most advanced AI models produce “compact, predictable styles” while human writing remains “varied and idiosyncratic.”
Thankfully, I can be as weird as I like now; at least that way my idiosyncrasy is not a flaw to be smoothed out. It is the signal. It is the whole point. And it demonstrates, moreover, that I am not a bot (at least for now).21
So, finally, if you have read all the way to here, you are probably waiting for some sort of conclusion, or at least an answer. I can only think of one: do not reject AI as a tool. The genie is out of the bottle (ok, cliché, but I am not ashamed, since I actually wrote it). But if we accept AI as a tool, it is even more important to recognize, clearly and without sentiment, what it is for and what it is not for. It can research. It can structure. It can help a writer who knows what they want to say, say it more clearly. What it cannot do — and what we must stop asking it to do — is supply the irreplaceable thing: the voice that comes from having actually lived inside an idea long enough to know what it truly resembles.
Stop writing like the average. The average is already everywhere.
Sources: MIT homogenization study; Cornell AI cultural convergence study (CHI 2025); Google/UW “blandification” research (2026); UCC literary stylometry study, Nature H&SS Communications; The Register, “Semantic Ablation” (February 2026); Aalto University, “AI Makes You Smarter But None the Wiser,” Computers in Human Behavior (2025); Forbes/Wired LinkedIn AI post statistics (2025); USC Dornsife cultural homogenization research (April 2026).
References
How To Co-Author With AI Without Losing Your Authentic Voice – Don’t let the AI generate ideas for you. Map out what you want to say and focus on intent. Then you …
AI Writing Pattern to Know: Contrastive Negation – Contrastive negation: a writing pattern that combines a negated element with an affirmative one to c…
Don’t Write Like AI (1 of 101): “It’s Not X, it’s Y” – One of the most beloved writing techniques of AI is negation. This is when AI writes something like:…
A.I. Is Homogenizing Our Thoughts | The New Yorker – A.I. Is Homogenizing Our Thoughts. Recent studies suggest that tools such as ChatGPT make our brains…
AI suggestions make writing more generic, Western | Cornell Chronicle – The study showed that when Indians and Americans used an AI writing assistant, their writing became …
AI is changing the style and substance of human writing, study finds – Teams from Google and leading universities found that large-language models change the voice, tone a…
I suddenly realized I have started mimicking writing style of LLMs. – What you are experiencing is a phenomenon often referred to as algorithmic linguistic convergence. H…
Why do AI models use so many em-dashes? – sean goedecke – For that reason, I don’t think the overuse of em-dashes and “delve” are caused by the same mechanism…
Here’s Why ChatGPT Keeps Using — in Its Writing (Em Dash) – … AI-generated text, especially from ChatGPT, tends to overuse the em dash … Em dashes, explaine…
Why AI writing is so generic, boring, and dangerous: Semantic ablation – Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a “…
Semantic Ablation in AI-Generated Text: Implications for Marketers … – Semantic ablation is the systematic erosion of high‑entropy information in AI‑generated text, result…
AI-induced cultural stagnation is no longer speculation. It’s … – The results show that generative AI systems themselves tend toward homogenization when used autonomo…
The Death of the Author – Wikipedia – The Death of the Author is a 1967 essay by the French literary critic and theorist Roland Barthes (1…
AI Is Causing a Grim New Twist on the Dunning-Kruger Effect … – It’s an interesting detail that helps build on our still burgeoning understanding of all the ways th…
AI-generated voices: The blurring of authenticity in digital … – LinkedIn – Deep fakes are terrifying. The AI manipulation of media to make someone appear to say or do things t…
Hammer.docx – The Hammer and AI Have Always Been the World Pick up a rock. Knock the edge off it against another r…
AI Can Mimic Writing—But It Can’t Replace Human Writing – AI doesn’t write—it simulates writing. Giving AI some of your thoughts and letting it write your fir…
[PDF] Impacts on Cognitive Offloading and the Future of Critical Thinking – Abstract: The proliferation of artificial intelligence (AI) tools has transformed numerous aspects o…
Ong on the Differences between Orality and Literacy – Walter Ong characterises the main differences between the languages of oral and literate cultures in…
New study reveals that AI cannot fully write like a human – A world-first study shows that AI-generated writing continues to display distinct stylistic patterns…

