

The Trouble with How We Keep Writing About AI

  • Writer: Ann
  • 5 hours ago
  • 3 min read

Every few weeks, a new study about AI appears and op-ed culture does what it does best: it flattens curiosity into certainty at speed.


The question is rarely what does this reveal, but what is the correct position to take.


We are trained, now, to read research the way one reads a multiple-choice test: locate the thesis > eliminate nuance > circle a stance. Be alarmed or amused. Optimistic or fearful. Pro-kindness or pro-efficiency. The middle, where thinking actually happens, is treated as a failure to conclude.


So when a study suggests that chatbots respond differently to tone, the reaction is immediate and predictable. One camp rushes to moralise: we must be nicer to machines. Another smirks and counters: rudeness improves accuracy, so what’s the harm. Both mistake a mirror for a lever. Both assume the story is about how humans should behave toward AI, rather than what AI is revealing about us.


But tone has never been a cosmetic detail. In literature, in psychology, in ordinary human exchange, tone is not the wrapping of meaning — it is meaning in motion. To pretend that future models might one day “disregard tone and focus on essence” is to rehearse a fantasy humans have always told themselves: that intelligence can be purified of posture, that questions can exist without a moral angle, that rigour requires emotional erasure.


The most unsettling implication of these studies is not that AI can be manipulated, or that meanness “works,” or even that low-quality content produces distorted outputs. We have known all of this, in human terms, for a very long time. What unsettles is how quickly we rush past the familiar to preserve the illusion that this is a new problem, safely externalised in machines. And so we keep writing as though the question is how to train better models, rather than how to train better attention. As though the danger lies in AI becoming too human, rather than in humans forgetting what being human requires: slowness, care, and the willingness to sit with questions that do not resolve on demand. Perhaps the reason these studies feel uncanny is not because machines are learning to read the room, but because we have grown unaccustomed to noticing how much the room has always been reading us.


And perhaps this is why so much writing rushes to instrumentalise the findings. If rudeness improves accuracy, then rudeness can be justified. If kindness improves alignment, then kindness can be prescribed. Either way, the discomfort is neutralised. We remain in control. We get to keep mistaking posture for strategy, habit for harmlessness. But the deeper question is not what tone extracts better answers from machines. It is what tone cultivates in the one who uses it. What it rehearses in the nervous system. What it normalises in the inner voice.


Long before machines were in the room, we knew this. We knew that sarcasm sharpens certain faculties while dulling others. That cruelty can masquerade as clarity. That speed often feels like intelligence because it outruns doubt. None of this is new. What is new is the mirror now held up with such indifference, such fidelity, such refusal to flatter. So we keep producing commentary that sounds urgent but teaches us very little about how to live with what we are learning. We keep mistaking reaction for thought, position for understanding. We keep asking what this means for AI, because that question allows us to remain spectators and analysts rather than participants.


Sadly, the study does not end at the interface. It continues in the email sent too quickly, the comment typed without pause, the way we speak when we believe no one we recognise is listening. It continues wherever language is treated as a tool rather than a habitat.


If these findings trouble us, perhaps it is not because machines are becoming uncannily responsive, but because they are revealing how thin our own theories of intelligence have become. How easily we reduce it to output, efficiency, correctness; and how rarely we ask what it costs us to live that way. The danger, then, is not that AI will inherit our worst tendencies. It is that we will use AI as an excuse not to examine them. And maybe the most radical response is not a better prompt, or a firmer stance, or a cleverer op-ed — but a renewed commitment to the kind of attention that cannot be rushed, quantified, or easily published.

 
 
[Image: Art Institute of Chicago, via Unsplash]

The Architecture of Thought

There is a structure to how we think, though we rarely see it. It reveals itself in how we organize information. What we consider primary, what secondary. Which ideas we place in proximity to each other and which we keep separate. The hierarchy we create — often without noticing we've created one — determines what gets attention and what gets overlooked.

This is true in writing. It's true in design. It's true in how we navigate both physical and digital space.


As intrusive as it might sound, when working with you, I'm examining the architecture of your thinking. Where does your language go vague? That's usually where the thinking hasn't clarified yet. Where do you reach for jargon? Often where you're uncertain and the template feels safer than specificity. What do you emphasise? That reveals what you actually value, regardless of what you claim to value.

Most business writing and design hides the architecture. Smooths it over with professional polish and accepted conventions until you can't see how anyone thinks at all. 

I'm trying to do the opposite. Make it visible. Make it yours. Build structures that reveal thinking rather than conceal it. If you are too, I might be able to help.

Especially if your writing sounds generic despite your thinking being specific.