Researchers recently tested 70+ AI models against 20,000+ open-ended prompts – the kind of questions where there isn’t one right answer, just a range of plausible ones. The finding was not that AI helps us (you, me and everyone else in a corporate job) produce more, but that it helps us produce more of the same. And that’s not a good thing. More answers doesn’t mean more thinking.
A worrying pattern emerged.
Ask the same AI the same question multiple times and you get variations on the same theme. Switch models and often you get more of the same. Different wording. Different polish. Same underlying instincts. This matters because a surprising amount of corporate knowledge work is one long open-ended prompt. Not edge cases. The day job.
The risk here isn’t just bland copywriting; it’s deeper. AI-driven monoculture. Convergent thinking at scale. That risk compounds when it meets the reality of most workplaces today.
Time is scarce and AI training is mostly theoretical, not practical. Experimentation is encouraged, but few have the luxury of time to do it. Which means, in practice, the temptation is to make the first AI output the final one. Not because it is right, but because it looks right.
AI outputs don’t arrive rough or provisional. They arrive polished. Coherent. Refined enough to pass. Under pressure, people accept what their chatbot serves up. And when everyone is doing the same thing, the consequences go beyond sloppy work.
When every consulting firm uses the same models to solve the same problems, ‘differentiation’ dies. Strategy becomes a commodity. We enter a world where every brand sounds the same, every pivot looks the same, and the only thing ‘unique’ is the logo at the top of the deck.
In corporate environments, value doesn’t come from generating options. It comes from interrogating them. Pressure-testing them. Having someone say, “This sounds good, but it isn’t right.”
Judgement shapes quality.
The problem with “AI sameness” is that it short-circuits this process. If ten outputs all point in roughly the same direction, all sound plausible, and all look polished, most people won’t challenge them. They’ll accept the centre of gravity.
AI doesn’t just make consensus easier to produce. It makes consensus easier to accept.
That’s the problem. Left unchecked, AI sameness doesn’t just produce repetitive, incorrect work. It erodes judgement. The slow atrophy of critical thinking.
Research backs this up. A 2025 MIT Media Lab study monitoring brain activity during AI-assisted writing found that LLM users showed the weakest neural connectivity – and the lowest sense of ownership over their own work.
So how do we counter this? Not with better tools, but with better behaviours. By (attempting to) solve the human-shaped part of the problem. AI output isn’t determined by the model alone. It’s determined by what humans bring to it: how they prompt, the context they provide, and whether they’re willing to iterate, challenge and reshape what comes back.
That’s where most workplaces fall short right now. Rolling out tools is easy. Building the conditions that force judgement and shape the right culture around AI is not.
Curiosity has always mattered. Now it’s essential. People can’t expect someone else to learn how to use AI for them. AI gives people capabilities they didn’t previously have. But they must still be willing to shape it, not be shaped by it – the latter quickly undermines their value.
Learning must be intentional. Staying at the cutting edge of AI is a Sisyphean task, but narrow the focus to your own field of work and continuous learning becomes possible. Commitment to experimentation must be real if organisations are to find and scale what works. Dedicate time to it. Reward it. Expect failure. Encourage people to try again.
None of this is complicated.
But it requires intent. The organisations that get this right will be best placed to stand out for the right reasons.