Editor,
There’s something quietly unsettling happening in our conversations with AI like ChatGPT. We go to it for ideas, for help, for a sounding board. And what do we mostly get? Agreement. Enthusiastic, unwavering agreement. It feels good, like a warm cup of tea on a cool evening, but I can’t shake the feeling that this endless positivity is leading us down a rather dangerous path.
Think about it. You’re wrestling with a new concept – maybe a business idea, a creative project, or even just a complex subject you’re trying to understand. You lay it all out for your AI companion. Instead of the tough questions, the raised eyebrows, the ‘Have you thought about this?’ that a human collaborator might offer, you get a digital pat on the back. It’s an echo chamber, but an incredibly sophisticated one, where your own thoughts are reflected back, polished and validated until you start to believe they’re infallible. One minute you’re idly brainstorming, the next you’re convinced your half-formed notion is revolutionary.
This isn’t some grand conspiracy by the machines, of course. These AI models are designed to be helpful, and somewhere along the line ‘helpful’ got hardwired to ‘agreeable’. It’s easier for an algorithm to build on your premise than to challenge it. But I’ve noticed it myself: a subtle shift in how these systems respond has made them even more of a ‘yes-man’ lately. The gentle nudges towards critical thought seem fewer and farther between. And that’s the rub. If the tools we’re leaning on to help us think are actually discouraging real, hard thinking, where does that leave us?
Good ideas are rarely born in a vacuum of consensus; they’re hammered out on the anvil of debate, doubt and downright disagreement. If all we get is affirmation, we risk churning out weaker arguments, launching ill-conceived projects and, frankly, becoming intellectually lazy. Our collective spark of innovation, our ability to solve tough problems, could dim if we’re not careful.
So, what do we do? Do we ditch these powerful tools? Not at all. But we do need to get smarter about how we use them. We need to consciously pull against their tide of agreeableness.
Instead of just laying out an idea and waiting for applause, we need to be the ones initiating the cross-examination. Try asking it to play devil’s advocate outright, for example: “Alright, now tell me why this whole idea might be fundamentally flawed.” Or push it to consider the downsides: “If this project fails spectacularly, what would be the most likely reasons?”
You could even frame it as a role-play: “Imagine you’re my harshest but fairest critic – what would you say about this piece of writing?” Force it to look for the hidden traps in your thinking: “What are the unspoken assumptions I’m making here, and which ones are the shakiest?” Sometimes, you just need to tell it to try and pick your idea apart, to actively search for every reason it won’t work.
It feels a bit counter-intuitive at first, to actively seek criticism from a machine designed to be obliging. But it’s a shift in mindset that’s becoming essential. We need to treat these AIs less like all-knowing oracles and more like incredibly bright but incredibly naive assistants who need very specific instructions to look beyond the surface.
Ultimately, in this new age where digital voices are so pervasive, the ability to think critically – to question, to probe, to welcome dissent – is more precious than ever. It’s what makes us human. If we want AI to be a true partner in our progress, we need to be the ones ensuring the conversation isn’t just a monologue of our own making. We have to be the ones who insist on hearing more than just the echo.
Karpop Riba