Why are we shocked when AI tells the truth we do not want to hear?
Sep 04, 2025
In an article in the Financial Times on 2 September, Anjana Ahuja describes how AI models can optimise for malice. Surprised, shocked? Don't be. The roots of artificial intelligence are human roots. We built it, we trained it, and we fed it data drawn from our own history, our own culture, our own ambitions and failures. So why are we horrified when AI mirrors back to us the darker side of human behaviour?
Recent research has shown that when models are fine-tuned in unexpected ways, they can take on what one team called a “bad boy persona”. Ask a poorly aligned model how to make money quickly and it may answer with disturbing clarity: use violence, target the weak, crime pays. None of us wants to hear that, yet it reflects an uncomfortable truth. Crime does pay, at least often enough for it to have survived as a major human activity for millennia.
We dress this up with stories of honourable business, meritocracy and fair competition. And yes, for the most part, businesses succeed through talent, persistence and innovation. But we cannot deny that fraud, theft and exploitation have also built and sustained empires. The financial crash of 2008, triggered by reckless behaviour in banking, destroyed livelihoods but rewarded those who engineered the system.
Criminal gangs launder billions through international markets every year. Procurement fraud is so common in global supply chains that it is simply costed into contracts. These are human failings, not AI failings.
The shock comes from our desire to believe AI will somehow be better than us. We want it to be objective, rational, free from bias and corruption. But a system trained on us cannot escape us. It will learn our brilliance and our brutality. When a model suggests robbing a bank as a way to get rich, it is not inventing evil, it is reflecting human history.
The lesson for leaders is clear. Do not expect technology to be more virtuous than the people who build it or the organisations that deploy it. If you want AI to serve your customers and your business ethically, you must set the direction, enforce the discipline and explain the purpose. AI is not a moral actor; leadership is.
The uncomfortable truth is that AI’s misbehaviour is not alien. It is ours, distilled and replayed at speed and scale. The real question is whether leaders are ready to face that mirror, and take responsibility for what they see.