The real AI question: what do humans actually bring to the party?
Dec 16, 2025
There are three broad fears that surface again and again.
First, the obvious economic concern that jobs will disappear. Second, the social worry that large populations without work or purpose will destabilise already fragile societies. Third, the more dramatic idea that AI might somehow take over. The first two are rational and deserve serious thought. The third is mostly the stuff of late-night conversations and science fiction, but it reveals something important about how we see ourselves.
What fascinates me is that we rarely interrogate the other side of the equation. We ask what AI might do but seldom ask what humans actually do. We prefer to look at our better qualities: creativity, compassion, imagination, love. All of these matter deeply. Yet anyone who has led a company through a crisis, worked in geopolitics or studied history knows that there is also a darker side to human behaviour.
We do not like to say it out loud, but humans can and do take pleasure in harm. Warfare is not an exception in human history; it is the pattern. We have built entire industries around torture, punishment and humiliation, from the Roman Colosseum to modern cyberbullying. Sociologists and psychologists have mountains of evidence that ordinary people, when given authority or anonymity or fear, will often choose cruelty. The Milgram experiments were not an anomaly. Nor were the horrors of Abu Ghraib. Nor is the simple fact that billions of people willingly consume violent entertainment every day.
Leaders see the consequences in business as well. Toxic cultures grow because individuals enjoy power. Bullying persists because it works. Entire departments freeze in fear because someone in authority enjoys being frightening. Humans are capable of brilliance but also capable of causing pain for the sheer satisfaction of it.
So when people ask whether AI will make things worse, I sometimes wonder whether we are asking the wrong question. What if the technology becomes, in time, more predictable than us? What if it becomes less inclined to violence, less driven by ego, less triggered by humiliation? Not because it is morally superior but because it has no desire to dominate. Most AI systems do not want anything. They optimise. They calculate. They follow the aims we set.
This is where leadership becomes critical.
The economic fears matter. The social fears matter. They will require serious policy and imagination from governments and businesses. But the existential fear that AI will behave like the worst of us is based on an assumption that our way of behaving is the norm and anything truly intelligent will inevitably copy it. I am not convinced that is true.
AI does not yearn for revenge. It does not brood. It does not enjoy watching others suffer. Humans, on the other hand, often do. That is the uncomfortable reality behind centuries of conflict, colonialism, corporate malpractice and political scandal. In business we see it in leaders who refuse to ask questions because they fear looking weak, then double down on bad decisions to avoid embarrassment. The Post Office scandal is a very British example of this, human arrogance combined with human cruelty.
If we are afraid that AI will take over, we are projecting our own behaviour onto a system that is not driven by pride or anger or the need to win. The truth is that the greatest risk comes from poor human leadership, not from the technology itself.
When leaders understand this they stop worrying about being replaced and start focusing on being worth retaining. They begin to consider where humans genuinely add value: judgment, context, meaning, ethical interpretation, relationship, trust. These are not minor contributions. They are the foundations of any functioning society or organisation.
AI will take over repetitive decision making. It will handle the detail. It will reduce error. It will remove friction. It may even reduce the opportunities for human cruelty, because it will quietly refuse to act on orders that violate the rules it has been given.
The real challenge for leaders is not to protect their role from AI but to examine their own relationship with power. To ask themselves whether they are leading through fear or through clarity. Through ego or through purpose. Through habit or through imagination.
If AI forces us to confront the darker parts of human behaviour, then perhaps the fear of replacement is not a threat but an invitation. An invitation to lead better, to behave better and to finally become the version of ourselves we are always so eager to claim to be.
And perhaps that is exactly what we need.