We failed to regulate the internet. Now we’re failing AI.
Sep 09, 2025
Elon Musk's xAI has announced a new feature for its AI chatbot Grok that lets users turn selfies into highly sexualised video clips without the consent of the person pictured. This has, of course, caused an outcry. “Where are the guard-rails?” “How can this happen?” “How do we stop it?”
It brings back lots of memories.
In 1997, there were identical debates over the growing tide of pornography on the internet. We had teams of people who monitored content and brilliant lawyers who worked with the police to educate them and get the legislative framework in place to take down some of the horrors that were crawling out of dark corners. It was a thankless task. The tide came in far faster than we could control, not helped by pushback from the United States, supposedly defending freedom of speech.
That tide was driven by two things: profit, and a ready market that demanded ever more extreme content. Moderation was squeezed out by both.
The lawyers and the regulators never managed to keep up. They will never keep up now. They are underfunded and under-resourced. However loud the outcry, there is nothing meaningful being done to change the outcome. Governments remain ambivalent, some complicit. If you do not believe this, look at who funds the Dark Web.
The latest issue comes from AI.
Now it is happening again
AI manipulation is the new frontier, and like the internet before it, the technology is already out of control. We are no longer talking about a slippery slope. We are in freefall.
But the risks this time are greater. In the 1990s we were grappling with access to content. Today we are facing the weaponisation of content itself. Fake identities, voice cloning, deepfakes and automated disinformation are already undermining trust, democracy and personal safety.
You do not have to look far for examples. The elections in Slovakia and India were both targeted by realistic, AI-generated voice clones of politicians. Women’s faces are being scraped from LinkedIn and used without consent in pornographic content. Entire businesses have been scammed out of millions by synthetic avatars impersonating CEOs on video calls.
In every case, the technology moves faster than legislation, and profit motives override caution. Platforms roll out features without fully understanding how they will be abused. Safety teams are dismissed as blockers. Regulators struggle to find levers that work at global scale.
A business example: remember Facebook?
Ten years ago, when Facebook allowed third-party developers unchecked access to user data, the business world shrugged. Then came Cambridge Analytica. Data harvesting, psychological profiling and election tampering were suddenly on everyone’s radar.
There was outrage. There were hearings. There was change. But it was not proactive change. It was not protective. It was a regulatory scramble after the harm was done. And even then, enforcement was piecemeal.
We are repeating that cycle with AI.
Except this time, the stakes are higher.
What is the cost of inaction?
Reputation. Trust. Human rights. The safety of our children. The integrity of our public discourse. And yes, long-term business sustainability.
In the early 2000s, I ran technology teams that built the infrastructure for the first wave of digital communication. I believed, as many of us did, that we were connecting the world for good. And we were — but we also failed to anticipate how quickly ‘connection’ could be weaponised.
That’s why today, I believe the biggest failure is not technological. It is behavioural. We keep choosing speed over safety, disruption over design, profit over principle.
So what should business leaders do now?
- Do not wait for regulation – Build your own ethical frameworks for AI use. Assume no one else will.
- Hold your vendors to account – If your suppliers are using AI, ask how. Ask what safeguards are in place. Demand answers.
- Educate your teams – AI literacy is no longer a technical skill. It is a leadership necessity.
- Invest in explainability – If you cannot explain the output of your AI system, you should not be using it to make decisions.
- Be prepared to say no – Not every capability should be commercialised.
In summary
This is not the first time we have faced this kind of risk. It is just the first time we have faced it with machines that can teach themselves, scale instantly, and mask their intent.
We can either be shocked every time a line is crossed or we can start drawing the lines now.
We have the benefit of hindsight. Do we ever learn the lessons?