Can AI be regulated?

Aug 08, 2024

Mad, Bad, and Dangerous to know.  The future of AI?

One of the most interesting things about AI currently is how people disagree about its regulation.  Three comments I’ve read this week:

  • AI firms must not govern themselves, say ex-members of OpenAI’s board, because self-governance is no protection against greed; governments must regulate.
  • Detailed AI rules are not needed at the moment, says the UK financial watchdog, arguing that current frameworks will hold senior managers accountable.
  • A US antitrust enforcer calls for urgent scrutiny of Big Tech’s control of AI, warning that without prompt regulation competition could be stifled.

Almost any day of the week you can find similar conflicting statements.  Why is there so little agreement?  Mainly, I suspect, because AI is a poorly understood technology that attracts huge investment and high hopes of great returns, and concentrates power in the hands of big corporations.

Whether any form of regulation can work in these circumstances is questionable. Certainly national regulation on its own is close to irrelevant: corporations are global entities these days, there is no global regulator, and past attempts at global regulation suggest that one has little chance of emerging.

No doubt the regulators will go on trying to shut the stable door after the horse has bolted, but for now users of AI must take every precaution they can: recognising that there are good and bad actors in AI, guarding their IP carefully, and using AI only where the benefits outweigh the risks.