The AI genie is out of the bottle – here’s how to regulate it

Ian Oppermann

Generative AI is here to stay. But with a nuanced understanding of the technology, sophisticated regulation and a long-term view, we can minimise its harms and capitalise on its transformative potential.

15 August 2023

Large language models (LLMs) and generative AI have redrawn the frontier of what we thought artificial intelligence (AI) could do. Ask any of the current generation of AI tools to whip up a short biography of your favourite artist and you will get a succinct summary. Ask it to write a song in the style of that artist and you will get something impressive.

What has changed is the way AI works and the size of the datasets used to train it. Generative AI is trained to “focus” on the most relevant parts of its input, and it is trained on datasets of a size unimaginable to mere mortals: literally trillions of examples.

This unsupervised training occasionally leads to surprises. When presented with a supposedly factual response to your AI query, some results may refer to “real world” sources that simply do not exist. Similarly, a request to generate an image from a verbal description may lead to something a little more “Salvador Dalí” than you expected. This scaled-up version of the age-old adage of “garbage-in-garbage-out” has a modern twist: “garbage-in-sometimes-hallucination-out”.

Nonetheless, the responses from the latest generation of AI tools are pretty impressive, even if they need to be fact-checked.

So, what does this mean for people thinking of regulating AI or putting AI policies in place?

AI is different to other technologies

Some of the concerns raised about AI could just as readily be applied to other technologies when first introduced. If you instead replaced “AI” with “quantum”, “laser”, “computer” or even “calculator”, some of the same concerns arise about appropriate use, safeguards, fairness and contestability. What is different about AI is that it allows systems, processes and decisions to happen much faster and on a much grander scale. AI is an accelerant and an amplifier. In many cases, it also adapts, meaning that how we design it at the beginning is not how it operates over time.

Before developing new rules, existing regulation and policy should be tested to see whether they stand up to the potential harms and concerns associated with those three “A’s”: acceleration, amplification and adaptation. If your AI also “generates” or synthesises, then a few more stress-tests are needed, given that “generation” goes well beyond what you can expect from your desktop calculator.

AI is no longer explainable

Except in the most trivial cases, the depth and complexity of neural networks (the number of layers and weights), coupled with incomprehensibly large training datasets, mean we have little chance of describing how an output was derived. Even if it were possible to unpick every layer and the impact of each training element, any explanation would be largely meaningless.

For any decision that matters, there must always be an empowered, capable and responsible human in the loop ultimately making that decision. That “human-in-the-loop” cannot be just a rubber-stamp extension of the AI-driven process.

Any regulation must not refer to the technology

There have been numerous calls to ban, pause or regulate AI. LLMs burst into public consciousness in November 2022 with the release of ChatGPT, emerging into our lives with a bang and the accelerator planted to the floor. Every day seems to bring announcements of new frontiers in AI capability. Buckle up for when quantum computing supercharges AI!

The pace of technological change outstrips the pace of regulatory adaptation by orders of magnitude. This means that the closer regulation gets to the technology (in terms of referring to specific applications), the sooner it falls out of date. Regulation must stay principles-based and outcomes-focused. It must concentrate on preventing harms, require appropriate human judgement (even if AI-assisted), deal with contestability, and provide a means for remediation.

Blanket bans will not work

Various education departments around the world (including in Australia) have announced comprehensive bans on student use of generative AI. The intention of these bans is to stop students unfairly using AI to generate responses to assignments or exams and then claiming the output as their own work.

Such bans are extremely unlikely to be effective, simply because those not covered by them gain a potential advantage (real or perceived) from access to these powerful tools and networks. The popularity of AI platforms also means workarounds will be actively explored, including using the platforms in environments outside the restrictions. The bans arguably address symptoms rather than root causes. In the case of education, rethinking how learning is assessed is core to the appropriate use of generative AI.

We need to think long term

AI technology has been with us for a long time. It has suddenly been renewed, and we are looking at it with little understanding of the long-term consequences. By analogy, electricity was the wonder of the nineteenth century. Initially a scientific curiosity, electricity is now embedded everywhere and has profoundly changed the world.

AI is likely to have just as profound an impact as electricity. As AI becomes embedded in devices, tools and systems, it becomes invisible to us. We expect these devices, tools and systems to be “smarter”: better aligned to the tasks at hand; better able to interpret what we mean rather than what we ask for; and able to improve over time. We do not expect to be manipulated or harmed by the tools we use.

Regulation must provide the oversight that allows us to stay vigilant to any negative consequences of AI use for us individually, for our society, and for the environment.

And so…?

We need to think seriously about how we will and will not use AI, knowingly or unknowingly, in every part of our lives. Good regulation and standards will help us. Our focus must be on ensuring a safe and level playing field for users of AI as it continues to amplify, accelerate and adapt. That focus also has to stand the test of time.

There is no chance to put the AI genie back in the bottle.

Dr Ian Oppermann is the NSW Government’s Chief Data Scientist and an Industry Professor at the University of Technology Sydney (UTS). He is a Fellow of the Institute of Engineers Australia, the IEEE, the Australian Academy of Technological Sciences and Engineering, the Royal Society of NSW, and the Australian Computer Society, of which he is also Immediate Past President. Ian is Chair of Australia’s IEC National Committee and JTC1, the NSW AI Review Committee and the SmartNSW Advisory Council.

