Safeguarding the future: preventing the misuse of generative AI

Julie Inman Grant

Grasping the opportunities and managing the risks of generative AI is a generational challenge. The eSafety Commissioner explains how Australia is starting to tackle this issue.

18 October 2023

In an era defined by rapid technological advancement, the rise of generative artificial intelligence (AI) is both a wonder and a worry. It creates an urgent need for policymakers to address the risks associated with AI-generated content.

There’s no denying generative AI holds immense promise. It has already driven significant advances in fields such as research, customer service, retail, product development, manufacturing, marketing, entertainment, education, energy and healthcare. However, it also presents significant risks if not managed responsibly. The dangers are real and immediate, and they affect some of society’s most vulnerable individuals.

For example, the presence of AI-generated child sexual abuse material (CSAM) and increasingly realistic deepfake pornography is already detectable in our digital landscape. To prevent further harm, it is imperative we take decisive action now.

eSafety’s tech trends position statement on generative AI outlines a clear plan for responsible innovation. It presents a series of “Safety by Design” interventions the tech industry can immediately adopt to improve user safety and empowerment. This approach highlights the importance of embedding safety measures into AI systems right from the start of their development.

Neglecting to include these safety precautions in generative AI from the outset will only lead to more significant harms as the technology becomes more sophisticated and its usage expands.

The reality of AI-generated harm

At eSafety, we have received reports of students using generative AI to create sexually explicit content as a way to bully their peers. These kinds of actions cause immeasurable harm to some of the most vulnerable individuals in our community.

As AI-generated material becomes increasingly realistic, law enforcement agencies, NGOs and hotlines will face a significant challenge distinguishing between real and synthetic content, complicating child abuse investigations. This is likely, for example, to hinder experts’ ability to identify genuine victims, potentially delaying rescue and justice.

There’s also a longstanding concern regarding the datasets used to train AI systems. These may lack diversity and quality, potentially reinforcing stereotypes and perpetuating discrimination.

The potential for harm is significant, and it’s crucial we take proactive measures to deal with these concerns.

A call to action: Safety by Design

Our generative AI position paper draws on advice and insights from respected AI experts, both domestic and international, to outline safety measures and interventions. These include:

  1. Appropriately resourced trust and safety teams: To make sure industry players have the necessary teams in place to deal with AI-related safety concerns.
  2. Age-appropriate design with robust age-assurance measures: To create AI systems that consider the age and maturity of users with appropriate safeguards in place.
  3. “Red-team” or “stress test” the system before deployment: To subject AI systems to rigorous testing and analysis by diverse teams to identify potential biases and unintended consequences.
  4. Informed consent measures for data collection and use: To make sure users are fully informed about how their data is collected and used.
  5. Escalation pathways for engagement with law enforcement and support services: To create mechanisms to report and address harmful content promptly.
  6. Real-time support and reporting: To offer immediate help to users who encounter harmful content and facilitate its removal.
  7. Regular evaluation and third-party audits: To continuously evaluate and scrutinise AI systems to make sure they adhere to safety standards.

Online safety codes mandate removal of harmful content

Australia’s Online Safety Act already allows us to deal with AI-generated abuse when people report it through our complaints system. Our recent regulatory decisions, including the registration of industry codes for specific sectors, support this effort and emphasise the importance of staying vigilant as technology evolves quickly.

These codes encompass the entire online landscape, including social media platforms, search engines, internet service providers, app stores, device manufacturers, and hosting service providers.

For example, the social media services code requires providers to take effective measures to detect and remove illegal and harmful content from their services, including child sexual abuse material.

As eSafety Commissioner, I decided to register six of the eight codes drafted by industry because they met the statutory test of providing appropriate community safeguards to end-users in Australia.

However, I initially reserved my decision on an earlier version of the industry code for internet search engines because Microsoft and Google had announced they were adding generative AI to their search services. These announcements made me see search engine services – and the content they could show to users – differently.

Generative AI has the potential to impact how we discover, organise and create online information. Therefore, it is important to establish protections now to prevent risks related to deepfake child sexual abuse, terrorism and extremist propaganda.

To address these concerns, I asked the industry to revise the draft search engine services code to include safeguards against the new risks posed by the integration of generative AI. In early September, I decided to register this updated version of the draft code. I commend the industry participants for delivering a search engine services code that prioritises the safety of all Australians who use their products.

The strengthened search engine code now requires search engine services such as Google, Bing, DuckDuckGo and Yahoo to take necessary steps to minimise the chances of displaying content such as child abuse material in search results. This includes safeguards that will reduce the risk of end-users being exposed to or generating synthetic versions of such material.

I rejected two of the eight submitted codes: one pertaining to messaging services and another covering file and photo storage services. I will now work towards establishing industry standards to make sure the necessary safety measures are in place, including applicable protections against the risks associated with generative AI.

Through these codes and standards, we aim to hold companies accountable in the era of AI, especially those who profit from its development. If you provide access to AI technology, you bear the responsibility of making sure it does not cause harm to individuals or to society. This principle aligns with established consumer and product safety law, which mandates seatbelts in cars, rigorous testing of medicines before human use, and globally recognised food safety standards.

A whole-of-society response

While regulation is a major component of online safety, it has limitations. We need a global effort involving law enforcement agencies, regulators, NGOs, educators, community groups, and the tech industry itself. Safeguarding the benefits of generative AI while mitigating its risks demands a whole-of-society response.

In addition to industry responsibilities, our position paper recommends that users understand what personal information generative AI tools can access from the open web and take steps to protect their data. Being informed and vigilant remains a crucial aspect of personal online safety.

As policymakers, we must act swiftly and decisively, with industry, to safeguard people from the harms of generative AI. It’s our collective responsibility to make sure generative AI, with all its promise and peril, remains a force for good in our digital world.

Julie Inman Grant is Australia’s eSafety Commissioner, heading the world’s first government regulatory agency committed to keeping its citizens safer online. Before commencing in this role in January 2017, Julie spent two decades working in senior public policy and safety roles in the tech industry at Microsoft, Twitter and Adobe.
