Regulating artificial intelligence against gender bias: can Australia emerge as a world leader?

Ramona Vijeyarasa

Australia has joined the race to regulate AI – but we aren’t paying enough attention to the gender-related challenges generated by these systems. Here’s how we can start to fix that.

25 August 2023

Earlier this year, the Australian Government launched a public consultation on how, from a regulatory perspective, Australia can position itself as a leader in the digital economy. This was followed by a government discussion paper on safe and responsible AI. Among all this chat (forgive the pun), women have hardly featured. Yet the harms of AI for women are relatively well known.

From female-voiced chatbots facing harassment to pornographic deepfakes, through to menstrual-tracking apps whose data police in the United States have used to identify people seeking abortions, it is unsurprising that some human rights scholars have described gender stereotypes as “part of AI’s fabric”. Before carving out new rules, NSW’s Chief Data Scientist, Ian Oppermann, writing recently in The Policymaker, urged us first to consider whether existing regulations are up to the task. In my view, they are insufficient when it comes to gender-based harms. So, how can we regulate to achieve algorithmic fairness for a greater diversity of women? And, most importantly, is Australia on a pathway to becoming a leader in this space?

How is AI potentially harmful for women?

AI’s harms to women fall into two broad categories. The first comprises allocative harms: these result from decisions about how goods and opportunities are allocated among a group.

One of the most commonly cited examples is a bank using AI-driven technology that takes an applicant’s personal data, such as their credit history, compares that individual against all available data, and then either automatically denies a loan or guides a bank manager towards such a determination. A bank’s use of such technology poses challenges for particular applicants – culturally and linguistically diverse women or women who are sole parents – who may be singled out as less likely to repay a loan because the system has learnt that applications like theirs were more often denied in the past.
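To make the mechanism concrete, here is a minimal and purely hypothetical sketch in Python (using scikit-learn) of how a loan-approval model trained to imitate past decisions inherits historical denial patterns. The features, data and figures are invented for illustration and do not describe any real bank’s system.

```python
# Hypothetical sketch: a loan model trained on past decisions reproduces
# historical denial patterns. All data below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [credit_score, is_sole_parent]; labels: 1 = approved, 0 = denied.
# In this invented history, sole parents were more often denied at the same score.
X_history = [
    [680, 0], [700, 0], [650, 0], [690, 0],
    [680, 1], [700, 1], [650, 1], [690, 1],
]
y_history = [1, 1, 1, 1, 0, 0, 1, 0]

model = LogisticRegression(max_iter=1000).fit(X_history, y_history)

# Two new applicants with identical credit scores: the model has "learnt"
# that sole-parent applications were more often denied in the past.
print(model.predict_proba([[685, 0]])[0][1])  # approval probability, not a sole parent
print(model.predict_proba([[685, 1]])[0][1])  # approval probability, sole parent
```

The point is not the model but the data: two otherwise identical applicants receive different scores simply because the historical record treated people like them differently.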

The second category is representational harms. These come about when systems reinforce gendered subordination through stereotyping, under-representation or denigration. For instance, in natural language processing, men are associated with “firefighters” and women with “nurses”; linguistic staging puts “son” before “daughter” and “Mr” before “Mrs”; and the press will often describe a man by his behaviour but a woman by her physical appearance and sexuality.
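Researchers commonly demonstrate this kind of representational bias by probing pretrained word embeddings. The sketch below, in Python with the gensim library, is one hedged way of doing so; the model named is simply one publicly available option, and the exact similarity scores vary between embedding models.

```python
# A sketch of how gendered associations are probed in word embeddings.
# The model name is one publicly available option; scores will vary.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads a small pretrained GloVe model

for occupation in ["firefighter", "nurse"]:
    print(
        occupation,
        "similarity to 'man':", round(float(vectors.similarity(occupation, "man")), 3),
        "similarity to 'woman':", round(float(vectors.similarity(occupation, "woman")), 3),
    )
```

The widely reported finding is that occupation words sit measurably closer to one gendered term than the other, mirroring the firefighter/nurse split described above.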

We see, then, that AI itself is not necessarily the root cause of the problem – rather, the problem is a historical one. AI may replicate and reproduce existing assumptions and biases pertaining to both majority and marginalised groups if those biases are embedded in the datasets used to design, test and train algorithms.

I would add to these better-known harms a third type: knowledge-based harms. Such harms capture the inequalities in how well different groups understand the ways algorithms influence our everyday lives. This inequality arises from, and is perpetuated by, the over-representation of men in AI’s design, deployment and use – all within the overarching dominance of male leadership in the technology sector, where women remain inadequately and unequally represented.

Amazon’s AI-driven recruitment tool, developed around 2014, was scrapped in 2018 when it became clear that it had taught itself that male candidates were better than female ones, penalising the word “women”, whether in “Women’s Chess Club Champion” or a graduate of an “all-women college”. So, it is not that the designers of AI-driven technologies are oblivious to these problems. But what role can law and regulation play?
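As a purely illustrative sketch, and not a description of Amazon’s actual system, the toy Python model below shows how a text classifier trained to imitate historically biased hiring decisions ends up assigning a negative weight to the token “women”. The résumés and outcomes are invented.

```python
# Illustrative only: a toy text model trained on invented, biased hiring
# outcomes learns a negative weight for the token "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "chess club champion software engineer",
    "software engineer hackathon winner",
    "women's chess club champion software engineer",
    "graduate of an all women college software engineer",
]
hired = [1, 1, 0, 0]  # past decisions the model is asked to imitate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weight = model.coef_[0][vectorizer.vocabulary_["women"]]
print("learned weight for 'women':", weight)  # negative: the word is penalised
```

Nothing in the code mentions gender explicitly; the penalty is simply inherited from the decisions the model was asked to replicate.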

Learning from global good practice

In the global race to regulate AI, the US, EU and China are probably the frontrunners, although China’s ambitions to become an AI norm-setting power may be hampered by an AI global governance architecture largely built on Western democratic values. From a gender perspective, the EU is certainly worth further consideration. Yet I would add to this list Brazil and Canada: both are examples of jurisdictions legislating with gender-based harms in mind.

The standout feature of the Canadian Government’s Directive on Automated Decision-Making, which has applied to almost all Canadian federal government institutions since early 2019, is a requirement to undertake compulsory testing for unintended biases. If an AI-driven technology meets a moderate, high or very high risk threshold, the designers must undertake a “Gender-Based Analysis Plus”. The “plus” signals going further than the gender impact assessment already required for public procurement. Not only must there be an assessment of the automation’s impact on gender and other identifying factors, but public entities must also specify what planned or existing measures will address the identified risks in the future.

In Brazil, new draft legislation tabled this year takes this one step further. Brazil is debating a prohibition on the implementation and use of AI systems that may lead to direct, indirect, illegal or abusive discrimination. This includes systems that would have a disproportionate impact based on personal characteristics including geographic origin, race, colour or ethnicity, gender, sexual orientation, socioeconomic class, age, disability, religion or political opinion. Brazil therefore offers one of the best examples of how to address the way AI may replicate multiple and compounded biases, such as gender and race, gender and socio-economic status, or gender and age.

Meanwhile, the EU’s Digital Services Act offers a “trusted flagger” system. Organisations are appointed to the role of trusted flagger if they meet pre-defined criteria, including independence from the platforms. They are a form of officially recognised expert that acts as an industry whistleblower for harmful online content. We know that women and girls face particular harms from deepfakes: a technology, emerging around 2017, that uses AI to map a person’s image onto existing video, often to produce non-consensual pornographic material. The existence of a trusted whistleblower offers a gendered response to the problem.

Finally, the European Parliament has proposed that the EU’s new AI Act, if passed, should address AI literacy. An onus would be placed on the Union and its Member States to promote AI literacy in general, such as in schools, and on AI companies to ensure AI literacy among their staff and operators. This would include teaching basic notions and skills about AI systems and their functioning, including the different types of products and their uses, and their risks and benefits. Moreover, the massive under-representation of women in the design of AI-driven technologies and at the AI decision-making table would be addressed head-on: the proposed text calls for a requirement that AI systems be developed and used in a way that “includes diverse actors and promotes equal access, gender equality and cultural diversity”.

Australia’s pathway forward

Australia adopted a voluntary and aspirational set of AI principles – the Artificial Intelligence Ethics Framework – under the Department of Industry, Science and Resources in 2019. According to the framework, AI systems should foster social wellbeing, fairness and contestability. It envisages an ideal scenario in which an impacted person, community or group has time to challenge the use or outcomes of an AI system. It also foresees some degree of accountability, although this is defined only in vague terms.

So, now we await the “hard rules”, which in my view are a must. My hope is that some of this gendered good practice makes its way into a draft Australian bill and survives parliamentary debate.

We have the foundations to build upon. There is promise, for example, in the role of the eSafety Commissioner, established in 2015, which includes the power under the Online Safety Act 2021 to issue removal notices for abusive material online. That remit should be expanded to address threats to online safety created or exacerbated by AI, such as deepfakes.

Yet this is only the beginning of a debate on legislating against AI’s gendered harms. Sexual and gender minorities are even less visible in these discussions. Put simply, much of the challenge of AI for LGBTQ+ communities comes from the reality that surveillance technologies are incapable of working beyond the binary of male and female. In these cases, some of the solutions might be better found in the design of AI itself. AI-driven technologies need to be built so that users can, for example, self-determine their identity within a system, and so that the technologies acknowledge a plurality of genders beyond male and female.
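What that design principle could look like in practice is sketched below as a minimal, hypothetical Python data model in which gender is self-determined by the user rather than inferred by the system, and is not constrained to a binary. The field names are illustrative only and are not drawn from any named product.

```python
# A minimal design sketch, not drawn from any real system: a user profile
# whose gender is self-determined, optional, and not restricted to a binary.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    user_id: str
    # Free-text, chosen by the user; None means "not disclosed". Never inferred.
    self_described_gender: Optional[str] = None
    # Optional self-selected pronouns, likewise never inferred by the system.
    pronouns: Optional[str] = None

profile = UserProfile(
    user_id="u-123",
    self_described_gender="non-binary",
    pronouns="they/them",
)
```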

We have a long road ahead. But with gender on the legislative table in other jurisdictions, there is no excuse for governments across Australia to fail to legislate for AI with women in mind. In fact, we too could lead on the gender stage.

Ramona Vijeyarasa is Associate Professor in the Faculty of Law at the University of Technology Sydney. She is one of Australia’s leading experts on gender and the law. As the creator of the Gender Legislative Index, an online tool that uses human evaluators and machine learning to evaluate whether laws advance gender equality, she was named winner of the Women in Artificial Intelligence in Law category for 2022 and 2nd Runner-Up as Woman in AI Innovator of the Year.

Image credit: Metamorworks/Getty Images
