A black sky and some green shoots: how to shape AI for the common good

Jack Isherwood

A passive approach to emerging artificial intelligence technologies could generate significant social problems – but “mission economy” and “just transition” approaches to AI can unlock a better, more inclusive future.

4 July 2023

Imagine a future where successive policy failures on artificial intelligence led to social turmoil and economic disruption. An entrenched economic malaise developed, characterised by a strange combination of high productivity and low inflation alongside high structural unemployment and stagnant economic growth.

This paradox arose because jobs and industries across the economy were displaced without a coherent strategy to reskill the workforce or diversify the economy. A hyper-polarised labour market emerged over time, with rural and regional areas – already struggling before the AI revolution – particularly affected.

While AI-literate workers enjoyed rising incomes and career prospects in an accelerated “Matthew Effect”, lower-skilled workers were wedged between precarious employment and an inadequate social safety net. Poverty became more widespread, and existing inequalities grew starker as a new digital divide was entrenched. Novel forms of social stratification and segregation emerged, accompanied by new, insidious forms of discrimination.

From a cultural perspective, widespread disillusionment developed as people increasingly associated AI with economic injustice, dislocation and polarisation rather than social and economic progress. Public trust in AI gradually eroded, producing a cultural backlash against its further development and resistance to its ongoing adoption. New populist countercultures emerged, with groups challenging the penetration of AI into all domains of life, while resentment grew towards the minority of citizens who benefited financially from the AI revolution, threatening social unrest and violence.

New developments in culture traced this sense of anomie, with a rise in dystopian, romanticist and counter-historical themes in literature and art. These works resisted an apparent loss of human creativity and spontaneity in the face of the predictable, standardised, formulaic logic of AI.

Unsurprisingly, over time, the nation became more jingoistic, nationalistic, and protectionist, as policymakers sought to redirect popular frustration towards marginalised groups to solidify their power. This was accompanied by more sophisticated state surveillance, with excessive intrusion into citizens’ private lives and close monitoring of those deemed to be subversive or deviant.

People of colour and migrants were targeted, entrenching existing exclusion and discrimination as bias became encoded into AI. International cooperation in setting AI standards and sharing AI technologies faltered as countries increasingly viewed technological competition as zero-sum, stirring geopolitical tensions even as nations grappled with several intersecting crises, including collapses in global biodiversity and climatic stability.

Historians looking back at this period noted ideological, structural and circumstantial factors driving the cascading policy failure. The first was the pace of transformation, which outstripped the resources and capabilities of policymakers. Relentless advances in AI technologies introduced a level of complexity and uncertainty that rendered traditional policy formulation and decision-making obsolete.

The inevitable result was piecemeal, reactive policy responses. AI policies lacked clear, measurable goals and strategic vision. Successive governments failed to engage crucial stakeholders, guaranteeing that their initiatives would fail to meet the needs of their citizens.

The second was the malign influence of vested interests, which lobbied for contradictory policies that impeded decisive and considered policymaking. A fatal error was to allow a concentration of market power to develop, with AI steered by a narrow group of entrepreneurs and firms, undermining democratic decision-making and oversight.

Path dependency set in, as entrenched interests established monopolistic control over AI technologies, infrastructure and platforms, enabling them to dictate the course of development and to limit control by democratic governments. This made it almost impossible to distribute the benefits of AI fairly and address its economic and social costs.

Policymakers also failed to account for the “Jevons Paradox”, whereby efficiency gains lower costs and thereby increase total consumption: AI efficiencies drove unsustainable resource consumption and degradation rather than alleviating them. Indeed, as policymakers privileged vested interests, the environmental and climatic effects were downplayed, accompanied by a failure to set adequate energy efficiency and carbon pollution standards.

Finally, whereas some of the structural transformations wrought by AI were immediately apparent, others were subtle and gradual, escaping public notice in a case of shifting baseline syndrome. Policymakers and the public alike were stunned by the realisation that the future had arrived before they had noticed, and the space for policy change narrowed as the severity of the situation was ignored or rationalised. This lack of perception further stifled cohesive policy formulation, as the urgency and gravity of the problem were overlooked and a profound policy myopia became entrenched.

Green shoots: the possibilities of a mission economy-just transition approach

A depressing cautionary tale indeed, but what would an alternative scenario where governments successfully negotiated the AI revolution look like? It is here that an alignment between “just transition” and “mission economy” approaches might have utility.

First, both frameworks call for ethical guardrails around AI design and governance, and share a commitment to procedural principles of transparency, accountability and inclusiveness. They also share a focus on equitable outcomes, centred on distributing the costs and benefits of structural transformation fairly and broadly, addressing existing inequalities while preventing new forms of disadvantage.

Second, both approaches stress the need to forge innovative partnerships across sectors and deep collaboration between stakeholders, bridging the public-private divide by connecting governments, industry partners, academia, civil society organisations and citizens. Accordingly, both are invested in public engagement strategies designed to include diverse voices, particularly from marginalised and First Nations communities.

In addition, they recognise the need to invest in capacity building, so that stakeholders can achieve policy goals, and to mitigate the institutional inertia that can hinder just transition policies. Given the complexity of AI technologies, building public awareness of their social, economic, political and ethical implications will help secure momentum for transitional policies.

Third, both approaches are committed to thinking holistically about pressing problems. While attentive to the economic dimensions of structural transformations, they also privilege the social, environmental, and ethical dimensions of change. This includes a willingness to pilot experimental policies and design interventions which address interconnected issues. For instance, a combined just transition-mission economy approach might explore the connection between education and skills development and public health, and how these sectors may be affected by AI.

Fourth, they privilege the need to develop long-term, sustainable solutions to societal challenges, with a focus on inter-generational justice and building consensus to transcend short-term electoral horizons, political agendas and biases. In the context of AI, this means investments in long-term research and development, infrastructure and skills to support mission goals. But it will also mean thinking about AI expansively – beyond just its economic implications – to include its social and environmental impacts.

Governments could meet this challenge by using scenario analysis, ranging across diverse themes including technological change, employment and labour markets, and macroeconomics, through to ethical and social impacts and policy and governance implications. The strength of these approaches is their openness to diverse methods of scenario forecasting and to techniques drawn from anticipatory governance. This matters because both approaches recognise the dangers of path dependency, which necessitates proactive intervention at the initial stages of technological innovation to shape policy outcomes over the longer term.

A just transition and mission economy perspective would be cognisant of the dangers of regulatory accumulation, where potentially harmful standards become entrenched and widely adopted, making it difficult to tackle inequalities at a later stage. Both approaches can grapple with the problem of managing uncertainty and risk, including anticipating roadblocks and remedying unintended consequences.

Finally, both approaches privilege ongoing monitoring, preventing the ossification of transitional initiatives. They therefore support varied approaches to knowledge transfer, policy exchange and international collaboration, facilitating incremental policy learning and adjustment. This is a critical point in the context of AI, given that infrastructure, platforms and software will evolve rapidly. A shared commitment to diverse feedback mechanisms can also facilitate policy success by helping policymakers anticipate challenges and negotiate conflicting objectives.

Implementation challenges

There are many potential barriers to a successful mission economy-just transition approach. Policymakers may not allocate sufficient resources or develop adequate expertise. If they fail to coordinate policy efforts between the macro-orientated focus of a mission economy, and the micro-focused logic of just transition initiatives, there is also a serious risk of policy fragmentation.

A failure to establish sufficient consensus across party lines for a mission economy initiative creates a risk that such initiatives will be abandoned or scaled back as the political tide changes. This points to a broader limitation of the approach: the need to forge consensus across lines of political difference. That will be challenging in the context of AI, because AI will redistribute power, and citizens will disagree – reasonably and unreasonably – about some of the key ethical issues.

Moreover, while just transition-mission economy principles theoretically lean towards inclusion, in practice it may prove difficult to co-design AI policies that are fully representative of the diverse communities they affect. This is particularly worrying for AI, given that its design can easily amplify existing biases and discriminatory practices, and its economic implications may disproportionately affect marginalised groups.

Finally, an integrated approach will be resource intensive and will require significant policy coordination across government and different sectors. Governments might feel overloaded with policy issues, sapping their focus and resolve. Because the pace and direction of technological change will be hard to predict or control, coherent policy will be difficult to sustain: societal impacts may be subject to considerable lag, and externalities and second-order effects may materialise at unpredictable moments.

While integrating a mission economy-just transition approach presents significant challenges, it offers the possibility of aligning the economic windfalls of the AI revolution with principles of social justice and environmental sustainability. A focus on inclusion, long-term solutions, innovative partnerships and ethical deliberation may allow policymakers to steer towards an alternative future, one where AI technologies create bridges of social opportunity and connection rather than walls of estrangement and discord.

Dr Jack Isherwood is a Senior Research Associate at the James Martin Institute for Public Policy, joining the Institute on secondment from Western Sydney University, The College in 2023. He has a PhD in Political and Social Thought from the Australian Catholic University on the topic of civil discourse and civil disobedience. He is currently completing a Master of Public Policy at ANU, and has research interests in climate adaptation and mitigation, higher education policy, and the management of “black swan” and “grey rhino” scenarios. Alongside his work at JMI, Jack manages short online courses at Western Sydney University, The College.
