20 June 2023
The tech titans warn of superintelligent AI systems posing a risk to humanity by making us redundant to their own survival. The real risks from AI are much more prosaic, and are already insidious in our less-than-superintelligent systems. These risks are the selection bias in who gets the top jobs, the dumbing down of knowledge accumulation, and the greater power that concentrated media ownership and the algorithms behind search and social media give individuals to shape public opinion. AI magnifies these three risks, on top of the risks to individual privacy and of government misuse that the European Union’s AI Act and the United States’ AI Bill of Rights seek to reduce. AI regulation alone will not resolve these risks but, carefully applied, AI might just help us to mitigate some of them.
The genesis of the first of these risks lies in our history, and understanding that history is essential to addressing the risk. The first phase of the industrial revolution replaced skilled trade workers with machines, but it liberated many more from grinding manual labour. In the second phase, the complementarity of labour and capital in manufacturing, combined with labour unions, shifted market power to workers, and the middle class grew more rapidly. Skills acquired on the job were transferable to similar work, and well-paid workers were also consumers, keeping both demand and supply growing. The dynamics changed with the third industrial revolution, as computers and robotics were labour saving, particularly in manufacturing. The neoliberal economic agenda of deregulation eroded the power of unions, while globalisation (reductions in restrictions on trade and capital flows) increased competition, boosted by China’s accession to the WTO in 2001. These advances in technology, the decline in the power of unions, and the rise in competition from countries with a large supply of low-cost labour shifted the reward for labour from strength to smarts.
In this third phase of the industrial revolution, education became the pathway to better wages, and both the demand for and the supply of education responded. Expanding access to education, in particular tertiary education, grew the talent pool of labour. The growth of the middle class meant that many more children could access decent schools and higher education. The wide intake at the entry level of firms meant that more of those with aptitude and attitude ascended. Anti-discrimination legislation widened the talent pool further, although women and minorities still faced higher barriers to advancement. This virtuous cycle, in which most children had the opportunity to realise their potential (although disadvantage still excluded too many), was good for productivity growth. The widening distribution of wage income that emerged as technology replaced non-cognitive tasks mattered less while most workers were experiencing strong income growth.
The first risk exacerbated by AI is a result of this widening inequality, and of the natural desire of parents to see their children succeed. The income divide widens gaps in educational attainment and deepens social segmentation, concentrating both affluence and disadvantage in where people live and where they go to school. As digital technologies reduce the share of entry-level jobs that lead into more skilled professions (non-routine cognitive jobs), those with connections take up more of the available opportunities, and those without connections find it harder to realise their potential. AI will accelerate this trend by replacing not just routine cognitive tasks, but tasks that are increasingly less routine. Not only will this reduce mobility across income groups, it will further widen the distribution of income and exacerbate inequality. The policy response to break this vicious cycle is not AI regulation, but action to reduce wage inequality and improve education and employment opportunities across the whole population.
The second risk is the relative reduction in creative thought and the entrenchment of “popular” versions of what we already know. AI is trained on existing data, and so inherits existing biases, not just in discriminatory practices and outcomes, but in the knowledge that gets disseminated. AI may be directed to slice and dice data selectively to reduce this bias, but the resulting bias will depend on the selection rules applied. To the extent that new knowledge generation draws on AI tools, it is going to be inherently less creative, even if humans remain creative, which some question. AI boosts productivity in producing digital content, but the share that is original will progressively fall. Reversion to the mean, where the mean is dictated by the training data and reinforced as AI generates new data from that same data, is a major risk. One approach to reducing this risk is to filter what AI is trained on, so that it adds new content rather than just growing the volume of rehashed content. AI can be trained to identify what is new, and to reweight training data accordingly. Internationally agreed filters could be applied to reduce content that has the potential to magnify harm and to offset bias.
The third risk stems from the manipulative nature of humans, who seek to influence behaviour in ways that damage individuals and the community. AI could be set running to reinforce views that are exclusionary and actions that restrict access to remedies for harm. Even more dangerously, it could entrench changes that pursue one directed objective without regard to collateral damage. For example, ensuring the delivery of one firm’s inventory might allow connected systems to block other deliveries, including emergency services. This risk is the closest to the existential risk that the tech titans are worried about, but it comes from someone setting up the system to pursue an objective by all available means. Regulation that limits these means (by decoupling systems) and restricts the range of automated task setting can reduce these risks. A good place to begin would be to use AI to identify where such approaches are already being used to shape opinion, restrict access to remedies, and evade constraints in pursuit of objectives that harm others, including the environment.
Policymakers should turn their attention to addressing these existing risks that AI will accelerate, before getting too concerned about the elimination of the human race. Perhaps then the existential threat of a superintelligent AI that decides we are redundant will prove a lesser risk than the one humans already pose to our own flourishing.
Dr Jenny Gordon is an Honorary Professor in the Centre for Social Research and Methods at the Australian National University and a non-resident fellow at the Lowy Institute. Jenny was the Chief Economist at DFAT from 2019 to 2021, establishing the Office of the Chief Economist (OCE) to bring together trade and investment economics with development economics. Jenny joined DFAT from Nous Group where she helped build their economic analysis service offer. Prior to this, she spent 10 years at the Productivity Commission as the Principal Advisor Research. Jenny has a PhD in Economics from Harvard University and started her professional career at the Reserve Bank of Australia.