Why don’t we just… stop using AI to supercharge austerity?


As we teeter on the edge of a new recession, the Chancellor has announced that the UK’s hopes for economic revival rest with artificial intelligence (AI). Businesses are expected to harness AI to realise new market opportunities through breakthrough innovations, with profits trickling down to create shared prosperity.

Ignore the widening inequality, as tech owners take home record profits while workers’ wages fail to keep up with inflation. With a labour force pushed to the limit of its efficiency, the claim is that AI will boost productivity through automating tasks, making products cheaper and stimulating consumption. (Never mind that the workers displaced by automation will be rendered financially less able to consume.)

Chatbots and predictive analytics promise enhanced customer experience, inspiring people to spend money (that they don’t have).

The reasons given for why AI is the answer to our problems frankly don’t hold water. Beneath arguments that AI will be an economic panacea is an uncomfortable truth: AI will generate historic profits for an already privileged elite while providing the means for lightly rebranded austerity politics.

At the root of austerity is the false notion that resources are severely limited because of years of “lavish” overspending on providing for the basic needs of the least advantaged in our society. According to this logic, cutbacks to social goods are the only way to keep our precarious economy afloat. Underlying this is a harmful and incorrect assumption that many (if not most) of the people in receipt of public funds are lazy, lying, or both.

Such spending cuts are an act of violence against those who are most disadvantaged by the way our society is structured. AI helps to shrink the pool of benefit recipients by doing the morally uncomfortable work of categorising some of them as “undeserving”, and by presenting decisions about deservingness as mathematically “objective”.

Humans are understood to be biased decision-makers, but the buzz about AI frequently mistakes the use of algorithms for the absence of human decision-makers – and, by extension, the absence of bias.

Politicians wishing to implement the violence of austerity without using its increasingly controversial language can instead introduce AI that appears to align with values of efficiency, progress and (most insidiously) fairness, even as it takes from those most in need.

Let us be clear: AI is not objective or progressive. Human beings, with all their biases, make key choices about which features are relevant to a given decision – for instance, whether someone is “deserving” of social protection. These decisions are then reflected in who and what is included in data, and how. The algorithms underlying AI are trained on (that is, mathematically derived from) datasets that themselves reflect longstanding patterns of inequity in society.

Consequently, AI is notorious for making decisions that reflect and reproduce these inequities, unwinding social progress. And given the scale at which AI operates, the harmful impacts of bias can spread more efficiently than ever before.

In other words, AI not only increases the capacity for biased decision-making, but it also supercharges austerity by giving the false impression that this decision-making is no longer morally troubling because it is being done by supposedly objective machines.

It is disturbing that the UK was (at least as of July 2022) trialling AI in decision-making for access to Universal Credit. As with the disastrous SyRI system deployed in the Netherlands, the aim of introducing AI into the Universal Credit system is to predict an individual’s likelihood of committing benefit fraud, so that those flagged cannot claim this social protection.

Under the SyRI system, some claimants were incorrectly categorised as “undeserving” of the benefits they were claiming. The errors followed a disturbingly predictable pattern: minority ethnic individuals were disproportionately among those wrongly categorised.

Unfortunately, how these decision-making algorithms work is frequently unclear. As a result, those cut off from much-needed social protection were extremely limited in their ability to challenge these decisions. In the case of the SyRI system, developers assigned a risk score to a person depending on where they lived, which effectively served as a proxy for (socially loaded categories of) race and ethnicity.
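To see why a location variable can act as a proxy, consider the toy sketch below. It is not the SyRI model itself, whose internals were never fully disclosed; the postcodes, the make_claimant helper and the risk_score rule are all invented for illustration, and the population is synthetic.

```python
# Toy illustration only: a made-up "fraud risk" rule keyed to postcode,
# applied to a synthetic population. No real data or real system is modelled.
import random

random.seed(0)

# Hypothetical postcodes: in this synthetic population, area "B" has a
# higher share of minority ethnic residents. Ethnicity is never given to
# the scoring rule, but postcode stands in for it.
def make_claimant():
    postcode = random.choice(["A", "A", "B"])          # area "B" is smaller
    minority = random.random() < (0.6 if postcode == "B" else 0.1)
    return {"postcode": postcode, "minority": minority}

claimants = [make_claimant() for _ in range(10_000)]

# A crude "risk model" that simply scores area B as higher risk,
# mimicking how historic enforcement patterns get baked into training data.
def risk_score(claimant):
    return 0.8 if claimant["postcode"] == "B" else 0.2

flagged = [c for c in claimants if risk_score(c) > 0.5]

def flag_rate(is_minority):
    hits = [c for c in flagged if c["minority"] == is_minority]
    total = [c for c in claimants if c["minority"] == is_minority]
    return len(hits) / len(total)

print(f"flag rate, minority claimants:     {flag_rate(True):.2%}")
print(f"flag rate, non-minority claimants: {flag_rate(False):.2%}")
# Ethnicity never appears in the scoring rule, yet minority claimants are
# flagged far more often: postcode has acted as a proxy.
```

Ethnicity never enters the scoring rule, yet because area of residence is correlated with ethnicity, minority claimants end up flagged at several times the rate of everyone else, which is precisely the dynamic described above.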

So why don’t we just stop using AI to supercharge austerity? For ordinary members of the public, this means remaining vigilant to and resisting all new instances of AI-assisted austerity and the violence this creates.

It means resisting the framing of AI as critical to the economy – and to this end, we would do well to remember that the global recession of 2008, from which austerity emerged, was the consequence of financial risk models designed to profit from people’s inability to repay high-risk loans.

It also means recognising that the efficiencies AI delivers make it possible to maintain self-imposed conditions of constraint. So instead of investing in the NHS, for example, AI will keep the starved institution afloat by replacing administrators, technicians and doctors themselves. This results in an institution that is more efficient in theory but less effective in practice, creating new health inequalities and ultimately delivering a poorer quality of care.

After more than a decade of austerity, it is safe to say that the policy has failed in its promise of repairing a “broken Britain”. Cutting public spending has left us with little of what makes for a thriving society – emergency services, social care, health care, local authorities, and places to socialise.

Investing in AI is a choice not to invest in the revitalisation of these programmes and institutions. In fact, in many cases it is a convenient tool for convincing people to swallow the bitter pill of further cuts.

It is critical that we resist the lie we are being sold about the promise of AI. This is not about resisting progress or innovation; it is about resisting the idea that technological innovation can substitute for investing in the rebuilding of a more equitable and vibrant society.

Bran Knowles is a senior lecturer at Lancaster University’s School of Computing and Communications. Jasmine Fledderjohann is a senior lecturer in sociology at Lancaster University. Their book A Watershed Moment for Social Policy and Human Rights? Where Next for the UK Post-Covid is published by Policy Press.
