Labour has published its ‘AI Opportunities Action Plan’ (The Plan). The Prime Minister is very bullish about The Plan, and peppers his foreword with muscular terms like growth, revolution, ambition, strength and innovation. In itself, The Plan is full of claims that AI is essential and inevitable, and urges the government to pour public money into the industry so as not to miss out.
In the manner of tech entrepreneurs, The Plan likes to put ‘x’ after things, so investment must go up by 20x (meaning 20 times), the amount of compute AI requires has already gone up by 10,000x, and so on. The Plan claims that Britain is already leading the world through the AI Safety Institute (of which more later) and infuses the usual AI hype with nationalist vibes via terms like world leader, world-class, national champions and ‘Sovereign AI’.
Above all, The Plan emphasises the need to scale. The importance of scale for AI, and its technopolitical impacts, will be explored below.
This article addresses the poverty of The Plan and the emptiness of its claims about AI but, rather than a point-by-point rebuttal, it’s about the underlying reasons why this Labour government supports measures that will harm both people and the environment.
In between invocations of speed, pace and scale, there’s some recognition in The Plan that the UK is not a wholly happy place right now. While recommending a high-tech form of land enclosure via ‘AI Growth Zones’ (AIGZs), which are about handing data centre developers “access to land and power”, it gestures towards the idea that these could drive local innovation in post-industrial towns. While The Plan’s claims about AI’s inevitable advance and the oncoming wave of agentic systems that will reason, plan and act for themselves already seem dated and discredited, what hasn’t changed is that the very regions targeted for growth via AIGZs have already seen violent anti-immigrant pogroms accompanied by fascist rhetoric, and those sentiments have not gone away.
Ultimately, it’s argued here, the misstep represented by The Plan and its full commitment to AI will reinforce and amplify the threat of the far right, as well as connecting it to the highly reactionary ideas that are in the ascendancy in Silicon Valley. This article proposes instead ‘decomputing’: the refusal of hyperscale computing, the replacement of reductive solutionism with structures of care, and the construction of alternative infrastructures based on convivial technologies.
Labour and AI
In some ways, it’s reasonably apparent why this Labour government would want to prioritise AI. Keir Starmer’s single identifiable political belief is the idea of ‘growth’, so demonstrating economic growth supersedes all other government concerns.
Growth will show that Keir and colleagues are serious politicians who are no threat to the establishment, and at the same time win over Mr & Mrs Red Wall voter, who are disillusioned with orthodox politics. Boosting GDP is Labour’s answer to all the wicked problems that beset the UK’s public services and infrastructure, and avoids having to actually challenge the underlying logic of Thatcherite neoliberalism, which has dominated for decades.
And if there’s one area of economic activity which is growing, it’s surely AI. All the graphs are going up: the venture capital investment, the stock market valuations, the size of the AI models and the number and scale of the data centres. Latching on to this growth is already working for the government, as a full 10% of the £63bn promised by their ‘record-breaking’ international investment summit was earmarked for data centres.
Having seemingly decided to pin their hopes on AI, the Labour government have been aligning with demands from Big Tech. Not long after the election that brought Labour to power, Google issued a report titled ‘Unlocking the UK’s AI Potential’, laying out their conditions for AI growth in the UK, including investment in data centres and a loosening of copyright restrictions. Of course, these aren’t Google-specific requirements; the foundation of all contemporary AI is scale, meaning more and larger data centres to house all the servers and bigger pools of data to train the models.
The collision course with copyright comes from the fact that these data sets have always been too large to pay for, so the AI industry just grabs them from the internet without asking. Google’s report was accompanied by a press release from their UK & Ireland vice president warning that the UK risked “losing leadership” and “falling behind” if their advice wasn’t followed.
It seems the message was received loud and clear; since the election, Labour have designated data centres as ‘critical national infrastructure’, which means that ministers can override any local planning objections, and the government is also floating a relaxation of copyright protections. It’s not just Google that the Labour government is prepared to doff its cap to; Peter Kyle, the current secretary of state for Science, Innovation and Technology, has repeatedly stated that the UK should deal with Big Tech via ‘statecraft’; in other words, rather than treating AI companies like any other business that needs taxing and regulating, the government should treat the relationship as a matter of diplomatic liaison, as if these entities were on a par with the UK state.
This awe of Big Tech reflects deeper currents of conviction within the Labour government. Certainly, any ministers from the Blairite faction are going to be influenced by the absolute belief in AI expressed by the influential think tank, the Tony Blair Institute (TBI).
It’s hard to exaggerate the centrality of AI to the TBI world view, but the title of their 2023 report is pretty representative: ‘A New National Purpose: AI Promises a World-Leading Future of Britain’. According to this report, becoming a world leader in AI development is “a matter becoming so urgent and crucial that how we respond is more likely than anything else to determine Britain’s future”, and its 2024 sequel, ‘Governing in the Age of AI: A New Model to Transform the State’, opens with “There is little doubt that AI will change the course of human progress.”
The breathless rhetoric is accompanied by policy demands: a factor of 10 increase in the UK’s compute capacity, the diversion of major spending commitments to AI infrastructure, reducing regulation to US levels and, of course, enacting all this in close partnership with the private sector.
A core promise is that turning the public sector over to AI will deliver immense savings and improved delivery, although one might question the reliability of their research, given that it was based on asking ChatGPT itself how many government jobs it could do. While this sketchy approach has echoes of the Iraq (‘dodgy’) Dossier, it reflects a realpolitik that sees both AI companies and rhetoric about AI as incredibly powerful at the current moment.
This is possibly the gap that AI fills for the Labour government; having long abandoned any substantive belief in the transformative power of socialism, it is lacking a mobilising belief system. At the same time, it’s apparent to all and sundry that the status quo is in deep trouble and that being the party of continuity isn’t going to convince anyone.
Hence, the claim that AI has the power to change the world becomes a handy stand-in for a transformative ideology. The bonus for the Labour government is that relying on AI to fix things avoids the need for any structural changes that might upset powerful business and media interests, and rhetoric about global AI leadership has a suitably ‘Empire’ vibe to appeal to nationalistic sentiments at the grassroots.
Harms at scale
Beneath all the policy gloss and think tank reports, though, lurk the real harms of AI in the here-and-now, starting with environmental harms. The Labour government’s vision for AI takes concrete form in more data centres. However, as some previously tranquil localities are starting to discover, this comes with significant impacts.
Generative AI, in particular, is driving the computational scale of AI models through the roof. The rate at which these models are expanding in size outpaces any other recent tech revolution, from smartphone adoption to genome sequencing. In turn, this is driving massive increases in energy demand.
To service AI and the internet cloud, the fastest growing kind of data centre in the UK is the so-called hyperscale data centre, run by the likes of Google, Microsoft and AWS. These are typically at least 10,000 sq ft and contain upwards of 5,000 servers, but the industry wants them to be much larger, and filled with the energy-guzzling GPU chips that train and run AI.
Sam Altman, CEO of ChatGPT’s parent company OpenAI, has pitched plans for 5GW data centres in the US, which is the equivalent of about five nuclear reactors’ worth and enough energy to power a large city. These voracious demands for electricity come with immediate consequences for national grids and for climate emissions.
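To put that figure in perspective, here is a minimal back-of-the-envelope sketch of the comparison. The assumed values (roughly 1GW of output per large nuclear reactor, roughly 1kW of average draw per household) are illustrative orders of magnitude, not figures taken from The Plan:

```python
# Back-of-the-envelope check on the 5GW data centre comparison.
# All figures are illustrative assumptions, not from The Plan.
datacentre_gw = 5.0   # Altman's pitched data centre capacity
reactor_gw = 1.0      # assumed output of one large nuclear reactor
household_kw = 1.0    # assumed average household electricity draw

reactors_equivalent = datacentre_gw / reactor_gw
households_powered = (datacentre_gw * 1_000_000) / household_kw  # GW -> kW

print(f"~{reactors_equivalent:.0f} nuclear reactors' worth of power")
print(f"~{households_powered:,.0f} households' average demand")
```

On these rough assumptions, a single such facility would absorb the average demand of around five million households, which is why the claim that it could power a large city is, if anything, conservative.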
The Greater London Authority (GLA) has already had to impose a temporary ban on new housing developments in West London because a cluster of existing data centres was using up the available grid supply. Because AI’s energy demands are outpacing the development of bigger electricity grids, there’s currently a push to bring back fossil fuel sources, especially gas-powered turbines. Connecting directly to the natural gas network to overcome local power constraints is less efficient than grid-scale generation and increases unmonitored carbon emissions.
Of course, Big Tech is already aware that driving climate emissions is a bad look and has previously tried to look ‘green’ via the use of ‘renewables’ and the cover narrative of carbon offsets. However, the scale of generative AI has blown this away to the point where both Google and Microsoft have admitted an inability to meet their own climate targets.
Locally, it’s not just potential power cuts that an AI data centre brings to an area, but a huge demand for cooling water to stop all the servers overheating and the pervasive presence of a background hum from all the cooling systems. The question is whether a pursuit of ‘AI greatness’ will make the UK more like Ireland, which has already been recolonised as a dumping ground for Big Tech’s data centre infrastructure.
The here-and-now harms of AI are also social. Never mind the sci-fi fantasies about AI taking over the world; the mundane reality of AI in any social context is a form of ugly solutionism that perpetuates harms rather than reducing them. The claim that more computation will improve public services is hardly new, and algorithmic fixes for everything from welfare to education have already left a trail of harm in their wake.
In Australia, the ‘Robodebt’ algorithm wrongly accused tens of thousands of people of welfare fraud, and was only halted by a grassroots campaign and an eventual public inquiry, while in the Netherlands an algorithm falsely labelled tens of thousands of people as defrauding the child benefits system, causing crippling debts and family break-ups. What the UK’s notorious Horizon IT system and contemporary AI have in common is the tendency to produce falsehoods while appearing to be working properly. What AI adds is the capacity to scale harms in a way that makes the Horizon scandal look like small beer.
The insistence that AI will reverse the rot in education and healthcare systems also has a tired history. Back in 2018, Facebook’s non-profit arm inserted an online learning platform into a California public school system on the basis that it offered ‘personalised learning’, the central mantra of all AI-driven educational technology. It took mass opposition by 17-year-old students to get rid of it.
In the open letter they sent to Zuckerberg, they said: “Most importantly, the whole program eliminates much of the human interaction, teacher support, and discussion and debate with our peers that we need in order to improve our critical thinking. Unlike the claims made in your promotional materials, we students find that we are learning very little to nothing. It’s severely damaged our education, and that’s why we walked out in protest”.
Meanwhile, the Nobel Prize-winning godfather of AI, Geoffrey Hinton, made the claim back in 2016 that, thanks to the superior accuracy of AI’s image classification, there was no need to train any more radiologists. As it turned out, of course, that claim was just as specious as the more recent hype about ChatGPT passing medical exams. Labour’s minister for Science, Innovation and Technology is continuing to use the trope of an AI-powered solution to cancer detection to push for more AI in public services, while ignoring calls by leading cancer specialists to “focus on the basics of cancer treatment rather than the ‘magic bullets’ of new technologies and artificial intelligence”.
Forcing AI into services in lieu of fixing underlying issues like decaying buildings, and without funding more actual teachers and doctors, is a form of structural violence – a form of violence by which institutions or social structures harm people by preventing them from meeting their basic needs.
The political continuity here is that a commitment to AI solutions also enacts a kind of Thatcherite ‘shock doctrine’, where the sense of urgency generated by an allegedly world-transforming technology is used as an opportunity to transfer power to the private sector.
The amount of data and computing power required to make one of today’s foundation models, the large generative and supposedly general purpose systems, is beyond the scope of all but the biggest tech companies. Whether it’s in welfare, education or health, a shift to AI is a form of privatisation by the backdoor, shifting a significant locus of control to Silicon Valley.
Like the original Thatcherism, this is also going to be accompanied by job losses and a shift to more precarious forms of algorithm-driven outsourcing. It’s Deliveroo all round, not because AI can actually replace jobs but because its shoddy emulations provide managers and employers with a means to pare away employment protections.
What marks the Labour government out from earlier forms of neoliberalism is its emphasis on total mobilisation, the idea that all material and human resources should be aligned behind the national mission for growth. This translates into a rhetorical and legislative intolerance for the idea that people should be ‘idle’, no matter their states of mental distress or other disability.
Unfortunately, while AI systems are themselves unproductive in any practical sense, they excel at precisely the functions of ranking, classification and exclusion that are required for forms of social filtering at scale. Turning more services over to AI-assisted decision-making will indeed facilitate the differentiation of the deserving from the undeserving, in line with this productivist ideology.
This alignment of social and technical ordering will show that Labour is indeed still the ‘party of the workers’, but only in the sense of squeezing the last drop of work out of people while using algorithmic optimisation to decide who is comparatively disposable.
Far right
Ultimately, the Labour government’s capitulation to AI, in the vain hope of meaningful growth, is a gift to the far right.
Most governing parties across Europe are making the mistake of incubating the far right while complaining about their seemingly inexorable rise. Governments seem oblivious to the political fact that trying to distract people with the spectre of the far right, while completely failing to address the structural failings of neoliberalism that leave people feeling angry and abandoned, only serves to empower polarising and post-truth politics. Moreover, the fact that populist rhetoric gains support leads the same governing parties to mainstream their reactionary narratives, as the Labour government has done around immigration and the so-called ‘small boats crisis’.
Using AI to distract from structural problems while failing to deliver actual solutions follows a similar pattern. The only thing these algorithms will do is filter and classify groups of people to blame for the way AI itself degrades the already shoddy state of public services. The double whammy that comes with AI is the way that the industry itself is a catalyst for more extreme right ideologies.
The seeds of this can be seen in the seemingly innocuous turn to ‘AI safety’, which was initiated by Rishi Sunak but has been endorsed and continued by the current government. The rationale of AI safety is not to protect people from the harms of dysfunctional AI in their everyday lives, but to head off the imagined potential for AI to trigger human extinction by developing bioweapons or by becoming completely autonomous and simply taking over.
This, in turn, derives from the underlying belief in AGI or Artificial General Intelligence: the belief that the humongous but stumbling AI of today is a step to superintelligent systems that will be superior to humans. As Hollywood as this might seem, it’s the position of many in AI, from godfather Geoffrey Hinton to the founders of most of the main companies, like DeepMind and OpenAI.
Such powerfully warping beliefs spawn real world consequences because, ultimately, they’re built on a eugenicist mindset. The very idea of ‘general intelligence’ comes from Victorian eugenics, when scientists like Francis Galton and Karl Pearson were rationalising the racial supremacy that legitimised the British Empire. The idea of superior intelligence always comes with its corollary of inferior intelligence, whether that’s defined racially or in terms of disabilities, and always pans out as valuing some lives as more worthy than others.
Part of the motivation for a belief in AGI is self-serving. If superintelligent AI is our only hope of solving climate change, then we shouldn’t be limiting the development of AI through small-minded measures like carbon emissions targets, but should be mobilising all available resources, fossil fuel or not, behind its accelerated development.
This is also good news for fossil fuel oligarchs who, not coincidentally, are some of the biggest funders of far right think tanks.
Similarly, if the future of humanity is to join such superintelligence inside computers themselves, and this leads to vastly multiplied numbers of virtual humans, then facilitating the emergence of AGI and the 10^54 virtual future humans becomes morally more important than any collateral harms to actual humans in the present moment. Again, while this might seem deranged, it’s the stance of a set of beliefs known variously as ‘effective altruism’ (EA) or ‘longtermism’, which are very influential in Silicon Valley.
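The moral arithmetic at work here is easy to sketch. In the following illustration, only the 10^54 figure comes from longtermist writing itself; the probability and the population count are invented placeholders to show how the expected-value reasoning plays out:

```python
# Illustrative longtermist expected-value arithmetic. Only the 10**54
# figure comes from longtermist writing; the rest are placeholders.
future_virtual_humans = 10**54
p_agi_succeeds = 1e-10          # even a vanishingly small assumed probability
humans_alive_today = 8 * 10**9  # everyone currently alive, as an upper bound

expected_future_lives = p_agi_succeeds * future_virtual_humans  # 1e44
print(expected_future_lives / humans_alive_today)  # ~1.25e34
```

Even with an absurdly small probability attached to the AGI future, the expected number of virtual lives outweighs every human alive today by dozens of orders of magnitude, which is how any present-day ‘collateral harm’ gets rounded down to zero.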
As a world view, it elevates the self-styled mission of people in AI above the meagre concerns of ordinary folk. Disturbingly, it seems that the infiltration of US and UK policy circles by people with EA beliefs was responsible for the shift to an AI Safety agenda and, in the UK, the creation of an AI Safety Institute.
Lurking behind longtermism, but equally influential in Silicon Valley, are the darker and more explicitly fascist beliefs of neoreaction. These kinds of ideas, as espoused by the likes of Peter Thiel, argue that democracy is a failed system, that corporations should take over social governance with monarchical models, and that society should optimise itself by weeding out inferior and unproductive elements. It’s this strand of far right tech accelerationism that merged with the MAGA movement in the run-up to Trump’s 2024 re-election.
While Elon Musk’s support for anti-immigrant pogroms and his direct attacks on Starmer et al have been the most visible consequences for the UK so far, the real threat is the underlying convergence of Big Tech and reactionary politics. AI is not simply a technology but a form of technopolitics, where technology and politics produce and reinforce each other, and in the case of contemporary AI this technopolitics tends towards far right solutionism.
Alternatives
Neither Labour nor any other political party is going to protect us against this technopolitics. We can, however, contest it directly through ‘decomputing’.
Decomputing starts with the refusal of more data centres for AI, on the basis that they are environmentally damaging and because they run software that’s socially harmful. Hyperscale data centres are the platform for AI’s assaults on workers’ rights, through precaritisation and fake automation, but also for the wider social degradations of everything from preemptive welfare cuts to non-consensual deepfake porn.
Decomputing opposes AI because it’s built on layers of exploitative labour, much of which is outsourced to the Global South, and because its reductive predictions foreclose life chances wherever they’re applied to measure our future ‘value’.
What’s needed to salve pain and suffering isn’t the enclosure of resources to power the judgements of GPUs, but acts of care: the prioritisation of relationships that acknowledge our vulnerabilities and interdependencies.
Decomputing is a direct opposition to the material, financial, institutional and conceptual infrastructures of AI, not only because they advance an already-failed solutionism but because they massively scale alienation. By injecting even more distancing, abstraction and opacity into our lives, AI is helping to fuel our contemporary crisis, furthering the bitter resentments that feed the far right and the disenchantment that separates us from the more-than-human lifeworld.
What we urgently need, instead of a political leadership in thrall to AI’s aura of total power, is a reassertion of context and agency by returning control to more local and directly democratic structures. Decomputing argues that, wherever AI is proposed as ‘the answer’, there is a space for the self-organisation of people who already know better what needs to be done, whether it’s teachers and students resisting generative AI in the classroom or healthcare workers and patients challenging the algorithmic optimisation of workloads that eliminates even minimal chances to relate as human beings.
Decomputing claims that the act of opposing AI’s intensification of social and environmental harms is at the same time the assertion that other worlds are possible. It parallels contemporary calls for degrowth, which also opposes runaway extractivism by focusing on the alternative structures that could replace it.
As much as contemporary AI is a convergence of off-the-scale technology and emergent totalitarianism, decomputing offers a counter-convergence of social movements that brings together issues of workers’ rights, feminism, ecology, anti-fascism and global solidarity.
Where AI is another iteration in the infrastructuring of Empire, decomputing recognises the urgency of starting to build alternative infrastructures in the here-and-now, from renewable energy co-ops to structures of social care based on mutual aid.
The murmurings in the financial pages of the mainstream media that AI’s infinite growth is actually a bubble prone to collapse miss the point that a wider collapse is already upon us in one form or another. Climate change is happening in front of our eyes, while it’s pretty clear that liberal democracy is allowing itself to be eaten from the inside by the far right.
The more that AI is allowed to scale its reactionary technopolitics, the more it will have the effect of narrowing options for the rest of us. Decomputing is the bottom-up recovery of alternatives that have long been buried under techno-fantasies and decades of neoliberalism: people-powered visions of convivial technologies, cooperative socialities and a reassertion of the commons.