
Technology can’t provide the economic miracle Starmer wants.
A government bureaucracy generates paperwork—and even more paperwork when it’s computerized. But human judgement is expensive. What if a computer could go through the paperwork instead?
The United Kingdom’s Labour government believes that this is not merely possible, but vital. The AI Opportunities Action Plan intends to spend billions of British pounds putting artificial intelligence into every corner of the public sector in the hope of unparalleled efficiency and a fortune saved in staffing costs. Tedious administrative paperwork will be passed to machines. Prime Minister Keir Starmer promised the plan will bring “a decade of national renewal.”
The Labour government is suffering from dismal polling a mere seven months after its landslide election victory. It seems utterly out of ideas and is now going all-in on a Hail Mary pass that promises miracles. But the miracles will not work because the technology does not work when the details matter.
The catch is that “AI” is a buzzword. The term has rarely denoted any specific technology. Instead, it’s marketing for the promise that science fiction dreams of robot servants will become real any day now. The pitch has always been to replace human employees with machines.
The AI plan originates from the Tony Blair Institute (TBI), which has been called “one of the architects of Starmerite thought.” The TBI’s May 2024 paper, “Governing in the Age of AI: A New Model to Transform the State,” and its July 2024 follow-up, “The Potential Impact of AI on the Public-Sector Workforce,” outline a plan to revolutionize U.K. bureaucracy by handing mundane tasks to AI—saving more than $200 billion over five years and shedding 1 million civil servants. Much of the wording in the AI plan was taken directly from TBI papers.
The U.K. plan to put AI into everything includes machine learning—statistical models that learn to analyze patterns in data. Machine learning is the source of the AI plan’s touted successes in the National Health Service on tasks such as reviewing radiology scans. But the plan’s substance is to replace human administrators across the public sector with large language models—chatbots, like ChatGPT. AI promotion tends to use machine learning, which can work well, to market large language models (LLMs)—which do not work well.
An LLM generates text. It is statistically trained on a large amount of text—up to the entire public internet—and ascertains which words follow other words. You give the LLM a prompt, which may be a sentence or an entire document, and it generates a statistically likely answer. An LLM is purely statistical and only works at the level of word fragments. An LLM does not “know” facts, only which words follow other words. It’s a super-autocomplete.
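The "super-autocomplete" idea can be made concrete with a toy sketch. The following is not how a production LLM is built (real models use neural networks over word fragments, not whole-word counts), but it shows the underlying principle the article describes: the model stores only which words tend to follow which other words, and generates statistically likely continuations with no notion of facts. The corpus here is invented for illustration.

```python
import random
from collections import Counter, defaultdict

# Toy training text (invented for illustration, not real data).
corpus = (
    "the minister read the report and the minister signed the report "
    "the report said the plan would save money"
).split()

# Count word-to-next-word frequencies. This table is the model's
# entire "knowledge": no facts, only word adjacency statistics.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def complete(word, n=5):
    """Generate up to n more words by sampling likely successors."""
    out = [word]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:  # dead end: this word was never followed by anything
            break
        # Sample in proportion to observed frequency; nothing is checked
        # against reality, so fluent nonsense is a perfectly valid output.
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))  # plausible-sounding text, not verified facts
```

Scaled up by many orders of magnitude and given a far more sophisticated statistical machinery, this is still the shape of the thing: continuation by likelihood, which is why fluency and accuracy come apart.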
Very few people who have tried to use an LLM-based chatbot for hands-on administrative work would trust their jobs to one unless the details didn’t matter. LLMs are, by their nature, notoriously prone to confabulation and hallucinations. The systems just make things up—because facts are not a data type in LLMs, only word frequencies.
So LLMs can give extremely impressive demonstrations—they’re very good autocompletes—but tend to fail in actively dangerous ways when the details do matter. One of the cornerstones of Labour’s AI project is the government’s new consultation processing bot called Humphrey, which will read submissions and supply a summary. But the Australian Securities and Investments Commission did blind testing of LLMs for summarizing consultations and found them inferior to humans on all criteria. The bots missed key points, added analysis not found in the source text, and reversed meanings.
But the belief at the heart of Labour’s push for AI seems to be that LLMs are effectively magical—that you can run arbitrary documents through an LLM and get back human-quality analysis with only a few errors. The reports and the government’s embrace of them are predicated on this assumption.
This belief in the infallibility of LLMs is so strong that the TBI wrote at least two of its reports by asking a Microsoft version of OpenAI’s GPT-4 for answers, rather than researching the state of things on the ground—trusting that the chatbot’s answers would be good enough for a serious report.
The researchers told the press in July 2024 that “making a prediction based on interviews with experts would be too hard.” The TBI’s November 2024 report, “The Impact of AI on the Labour Market,” contains multiple pages of charts and graphs of numbers that it proudly proclaims were synthesized by a chatbot. In science, this would be regarded as simple data fraud.
Even when the various documents cite previous findings, they don’t check out. The November report claims that “AI is already driving double-digit productivity gains for early adopters on individual tasks,” citing a Walmart press release about collecting data on what items are on its shelves. A claim that “AI is already being used to improve labour-market efficiency by better matching workers with employers” cites a report from the National Bureau of Economic Research about internet job searches that doesn’t mention AI at all.
Starmer’s press release claimed the International Monetary Fund (IMF) estimates that AI will improve productivity by 1.5 percent per year—but this turned out to be from an IMF report that was skeptical of a Goldman Sachs report making this claim. The original Goldman Sachs report says that electricity and personal computers raised GDP by about this much, so the authors just postulated that LLMs would do about the same—though they admit it is “highly uncertain.”
The TBI and the Labour Party have both received considerable donations from tech companies who have AI offerings to sell. The government’s plan is to give these vendors money to build LLMs and access to any public data they might want to train them on.
Given LLMs’ inherent susceptibility to confabulation, broad deployment is likely to end in a series of disasters. None of this stuff works well or reliably.
In education, curriculum planning and homework grading will be done with chatbots. An unnamed AI vendor will process existing materials to train an LLM to do these jobs. The government cites a “survey” in support of this, which was, in fact, an AI promotional focus group run by a vendor.
At its press launch, the LLM-based chatbot on the U.K. government’s official website sometimes answered English-language questions in Welsh. It wouldn’t answer “what is the corporation tax regime?” because it didn’t like the word “regime.” This was nevertheless an improvement on the previous version of the chatbot, which occasionally behaved seductively.
Starmer’s press release suggested AI-powered cameras spotting potholes in need of repair. But what blocks pothole repair is not finding the potholes; it’s that local council funding is down 40 percent since 2010.
The government’s AI recommendations will likely be forced through. AI “growth zones” will bypass planning restrictions for data center vendors Vantage, Nscale, and Kyndryl. In an echo of the U.S. Department of Government Efficiency, a Regulatory Innovation Office in the U.K. will force “behavioral changes within regulators.”
A proposed copyright exemption to hand British creative works to the AI companies for training is supposedly still in consultation until Feb. 25—but the government now states that this plan will go forward regardless.
LLMs are not economically sustainable—OpenAI spends $2.35 for each $1 of revenue. The AI bubble is propped up by billions in venture capital with nowhere else to go, trying to hype a market into existence, and corporations desperate to increase enterprise spend.
The purpose of the U.K. government’s plan is to pump the AI bubble with public money. When the bubble bursts, the treasury will have been drained and public services will be running on AI slop generators. The potholes will remain unfilled.
David Gerard is the author of the book Attack of the 50 Foot Blockchain and the cryptocurrency and blockchain news blog of the same name. He also co-writes Pivot-to-AI.com.