
After launching a consultation on AI regulation last March, the U.K. government is finally releasing its response. The white paper that initiated the consultation expressed a preference for using existing laws and regulators, along with “context-specific” guidance, to lightly oversee the disruptive high tech sector.
The full response will be published later this morning, so we could not review it before writing this (update: you can find it online here). However, in a press release before the publication, the Department for Science, Innovation and Technology (DSIT) claims that the plan will enhance the U.K.’s “global leadership” through targeted actions — such as providing over £100 million (~$125 million) in extra funding — to strengthen AI regulation and stimulate innovation.
According to DSIT’s press release, regulators will receive £10 million (~$12.5 million) more in funding to “upskill” for their increased workload, which involves applying existing sectoral rules to AI developments and enforcing existing laws on AI apps that violate the rules (this may also include developing their own tech tools).
“The fund will enable regulators to conduct cutting-edge research and develop practical tools to monitor and address risks and opportunities in their sectors, from telecoms and healthcare to finance and education. For instance, this could involve new technical tools for inspecting AI systems,” DSIT states. It did not specify how many more staff could be hired with the extra funding.
The release also trumpets a much larger sum: £90 million (~$113 million) in funding that the government says will be used to create nine research hubs supporting homegrown AI innovation in areas such as healthcare, math and chemistry. It suggests the hubs will be located across the U.K.
The government’s preference for domestic AI innovation is evident from the 90:10 funding split — with the lion’s share going to the ‘homegrown AI development’ category, while ‘targeted’ enforcement on related AI safety risks is seen as a relatively minor supplementary task for regulators. (However, it should be noted that the government has previously allocated £100 million for an AI taskforce, dedicated to safety R&D around advanced AI models.)
DSIT told TechCrunch that the mechanism for the £10 million fund to enhance regulators’ AI capabilities is still being set up — saying the government is “moving swiftly” to do so. “But it is essential that we do this correctly to meet our goals and ensure that we are spending taxpayers’ money wisely,” a department spokesperson said.
The £90 million funding for the nine AI research hubs spans five years, starting from February 1. “The funding has been granted already with investments in the nine hubs varying from £7.2 million to £10 million,” the spokesperson said. They did not provide details on the focus of the other six research hubs.
The other main headline today is that the government is maintaining its plan not to enact any new laws for artificial intelligence yet.
“The UK government will not hurry to legislate, or risk applying ‘short-term’ rules that would soon become obsolete or inefficient,” writes DSIT. “Rather, the government’s context-based approach means existing regulators are authorized to address AI risks in a specific way.”
In an Executive Summary to its reply to the consultation, Michelle Donelan, the secretary of state for science, innovation and technology, also writes that “AI technologies will eventually need legislative action in every country once risk understanding has developed”.
She also proposes that “more targeted binding rules” might be needed to address the challenges posed by “highly capable general-purpose AI systems”, to make sure the few AI giants behind these models are “responsible” for making their technologies “adequately safe”. But there are no binding rules yet, as that would require new legislation.
“As AI systems improve in ability and social impact, it is obvious that some compulsory measures will ultimately be required in all jurisdictions to deal with possible AI-related harms, ensure public safety, and let us exploit the transformative possibilities that the technology offers. However, acting before we properly comprehend the risks and suitable solutions would damage our ability to benefit from technological progress while leaving us unable to adjust quickly to emerging risks,” Donelan says. “We are going to take our time to get this right — we will legislate when we are sure that it is the right thing to do.”
This adherence to the plan is unsurprising, given the government faces an election this year that polls indicate it will most likely lose. This looks like an administration running out of time to write laws on anything; certainly, time is running low in the current parliament. (And passing legislation on a tech topic as complex as AI is clearly beyond the current prime minister's reach at this point in the political cycle.)
At the same time, the European Union has just agreed on the final text of its own risk-based framework for regulating “trustworthy” AI, a long-awaited high tech rulebook set to start applying there later this year. The U.K.’s strategy of holding off on AI legislation and treading water on the issue therefore throws the contrast with the neighbouring bloc into sharp relief: the EU, taking the opposite approach, is now moving ahead (and further away from the U.K.’s position) by implementing its AI law.
The U.K. government clearly sees this strategy as the bigger draw for AI developers. The EU, for its part, believes businesses, even disruptive high tech businesses, flourish on legal clarity, and the bloc is unveiling its own package of AI support measures. It remains unclear which approach, sector-specific guidelines or a set of defined legal risks, will attract the most growth-driving AI “innovation”.
“The UK’s flexible regulatory system will enable regulators to react quickly to emerging risks, while giving developers space to innovate and expand in the UK,” is DSIT’s optimistic line.
Meanwhile, on business confidence specifically, the release highlights that “key regulators”, including Ofcom and the Competition and Markets Authority (CMA), have been asked to publish their approach to managing AI by April 30. It says this will require them to “set out AI-related risks in their areas, detail their current skillset and expertise to deal with them, and a plan for how they will regulate AI over the next year”. The implication is that AI developers operating under U.K. rules should get ready to read the regulatory signs, across multiple sectoral AI enforcement priority plans, to estimate their own risk of landing in legal trouble.
One thing is obvious: U.K. prime minister Rishi Sunak remains very comfortable in the company of techbros, whether he’s taking time off from his day job to interview Elon Musk for a stream on the latter’s own social media platform; finding time in his busy schedule to meet the CEOs of US AI giants and listen to their ‘existential risk’ lobbying agenda; or hosting a “global AI safety summit” to gather the tech faithful at Bletchley Park. His decision to pick a policy option that avoids any hard new rules right now was certainly the easy choice for him and his time-limited government.
On the other hand, Sunak’s government does appear to be in a rush in one respect: distributing taxpayer funding to boost homegrown “AI innovation”. The suggestion from DSIT is that these funds will be strategically aimed at ensuring accelerated high tech developments are “responsible” (whatever “responsible” means without a legal framework in place to define the contextual limits in question).
As well as the £90 million for the nine research hubs mentioned in DSIT’s PR, there’s an announcement of £2 million in Arts & Humanities Research Council (AHRC) funding to support new research projects the government says “will help to define what responsible AI looks like across sectors such as education, policing and the creative industries”. These are part of the AHRC’s existing Bridging Responsible AI Divides (BRAID) program.
Additionally, £19 million will go toward 21 projects to develop “innovative trusted and responsible AI and machine learning solutions” aimed at accelerating deployment of AI technologies and driving productivity. (“This will be funded through the Accelerating Trustworthy AI Phase 2 competition, supported through the UKRI [UK Research & Innovation] Technology Missions Fund, and delivered by the Innovate UK BridgeAI program,” says DSIT.)
In a statement accompanying today’s announcements, Donelan said:
The UK’s inventive approach to AI regulation has made us a global leader in both AI safety and AI development.
I am personally motivated by AI’s potential to improve our public services and the economy for the better — leading to new treatments for harsh diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.
AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have started to handle the risks right away, which in turn is clearing the way for the UK to become one of the first countries in the world to enjoy the benefits of AI safely.
Today’s £100 million+ (total) in funding announcements come in addition to the £100 million the government previously announced for the aforementioned AI safety taskforce (since turned into the AI Safety Institute), which is focused on so-called frontier (or foundational) AI models. DSIT confirmed this is new money when we asked.
We also asked about the criteria and processes for awarding U.K. taxpayer funding to AI projects. We have heard concerns that the government’s approach may sidestep a thorough peer review process, with the risk that proposals are not rigorously examined in the rush to get funding distributed.
A DSIT spokesperson responded by denying there has been any change to the usual UKRI processes. “UKRI funds research on a competitive basis,” they said. “Individual applications for research are assessed by relevant independent experts from academia and business. Each proposal for research funding is assessed by experts for excellence and, where applicable, impact.”
The spokesperson said that “DSIT is working with regulators to finalise the details [of project oversight] but this will be centred around regulator projects that support the implementation of our AI regulatory framework to ensure that we are making the most of the transformative opportunities that this technology has to offer, while reducing the risks that it poses.”
On foundational model safety, DSIT’s PR suggests the AI Safety Institute will “see the UK collaborating closely with international partners to increase our ability to evaluate and research AI models”. And the government is also announcing an additional investment of £9 million, via the International Science Partnerships Fund, which it says will be used to bring together researchers and innovators in the U.K. and the U.S. — “to focus on developing safe, responsible, and trustworthy AI”.
The department’s press release continues to describe the government’s response as laying out a “pro-innovation case for further targeted binding rules on the small number of organisations that are currently developing highly capable general-purpose AI systems, to ensure that they are responsible for making these technologies sufficiently safe”.
“This would build on steps the UK’s expert regulators are already taking to respond to AI risks and opportunities in their domains,” it adds. (And on that front the CMA put out a set of principles it said would guide its approach towards generative AI last fall.)
The PR also talks enthusiastically of “a partnership with the US on responsible AI”. Asked for more details on this, the spokesperson said the aim of the partnership is to “bring together researchers and innovators in bilateral research partnerships with the US focused on developing safer, responsible, and trustworthy AI, as well as AI for scientific uses” — adding that the hope is for “international teams to examine new methodologies for responsible AI development and use”.
“Developing common understanding of technology development between nations will enhance inputs to international governance of AI and help shape research inputs to domestic policy makers and regulators,” DSIT’s spokesperson added.
While they confirmed there will be no U.S.-style ‘AI safety and security’ Executive Order issued by Sunak’s government, the AI regulation White Paper consultation response dropping later today sets out “the next steps”.
This report was updated with a link to the government’s response to the consultation, once published, and with secretary of state Donelan’s remarks on why the government is not introducing AI legislation yet, as well as the case for putting some “binding rules” on highly capable general-purpose AI systems at some point.