UK’s AI Security Institute to protect against AI risks to national security

The UK’s AI Safety Institute has been recast as the UK AI Security Institute, bolstering protections against AI risks facing national security and crime, and delivering a key pillar of the government’s Plan for Change.

The pivot reflects a focus on serious AI risks with security implications, including malicious cyber-attacks, fraud, and the development of weapons.

The AI Security Institute will partner with several government departments, including the Defence Science and Technology Laboratory (the Ministry of Defence’s science and technology organisation), to assess the risks that frontier AI poses to UK security infrastructure.

Setting out his vision for the revamped AI Security Institute in Munich, Technology Secretary Peter Kyle said: “The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change.”

“The main job of any government is ensuring its citizens are safe and protected, and I’m confident the expertise our institute will be able to bring to bear will ensure the UK is in a stronger position than ever to tackle the threat of those who would look to use this technology against us.”

As part of the update, the Institute is launching a new criminal misuse team, conducting research alongside the Home Office on crime and security issues which threaten society.

The government acknowledged the importance of the national security community in supporting the revamped focus, building on the expertise of the National Cyber Security Centre (NCSC). The Institute and these partners will work jointly to understand the most serious risks posed by AI, building a body of research to inform policymakers and keep the UK safe as the technology develops.

Achi Lewis, Area VP EMEA for Absolute Security, said: "The establishment of the UK AI Security Institute is a crucial step in safeguarding national security against AI-driven threats. With AI increasingly being weaponised in cyber-attacks, the urgency for robust defences has never been greater. Our research highlights how 54% of CISOs feel unprepared for AI-driven attacks. This proves the need for stronger cyber resilience frameworks, enhanced network visibility, and proactive security measures. Security leaders must act now to mitigate risks before they escalate."

The announcement follows the AI Action Summit in Paris, where the UK and US declined to sign an international agreement which set out to ensure AI development is “transparent”, “safe”, and “secure and trustworthy”, citing concerns about national security and global governance.
