The EU AI Act: The UK Outlook
Following an artificial intelligence (AI) boom in the last few years, countries around the world have been racing to develop regulations and establish themselves as leaders in the field.
The UK shares that vision, with its AI Safety Summit and various parliamentary discussions about taking a pro-innovation approach to AI. However, it’s the EU that is currently leading the charge on AI regulation, having enacted the EU AI Act in 2024: a comprehensive legal framework for the development and use of AI.
Many businesses are uncertain about how this new Act will affect them and their use of AI. This is especially true in the UK, which has yet to establish a formal regulatory framework governing AI, even though certain UK companies will still be required to comply with the EU AI Act. It’s therefore important for businesses to gain a comprehensive grasp of how the EU AI Act works and the impact it has on both businesses and individuals.
How the EU AI Act works
The EU AI Act governs the development and deployment of AI systems in the EU, setting out rules to ensure AI is used ethically and respects individuals’ fundamental rights. A key aspect of the Act is its risk-based approach to regulation, which categorises AI systems into different levels of risk based on their potential impact on safety and fundamental rights.
Minimal risk covers AI systems that pose negligible risks to rights and safety. Limited risk covers systems that require basic transparency measures so users can make informed decisions, such as chatbots. High risk covers systems that could compromise individuals’ rights or safety, such as biometric identification systems, and which therefore face strict compliance requirements. Finally, unacceptable risk covers AI practices that so severely threaten rights or safety that they are explicitly prohibited by law.
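For teams taking stock of their AI estate, the four tiers can be thought of as a simple classification exercise. Below is a minimal, illustrative Python sketch of such an inventory; the tier names follow the Act, but the example systems, comments, and function names are our own assumptions rather than anything defined in the legislation:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"            # e.g. spam filters: no extra obligations
    LIMITED = "limited"            # e.g. chatbots: transparency duties apply
    HIGH = "high"                  # e.g. biometric ID: conformity assessment required
    UNACCEPTABLE = "unacceptable"  # prohibited practices: may not be deployed

# A hypothetical inventory of one company's AI systems.
ai_systems = {
    "customer_support_chatbot": RiskTier.LIMITED,
    "cv_screening_model": RiskTier.HIGH,
    "product_recommender": RiskTier.MINIMAL,
}

def needs_conformity_assessment(inventory: dict[str, RiskTier]) -> list[str]:
    """Return the names of systems that would require a conformity assessment."""
    return [name for name, tier in inventory.items() if tier is RiskTier.HIGH]

print(needs_conformity_assessment(ai_systems))  # -> ['cv_screening_model']
```

Even a rough inventory like this helps a business see at a glance which of its systems fall into the high-risk tier, where the heaviest obligations sit.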
The Act mandates that high-risk systems undergo rigorous conformity assessments before being placed on the market or put into service. These assessments evaluate an AI system’s compliance with legal requirements covering data quality, transparency, human oversight, and documentation. By requiring these assessments, the EU aims to enhance trust in AI technologies, promote innovation, and safeguard individuals from potential harm caused by AI systems. The Act also highlights the importance of human oversight in AI-assisted decision-making to prevent discrimination, bias, and other ethical harms.
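As a rough illustration of what preparing for such an assessment might look like internally, here is a hedged Python sketch of a self-audit checklist. The four requirement areas come from the Act as summarised above, but the field names, structure, and pass/fail logic are purely illustrative assumptions, not the official assessment procedure:

```python
from dataclasses import dataclass

@dataclass
class ConformityChecklist:
    """An illustrative internal self-audit, not the Act's official procedure."""
    system_name: str
    data_quality_documented: bool = False   # training data provenance and bias checks
    transparency_measures: bool = False     # user-facing disclosures and instructions
    human_oversight_in_place: bool = False  # a human can intervene or override decisions
    technical_documentation: bool = False   # records available for regulators

    def gaps(self) -> list[str]:
        """List the requirement areas that still lack evidence."""
        checks = {
            "data quality": self.data_quality_documented,
            "transparency": self.transparency_measures,
            "human oversight": self.human_oversight_in_place,
            "documentation": self.technical_documentation,
        }
        return [area for area, done in checks.items() if not done]

audit = ConformityChecklist("cv_screening_model", data_quality_documented=True)
print(audit.gaps())  # -> ['transparency', 'human oversight', 'documentation']
```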
Through the Act, the EU has also introduced the European AI Board, which is responsible for overseeing the application and enforcement of the regulation across EU member states; its remit extends to UK AI businesses operating in the EU to any extent. By centralising regulatory oversight, the board aims to ensure a harmonised approach to AI regulation throughout the EU, ultimately fostering a trustworthy and responsible AI ecosystem.
How it affects UK businesses
With the UK no longer in the EU following Brexit, UK companies that develop or deploy AI technologies in the EU will need to comply with the Act to avoid penalties and to demonstrate ethical AI practices. Businesses dealing with high-risk AI will need to allocate resources to meet the stringent requirements set out in the legislation, such as conducting conformity assessments, implementing transparency measures, and ensuring human oversight. This could increase operational costs and potentially slow the pace of AI innovation within these organisations.
The Act also influences how businesses approach AI development and deployment strategies. At Slalom, we’re urging companies to prioritise ethical considerations, data quality, and transparency in their AI projects now, so they align with the regulatory standards. This shift towards responsible AI practices not only helps businesses comply with the law but also enhances their reputation and credibility with consumers and stakeholders. By demonstrating a commitment to ethical AI, businesses can build trust with customers, mitigate the risks associated with AI misuse, and differentiate themselves in the market as responsible AI innovators.
Ultimately, the AI Act aims to create a level playing field by establishing clear rules and standards for AI usage across industries. For example, companies using AI in finance will need to comply with stricter rules on algorithm transparency and accountability, while in healthcare, AI applications must adhere to data protection and patient safety standards. This framework promotes fair competition by holding all businesses to the same ethical guidelines and transparency requirements when developing and deploying AI systems. By setting common standards, the AI Act also enhances consumer protection and encourages innovation in a responsible manner. UK businesses that embrace the principles outlined in the Act can not only navigate the evolving regulatory landscape effectively but also gain a competitive advantage by building trust and credibility with customers in the EU.
The possibility of further regulation
With this legislation being the first of its kind, and given the evolving AI landscape, additional regulation beyond the EU AI Act may be necessary to address emerging challenges and ensure responsible AI development globally. Certain areas could benefit from further regulation, including AI-powered autonomous systems, facial recognition technology, deep learning algorithms, and AI in social media content moderation.
There’s still a lot of fear associated with AI, and comprehensive regulation could help mitigate this concern. By strengthening safeguards against potential misuse in these developing areas, the UK can tackle ethical concerns head-on.
Where the UK is heading
It’s likely that the UK will follow the EU’s example and introduce its own set of regulations. In fact, it’s important that it does, in order to maintain its competitiveness on the world stage.
The UK has been looking to become a leader in AI and, by hosting the first AI Safety Summit at Bletchley Park, has shown a commitment to promoting innovation while upholding ethical standards in AI development.
However, in the recent King’s Speech, the only mention of AI was that the government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. While this focuses on generative AI (genAI) models, the UK will need legislation covering all types of AI, not just genAI, as the EU AI Act has done. The UK will therefore need to take inspiration from the EU AI Act when establishing its own regulatory framework, ensuring a balance between fostering technological advancement and protecting individual rights and societal values.
Final thoughts
To become a leader in AI, the UK undoubtedly still has a way to go, especially when it comes to establishing regulations and determining how they would be enforced in practice. It will need to strike a balance similar to the EU AI Act’s: putting safeguards in place without stifling innovation.