The EU AI Act: a stepping stone in the journey towards ethical AI

The European AI market is booming. According to global VC firm Atomico's State of European Tech report, Europe's AI talent base has grown more than tenfold over the past decade to over 120,000 professionals currently employed in AI roles – more than in the US.

By 2030, AI is forecast to add more than €11 trillion to the global economy, according to industry estimates. In a bid to keep up with global tech leaders, the European Union is intensifying its push to integrate AI into business and advanced technologies. Building up digital infrastructure and capabilities across member states will be crucial to restoring European productivity and competitiveness.

Europe is now innovating in AI regulation as well as adoption, creating a blueprint for ethical use for the rest of the world. Last month saw the introduction of the EU AI Act – the first-ever comprehensive legal framework on AI.

The Act defines four categories of risk for AI systems: minimal risk, limited risk, high risk, and unacceptable risk. But what are its practical implications for startups?

The Act in practice

This landmark legislation marks a significant and proactive step towards creating a framework for the responsible development and deployment of AI technologies. The EU AI Act's risk-based approach is commendable, as it recognises that not all AI applications carry the same risk to individuals and society.

However, we must also consider the potential challenges this regulation may pose for startups and SMEs in the AI space. The compliance requirements, while necessary, could potentially create barriers to entry or innovation if not carefully balanced. It's crucial that the implementation of the Act includes support mechanisms for smaller companies to ensure a level playing field. While the EU AI Act sets a strong precedent, I hope to see collaborative efforts towards global standards that can facilitate innovation while protecting fundamental rights across borders.

Adapting to the Act

To comply with the EU AI Act, startups must ensure their AI-driven services adhere to strict requirements safeguarding human rights, privacy, and transparency. AI systems must first be classified by risk level, with appropriate safeguards then applied according to that classification. For example, high-risk AI applications in sectors such as healthcare or finance must undergo rigorous testing and certification to ensure compliance with safety, non-discrimination, and accuracy standards.
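The classify-then-safeguard workflow above can be sketched in code. This is an illustrative sketch only: the example use cases and the `required_safeguards` mapping are simplified assumptions of mine, not a legal interpretation of the Act, and real classification requires analysis against the Act's actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    MINIMAL = "minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "spam filtering": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "CV screening for hiring": RiskTier.HIGH,
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
}

def required_safeguards(tier: RiskTier) -> list[str]:
    """Simplified sketch of the escalating obligations attached to each tier."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited - may not be deployed in the EU"]
    if tier is RiskTier.HIGH:
        return ["conformity assessment", "risk management system",
                "human oversight", "logging and documentation"]
    if tier is RiskTier.LIMITED:
        return ["transparency obligations (disclose AI use)"]
    return ["voluntary codes of conduct"]

# Example: a hiring tool falls in the high-risk tier and carries the heaviest
# obligations short of prohibition.
for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} -> {required_safeguards(tier)}")
```

The point of the tiered structure is that obligations scale with potential harm, which is why a minimal-risk tool faces almost no burden while a high-risk system must clear certification before deployment.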

Transparency is critical, meaning companies must disclose the use of AI in decision-making processes and provide clear documentation of how AI systems function. Data governance is also essential, requiring companies to ensure that AI training datasets are robust, fair, and free from bias. Startups should commit to regular audits to maintain accountability. I’d also recommend that businesses work with legal and compliance professionals to closely monitor AI operations and ensure they align with the evolving regulations of the Act.

The impact on innovation

Across the AI industry, companies of all sizes have voiced concerns that strict regulation could slow the pace of innovation. From startups to enterprises, there is a worry that such a sharp focus on compliance could create barriers, leading to rising costs, complex administration and a stifled spirit of innovation. Startups, in particular, worry that the regulatory burden will put them at a disadvantage compared to well-resourced, established tech giants.

I believe the nuanced risk-based perspective of the Act should allow for innovation in lower-risk areas while ensuring appropriate safeguards for high-risk applications.

Looking to the future

As an AI startup, we've always prioritised ethical AI development and use as a first principle. The Act's emphasis on transparency, accountability, and human oversight aligns with our core values. These principles are crucial for building trust in AI systems, particularly in sensitive areas like global mobility, talent management, and immigration.

Looking ahead, businesses within the sector must commit not only to complying with the EU AI Act but also to contributing to the ongoing dialogue about responsible AI development. The journey towards ethical and responsible AI is ongoing, and the EU AI Act marks a milestone along the way. I welcome a precedent that should ensure AI businesses can innovate in ways that benefit not only their clients but society at large.