
The EU AI Act one year on: how is it impacting sectors?
One year on from going live, the EU AI Act has moved from concept to reality, forcing businesses, regulators, and innovators alike to turn responsible AI from principle into practice. The legislation has set a global benchmark, and its ripple effects are being felt across industries, sparking both compliance challenges and new opportunities for competitive advantage.
Discussing the Act’s impacts in detail, thought leaders have singled out the August implementation of the General-Purpose AI (GPAI) provisions as a key development: providers of GPAI models must now meet requirements for transparency, technical documentation, copyright compliance, and risk management.
With this, enforcement and penalties come into effect. Regulatory authorities across EU member states are now empowered to enforce the Act, with penalties reaching up to €35 million or 7% of global revenue for violations. Member states are required to put governance infrastructure in place: designating national authorities, notifying conformity assessment bodies, and beginning formal reporting on AI oversight.
How is the EU AI Act changing the way companies manage AI?
One year on from the EU AI Act entering into force, there is a clear urgency for organisations to adhere to the regulation and maintain the compliance standards it rigorously demands. The European Commission has adopted a phased approach to implementing the Act’s articles, and as each comes into effect, non-compliance carries consequences.
Examining the finer points of the Act’s Code of Practice and its ramifications for openness about AI usage and provision, Agur Jõgi, CTO, Pipedrive, said: “As AI becomes integral to software used by small, medium, and large firms, organisations face a dilemma: how to adopt new features ahead of competitors without compromising safety or trust. Recent Pipedrive research found 48% of businesses cite lack of knowledge as the primary barrier to AI adoption.
“The EU AI Act puts a premium on governance and transparency. That requires all tech talent to commit to continuous education and to gold-plated standards for deploying AI responsibly. Teams, and the technology industry as a whole, must share strong and easily implemented best practices, because SaaS suppliers are not simply providing tools; they empower people with solutions to real problems. That requires effectiveness, transparency, and trust in AI features.
“Coming soon after the GPAI Code of Practice, this is the EU sending a strong signal to industry that Europe is serious about AI safety and ensuring AI solutions perform as promised.”
What are the ways businesses can best handle EU AI Act compliance?
Diving into the specifics of compliance, companies need the right people to handle the data management, transparency, and documentation requirements, so they can withstand regulatory scrutiny.
Framing this as a skills issue, Nikolaz Foucaud, Managing Director EMEA, Coursera said: “The next application of the EU AI Act marks a turning point for AI oversight, obliging organisations to meet rigorous compliance standards. General-Purpose AI obligations require transparency, detailed technical documentation, data mapping and risk management. With 78% of companies now deploying AI in at least one business function, according to McKinsey, it’s essential that enterprises cultivate AI literacy across their organisations to manage it effectively and legally.
“This compliance challenge is also a skills challenge. Coursera research shows that UK tech leaders are already concerned about cross-functional GenAI literacy, with 63% of technology leaders observing that non-technical teams consistently underestimate the resources and training needed to achieve GenAI objectives. For the cross-functional teams that will be involved in ensuring compliance with this legislation, this represents an urgent imperative, particularly as British tech leaders are already seeing skills gaps jeopardising their strategic priorities: more than half (52%) do not think their current team has the skills to meet business transformation goals in the next 12-18 months.
“With AI transformation high on the agenda and organisations now open to significant penalties for non-compliance, attracting and developing AI-literate talent needs to be top of the agenda as British businesses seek to thrive in an age of accelerating AI regulation.”
In the context of the EU AI Act, why is balancing humans and machines important?
As the Act emphasises transparency and accountability, organisations must carefully evaluate how they uphold ‘human-in-the-loop’ safeguards. These oversight measures remain essential, especially as new technologies emerge. Generative AI, with its tendency to produce inaccurate or misleading outputs, can still fall short despite ongoing refinements. Ensuring that humans remain central to AI deployment is therefore critical to responsible and reliable use.
Drilling into this topic, Eduardo Crespo, VP EMEA, PagerDuty, said: “As the EU AI Act begins implementation, enterprise leaders must focus on operationalising AI responsibly, not just compliantly. That means understanding how AI systems behave in real time and having the ability to intervene immediately when something goes wrong. For organisations applying AI in critical operations, such as incident management or infrastructure monitoring, this level of oversight is essential to ensure good customer service and help prevent problems from spreading in ways that would actively harm that service.
“These growing regulations reinforce the need for transparency, traceability and control in AI systems. Enterprises should treat this as an opportunity to embed more robust AI operations practices, ensuring that models perform reliably and align with business and ethical standards. Operational resilience and human-in-the-loop safeguards are best practices, ensuring users are supported by growing AI capabilities, and not alienated by poor experiences that damage trust in technology providers.”
Looking ahead to the next year of the EU AI Act
While the EU AI Act may still be in its infancy, at just a year old, it has already shifted the conversation – from speculative ethics to enforceable standards. As sectors adapt at different speeds, one thing is clear: the Act is more than just regulation; it’s a catalyst shaping how AI is built, governed, and trusted worldwide.
This regulation has been styled as “the first comprehensive regulation on AI by a major regulator anywhere”. Its long-term impacts will be felt keenly not just in Europe, but around the world, and it will be interesting to see how the regulation of emerging technologies starts to develop in other countries and regions.
If GDPR offers any lesson about regulating data practices, with over €6 billion in fines issued as of August 2025, it is that enforcement follows regulation over time. We could see a similar pattern emerge with the EU AI Act, which is why getting compliance strategy right here and now is vital for businesses that want to avoid costly mistakes down the line.
For more startup news, check out the other articles on the website, and subscribe to the magazine for free. Listen to The Cereal Entrepreneur podcast for more interviews with entrepreneurs and big-hitters in the startup ecosystem.