AI Act risks stalling innovation, says Oxylabs
In March, the European Parliament made history by adopting the Artificial Intelligence Act, establishing the world's first comprehensive legal framework for AI.
This landmark legislation aims to protect fundamental rights, uphold democracy, and ensure environmental sustainability amid the rapid rise of high-risk AI technologies.
The EU's goal with this act is to foster ethical AI use across the bloc. However, some critics argue that the act was rushed through under mounting pressure. While the ambition is to create a trustworthy AI landscape in Europe, the EU must carefully weigh the potential impact on the technology sector.
Denas Grybauskas, Head of Legal at Oxylabs, said: “As the AI Act comes into force, the main business challenge will be uncertainty during its first years. Various institutions, including the AI Office, courts, and other regulatory bodies, will need time to adjust their positions and interpret the letter of the law. During this period, businesses will have to operate in a partial unknown, lacking clear answers as to whether the compliance measures they put in place are solid enough.
“One compliance risk that is not being discussed is that the AI Act will affect not only firms that deal directly with AI technologies but also the wider tech community. Currently, the AI Act lays down explicit requirements and limitations targeting providers (i.e., developers), deployers (i.e., users), importers, and distributors of artificial intelligence systems and applications. However, some of these provisions might also create indirect liability for third parties in the AI supply chain, such as data collection companies.”
Most AI systems today are based on machine learning models that require an abundance of training data to ensure the model has adequate contextual understanding, is not outright biased, and does not hallucinate its outputs. As a result, AI developers are looking to scrape as much publicly available web data as possible. Although the AI Act does not target data-as-a-service (DaaS) companies and web scraping providers, these firms might indirectly inherit certain ethical and legal obligations.
Grybauskas continued: “A prime example is web scraping companies based in the EU, which will have to ensure they do not supply data to firms developing prohibited AI systems. If a company willingly cooperates with an AI firm that is breaking the law under EU regulation, such cooperation might bring legal liability. Moreover, web scraping providers will need to implement robust know-your-customer (KYC) procedures to ensure their infrastructure is used ethically and lawfully, verifying that an AI firm is collecting only the data it is allowed to collect and not copyright-protected information.”
“Another broad compliance-related risk that I foresee comes from the decision to grant certain exemptions under the AI Act for systems based on free and open-source licences,” added Grybauskas. “There is no single, consolidated definition of ‘open-source AI’, and it is unclear how the broadly defined open-source model might be applied to AI. This situation has already resulted in companies falsely branding their systems as ‘open-source AI’ for marketing purposes. Without clear definitions, even bigger risks will manifest if businesses start abusing the term to win legal exemptions.”
“The AI Act has the potential to establish trust across the industry, but it may also prove detrimental to innovation. Organisations must stay on their toes, as they may face penalties running into the millions for severe violations involving high-risk AI systems,” concluded Grybauskas.