Using AI to scale up? How to avoid regulatory blocks

AI has the potential to bring huge growth at a rapid pace to scaling businesses. But there are a range of legal issues to bear in mind.

In this article, Brett Lambe, an experienced technology lawyer at Ashfords, explores five key areas for founders and business owners to consider.

The astonishing growth in artificial intelligence (AI) powered products and services has been one of the biggest stories in tech over the last couple of years. From machine learning to generative AI to automation, AI has well and truly broken through and is now available at scale to consumers and businesses, providing an ever-increasing range of possibilities. 

AI's transformative potential for small and medium-sized enterprises (SMEs) is immense, offering a competitive edge that was once the sole domain of much larger entities. By leveraging AI tools, SMEs can not only level the playing field but also enhance their capabilities to attract new customers and, crucially, meet those customers' needs more quickly and efficiently. This game-changing development is opening doors to unprecedented growth opportunities.

However, the journey toward AI integration is not without its challenges, even in key tech markets like the UK, where the regulatory landscape is still taking shape. 

In this article, we'll look at five key areas where founders and business owners should take time to make sure they are using these tools with proper care, so that they comply with the law and avoid potentially costly mistakes further down the road.

Bias and discrimination

AI is increasingly integrated into business operations, including decision-making processes, promising efficiency and objectivity. In reality, however, AI systems are only as objective in their decision making as the data they are trained on. If the data used to train AI algorithms is biased, the resulting decisions and outcomes can also be biased – and this can happen unwittingly or unconsciously.

Despite safeguards, bias can persist, leading to potential legal liabilities and reputational damage for businesses. Examples include an organisation using AI in recruitment that inadvertently favours certain demographics, conducting AI-based performance reviews that reflect underlying gender bias, or implementing customer service chatbots that exhibit racial bias.

Addressing this challenge requires coupling the use of AI in management decisions with robust human oversight mechanisms and regularly monitoring and auditing AI systems to identify and remove discriminatory patterns or outcomes.
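As a practical illustration of the kind of monitoring such oversight might involve, the sketch below (in Python, using hypothetical data and function names, and assuming a simple screening tool whose outcomes can be grouped by demographic) compares selection rates across groups and flags large disparities for human investigation. It is a minimal starting point, not a legal test of discrimination.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the share of positive outcomes per demographic group.

    `decisions` is a list of (group, selected) pairs, e.g. ("group_a", True).
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (a common rule of thumb, not a legal standard)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Example: outcomes from an AI screening tool, grouped for audit purposes.
decisions = [("group_a", True)] * 60 + [("group_a", False)] * 40 \
          + [("group_b", True)] * 35 + [("group_b", False)] * 65

rates = selection_rates(decisions)
print(rates)                   # {'group_a': 0.6, 'group_b': 0.35}
print(disparity_flags(rates))  # {'group_a': False, 'group_b': True}
```

A flagged disparity is a prompt for human review of the underlying data and model, not proof of bias in itself.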

Data privacy and security

Businesses must safeguard the integrity of all the personal data that feeds the AI systems they use, in accordance with data protection laws such as the GDPR, to avoid fines and reputational harm. SMEs operating AI systems must ensure that the data which feeds the system has been properly obtained and that using it in this way is permissible. Businesses should also keep in mind that this applies to personal data about all individuals, including their customers and employees.

Where the AI solution is provided by a third party, businesses will also need to be alert to their data being used by that third party to provide services to other employers. Reviewing and understanding the commercial terms is key.

IP rights

Organisations must understand the intellectual property implications of their use of AI. As AI technology evolves, questions continue to arise regarding IP rights in, and ownership of, the content generated by AI systems – whether social media content, automated product designs, or data analytics insights.

The law is constantly evolving on this matter and often moves more slowly than the technology it seeks to govern. This is particularly true of AI, an area of significant technical and legal complexity. This combination may well leave businesses vulnerable to future disputes over ownership rights, particularly scaling businesses that do not have the financial firepower to fight a legal battle.

When using third-party AI tools or APIs, SMEs should review licensing agreements and supplier terms and conditions to ensure they can comply with them, and in particular should take care to review the ownership and usage rights for the AI technology and any content or data generated through its use.

Meaningful human review

Tokenistic human review of AI system outputs may mean that decisions are, in effect, solely automated, which carries legal implications.

Where automated decisions have legal or similarly significant effects, businesses must ensure there is meaningful human review. Business leaders should ensure human reviewers are able to intervene in automated decisions, and should maintain a record of what information the human reviewer saw when making the final decision. Organisations should also consider what tools human reviewers need to make a meaningful final decision, and how to record that those tools were properly used.
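By way of illustration only, the sketch below (in Python, with hypothetical field names and a made-up decision ID) shows one way a business might record what a human reviewer saw, whether they could intervene, and what the final decision was, so that meaningful review can be evidenced later.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ReviewRecord:
    """Illustrative record of a human review of an AI-assisted decision."""
    decision_id: str
    ai_recommendation: str        # what the system proposed
    information_shown: list[str]  # what the reviewer actually saw
    reviewer: str
    reviewer_can_override: bool   # reviewer had real power to intervene
    final_decision: str
    overridden: bool
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ReviewRecord(
    decision_id="loan-2024-0042",
    ai_recommendation="decline",
    information_shown=["application form", "credit report", "model rationale"],
    reviewer="j.smith",
    reviewer_can_override=True,
    final_decision="approve",
    overridden=True,
)

# Persist the record so the review trail can be evidenced later.
print(json.dumps(asdict(record), indent=2))
```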

Job displacement and skills development

One of the most contentious aspects of AI for employees is job displacement or skill obsolescence. The potential for job automation in various industries implies that fewer employees may be needed for certain tasks. All business owners must proactively address this by considering upskilling or reskilling initiatives so that their workforce is able to adapt to the human impact of AI and related technological advancements.

Additionally, entrepreneurs need to carefully balance this against the necessity for an ongoing employee pipeline. If traditional routes to senior roles through junior positions disappear due to automation, businesses must develop new training methods to prepare the next generation of employees to lead the company.

By prioritising skill development, business founders can mitigate the adverse effects of job displacement and ensure a resilient workforce capable of navigating AI-driven changes in the business landscape.

Looking ahead

It is undeniable that the speed of progress of AI has proved a game-changer for businesses of every size. By delivering major opportunities for improved efficiency and productivity, AI can help every business streamline its processes and improve its overall performance. By ‘outsourcing to AI’, many tasks that were once time-consuming and error-prone can now be automated, enabling all businesses to ‘do more with less’.

With AI tools acting as a catalyst for innovation, scaling businesses are perfectly placed to take advantage: their agility and willingness to disrupt can help them forge their own path to success, at their own pace.

While the technological and commercial possibilities are enormously exciting, the legal and regulatory framework is continuing to develop in the UK, and some degree of caution is needed. 

The EU recently passed the Artificial Intelligence Act (AI Act), an EU-wide regulation which aims to establish a common regulatory and legal framework for AI. Like the GDPR before it, this is likely to be a wide-ranging piece of legislation which will apply not just to businesses located in the EU, but also to global businesses (including those in the UK, US, and Asia) looking to trade in the EU. As such, although other countries may take a different approach, the AI Act is likely to prove a benchmark for other lawmakers.

Headline penalties for breaches of the AI Act are eye-watering: the higher of €15 million or 3% of annual global turnover, rising to the higher of €35 million or 7% of annual global turnover for prohibited practices, such as the use of AI-enabled manipulative or deceptive techniques or biometric categorisation systems that use sensitive personal characteristics.
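To make the ‘higher of’ mechanics concrete, here is a minimal sketch (in Python, using a hypothetical turnover figure) of how the headline caps scale with annual global turnover.

```python
def ai_act_cap(turnover_eur, prohibited_practice=False):
    """Return the headline AI Act penalty cap: the higher of a fixed amount
    or a percentage of annual global turnover."""
    fixed, pct = (35_000_000, 0.07) if prohibited_practice else (15_000_000, 0.03)
    return max(fixed, pct * turnover_eur)

# Hypothetical scale-up with €200m annual global turnover.
print(ai_act_cap(200_000_000))                            # 15,000,000 (3% = €6m, so the fixed cap applies)
print(ai_act_cap(200_000_000, prohibited_practice=True))  # 35,000,000 (7% = €14m, so the fixed cap applies)
```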

Given the pace of change, there are some steps that founders and business owners should take now. The first priority should be to assess your use of AI (whether as a customer or vendor), then engage advisers who are familiar with the technology and the potential legal implications, and work together to develop and implement proper guidance, policies, and guardrails. This cannot be a ‘tick box’ exercise. These policies should apply throughout the company. A major benefit for a scaling business is being agile enough to ensure these policies are ‘baked in’ at an early stage of your growth journey, becoming part of your organisational and cultural DNA and building a strong ethical foundation for your business.

While compliance is always an ongoing process, taking these steps will help entrepreneurs strike a balance between benefiting from the efficiencies of these evolving technologies and complying with ethical and legal standards as they emerge.