Nearly half of UK employees use unauthorised AI tools

Nearly half of UK employees admit to using non-approved AI solutions without their employer’s knowledge, as AI tools become increasingly common in workplaces, according to Owl Labs' State of Hybrid Work Report.

This so-called ‘shadow AI’ or ‘bring your own AI’ (BYO-AI) is most prevalent among younger workers, with 63% of Gen Z and Millennials using AI regularly at work, compared to just 43% of Gen X and Boomers.

The rise of shadow AI in enterprises has been a growing concern over the past year, with both IT and security leaders raising alarms about the risks associated with unauthorised AI tool usage.

A recent Deloitte study found that nearly one-third of workers are paying for and using AI tools that have not been authorised by their employers. The report also highlighted that 19% of employees believe a significant portion of the workforce in the UK is using generative AI without employer approval, while another 45% feel the same about a moderate number of workers.

When employees were asked why they use these non-approved applications, nearly 40% reported that they don't perceive any risk. Additionally, about a third of employees expressed doubts about whether their company could track their use of unauthorised tools.

However, experts from BCS, the Chartered Institute for IT, warn that using non-approved tools exposes both individuals and enterprises to serious risks. These include potential breaches of data privacy regulations, increased security vulnerabilities, and possible violations of intellectual property rights.

Sachin Agrawal, Managing Director of Zoho UK, commented: “With the AI boom, staff and enterprises alike are eager to implement and use AI in their daily operations, but it’s important that the necessary precautions are taken to mitigate potential risks like shadow AI. Organisations need to have clear, centrally driven policies in place to ensure AI is used in the right way, providing benefits to staff and maximising ROI for the organisation. These tools can have a transformative impact in areas including data analysis and customer service, so integrating them in a controlled way can have significant benefits.

“If AI use cases are not set up in the right way, with comprehensive guidelines, and central data sources that house quality, clean data, then it won’t be able to do its job properly. Having these systems centrally vetted will also identify and mitigate the security and data privacy risks of AI tools, as well as ensure compliance with regulations such as the EU AI Act, to ensure safe and trustworthy AI use across the business.

“When adopting AI tools across departments, businesses should provide staff with education and training not only on the risks, but on the applications of AI to assist their day-to-day tasks. Staff who are provided with upskilling opportunities and are able to capitalise on AI tools will be able to boost efficiency and free themselves up to focus on higher-impact, revenue-driving tasks.”

"As hybrid work becomes the norm, workforces are shifting from merely experimenting with AI tools to actively using them to boost productivity and efficiency in their everyday work," said Frank Weishaupt, CEO of Owl Labs.

Weishaupt emphasises the need for a strategic and coordinated approach to AI integration, ensuring it is both purposeful and secure. Business leaders must implement safeguards to mitigate risks from unchecked AI use.

Additionally, employees must be equipped with the skills and confidence to use these tools effectively, both at home and in the office, to support a productive and resilient hybrid workforce.