The majority of businesses admit to having no AI policy

More than half of businesses have no formal policy in place to govern the use of AI tools in the workplace, leaving them vulnerable to data risks, compliance issues, and uncertainty over accountability.

A poll of more than 500 employers, conducted by employment law and HR specialist WorkNest, revealed that 54% of organisations have no AI policy at all, while almost a quarter (24%) are still developing one. Only 13% currently have clear, documented rules in place.

The findings suggest businesses are struggling to keep up with rapid AI adoption, with many employees already using tools such as ChatGPT despite companies not having agreed on boundaries or responsibilities.

When asked about the biggest concerns surrounding AI use, 41% of respondents cited data protection and privacy risks as the number one issue, followed by misinformation or inaccurate outputs (30%) and legal or compliance challenges (16%). Around one in ten (11%) were worried about overreliance, whilst just 3% said they had no major concerns about AI in the workplace.

The survey also highlighted a lack of clarity over who should take ownership of AI governance. Almost half of respondents (47%) said senior leadership should set the rules, but over one in five (23%) admitted that no one in their organisation has specific responsibility for setting guidance on AI use.

Experts caution that a lack of clear policies around AI can expose organisations to significant risks not only in terms of data protection, compliance, and reputation, but also from an employment law perspective.

Alice Brackenridge, Employment Law Advisor at WorkNest, said: “Employers remain fully responsible for all workplace decisions influenced by AI, even when using third-party tools. This means that, should AI-driven decisions result in discrimination or other breaches, intentional or not, the business, not the technology provider, could face employment tribunal claims and substantial financial consequences.

“Without robust processes to monitor, regulate, and review AI outputs, which include conducting regular equality and bias assessments, organisations may inadvertently expose themselves to avoidable and costly legal challenges.

“Businesses cannot wait until something goes wrong. Proactive steps need to be taken by a combination of senior leadership, HR, IT and legal teams in order to set boundaries, establish policies and provide training. AI can deliver huge benefits, but only if it is managed responsibly and transparently.”

When asked about current levels of access to AI, 38% of organisations said they give restricted access, while 17% allow fully open access. Almost 45% said they had taken no formal stance on AI access, and a small minority (0.7%) said AI tools are completely banned.
