UK companies lack accountability for AI errors

New findings from the 2025 'Global State of AI at Work' report reveal a workforce caught between ambition and reality.

Workers expect to delegate just under a third (32%) of their workload to AI within a year, and 41% within three years – yet only 25% say they're ready to do so today. The findings are based on a survey of 2,025 workers (including 1,021 in the UK), commissioned by Asana's Work Innovation Lab.

But where adoption is underway, trust isn't keeping pace. Nearly two-thirds (64%) of employees say AI agents are unreliable, and over half report that agents ignore feedback or confidently share incorrect information. Compounding this, Asana's findings reveal that the average organisation has put no accountability measures in place for AI agents' mistakes.

In spite of these challenges, adoption is spreading quickly. 74% of workers are using agents in some capacity, and 76% view them as a fundamental shift in how work is done, not just another productivity tool. The tasks where workers favour AI agents over generative AI are organising documents (30%), finding relevant documents (29%), and scheduling meetings (26%). In fact, 68% would prefer to delegate some of these tasks to AI agents rather than humans.

But Asana’s findings reveal that without clear accountability, training, or context, adoption risks stagnating at the ‘admin’ level rather than progressing to the more sophisticated tasks that allow organisations to see real returns.

The accountability vacuum

When AI agents fail, no one agrees who should be accountable. Workers split blame among the end user (20%), IT teams (18%), and the agent’s creator (9%). Over a third (39%) say no one is responsible or admit they simply don’t know who to blame.

Few organisations have guardrails in place: only 10% have clear ethical frameworks for agents, 10% have deployment processes, and just 9% review employee-created agents. This lack of clarity means oversight is equally inconsistent: 26% of organisations allow employees to create agents without management approval, and only 12% have clear agreements about which tasks should be handled by humans versus AI.

Just 18% of organisations even measure AI agent errors, despite 63% of workers saying accuracy should be the top metric. Without consistent rules or accountability, mistakes repeat unchecked, leaving employees unsure who is in control – and further weakening trust. At an organisational level, the costs mount: 79% of organisations expect to accumulate “AI debt” – the compounding cost of unreliable systems, poor data quality, and weak oversight.

Ineffective teammates: when AI creates more work, not less

Instead of reducing routine tasks and freeing up time, many AI agents behave unreliably. 64% of workers see AI agents as unreliable, 59% say they can confidently share wrong information, and 57% say they ignore feedback and fail to learn. More than half (58%) say AI agents create extra work, forcing teams to redo or correct outputs instead of saving time.

These frustrations are compounded by a lack of context. Nearly half of workers say agents may not understand their team’s work (49%) and can focus on the wrong priorities (47%). Without awareness of the context and structure of the work actually happening within organisations, agents risk amplifying inefficiencies and undermining trust, making them feel less like teammates and more like liabilities.

The training gap: why agents can’t improve without support

Even where agents could add value, organisations are not equipping workers to use them effectively. 82% of employees say proper training is essential, but only 32% of organisations have provided it. Without this foundation, teams can’t provide effective oversight or course correction – so errors repeat and trust erodes further.

Employees are asking their companies for clearer boundaries between human and AI responsibilities (53%) and formal usage guidelines (58%). Most workers are open to collaboration – 68% even prefer delegating some tasks to AI over humans – but until training and rules are in place, the gap between executive ambition and employee trust will persist.

The way forward: treating agents like teammates

The report is clear: organisations will only see value from AI agents if they treat them like teammates, not tools. That means giving them the right context, defining responsibilities, embedding feedback loops, measuring accuracy as the top metric, and training employees to use agents effectively.

“AI agents are already reshaping the way teams approach work, but our research shows trust and accountability haven’t kept pace with adoption,” said Mark Hoffman, Ph.D., Work Innovation Lead, Asana Work Innovation Lab. “When no one is responsible for mistakes, employees hesitate to hand over meaningful tasks, even though they’re eager to delegate. To succeed, organisations must treat agents like teammates by informing them with the right context and structure of work, defining responsibilities, embedding feedback loops, and providing the training employees are asking for. Without these guardrails, companies risk missing out on the real productivity gains that agents can unlock.”