
Why AI autonomy must be restricted
Autonomous AI is no longer simply a concept of the future – it’s here, and it’s already transforming how we build, test, and deploy software. As the CEO of an AI-powered company, I've seen firsthand how agentic AI, which independently designs, executes, and optimises workflows, can deliver substantial productivity gains, especially in repetitive, rule-based domains such as software testing.
Agentic AI is enabling unprecedented speed in development cycles and precision in execution, fundamentally reshaping how businesses operate. But with such power comes greater responsibility – and, in my opinion, the AI ecosystem is dangerously light on restrictions right now.
Startups, especially those building or adopting autonomous AI, must take a proactive stance in defining their systems' ethical and operational boundaries. Unchecked AI autonomy poses three major risks: untraceable decisions, compliance failures, and the erosion of human oversight.
The startup dilemma
Startups are often celebrated for their speed, agility, and willingness to break convention. But in the AI era, moving fast without constraints isn’t just risky; it’s irresponsible. Founders need to design AI systems with ethical guardrails and accountability mechanisms from day one, not as a reactive response to regulation or crisis.
The black box problem
I think autonomous systems that self-improve or act independently are often opaque. Without careful design, it becomes impossible to trace how an AI arrived at a decision and difficult to identify who should be held accountable when things go wrong. Imagine an AI test automation platform silently introducing or missing a critical defect in a financial or healthcare application, leading to system outages, data breaches, or regulatory violations. Without a clear audit trail, pinpointing how the issue occurred, and who is answerable for it, becomes nearly impossible. For companies under strict compliance obligations, this lack of explainability and traceability is a legal and reputational landmine, exposing them to costly fines, lost customer trust, and severe reputational damage.
From experience, I believe the answer is not to halt AI innovation but to mandate traceability. I think Explainable AI (XAI) must be a foundational requirement, not an afterthought. Developers should build audit trails as diligently as they code features, ensuring that every significant AI decision can be understood and justified. Beyond the legal ramifications, a lack of transparency erodes public trust: if we cannot understand why an AI reached a particular decision, especially one with significant human impact, societal adoption will be met with understandable scepticism.
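As an illustration of what building audit trails alongside features can look like, here is a minimal sketch in Python. The names (AuditTrail, record_decision) and the fields logged are my own assumptions for the example, not any specific product's API; the point is that every significant AI decision is written, append-only, with its inputs, output, rationale, and confidence.

```python
import json
import time
import uuid

# Hypothetical sketch: an append-only log of AI decisions, so every
# significant action can later be traced, explained, and attributed.
class AuditTrail:
    def __init__(self, path="ai_decisions.log"):
        self.path = path

    def record_decision(self, agent, inputs, output, rationale, confidence):
        entry = {
            "id": str(uuid.uuid4()),   # unique, citable reference for the decision
            "timestamp": time.time(),
            "agent": agent,            # which model or agent decided
            "inputs": inputs,          # what it saw
            "output": output,          # what it decided
            "rationale": rationale,    # why, in terms a reviewer can assess
            "confidence": confidence,
        }
        # Append-only: entries are added, never rewritten, so the trail
        # survives later failures or disputes.
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["id"]

# Usage: log the decision before acting on it, so the record exists
# even if the action itself fails.
trail = AuditTrail()
decision_id = trail.record_decision(
    agent="test-generator-v2",
    inputs={"module": "payments", "coverage_gap": "refund flow"},
    output="generated 14 regression tests",
    rationale="uncovered branch in refund validation",
    confidence=0.87,
)
```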
Compliance can’t be automated away
In regulated sectors like finance, healthcare, and insurance, compliance simply isn't optional. Even minor changes in AI-generated test coverage or logic can lead to non-compliance if left unchecked. I believe companies must resist the temptation to hand over the steering wheel completely under the guise of efficiency. For a lean startup, a single compliance violation can mean crippling fines, loss of certifications, and irreparable reputational damage.
From my experience, proactive engagement with regulatory bodies and robust, human-centric compliance frameworks are the only responsible and sustainable path forward for startups using AI. Restricting autonomy through controlled workflows, supervised learning loops, and manual approvals at key decision points is essential to staying within regulatory boundaries and surviving scrutiny.
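To make "manual approvals at key decision points" concrete, here is a simplified sketch. The regulated areas, the confidence threshold, and the approval prompt are assumptions for illustration, not a prescription; the pattern is simply that certain AI-generated changes cannot take effect without a human sign-off.

```python
# Hypothetical approval gate: AI-proposed changes touching regulated
# areas, or changes the model itself is unsure about, require a human.
REGULATED_AREAS = {"payments", "patient_records", "claims"}

def requires_human_approval(change):
    return change["area"] in REGULATED_AREAS or change["confidence"] < 0.9

def apply_change(change, approver=input):
    if requires_human_approval(change):
        answer = approver(f"Approve AI change to {change['area']}? [y/N] ")
        if answer.strip().lower() != "y":
            return "rejected"  # the AI never acts unilaterally here
    # ... apply the change via the normal deployment path ...
    return "applied"

# Example with a stub approver standing in for an interactive reviewer:
result = apply_change(
    {"area": "payments", "confidence": 0.95},
    approver=lambda prompt: "y",
)
```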
The role of human judgment
In my view, AI is best seen as a complement to human judgment, not a substitute for it. As we push to automate and scale, I worry we risk sidelining the people within our companies. From what I’ve seen, over-relying on autonomous AI can erode human skills, blur ethical boundaries, and create blind spots we don’t even realise exist. When startups and their staff offload complex decisions entirely to machines, they risk weakening their critical thinking and becoming less equipped to step in when AI gets it wrong.
AI systems, no matter how advanced, are trained on historical data; they can perpetuate existing biases or miss novel, nuanced situations that require human intuition, empathy, and ethical reasoning. This creates dangerous "systemic blind spots" for startups, where crucial information is overlooked simply because the AI wasn't programmed to consider it, or because its training data lacked the perspective required.

In the testing domain, I firmly believe our approach must always be "human in the loop." AI doesn't replace human testers – it supercharges them, handling repetitive tasks while allowing human experts to focus on complex problem-solving and strategic oversight. I believe this principle should apply across all AI domains: autonomy must serve human intent, not override it. The synergy between human creativity and AI's processing power is where true innovation lies.
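A toy sketch of that division of labour, with illustrative thresholds and a stub classifier standing in for a real model, might look like this: the AI disposes of the repetitive, high-confidence cases, and everything ambiguous is routed to a human expert.

```python
# Hypothetical human-in-the-loop triage: confident, routine cases are
# handled automatically; anything novel or low-confidence is escalated.
def triage_test_failure(failure, classifier):
    label, confidence = classifier(failure)
    if label == "known_flaky" and confidence >= 0.95:
        return ("auto_retry", None)            # repetitive case: AI handles it
    return ("escalate", "human_review_queue")  # nuanced case: a person decides

# Stub classifier for illustration only, in place of a real model:
def stub_classifier(failure):
    return ("known_flaky", 0.97) if "timeout" in failure else ("unknown", 0.4)

print(triage_test_failure("timeout in checkout test", stub_classifier))
print(triage_test_failure("assertion mismatch in refund total", stub_classifier))
```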
This isn’t about slowing innovation; it’s about building trust
Lastly, I think public perception and trust are paramount to the long-term success of AI. If AI systems are perceived as reckless, unaccountable, or dangerous, public backlash could invite heavy-handed, stifling regulation that impacts the entire industry. Startups need to establish trust in agentic AI by proactively implementing ethical guardrails. If AI is to become a force for good, I believe we must ensure it reflects our values, remains accountable, and is restricted in the right ways. Ultimately, the power of AI autonomy is undeniable, but it must always be carefully restricted and tethered to rigorous human oversight in order to serve humanity's best interests.