
Overcoming the fear: it’s time to put our trust in the machine
Rob O’Connor, EMEA CISO at Insight, explores why businesses must overcome the fear of adopting new technologies to truly protect themselves from evolving cyber threats.
The relationship between machine learning (ML) and cybersecurity began with a simple yet ambitious idea: harness everything algorithms have to offer to identify patterns in massive datasets.
Before this, traditional threat detection relied heavily on signature-based techniques – essentially digital fingerprints of known threats. These methods, while effective against familiar malware, struggled to keep pace with zero-day attacks and the increasingly sophisticated tactics of cybercriminals.
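To make that limitation concrete, signature matching boils down to a fingerprint lookup, as in the minimal Python sketch below. The hash value is an invented placeholder rather than a real signature; the point is that a file whose fingerprint has never been recorded simply passes through unnoticed.

```python
# Minimal sketch of signature-based detection: hash a file and check it
# against a set of known-bad fingerprints. The hash below is an invented
# placeholder, not a real malware signature.
import hashlib

KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_malware(path: str) -> bool:
    """Return True only if the file's SHA-256 digest matches a stored signature."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_HASHES
```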
This gap led to a surge of interest in using ML to identify anomalies, recognise patterns indicative of malicious behaviour, and ultimately predict attacks before they could fully unfold. For example, some of the earliest successful applications of ML in the space included spam detection and anomaly-based intrusion detection systems (IDS).
These early iterations relied heavily on supervised learning, where historical data – both benign and malicious – was fed to algorithms to help them differentiate between the two. Over time, ML-powered applications grew in complexity, incorporating unsupervised learning and even reinforcement learning to adapt to the evolving nature of the threats at hand.
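As a rough illustration of that supervised approach, the toy sketch below trains a spam classifier on a handful of hand-labelled messages. The messages and labels are invented purely for illustration; real deployments learn from millions of samples and far richer features.

```python
# Toy supervised-learning example: label messages as benign or malicious,
# learn from them, then classify a new message. Data is invented for
# illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Quarterly report attached for review",           # benign
    "Team lunch moved to 1pm on Friday",              # benign
    "URGENT: verify your password at this link now",  # malicious
    "You have won a prize, send your bank details",   # malicious
]
labels = ["benign", "benign", "malicious", "malicious"]

# Bag-of-words features feeding a Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Click this link urgently to verify your account"]))
# -> ['malicious'] on this toy training set
```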
Alas, all is not as it seems
In recent years, conversation has turned to the introduction of large language models (LLMs) like GPT-4. These models excel at synthesising large volumes of information, summarising reports, and generating natural language content. In the cybersecurity space, they’ve been used to parse threat intelligence feeds, generate executive summaries, and assist in documentation – all tasks that require handling vast amounts of data and presenting it in an understandable form.
As part of this, we’ve seen the concept of a ‘copilot for security’ emerge – a tool intended to assist security analysts much as a coding copilot helps a developer. Ideally, the AI-powered copilot would act as a virtual Security Operations Centre (SOC) analyst. It would not only handle vast amounts of data and present it in a comprehensible way but also sift through alerts, contextualise incidents, and even propose response actions.
However, the vision has fallen short. Despite promising utility in specific workflows, LLMs have yet to deliver a transformative, indispensable use case for cybersecurity operations. But why is that?
Modern cybersecurity is inherently complex and contextual. SOC analysts operate in a high-pressure environment. They piece together fragmented information, understand the broader implications of a threat, and make decisions that require a nuanced understanding of their organisation. These copilots can neither replace the expertise of a seasoned analyst nor effectively address the glaring pain points that these analysts face. This is because they lack the situational awareness and deep understanding needed to make critical security decisions.
Therefore, rather than serving as a dependable virtual analyst, these tools have often become a ‘solution looking for a problem’ – essentially, another layer of technology that analysts need to understand and manage, without delivering commensurate value. While tools like Microsoft’s Security Copilot show promise, they have faced challenges in meeting expectations as an effective augmentation to SOC analysts – sometimes delivering contextually shallow suggestions that fail to meet operational demands.
Using AI to overcome AI barriers
Undoubtedly, current implementations of AI are struggling to find their stride. But if businesses are going to truly support their SOC analysts, how can this barrier be overcome?
The answer could lie in the development of agentic AI – systems capable of taking proactive, independent action, helping to bridge the gap between automation and autonomy. Its introduction would help transition AI from a helpful assistant into an integral member of the SOC team.
Agentic AI offers a more promising direction for defensive security by potentially allowing AI-driven entities to actively defend systems, engage in threat hunting, and adapt to novel threats without the constant need for human direction. For example, instead of waiting for an analyst to interpret data or issue commands, agentic AI could act on its own: isolating a compromised endpoint, rerouting network traffic, or even engaging in deception techniques to mislead attackers. Such capabilities would mark a significant leap from the largely passive and assistive roles that AI currently plays.
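What such autonomy might look like in practice is sketched below. This is a hypothetical illustration: the alert fields, confidence threshold, and the isolate_endpoint and escalate_to_analyst functions are invented, standing in for calls a real agent would make to an EDR or SOAR platform. The point is the policy itself: act alone only when confidence is high and the blast radius is small, and otherwise keep a human in the loop.

```python
# Hypothetical sketch of an agentic response policy. Field names, threshold
# and helper functions are invented for illustration; a real agent would
# call an EDR or SOAR API instead of printing.
from dataclasses import dataclass

@dataclass
class Alert:
    endpoint: str
    confidence: float        # model's confidence that the activity is malicious
    business_critical: bool  # would isolating this asset cause an outage?

AUTONOMY_THRESHOLD = 0.95    # only act alone when the model is very sure

def isolate_endpoint(endpoint: str) -> None:
    print(f"Isolating {endpoint} from the network")

def escalate_to_analyst(alert: Alert) -> None:
    print(f"Escalating {alert.endpoint} (confidence {alert.confidence:.2f}) to a human analyst")

def respond(alert: Alert) -> str:
    """Decide whether to act autonomously or hand the alert to a human."""
    if alert.confidence >= AUTONOMY_THRESHOLD and not alert.business_critical:
        isolate_endpoint(alert.endpoint)
        return "contained autonomously"
    escalate_to_analyst(alert)
    return "escalated for human review"

print(respond(Alert("laptop-042", confidence=0.97, business_critical=False)))
```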
However, organisations have typically been slow to adopt any new security technology that can take action on its own. And who can blame them? False positives are always a risk, and no one wants to cause an outage in production or stop a senior executive from using their laptop based on a false assumption.
Putting your trust in the machine
Nevertheless, with the relationship between ML and cybersecurity continuing to evolve, businesses can’t afford to be deterred.
Unlike businesses, attackers don’t have this handicap. Without missing a beat, they will use AI to steal, disrupt and extort their chosen targets. Unfortunately, this year, organisations will likely face the bleakest threat landscape on record, driven by the malicious use of AI.
Therefore, the only way to combat this will be to join the arms race – using agentic AI to relieve overwhelmed SOC teams through proactive, autonomous action, allowing organisations to actively hunt threats, defend systems and adapt to novel attacks without requiring constant human involvement.