New Privacy Research Identifies AI as a Rising Threat Comparable to Cybercrime

New research has revealed significant concerns about the future threat posed by AI and machine learning to privacy.

Cybercrime is still seen as the main threat, cited by 55% of developers, but AI comes in a close second at 53%. Despite AI being a relatively new menace, the research indicated that developers believe the technology is rapidly catching up with cybercrime as it becomes more mainstream. The cost of cybercrime is projected to reach $13.82 trillion by 2028. With increasingly sophisticated AI potentially in the hands of a new generation of cybercriminals, this cost could grow exponentially.

The study, commissioned by Zama, a Paris-based deep tech cryptography firm specialising in Fully Homomorphic Encryption (FHE), surveyed developers across both the UK and US. Over 1,000 developers were asked their opinions on privacy to gain insights from those who build privacy protection into everyday applications. The research delved into developers’ perceptions and relationships with privacy, covering topics such as which privacy considerations should be central to evolving innovation frameworks, who holds the ultimate ownership of privacy, and opinions on regulatory approaches.

In addition to highlighting significant concerns about AI’s threat, the research also revealed that 98% of developers believe steps need to be taken now to address future privacy and regulation framework concerns. Furthermore, 72% said that current regulations designed to protect privacy are not built for the future, and 56% believe that dynamic regulatory structures – intended to adapt to tech advancements – could pose an actual threat.

“Despite cybercrime expected to surge in the next few years to the cost of trillions, 55% of developers we surveyed in our research stated that they feel cybercrime is only ‘marginally more of an issue’ than the threat to privacy that AI will pose. We have seen from our work that many developers are the real champions of privacy in organisations and the fact that they have some legitimate concerns about the privacy of our data, in relation to the surge in AI adoption, is a real worry,” says Pascal Paillier, CTO and Co-founder of Zama.

“Zama shares the concerns expressed by developers about the privacy risks posed by AI and its potential irresponsible use. Regulators and policymakers should take this insight into consideration as they try to navigate this new world. It’s important not to underestimate the very real threat highlighted by the experts who are thinking about protecting privacy every day, and make sure upcoming regulations address the increased risks to users’ privacy,” he added.

The survey went on to reveal that 30% of developers believe those writing the regulations are not as knowledgeable as they could be about all the technologies that should be taken into consideration, which in itself presents a real danger, while 17% believe this knowledge gap poses a possible threat to future tech advancements.

“It’s undoubtedly an exciting time for innovation, especially with AI advancements developing as fast as they have. But with every new development, privacy must be at the centre; it’s the only way to ensure the data that powers new innovative use cases is protected. Developers know this, embracing the vision championed by Zama in which they have both the ability and the responsibility to safeguard the privacy of their users. It’s clear, in analysing their insights, that they would like to see regulators taking more responsibility for understanding how Privacy Enhancing Technologies can be used to ensure privacy of use for even the newest of innovations, including Gen AI. Advanced encryption technology such as FHE can play a positive role in ensuring innovation can still flourish, while protecting privacy at the same time,” he adds.
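The core idea behind FHE is that computation can run directly on encrypted data, so a server never sees the plaintext. A minimal way to see this idea is with an additively homomorphic scheme such as Paillier, which supports addition on ciphertexts (full FHE, Zama's speciality, also supports multiplication and arbitrary circuits). The sketch below is purely illustrative: the primes are tiny and insecure, and it is not how Zama's products work internally.

```python
# Toy additively homomorphic encryption (Paillier scheme) illustrating the
# idea behind FHE: a third party can compute on data while it stays
# encrypted. Parameters are deliberately tiny and insecure; this is a
# teaching sketch, not production cryptography.
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=61, q=53):  # toy primes; real deployments use large random primes
    n = p * q
    n2 = n * n
    lam = lcm(p - 1, q - 1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^(-1) mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = g^m * r^n mod n^2 (randomised, so equal plaintexts look different)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    lam, mu, n = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
# An untrusted server can add the plaintexts without ever seeing them:
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(priv, c_sum))  # 42
```

Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is exactly the property that lets an untrusted party aggregate private values (votes, salaries, sensor readings) without learning any of them.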