Facial recognition is at a crossroads: trust and privacy are key to its progress

The past month has offered a telling snapshot of where facial recognition technology (FRT) finds itself in the UK.

On one hand, the Home Office has pledged to expand police use of facial recognition, presenting it as a powerful tool to identify threats, tackle crime and improve public safety. On the other, the Information Commissioner’s Office has publicly criticised historic uses of the technology, citing bias and shortcomings in accuracy across some police deployments.

Together, these two stories act as a microcosm of the sector's current position. The technology is a game-changer – it is advancing rapidly and being adopted across more aspects of daily life. And yet public confidence remains fragile.

The question is no longer whether facial recognition will become more widespread (it almost certainly will), but whether it can do so in a way that is secure, privacy-preserving and worthy of trust.

Why the public is wary of FRT

Over the past two years, public concern surrounding FRT has become increasingly clear, as have the reasons for it.

According to research from the Alan Turing Institute, more than half of the British public are concerned about sharing biometric data – such as facial images – between the police and the private sector to tackle crime. The researchers stated that “those using [biometric data] need to provide the general public with greater confidence that appropriate safeguards are in place.”

Put simply, whether the data in question is fingerprints or facial images, there are worries about how it is collected, stored and shared. Privacy and trust, which go hand in hand, will be absolutely essential in ensuring the deployment of FRT can continue at pace and – crucially – with the support of the public.

For now, the key question is: how do we ensure that increased adoption does not come at the expense of fairness, accuracy and individual rights?

The lessons from recent criticism are clear. Accuracy cannot be assumed, and bias cannot be an afterthought. Algorithms must be rigorously tested across diverse demographics, in real-world conditions, and continuously monitored after deployment. One-off assessments are not enough; performance can drift over time as data, lighting conditions or use cases change.
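To illustrate what that ongoing monitoring can look like in practice, here is a minimal sketch in Python, using entirely made-up comparison records and hypothetical group labels, of how false match rates might be tracked per demographic group and re-checked on fresh operational data:

    from collections import defaultdict

    # Hypothetical evaluation records: each entry is one comparison the system
    # made, the demographic group involved, whether the pair was genuinely the
    # same person, and whether the system declared a match.
    evaluations = [
        {"group": "group_a", "same_person": False, "matched": True},
        {"group": "group_a", "same_person": False, "matched": False},
        {"group": "group_b", "same_person": False, "matched": False},
        {"group": "group_b", "same_person": True, "matched": True},
    ]

    def false_match_rates(records):
        """Compute the false match rate (impostor pairs wrongly accepted) per group."""
        impostor_total = defaultdict(int)
        impostor_matched = defaultdict(int)
        for r in records:
            if not r["same_person"]:            # only impostor comparisons count
                impostor_total[r["group"]] += 1
                if r["matched"]:
                    impostor_matched[r["group"]] += 1
        return {g: impostor_matched[g] / impostor_total[g] for g in impostor_total}

    print(false_match_rates(evaluations))

Run regularly on recent deployment data rather than once at procurement, a check like this gives an early warning if accuracy drifts over time or diverges between demographic groups.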

Equally important is data governance. Organisations need to be explicit about why they are using facial recognition, what data they collect, how long they retain it, and who has access. Privacy-by-design principles should be standard practice, not marketing slogans. In many cases, this means favouring on-device processing, anonymisation where possible, and minimising data retention to the shortest period consistent with the stated purpose.

Transparency also matters. People are more likely to accept FRT if they understand how and why it is being used. Signage, plain-language policies, and independent oversight can all help bridge the trust gap.

Moreover, the technology itself must meet the necessary standards for how images are captured and stored. Most obviously, converting facial images into encrypted biometric templates, rather than storing actual photographs of people’s faces, can address many of the security and privacy concerns surrounding FRT.
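As a rough illustration of that principle, and not a description of any particular vendor's product, the sketch below assumes a hypothetical compute_embedding() function standing in for a face recognition model, and uses the Python cryptography library to encrypt the resulting numeric template so that no raw photograph needs to be retained:

    import numpy as np
    from cryptography.fernet import Fernet  # symmetric encryption from the 'cryptography' package

    def compute_embedding(image: np.ndarray) -> np.ndarray:
        """Hypothetical stand-in for a face recognition model that maps an image
        to a fixed-length numeric template (embedding). A real system would use
        a trained neural network here."""
        return np.random.default_rng(0).standard_normal(128).astype(np.float32)

    key = Fernet.generate_key()      # in practice the key lives in a secure key store
    cipher = Fernet(key)

    image = np.zeros((112, 112, 3), dtype=np.uint8)   # placeholder for a captured face image
    template = compute_embedding(image)

    # Store only the encrypted template; the original photograph is discarded.
    encrypted_template = cipher.encrypt(template.tobytes())

    # At verification time the template is decrypted, compared, and discarded again.
    restored = np.frombuffer(cipher.decrypt(encrypted_template), dtype=np.float32)
    assert np.array_equal(template, restored)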

Where might FRT become more commonplace?

Clearly, there is action to be taken and progress to be made. Without taking that progress for granted, however, it is safe to assume that FRT will become far more commonplace in 2026, and not just in policing.

Consumers and businesses are becoming increasingly au fait with face-scanning tools for security and access purposes. In the coming 12 months, this trend will only gather pace.

Take travel and transport. Airports are already experimenting with biometric boarding and passport control, and over the next year, these systems are likely to become more common across rail, shipping and cruises, and even car rental services. Used responsibly, FRT can reduce queues, improve passenger flow and enhance security simultaneously. Used poorly, it can expose millions of travellers to unnecessary data risks.

More broadly, the use of FRT in border security is one of the most compelling use cases, helping to address the problem of fraudulent documentation and to maintain a watertight record of exactly who arrives in and departs from a particular country or territory.

Financial services is another sector where adoption is accelerating. Banks and fintechs are increasingly turning to facial recognition for customer onboarding, fraud prevention and account recovery. With authorised push payment fraud and identity theft continuing to rise, there is a clear incentive to deploy stronger identity verification tools. However, this also means that some of the most sensitive personal data imaginable is being processed at scale, making compliance, encryption, and bias mitigation non-negotiable.

Retail and hospitality are also exploring facial recognition, from age verification at self-checkouts to frictionless loyalty schemes and secure access to staff-only areas. In healthcare, FRT is being trialled to prevent patient misidentification, control access to restricted wards and protect staff from unauthorised entry. Even workplaces and educational institutions are looking at facial recognition for building access and attendance, particularly in high-security or safety-critical environments.

The year FRT becomes truly mainstream

In 2026, facial recognition will be less of a novelty and more of an embedded infrastructure technology. But, as noted, regulation and industry standards must keep pace with innovation. The UK has an opportunity to set a global benchmark by combining robust data protection frameworks with practical guidance for responsible biometric use. This is not about stifling innovation, but about creating the conditions in which innovation can thrive without undermining public confidence.

Having worked in this field for several years, I have seen firsthand both the promise of facial recognition and the damage that can be done when it is deployed carelessly. FRT can solve genuine security problems, streamline services and reduce fraud. But it is also a uniquely sensitive technology, and the margin for error is small.

As we head into 2026, the defining question for facial recognition will not be how fast it can be rolled out, but how well it can be governed. Trust, once lost, is hard to regain. If the industry, regulators and deploying organisations prioritise security, privacy and accountability now, facial recognition can earn its place as a responsible part of the UK’s digital future.
