Why employers cannot afford to be “AI complacent”

Understandably, many companies are keen to use AI to help with hiring – 42% of companies use AI in recruitment and HR, according to IBM research. The right tools can help identify top candidates faster and more accurately.

This is a huge help, especially for startups that are working with limited resources and may not have dedicated HR teams. The issue is that many companies aren’t using the right AI. This not only negates the technology’s potential gains, but could have disastrous consequences for workplace diversity too. Ethical employers who care about inclusivity must be scrupulous about which AI tools they use to hire.

AI’s capacity for bias has been widely acknowledged. Amazon abandoned its earlier AI recruitment system after discovering it discriminated against women, and a lawsuit has been filed against Workday alleging that its AI screening tools are discriminatory. In the EU, models used in recruitment have now been categorised as “high-risk” and are therefore subject to safety requirements under the new EU AI Act.

However, in the UK, the new government has made only vague promises of future regulation, which will take time to come into force even once announced. This lack of legislation means nothing is stopping biased recruitment AI models from being marketed and deployed in the UK right now. Nor is there anything to ensure the technology is explainable, which is crucial if teams are to correct for bias when needed.

Many AI models are black boxes, including tools used in recruitment such as CV scanners. This means teams can’t check what criteria the AI is using to rank and score candidates, or how it reaches its decisions. As a result, well-meaning companies could be unknowingly relying on unethical AI, unable to verify that the decisions its models reach are fair. This risks hurting marginalised candidates the most.

This is because AI models’ outputs depend on the data they are trained on, and most are trained on historical data. ChatGPT, for example, was trained on vast amounts of publicly available information from books and web pages such as Wikipedia. And guess what’s ingrained in that data? Our historical and present-day prejudices, stereotypes and social norms, which AI models not only learn from but risk perpetuating.

As a result, candidates who have long been underrepresented in certain industries, at certain levels of seniority, or in the workforce overall – such as people from ethnic minority backgrounds, older workers, and neurodivergent or disabled people – are most at risk when biased recruitment AI is in use.

In fact, analysis carried out by Bloomberg in March found that GPT-3.5, the model behind ChatGPT, showed racial bias when ranking CVs and consistently matched female names with historically female-dominated roles. In a separate Bloomberg experiment last year, the AI image generator Stable Diffusion was most likely to depict men with lighter skin tones when prompted to create images of a “politician”, “lawyer”, “judge” and “CEO”.
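To see how this kind of audit works in practice, here is a minimal sketch of a name-swap test in the spirit of Bloomberg’s experiment: score the same CV under names that proxy for different demographic groups and compare the results. The `score_cv` function is a hypothetical stand-in for whatever screening model or vendor API is under test, and the name lists are illustrative, not a validated research instrument.

```python
# Minimal sketch of a counterfactual name-swap audit: identical CVs,
# different names, compare the scores. Any systematic gap is a red flag.
from statistics import mean

# Names used as demographic proxies (as in Bloomberg's experiment);
# illustrative only.
NAME_GROUPS = {
    "group_a": ["Emily Walsh", "Greg Baker"],
    "group_b": ["Lakisha Washington", "Jamal Jones"],
}

CV_TEMPLATE = """{name}
Software Engineer, 5 years' experience.
Python, SQL, cloud infrastructure. BSc Computer Science.
"""

def score_cv(cv_text: str) -> float:
    """Hypothetical scorer: in a real audit, call the screening tool
    or model API being vetted here."""
    raise NotImplementedError("plug in the tool under audit")

def name_swap_audit(score_fn=score_cv) -> dict[str, float]:
    """Mean score per name group for the otherwise identical CV."""
    return {
        group: mean(score_fn(CV_TEMPLATE.format(name=name)) for name in names)
        for group, names in NAME_GROUPS.items()
    }
```

If a tool scores the same qualifications differently depending only on the name at the top, that is bias you can measure before a single real candidate is affected.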

Startups that feed AI tools the CVs of their current team in an effort to avoid this won’t be any better off. Do this and the AI’s top candidates will be near carbon copies of your current staff, as Amazon found back in 2018. Not good if your team already lacks diversity.

Whether companies are using AI models trained on data from their current staff or from the workforce at large, the upshot is the same. Startups that unwittingly use unethical AI tools, such as CV scanners, to hire will end up with homogenous teams, and will play a part in widening existing employment gaps for workers the odds are already stacked against. It’s a recipe for societal stagnation at best, and regression at worst.

Sacrificing diversity means sacrificing overall business performance: research shows that diverse teams generate higher financial returns and that diverse, inclusive teams make better business decisions. Employers who follow through on DE&I commitments in practice, not just on paper, also stand to attract top talent who care about inclusivity.

It goes without saying that developers and the UK government have a duty to ensure AI is developed and used responsibly. Nevertheless, employers must act too. With so much at stake, employers cannot afford to be lax or uninformed about AI, especially in sensitive use cases like hiring. Negligence around AI is simply not an option.

Companies that have already implemented AI for hiring must vet their existing tech now. Ideally, though, the best approach is to get recruitment AI right from the outset: ethical AI tools are trained on data sets vetted for bias and allow for human oversight.

It’s paramount that employers take a considered approach to AI implementation, checking upfront how models are trained and what guardrails are in place to mitigate bias. Startups that use ethical, explainable AI models to hire more efficiently, accurately and fairly will protect workplace diversity, identify top talent, improve overall business performance, and cement their reputation as ethical employers at the same time.
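One concrete guardrail any employer can apply without access to a model’s internals is an adverse-impact check, such as the “four-fifths rule” long used in US employment practice: compare the rate at which the tool advances candidates from each group, and treat any group advanced at under 80% of the best-performing group’s rate as a warning sign. Below is a minimal sketch, assuming only that you can export each candidate’s group label and the tool’s pass/fail decision.

```python
# Minimal sketch of a four-fifths (80%) adverse-impact check on a
# screening tool's outcomes. Assumes an export of (group, advanced?)
# decisions per candidate; no access to the model itself is needed.

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of candidates in each group that the tool advanced."""
    totals: dict[str, int] = {}
    passes: dict[str, int] = {}
    for group, advanced in decisions:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + int(advanced)
    return {group: passes[group] / totals[group] for group in totals}

def four_fifths_check(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Impact ratio of each group against the highest selection rate.
    Ratios below 0.8 are conventionally treated as evidence of adverse
    impact and should trigger human review of the tool."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Illustrative numbers only: group B is advanced at half group A's rate.
sample = ([("A", True)] * 8 + [("A", False)] * 2
          + [("B", True)] * 4 + [("B", False)] * 6)
print(four_fifths_check(sample))  # {'A': 1.0, 'B': 0.5}
```

A check like this is no substitute for explainable models or vendor due diligence, but it gives teams a simple, repeatable way to spot trouble in the tools they already use.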