How can AI businesses tackle the diversity problem?

To date, the artificial intelligence (AI) industry has had a troubled history with diversity. Despite studies probing inclusivity and bias, statistics for gender and racial diversity in the sector remain alarmingly low. And given that we are living in an age of strong activism in these areas, it is high time the tech industry followed suit with some practical steps.

Although great leaps have been made in recent years, and many companies now place an inclusive ethos at the forefront of their operations, a recent study from the AI Now Institute at New York University concluded that diversity problems have produced flawed AI systems that perpetuate gender and racial biases. Troublingly, the report found that more than 80% of academics holding professorships in the AI field are men, and that women make up only 15% of AI researchers at Facebook and 10% at Google.

Where race is concerned, the outlook is even bleaker: as of 2019, just 2.5% of Google’s workforce is black, while Facebook and Microsoft each sit at 4%. This reflects the tech industry’s historic problems with diversity, and it creates a very particular set of issues in practice. From image recognition systems that miscategorize black faces to chatbots that readily adopt misogynistic and racist language, diversity issues fundamentally shape how AI companies work, what products are built, and who those products are designed to serve.

As such, there have been decades of concern and investment aimed at redressing the balance. But more can, and should, be done. With the Australian Human Rights Commission having recently published a guide to recognizing and preventing AI bias, there is no better time than the present for industry professionals to rethink the path ahead.

With this in mind, what can the sector do to improve diversity standards, and to ensure that we are building technology that works for all?

Reconsidering the journey so far

Thankfully, around 2014, many large corporations, Google, Facebook and Apple among them, started publishing diversity reports. Under heavy pressure from activists, these companies began hiring heads of diversity and inclusion, launching diversity initiatives, and updating their hiring practices in an effort to be more transparent about the problem and how they planned to tackle it. Big names in the industry, such as software engineer Tracy Chou, investor Ellen Pao, and many others, have also helped found non-profit organizations like Project Include that urge companies to implement better solutions.

Still, while some progress is better than none, the pace of change lags far behind the advances in AI itself. Almost half a decade later, companies have improved their overall diversity figures, but the number of people from underrepresented minorities hired into technical roles remains low.

Closing the gap

Today, AI is pervasive in our daily lives, and is used for everything from making important decisions about recruitment and credit, to policing, custodial sentencing and healthcare modelling. It is therefore vital that AI systems are built by a diverse workforce.

To improve the state of affairs, work must start at the top. There is no silver bullet for fostering inclusion, but the path to better diversity standards begins with leadership. If the benefits of AI are to be shared democratically, companies should promote hard-working employees from diverse backgrounds into more senior positions and change hiring practices to maximize diversity. Likewise, governments must increase funding and advocacy for better STEM education and training; only then will we reap the benefits of truly innovative tech on a large scale.

Building, nurturing and motivating diverse teams should be a priority for businesses, as should creating an environment of trust in which all employees can speak up if they believe work falls short of appropriate diversity standards. Transparency is key, and we should hold everybody to account, not just because it is the right thing to do, but because fostering this kind of diversity is also better for business: according to McKinsey, it has become overwhelmingly clear that companies with more diverse workforces perform better commercially, too.

Only once businesses and governments address these underlying factors will they see the results reflected in their AI systems. It goes without saying that systems should be rigorously tested and audited to ensure products are free of unintended bias and discrimination, but I would urge industry professionals to go beyond a purely technical approach to the issue.
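
As an illustration of the kind of technical check such an audit involves, the sketch below computes per-group selection rates and a disparate impact ratio for a set of model decisions. It is a minimal example only: the data is synthetic, the function names (selection_rates, disparate_impact_ratio) are my own, and a real audit would cover many more metrics and, as argued above, the social context too.

```python
from collections import defaultdict

def selection_rates(records):
    """Positive-outcome rate for each demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest.

    A common rule of thumb treats ratios below 0.8 as a signal
    to investigate further, not as proof of discrimination.
    """
    return min(rates.values()) / max(rates.values())

# Synthetic, illustrative decisions: (group, model approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print("Selection rates:", rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Disparity exceeds the 80% rule of thumb; review the model and its data.")
```

Checks like this are easy to automate, which is precisely why they should be treated as a starting point rather than the whole job.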

To truly improve fairness and diversity standards in AI, the industry must widen its scope to examine how technology is used in context. If we bolster our efforts to understand how our technology works at a social level, we will one day all be able to enjoy the many benefits that AI has to offer.