Research platform Prolific calls for ethical standards to be set for AI workers

Prolific has published an industry-first study, its Participant Wellbeing report, aimed at promoting the mental wellbeing of AI taskers.

Historically, AI models have often been developed in laboratories, outsourced to lower-cost labour markets, or assigned to prisoners as unpaid 'click-worker' tasks. This work frequently involved annotating violent, sexist, or racist content, with scant consideration for taskers' mental wellbeing and no ethical labour standards in place.

Following the European Parliament's passage of the EU AI Act, and in anticipation of global AI regulations, there has been a worldwide push for AI models to be representative, safe, and reliable. Prolific, a platform that connects vetted participants and AI taskers with researchers for fairly compensated studies and tasks, is calling for these regulations to include ethical frameworks for engaging AI taskers.

Established in 2014, Prolific counts major corporations such as Google and leading universities including Stanford and Oxford among its clients. The platform's recent report used the Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS) to assess the mental wellbeing of its users, focusing on aspects of 'feeling good' and 'functioning well' alongside more negative emotional states.

The study positions Prolific as the first organisation to openly evaluate the impact of online research participation on user wellbeing. SWEMWBS scores range from 7 to 35, with UK averages of 23.7 for men and 23.6 for women; Prolific's initial data collection yielded a mean score of 23.1. This suggests minimal to no adverse effect on the wellbeing of participants taking part in research studies or AI training via the platform.
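To make the comparison concrete, the sketch below shows how SWEMWBS-style cohort scores might be aggregated and set against the UK norms cited above. The response data is hypothetical, and raw item totals are used directly for simplicity, whereas the published scale applies a metric conversion table to raw sums.

```python
# Minimal sketch of SWEMWBS-style scoring, for illustration only.
# Assumptions: hypothetical response data; raw totals stand in for the
# metric-converted scores that the published scale actually reports.
from statistics import mean

# Each participant answers 7 items on a 1-5 scale ("none of the time" to
# "all of the time"), so per-participant totals range from 7 to 35.
responses = [
    [4, 3, 4, 3, 4, 3, 3],  # hypothetical participant 1
    [3, 3, 3, 4, 3, 3, 3],  # hypothetical participant 2
    [4, 4, 3, 4, 3, 4, 3],  # hypothetical participant 3
]

totals = [sum(items) for items in responses]
cohort_mean = mean(totals)

UK_MEAN_MEN, UK_MEAN_WOMEN = 23.7, 23.6  # population norms cited in the report

print(f"Cohort mean: {cohort_mean:.1f} "
      f"(UK norms: {UK_MEAN_MEN} men, {UK_MEAN_WOMEN} women)")
```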

Prolific has pledged to collect wellbeing data regularly, requires consent for participation in surveys containing sensitive content, and has entered into a partnership with the Partnership on AI, an organisation committed to the ethical development and use of AI for societal benefit. Prolific joins its 123 global partners, including Adobe, the Ada Lovelace Institute, the Center for Data Innovation, DeepMind, and EY.

The announcement follows a third-party study indicating that Prolific users exhibit lower levels of 'disengagement' than those on alternative platforms, suggesting that Prolific's user experience is particularly well suited to optimising research outcomes.

Phelim Bradley, CEO and Co-founder of Prolific, commented, “As a human data provider, our participant pool is our greatest asset. High-quality human data starts with well looked after participants, which is why Prolific prioritises participant wellbeing.

“Treating participants with respect, fairness and transparency throughout their journey with Prolific helps foster trust and loyalty, as does connecting them to interesting, important and fairly paid work. We want to go beyond this to call for setting ethical standards for AI taskers across the industry. Those developing AI models must show themselves to be accountable when it comes to tackling bias and accurate outcomes - and this includes considering the humans working to develop and train AI technologies.”

Recent Prolific studies dedicated to fine-tuning AI include supporting the Meaning Alignment Institute in creating ‘wiser’ AI responses to moral questions, and building a Reinforcement Learning from Human Feedback (RLHF) dataset for social reasoning.
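For context, an RLHF dataset of this kind typically pairs a prompt with candidate model responses and a human judgement of which is preferable; the preferred responses then steer the model during training. The sketch below illustrates the general shape of such a record; the field names and content are hypothetical and do not reflect Prolific's actual schema.

```python
# Minimal sketch of one record in an RLHF preference dataset.
# All field names and content are hypothetical illustrations,
# not Prolific's actual data format.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str       # the question shown to the model
    response_a: str   # first candidate answer
    response_b: str   # second candidate answer
    preferred: str    # "a" or "b", chosen by a human annotator

record = PreferencePair(
    prompt="A friend asks to borrow money you cannot spare. What do you say?",
    response_a="Just say yes to avoid conflict.",
    response_b="Explain honestly that you can't lend it right now.",
    preferred="b",  # annotator judges response_b the more socially sound reply
)
```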

Since launching in 2014, Prolific has collected over 100 million responses from more than 150,000 participants for over 100,000 researchers across 200 countries. In 2023, more than 30,000 researchers and 10,000 organisations were active on the platform, and a new study was launched every three minutes.