Generative AI Distorts Human Sentiment, Academics Find
The deployment of generative AI and Large Language Models (LLMs) leads to unintended alterations in the sentiment of original text, according to leading academics from the Gillmore Centre for Financial Technology at Warwick Business School.
The paper, titled “Who’s Speaking, Machine or Man? How Generative AI Distorts Human Sentiment”, examines how the rise of LLMs influences public sentiment, concluding that the modifications LLMs introduce to content render analyses built on that content unreliable.
The findings, produced by replicating and adapting well-established experiments, contribute to the literature on generative AI and user-generated content (UGC) by showing that the widespread adoption of LLMs changes the linguistic features of the text they process.
The researchers observed this phenomenon by analysing 50,000 tweets, using GPT-4 to rephrase each one. Comparing the original tweets with their GPT-4 rephrased counterparts using the Valence Aware Dictionary and sEntiment Reasoner (VADER), they found that LLMs predominantly shift sentiment towards neutrality, moving text away from both positive and negative orientations.
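To make the measurement concrete, the sketch below shows how a single tweet and its rephrasing might be scored, assuming the vaderSentiment Python package; the example texts are invented for illustration and are not drawn from the study's data.

```python
# A minimal sketch of the comparison described above, assuming the
# vaderSentiment package. The tweet and its "rephrasing" are invented;
# they are not taken from the study's 50,000-tweet dataset.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def compound(text: str) -> float:
    """Return VADER's compound sentiment score, in [-1, 1]."""
    return analyzer.polarity_scores(text)["compound"]

original = "Absolutely loving the new update, great work!!!"
rephrased = "The new update has been positively received."  # hypothetical GPT-4 output

print(f"original:  {compound(original):+.3f}")
print(f"rephrased: {compound(rephrased):+.3f}")
# A rephrased score closer to zero illustrates, on a single example,
# the shift towards neutrality the study reports across 50,000 tweets.
```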
Ashkan Eshghi, Houlden Fellow at the Gillmore Centre for Financial Technology, commented: “Our findings reveal a notable shift towards neutral sentiment in LLM-rephrased content compared to the original human-generated text. This shift affects both positive and negative sentiments, ultimately reducing the variation in content sentiment.
"While LLMs do tend to move positive sentiments closer to neutrality, the shift in negative sentiments towards a neutral position is more pronounced. This overall shift towards positivity can significantly impact the application of LLMs in sentiment analysis.”
Ram Gopal, Director of the Gillmore Centre for Financial Technology, said: “Extensive literature already exists on the many uses of UGC, ranging from predicting stock prices to evaluating service quality, but we have found that the substantial use of LLMs introduces a significant concern: potential bias.
“This bias arises from the application of LLMs to tasks such as paraphrasing, rewriting, and even content creation, resulting in sentiments that may diverge from those the individual would have expressed without an LLM.
“To address this, our research proposes a mitigation method aimed at reducing bias and enhancing the reliability of UGC: predicting or estimating the sentiment of original tweets from the sentiments of their rephrased counterparts.”
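The announcement does not specify the estimator behind this mitigation. As a hedged illustration of the idea, the sketch below fits a simple linear regression that maps rephrased-tweet sentiment back to an estimate of the original sentiment, using invented paired scores.

```python
# Illustrative calibration: map the sentiment of LLM-rephrased text
# back to an estimate of the original human sentiment. A plain linear
# regression stands in for whatever estimator the paper actually uses;
# the paired compound scores below are invented for the example.
import numpy as np
from sklearn.linear_model import LinearRegression

rephrased = np.array([[0.35], [0.30], [-0.15], [-0.10], [0.20], [-0.12]])
original = np.array([0.80, 0.65, -0.70, -0.55, 0.45, -0.60])

model = LinearRegression().fit(rephrased, original)

# Estimate the original sentiment behind a newly observed rephrased tweet.
new_rephrased = np.array([[0.25]])
print(f"estimated original sentiment: {model.predict(new_rephrased)[0]:+.2f}")
```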
However, further investigation is needed to establish whether other linguistic features of UGC, such as emotion, sentence structure, or the proportion of particular words in a sentence, also change when AI is used.
Dr Yi Ding, Assistant Professor of Information Systems at the Gillmore Centre for Financial Technology, added: “We have seen that OpenAI has around 180 million monthly active users worldwide, and more businesses are jumping aboard the AI hype train, adopting it as a business tool.
“Studies like this one, examining the use of generative AI alongside human sentiment, will play a critical role in the future development of LLMs, ultimately enhancing their output, helping to remove bias, and improving efficiency for anyone who uses them.”
In subsequent work, the academics plan to employ other predictive models to infer authentic human sentiment and to propose further mitigation approaches.