
Study Shows AI Chatbots Reinforce Biases and Polarize Public Opinion

Artificial Intelligence Chatbots Tell Us What We Want to Hear

A recent study led by a team at Johns Hopkins University has shed light on how chatbots can reinforce biases and potentially widen the public divide on controversial issues. The research challenges the common belief that chatbots provide impartial information and highlights the impact of conversational search systems on shaping individuals’ perspectives.

According to the study, chatbots tend to share limited information and reinforce existing ideologies, leading to more polarized thinking among users, especially on hot-button topics. This phenomenon can leave individuals susceptible to manipulation and further entrench their preconceived notions.

Lead author Ziang Xiao, an assistant professor of computer science at Johns Hopkins, emphasized that despite the perception of chatbots as unbiased sources, their responses often mirror the biases of the users posing questions. This dynamic results in individuals receiving answers that align with their preferences, rather than objective facts.

Xiao highlighted the conversational nature of interactions with chatbots, noting that users tend to be more expressive and to phrase their queries as they would in everyday conversation. However, this conversational phrasing often embeds the user's own assumptions, which can inadvertently reinforce biases and limit exposure to diverse perspectives.

Presenting their findings at the Association for Computing Machinery's CHI Conference on Human Factors in Computing Systems, Xiao and his team outlined the methodology of their research. They compared how individuals engaged with different search systems and examined their attitudes toward contentious issues before and after using these platforms.

During the study, 272 participants first wrote an essay expressing their thoughts on topics such as health care, student loans, and sanctuary cities, then conducted online searches using either a chatbot or a traditional search engine built for the experiment. Afterward, participants wrote a second essay and provided feedback on the information they had encountered, including their trust in the sources and how extreme they found the viewpoints presented.

The researchers observed that chatbots presented a narrower scope of information compared to conventional web searches, tailoring responses to align with users’ existing beliefs. Consequently, individuals who interacted with chatbots exhibited heightened attachment to their initial opinions and stronger reactions to dissenting information, reinforcing the tendency to seek out confirming viewpoints.

Xiao cautioned against the echo chamber effect created by individuals gravitating towards information that reinforces their perspectives, limiting exposure to diverse opinions. The study underscored the role of chatbots in shaping user perceptions and highlighted the potential implications for exacerbating societal divisions on contentious issues.
