Why AI is more effective than humans at convincing people to abandon their conspiracy theory beliefs

About half of Americans believe in some form of conspiracy theory, and attempts by other people to change their minds have largely failed. However, an AI-powered chatbot has shown promise in this area.

In a series of tests, the chatbot got over a quarter of participants to question their most strongly held conspiracy beliefs, with conversations averaging less than 8½ minutes. The findings, published in the journal Science, suggest that the problem isn't that facts fail to persuade, but that people rarely encounter the right arguments tailored to their specific doubts.

The chatbot's success was attributed to its ability to give personalized, detailed responses to each of the 2,190 participants. For example, it clarified misconceptions about the structural integrity of steel beams in the 9/11 attacks or the sharpshooting skills of Lee Harvey Oswald. Each conversation involved three rounds of evidence from the AI and replies from the participant, producing an average 21% drop in agreement with the conspiracy theories.

Compared to control participants who discussed neutral topics, those engaging with the chatbot showed a significant reduction in their belief in conspiracy theories, with 27% becoming uncertain about their beliefs. These changes persisted over time.

The study also found that discussing one conspiracy theory with the AI made participants more skeptical of others, and made them more likely to ignore or push back against conspiratorial content online.

The chatbot's approach fell short only for conspiracies grounded in real events, such as the CIA's MK-Ultra project. Researchers remain optimistic but caution that it is unclear how well such interventions would work on new or poorly documented conspiracy theories.

While the chatbot’s success is promising, experts highlight the need for more research into why people initially embrace conspiracy theories and suggest focusing on prevention rather than just correction.
