'Relatively few' Twitter bots responsible for spreading misinformation – study
MANILA, Philippines – A new study in Nature Communications, published Tuesday, November 20, suggests that stopping bots is an effective way to curb the spread of misinformation, given how disproportionately influential bots are in extending the reach of low-credibility content.
Researchers from Indiana University in the US and the National University of Defense Technology in China analyzed 14 million messages spreading 400,000 articles on Twitter over 10 months in 2016 and 2017.
Based on their data, the researchers explained that "relatively few accounts are responsible for a large share of the traffic that carries misinformation."
The 6% of Twitter accounts identified as bots were responsible for spreading 31% of the low-credibility content. In most cases, the low-credibility content in question appears to be some form of misinformation, such as false news, conspiracy theories, and junk science.
There were two main ways in which bots amplified the content, according to the study.
"First, bots are particularly active in amplifying content in the very early spreading moments, before an article goes 'viral.' Second, bots target influential users through replies and mentions. People are vulnerable to these kinds of manipulation, in the sense that they retweet bots who post low-credibility content almost as much as they retweet other humans."
The study added, "bots amplify the reach of low-credibility content, to the point that it is statistically indistinguishable from that of fact-checking articles. Successful low-credibility sources in the United States, including those on both ends of the political spectrum, are heavily supported by social bots."
The study went on to note that social media platforms like Twitter are working on countering the bots, though their effectiveness is "difficult to evaluate."
At the same time, the same bot tactics could potentially be used to spread other harmful content, such as malware.
The study suggested "curbing social bots" as one possible strategy to lessen the spread of low-credibility content. It added, "Progress in this direction may be accelerated through partnerships between social media platforms and academic research," such as in the development of algorithms that detect bots. – Rappler.com