Could Machine Learning Find The Next Fake News Trend?
Campaigns of disinformation are as old as mainstream media, but recent times have seen them thrust into the spotlight in an alarming new way. Members of the online movement QAnon, for example, have been led to believe that the US government is controlled by a satanic cult of cannibals and sexual predators, and have gone to great lengths to take that supposed cult down. The most memorable of these attempts was the January 6th storming of the US Capitol.
It is clear that the more time we spend with our screens, the more at risk we are of falling into an online “echo chamber”, which distorts our view of the world. RAND, the international policy think tank, conducted interviews with extremists identified by UK Counter Terrorism Units.
They found that the internet provided their interviewees with more opportunities to become radicalised, and that most developed their beliefs through support and positive reinforcement from like-minded individuals online.
Meet RIO, the program that identifies disinformation
MIT Lincoln Laboratory’s Artificial Intelligence Software and Algorithms Group sought to understand how disinformation campaigns gain traction online.
The team designed RIO, the Reconnaissance of Influence Operations program, to investigate the unusual patterns they saw in social media data in the run-up to international elections.
Their first test came in the 30 days leading up to the 2017 French election, when the group collected 28 million Twitter posts from 1 million accounts and fed them into RIO.
The algorithm, which combines multiple analytics techniques to build a “bird’s-eye” view of the networks of relationships between online users, was able to detect disinformation accounts with 96% accuracy.
One reason for RIO’s success might be the metrics it uses to determine if a disinformation account is influential.
While most analyses assess a suspicious account by its volume of activity, such as its number of tweets and retweets, RIO takes a different statistical approach, examining how tweets from a given account cause the networks around it to amplify its message.
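RIO’s actual metrics have not been published in detail, so the following is only a toy illustration of the general idea: modelling retweets as a directed graph and measuring how far a message spreads (its cascade size), rather than how often an account posts. All account names and numbers here are invented.

```python
# Toy illustration (not RIO's actual metric): raw activity vs. amplification.
# Retweets are modelled as a directed graph, and a breadth-first search
# measures how far a message spreads through the surrounding network.
from collections import deque

# Hypothetical data: for each account, the accounts that retweeted it.
retweet_edges = {
    "bot_A": ["amp1", "amp2", "amp3"],   # three accounts retweet bot_A
    "amp1":  ["amp4", "amp5"],           # bot_A's retweeters get retweeted too
    "user_B": ["friend1"],               # user_B posts a lot but spreads little
}
tweet_counts = {"bot_A": 40, "user_B": 500}

def cascade_size(account):
    """Count all accounts reached by retweet chains starting at `account`."""
    seen, queue = set(), deque([account])
    while queue:
        node = queue.popleft()
        for rt in retweet_edges.get(node, []):
            if rt not in seen:
                seen.add(rt)
                queue.append(rt)
    return len(seen)

# user_B is far more *active*, but bot_A's message is *amplified* more:
print(tweet_counts["user_B"], cascade_size("user_B"))  # 500 1
print(tweet_counts["bot_A"], cascade_size("bot_A"))    # 40 5
```

On an activity-only metric, user_B looks far more important; the amplification view flips that ranking, which is the intuition behind RIO’s network-centric approach.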
RIO also uses a machine learning approach, integrated by group member Erika Mackin, which classifies accounts based on features such as their interactions with foreign media and the languages they use.
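The public description only names the kinds of features involved, so the sketch below is a minimal stand-in, not Mackin’s classifier: a tiny logistic regression, trained by gradient descent on invented feature values (foreign-media interactions and number of languages used), just to show how such features could feed a classifier.

```python
# Minimal sketch, assuming two invented features per account:
#   [foreign_media_interactions, num_languages_used], label 1 = suspicious.
# The training data and feature values are fabricated for illustration.
import math

X = [[12, 3], [9, 4], [15, 2], [1, 1], [0, 1], [2, 2]]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Train a 2-feature logistic regression with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):
    for xi, yi in zip(X, y):
        p = sigmoid(w[0] * xi[0] + w[1] * xi[1] + b)
        err = p - yi
        w[0] -= lr * err * xi[0]
        w[1] -= lr * err * xi[1]
        b -= lr * err

def predict(features):
    """Return True if the account's features look like the 'suspicious' class."""
    return sigmoid(w[0] * features[0] + w[1] * features[1] + b) > 0.5

print(predict([11, 3]))  # heavy foreign-media interaction, several languages
print(predict([1, 1]))   # ordinary single-language account
```

Any real system would of course use many more features and far more data; the point is only that behavioural signals like these can be turned into numeric features for a standard classifier.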
In this way, RIO can highlight disinformation campaigns related to a slew of topics and intended audiences, from anti-vaxxers to French nationalists.
In the near future, members of the RIO team expect that countering disinformation online won’t be restricted to detecting bots: system users could also fight back against the real humans attempting to manipulate public opinion or spread hate.
What’s more, the technology involved in the RIO project could help social media giants predict which measures would be most effective at stopping the spread of “fake news”. RIO may change the game online before events like those of January 6th can happen again.
Header Image: via www.shopcatalog.com