Combatting Fake News with Machine Learning Algorithms

Whatever your political beliefs, most of us can agree that fake news is a real and present threat. In the Artificial Intelligence space, teams like Preslav Nakov's at the Qatar Computing Research Institute (QCRI) are already harnessing the power of Machine Learning to tackle the problem.

What may be less clear is how ML algorithms could actually improve the factuality of the news, which is why we've compiled this overview.

To understand the significance of ML in the war against fake news, it helps to start with a particular meeting held at MIT in October. According to MIT Technology Review, around October 10th MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) hosted its annual update on its joint research with QCRI. Reportedly, the highlight of the partnership so far has been the use of over 900 variables to teach an AI system to predict the factuality of a news outlet.

To make this possible, the team, led by Preslav Nakov, trained AI systems with ML algorithms that analyzed news sources across this vast set of variables. While 900-plus variables may seem like more than enough to yield significant results, the opposite turned out to be true.
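
To make that setup concrete, here is a minimal sketch of this kind of training pipeline. Everything in it is a synthetic stand-in: the team's actual 900-plus variables, labels, and choice of algorithm are not reproduced here.

```python
# A hedged sketch, not the QCRI/MIT pipeline: each news outlet is reduced to
# a fixed-length numeric feature vector (the team reportedly used over 900
# variables), paired with a factuality label, and a standard classifier
# learns the mapping. All data below is randomly generated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_outlets, n_features = 500, 900
X = rng.normal(size=(n_outlets, n_features))   # one ~900-dim vector per outlet
y = rng.integers(0, 3, size=n_outlets)         # 0 = low, 1 = mixed, 2 = high factuality

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)                                # learn features -> factuality label
```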

According to MIT Technology Review, the team ran various models, the best of which accurately labelled news sources by their degree of factuality only 65% of the time. If you're wondering how such a system works, the underlying idea is simple: any claim made by a news source can be fact-checked against independent services like Snopes and PolitiFact.
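
For readers wondering where a number like 65% comes from, it is simply the share of held-out items whose predicted factuality label matches the reference label. The arrays below are toy values, not the team's actual results:

```python
# Generic accuracy computation; labels here are invented for illustration.
from sklearn.metrics import accuracy_score

y_true = [2, 1, 0, 2, 1, 0, 2, 1, 0, 2]   # reference factuality labels
y_pred = [2, 1, 1, 2, 0, 0, 2, 1, 2, 0]   # a model's predictions (toy values)
print(accuracy_score(y_true, y_pred))      # 0.6 -> 60% labelled correctly
```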

Rather than doing this themselves, however, the QCRI/MIT team used training data from an organization called Media Bias/Fact Check (MBFC). This matters because MBFC has already rated some 2,500 news sources in its system, which it consistently fact-checks against services like those mentioned above.
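
In practice, reusing MBFC-style ratings as labels amounts to what ML practitioners call distant supervision: the human-curated rating for each outlet becomes the target a model learns to predict. The scale and outlets below are invented placeholders, not MBFC's actual data:

```python
# Hypothetical mapping from human-curated factuality ratings to numeric
# training labels; both the scale and the outlet names are placeholders.
MBFC_STYLE_SCALE = {"very low": 0, "low": 0, "mixed": 1, "high": 2, "very high": 2}

raw_ratings = {
    "outlet-a.example": "high",
    "outlet-b.example": "mixed",
    "outlet-c.example": "low",
}

labels = {site: MBFC_STYLE_SCALE[rating] for site, rating in raw_ratings.items()}
print(labels)  # {'outlet-a.example': 2, 'outlet-b.example': 1, 'outlet-c.example': 0}
```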

Despite all this, the QCRI/MIT results weren't ideal. Something was still missing.

So the team went back to the drawing board to figure out exactly what was holding them back. According to Nakov, the answer turned out to be quite cut and dried.

In this particular case, humans are holding back the potential of ML. Snopes and PolitiFact rely on real journalists with extensive expertise in separating fact from fiction, and however MBFC publicizes the utility of its system, it is not actually automated either: any chance it has of success relies on human input. Humans can only finish so many fact-checks in a day, while an AI can keep going around the clock, with no need for sleep or sustenance of any kind.

Despite this, as noted above, QCRI and MIT were unable to reach a level of accuracy that reflects the system's true potential in this context.

Therefore, we’re left with the question: what would it take to truly automate the fact-checking process with a high degree of accuracy? Could we one day, as some have suggested, develop a universal fact-checker?

In November, the Unbabel Blog posited that to successfully create such a system, we would first have to figure out how to measure whether a piece of content is truthful or deliberately misleading. Since some sources claim that even human-staffed fact-checkers like Snopes cannot be trusted because of their reliance on other media outlets, this may be genuinely difficult to do. Even PolitiFact, one of the most trusted fact-checkers for political news in the United States, relies on only 11 full-time journalists. With these two examples in mind, improvements are clearly needed.
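
To see why that measurement problem is hard, consider the simplest possible baseline, sketched under the assumption that we already have articles labelled truthful or misleading: a bag-of-words classifier scoring new text. The tiny corpus below is invented purely for illustration.

```python
# A deliberately naive baseline, not a real solution: score text as
# "looks factual" vs. "looks misleading" from surface wording alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Officials confirmed the figures in a public report.",
    "The study was peer reviewed and independently replicated.",
    "SHOCKING secret THEY don't want you to know!!!",
    "Miracle cure banned by doctors, share before it's deleted!",
]
labels = [1, 1, 0, 0]  # 1 = looks factual, 0 = looks misleading (toy labels)

scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
scorer.fit(texts, labels)
print(scorer.predict_proba(["New report you won't believe!!!"])[0][1])
```

A model like this mostly learns stylistic tells such as all-caps and clickbait phrasing, which is exactly why deliberate misinformation written in a sober register would slip right past it.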

Preslav Nakov's team has recorded the most successful results to date with an AI-backed fact-checker, so only time will tell whether their system can improve to the point where it rivals human experts. At this juncture, the only way to reach such a milestone is to take in more and more training data while teaching the system to learn from its failures. Until that happens, it is safe to say that most of the world will continue to trust these professionals over anything ML can bring to the table.
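
What "learning from its failures" could look like is, at a minimum, a feedback loop: whenever human fact-checkers overturn one of the model's labels, the corrected example is folded back into the training pool and the model is refit. The sketch below is a toy stand-in for that loop, not the QCRI/MIT system:

```python
# Toy feedback loop: append human-corrected examples and retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(200, 10))              # existing labelled outlets
y_pool = rng.integers(0, 2, size=200)
model = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)

# A reviewer overturns one of the model's labels; fold the fix back in.
x_fixed, y_fixed = rng.normal(size=(1, 10)), np.array([1])
X_pool = np.vstack([X_pool, x_fixed])
y_pool = np.concatenate([y_pool, y_fixed])
model.fit(X_pool, y_pool)                        # refit on the corrected pool
```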

Resources:

https://www.csail.mit.edu/news/csail-hosts-annual-meeting-highlighting-collaboration-qcri

https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/

https://www.investors.com/politics/editorials/fact-checkers-big-media/

https://unbabel.com/blog/artificial-intelligence-fake-news/
