Fake news: can we stop it using artificial intelligence?

Fake news is a problem. And given the proliferation of communication and information channels, it seems impossible to stem the tide.

Fake news reaches ever more people and, according to researchers, attracts more attention online than real news and spreads much more quickly. Gartner predicts that by 2022 people in mature economies will consume more false information than true information.

Technological innovation has contributed greatly to the phenomenon but, by the same token, it can also help provide solutions. The most powerful tool we have for stopping those who profit from the spread of fallacious content is, according to the experts, artificial intelligence (AI).


Uncovering the lies

The idea itself is quite simple: teach software to automatically identify false content and the websites most responsible for spreading it, whether intentionally or accidentally. Think of how our email accounts have become quite good at identifying spam and protecting us from messages we do not want to receive. In much the same way, artificial intelligence can analyse the origin and content of a message and determine whether or not it is propaganda.
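To make the spam-filter analogy concrete, here is a minimal sketch of what such a classifier might look like in Python, using the scikit-learn library. The headlines and labels below are invented purely for illustration; a real system would need a large, carefully labelled corpus and far richer signals than word frequencies.

```python
# A minimal sketch of a spam-filter-style text classifier applied to news
# headlines. The training examples and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "SHOCKING: doctors HATE this one weird trick",
    "Central bank raises interest rates by 0.25 points",
    "You won't BELIEVE what this celebrity said about the election",
]
labels = ["real", "fake", "real", "fake"]  # hypothetical labels

# TF-IDF turns each headline into word weights; Naive Bayes then learns
# which words are associated with each label, much like a spam filter.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(headlines, labels)

print(model.predict(["BREAKING: you won't believe this miracle cure"]))
```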

In the same way, we could attempt to monitor the quality of a text, for example by comparing it with other content on the same topic, ideally the most widely shared content, and in each case tracing it back to its source. Is this the strategy for ridding ourselves of fake news?
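Before answering, it helps to see what such a comparison might look like in practice. Here is a minimal sketch using TF-IDF vectors and cosine similarity from scikit-learn; the articles are invented placeholders, and low similarity to other coverage is at best one weak signal among many, not a verdict on its own.

```python
# A sketch of comparing a suspect article against other coverage of the
# same topic. Low similarity to established reporting can flag an article
# for closer inspection. All texts here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_articles = [
    "The city council approved the new budget after a public hearing.",
    "Council members voted 7-2 to approve next year's municipal budget.",
]
suspect_article = "Secret council meeting funnels entire budget offshore."

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(reference_articles + [suspect_article])

# Compare the suspect article (last row) with each reference article.
scores = cosine_similarity(vectors[-1], vectors[:-1])
print(scores)  # low scores suggest the article diverges from other coverage
```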


From words to action

Some companies, startups in particular, are taking steps to make these strategies a reality. The end goal is to stop fake news before it can spread, either as soon as it is produced or, better still, before it is published. Of course, key to this is having a strong body of knowledge about sources, statistics and data.

Two interesting projects come from Factmata, which uses AI to investigate propaganda, fake news and clickbait in real time, and from the UK charity Full Fact. Automatic checking begins, for example, when politicians from opposite ends of the spectrum issue statements on the same topic, or when data, statistics or keywords prompt the program to seek out sources and quotes in order to establish their origin.
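As an illustration of that triggering step, here is a toy sketch that scans a statement for numeric claims which a fact-checking pipeline could route to source lookup. It is purely illustrative and does not reflect how Factmata or Full Fact actually work.

```python
# A toy sketch of claim detection: flag percentages and money amounts as
# candidate checkable claims. Real fact-checking systems are far more
# sophisticated; this pattern is purely illustrative.
import re

statement = "Unemployment fell by 3.2% last year, saving £2 billion."

# Match percentages and money amounts as candidate checkable claims.
claim_pattern = re.compile(r"(\d+(?:\.\d+)?%|£\d+(?:\.\d+)?\s*\w*)")
for claim in claim_pattern.findall(statement):
    print("Checkable claim found:", claim)
```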

Such algorithms do what they can to provide correct information, but of course we can't expect them to filter the entire internet and establish whether the content of every page is true or false. Nevertheless, these systems help give us an idea of the reliability of a person, website or newspaper, and make us a little more circumspect about the material they publish in the future.
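One simple way such a reliability signal might be accumulated is sketched below: count a source's past fact-check verdicts and turn them into a smoothed score. The smoothing choice and the verdicts are illustrative assumptions, not how any named product actually works.

```python
# A sketch of accumulating a per-source reliability score from past
# fact-check verdicts. The smoothing and the example verdicts are
# illustrative assumptions only.
from collections import defaultdict

verdicts = defaultdict(lambda: {"true": 0, "false": 0})

def record(source: str, was_true: bool) -> None:
    verdicts[source]["true" if was_true else "false"] += 1

def reliability(source: str) -> float:
    # Laplace smoothing so one verdict doesn't swing the score to 0 or 1.
    v = verdicts[source]
    return (v["true"] + 1) / (v["true"] + v["false"] + 2)

record("example-news.com", True)
record("example-news.com", False)
record("example-news.com", False)
print(reliability("example-news.com"))  # ~0.4 after one true, two false
```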


Slippery ground

Naturally these systems also have their limits, both technological and ethical. Sometimes there is no data available, or there are ambiguous grey areas where it is impossible to establish the intended meaning of a word. There is also the risk that programs of this type cannot always be used in an impartial and unbiased way, with the possibility of stereotypes creeping into the analysis system itself. After all, sometimes a legitimate email ends up in our spam folder by mistake.

So although artificial intelligence is increasingly sophisticated and useful, we can't delegate all our verification work to it. At least for the time being, humans will have to work in parallel with machines to achieve results: carefully reading and considering news items, trying to verify their authenticity, and counting to ten before sharing them.