Nothing but the truth: Technologies that help to fight fake news
During the US presidential election of 2016, an article based on a flawed argument claimed that Donald Trump had won the popular vote. In reality, his opponent Hillary Clinton received nearly 2.9 million more votes, yet this false claim gained over 4 million shares and engagements and became the world's biggest fake news story. A similar story during the same election involved a fictional endorsement of Donald Trump by Pope Francis. That story broke the news cycle and garnered nearly 1 million shares.
It is believed that fake news influenced the final election result. However, this is not the only misleading information on the global network, and politics is not its only target. Countless fake stories appear every day, and many of them have dangerous consequences: they can lead to intimidation or incite ethnic hatred. For example, the story about illegal immigrants starting the California wildfires was complete nonsense, but it still had over 500 thousand hits. Or consider the false claims by the Russian Foreign Ministry and pro-Kremlin media about a US-funded drugs laboratory in Georgia, whose main purpose was to distract readers' attention from the Salisbury poisonings.
Social networks usually help spread fake information. According to MIT research, false stories on Twitter are 70% more likely to be retweeted than true ones, and true stories take about six times longer to reach 1,500 people. The same study found that these figures do not depend on the number of bots in social networks: when the researchers removed bots from their dataset, the results stayed the same. People help spread misleading information even when they realize it is false. According to Statista, 52% of Americans believe that online media publish fake news regularly.
How often do the media publish fake news? (US, 2018)
So is there any way to stop, or at least reduce, the flow of fake news? Recently, Google and the biggest social networks, such as Facebook and Twitter, signed a code of conduct describing how they plan to counteract the spread of false stories. Their main goals are to fight fake accounts and bots and to simplify access to authoritative content. But even if they succeed, what do we do about the sources of fake news? What about media outlets controlled by politicians or oligarchs? This is where modern technologies can help us get verified and trusted information.
Blockchain
All media survive by monetizing their content. Traditional monetization mechanisms push authors to focus more on generating views than on delivering insightful information. To motivate authors to publish reliable information, the monetization process itself has to change. Using blockchain and crypto tokens, media platforms can build self-sustaining economies. For example, the ASKfm platform is working on the blockchain-based ASQ Protocol. The project will allow users to create and store content and, more importantly, to control access to publications through internal data encryption, while the blockchain guarantees the transparency of monetization.
Blockchain-based media platforms can also promote high-quality content with tokens. To participate in the Sapien Network, for example, users must hold local tokens: without them they can't comment, post, or vote on publications. This protects the internal media environment from third-party interference, discourages fake accounts and trolls, and makes users think twice before posting or sharing information.
What about copyright protection? Some unreliable media can modify previously published articles to create a different narrative. That is why we need platforms that track the sources of information and certify publications, and the first blockchain-based projects of this type already exist. The research collaboration platform Matryx can identify the researchers who worked on a project and their contribution to it. The Proof of Existence service lets you certify documents using the Bitcoin blockchain. All records about authorship are stored within the blockchain system.
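To make the certification idea concrete, here is a minimal sketch of the local step such services rely on: computing a cryptographic fingerprint of a document. This is only an illustration of the general approach, not the actual Proof of Existence implementation; how the fingerprint then gets anchored in the Bitcoin blockchain is the service's job and is out of scope here.

```python
# A minimal sketch of the idea behind document-certification services
# (assumption: anchoring the hash in Bitcoin is handled by the service;
# here we only show how the document fingerprint is produced).
import hashlib

def document_fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file: a unique fingerprint of its exact content."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The fingerprint, not the document itself, is what gets recorded on the blockchain.
# Any later edit to the article, however small, produces a different hash,
# so tampering with an already-certified publication becomes detectable.
# print(document_fingerprint("article.txt"))
```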
Artificial Intelligence
At its core, Artificial Intelligence is a set of algorithms that analyze large amounts of data. So why can't AI verify media sources and detect fake news before it spreads across social networks? Recently, MIT's CSAIL (Computer Science and Artificial Intelligence Lab) and the QCRI (Qatar Computing Research Institute) announced a new project that identifies fake news sources. Using data from Media Bias/Fact Check (MBFC), the system was trained to detect strong political bias and misinformation. It analyzes a suspicious site's articles, its Twitter account, URL structure, and web traffic. As a result, the system is about 70% accurate at recognizing political bias and about 65% accurate at detecting a site's factuality.
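To show the general approach, here is a simplified sketch of training a source-credibility classifier on labeled examples. This is not the MIT/QCRI system; the toy texts, labels, and model choice are all assumptions used only to illustrate how a system can learn from sources already rated by a service such as MBFC.

```python
# A simplified, illustrative sketch (assumption: NOT the MIT/QCRI system):
# learn to separate credible from low-credibility writing styles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: text sampled from a source, plus a human-assigned label.
texts = [
    "SHOCKING: secret cure they don't want you to know about!!!",
    "Officials confirmed the figures in a press briefing on Tuesday.",
    "You won't BELIEVE what this politician did next...",
    "The study, published in a peer-reviewed journal, analyzed 10 years of data.",
]
labels = ["low-credibility", "credible", "low-credibility", "credible"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new, unseen snippet from a suspicious site.
print(model.predict(["Doctors HATE this one weird trick, share before it's deleted!"]))
```

A real system would, as described above, also look at signals beyond text, such as the site's Twitter account, URL structure, and web traffic.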
But there is another side to AI in the fake news world. Less than two years ago, this video of a synthetic Barack Obama broke the Internet. The underlying technology, which maps new facial expressions onto a real human face, wasn't new, but this time the result was incredibly realistic.
A Generative Adversarial Network, or GAN, is a machine learning technique. Invented by Ian Goodfellow in 2014, it can generate fake videos, images, audio, or text based on existing datasets. Over the past four years, the technology has been improved by scientists all over the world and its output has become far more realistic. Today, using GANs, developers can copy and alter not just faces or voices but entire body movements. Some scholars, such as Danielle Citron, a professor of law at the University of Maryland, have already called the technology a threat to privacy and national security. So how do we detect AI-produced video?
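The adversarial idea itself is simple: a generator tries to produce convincing fakes while a discriminator tries to tell fakes from real data, and each improves by competing with the other. Below is a minimal sketch of that training loop on a toy 1-D distribution, assuming PyTorch is available; real face-swapping models use the same principle at a vastly larger scale.

```python
# A minimal GAN sketch (assumption: PyTorch installed). It learns to mimic a
# simple 1-D Gaussian -- the same adversarial idea used to synthesize faces.
import torch
import torch.nn as nn

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print("generated mean:", G(torch.randn(1000, 8)).mean().item())  # should approach 3.0
```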
To find the best way to detect fake video, scientists are studying existing AI technologies. Recently, researchers from the University at Albany (SUNY) proposed identifying deep fakes by the lack of blinking in synthetic subjects. Still, Hany Farid, a professor of computer science at the University of California, noted that almost any detection method scientists propose can be beaten by slight changes to the AI's code. Suppose, for example, that we try to identify a fake video by the subtle color changes in the face that correspond to the heartbeat; the forger simply updates the algorithm to reproduce this signal in the fake. That is why most researchers, including Farid, keep the details of their fake-recognition projects secret.
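As a toy illustration of the blink-rate heuristic mentioned above, the sketch below flags a clip whose blink rate is far below a typical human rate. The eye-aspect-ratio (EAR) values are assumed to have been extracted beforehand with a face-landmark detector, and the thresholds are purely illustrative; this is not the SUNY researchers' actual method.

```python
# Toy blink-rate heuristic (assumptions: per-frame eye-aspect-ratio values are
# already available; the 0.2 "eyes closed" threshold and 5 blinks/min cutoff
# are illustrative, not values from any published detector).
from typing import List

def count_blinks(ear_per_frame: List[float], closed_threshold: float = 0.2) -> int:
    """Count transitions from 'eyes open' to 'eyes closed'."""
    blinks, eyes_closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False
    return blinks

def looks_synthetic(ear_per_frame: List[float], fps: float,
                    min_blinks_per_minute: float = 5.0) -> bool:
    """Flag a clip whose blink rate is far below a typical human rate (~15-20/min)."""
    minutes = len(ear_per_frame) / fps / 60.0
    rate = count_blinks(ear_per_frame) / minutes if minutes > 0 else 0.0
    return rate < min_blinks_per_minute

# Example: a 30-second clip at 30 fps in which the eyes never close is flagged.
print(looks_synthetic([0.3] * 900, fps=30))  # True
```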
The human factor
Still, people have been and will remain the main reason fake news spreads faster than the truth. Even the famous phrase "A lie can travel halfway around the world before the truth can get its boots on" is itself a case of misattribution, because no one knows for sure who said it first. We share fake stories because they usually look more novel and interesting than the truth. So how can you protect yourself and those around you from a lie? Before reposting any information, check the following:
- Are other media outlets covering this story? This is especially worth checking when the information comes from an unfamiliar outlet, but even mainstream sources should be double-checked;
- Is the headline clickbait? Serious media rarely use it; clickbait is often just a way to get you to share the article before reading it;
- What is the source of the information? Does the story reference any experts? Do these people really exist?
- Are there too many mistakes in the text? Errors are uncommon in reputable media;
- Does the domain name look strange? Check whether any additional information about it is available.
Some global media outlets have dedicated departments for debunking fake stories. On this BBC page, for example, you can find information about the most dangerous and misleading stories from around the world.
Can you identify fake news? Try this game to find out.
Read more about spotting a fake news story from Harvard Summer School.