With artificial intelligence now able to create convincing fake news articles that can fool people on a mass scale, scientists have a new task: creating algorithms that can prevent such stories from spreading across the internet. A team of researchers from the University of Washington has developed a new tool that could help contain the potential damage from AI-generated fake news, which typically trades on lurid headlines.

Referred to as GROVER, the system can both detect and write fake, misleading news articles; the articles it writes are, according to the research report published on arXiv, even more convincing than those written by humans.

The researchers found that the best existing discriminators can distinguish neural fake news from human-written articles with 73 percent accuracy when fed a moderate amount of training data. “Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92 percent accuracy,” the researchers wrote in the publication.
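To make that classification framing concrete, here is a purely illustrative sketch, not the authors' code: GROVER's real discriminator is a large neural network, and the tiny corpus, TF-IDF features, and logistic regression below are stand-ins that only show the basic setup of labeling articles as human-written or machine-generated.

```python
# Purely illustrative sketch -- not GROVER itself. It shows the framing
# the paper describes: train a classifier to separate human-written news
# from machine-generated news.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical miniature corpus; the study trained on large article sets.
texts = [
    "City council approves the budget after months of public hearings.",
    "Local school wins state award for its new reading program.",
    "Officials confirm the vaccine secretly alters human DNA overnight.",
    "Experts say the new phone can read minds, sources reveal.",
]
labels = ["human", "human", "machine", "machine"]

discriminator = make_pipeline(TfidfVectorizer(), LogisticRegression())
discriminator.fit(texts, labels)

# Score an unseen article; real accuracy depends on far more data.
print(discriminator.predict(["Sources reveal the phone secretly reads minds."]))
```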

In a nutshell, GROVER is a powerful tool for detecting fake news written by artificial intelligence. In the wrong hands, however, the system could flood the internet with misinformation and dangerous propaganda.

GROVER will be available to the public

Unlike the OpenAI team, which declined to release the full version of its similar system, GPT-2, GROVER's developers announced in their paper that they would release the system to the public. Aside from detecting foul play in an article's text, the algorithm, unlike other tools, can also analyze aspects such as the author's name, the publication's name, the headline, and other details. But what if it falls into the wrong hands?
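To picture what those "other aspects" are, the paper frames each article as a bundle of fields rather than body text alone. The sketch below is a hypothetical illustration of that framing: the field names follow the paper, but the suspicion check is a made-up placeholder, not GROVER's method, which scores all fields jointly with a trained model.

```python
# Sketch of the structured view the paper takes of a news article.
# The check below is a hypothetical placeholder, NOT GROVER's method.
from dataclasses import dataclass

@dataclass
class Article:
    domain: str        # publication site, e.g. "nytimes.com"
    date: str
    authors: list[str]
    headline: str
    body: str

def looks_suspicious(article: Article) -> bool:
    # Toy heuristic for illustration; a real verifier would weigh
    # every field with a learned model, not a keyword match.
    return "shocking" in article.headline.lower()

example = Article(
    domain="example-news.com",
    date="May 29, 2019",
    authors=["Jane Doe"],
    headline="Shocking link found between vaccines and autism",
    body="...",
)
print(looks_suspicious(example))
```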

Based on demonstrations in the study, GROVER can write a news article containing false information in the distinctive style of specific news outlets such as The New York Times, The Washington Post, Wired, and TechCrunch. According to the study, articles written by GROVER convinced readers more often than those written by humans.

In one example, the team used GROVER to generate a headline, an author's name, and the opening of a news article linking autism and vaccines to the federal government and UC San Diego, in the writing style of The New York Times' science section. The researchers also demonstrated how GROVER can match its writing to a specific publication simply by refining its output: fed a headline about autism-causing vaccines and asked to match it to Wired's style, GROVER wrote a full article from the headline and then refined it until it read like something Wired could have published.
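The underlying idea is conditioning a language model on metadata such as an outlet and a headline. The sketch below is not GROVER; it uses the off-the-shelf GPT-2 model via the Hugging Face `transformers` library as a stand-in, and the metadata-style prompt format is an assumption made for illustration, so the output quality will be far below what the paper reports.

```python
# Illustrative stand-in, not GROVER: condition a generic language model
# (GPT-2) on outlet and headline metadata by packing them into the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt format in the spirit of the paper's
# (domain, headline, body) conditioning.
prompt = (
    "Domain: wired.com\n"
    "Headline: Link found between vaccines and autism\n"
    "Body:"
)
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```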

The researchers acknowledged that GROVER could be dangerous if released to the public, but maintained that releasing it is the best way to prevent the spread of AI-generated fake news, including articles generated by GROVER itself.

The move still looks dicey, since the algorithm could be modified or extended by individuals planning to use it to produce fake news. Do the developers really have an answer for every advancement and tweak that could be made to GROVER?