Wikipedia AI is an artificial intelligence engine that analyzes the changes made to Wikipedia in an attempt to improve the quality of its articles. Wikipedia is an online encyclopedia that anyone can edit; the not-for-profit website drastically changed the way we get information and is one of the ten most-visited sites on the internet.
Wikipedia articles receive an enormous number of changes on a day-to-day basis. To monitor these changes and highlight potentially damaging ones, Wikipedia started an AI project named the Objective Revision Evaluation Service (ORES). The point of using AI is to catch edits containing bogus information and protect the site from vandalism. Editors can score edits for damaging alterations using this AI, which is available as an open web service.
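Because the service is exposed over the web, a tool for editors would typically fetch a score for a revision and flag it for review when the damaging probability is high. The sketch below parses a hypothetical JSON reply; the field names and response shape here are illustrative assumptions, not the actual ORES API format:

```python
import json

# Hypothetical JSON reply from a revision-scoring web service.
# The field names are illustrative, not the exact ORES response format.
RAW_REPLY = json.dumps({
    "revision_id": 123456,
    "damaging": {
        "score": {
            "prediction": True,
            "probability": {"true": 0.91, "false": 0.09},
        }
    },
})

def needs_review(reply_text: str, threshold: float = 0.8) -> bool:
    """Flag an edit for human review when its damaging probability is high."""
    reply = json.loads(reply_text)
    prob = reply["damaging"]["score"]["probability"]["true"]
    return prob >= threshold

print(needs_review(RAW_REPLY))  # True: a 0.91 damaging score exceeds 0.8
```

Keeping the threshold adjustable matters here: the service only prioritizes human attention, so each editing tool can decide how aggressively to surface suspect edits.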
Aaron Halfaker, a senior research scientist at the Wikimedia Foundation, the parent organization of Wikipedia, built this AI engine as a way of automatically analyzing changes and identifying vandalism among edits. For editors, the AI works like a pair of X-ray specs, making their task easier. Halfaker states, “Although it may seem like a step toward AI replacing humans, it is a step toward increasing the involvement of humans in Wikipedia.” Wikimedia’s head of research, Dario Taraborelli, added, “This project is an attempt to bring back the human element and to allocate human attention where it’s needed the most.”
There is a strict set of rules for editing a Wikipedia article, and the system won’t accept your changes unless you follow all of them. Sometimes this rigidity acts as a barrier that keeps would-be contributors from joining the ranks of regular Wikipedia editors. Aaron Halfaker intends to increase the participation of new editors by making Wikipedia friendlier. The service identifies obvious vandalism and separates it from well-intentioned edits using scikit-learn, an open-source collection of machine learning algorithms.
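At its core, this is a supervised classification problem: edits described by simple numeric features, labeled damaging or good-faith. The toy sketch below shows the shape of that approach with scikit-learn, the library the article names; the features, data, and model choice are invented for illustration and are far simpler than what the real service uses:

```python
# Toy sketch of a damaging-edit classifier with scikit-learn.
# Features, labels, and model choice are made up for illustration.
from sklearn.ensemble import RandomForestClassifier

# Each row: [characters added, longest run without spaces, profane-word count]
X = [
    [12, 8, 0],     # small, clean copy-edit           -> good faith
    [300, 40, 0],   # large but well-spaced addition   -> good faith
    [150, 150, 2],  # one giant unspaced block, slurs  -> damaging
    [80, 75, 1],    # mostly mashed keys               -> damaging
]
y = [0, 0, 1, 1]  # 1 = damaging

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new edit: 200 characters, no spaces at all, one profanity.
print(clf.predict([[200, 200, 1]])[0])
```

The important design point is that the classifier outputs a probability rather than a verdict, so borderline edits can be routed to human reviewers instead of being reverted automatically.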
Wikipedia AI algorithms work by identifying certain words, variants of those words, and specific keyboard patterns. For example, the algorithms can spot an abnormally large block of characters: vandals tend to mash keys without spaces, and such character blocks are easy for the AI to flag. The service cannot catch every last bit of vandalism, but it catches most of it, since the majority of vandalism is crude. It is clear, though, that the algorithms cannot spot a well-written hoax.
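One of the signals described above, a long unbroken run of characters with no spaces, can be sketched as a simple regular-expression check. The 40-character threshold here is an arbitrary illustrative choice, not a value from the actual service:

```python
import re

# Heuristic stand-in for one signal the article describes: an abnormally
# long run of non-space characters, typical of keyboard-mashing vandals.
LONG_RUN = re.compile(r"\S{40,}")  # 40 is an arbitrary illustrative threshold

def looks_like_mashing(added_text: str) -> bool:
    """Return True when the added text contains a long unspaced run."""
    return LONG_RUN.search(added_text) is not None

print(looks_like_mashing("aslkdjalksdjqweqweqweqweqweqwelkajsdlkajsdlkqjw"))  # True
print(looks_like_mashing("A normal, well-spaced sentence about history."))   # False
```

A rule this crude catches exactly the kind of vague, low-effort vandalism the article mentions, while a carefully written hoax sails past it, which is why such patterns serve as features for a learned model rather than as standalone filters.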
“Even if Wikipedia AI were to replace human editors in the near future, it would still require human judgement to guide those neural networks,” says Halfaker. This approach is Wikipedia’s way of attracting more people to its list of editors. After all, it’s AI, but it’s also very human.