

Study: Bot-on-Bot Editing Wars Raging on Wikipedia's Pages
Source: Ian Sample


For many it is no more than the first port of call when a niggling question rears its head. Found on its pages are answers to mysteries ranging from the fate of the male anglerfish and the joys of dorodango to the improbable death of Aeschylus.

But beneath the surface of Wikipedia lies a murky world of enduring conflict. A new study from computer scientists has found that the online encyclopedia is a battleground where silent wars have raged for years.

Since Wikipedia launched in 2001, its millions of articles have been ranged over by software robots, or simply “bots”, that are built to mend errors, add links to other pages, and perform other basic housekeeping tasks.

In the early days, the bots were so rare that they worked in isolation. But over time, the number deployed on the encyclopedia exploded, with unexpected consequences. The more the bots came into contact with one another, the more they became locked in combat, undoing each other’s edits and changing the links they had added to other pages. Some conflicts ended only when one bot or the other was taken out of action.

“The fights between bots can be far more persistent than the ones we see between people,” said Taha Yasseri, who worked on the study at the Oxford Internet Institute. “Humans usually cool down after a few days, but the bots might continue for years.”

The findings emerged from a study that looked at bot-on-bot conflict in the first ten years of Wikipedia’s existence. The researchers at Oxford and the Alan Turing Institute in London examined the editing histories of pages in 13 different language editions and recorded when bots undid other bots’ changes.
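
The core measurement behind the study can be sketched in a few lines of code. The snippet below is a hypothetical illustration rather than the researchers’ actual pipeline: it assumes each revision record already identifies the editor, whether that editor is a bot, and whose edit (if any) the revision undoes, and it simply tallies the revert pairs in which one bot undoes another.

    # Hypothetical sketch of the revert-counting idea described above; not the
    # study's actual code. Assumes each revision already records the editor,
    # whether the editor is a bot, and (for reverts) whose edit was undone.
    from collections import Counter
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Revision:
        editor: str
        editor_is_bot: bool
        reverted_editor: Optional[str] = None  # editor whose change this revision undid
        reverted_is_bot: bool = False

    def count_bot_on_bot_reverts(history):
        """Count reverts in which one bot undoes another bot's edit."""
        pairs = Counter()
        for rev in history:
            if rev.editor_is_bot and rev.reverted_editor and rev.reverted_is_bot:
                pairs[(rev.editor, rev.reverted_editor)] += 1
        return pairs

    # Toy edit history for a single article
    history = [
        Revision("Xqbot", True),
        Revision("Darknessbot", True, reverted_editor="Xqbot", reverted_is_bot=True),
        Revision("Xqbot", True, reverted_editor="Darknessbot", reverted_is_bot=True),
    ]
    print(count_bot_on_bot_reverts(history))
    # Counter({('Darknessbot', 'Xqbot'): 1, ('Xqbot', 'Darknessbot'): 1})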

They did not expect to find much. The bots are simple computer programs written to make the encyclopedia better; they are not intended to work against each other. “We had very low expectations to see anything interesting. When you think about them they are very boring,” said Yasseri. “The very fact that we saw a lot of conflict among bots was a big surprise to us. They are good bots, they are based on good intentions, and they are based on the same open source technology.”

While some conflicts mirrored those found in society, such as the best names to use for contested territories, others were more intriguing. Describing their research in a paper entitled Even Good Bots Fight, published in the journal PLOS ONE, the scientists reveal that among the most contested articles were pages on the former president of Pakistan Pervez Musharraf, the Arabic language, Niels Bohr and Arnold Schwarzenegger.

One of the most intense battles played out between Xqbot and Darknessbot, which fought over 3,629 different articles between 2009 and 2010. Over that period, Xqbot undid more than 2,000 edits made by Darknessbot, and Darknessbot retaliated by undoing more than 1,700 of Xqbot’s changes. The two clashed over pages on all sorts of topics, from Alexander of Greece and Banqiao district in Taiwan to Aston Villa football club.

Another bot, named after Tachikoma, the artificial intelligence in the Japanese science fiction series Ghost in the Shell, had a two-year running battle with Russbot. The two undid more than a thousand of each other’s edits across more than 3,000 articles, ranging from Hillary Clinton’s 2008 presidential campaign to the demography of the UK.

The study found striking differences in the bot wars that played out across the various language editions of Wikipedia. The German editions had the fewest bot fights, with bots undoing each other’s edits on average only 24 times in a decade. The story was different on the Portuguese Wikipedia, where bots undid the work of other bots on average 185 times in ten years. The English version saw bots meddling with each other’s changes on average 105 times a decade.

The findings show that even simple algorithms that are let loose on the internet can interact in unpredictable ways. In many cases, the bots came into conflict because they followed slightly different rules to one another.
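
A toy simulation (purely illustrative, and not taken from the paper) shows how this can happen: two bots that each enforce a slightly different link format will keep “correcting” the same page back and forth without ever converging.

    # Purely illustrative: two bots with slightly different formatting rules
    # end up undoing each other's edits indefinitely.
    def bot_a(text):
        # Bot A prefers the short link form.
        return text.replace("[[New York City|New York]]", "[[New York]]")

    def bot_b(text):
        # Bot B prefers the explicit, piped link form.
        return text.replace("[[New York]]", "[[New York City|New York]]")

    page = "She was born in [[New York City|New York]]."
    for step in range(4):
        new_page = bot_a(page) if step % 2 == 0 else bot_b(page)
        if new_page != page:
            print(f"step {step}: edit made -> {new_page}")
        page = new_page
    # Each rule is locally sensible, but together the bots never settle.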

Yasseri believes the work serves as an early warning to companies developing bots and more powerful artificial intelligence (AI) tools. An AI that works well in the lab might behave unpredictably in the wild. “Take self-driving cars. A very simple thing that’s often overlooked is that these will be used in different cultures and environments,” said Yasseri. “An automated car will behave differently on the German autobahn to how it will on the roads in Italy. The regulations are different, the laws are different, and the driving culture is very different,” he said.

As more decisions, options and services come to depend on bots working properly together, harmonious cooperation will become increasingly important. As the authors note in their latest study: “We know very little about the life and evolution of our digital minions.”

Earlier this month, researchers at Google’s DeepMind set AIs against one another to see whether they would cooperate or fight. When the AIs were let loose on an apple-collecting game, the scientists found that they cooperated while apples were plentiful, but as soon as supplies ran short, they turned nasty. It is not the first time AIs have run into trouble. In 2011, scientists in the US recorded a conversation between two chatbots. They bickered from the start and ended up arguing about God.

