Google Is Using AI to Combat Internet Trolls
Source: Danny Vena


One of the biggest challenges facing websites, news organizations, and social media is online abuse and harassment. Since the dawn of the internet, sites have battled abusive, toxic, and inappropriate comments posted in their online forums. With millions of comments posted, news organizations quickly became overwhelmed by the sheer volume of data produced.

Websites and publishers have become so concerned about the volume of vitriolic comments, and the fear of lawsuits, that many have removed the ability to post comments on their sites altogether. Consider that 72% of internet users in America have witnessed online harassment, and nearly half have experienced it themselves. Social media sites have come under fire for failing to control hate speech and online abuse by their members. Technological advancements in artificial intelligence (AI) may finally provide an answer.

Google uses AI to battle abusive content. Image source: Pixabay.
An important piece of the puzzle

Jigsaw, a technology incubator of Google parent Alphabet (NASDAQ: GOOGL) (NASDAQ: GOOG), has developed an artificial neural network -- an AI system that replicates the structure and learning capacity of the human brain using algorithms and software models. Using this technology, it aims to identify and control abusive online comments. Google and Jigsaw are making the program available free of charge, and it's being added to Google's TensorFlow library and Cloud Machine Learning Platform. The product, dubbed Perspective, uses deep learning to sift through reams of data to detect harassment, insults, and abusive speech in online forums in real time. Jigsaw explains how the program works:

        Perspective scores comments based on the perceived impact a comment might have on a conversation, which publishers can use to give real-time feedback to commenters, help moderators sort comments more effectively, or allow readers to more easily find relevant information. We'll be releasing more machine learning models later in the year, but our first model identifies whether a comment could be perceived as "toxic" to a discussion.
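
In practice, Perspective is exposed to publishers as a web API: a site sends a comment's text and receives a score it can act on. As a rough sketch -- assuming the publicly documented comments:analyze endpoint, a TOXICITY attribute, and a valid API key, any of which may differ in practice -- a request might look like this:

    # Rough sketch of querying the Perspective API for a toxicity score.
    # The endpoint, attribute name, and response fields are assumptions based
    # on the public documentation and may differ in practice.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from Google
    URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           f"comments:analyze?key={API_KEY}")

    payload = {
        "comment": {"text": "you're a jerk"},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()

    # The summary score is a probability between 0 and 1; multiplying by 100
    # gives the percentage-style ratings quoted in this article.
    score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    print(f"Toxicity: {score * 100:.0f}%")

A publisher could then hold comments above a chosen threshold for human review, or surface the score to moderators, as the quote above describes.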

Do you kiss your mother with that mouth?

Google and Jigsaw used comment data from The New York Times Company (NYSE: NYT), Wikipedia, and several unnamed partners. They then showed that data to panels of people and had them rate whether the comments were toxic. These human responses served as training data for the AI system, which rates a phrase's toxicity on a scale of 0 to 100 and allows you to test the system yourself. The phrase "you are ignoring important information" rates a 10%, "your mother wears combat boots" garners a 55%, while "you're a jerk" gets a response of 86% toxicity.
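
Jigsaw has not published Perspective's model or training data, but the recipe described above -- human raters label comments, and a model learns to map new text to a toxicity score -- can be illustrated with a deliberately simplified stand-in. The sketch below swaps in a TF-IDF bag-of-words and a linear regressor for Jigsaw's deep neural network, and the four labeled comments are hypothetical:

    # Illustrative stand-in only: a tiny TF-IDF + linear model trained on
    # hypothetical human ratings, not Jigsaw's actual deep-learning pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: each comment paired with the fraction of
    # human raters who judged it toxic.
    comments = [
        "you are ignoring important information",
        "your mother wears combat boots",
        "you're a jerk",
        "thanks for the thoughtful reply",
    ]
    toxicity = [0.10, 0.55, 0.86, 0.02]

    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(comments, toxicity)

    # Score a new comment; multiply by 100 for the percentage scale above.
    print(model.predict(["what a rude reply, you jerk"]) * 100)

The real system trains on far more labeled data and a much richer model, but the division of labor is the same: humans supply the judgments, and the model generalizes them to new comments in real time.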

The New York Times reports that it only has the resources to allow comments on 10% of its articles. The company provided its archives of comments in hopes of expanding its comments section and to "increase the speed at which comments are reviewed." With only 14 moderators to manually review every comment -- roughly 11,000 per day on average -- the task had become overwhelming.

The New York Times joins Google to curb online abuse. Image source: Pixabay.
Twitter tries to clean up its act

Google is not the only company seeking to curb online bullying using AI. At IBM's (NYSE: IBM) InterConnect conference last month, Twitter, Inc.'s (NYSE: TWTR) vice-president of data strategy Chris Moody announced that the popular social network had partnered with IBM's Watson, the AI-based cognitive computer, to address online abuse. He stated:

        We're starting just now to partner with the Watson team. Watson is really good at understanding nuances in language and intention. What we want to do is be able to identify abuse patterns early and stop this behavior before it starts.

This comes at a crucial time for Twitter, which has been under fire for not policing its users. The company announced last month that it was working to make the site a safer place by limiting abusive users' ability to create new accounts and updating how users can report abusive tweets.
That didn't end well

AI systems have been tested on social media before, though the results have been less than stellar. In early 2016, researchers at Microsoft (NASDAQ: MSFT) used Twitter as a testing ground for an AI-based chatbot, @tayandyou, aka TayTweets, designed to learn the speech patterns of millennials by interacting with them on the site. Unfortunately, within 24 hours and 96,000 tweets, the experiment was suspended when the fledgling AI began spewing venomous vitriol. Microsoft later said:

        The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay.

Final thoughts

Each new technological innovation brings benefits and challenges. The dawn of the internet age brought with it internet trolls, who seek to control the conversation and silence dissenting voices. Programs like Perspective and Watson seek to restore a voice to those vulnerable speakers, which benefits us all.

