Researchers developing algorithms to detect fake reviews
Anyone who has conducted business online―from booking a hotel to buying a book to finding a new dentist or selling their wares―has come across reviews of said products and services. Chances are they've also encountered some that just don't seem legitimate. Researchers at the University of Kansas are developing algorithms and computational models to detect fake online reviews to improve commerce for consumers and businesses and to improve credibility of social media.
Hyunjin Seo, assistant professor of journalism, and Fengjun Li, assistant professor of electrical engineering and computer science, have won a KU strategic initiative grant to develop interdisciplinary models to detect fake or dishonest reviews that are misleading, untrue or do not meet Federal Trade Commission guidelines for online commerce. Fake reviews can be damaging to businesses, consumers and the sites that host them. As the Internet has evolved, so have the number and kinds of fake reviews.
"The most fundamental part of this project is to develop a more trustworthy social media experience, because that's such a big part of how we get information and make decisions as consumers and businesses," Seo said. "If credibility and trust are not there, it can harm all sides."
The research team will develop algorithms to speed the collection of data from millions of reviews posted on sites such as Amazon.com, Yelp.com, Zappos.com, TripAdvisor.com, Expedia.com and others. They will then build computational models that detect fake reviews and assess trust in online communities based on the data they collect. People write fake reviews for many reasons: businesses may post glowing reviews of their own products or negative reviews of a competitor's goods and services, while consumers may act on a personal grudge or promote a business they have a connection to. The researchers will run semantic analyses of the millions of reviews they collect, searching for common language patterns that can flag a review as dishonest, and will track how much users interact on review sites to determine whether someone is a legitimate contributor (see the sketch below).
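The article does not publish the team's actual features or code, but the general approach it describes, combining language-level signals with reviewer-interaction signals, might look roughly like the following Python sketch. Every field name, phrase list and threshold here is a hypothetical illustration, not the KU models.

```python
# Illustrative sketch only: the feature names, the phrase list and the profile
# fields (n_reviews, n_replies, account_age_days) are assumptions, not the
# researchers' actual design.
from collections import Counter
import re

SUSPECT_PHRASES = {"best ever", "highly recommend", "five stars", "waste of money"}

def text_features(review_text: str) -> dict:
    """Simple language-level signals of the kind used to flag dishonest reviews."""
    lowered = review_text.lower()
    words = re.findall(r"[a-z']+", lowered)
    counts = Counter(words)
    total = max(len(words), 1)
    return {
        "length": total,
        # Heavy repetition of the same word can indicate templated or copied text.
        "repetition": max(counts.values()) / total if counts else 0.0,
        # Overuse of stock promotional phrases.
        "stock_phrases": sum(phrase in lowered for phrase in SUSPECT_PHRASES),
        # Share of first-person pronouns; unusual rates can be a weak signal.
        "first_person": (counts["i"] + counts["my"] + counts["we"]) / total,
    }

def reviewer_features(profile: dict) -> dict:
    """Interaction-level signals: how active and established the account is."""
    n_reviews = profile.get("n_reviews", 0)
    return {
        "review_count": n_reviews,
        "reply_count": profile.get("n_replies", 0),
        # Burstiness: many reviews in a very young account looks suspicious.
        "reviews_per_day": n_reviews / max(profile.get("account_age_days", 1), 1),
    }

# Example: a gushing review from a three-day-old account scores high on
# repetition, stock phrases and reviews_per_day.
features = {
    **text_features("Best ever! I highly recommend it. Best ever, five stars."),
    **reviewer_features({"n_reviews": 40, "n_replies": 0, "account_age_days": 3}),
}
print(features)
```

Signals like these would feed the computational models described above rather than serve as a verdict on their own.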
"Studies in this area are rather fragmented in that they tend to focus mainly on discipline-specific aspects," Seo said. "We are hoping to develop computational models that take into account interactions between sociological, psychological and technological factors."
The research will help inform policy as well as commerce. The Federal Trade Commission has guidelines in place addressing how businesses and consumers use social media: online endorsements and reviews must be truthful and not misleading, and reviewers and marketers are required to disclose any relationships that might affect their judgment. New media and technologies have given people new ways to skirt these regulations, and the research findings will help address such efforts. There are regulations and penalties for deceptive or improper online marketing, but many gray areas remain, the researchers said.
"Machine learning methods are typically used in detecting fake reviews. The existing work considers various features such as rating distortion, the sentiment of the reviews, bursts in time domain and more," Li said. "However, the machine learning-based solution highly depends on the features extracted from suspicious activities, and the adversaries may perform the attacks in a more deliberate way and mask the traces carefully to evade from the detection. Therefore, it is important to develop more robust detection algorithms."
In addition to analyzing data and developing models that social media and online review sites can use to improve their services, the researchers will hold workshops on cybersecurity and digital literacy for policy makers, develop a course in computational journalism and seek additional funding from leading online review sites in the United States, China and South Korea.