Free Speech in the Age of Algorithmic Megaphones
Source: Renee DiResta


Yesterday Facebook took down 559 domestic political pages and 251 accounts for violating its terms of service on coordinated inauthentic behavior—“networks of accounts or Pages working to mislead others about who they are, and what they are doing.” While Facebook has been frequently critiqued for hosting and inadvertently aiding foreign disinformation campaigns, this is the first time a collection of domestic political pages has raised flags.

Yet researchers who have watched the evolution of disinformation campaigns over the years have wondered when this difficult reckoning would start. Coordinated campaigns have never been the sole purview of outsiders. And yet, particularly in the United States, an aversion to anything that resembles censorship has resulted in a sustained reluctance to reckon with the implications of mass manipulation on our public discourse. As Americans, we have deeply held beliefs about freedom of speech—that more speech, forthrightly shared in the marketplace of ideas, ensures that the best ideas will rise after a participatory, healthy debate.
But the debate we have today is neither healthy nor participatory. For many years now, automated bot armies have artificially amplified perspectives and manipulated trending algorithms. These small, coordinated groups have deliberately gamed algorithms so that a handful of voices can mimic a broad consensus. We’ve seen online harassment used to scare people into self-censorship, chilling their speech and eliminating those perspectives from the debate. Fake likes, shares, comments, and retweets trigger algorithms into thinking that a piece of content is worthwhile or interesting, leading to that content appearing in the feeds of millions. When viewed holistically, these manipulative activities call into question the capacity of social media to serve as a true marketplace of ideas—and this is not a new concern.

Our political conversations are happening on an infrastructure built for viral advertising, and we are only beginning to adapt.

For a while, conveniently, the conversation about manipulation focused on Russia, a foreign antagonist with a remarkable talent for mimicking American speech and commandeering American narratives. Russia’s long history with disinformation and propaganda tactics, coupled with its historical role as an adversary, made it easy to pretend that the problem began and ended with them. Unfortunately, that has never been true.

Domestic practitioners, many of whom have long walked the line between guerrilla marketer and unethical spammer, are now finding themselves on the wrong side of terms of service enforcement originally aimed at terrorist groups and foreign spies. These Facebook takedowns indicate an effort to create a quantifiable framework for detecting and managing manipulative patterns, whatever the source—and despite the risk of near-certain political blowback.

People who study online disinformation generally look at three criteria to assess whether a given page, account cluster, or channel is manipulative. First is account authenticity: Do the accounts accurately reflect a human identity, or a collection of behaviors that indicates they are authentic, even if anonymous? Second is the narrative distribution pattern: Does the message distribution appear organic and behave in the way humans interact and spread ideas? Or does the scale, timing, or volume appear coordinated and manufactured? Third, source integrity: Do the sites and domains in question have a reputation for integrity, or are they of dubious quality? This last criterion is the most prone to controversy, and the most difficult to get right.
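To make those three criteria concrete, here is a minimal, purely illustrative sketch of how they might be folded into a single risk score. Every signal name, weight, and threshold below is an assumption invented for illustration; it does not describe Facebook's detection system or any researcher's actual model.

```python
# Illustrative sketch only: a toy heuristic combining the three criteria
# described above (account authenticity, distribution pattern, source
# integrity). All field names, weights, and values are hypothetical.
from dataclasses import dataclass


@dataclass
class PageSignals:
    """Hypothetical aggregate signals for a page or account cluster."""
    share_of_automated_accounts: float  # 0-1: fraction of amplifying accounts flagged as inauthentic
    burstiness: float                   # 0-1: how tightly clustered in time the shares are
    domain_reputation: float            # 0-1: prior reputation of the domains being linked


def manipulation_score(signals: PageSignals,
                       weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Combine the three criteria into a single 0-1 score (higher = more suspicious)."""
    authenticity_risk = signals.share_of_automated_accounts
    distribution_risk = signals.burstiness
    source_risk = 1.0 - signals.domain_reputation
    w_auth, w_dist, w_src = weights
    return w_auth * authenticity_risk + w_dist * distribution_risk + w_src * source_risk


if __name__ == "__main__":
    # A cluster amplified mostly by automated accounts, posting in tight
    # bursts and linking to low-reputation domains, scores high on all three.
    suspicious = PageSignals(share_of_automated_accounts=0.8,
                             burstiness=0.9,
                             domain_reputation=0.2)
    print(f"manipulation score: {manipulation_score(suspicious):.2f}")
```

In any real system each of these inputs would itself be the output of substantial modeling, and the weight given to source integrity is precisely where the controversy over reputation scoring, discussed next, arises.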

Earned reputation governs much of how we assess the world, and reputation scores are a core element of how we fight spam. But reputation assessment systems, especially opaque ones, can be biased. This leaves takedowns open to critiques of politically motivated censorship, which allege that it's the subject of the content or the political alignment of the site that's the problem. While there is no evidence to support those critiques, they (ironically) frequently go viral themselves.

These recent takedowns will absolutely be politicized: The President will no doubt start tweeting about them, if he hasn’t already. Though Facebook took down pages across the political spectrum, the manipulators that have relied on gamed distribution and fake account amplification are not going to let their pages go without a fight. A robust complaint about censorship is the best tactic they have. It muddies the waters, equating the right to speech with the right to reach millions of people. It goads uncritical thinkers into defending not authentic free speech but the manipulation of speech.

More speech does not solve this problem. Without moderation, the web becomes an arms race in which every political conversation is a guerrilla marketing battle fought between automated networks pushing out content using any means necessary to capture attention. When it isn’t political speech, that’s called spam. It compounds the information glut and, ironically, makes users even more dependent on the curation algorithms that surface what people see—algorithms that are regularly called out for bias.

The only way to avoid politicized battles and conspiracies around moderation is transparency. As domestic accounts begin to be caught up in terms of service violations that impact their speech, platforms must be crystal clear about how these judgment calls were made. Since there will undoubtedly be false positives, they’ll also need a clearly articulated and transparent appeals process. But as we wade into this debate, it’s important to remember that censorship is the silencing of specific voices—or the silencing of a specific point of view—out of a desire to repress that point of view. That is not what’s happening here.

This kind of moderation, which we are likely going to see a lot more of, is viewpoint agnostic. It’s based on quantifiable evidence of manipulative activity. It’s the beginning of a series of hard decisions about how to balance the preservation of free speech ideals with the need to reduce the impact of all accounts—American included—that rely on manipulative tactics to warp our public discourse. We are long past the point of more speech solving the problem.



Renee DiResta (@noUpside) is an Ideas contributor for WIRED, the director of research at New Knowledge, and a Mozilla fellow on media, misinformation, and trust. She is affiliated with the Berkman-Klein Center at Harvard and the Data Science Institute at Columbia University.

