The self-appointed censors

First, a commercial: Some readers of this blog get links from Facebook. Some get them from Twitter. Some get them via email. Some have the blog bookmarked in their favorite browser.

Why should you sign up for email delivery? Because that way you won’t risk being unable to read it amid what some claim is a purge of conservatives from social media.

First, Robby Soave:

Last week, the Poynter Institute for Media Studies, a non-profit journalism and research organization, published a list of 500 unreliable news websites. But the list, which included many conservative news and think tank websites, was itself unreliable, and Poynter has since retracted it.

“Soon after we published, we received complaints from those on the list and readers who objected to the inclusion of certain sites, and the exclusion of others,” explained Poynter editor Barbara Allen in a statement. “We began an audit to test the accuracy and veracity of the list, and while we feel that many of the sites did have a track record of publishing unreliable information, our review found weaknesses in the methodology. We detected inconsistencies between the findings of the original databases that were the sources for the list and our own rendering of the final report.”

How exactly the list found its way onto the Poynter website in the first place is a bit of a mystery. Poynter confirmed that its author, Barrett Golding, is a freelancer rather than an employee, but did not answer other questions about the process of greenlighting this project.

Golding’s LinkedIn account lists him as a freelance podcast producer for the Southern Poverty Law Center. The SPLC did not respond to my questions about whether other SPLC staff had any influence or involvement over the list. Golding did not immediately respond to my request for comment, either. According to his Twitter feed, he works with the SPLC’s “Teaching Tolerance” project. He was formerly a research fellow at the Donald W. Reynolds Journalism Institute and a producer for NPR.

It’s worth trying to understand these connections because Poynter’s retracted list of news sites was shoddy and overly broad in a manner reminiscent of the SPLC’s own work on tracking hate groups. As I explained in a recent piece for Reason detailing the group’s personnel issues, the SPLC tallies hate groups in a manner that suggests hate is always rising, even if it’s not:

According to the SPLC’s hate map, there were more than 1,000 hate groups in the U.S. in 2018—nearly twice as many as existed in 2000. The number has increased every year since 2014.

The map is littered with dots that provide more information on each specific group, and this is where the SPLC gives away the game. Consider a random state—Oklahoma, for example, is home to nine distinct hate groups, by the SPLC’s count. Five of them, though, are black nationalist groups: the Nation of Islam, Israel United in Christ, etc. The SPLC counts each chapter of these groups separately, so the Nation of Islam counts as two separate hate groups within Oklahoma (its various chapters in other states are also tallied separately). The map makes no attempt to contextualize all of this—no information is given on the relative size or influence of each group.

Additionally, the SPLC takes a very broad view of what constitutes hate: It considers the Alliance Defending Freedom, a legal group that defends religious liberty, as an extremist organization. It claims that American Enterprise Institute scholar Charles Murray is a white nationalist.

The Poynter list made similar errors. It included InfoWars (a literal conspiracy site) but also conservative news websites like The Washington Examiner, National Review, and The Washington Free Beacon. These sites get things wrong from time to time, but so do mainstream and left-of-center news sources. (Indeed, this entire episode is a prominent example of a mainstream source making a mistake.) But those publications are not misleading in the same sense that Alex Jones is misleading.

Poynter has done some good work in the past. Moving forward, it should be more careful about outsourcing its fact-checking to people who work for the SPLC.

Dan O’Donnell posted this last week:

Wisconsin Conservative Union, a popular Facebook group for conservatives in the state, was apparently taken offline during Facebook’s targeting of offensive personalities and fan pages Thursday.

“I was surprised by this,” said Wisconsin Conservative Union administrator Bob Dohnal, who said on The Dan O’Donnell Show that a friend called him Thursday night to let him know that his page had vanished. “It’s about 2,000 of the conservative leaders around this state. Nobody is talking about revolution or anything like that. It’s just been a place where everybody can exchange ideas and talk about candidacies and stuff.”

Dohnal added that he never received any warnings about any of the group’s posts or any notice that it had violated Facebook standards. He isn’t even sure if the group has been suspended or permanently removed from Facebook. He merely logged on and found it was gone.

On Thursday, Facebook permanently banned a number of fringe right-wing figures, including Alex Jones, Laura Loomer, and Paul Nehlen in addition to Nation of Islam leader Louis Farrakhan. Dohnal doesn’t know why (or even if) his group was lumped in with and removed alongside them.

“There’s never been anything [posted] against gays or anyone of any race or sex or anything like that,” he said. “If there were, I would take them off right away.”

As of the publication of this article, Dohnal was trying to contact Facebook to determine why Wisconsin Conservative Union was pulled.

As of Wednesday, however, the site is back on Facebook.

What about Twitter? Michael Van Der Galien reports:

Last weekend, conservatives discovered that both Twitter and Facebook had launched a grand purge of nationalist-populist (as they prefer to call themselves) users. Alex Jones, Milo, Paul Joseph Watson, Tommy Robinson, and Laura Loomer were all targeted, albeit not all by the same social media platform at the same time. The bans and suspensions inspired Human Events editor Raheem Kassam to predict that there were more waves to come.

This prediction is right on the money: Monday night, several other right-wing accounts were banned. Among them were a parody account of Alexandria Ocasio-Cortez, Jewish conservative @OfficeOfMike, and even the @MAGAphobia account, whose admin was Jack Posobiec (the same Jack Posobiec mentioned by Kassam in his tweet about who’d be targeted next).

In its explanation of the ban on the AOC parody account, Twitter pretended that the account’s name and bio didn’t make clear that it was indeed a parody. However, that’s not true at all.

As former New York Mayor Rudy Giuliani rightly puts it, these purges are nothing less than censorship.

It’s clear: liberal Silicon Valley has picked a side in the upcoming 2020 elections. All those who dare disagree risk losing their accounts and therefore their audience.

This issue is, to use a word I hate, problematic. To no one’s surprise, Facebook is trying to have it both ways, as Jane Coaston reports:

For years, social media giants tried to avoid the question altogether, recognizing that under American law, digital platforms have unique protections that guard against lawsuits aimed at the content posted on those platforms. But users complained about extremism and misinformation weaponized on Facebook and elsewhere, putting Facebook, Twitter, and other tech companies under immense pressure to increase moderation and close the accounts of bad actors — the same way a publisher might reject an article or a writer.

In doing so, they’ve gotten sucked into the political fray they wanted to avoid. Conservatives, pointing out that Facebook and Twitter are self-described platforms, are arguing that banning some users while permitting others based on a “vague and malleable” rubric is infringing on free expression on sites that they view as more like a town square where all voices should be heard. …

Infowars is a publisher. Alex Jones, who has been the publisher and director of Infowars since its launch in 1999, can publish what he wants on it. If I pitched Alex Jones on an article for Infowars, he would be under no obligation whatsoever to publish it.

Amazon Kindle is a platform, which means Amazon provides the means by which to create or engage with content, but it doesn’t create most of the content itself — or do a lot of policing of it. If I wanted to read Mein Kampf on my Amazon Kindle, Amazon would be unable to stop me from doing so.

An even better example of a platform might be a company like Verizon or T-Mobile, which provides software and the network for you to make phone calls or send texts, but doesn’t censor your phone calls or texts even if you’re arranging to commit a crime.

But for Facebook, and all similar social media sites, this seemingly dense legal question is deeply important — for how it treats figures like Jones and Farrakhan, and even how Facebook arbitrates speech at all.

If Facebook is a platform, it then has legal protections that make it almost impossible to sue over content hosted on the site. That’s because of Section 230 of the Communications Decency Act, which protects websites like Facebook from being sued for what users say or do on those sites.

Passed in 1996, the act reads in part, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Back in 2006, the act protected the website MySpace when it was sued after a teen met an adult male on the site who then sexually assaulted her. The court found that the teen’s claims that MySpace failed to protect her would imply MySpace was liable for content posted on the site — claims that butted up against Section 230.

But if Facebook is a publisher, then it can exercise editorial control over its content — and for Facebook, its content is your posts, photos, and videos. That would give Facebook carte blanche to monitor, edit, and even delete content (and users) it considered offensive or unwelcome according to its terms of service — which, to be clear, the company already does — but would make it vulnerable to the same types of lawsuits as media companies are more generally.

If the New York Times or the Washington Post published a violent screed aimed at me or published blatantly false information about me, I could hypothetically sue the New York Times for doing so (and some people have).

So instead, Facebook has tried to thread an almost impossible needle: performing the same content moderation tasks as a media company might, while arguing that it isn’t a media company at all.

Facebook is trying to have its cake and eat it too

At times, Facebook has argued that it’s a platform, but at other times — like in court — that it’s a publisher.

In public-facing venues, Facebook refers to itself as a platform or just a “tech company,” not a publisher. Take this Senate committee hearing from April 2018, for example, where Facebook CEO Mark Zuckerberg argues that while Facebook is responsible for the content people place on the platform, it’s not a “media company” or a publisher that creates content.

But in court, Facebook’s own attorneys have argued the opposite. In court proceedings stemming from a lawsuit filed by an app developer in 2018, a Facebook attorney argued that because Facebook was a publisher, it could work like a newspaper — and thus have the ability to determine what to publish and what not to. “The publisher discretion is a free speech right irrespective of what technological means is used. A newspaper has a publisher function whether they are doing it on their website, in a printed copy or through the news alerts.”

And even the language Zuckerberg has used about Facebook when appearing before Congress, as he did last spring, shows that he thinks of the service as a publisher — while his company simultaneously argues that it’s not.

In his opening statement to committee members, Zuckerberg said, “We didn’t take a broad enough view of our responsibility, and that was a big mistake. It was my mistake, and I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here.” And then he added, “I agree we are responsible for the content” on Facebook, while noting again that Facebook doesn’t produce content itself.

Why the line between platform and publisher matters

Facebook is far from alone in attempting to walk an almost impossible line between responding to users’ demands for moderation and editing while attempting to avoid the legal responsibilities of being a publisher.

Take Tumblr’s recent ban on nudity, Twitter’s continued back-and-forth on suspending and banning extremist users, Facebook’s recent efforts to curtail misleading ads that may have contributed to misinformation surrounding the 2016 presidential campaign: All of these moderating efforts are attempts to get out ahead of users who are dismayed by a constant cavalcade of bad actors and bots that make these sites less enjoyable to use (and less profitable for ad companies that post on these platforms, and thus, for the platforms).

And with the threat of impending regulations arising from European courts where American digital media protections don’t exist, Facebook is keener than ever to stay within the good graces of American users — and politicians.

So companies like Facebook, YouTube, and Tumblr are trying to be more, as Zuckerberg put it, “responsible.” But that’s landed them in a supercharged political environment, drawing the ire of the figures they’ve deemed dangerous and many others. For companies like Facebook, they’re damned if they do moderate content — both legally and politically — and damned if they don’t. …

Facebook wants to enjoy the benefits of being a content publisher — major moderation and editing powers along with the power to ban users for whatever reasons it wants — while also accessing the legal freedoms that come with being a platform under American law. And right now, Facebook is basically a publisher that keeps arguing that it isn’t.

That muddy legal territory has people worried that the social media giant will fail on both accounts — that it won’t handle material on its site as responsibly as a media outlet might, but will also stop providing an online “town square” where controversial voices can be heard.

Since Facebook is now apparently reviewing the actions of users even when they’re not on Facebook, some are arguing that the stated terms of service that should dictate what’s permitted on Facebook and Instagram don’t do so in reality. That’s why organizations focused on digital civil liberties are just as concerned about Facebook’s decisions as some on the right.

Jillian York, an Electronic Frontier Foundation director, said in a statement, “Given the concentrated power that a handful of social media platforms wield, those companies owe their users a clear explanation of their rules, clear notice to users when they violate those rules, and an opportunity to appeal decisions.”

In April 2018, Facebook launched the “Facebook Here Together” campaign, stating that Facebook would “do more to keep you safe” from privacy violations and seemingly from bad content.

But that’s the role of a publisher — one that Facebook has argued time and time again that it doesn’t have. And that’s a big, big problem for the world’s most powerful social media company.

Yael Ossowski points out an unintended consequence:

Banning fringe voices from social media networks may be popular among tech and political elites, but it will only further embolden the people with truly dangerous ideas.

The fresh wave of censorship is driven by the reaction to the deranged terrorist who, motivated by very bad ideas, opened fire on peaceful worshippers at mosques in Christchurch, New Zealand, in March, killing 51 people and leaving 41 injured.

He livestreamed the entire rampage, peppering his deadly killing spree with commentary and phrases found on seedy online chat rooms and websites.

Political leaders in western nations want global regulations on the social media platforms used by the shooter, which you or I use every day to communicate with our friends and family.

In the rush to prevent another attack, however, we should be wary of any crackdown on social media and Internet freedom. Such crackdowns are the tools of dictatorships and autocracies, not freedom-loving democracies.

But penalizing social media companies and their users for a tragic shooting that took place in real life shifts responsibility away from the individual accused of this attack, and seeks to curb everyone’s internet freedom because of one bad actor.

What’s more, trying to play whack-a-mole with bad ideas on the internet in the form of bans or criminal liability will only embolden the seediest of platforms while putting unreasonable expectations on the major platforms. And that leads us to miss the point about this tragedy.

Social media platforms like Facebook or Twitter already employ tens of thousands of moderators around the world to flag and remove content like this, and users share in that responsibility. It will be up to these platforms to address concerns of the global community, and I have no doubt their response will be reasonable.

But on the other hand, this tragedy occurred in a context in which Big Tech is already being vilified for swinging elections, censoring the speech of conservatives, and not reacting quickly enough to political demands about which content should be permissible.

As such, we are set to hear anti-social media proposals that have very little to do with what happened on that tragic day in Christchurch in idyllic New Zealand.

Australian Prime Minister Scott Morrison wants the G20 to discuss global penalties for social media firms that allow questionable content. Democrats like Sen. Elizabeth Warren, along with many congressional Republicans, want to use antitrust regulations to break up Facebook.

A recent national poll found that 71 percent of Democratic voters want more regulation of Big Tech companies.

In the wake of a tragedy, we should not succumb to the wishes of the terrorist who perpetrated these attacks. Overreacting and overextending the power of our institutions to further censor and limit online speech would be met with glee by the killer and those who share his worldview. Reactionary policies that shut these voices out, so they cannot read or listen to alternative views, will only embolden them and make the internet a seedier place.

Many individuals and companies are now fully reliant on social media platforms for connecting with friends, attracting customers or expressing their free speech. They are overwhelmingly a force for good.

Yes, internet subcultures exist. Most of them, by definition, are frequented by very small numbers of people who are marginalized. But clamping down on social media will only radicalize this minority in greater numbers, and maybe lead to more blowback.

Cooler heads must prevail. Social media does more good than harm, and we cannot use the actions of a fraction of a minority to upend the experience for billions of users.

We can use these tools to condemn and prevent extremist ideas and behavior, rather than resorting to the force of law or outright bans of controversial figures who make convenient targets.
