Internet free speech: We’re doing it wrong
By Andrew Couts  —   August 7, 2013

When you ask the kind of geeks who sit around pondering the value of our connected digital world what makes the Internet so great, one answer always pops up: Openness and free speech. Just look at how much access we have to the world’s information, they say. Look at how free speech has thrived and spread throughout most of the globe. The Arab Spring! Occupy Wall Street! “Breaking Bad” episode recaps! And I agree, all those are great examples of the way the Internet has made life better for countless people. But there is a sad fact about our newfound ability to disseminate whatever we want, anytime we want: We just aren’t very good at it.

The most poignant recent example of this comes from the New Yorker’s Ariel Levy, who took a deep dive into the Web’s role in the case of the Steubenville, Ohio, football players, two of whom were found guilty earlier this year of raping an intoxicated teenage girl from West Virginia.

Levy’s excellent reporting will undoubtedly evoke sickening outrage at those who were responsible for violating a young girl. But it does something else, too: It shows just what happens when we, the couched commentators of the Web, try to take matters into our own hands – often acting on bad information.

This excerpt from the long-read piece highlights the problem:
In trying to determine what happened in Steubenville, the police and the public began with the same information, gathered from the same online sources: ugly tweets, the Instagram photograph, and a deeply disturbing video. But while the police commandeered phones, interviewed witnesses, and collected physical evidence from the crime scene, readers online relied on collaborative deduction. 
The story they produced felt archetypally right. The “hacktivists” of Anonymous were modern-day Peter Parkers—computer nerds who put on a costume and were transformed into superhero vigilantes. The girl from West Virginia stood in for every one of the world’s female victims: nameless, faceless, stripped of identity or agency. And there was a satisfying villain. Teen-age boys who play football in Steubenville—among many other places—are aggrandized and often do end up with a sense of thuggish entitlement. 
In versions of the story that spread online, the girl was lured to the party and then drugged. While she was delirious, she was transported in the trunk of a car, and then a gang of football players raped her over and over again and urinated on her body while her peers watched, transfixed. The town, desperate to protect its young princes, contrived to cover up the crime. If not for Goddard’s intercession, the police would have happily let everyone go. None of that is true.
That’s right – none of that is true. And yet, in the real-time frenzy of Twitter, Facebook, and blog comment sections, we have a culture in which the nitty-gritty truth does not matter, as long as the overall narrative of any given story is right. And from that flimsy platform, we spring forward with threatening or derogatory words directed at whomever we believe to be the villains.

This blind beast of online fury reared its ugly head in the aftermath of the Boston Marathon bombing. Reddit and Twitter users mistakenly identified Sunil Tripathi, a Boston-area college student who had been missing for a month, as one of the possible terrorists.

Law enforcement authorities quickly cleared Tripathi’s name. But, as The New York Times recently reported, it was not nearly quick enough to spare the Tripathi family from the wrath of the Web. Not long after the bombing, Sunil Tripathi’s body was pulled from a river.

Most recently, we saw the Internet’s quick and dirty communication tools misused to threaten the life of Dave Vonderhaar, design director of Call of Duty: Black Ops 2, over a minor game update that had little impact on the game.

These are just a few notable, high-profile examples of how our use of free speech online has become tainted by a desire to be part of events or conversations to which we have little of value to add. Twitter and Facebook are littered with garbage comments and unjustified vitriol. Reddit is a cesspool of flash judgments about people or events, by users who think they know what’s right and what’s wrong better than anyone else.

None of this is to say people aren’t entitled to their opinion, or should keep their thoughts to themselves. Nor am I saying that the Web isn’t equally filled with good vibes and positivity – there is just as much of that as there is hateful ignorance and cruelty. But it seems as though the bad stuff has begun to float further towards the top.

What I am trying to say is that our collective online behavior in cases like Steubenville and Boston could eventually have negative effects on the amazing gift of broad free speech online.

First, the spewing of gut reactions to events degrades the value of our collective discourse to the point where what’s being said online contributes little to the overall conversation. If half of the tweets out there are filled with meanness and misinformation, we have taken a step backward, not forward.

Second, our propensity to jump into real-life events with real-life consequences without a full comprehension of either, as exhibited during the Steubenville and Boston fiascos, could lead to less openness in the offline world. Police and government officials may be less willing to reveal information for fear of an online witch-hunt. And victims, like the victim from West Virginia, may be less willing to come forward about crimes committed against them due to the possibility that thousands of Web users will hound them with cruel messages or worse.

In short, as our use of the Web and social media continues to evolve, we must not lose sight of both the power that these tools have, and the possibility that our abuse of them could destroy what we love about them.


William Shatner, Reddit, And The Complications Of "Free Speech" On The Internet
Whitney Phillips and Kate Miltner | February 12th, 2013

Whitney: Two weeks ago, William Shatner tweeted with Chris Hadfield, an astronaut stationed aboard the International Space Station. This resulted in the ENTIRE INTERNET BEING WON by Shatner, at least according to this Reddit thread. Apparently the 81-year-old Shatner got wind of the thread and promptly created an account. He then spent the next few days feeling out the platform and openly criticizing its most characteristic elements, namely Reddit's karma system, wherein points are given or deducted based on community feedback, and AMA ("Ask Me Anything") threads, which give celebrities and other notables an opportunity to interact with fans. Regarding the karma system, Shatner expressed outright confusion ("isn't the system basically broken?" he asked), and regarding AMAs, he wondered whether they were meaningful for anyone but the people who happened to be sitting in front of the computer as the conversation unfolded. And anyway, he asked, "don't I do that daily on Twitter?"

Shatner then tackled a much meatier problem—Reddit's moderation policies, or lack thereof, as he lamented. Those policies are inextricably tied to the aforementioned karma system, in which "good" comments are rewarded with karma points and increased visibility while "bad" comments are downvoted and essentially run out of town. Shatner was appalled by what often passes as "good" commentary on Reddit ("good," here, translating to "most popular/most upvoted," not necessarily "positive"), namely rampant racism, sexism and homophobia. "The fact that someone could come here, debase and degrade people based on race, religion, ethnicity or sexual preference because they 'have a right' to do so without worry of any kind of moderation is sending the wrong message, in my humble opinion," he wrote.

Kate: I guess this whole thing just proves that William Shatner is a Rocket Man, burning up his fuse out there, alone. WILL-IAM! SHAT-NER!

(Sorry.)

Seriously though, good for Shatner. Judging from the comments that followed, he said what a lot of people have been saying or have wanted to say: that this sort of noxious speech and content is not acceptable, and that it shouldn't be tolerated by the moderators.

Whitney: My interest in Shatner's comments is twofold. The first reason is good old-fashioned Schadenfreude, because how do you like that nerd apple, Reddit (a breakdown of my feelings about the site can be found here). The second is much more reflective, since Shatner's argument—which takes for granted that Reddit as a whole (meaning all its mods and admins) is responsible for, as they say, the shit Reddit says—dredges up a number of larger questions, particularly about the ideal relationship between platform user(s) and platform moderator(s). What is that ideal relationship? Do users have a "right" to "free speech" on privately owned platforms, as many Redditors insist between rape jokes? Do platforms have a responsibility to shut that sort of content down before it ingrains itself in the site culture?

Kate: Ah, responsibility. What a loaded word! Okay, so first of all, if we're going to talk about the site culture, we need to look at its roots. Reddit is a platform that is largely based in the libertarian ethos of the early web. Free Speech At All Costs is a fundamental precept of that ethos, and for better or worse, I think that's what's fueling a lot of these claims of FREE SPEECH!!11. Just to provide a little bit of context: in 1996, John Perry Barlow (one of the founders of the Electronic Frontier Foundation) wrote A Declaration of the Independence of Cyberspace. In it, he declared:

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity… In our world, all the sentiments and expressions of humanity, from the debasing to the angelic, are parts of a seamless whole, the global conversation of bits. We cannot separate the air that chokes from the air upon which wings beat.

So, if you come at it from this perspective, you cannot separate "good" speech from "bad" speech, and as such, both are protected (or should be).

The thing is, as much as some Redditors may want to claim otherwise, Reddit is not The Internet. Reddit is a privately owned platform that can decide what sort of user-generated content will or will not be tolerated. Legally, Condé Nast (which owns Reddit) can do whatever it wants to control what is posted on the site (which seems like not much, because pageviews, probably). The question that Shatner's comments raise is whether or not it should.

Whitney: And yet the "free speech" issue lingers (scare quotes used to differentiate the legal sense of the term from the cartoon internet sense of the term), which is best summarized by the concurrent assertions that "you are not the boss of me" and "don't tell me what to do." This card is most frequently played by those who think they should be able to do or say whatever they want whenever they want, often at the expense of women, gays and lesbians, and people of color, because… well because free speech (I'm looking at you, Men's Rights-types). In response to Shatner's posts, many Redditors either directly reiterated this position, or argued a slightly more nuanced version of the same thing, namely that imposing some external morality police (for example by assigning paid moderators to each specific subreddit) would undermine the very spirit of the site, which is based on, you guessed it, "free speech."

To be fair, as several Reddit users pointed out, Reddit is already subject to some on-site moderation. Reddit is a self-moderating community; community members police their own borders through upvoting and downvoting (which, again, is designed to reward the good and punish the bad, at least what passes as "good" and "bad" on that particular subreddit). Furthermore, volunteer moderators, who are also members of the community, can intervene when other users cross whatever behavioral or ethical line the subreddit has deemed acceptable/desirable (a line that is often drawn through karmic upvoting/downvoting—what the community accepts is the content the community consistently upvotes). In other words, Reddit is designed to self-regulate; so long as individual subreddits are allowed to decide what's appropriate for themselves, we should be good.
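
Purely as an illustration of that mechanic (and not Reddit's actual ranking code), here is a minimal, hypothetical Python sketch of a comment thread in which visibility is decided solely by net community votes; the comments and vote counts are invented:

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net community approval: the only signal this toy model uses.
        return self.upvotes - self.downvotes

def visible_comments(thread, limit=3):
    # Sort by net score: whatever the crowd upvotes rises to the top,
    # whatever it downvotes is effectively "run out of town."
    return sorted(thread, key=lambda c: c.score, reverse=True)[:limit]

thread = [
    Comment("thoughtful critique", upvotes=40, downvotes=5),       # score 35
    Comment("popular but ugly 'joke'", upvotes=90, downvotes=30),  # score 60
    Comment("unpopular dissent", upvotes=3, downvotes=25),         # score -22
]
for c in visible_comments(thread):
    print(c.score, c.text)

The point of the sketch is simply that "good" here means "most upvoted," not "most decent"; nothing in the mechanism itself distinguishes the two.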

Kate: Right, but what subreddits often decide is "appropriate" is the type of content that a lot of other people find offensive, and Reddit's boundaries are porous—the site is set up for community-sourced discovery. We talked about this issue of competing norms and mores last time around with our public shaming discussion; what some people find acceptable (or entertaining) can be completely offensive to others. Taste and opinion aside, you're also dealing with a medium where it's relatively easy to misinterpret context, or intent, or any number of things that give content meaning within a particular community.

As redditor Morbert said in the comments, "Reddit isn't a single community. It is a variety of communities, for better or for worse." That makes things incredibly tricky. The truth is, there are some racist, homophobic, misogynist jerks out there who think that there is nothing funnier than a rape joke (ugh). These people are going to congregate somewhere—and I hate to say it, but that is their right—and I mean that in a constitutional sense: it is totally the legal right of gross bigots to hang out on a message board and make disgusting jokes about anyone who is not white and male, because that is what our laws allow.

The other thing I wanted to bring up is why we are even talking about this in the first place. I mean, who cares whether or not Reddit is tolerant of this sort of stuff except for people who hang out on Reddit? Why is this even a story to begin with? Well, it's a story because Reddit is an influential platform—influential enough that President Obama's campaign staff thought it would behoove him to do an AMA. So the reason that this matters is because one of the most influential and highly-trafficked sites on the internet is also a site that hosts a lot of content that demeans and insults the majority of the US population.

Whitney: That Reddit has been plagued with, let's call them, "behavioral issues" isn't surprising; those issues are built into the platform, which is in turn built into the overall ethos of the site. This is not to say that all Reddit users are Violentacrez clones, or that all subreddits are gross—many users are extremely thoughtful (you can see that in one of Shatner's follow-up threads; scroll down to see a handful of Redditors grappling with many of these same issues). Take these comments, for example:

From Redditor oxynirate:

As a girl on reddit I get really upset and disheartened about the amount of sexist bull I see on here. It's not just sexist crap, it's down right hypocritical. One day you'll see an article on the front page about men protesting rape, and the comments will be all about how they would never commit a rape and are super anti rape. Until someone goes in there and posts about their being raped. They get called liars, told they put themselves in that situation and so on. I had one guy tell me I wasn't raped because I gave up protesting, fighting back and saying no. He said persistence doesn't equal rape.

And from BottleRocket2012 (in response to the claim that Reddit isn't a single community, and that for better or worse it is comprised of many smaller communities):

I think this is the frequent reddit response. But the reality is all the racist "humor" that makes the front page is upvoted by the community at large and these millions of people aren't rotated every day. The other reality is everyone reading William Shatner's posts thinks he is talking about other people. I mean "OP is a faggot" is a hilarious meme and he isn't getting the inside joke. And besides a gay person said "I'm gay and I find this hilarious" so now in the mind of a redditor this isn't hating. No racist ever thinks they are, everyone creates a wall of bullshit to believe what he is doing is ok.

And smaller subreddits—a subreddit devoted to the staggering variety of topics covered by other subreddits can be found here—are much less likely to be overrun by violent sexism (except for subreddits devoted to violent sexism, oh for example /r/beatingwomen, which has nearly 34K subscribers).

That said, the site as a whole is undergirded by a basic kind of libertarian permissiveness. In a perfect world, one that has achieved gender, racial and sexual equality, and in which all voices are equally represented, this sort of permissiveness might be enough to ensure a stable, healthy, self-regulating platform. But this is not a perfect world; you can't hand a bunch of racists, misogynists and homophobes the keys to the castle and then reasonably expect them to deny entry to other racists, misogynists and homophobes (for a visual representation of this idea, consider the following infographic from Modern Primate). So what to do? The obvious answer is to take away the keys and hand them to someone who doesn't stand to benefit from all that permissiveness. Someone who couldn't give two shits about the karma they stand to win or lose. In other words, you start moderating. And not just moderating, but culling the very worst offenders. Extreme, maybe, but so is /r/beatingwomen. (And yes, I am fully aware of the practical complications of iron-clad moderation policies, namely that lots of moderation requires lots of paid employee labor, and furthermore that said labor is often actively thwarted by those with mayhem in their hearts. I am also aware that ban-happiness flirts with a whole new set of problems, usually having to do with the mod's personal bias. Still, I present to the jury the basic failings of Reddit's current moderation model, and suggest that what they're doing now doesn't work, and merely opens the door for all kinds of abuse.)

Kate: Yes, ugh. God. And /r/rapingwomen and /r/killingwomen. Technically, all of those fit the definition of hate speech, which means that they go from offensive to (borderline? technically?) illegal, which is another issue, really. The majority of racist and sexist content/commentary on Reddit—the commentary that Shatner was referencing— isn't that extreme (thank god). And this is where I get squirmy about culling—because it's all so relative and context-dependent ("I said 'OP is a faggot' sarcastically to point out its innate homophobia and call everyone else out on their bigotry, look at my previous trail of comments"). I've already said this elsewhere but: who moderates whom is a major issue. Who gets to decide what is acceptable and what is offensive? That is such a slippery slope—you cull (or dare I say censor) one thing, and then where does it stop? The path to hell is laid with good intentions, etc, etc.

The other thing is that having this stuff out in the open might not entirely be a bad thing, as upsetting as it might be (just stick with me for a second). There are a growing number of people out there who think that we're in a post-racial, post-gender, post-whatever world, and that racism and sexism aren't as problematic as they used to be (AHAHAHA, HA HA HA HA). The more that blatantly prejudicial/bigoted/hateful expression is pushed to the margins, the easier it will be for certain people to be like, "What do you mean, racism and sexism are problems? Oh, THOSE crackpots on a weird site no one has heard of? Whatever, they're just a minority. CHECK MAH SOCIAL PROGRESS." I'd like to point out that you and I wouldn't be talking about this right now if these comments were being published on I'mARacist.com; we are only talking about it because it's on Reddit.

As you've noted previously, shaming (or in this context, moderating) ignorant people isn't going to change their fundamental beliefs. They'll just end up taking their isht elsewhere—and that may clean up the tone/content on Reddit/create a filter bubble for offensive content on major platforms, but it won't eliminate the underlying problem. It is absolutely essential that we (as a society, as individuals, as academics, as people who publish their opinions on websites) keep talking about this, frequently and publicly. Otherwise, these beliefs (which are not going away anytime soon) will become (further) silently institutionalized, which is arguably more difficult to combat.

Whitney: Yes, if you give a mouse a cookie, he'll want you to ban the word Christmas from all public-school functions (it's actually not a bad idea). The problem I've always had with that argument—if we start censoring some of the things, what will stop us from censoring ALL of the things??—is that it essentially plays on a person's fear of being silenced, not their sense of basic human decency. In short: this person is being censored for their beliefs. You don't want to be censored for YOUR beliefs, do you?? Then you better defend with your life other Redditors' right (which isn't actually their right, as they're posting to a privately owned website) to post incendiary, unnecessary, completely unproductive bile all day, because "free speech."

In other words, the argument that selective censorship can only lead us down a path to fascism often does little more than lull everyone else into complicity, and therefore functions as preemptive self-censorship. You are encouraged to hold your tongue when you see something upsetting, because maybe next time you'll be the one whose speech is under the microscope. This is a problem, because some people need to be told to SHUT UP, particularly when their speech interferes with their audience's basic human right—what should be a basic human right—not to be constantly inundated with violently racist, sexist, homophobic, pedophilic or otherwise ignorant bullshit every time they go online. On Reddit, there are ways of shutting the most egregious content down; but in order for that to happen, some people (ahem, white dudes) have to be willing to acknowledge that the "free speech" to which they so desperately cling actually costs quite a bit, a point with which Reddit's managers and investors would also have to make peace. Because banning bigots would mean less traffic, and less traffic would mean less money. And wouldn't that be a shame. Which is not—I repeat, is not—an argument against offensiveness generally. Nor is it an argument against all forms of dissent or discomfort, both of which can be quite generative. This is an argument against what is already dead cultural weight. Nobody benefits from keeping it around, except maybe the websites themselves. But even then, it's not so much "benefit" as "profit."

Kate: I'm not arguing against solid moderation policies on Reddit or anywhere else. Those are private sites that can dictate the tone of discourse however they see fit. However, if we're talking about people shutting their mouths in the larger sense, I don't know if I agree. Speech—and who is listened to—is often about power and access. If restrictions on speech are put in place—even with the aim of helping marginalized groups—I worry that they will end up backfiring. Those who are used to having power are awfully good at figuring out ways to circumvent things to ensure it's business as usual.

A few months ago, social media scholar danah boyd wrote an excellent blog post about the nature of freedom of expression in a cross-national, online context. She was discussing the uproar over The Innocence of Muslims and the racist MTA campaign by the American Freedom Defense Initiative. Different case studies, same issues. She wrapped up the whole problem pretty neatly, so I'm just going to end it with a quote from her:

I think that we need to start having a serious conversation about what freedom of speech means in a networked world where jurisdictions blur, norms collide, and contexts collapse. This isn't going to be worked out by enacting global laws nor is it going to be easily solved through technology. This is, above all else, a social issue that has scaled to new levels, creating serious socio-cultural governance questions. How do we understand the boundaries and freedoms of expression in a networked world?

How, indeed.
