<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Catherine Buni | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2017-05-25T14:58:50+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/catherine-buni" />
	<id>https://www.theverge.com/authors/catherine-buni/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/catherine-buni/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Catherine Buni</name>
			</author>
			
			<author>
				<name>Soraya Chemaly</name>
			</author>
			
			<title type="html"><![CDATA[How do you fix Facebook’s moderation problem? Figure out what Facebook is]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2017/5/25/15690590/facebook-leak-report-moderation-broken" />
			<id>https://www.theverge.com/2017/5/25/15690590/facebook-leak-report-moderation-broken</id>
			<updated>2017-05-25T10:58:50-04:00</updated>
			<published>2017-05-25T10:58:50-04:00</published>
			<category scheme="https://www.theverge.com" term="Facebook" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[On Sunday, The Guardian published what it has dubbed the Facebook Files, leaked documents describing the rules Facebook has developed to govern what its 2 billion users are allowed to share publicly. The more than 100 internal documents reveal the chasm between the platform&#8217;s simple and anodyne Terms of Service or Community Guidelines and the [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/676178/P1010051.0.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>On Sunday, <em>The Guardian</em> published what it has dubbed the <a href="https://www.theguardian.com/news/series/facebook-files">Facebook Files</a>, leaked documents describing the rules Facebook has developed to govern what its 2 billion users are allowed to share publicly. The more than 100 internal documents reveal the chasm between the platform&rsquo;s simple and anodyne Terms of Service or Community Guidelines and the complex and granular moderation work that really takes place on the platform. Until now, there has been little information about how hidden teams of workers, assisted by expanding and experimental AI, regulate millions of violent, hateful, abusive, and illegal items every day.</p>

<p>Facebook&rsquo;s guidelines &mdash; which reflect mainstream laws and cultural norms &mdash; have proven woefully inadequate for addressing violent and hateful user-generated content. That&rsquo;s been true for a long time, and it is even more so today as images, both live and still, come to replace text as our dominant form of expression. Murderers and torturers in search of instant fame and glory take advantage of live-streaming and ranking algorithms to broadcast graphic crimes with the swipe of a thumb. Russian hackers steal US military &ldquo;revenge&rdquo; porn from secret Facebook groups and use it for political blackmail. Terrorists create private groups to trade ideas and cultivate violent ideation.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>Users post text, images, and more relentlessly on Facebook, where teams of thousands cannot scramble fast enough to scrub them off the site</p></blockquote></figure>
<p>Users post text, images, and more relentlessly on Facebook, where teams of thousands cannot scramble fast enough to scrub them off the site. Facebook, for its part, is releasing new tools that attempt to address the thorny fact that images and live video, which are much harder to control, are outpacing, and even replacing, text-based online speech. Clearly, Facebook is failing.</p>

<p><a href="http://community.bowdoin.edu/news/2017/03/the-willners-06-silicon-valley-pros-offer-peek-into-tech-culture/">During a visit</a> to his alma mater, Bowdoin, this spring, Dave Willner, who helped build Facebook&rsquo;s first speech guidelines and is now head of community policy at Airbnb, described the challenge of dealing with images in stark terms: &ldquo;[W]hile Facebook has technically hard problems&mdash;like it has to store two billion people&rsquo;s photos indefinitely for the rest of time&mdash;the hard problems those companies face are, what do you do about live video murder? In terms of consequences for the company and the societal impact, the hard problems are the human problems.&rdquo;</p>

<p>Despite years of warnings from academics, sociologists, and civic society advocates about the potential harm of unleashing technologies with minimal understanding of their impacts, social media companies unabashedly continue to espouse utopian visions. Tech powers continue to advertise products with promises of magic and awe. These products often come with little to no safety or privacy protocols against the potential for amplifying long-standing exploitation and violence. Facebook markets Facebook Live as &ldquo;a fun, engaging way to connect with your followers and grow your audience.&rdquo; That may be how the majority of users use the product, but a quick Google search of Facebook Live turns up pages of headlines about live-streamed suicide, murder, and rape.</p>

<p>&ldquo;Keeping people on Facebook safe is the most important thing we do. We work hard to make Facebook as safe as possible while enabling free speech,&rdquo; Facebook&rsquo;s Monika Bickert, head of global policy management, said in a statement responding to <em>The Guardian</em>&rsquo;s story. &ldquo;We&rsquo;re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards and easier for them to contact law enforcement if someone needs help.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>Silicon Valley may not have anticipated the scale, costs, and implications of moderation, but many others have </p></blockquote></figure>
<p>Over the course of the past five years, we have both engaged in free speech and safety advocacy work and <a href="https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech">written extensively</a> about the lack of transparency and power inherent in Facebook&rsquo;s moderation of user-generated content. We have interviewed dozens of industry leaders, experts, employees, former employees, moderators, academics, legal experts, and social scientists. One conclusion is clear and consistent: companies like Facebook &mdash; and the practices they develop to respond to the challenges of moderation &mdash; are human endeavors, governed by human experiences, judgments, and needs. And maybe, above all else, profit. Social media platforms are not simply technologies with problems that can be solved with more proprietary technology, people, or programming &mdash; they are socio-technologies with impacts that far exceed the protocols and profit goals of any platform.</p>

<p>They have, as is now glaringly obvious, serious consequences for individuals and society. And these consequences are a result of private deliberations behind closed doors. Corporations regulate speech all day, every day. Silicon Valley may not have anticipated the scale, costs, and implications of moderation, but many others, like UCLA&rsquo;s Sarah T. Roberts, have been trying to raise public awareness and demand greater corporate transparency for years.</p>

<p>Where does Facebook go from here? Solutions remain elusive, but there are several consistent suggestions we&rsquo;ve heard from observers:</p>
<blockquote class="wp-block-quote has-text-align-none is-layout-flow wp-block-quote-is-layout-flow">
<p><strong>Build a tech fix</strong></p>

<p>There is currently an industry-wide focus on the use of algorithms to &ldquo;solve&rdquo; the problem of moderation. But algorithms are learning machines that rely on inputs and past behavior, often turning the &ldquo;is&rdquo; into the &ldquo;ought,&rdquo; at scale. Automation means the reproduction of status quo inequities evident in the moderation guidelines published by <em>The Guardian</em>. It is currently okay, according to the published documents, to say &ldquo;Let&rsquo;s beat up fat kids,&rdquo; and &ldquo;To snap a bitch&rsquo;s neck, make sure to apply all of your pressure to the middle of her throat,&rdquo; but not to specify which fat kid you want to beat up or which bitch you want to kill. Algorithms cannot appreciate preexisting biases, a specific context, or pervasive environmental hostility in making those decisions. Without accountability, oversight, and thoughtful ethical intervention, algorithms run the risk, arguably and demonstrably, of making an already complex situation worse. Initiatives are underway to improve an algorithm&#8217;s ability to address these issues, but for the foreseeable future the nature of moderation remains intensely human.</p>

<p>&ldquo;Facebook and others keep telling us that machine learning is going to save the day,&rdquo; says Hany Farid, professor and chair of computer science at Dartmouth and a senior advisor to the Counter Extremism Project. He developed the groundbreaking photoDNA technology used to detect, remove, and report child-exploitation images. Farid says that his new technology, eGlyph, built on the conceptual framework of photoDNA, &ldquo;allows us to analyze images, videos, and audio recordings (whereas photoDNA is only applicable to images).&rdquo; But a better algorithm can&rsquo;t fix the mess Facebook&rsquo;s currently in. &ldquo;This promise is still &mdash; at best &mdash; many years away, and we can&rsquo;t wait until this technology progresses far enough to do something about the problems that we are seeing online.&rdquo;</p>

<p><strong>Spend more money and hire more people</strong></p>

<p>In response to recent criticism, Facebook announced that it was hiring an additional 3,000 moderators, on top of the 4,500 individuals it already employs worldwide, to take down questionable content. More moderators, the argument goes, could better handle the fire hose of abusive, damaging, and &ldquo;offensive&rdquo; content. But there is no magic number of people that will stem the flow of violent, hateful, and threatening content, even if they are able to respond more quickly to reports. Adding and supporting moderators, while critical, has nothing to do with the process of deciding what is acceptable content.</p>

<p><strong>Regulate platforms more heavily</strong></p>

<p>In the United States, companies like Facebook are largely unregulated when it comes to user-generated content. This is thanks to Section 230 of the Communications Decency Act, which legally absolves platforms from responsibility for user-generated content. Social media systems are not, for the purposes of US law, considered publishers and are almost entirely immune from content-related legal liability. But outside of the US, Facebook is engaged in endless legal battles with governments whose beliefs differ greatly from those of the US government.</p>

<p>Activist Jillian York, director for International Freedom of Expression at the Electronic Frontier Foundation, runs a project, onlinecensorship.org, that tracks the removal of user content by platforms. She is acutely aware of the tightrope walk between government and private corporation regulations of speech. &ldquo;Section 230 is vital to ensuring that platforms like Facebook (as well as smaller websites and individual publishers) aren&#8217;t held liable for users&#8217; speech,&rdquo; says York. She is strongly opposed to any kind of full government oversight, but believes regulation by the state is preferable to regulation by private companies like Facebook. &ldquo;The US is unique in these protections, and while I understand that there are drawbacks, I think the benefits are enormous. That said, Facebook already plays an editorial role in user content. I think the key is more transparency, all around.&rdquo;</p>

<p><strong>Don&rsquo;t regulate or moderate</strong></p>

<p>Free speech activists and absolutists argue that Facebook should not be in the business of moderating content at all. This position is untenable in almost every conceivable way. For Facebook, free rein is incompatible with the existence of Facebook as a profitable, expanding, global brand. An unmoderated platform, Facebook executives and industry experts know, would almost immediately spiral into a pornographic cesspool, driving mainstream users away. Any practical and realistic approach to the problems represented by user-generated content has to presume some level of moderation.</p>

<p><strong>Change how Facebook moderation functions</strong></p>

<p>Almost without fail, executives, civil society advocates, and subject matter experts who have worked with Facebook during the past 10 years believe that greater transparency and engagement are critical to both the assessment of problems and finding solutions. <em>The Guardian</em>&rsquo;s publication of the moderation guidelines is so important because Facebook, like Twitter, YouTube, and other similar platforms, refuses to share details of its content regulation. While there are many understandable reasons &mdash; abusers gaming the system, competitors gaining insights into internal processes &mdash; there are many more compelling ones arguing for more transparency and accountability.</p>

<p>Previously, we have explored the possibility that Facebook effectively functions more like a media company than a tech company and that the act of moderation arguably translates into &ldquo;publishing.&rdquo; While categorizing Facebook as &ldquo;media&rdquo; would not solve the problem of moderation, per se, it would have serious ethical, professional, and legal implications. Facebook would shoulder more responsibility for its powerful influence in the public sphere.</p>

<p>In his 2016 book <em>Free Speech</em>, Timothy Garton Ash calls Facebook a superpower, built exclusively on a profit model absent the moral and legal mechanisms of accountability that exist for traditional media. Facebook controls vast, privately owned public spaces. If the company were a country, it would be the world&rsquo;s largest. But it does not have the formal lawmaking authority of sovereign states. There is no formal mechanism of accountability. &ldquo;Yet [its] capacity to enable or limit freedom of information and expression is greater than most states,&rdquo; writes Garton Ash.</p>

<p>Nicco Mele, a technologist, former deputy publisher of the <em>Los Angeles Times</em>, and now director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, argues that Facebook is a media company &ldquo;because it derives almost all of its revenue from advertising, by monetizing its audience attention.&rdquo; Media companies, he wrote in an email, &ldquo;have special responsibilities to the public good because of their unique power to shape public opinion.&rdquo;</p>

<p>The recent attention to Facebook&rsquo;s moderation fails, he says, &ldquo;increases the likelihood that Facebook will need to put in place new methods of managing the content published on their platform; the company already does this around some kinds of content (child pornography, for example). In my assessment, the company is far from the level of monitoring and infrastructure their platform requires to meet its obligations to the public &mdash; and consequently a likely increase in headcount and expenditures is in its future. The longer the company puts off self-policing, the more likely regulation becomes.&rdquo;</p>

<p>Mele believes regulation could mean many things for consumers. &ldquo;We have a long history of regulating media in different ways &mdash; remember Janet Jackson&rsquo;s Super Bowl halftime fashion mishap? Or more recently, Stephen Colbert&rsquo;s unfortunate use of the word [cock] holster on late-night television?&rdquo; The Federal Communications Commission, he notes, monitors and regulates broadcast media, so, &ldquo;there is a precedent that any regulation could follow. The movie industry, facing potential regulation, voluntarily adopted the parental rating system. Facebook is at a crossroads.&rdquo;</p>
</blockquote>
<p>For now, it&rsquo;s all but inconceivable that Mark Zuckerberg will ever call Facebook a media company or a publisher. Early on, he described Facebook as an &ldquo;information infrastructure.&rdquo; He has recently used the term &ldquo;social infrastructure&rdquo; instead, <a href="https://qz.com/977297/facebook-live-murders-algorithms-are-failing-facebook-can-humanity-save-it/">as reported by Sarah Kessler</a>, an expression that evokes such public service functions as the USPS or public housing. If that&rsquo;s the case, then the debate over moderation is really a debate about a global public commons, even as that commons is privately held and regulated.</p>

<p>On the same day <em>The Guardian</em> released the leaked Facebook Files, <em>The New York Times</em> ran a profile of Twitter founder Evan Williams. In the piece, Williams apologized for Twitter&rsquo;s possible role in Trump&rsquo;s win, and admitted he&rsquo;d never fathomed that Twitter would be used for nefarious purposes. &ldquo;I thought once everyone could speak freely and exchange information and ideas, the world is automatically going to be a better place. I was wrong about that.&rdquo; His mistake, reads the story, &ldquo;was expecting the internet to resemble the person he saw in the mirror&hellip;&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>You wouldn’t know that the internet was broken from Facebook’s earnings report </p></blockquote></figure>
<p>&ldquo;The Internet Is Broken,&rdquo; the story headline read. &ldquo;Broken&rdquo; has become the preferred expression for referring to the dark, unsavory part of the internet. But you wouldn&rsquo;t know that the internet was broken from Facebook&rsquo;s earnings report for the third quarter of 2016 &mdash; a period during which Facebook&rsquo;s net income increased 166 percent over the same period in 2015. The company&rsquo;s ad <a href="https://investor.fb.com/investor-news/press-release-details/2016/Facebook-Reports-Third-Quarter-2016-Results/default.aspx">revenue hit an unprecedented $6.8 billion</a>. Harassment, fake news, and gruesome, graphically violent, and salacious content <a href="http://www.salon.com/2016/12/17/fake-news-and-online-harassment-are-more-than-social-media-byproducts-theyre-powerful-profit-drivers/">are profitable</a> because they are relentless drivers of user engagement. Harvard Law School professor Susan Crawford, author of <em>Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age,</em> says, &ldquo;This is about Facebook&rsquo;s determination to have billions of people treat its platform as The Internet&hellip; Facebook wants simultaneously to be viewed as basic infrastructure while ensuring its highly profitable ways of doing deals are unconstrained.&rdquo;</p>

<p>In the case of Facebook, or any other major platform, tech founders and leaders, who make billions off the affective digital labor of billions worldwide, have a distinct responsibility to imagine all the ways their platforms can be perverted in a world that includes murder, rape, child abuse, and terrorism, and those who will use platforms like Facebook to enact them.</p>

<p>&ldquo;They&rsquo;ve been trying to contain this problem, real and significant, for a long time,&rdquo; says Roberts, &ldquo;but it&rsquo;s no longer containable.&rdquo;</p>

<p>We are grateful to whoever took the significant risk to share documents Facebook has worked so hard for so long to keep under wraps, as we are grateful to the moderators, community managers, and senior executives who risked their jobs to talk to us about their hard work. For now, in the words of Hany Farid, &ldquo;Here we are in crisis mode.&rdquo; Facebook and others, he says, &ldquo;have been dragging their feet for years to deal with what they knew was a growing problem. There is no doubt that this is a difficult problem and any solution will be imperfect, but we can do much better than we have.&rdquo;</p>

<p><a href="https://twitter.com/schemaly?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor"><em>Soraya Chemaly</em></a><em> and&nbsp;</em><a href="https://twitter.com/ckbuni?lang=en"><em>Catherine Buni</em></a><em>&nbsp;report on online content moderation for&nbsp;</em><a href="http://www.theinvestigativefund.org/"><em>The Investigative&nbsp;Fund</em></a><em>.&nbsp;</em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Catherine Buni</name>
			</author>
			
			<title type="html"><![CDATA[Facebook won’t call itself a media company.​ ​Is it time to reimagine journalism for the digital age?]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2016/11/16/13655102/facebook-journalism-ethics-media-company-algorithm-tax" />
			<id>https://www.theverge.com/2016/11/16/13655102/facebook-journalism-ethics-media-company-algorithm-tax</id>
			<updated>2016-11-16T16:25:07-05:00</updated>
			<published>2016-11-16T16:25:07-05:00</published>
			<category scheme="https://www.theverge.com" term="Facebook" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Report" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[In the days leading up to and following this election, Facebook has been called lots of things &#8212; &#8220;a website,&#8221; &#8220;an internet company,&#8221; &#8220;a major player in the media universe,&#8221; &#8220;a strange new class of media outlet,&#8221; a &#8220;tech behemoth,&#8221; a &#8220;cesspool of nonsense.&#8221; Vox cut to the chase, calling on Facebook to &#8220;admit that [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6090619/mark-zuckerberg-facebook-473.0.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>In the days leading up to and following this election, Facebook has been called lots of things &mdash; <a href="https://techcrunch.com/2016/11/15/google-and-facebook-ban-fake-news-sites-from-their-advertising-networks/">&#8220;a website,&#8221;</a> <a href="http://www.nytimes.com/2016/11/15/technology/google-will-ban-websites-that-host-fake-news-from-using-its-ad-service.html">&#8220;an internet company,&#8221;</a> <a href="http://www.vanityfair.com/news/2016/11/divided-over-trump-facebook-employees-rebel-against-zuckerberg">&#8220;a major player in the media universe,&#8221;</a> <a href="http://www.nytimes.com/2016/08/28/magazine/inside-facebooks-totally-insane-unintentionally-gigantic-hyperpartisan-political-media-machine.html">&#8220;a strange new class of media outlet,&#8221;</a> a <a href="http://www.nytimes.com/2016/11/15/technology/google-will-ban-websites-that-host-fake-news-from-using-its-ad-service.html">&#8220;tech behemoth,&#8221;</a> a <a href="https://www.cnet.com/news/john-oliver-facebook-is-a-cesspool-of-nonsense-donald-trump/">&#8220;cesspool of nonsense.&#8221;</a> <em>Vox</em> <a href="http://www.vox.com/new-money/2016/11/6/13509854/facebook-politics-news-bad">cut to the chase</a>, calling on Facebook to &#8220;admit that it is, in fact, a media company&#8221; observing &#8220;that the design of its news feed inherently involves making editorial decisions, and that it has a responsibility to make those decisions responsibly.&#8221;</p>

<p>Even though Facebook continues to deny its role as part of the media &mdash; &#8220;News and media are not the primary things people do on Facebook,&#8221; <a href="https://www.facebook.com/zuck/posts/10103253901916271">Mark Zuckerberg wrote in a Facebook response</a> on Monday, &#8220;so I find it odd when people insist we call ourselves a news or media company in order to acknowledge its importance&#8221; &mdash; some <a href="http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/">44 percent of Americans</a> now use Facebook as their primary source of news, according to Pew research. In a series of tweets the day after the election, <em>New York Times</em> columnist <a href="https://twitter.com/zeynep/status/796436689849843712">Zeynep Tufekci wrote</a>, linking to journalist Joshua Benton at Nieman Lab, &#8220;Facebook&rsquo;s algorithm is central to how news &amp; information is consumed in the world today, and no historian will write about 2016 without it.&#8221;</p>

<p>Whether or not we deem it to be a media organization, Facebook will not, in the foreseeable future, wear that badge. But as a &#8220;<a href="http://cs.stanford.edu/people/eroberts/cs201/projects/2010-11/Journalism/index8192.html?page_id=30">new source of journalism</a>&#8221; (a term that&#8217;s recently cropped up), should it be expected to meet Fourth Estate obligations? And, if so, how does it do so responsibly? And if it refuses, should we tax Facebook and other platforms to fund quality journalism?</p>
<div class="m-snippet float-left"><q>Should we tax Facebook and other platforms to fund quality journalism?</q></div>
<p>The foundations of the Fourth Estate, fortified by the First Amendment, rest, in large part, on the idea of checks and balances. In quick summary, the press is, in theory, <a href="https://www.hks.harvard.edu/fs/pnorris/Acrobat/Driving%20Democracy/Chapter%208.pdf">watchdog, civic forum, and agenda-setter</a>, holding elected officials to account and bound by longstanding liability laws. In the words of Joseph Pulitzer, the press &#8220;should always fight for progress and reform; never tolerate injustice or corruption; always fight demagogues of all parties&hellip;always oppose privileged classes and public plunderer; never lack sympathy with the poor; always remain devoted to the public welfare&hellip;&#8221;</p>

<p>By contrast, the foundations of Facebook, and other new sources of journalism, rest in large part on the ideals of what John G. Palfrey, the former executive director of the Berkman Klein Center for Internet &amp; Society at Harvard University, called <a href="http://cyber.law.harvard.edu/teaching/ilaw/2011/Summary_of_Four_Phases">the Open Internet</a>. The early Open Internet, considered apart from the law and real life, was reinforced by 1996&rsquo;s Communications Decency Act&rsquo;s Section 230(c), or the Good Samaritan act. As writer and activist Soraya Chemaly and <a href="http://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech">I have previously chronicled</a>, Section 230 is widely cherished as the &#8220;most important law on the Internet,&#8221; credited with making possible a &#8220;trillion or so dollars of value&#8221; according to David Post, legal scholar and fellow at the Center for Democracy and Technology.</p>

<p>The provision reads: &#8220;No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.&#8221; These 26 words put free speech decisions into private hands, effectively immunizing platforms from legal liability for all content that does not violate federal law, such as child pornography. In other words, Facebook and Google enjoy the benefits (and ad revenue) of being members of the media without any of the risk. Asking them to voluntarily declare themselves media companies seems more and more like a fool&rsquo;s errand and unlikely to inspire substantive change.</p>

<p>Nicco Mele is a technologist, former deputy publisher of the <em>Los Angeles Times</em>, and now director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School &mdash; an organization founded to study the media and their role in a democratic society. Mele believes that even if tech companies eventually label themselves media entities, it&rsquo;ll be a decade or so from now.</p>

<p>&#8220;These companies want to say they&rsquo;re tech companies,&#8221; he said, &#8220;and they are. But they&rsquo;re also media companies.&#8221; First, the bulk of their revenue is generated from ads and user attention. Second, they have a disproportionate ability to shape the public sphere. Mele argues that these companies, as a result, should be judged against other media companies. &#8220;They are not building Roombas,&#8221; he said. &#8220;If you shape public opinion, you do have special responsibilities. It&rsquo;s why we have the Pulitzer Prize, to motivate better behavior, and why, ultimately, we had the Fairness Doctrine.&#8221;</p>
<p><q class="right">Asking platforms to voluntarily declare themselves media companies seems more and more like a fool&rsquo;s errand</q></p>
<p>There are good reasons why Facebook doesn&rsquo;t want to be a media company, and the reasons, he said, are not simply legal or regulatory. It&rsquo;s also a matter of brand management, talent, revenue, and regulation, in that order. &#8220;It&rsquo;s cooler to be a tech company,&#8221; he said. And consumers want cool. Tech status helps attract talent. &#8220;Legacy media have reputations as bad places to work.&#8221; As a business model, if defined as media, ad revenue will be scrutinized, and, Mele predicted, &#8220;regulation will force them to address how they meet the public sphere, which will increase [employee] headcount and increase rates.&#8221;</p>

<p>In his 2016 book <em>Free Speech</em>, Timothy Garton Ash calls Facebook and Google superpowers, built exclusively on a profit model absent the moral and legal mechanisms of accountability that exist for traditional media. They control vast privately owned public spaces. They do not have the formal lawmaking authority of sovereign states. There is no formal mechanism of accountability. Their leaders are not accountable to their users. &#8220;Yet their capacity to enable or limit freedom of information and expression is greater than most states.&#8221;</p>

<p>&#8220;New media,&#8221; he writes, &#8220;live in constant tension between the public service they offer&#8221; &mdash; freedom of expression and information &mdash; &#8220;and the private profit they pursue.&#8221;</p>

<p>And the tension has never been higher. News is only one subset of Facebook&rsquo;s content. In his Saturday night <a href="https://www.facebook.com/zuck/posts/10103253901916271">Facebook post</a>, Zuckerberg himself seemed to conflate a self-serving impetus to keep users on the platform with the company&rsquo;s public service. Zuckerberg wrote, &#8220;Our goal is to show people the content they will find most meaningful, and people want accurate news.&#8221;</p>

<p>So how should platforms engage with the critical role that journalism serves within a democracy short of bearing the mantle of a media organization?</p>

<p>Among the proposed answers, the British Media Reform Coalition (MRC) and the National Union of Journalists are currently pushing the British Parliament to amend the Digital Economy Bill now in play, to include a 1 percent levy on &#8220;large digital intermediaries&#8221; &mdash; Facebook and Google in particular &mdash; to fund nonprofit sources of investigative reporting like the Bureau of Investigative Journalism, <em>ProPublica</em>, or the BBC. In other words, even if you can&rsquo;t force platforms to take on the accountability of acting as media, the public has the power to make them fund journalism.</p>
<p><q class="left">It appears the public understands the value of contribution in support of journalism already</q></p>
<p>Des Freedman, London-based former chair of MRC and author of <em>The Contradictions of Media Power</em>, is among the many arguing that Facebook and Google are media companies, &#8220;even if they deny the case.&#8221; &#8220;What we see in Facebook and Google are utterly decentralized technologies organized in the most unbelievably centralized commercial structures,&#8221; he said earlier this week. &#8220;Some of the oil brands of the last century would be quite jealous of their position.&#8221; It is, he concludes, &#8220;productive and legitimate for the public to demand that they make a contribution.&#8221;</p>

<p>Indeed, it appears the public understands the value of contribution in support of journalism already. On Monday, <a href="http://www.niemanlab.org/2016/11/after-trumps-election-news-organizations-see-a-bump-in-subscriptions-and-donations/">Nieman Lab reported</a> that donations and subscriptions spiked post-election at <em>The Atlantic</em>, <em>ProPublica</em>, <em>The New York Times</em>, and <em>The Washington Post</em>. <em>ProPublica</em> saw donations jump as election results rolled in, and a tenfold increase in the days following. What contribution looks like from the perspective of the platforms &mdash; what form it takes, legal or nonlegal &mdash; is another question.</p>

<p>&#8220;A levy is a very European thing, a welfarist redistribution system,&#8221; Freedman was quick to acknowledge, and, as such, perhaps unlikely to gain traction in the US. And even if a journalism tax were imposed, taxing platforms to fund media under a Trump presidency opens the door to a host of new concerns. Yet, he emphasized, in a Bernie Sanders-like echo, asking corporations to contribute publicly for the privilege of profit is &#8220;a traditional form of seeking to equalize power and speech rights.&#8221;</p>

<p>&#8220;Google,&#8221; he noted, &#8220;has regularly made contributions.&#8221; In 2015 the company launched its <a href="https://www.digitalnewsinitiative.com/">Digital News Initiative (DNI)</a> after accusations by European regulators that it was, in the words of one <em>Guardian</em> report, &#8220;distorting internet search results and acting anti-competitively.&#8221; Google has committed some 150 million euros in Europe to date. The initiative is billed as &#8220;a collaboration between Google and news publishers in Europe to support high quality journalism and encourage a more sustainable news ecosystem through technology and innovation,&#8221; and Freedman considers it a start in the right direction.</p>

<p><a href="https://www.linkedin.com/in/maggieshiels">Maggie Shiels</a>, formerly of the BBC and currently in corporate communications at Google, said, &#8220;We are striving to be better partners with publishers across the board, and recognize the value that quality journalism plays in the world today.&#8221; Though she declined to comment on the specifics of the levy proposal, she did say that Google is supporting quality journalism &#8220;on multiple fronts.&#8221; &#8220;Our News Lab team,&#8221; 10 or so in-house employees, some former journalists like herself, &#8220;trains tens of thousands of journalists every year from around the world on our tools &mdash; all free.&#8221; In a summary of programs she forwarded, Shiels reported that Google is collaborating with news organizations, through its <a href="https://newslab.withgoogle.com/programs">News Lab program</a>, on using data to tell stories through Google&rsquo;s Trends tool. It is a founding member of the <a href="https://firstdraftnews.com/about/">First Draft News</a> coalition, &#8220;set up to raise awareness and find solutions in all aspects of social news gathering and verification.&#8221; Google is also working with the industry and others on the <a href="http://thetrustproject.org/">Trust Project</a>, which, in her words, &#8220;explores how to make trustworthy journalism stand out.&#8221;</p>

<p>Like Google, Facebook declined comment on the specifics of the levy proposal. Facebook&rsquo;s journalism-related developments include participation in the <a href="https://firstdraftnews.com/social-networks-unite-global-newsrooms-take-action-misinformation-online/">First Draft News Coalition,</a> a group of 20 news organizations, including <em>The Telegraph</em>, <em>The New York Times</em>, <em>The Washington Post</em>, and Agence France-Presse, designed to improve reporting from social media and to address fake news. Facebook plans to work with First Draft to develop a training program for journalists worldwide. In the last few weeks, on its Facebook for Journalists site, Facebook introduced an <a href="https://www.facebook.com/journalists">online training,</a> available through <a href="https://www.facebook.com/blueprint">Blueprint</a> and drawing on <a href="https://media.fb.com/2016/10/25/introducing-online-courses-for-journalists-on-facebook/">a bank of reporting case studies.</a> In October, Facebook updated its <a href="https://media.fb.com/2016/10/17/signal-to-now-surface-live-video-for-journalists/">Signal offerings to include Live video for journalists,</a> and is beginning to see live content generated as a result.</p>

<p>It&rsquo;s early, but it&rsquo;s worth noting that Facebook does not yet appear to be contributing discrete funds to support outside sources of investigative journalism in the way that Google does through its DNI initiative. Instead, it appears to be using its platform to increase use and traffic, educate journalists, and produce more content. MRC&rsquo;s Des Freedman hopes Facebook will one day adopt Google&rsquo;s approach.</p>

<p>He also hopes the approach will evolve toward something more sustainable, like a permanent levy. &#8220;It&rsquo;s not about protecting [existing media],&#8221; Freedman said, &#8220;but nurturing new forms of journalism,&#8221; forms that reach local areas where reporting has all but dried up, and that represent issues important to vast swaths of the population that currently go underreported. And these new forms need to be transparent, with transparent processes for distribution of funds. &#8220;We don&rsquo;t want to replace one form of unaccountability with an equally opaque form of journalism. Otherwise, you&rsquo;re just making the same mistakes.&#8221;</p>

<p>Whether or not public-private journalism partnerships ever take root, a growing number of experts, academics, pundits, and policy makers (and there were already many) are forcefully calling for transparency and accountability, no matter how tech companies move forward.</p>
<p><q class="center">What is the basis on which these journalists and editors will now verify stories?</q></p>
<p>Some, <a href="https://www.theguardian.com/technology/2016/sep/10/facebook-news-media-editor-vietnam-photo-censorship">including Jeff Jarvis, journalism professor at the City University of New York, and Edward Wasserman, dean of the University of California, Berkeley journalism school,</a> have suggested Facebook hire more editors and journalists to help curate and manage its news feeds and algorithms. This, of course, raises even more questions. As Freedman asked, What is the basis on which these journalists and editors will now verify stories? What will be the editorial guidelines underpinning verification? It is yet more proof that Google and Facebook are not neutral intermediaries but increasingly important media players with major responsibilities in the emerging news environment.</p>

<p>Others, including <a href="http://www.usatoday.com/story/tech/columnist/2016/11/12/targeting-race-ads-nothing-new-but-stakes-high/93638386/">Safiya U. Noble and Sarah T. Roberts,</a> and Zeynep Tufekci, call for significant AI reform to address outcomes like the one reported in Monday&rsquo;s <a href="https://www.washingtonpost.com/news/the-fix/wp/2016/11/14/googles-top-news-link-for-final-election-results-goes-to-a-fake-news-site-with-false-numbers/?postshare=3671479265863720&amp;tid=ss_tw"><em>Washington Post</em>, headlined</a>: &#8220;Google&rsquo;s top news link for &lsquo;final election results&rsquo; goes to a fake news site with false numbers.&#8221; Facebook struggles with a similar challenge. <a href="http://www.nytimes.com/2016/11/15/opinion/mark-zuckerberg-is-in-denial.html">Writes Tufekci,</a> &#8220;Facebook could tweak its algorithm so that it does less to reinforce users&rsquo; existing beliefs, and more to present factual information&hellip; Facebook should also allow truly independent researchers to collaborate with its data team to understand and mitigate these problems.&#8221; And if the company employs human decisions around news, it could explain those decisions publicly as well. Garton Ash recommends kitemarking all media providers, akin to food labeling, covering such information as editorial process, standards applied, and ownership, and also paying close attention to competition policy.</p>

<p>In any case, as Des Freedman said, &#8220;These are early days.&#8221; He continued, &#8220;Let&rsquo;s not kid ourselves. Even if this [levy] model is successful, it&rsquo;s still really important to stress that recent events concerning, for example, Brexit and Trump, are political crises that have journalistic implications.&#8221; As <em>The New York Times</em>&rsquo; Tufekci observed here in the US, &#8220;Mass media trivialized the election, social media inflamed it. But underlying it all: elite failure in responding to global turbulence.&#8221;</p>

<p>It&rsquo;s conceivable that Zuckerberg is in the midst of responding to turbulence of his own. As of press time, he had yet to publicly respond to his renegade employees&rsquo; charges on Monday that fear fuels Facebook&rsquo;s news operation. &#8220;I am confident we can find ways for our community to tell us what content is most meaningful,&#8221; he wrote in Saturday&rsquo;s Facebook post, &#8220;but I believe we must be extremely cautious about becoming arbiters of truth ourselves.&#8221;</p>

<p>&#8220;Identifying the &lsquo;truth,&rsquo;&#8221; he wrote, &#8220;is complicated.&#8221;</p>

<p>It&rsquo;s easier when pushed. On Monday, six days after Trump was elected the 45th president of the United States, after six days of media investigation and damning headlines, both <a href="http://www.nytimes.com/2016/11/15/technology/google-will-ban-websites-that-host-fake-news-from-using-its-ad-service.html">Google and Facebook announced</a> plans to start confronting the problem of fake and malicious news on their platforms. The first step? They&rsquo;re following the money, and restricting advertising on sites that publish hoaxes and lies and call them news.</p>

<p><em>Catherine Buni reports on online content moderation for </em><a href="http://www.theinvestigativefund.org/"><em>The Investigative Fund</em></a><em>. </em><a href="http://sorayachemaly.tumblr.com/"><em>Soraya Chemaly</em></a><em> contributed to this story.</em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Catherine Buni</name>
			</author>
			
			<author>
				<name>Soraya Chemaly</name>
			</author>
			
			<title type="html"><![CDATA[The secret rules of the internet]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech" />
			<id>https://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech</id>
			<updated>2016-04-13T10:30:06-04:00</updated>
			<published>2016-04-13T10:30:06-04:00</published>
			<category scheme="https://www.theverge.com" term="Features" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Speech" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[Julie Mora-Blanco remembers the day, in the summer of 2006, when the reality of her new job sunk in. A recent grad of California State University, Chico, Mora-Blanco had majored in art, minored in women&#8217;s studies, and spent much of her free time making sculptures from found objects and blown glass. Struggling to make rent [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/13083979/ericPetersen_skyscraper_RGB_3000x1500__1_.0.0.1460552599.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<div class="m-snippet full-image"> <img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6330611/ericPetersen_skyscraper_RGB_fin.0.jpg" alt="Moderation lede final" data-chorus-asset-id="6330611"> <section class="lede"> <h1>The secret rules of the internet</h1> <h2>The murky history of moderation, and how it&rsquo;s shaping the future of free speech</h2> <hr class="roots_div1"> <h3>By Catherine Buni &amp; Soraya Chemaly | Illustrations by Eric Petersen</h3> </section> </div><p> </p><div class="m-snippet thin"> <p class="pt-dropcap">Julie Mora-Blanco remembers the day, in the summer of 2006, when the reality of her new job sunk in. A recent grad of California State University, Chico, Mora-Blanco had majored in art, minored in women&rsquo;s studies, and spent much of her free time making sculptures from found objects and blown-glass. Struggling to make rent and working a post-production job at Current TV, she&rsquo;d jumped at the chance to work at an internet startup called YouTube. Maybe, she figured, she could pull in enough money to pursue her lifelong dream: to become a hair stylist.</p> <aside class="float-right partnership-blurb"> <img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6325253/investigativefundlogo.0.jpg"> <p>This article was reported in partnership with <a target="_blank" href="http://www.theinvestigativefund.org/">The Investigative Fund at The Nation Institute</a></p> </aside> <p>It was a warm, sunny morning, and she was sitting at her desk in the company&rsquo;s office, located above a pizza shop in San Mateo, an idyllic and affluent suburb of San Francisco. Mora-Blanco was one of 60-odd twenty-somethings who&rsquo;d come to work at the still-unprofitable website.</p> <p>Mora-Blanco&rsquo;s team &mdash; 10 people in total &mdash; was dubbed The SQUAD (Safety, Quality, and User Advocacy Department). 
They worked in teams of four to six, some doing day shifts and some night, reviewing videos around the clock. Their job? To protect YouTube&rsquo;s fledgling brand by scrubbing the site of offensive or malicious content that had been flagged by users, or, as Mora-Blanco puts it, &#8220;to keep us from becoming a shock site.&#8221; The founders wanted YouTube to be something new, something better &mdash; &#8220;a place for everyone&#8221; &mdash; and not another eBaum&rsquo;s World, which had already become a repository for explicit pornography and gratuitous violence.</p> <p>Mora-Blanco sat next to Misty Ewing-Davis, who, having been on the job a few months, counted as an old hand. On the table before them was a single piece of paper, folded in half to show a bullet-point list of instructions: Remove videos of animal abuse. Remove videos showing blood. Remove visible nudity. Remove pornography. Mora-Blanco recalls her teammates were a &#8220;mish-mash&#8221; of men and women; gay and straight; slightly tipped toward white, but also Indian, African-American, and Filipino. Most of them were friends, friends of friends, or family. They talked and made jokes, trying to make sense of the rules. &#8220;You have to find humor,&#8221; she remembers. &#8220;Otherwise it&rsquo;s just painful.&#8221;</p> <p>Videos arrived on their screens in a never-ending queue. After watching a couple seconds apiece, SQUAD members clicked one of four buttons that appeared in the upper right hand corner of their screens: &#8220;Approve&#8221; &mdash; let the video stand; &#8220;Racy&#8221; &mdash; mark video as 18-plus; &#8220;Reject&#8221; &mdash; remove video without penalty; &#8220;Strike&#8221; &mdash; remove video with a penalty to the account. Click, click, click. 
But that day Mora-Blanco came across something that stopped her in her tracks.</p> <p>&#8220;Oh, God,&#8221; she said.</p> <aside class="float-left q-animated" id="scene1" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">That day Mora-Blanco came across something that stopped her in her tracks</q></aside> <p>Mora-Blanco won&rsquo;t describe what she saw that morning. For everyone&rsquo;s sake, she says, she won&rsquo;t conjure the staggeringly violent images which, she recalls, involved a toddler and a dimly lit hotel room.</p> <p>Ewing-Davis calmly walked Mora-Blanco through her next steps: hit &#8220;Strike,&#8221; suspend the user, and forward the person&rsquo;s account details and the video to the SQUAD team&rsquo;s supervisor. From there, the information would travel to the <a href="http://www.dailymail.co.uk/news/article-2509036/Google-blocks-child-porn-Internet-giant-axes-links-sex-abuse-websites.html">CyberTipline</a>, a reporting system launched by the National Center for Missing and Exploited Children (NCMEC) in 1998. Footage of child exploitation was the only black-and-white zone of the job, with protocols outlined and explicitly <a href="http://www.missingkids.com/History">enforced by law since the late 1990s</a>.</p> <p>The video disappeared from Mora-Blanco&rsquo;s screen. The next one appeared.</p> <p>Ewing-Davis said, &#8220;Let&rsquo;s go for a walk.&#8221;</p> <p><em>Okay. This is what you&rsquo;re doing</em>, Mora-Blanco remembers thinking as they paced up and down the street. <em>You&rsquo;re going to be seeing bad stuff</em>.</p> <p>Almost a decade later, the video and the child in it still haunt her. &#8220;In the back of my head, of all the images, I still see that one,&#8221; she said when we spoke recently. &#8220;I really didn&rsquo;t have a job description to review or a full understanding of what I&rsquo;d be doing. 
I was a young 25-year-old and just excited to be getting paid more money. I got to bring a computer home!&#8221; Mora-Blanco&rsquo;s voice caught as she paused to collect herself. &#8220;I haven&rsquo;t talked about this in a long time.&#8221;</p> <p>Mora-Blanco is one of more than a dozen current and former employees and contractors of major internet platforms from YouTube to Facebook who spoke to us candidly about the dawn of content moderation. Many of these individuals are going public with their experiences for the first time. Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history. As law professor Jeffrey Rosen <a href="http://www.nytimes.com/2010/12/13/technology/13facebook.html?_r=0">first said</a> many years ago of Facebook, these platforms have &#8220;more power in determining who can speak and who can be heard around the globe than any Supreme Court justice, any king or any president.&#8221;</p> <br> <hr class="roots_div4"> <br> <p class="pt-dropcap">Launched in 2005, YouTube was the brainchild of Chad Hurley, Steve Chen, and Jawed Karim&mdash;three men in their 20s who were frustrated because technically there was no easy way for them to share two particularly compelling videos: clips of the 2004 tsunami that had devastated southeast Asia, and Janet Jackson&rsquo;s Superbowl &#8220;wardrobe malfunction.&#8221; In April of 2005, they tested their first upload. By October, they had posted their first one million-view hit: Brazilian soccer phenom Ronaldinho trying out a pair of gold cleats. A year later, Google paid an unprecedented $1.65 billion to buy the site. Mora-Blanco got a title: content policy strategist, or in her words, &#8220;middle man.&#8221; Sitting between the front lines and content policy, she handled all escalations from the front-line moderators, coordinating with YouTube&rsquo;s policy analyst. 
By mid-2006, YouTube viewers were watching more than <a href="http://www.usatoday.com/tech/news/2006-07-16-youtube-views_x.htm">100 million videos</a> a day.</p> <p>In its earliest days, YouTube attracted a small group of people who mostly shared videos of family and friends. But as volume on the site exploded, so did the range of content: clips of commercial films and music videos were being uploaded, as well as huge volumes of amateur and professional pornography. (Even today, the latter eclipses every other category of violating content.) Videos of child abuse, beatings, and animal cruelty followed. By late 2007, YouTube had codified its commitment to respecting copyright law through the creation of a <a href="http://www.youtube.com/yt/copyright/content-verification-program.html">Content Verification Program</a>. But screening malicious content would prove to be far more complex, and required intensive human labor.</p> <aside class="float-right q-animated" id="scene2" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">They followed a guiding-light question: &#8220;Can I share this video with my family?&#8221;</q></aside> <p>Sometimes exhausted, sometimes elated, and always under intense pressure, the SQUAD reviewed all of YouTube&rsquo;s flagged content, developing standards as they went. They followed a guiding-light question: &#8220;Can I share this video with my family?&#8221; For the most part, they worked independently, debating and arguing among themselves; on particularly controversial issues, strategists like Mora-Blanco conferred with YouTube&rsquo;s founders. 
In the process, they drew up some of the earliest outlines for what was fast becoming a new field of work, an industry that had never before been systematized or scaled: professional moderation.</p> <p>By fall 2006, working with data and video illustrations from the SQUAD, YouTube&rsquo;s lawyer, head of policy, and head of support created the company&rsquo;s first booklet of rules for the team, which, Mora-Blanco recalls, was only about six pages long. Like the one-pager that preceded it, copies of the booklet sat on the table and were constantly marked up, then updated with new bullet points every few weeks or so. No booklet could ever be complete, no policy definitive. This small team of improvisers had yet to grasp that they were helping to develop new global standards for free speech.</p> <p>In 2007, the SQUAD helped create YouTube&rsquo;s first clearly articulated rules for users. They barred depictions of <a href="https://en.wikipedia.org/wiki/Pornography">pornography</a>, criminal acts, gratuitous violence, threats, spam, and hate speech. But significant gaps in the guidelines remained &mdash; gaps that would challenge users as well as the moderators. The Google press office, which now handles YouTube communications, did not agree to an interview after multiple requests.</p> <p>As YouTube grew up, so did the videos uploaded to it: the platform became an increasingly important host for newsworthy video. 
For members of the SQUAD, none of whom had significant journalism experience, this sparked a series of new decisions.</p> </div><div class="m-snippet full-image"><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6330627/ericPetersen_poolOfBlood_RGB_fin.0.jpg" alt="Moderation head final" data-chorus-asset-id="6330627"></div><div class="m-snippet thin"> <p>In the summer of 2009, Iranian protesters poured into the streets, disputing the presidential victory of <a href="https://en.wikipedia.org/wiki/Mahmoud_Ahmadinejad">Mahmoud Ahmadinejad</a>. Dubbed the Green Movement, it was one of the most significant political events in the country&rsquo;s post-Revolutionary history. Mora-Blanco, soon to become a senior content specialist, and her team &mdash; now dubbed Policy and more than two-dozen strong &mdash; monitored the many protest clips being uploaded to YouTube.</p> <p>On June 20th, the team was confronted with a video depicting the death of a young woman named <a href="http://www.nytimes.com/2009/06/23/world/middleeast/23neda.html?_r=0">Neda Agha-Soltan</a>. The 26-year-old had been struck by a single bullet to the chest during demonstrations against pro-government forces and a shaky cell-phone video captured her horrific last moments: in it, blood pours from her eyes, pooling beneath her.</p> <p>Within hours of the video&rsquo;s upload, it became a focal point for Mora-Blanco and her team. As she recalls, the guidelines they&rsquo;d developed offered no clear directives regarding what constituted newsworthiness or what, in essence, constituted ethical journalism involving graphic content and the depiction of death. But she knew the video had political significance and was aware that their decision would contribute to its relevance.</p> <p>Mora-Blanco and her colleagues ultimately agreed to keep the video up. 
It was fueling important conversations about free speech and human rights on a global scale and was quickly turning into a viral symbol of the movement. It had tremendous political power. <em>They</em> had tremendous political power. And the clip was already available elsewhere, driving massive traffic to competing platforms.</p> <p>The Policy team worked quickly with the legal department to relax its gratuitous violence policy, on the fly creating a newsworthiness exemption. An engineer swiftly designed a button warning that the content contained graphic violence &mdash; a content violation under normal circumstances &mdash; and her team made the video available behind it, where it still sits today. Hundreds of thousands of individuals, in Iran and around the world, could witness the brutal death of a pro-democracy protester at the hands of government. The maneuvers that allowed the content to stand took less than a day.</p> <br> <hr class="roots_div3"> <br> <p class="pt-dropcap">Today, YouTube&rsquo;s <a href="https://www.youtube.com/yt/press/statistics.html">billion-plus</a> users upload 400 hours of video every minute. Every hour, Instagram users <a href="http://mashable.com/2013/09/16/facebook-photo-uploads/#etm3I0VEwEqr">generate</a> 146 million &#8220;likes&#8221; and Twitter users send 21 million tweets. Last August, Mark Zuckerberg <a href="https://www.facebook.com/zuck/posts/10102329188394581">posted on Facebook</a> that the site had passed &#8220;an important milestone: For the first time ever, one billion people used Facebook in a single day.&#8221;</p> <p>The moderators of these platforms &mdash; perched uneasily at the intersection of corporate profits, social responsibility, and human rights &mdash; have a powerful impact on free speech, government dissent, the shaping of social norms, user safety, and the meaning of privacy. What flagged content should be removed? Who decides what stays and why? What constitutes newsworthiness? Threat? Harm? 
When should law enforcement be involved?</p> <p>While public debates rage about government censorship and free speech on college campuses, customer content management constitutes the quiet transnational transfer of free-speech decisions to the private, corporately managed corners of the internet where people weigh competing values in hidden and proprietary ways. Moderation, explains Microsoft researcher Kate Crawford, is &#8220;a profoundly human decision-making process about what constitutes appropriate speech in the public domain.&#8221;</p> <aside class="float-right q-animated" id="scene3" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">Moderation is &#8220;a profoundly human decision-making process.&#8221;</q></aside> <p>During a panel at this year&rsquo;s South by Southwest, Monika Bickert, Facebook&rsquo;s head of global product policy, shared that Facebook users flag more than one million items of content for review every day. The stakes of moderation can be immense. As of last summer, social media platforms &mdash; predominantly Facebook &mdash; accounted for 43 percent of all traffic to major news sites. Nearly two-thirds of Facebook and Twitter users access their news through their feeds. Unchecked social media is routinely implicated in sectarian brutality, intimate partner violence, violent extremist recruitment, and episodes of mass bullying linked to suicides.</p> <p>Content flagged as violent &mdash; a beating or beheading &mdash; may be newsworthy. Content flagged as &#8220;pornographic&#8221; might be political in nature, or as innocent as breastfeeding or sunbathing. Content posted as comedy might get flagged for overt racism, anti-Semitism, misogyny, homophobia, or transphobia. Meanwhile content that may not explicitly violate rules is sometimes posted by users to perpetrate abuse or vendettas, terrorize political opponents, or out sex workers or trans people. 
Trolls and criminals exploit anonymity to dox, swat, extort, exploit rape, and, on some occasions, broadcast murder. Abusive men threaten spouses. Parents blackmail children. In Pakistan, the group Bytes for All &mdash; an organization that previously <a href="http://en.wikipedia.org/wiki/Bytes_for_All_v._Federation_of_Pakistan">sued the Pakistani government</a> for censoring YouTube videos &mdash; released three <a href="https://content.bytesforall.pk/sites/default/files/Tech_Driven_Violence_Against_Women.pdf">case studies</a> showing that social media and mobile tech cause real harm to women in the country by enabling rapists to blackmail victims (who may face imprisonment after being raped), and stoke sectarian violence.</p> <p>A prevailing narrative, as one story in <em>The</em> <em>Atlantic</em> put it, is that the current system of content moderation is &#8220;<a href="http://www.theatlantic.com/technology/archive/2014/08/the-way-we-report-harassment-on-the-social-web-is-broken/378730/">broken</a>.&#8221; For <a href="http://www.theatlantic.com/technology/archive/2014/10/the-unsafety-net-how-social-media-turned-against-women/381261/">users</a> who&rsquo;ve been harmed by online content, it is difficult to argue that &#8220;broken&#8221; isn&rsquo;t exactly the right word. But something must be whole before it can fall apart. Interviews with dozens of industry experts and insiders over 18 months revealed that moderation practices with global ramifications have been marginalized within major firms, undercapitalized, or even ignored. To an alarming degree, the early seat-of-the-pants approach to moderation policy persists today, hidden by an industry that largely refuses to participate in substantive public conversations or respond in detail to media inquiries.</p> <p>In an October 2014 <a href="http://www.wired.com/2014/10/content-moderation/"><em>Wired</em> story</a>, Adrian Chen documented the work of front line moderators operating in modern-day sweatshops. 
In Manila, Chen witnessed a secret &#8220;army of workers employed to soak up the worst of humanity in order to protect the rest of us.&#8221; Media coverage and researchers have compared their work to garbage collection, but the work they perform is critical to preserving any sense of decency and safety online, and literally saves lives &mdash; often those of children. For front-line moderators, these jobs can be crippling. Beth Medina, who runs a program called <a href="http://shiftwellness.org">SHIFT</a> (Supporting Heroes in Mental Health Foundational Training), which has provided resilience training to Internet Crimes Against Children teams since 2009, details the severe health costs of sustained exposure to toxic images: isolation, relational difficulties, burnout, depression, substance abuse, and anxiety. &#8220;There are inherent difficulties doing this kind of work,&#8221; Chen said, &#8220;because the material is so traumatic.&#8221;</p> <aside class="float-left q-animated" id="scene4" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">Even basic facts about the content moderation industry remain a mystery</q></aside> <p>But as hidden as that army is, the orders it follows are often even more opaque &mdash; crafted by an amalgam of venture capitalists, CEOs, policy, community, privacy and trust and safety managers, lawyers, and engineers working thousands of miles away. Sarah T. Roberts is an assistant professor of Information and Media Studies at Western University and author of the forthcoming <em>Behind the Screen: Digitally Laboring in Social Media&rsquo;s Shadow World</em>. 
She says &#8220;commercial content moderation&#8221; &mdash; a term she coined to denote the kind of professional, organized moderation featured in this article &mdash; is not a cohesive system, but a wild range of evolving practices spun up as needed, subject to different laws in different countries, and often woefully inadequate for the task at hand. These practices routinely collapse under the weight and complexity of new challenges &mdash; as the decisions moderators make engage ever more profound matters of legal and human rights, with outcomes that affect users, workers, and our digital public commons. As seen with <a href="http://www.wired.com/2015/10/how-black-lives-matter-uses-social-media-to-fight-the-power/">Black Lives Matter</a> or the <a href="http://www.nytimes.com/2012/02/19/books/review/how-an-egyptian-revolution-began-on-facebook.html?_r=1">Arab Spring</a>, whether online content stays or goes has the power to shape movements and revolutions, as well as the sweeping policy reforms and cultural shifts they spawn.</p> <p>Yet, even basic facts about the industry remain a mystery. Last month, in a piece titled &#8220;<a href="http://www.whoishostingthis.com/blog/2015/04/15/moderating-facebook/">Moderating Facebook: The Dark Side of Social Networking</a>,&#8221; Who Is Hosting This? suggested that one-third of &#8220;Facebook&rsquo;s entire workforce&#8221; is made up of moderators, a number Facebook disputes as an overestimate. Content moderation is fragmented into in-house departments, boutique firms, call centers, and micro-labor sites, all complemented by untold numbers of algorithmic and automated products.
Hemanshu Nigam, founder of SSP Blue, which advises companies in online safety, security, and privacy, <a href="http://www.wired.com/2014/10/content-moderation/">estimates</a> that the number of people working in moderation is &#8220;well over 100,000.&#8221; Others speculate that the number is many times that.</p> <br>  <p><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6330631/ericPetersen_coverEyes_RGB_fin.0.jpg" alt="Moderation face final" data-chorus-asset-id="6330631"></p>  <br> <p>At industry leaders such as Facebook, Pinterest, and YouTube, the moderation process is improving drastically; at other platforms it might as well be 2006. There, systems are tied to the hands-off approach of an earlier era, and continue to reflect distinct user and founder sensibilities. At 4chan, for example, users are instructed against violating US law but are also free to post virtually any type of content, as long as they do so on clearly defined boards. According to the site&rsquo;s owner Hiroyuki Nishimura, 4chan still relies heavily on a volunteer system of user-nominated &#8220;janitors.&#8221; These janitors, Nishimura said in a recent email exchange, play a critical role, &#8220;tasked with keeping the imageboards free of rule-breaking content.&#8221; As a janitor application page on <a href="https://www.4chan.org/janitorapp">the website</a> lays out, &#8220;Janitors are able to view the reports queue, delete posts, and submit ban and warn requests&#8221; for their assigned board. 4chan janitors, Nishimura said, use &#8220;chat channels,&#8221; to discuss content questions with supervising moderators, some paid, some unpaid. &#8220;If they can&rsquo;t decide,&#8221; he wrote, &#8220;they ask me, so that I&rsquo;m the last one in 4chan. And, in case I couldn&rsquo;t judge. I asked with lawyers.&#8221; Even after more than a decade, 4chan remains a site frequently populated by harassment and threats. 
Content has included everything from widespread distribution of <a href="https://www.washingtonpost.com/news/the-intersect/wp/2014/09/25/absolutely-everything-you-need-to-know-to-understand-4chan-the-internets-own-bogeyman/">nonconsensual porn</a>, to &#8220;<a href="http://knowyourmeme.com/memes/racists-on-4chan-niggerwalks">Niggerwalk</a>&#8221; memes, to <a href="http://www.gazettetimes.com/news/local/racist-messages-on-osu-chat-appear-to-have-been-organized/article_0cb481b2-824e-5699-a86b-747bbb17f849.html">racist mobs evoking Hitler</a> and threatening individual users. &#8220;People who try to do bad things use YouTube, Facebook, Twitter, and 4chan also,&#8221; Nishimura told us. &#8220;As long as such people live in the world, it happens. Right now, I don&rsquo;t know how to stop them and I really want to know. If there is a way to stop it, we definitely follow the way.&#8221;</p> <aside class="float-left q-animated" id="scene5" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">The details of moderation practices are routinely treated as trade secrets</q></aside> <p>The details of moderation practices are routinely hidden from public view, siloed within companies and treated as trade secrets when it comes to users and the public. Despite persistent calls from civil society advocates for transparency, social media companies do not publish details of their internal content moderation guidelines; no major platform has made such guidelines public. Very little is known about how platforms set their policies &mdash; current and former employees like Mora-Blanco and others we spoke to are constrained by nondisclosure agreements. Facebook officials Monika Bickert and Ellen Silver, head of Facebook&rsquo;s Community Support Team, responded to questions regarding their current moderation practices, and Pinterest made safety manager Charlotte Willner available for an interview. 
However, Facebook and Pinterest, along with Twitter, Reddit, and Google, all declined to provide copies of their past or current internal moderation policy guidelines. Twitter, Reddit, and Google also declined multiple interview requests before deadline. When asked to discuss Twitter&rsquo;s Trust and Safety teams&rsquo; operations, for example, a spokesperson wrote only:</p> <p>&#8220;Our rules are designed to allow our users to create and share a wide variety of content in an environment that is safe and secure for our users. When content is reported to us that violates our rules, which include a ban on violent threats and targeted abuse, we suspend those accounts. We evaluate and refine our policies based on input from users, while working with outside safety organizations to ensure that we have industry best practices in place.&#8221;</p> <p>Several motives drive secrecy, according to Crawford, Willner, and others. On the one hand, executives want to guard proprietary tech property and gain cover from liability. On the other, they want the flexibility to respond to nuanced, fast-moving situations, and they want to both protect employees who feel vulnerable to public scrutiny and protect the platform from users eager to game a policy made public. The obvious costs of keeping such a significant, quasi-governmental function under wraps rarely rank as a corporate concern. &#8220;How,&#8221; asks Roberts, the content moderation researcher, &#8220;do you or I effect change on moderation practices if they&rsquo;re treated as industrial secrets?&#8221;</p> <p>Dave Willner was at Facebook between 2008 and 2013, most of that time as head of content policy, and is now in charge of community policy at Airbnb. Last spring, we met with him in San Francisco. 
He wore a rumpled red henley and jeans, and he talked and walked fast as we made our way across the Mission.</p> <p>Members of the public, &#8220;as much as &lsquo;the public&rsquo; exists,&#8221; he said, hold one of three assumptions about moderation: moderation is conducted entirely by robots; moderation is mainly in the hands of law enforcement; or, for those who are actually aware of content managers, that content is assessed in a classroom-type setting by engaged professionals thoughtfully discussing every post. All three assumptions, he said, were wrong. And they&rsquo;re wrong, in great part, because they all miss the vital role that users themselves play in these systems.</p> <aside class="float-right q-animated" id="scene6" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">The public holds one of three assumptions about moderation. All three assumptions are wrong</q></aside> <p>By and large, users think of themselves as customers, or consumers. But platforms rely on users in three profound ways that alter that linear relationship: One, users produce content &mdash; our stories, photos, videos, comments, and interactions are core assets for the platforms we use; two, user behaviors &mdash; amassed, analyzed, packaged, and repackaged &mdash; are the source of advertising revenue; and three, users play a critical role in moderation, since almost every content moderation system depends on users flagging content and filing complaints, shaping the norms that support a platform&rsquo;s brand.
In other words, users are not so much customers as uncompensated digital laborers who perform dynamic and indispensable functions (despite being largely uninformed about the ways in which their labor is being used and capitalized).</p> <p>Anne Collier, the founder of <a href="http://icanhelpline.org">iCanHelpline</a>, a social media tool for schools, suggests that users have not yet recognized their collective power to fix the harms users have themselves created in social media. &#8220;They&rsquo;re called &lsquo;users&rsquo; for a reason,&#8221; she said, &#8220;and collectively still think and behave as passive consumers.&#8221; By obfuscating their role, she argues, the industry delays users&rsquo; recognition of their agency and power.</p> <p>Some of the larger companies &mdash; notably Facebook and Google &mdash; engage civil society through the creation of expert task forces and targeted subject matter working sessions dedicated to the problem of online harassment and crime. Organizations such as the Global Network Initiative, the Anti-Cyberhate Working Group, Facebook&rsquo;s Safety Advisory Board, and Twitter&rsquo;s new Trust and Safety Council are all examples of such multidisciplinary gatherings that bring together subject matter experts. However, these debates (unlike, say, congressional hearings) are shielded from public view, as both corporate and civil society participants remain nearly silent about the deliberations. Without greater transparency, users, consumers &mdash; the public at large &mdash; are ill-equipped to understand exactly how platforms work, how their own speech is being regulated, and why. This means that the most basic tools of accountability and governance &mdash; public and legal pressure &mdash; simply don&rsquo;t exist.</p> <br> <hr class="roots_div2"> <br> <p class="pt-dropcap">In the earliest &#8220;information wants to be free&#8221; days of the internet, objectives were lofty.
Online access was supposed to unleash positive and creative human potential, not provide a venue for sadists, child molesters, rapists, or racial supremacists. Yet this radically free internet quickly became a <a href="http://www.cybercrimejournal.com/editorialijccjuly2008.pdf">terrifying</a> home to heinous content and the users who posted and consumed it.</p> <p>This early phase, from its earliest inception in the 1960s until 2000, is what J.G. Palfrey, the former executive director of the Berkman Center for Internet and Society at Harvard University, calls <a href="http://cyber.law.harvard.edu/teaching/ilaw/2011/Summary_of_Four_Phases">the Open Internet</a>. It was in great part the result of 1996&rsquo;s Communications Decency Act&rsquo;s Section 230(c), known as the Good Samaritan provision, which absolved companies of liability for content shared on their services.</p> <p>Section 230 is widely cherished as the <a href="https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/08/27/a-bit-of-internet-history-or-how-two-members-of-congress-helped-create-a-trillion-or-so-dollars-of-value/">&#8220;most important law on the Internet,&#8221;</a> credited with making possible a &#8220;trillion or so dollars of value&#8221; according to David Post, legal scholar and fellow at the Center for Democracy and Technology. He <a href="https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/08/27/a-bit-of-internet-history-or-how-two-members-of-congress-helped-create-a-trillion-or-so-dollars-of-value/">calls</a> Section 230 a &#8220;rather remarkable provision.&#8221; It reads: &#8220;No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.&#8221; These 26 words put free speech decisions into private hands, effectively immunizing platforms from legal liability for all content that does not violate federal law, such as child pornography.
All the checks and balances that govern traditional media would not apply; with no libel risk there were, in effect, no rules.</p> <aside class="float-right q-animated" id="scene7" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">On the internet, the checks and balances that govern traditional media would not apply</q></aside> <p>Moderation&rsquo;s initially haphazard, laissez-faire culture has its roots here. Before companies understood how a <em>lack</em> of moderation could impede growth and degrade brands and community, moderators were volunteers, unpaid and virtually invisible. At AOL, moderation was managed by a <a href="http://en.wikipedia.org/wiki/AOL_Community_Leader_Program">Community Leader program</a> composed of users who had previously moderated chat rooms and reported &#8220;offensive&#8221; content. They were tasked with building &#8220;communities&#8221; in exchange for having their subscription fees waived.</p> <p>By 2000, companies had begun to take a more proactive approach. CompuServe, for instance, developed one of the earliest &#8220;acceptable use&#8221; policies barring racist speech, after a user with a Holocaust revisionist stance started filling a popular forum with antisemitic commentary. In 2001, eBay <a href="http://www.internetnews.com/ec-news/article.php/758221/eBay+Bans+Nazi+Hate+Group+Memorabilia.htm">banned</a> Nazi and Ku Klux Klan memorabilia, as well as other symbols of racial, religious, and ethnic hatred. Democratized countries joined forces to take down child pornography.
Palfrey calls this phase Access Denied, characterized by a concerted effort across the industry and the government to ban unappetizing content.</p> <p>Over the next decade, companies and governments honed these first-generation moderation tools, refining policies and collapsing the myth that cyberspace existed on a separate plane from real life, free from the realities of regulation, law, and policy.</p> <p>This was the era in which Mora-Blanco began her career at YouTube. Trying to bring order to a digital Wild West one video at a time was grueling. To safeguard other employees from seeing the disturbing images in the reported content they were charged with reviewing, her team was sequestered in corner offices; their rooms were kept dark and their computers were equipped with the largest screen protectors on the market.</p> <p>Members of the team quickly showed signs of stress &mdash; anxiety, drinking, trouble sleeping &mdash; and eventually managers brought in a therapist. As moderators described the images they saw each day, the therapist fell silent. The therapist, Mora-Blanco says, was &#8220;quite literally scared.&#8221;</p> <p>Around 2008, she recalls, YouTube expanded moderation to offices in Dublin, Ireland and Hyderabad, India. Suicide and child abuse don&rsquo;t follow a schedule, and employing moderators across different time zones enabled the company to provide around-the-clock support. But it also exposed a fundamental challenge of moderating content across cultures.</p> <aside class="float-left q-animated" id="scene8" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">He quietly began removing blackface videos. At the time, YouTube considered blackface non-malicious</q></aside> <p>Soon after the expansion, Mora-Blanco found herself debating a flagged video that appeared to show a group of students standing in a circle with two boys brawling in the middle. 
Moderation guidelines prohibited content showing minors engaged in fighting for entertainment and were written to be as globally applicable and translatable as possible. But interpretation of those guidelines, she discovered, could be surprisingly fluid. Moderators in India let the flagged video remain live because to them, the people in the video were not children, but adults. When the video was flagged again, it escalated to the Silicon Valley team, as most escalations reportedly still do. To Mora-Blanco, the video clearly violated YouTube&rsquo;s policy. &#8220;I didn&rsquo;t know how to more plainly describe what I was seeing,&#8221; she said. The video came down.</p> <p>Cultural perspective is a constant and pervasive issue, despite attempts to make &#8220;objective&#8221; rules. One former screener from a major video-sharing platform, who participated in <a href="http://ir.lib.uwo.ca/commpub/12/">Roberts&rsquo; research</a> and spoke with us on the condition of anonymity, recounted watching videos of what he characterized as extreme violence &mdash; murder and beatings &mdash; coming from Mexico.</p> <p>The screener was instructed to take down videos depicting drug-related violence in Mexico, while those of political violence in Syria and Russia were to remain live. This distinction angered him. Regardless of the country, people were being murdered in what were, in effect, all civil wars. &#8220;[B]asically,&#8221; he said, &#8220;our policies are meant to protect certain groups.&#8221; Before he left, he quietly began removing blackface videos he encountered.
At the time, YouTube considered blackface non-malicious.</p> </div><div class="m-snippet full-image"><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6330663/ericPetersen_moderators_RGB_fin.0.jpg" alt="Moderation moderators final" data-chorus-asset-id="6330663"></div><!-- ######## END SNIPPET ######## --><div class="m-snippet thin"> <p>When Dave Willner arrived at Facebook in 2008, the team there was working on its own &#8220;one-pager&#8221; of cursory, gut-check guidelines. &#8220;Child abuse, animal abuse, Hitler,&#8221; Willner recalls. &#8220;We were told to take down anything that makes you feel bad, that makes you feel bad in your stomach.&#8221; Willner had just moved to Silicon Valley to join his girlfriend, then Charlotte Carnevale, now Charlotte Willner, who had become head of Facebook&rsquo;s International Support Team. Over the next six years, as Facebook grew from less than 100 million users to well over a billion, the two worked side by side, developing and implementing the company&rsquo;s first formal moderation guidelines.</p> <aside class="float-right q-animated" id="scene9" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">&#8220;If Facebook is here to make the world more open, why would you delete anything?&#8221;</q></aside> <p>&#8220;We were called The Ninjas,&#8221; he said, &#8220;mapping the rabbit hole.&#8221; Like Mora-Blanco, Willner described how he, Charlotte, and their colleagues sometimes laughed about their work, so that they wouldn&rsquo;t cry. &#8220;To outsiders, that sounds demented,&#8221; he said.</p> <p>Just like at YouTube, the subjectivity of Facebook&rsquo;s moderation policy was glaring. &#8220;Yes, deleting Hitler feels awesome,&#8221; Willner recalls thinking. &#8220;But, <em>why</em> do we delete Hitler? 
If Facebook is here to make the world more open,&#8221; he asked himself, &#8220;why would you delete anything?&#8221; The job, he says, was &#8220;to figure out Facebook&rsquo;s central <em>why</em>.&#8221;</p> <p>For people like Dave and Charlotte Willner, the questions are as complex now as they were a decade ago. How do we understand the context of a picture? How do we assign language meaning? Breaking the code for context &mdash; nailing down the ineffable question of why one piece of content is acceptable but a slight variation breaks policy &mdash; remains the holy grail of moderation.</p> <p>In the absence of a perfectly automated system, Willner said, there are two kinds of human moderation. One set of decisions relies on observable qualities that involve minimal interpretation. For example, a moderator can see if a picture contains imagery of a naked toddler in a dimly lit hotel room &mdash; clearly a violation. In these cases, trained moderators can easily lean on detailed guidance manuals. The other method of decision, however, is more complex and interpretation is central. Recognizing bullying, for example, depends on understanding relationships and context, and moderators are not privy to either.</p> <p>&#8220;For instance, let&rsquo;s say that I wear a green dress to work one day and everyone makes fun of me for it,&#8221; explained Facebook&rsquo;s Monika Bickert when we talked. &#8220;Then when I get home, people have posted on my Facebook profile pictures of green frogs or posts saying, &lsquo;I love your dress!&rsquo; If I report those posts and photos to Facebook, it won&rsquo;t necessarily be clear to the content reviewers exactly what is going on. 
We try to keep that in mind when we write our policies, and we also try to make sure our content reviewers consider all relevant content when making a decision.&#8221;</p> <p>Created in 2009, Facebook&rsquo;s first &#8220;abuse standards&#8221; draft ran 15,000 words and was, Willner said, &#8220;an attempt to be all-encompassing.&#8221;</p> <p>While Willner is bound by an NDA to not discuss the document, a leaked Facebook cheat sheet used by freelance moderators made news in 2012. (Willner says the sheet included some minor misinterpretations and errors.) &#8220;Humor and cartoon humor is an exception for hate speech unless slur words are being used or humor is not evident,&#8221; read one rule. &#8220;Users may not describe sexual activity in writing, except when an attempt at humor or insult,&#8221; read another. Moderators were given explicit examples regarding the types of messages and photos described and told to review only the reported content, not unreported adjacent content. As in US law, content referring to &#8220;ordinary people&#8221; was treated differently than &#8220;public figures.&#8221; &#8220;Poaching of animals should be confirmed. Poaching of endangered animals should be escalated.&#8221; Things like &#8220;urine, feces, vomit, semen, pus and earwax,&#8221; were too graphic, but cartoon representations of feces and urine were allowed. Internal organs? No. Excessive blood? OK. &#8220;Blatant (obvious) depiction of camel toes and moose knuckles?&#8221; Prohibited.</p> <aside class="float-left q-animated" id="scene10" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">Internal organs? No. Excessive blood? OK.</q></aside> <p>The &#8220;Sex and Nudity&#8221; section perhaps best illustrates the regulations&rsquo; subjectivity and cultural biases. 
The cheat sheet barred naked children, women&rsquo;s nipples, and &#8220;butt cracks.&#8221; But such images are obviously not considered inappropriate in all settings, and the rules remain subject to cultural contexts.</p> <p>In 2012, for instance, when headlines were lauding social media for its role in catalyzing the Arab Spring, a Syrian protester named Dana Bakdounes posted a picture of herself with a sign advocating for women&rsquo;s equal rights. In the image Bakdounes is unveiled, wearing a tank top. A Facebook moderator <a href="http://www.nowlebanon.com/NewsArticleDetails.aspx?ID=455346&amp;MID=11&amp;PID=2">removed</a> the photo and blocked the administrators of an organization she supported, Uprising of Women in the Arab World. Her picture had been reported by conservatives who believed that images of women, heads uncovered and shoulders bare, constituted obscenity. Following public protest, Facebook quickly issued an apology and &#8220;worked to rectify the mistake.&#8221;</p> <p>The issue of female nudity and culturally bound definitions of obscenity remains thorny. Last spring, Facebook blocked a 1909 photograph of an indigenous woman with her breasts exposed, a violation of the company&rsquo;s ever-evolving rules about female toplessness. In response, the Brazilian Ministry of Culture announced its intention to sue the company. Several weeks later, protesters in the United States, part of the #SayHerName movement, confronted Facebook and Instagram over the removal of photographs in which they had used nudity to highlight the plight of black women victimized by the police.</p> <br> <hr class="roots_div1"> <br> <p class="pt-dropcap">In early March, at a packed panel at South by Southwest called &#8220;How Far Should We Go To Protect Hate Speech Online?&#8221; Jeffrey Rosen, now president of the National Constitution Center, was joined by Juniper Downs, head of public policy at Google / YouTube, and Facebook&rsquo;s Monika Bickert, among others.
At one point, midway through the panel, Rosen turned to Downs and Bickert, describing them as &#8220;the two most powerful women in the world when it comes to free speech.&#8221; The two demurred. Entire organizations, they suggested, make content decisions.</p> <p>Not exactly. Content management is rarely dealt with as a prioritized organizational concern &mdash; centrally bringing together legal, customer service, security, privacy, safety, marketing, branding, and personnel to create a unified approach. Rather, it is still usually shoehorned into structures never built for a task so complex.</p> <aside class="float-right q-animated" id="scene11" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">Moderation remains a relatively low-wage, low-status sector, often managed and staffed by women</q></aside> <p>The majority of industry insiders and experts we interviewed described moderation as siloed off from the rest of the organization. Few senior level decision-makers, they said &mdash; whether PR staff, lawyers, privacy and security experts, or brand and product managers &mdash; experience the material in question first-hand. One content moderator, on condition of anonymity, said her colleagues and supervisors never saw violent imagery because her job was to remove the most heinous items before they could. Instead, she was asked to describe it. &#8220;I watched people&rsquo;s faces turn green.&#8221;</p> <p>Joi Podgorny is former vice president at ModSquad, which provides content moderation to a range of marquee clients, from the State Department to the NFL. Now a digital media consultant, she says founders and developers not only resist seeing the toxic content, they resist even understanding the practice of moderation. 
Typically cast off as &#8220;customer-service,&#8221; moderation and related work remains a relatively low-wage, low-status sector, <a href="http://www.hiremorewomenintech.com">often managed and staffed by women</a>, which stands apart from the higher-status, higher-paid, more powerful sectors of engineering and finance, which are overwhelmingly male. &#8220;I need you to look at what my people are looking at on a regular basis,&#8221; she said. &#8220;I want you to go through my training and see this stuff [and] you&rsquo;re not going to think it&rsquo;s free speech. You&rsquo;re going to think it&rsquo;s damaging to culture, not only for our brand, but in general.&#8221;</p> <p>Brian Pontarelli, CEO of the moderation software company Inversoft, echoes the observation. Many companies, he told us, will not engage in robust moderation until it will cost them not to. &#8220;They sort of look at that as like, that&rsquo;s hard, and it&rsquo;s going to cost me a lot of money, and it&rsquo;s going to require a lot of work, and I don&rsquo;t really care unless it causes me to lose money,&#8221; he said. &#8220;Until that point, they can say to themselves that it&rsquo;s not hurting their revenue, people are still spending money with us, so why should we be doing it?&#8221;</p> <p>When senior executives do get involved, they tend to parachute in during moments of crisis. In the wake of last December&rsquo;s San Bernardino shootings, Eric E. Schmidt, executive chairman at Google, called on industry to build tools to reduce hate, harm, and friction in social media, &#8220;sort of like spell-checkers, but for hate and harassment.&#8221;</p> <p>Likewise the words of former Twitter CEO Dick Costolo in an <a href="http://www.theverge.com/2015/2/4/7982099/twitter-ceo-sent-memo-taking-personal-responsibility-for-the">internal memo</a>, published by <em>The Verge</em> in February 2015. 
&#8220;We lose core user after core user by not addressing simple trolling issues that they face every day,&#8221; he wrote, concluding, &#8220;We&rsquo;re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.&#8221; As if it were so simple.</p> <p>Mora-Blanco worked for five years at Twitter, where she said, &#8220;there was a really strong cultural appreciation for the Trust &amp; Safety team,&#8221; responsible for moderation related to harassment and abuse. She joined the company in 2010 and soon became manager of the User Safety Policy Team, where she developed policies concerning abuse, harassment, suicide, child sexual exploitation, and hate speech. By the end of her tenure in early 2015, she had moved to the Public Policy team, where she helped to launch several initiatives to prevent hate speech, harassment, and child sexual exploitation, and to promote free speech.</p> <aside class="float-left q-animated" id="scene12" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">For Reddit, leaving users largely to their own devices has translated into years of high-profile catastrophes</q></aside> <p>&#8220;It was embedded within the company, that the Trust &amp; Safety team were doing important work,&#8221; she said. Even so, she found, her team didn&rsquo;t have the tools to be effective. &#8220;The Trust &amp; Safety teams had lots of ideas on how to implement change,&#8221; she said, &#8220;but not the engineering support.&#8221; Prior to 2014, according to Mora-Blanco and former Twitter engineer Jacob Hoffman-Andrews, there was not a single engineer dedicated to addressing harassment at Twitter. 
During the past two years, the company has taken steps to reconcile the need to stem abuse with its commitment to the broadest interpretation of unmoderated free speech, publicly announcing the formation of an advisory council and introducing training sessions with law enforcement.</p> <p>According to a source close to the moderation process at Reddit, the climate there is far worse. Despite the site&rsquo;s size and influence &mdash; attracting <a href="https://www.reddit.com/r/AskReddit/about/traffic">some 4 to 5 million page views a day</a> &mdash; Reddit has a full-time staff of only around 75 people, leaving Redditors to largely police themselves, following a &#8220;<a href="https://www.reddit.com/wiki/reddiquette">reddiquette</a>&#8221; post that outlines what constitutes acceptable behavior. Leaving users almost entirely to their own devices has translated into years of high-profile <a href="http://www.slate.com/articles/technology/bitwise/2015/07/hate_speech_on_reddit_a_simple_novel_plan_to_quarantine_it.2.html">catastrophes</a> involving virtually every form of objectionable content &mdash; including entire toxic subreddits such as /r/jailbait, /r/creepshots, /r/teen_girls, /r/fatpeoplehate, /r/coontown, /r/niggerjailbait, /r/picsofdeadjailbait, and a whole <a href="http://www.buzzfeed.com/brendanklinkenberg/reddit-bans-racist-subreddits">category</a> for anti-black Reddits called the &#8220;Chimpire,&#8221; which flourished on the platform.</p> </div><div class="m-snippet full-image"><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6330671/ericPetersen_silos_RGB_fin.0.jpg" alt="Moderation silos " data-chorus-asset-id="6330671"></div><!-- ######## END SNIPPET ######## --><div class="m-snippet thin"> <p>In the wake of public outrage over CelebGate &mdash; the posting on Reddit of hacked private photos of more than 100 women celebrities &mdash; a <a 
href="https://docs.google.com/document/d/1QJBPZt0oa3UCkL6QGBHp6vITXs3f1bYcCyA5xIQcFZw/pub#h.gcq3lpkbcrfv">survey</a> of more than 16,000 Redditors found that 50 percent of those who wouldn&rsquo;t recommend Reddit cited &#8220;hateful or offensive content or community&#8221; as the reason why. After the survey was published in March 2015, the company announced, &#8220;we are seeing our open policies stifling free expression; people avoid participating for fear of their personal and family safety.&#8221; Alexis Ohanian, a Reddit co-founder, and other members of the Reddit team described the company&rsquo;s slow response to CelebGate as &#8220;a missed chance to be a leader&#8221; on the issue of moderating nonconsensual pornography. Two months later, Reddit published one of its first corporate anti-harassment moderation policies, which prohibited revenge porn and encouraged users to email moderators with concerns. Reddit includes a report feature; reports are routed anonymously to volunteer moderators, whose ability to act on posts is <a href="https://www.reddit.com/wiki/moderation">described</a> in detail on the site.</p> <aside class="float-right q-animated" id="scene13" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">Alexis Ohanian described Reddit&rsquo;s slow response to CelebGate as &#8220;a missed chance to be a leader.&#8221;</q></aside> <p>But the survey also laid bare the philosophical clash between the site&rsquo;s commitment to open expression, which fueled early growth, and the desire for limits among the users who may fuel future growth: 35 percent of complaints from &#8220;extremely dissatisfied users&#8221; were due to &#8220;heavy handed moderation and censorship.&#8221; The company continues to grapple with the paradox that to expand, Reddit (and other platforms) will likely have to regulate speech in ways that alienate a substantial percentage of their core customer base.</p> 
<p>When asked in the summer of 2015 about racist subreddits that remained in place despite the company&rsquo;s new policies, CEO Steve Huffman said the content is &#8220;<a href="http://www.buzzfeed.com/charliewarzel/nothing-changes-at-reddit#.ypO2dzgv8">offensive to many, but does not violate our current rules for banning</a>,&#8221; and <a href="http://www.buzzfeed.com/charliewarzel/nothing-changes-at-reddit#.rdaWGZrbp">clarified</a> that the changes were not &#8220;an official update to our policy.&#8221; By then, as <em>Slate</em> tech columnist David Auerbach <a href="http://www.slate.com/articles/technology/bitwise/2015/07/hate_speech_on_reddit_a_simple_novel_plan_to_quarantine_it.html">wrote</a>, Reddit was widely seen as &#8220;a cesspool of hate in dire need of repair.&#8221; Within weeks, Reddit <a href="http://boingboing.net/2015/08/05/reddits-new-content-policy-g.html">announced</a> the removal of a list of racist and other &#8220;communities that exist solely to annoy other Redditors, [and] prevent us from improving Reddit, and generally make Reddit worse for everyone else.&#8221;</p> <p>The sharp contrast between Facebook, with its robust and long-standing Safety Advisory Board, and Reddit, with its skeletal staff and dark pools of offensive content, offers a vivid illustration of how content moderation has evolved in isolated ways within individual corporate enclaves. 
The fragmentation means that content banned on one platform can simply pop up on another, and that trolling can be coordinated so that harassment and abuse that appear minor on a single platform are amplified by appearing simultaneously on multiple platforms.</p> <p>A writer who goes by Erica Munnings, and who asked that we not use her real name out of fear of retaliation, found herself on the receiving end of one such attack, which she describes as a &#8220;high-consequence game of whack-a-mole across multiple social media platforms for days and weeks.&#8221; After she wrote a feminist article that elicited conservative backlash, a five-day &#8220;Twitter-flogging&#8221; ensued. From there, the attacks moved to Facebook, YouTube, Reddit, and 4chan. Self-appointed task forces of Reddit and 4chan users published her address and flooded her professional organization with emails, demanding that her professional license be rescinded. She shut down comments on her YouTube videos. She logged off Twitter. On Facebook, the harassment was debilitating. To separate her personal and professional lives, she had set up a separate Facebook page for her business. However, user controls on such pages are thin, and her attackers found their way in. &#8220;I couldn&rsquo;t get one-star reviews removed or make the choice as a small business not to have &lsquo;Reviews&rsquo; on my page at all,&#8221; she said. &#8220;Policies like this open the floodgates of internet hate and tied my hands behind my back. There was no way I could report each and every attack across multiple social media platforms because they came at me so fast and in such high volume. But also, it became clear to me that when I did report, no one responded, so there really was no incentive to keep reporting. That became yet another costly time-sink on top of deleting comments, blocking people, and screen-grabbing everything for my own protection. 
Because no one would help me, I felt I had no choice but to wait it out, which cost me business, and income.&#8221;</p> <p>Several content moderation experts point to Pinterest as an industry leader. Microsoft&rsquo;s Tarleton Gillespie, author of the forthcoming <em>Free Speech in the Age of Platform</em>, says the company is likely doing the most of any social media company to bridge the divide between platform and user, private company and the public. The platform&rsquo;s moderation staff is well-funded and supported, and Pinterest is reportedly breaking ground in making its processes transparent to users. For example, <a href="https://about.pinterest.com/en/acceptable-use-policy">Pinterest posts visual examples</a> to illustrate the site&rsquo;s &#8220;acceptable use policy&#8221; in an effort to help users better understand the platform&rsquo;s content guidelines and the decisions moderators make to uphold them.</p> <aside class="float-left q-animated" id="scene14" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">In response to the film &#8216;Fifty Shades of Grey&#8217;, Pinterest was hustling to develop new BDSM standards</q></aside> <p>When we met with Charlotte Willner, now Pinterest&rsquo;s Safety Manager, the film <em>Fifty Shades of Grey</em> had just been released. She and her team were hustling to develop new BDSM standards. &#8220;We realized,&#8221; she later explained by email, &#8220;that we were going to need to figure out standards for rape and kidnapping fantasy content, which we hadn&rsquo;t seen a lot of but we began to see in connection with the general BDSM influx.&#8221;</p> <p>The calls were not easy, but it was clear that her team was making decisions on a remarkably granular level. One user was posting fetish comments about cooking Barbie-size women in a stew pot. Should this be allowed? Why not? The team had to decipher whether it was an actual threat. 
A full-size woman can&rsquo;t fit into a stew pot, the team figured, so the content was unlikely to cause real-world harm. Ultimately, they let the posts stand &mdash; that is, according to Willner, until the &#8220;stew pot guy&#8221; began uploading more explicit content that clearly violated Pinterest&rsquo;s terms and her team removed his account.</p> <p>On that same South by Southwest panel, Rosen expressed concerns about both corporate regulation of free speech and newly stringent European Union regulations such as the Right to be Forgotten. &#8220;Censorship rules that have a lower standard than the First Amendment are too easily abused by governments,&#8221; he said. Yet he offered some warm words of praise, saying that companies such as Google, Facebook, and Twitter are &#8220;trying to thread an incredibly delicate and difficult line&#8221; and &#8220;the balance that they&rsquo;re striking is a sensible one&#8221; given the pressures they face. He added, &#8220;Judges and regulators and even really smart, wonderful Google lawyers&#8221; are doing &#8220;about as good of a job with this unwelcome task as could be imagined.&#8221; It was a striking remark, given Google&rsquo;s <a href="https://twitter.com/elatable/status/706921780422660096">confirmation</a>, only days earlier, that it had hired controversial 4chan founder Christopher Poole. 
Prominent industry critic Shanley Kane <a href="https://modelviewculture.com/news/google-hires-4chan-founder-sends-huge-fuck-you-to-marginalized-users">penned</a> an outraged post describing Poole as responsible &#8220;for a decade + of inculcating one of the most vile, violent and harmful &lsquo;communities&rsquo; on the Internet.&#8221; For marginalized groups, she wrote, the decision &#8220;sends not only a &lsquo;bad message,&rsquo; but a giant &lsquo;fuck you.&rsquo;&#8221;</p> <br> <hr class="roots_div4"> <br> <p class="pt-dropcap">Several sources told us that industry insiders, frustrated by their isolation, have begun moving independently of their employers. Dave Willner, and others who spoke on the condition of anonymity, told us that 20 or 30 people working in moderation have started meeting occasionally for dinner in San Francisco to talk informally about their work.</p> <p>One front-line expert, Jennifer Puckett, has worked in moderation for more than 15 years and now serves as social reputation manager heading up the Digital Safety Team at <a href="http://www.emoderation.com/the-company/">Emoderation</a>. She believes that moderation, as an industry, is maturing. On the one hand, the human expertise is growing, making that tableful of young college grads at YouTube seem quaint. &#8220;People are forming their college studies around these types of jobs,&#8221; Mora-Blanco says. &#8220;The industry has PhD candidates in internship roles.&#8221; On the other hand, efforts to automate moderation have also advanced.</p> <p>Growing numbers of researchers are developing technology tooled to understand user-generated content, with companies hawking unique and proprietary analytics and algorithms that attempt to measure meaning and predict behavior. 
<a href="http://www.datanami.com/2016/02/09/big-data-and-the-race-to-be-president/">Adaptive Listening technologies</a>, which are increasingly capable of providing sophisticated analytics of online conversations, are being developed to assess user context and intent.</p> <aside class="float-right q-animated" id="scene15" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">&#8220;If the moderation system is a factory, this approach moves what is essentially piecework toward assembly.&#8221;</q></aside> <p>In May 2014, Dave Willner and Dan Kelmenson, a software engineer at Facebook, patented a <a href="http://www.google.com.gh/patents/US20080256602">3D-modeling technology</a> for content moderation designed around a system that resembles an industrial assembly line. &#8220;It&rsquo;s tearing the problem [of huge volume] into pieces to make chunks more comprehensible,&#8221; Willner says. First, the model identifies a set of malicious groups &mdash; say neo-Nazis, child pornographers, or rape promoters. The model then identifies users who associate with those groups through their online interactions. Next, the model searches for other groups associated with those users and analyzes those groups &#8220;based on occurrences of keywords associated with the type of malicious activity and manual verification by experts.&#8221; This way, companies can identify additional or emerging malicious online activity. &#8220;If the moderation system is a factory, this approach moves what is essentially piecework toward assembly,&#8221; he said. 
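&#8220;And you can measure how good the system is,&#8221; he said.</p> <p>The expansion loop the patent describes can be sketched in a few lines. This is a loose illustration of the idea only, not Facebook&rsquo;s implementation; the group names, membership data, and keyword threshold below are all invented:</p>

```python
# Sketch of the patent's assembly-line idea: start from verified malicious
# groups, find their members, then flag other groups those members frequent
# that also trip a keyword filter. Flags would go to humans for verification.

MEMBERS = {  # invented toy data: group -> set of member user IDs
    "group_a": {"u1", "u2"},
    "group_b": {"u2", "u3"},
    "group_c": {"u4"},
}
KEYWORD_HITS = {"group_a": 9, "group_b": 5, "group_c": 0}  # keyword matches per group

def expand(seed_groups, threshold=3):
    """One pass: return new suspect groups associated with the seed groups."""
    suspect_users = set().union(*(MEMBERS[g] for g in seed_groups))
    flagged = set()
    for group, users in MEMBERS.items():
        if group in seed_groups:
            continue  # already known malicious
        # shares members with the seeds AND has enough malicious-keyword hits
        if users & suspect_users and KEYWORD_HITS.get(group, 0) >= threshold:
            flagged.add(group)
    return flagged

print(expand({"group_a"}))  # {'group_b'}: shares user u2 and clears the threshold
```

<p>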
&#8220;And you can measure how good the system is.&#8221;</p> <p>Working with Microsoft, Hany Farid, the chair of Computer Science at Dartmouth College, developed something called PhotoDNA, which allows tech companies to automatically detect, remove, and report the presence of child exploitation images on their networks.</p> <p>When we spoke, Farid described the initial resistance to PhotoDNA, how industry insiders said the problem of child exploitation was too hard to solve. <em>Yes, it&rsquo;s horrible</em>, he recalled everyone &mdash; executives, attorneys, engineers &mdash; saying, <em>but there&rsquo;s so much content</em>. Civil liberties groups, he recalls, also balked at an automated moderation system.</p> <p>Talk to those people today, he said, and they&rsquo;ll tell you how successful PhotoDNA has been. PhotoDNA works by processing an image every two milliseconds and is highly accurate. First, he explains, known child exploitation images are identified by NCMEC. Then PhotoDNA extracts from each image a numeric signature that is unique to that image, &#8220;like your human DNA is to you.&#8221; Whenever an image is uploaded, whether to Facebook or Tumblr or Twitter, and so on, he says, &#8220;its photoDNA is extracted and compared to our known CE images. Matches are automatically detected by a computer and reported to NCMEC for a follow-up investigation.&#8221; He describes photoDNA as &#8220;agnostic,&#8221; saying, &#8220;There is nothing specific to child exploitation in the technology.&#8221; He could just as easily be looking for pictures of cats. The tricky part of content moderation is not identifying content, he says, but the &#8220;long, hard conversations&#8221; &mdash; conducted by humans, not machines &mdash; necessary for reaching the tough decisions about what constitutes personal or political speech. 
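In code terms, the matching step Farid describes is a signature lookup against a database of known images. A minimal sketch of that flow, using an ordinary cryptographic hash as a stand-in for PhotoDNA&rsquo;s proprietary robust signature (unlike the real system, this stand-in only matches byte-identical files, and all names here are illustrative):</p>

```python
import hashlib

# Illustrative stand-in for a PhotoDNA-style signature database.
# Real PhotoDNA computes a perceptual signature that survives resizing and
# re-encoding; SHA-256 here only matches byte-identical images.
known_signatures = set()

def signature(image_bytes: bytes) -> str:
    """Extract a fixed-length signature unique to this image."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_known_image(image_bytes: bytes) -> None:
    """Add a vetted image (e.g., one identified by NCMEC) to the database."""
    known_signatures.add(signature(image_bytes))

def check_upload(image_bytes: bytes) -> bool:
    """On upload, report whether the image matches the known-image database."""
    return signature(image_bytes) in known_signatures

register_known_image(b"known-bad-image-bytes")
print(check_upload(b"known-bad-image-bytes"))  # True: flag for follow-up
print(check_upload(b"unrelated-image-bytes"))  # False: let it through
```

<p>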
Farid is now working with tech companies and nonprofit groups to develop similar technology that will identify extremism and terrorist threats online &mdash; whether expressed in speech, image, video, or audio. He expects the program to launch within months, not years.</p> <p>While tech solutions are rapidly emerging, the cultural ones are slower in coming. Emily Laidlaw, assistant professor of law at University of Calgary and author of <em>Regulating Speech in Cyberspace</em>, calls for &#8220;a clarification of the applicability of existing laws.&#8221; For starters, she says, Section 230 of the 1996 Communications Decency Act needs immediate overhaul. Companies, she argues, should no longer be entirely absolved of liability for the content they host.</p>  <p><img src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6330679/ericPetersen_factory_RGB_fin.0.jpg" alt="Moderation factory" data-chorus-asset-id="6330679"></p>  <p>For more than five years, Harvard&rsquo;s Berkman Center for Internet and Society has pushed for industry-wide best practices. Their recommendations include corporate transparency, consistency, clarity, and a mechanism for customer recourse. Other civil society advocates <a href="http://www.ohchr.org/Documents/Issues/Business/A-HRC-17-31_AEV.pdf">call</a> for corporate grievance mechanisms that are accessible and transparent in accordance with international human rights law, or call on corporations to engage in public dialogue with such active stakeholders as the Anti Defamation League, the Digital Rights Foundation, and the National Network to End Domestic Violence. &#8220;What we do is informed by external conversations that we have,&#8221; explained Facebook&rsquo;s Bickert in early March. 
&#8220;Every day, we are in conversations with groups around the world&hellip; So, while we are responsible for overseeing these policies and managing them, it is really a global conversation.&#8221;</p> <aside class="float-left q-animated" id="scene16" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle1" data-depth="1.0"></span> <q class="center style1">&#8220;We have one set of content standards for the entire world.&#8221;</q></aside> <p>Some large established companies like YouTube, Pinterest, Emoderation, Facebook, and Twitter are beginning to make headway in improving moderation practices, using both tech and human solutions.</p> <p>In the eight months since CEO Dick Costolo&rsquo;s departure, Twitter has reached out to users and advocates in an effort to be more responsive. In early February, Patricia Cartes, Twitter&rsquo;s head of global policy outreach, <a href="https://blog.twitter.com/2016/announcing-the-twitter-trust-safety-council">announced the formation of a Trust &amp; Safety Council</a>, a multidisciplinary advisory board made up of 40 organizations and experts. Still, members of bodies such as this one, and Facebook&rsquo;s Safety Advisory Board, established in 2009, operate under NDAs, meaning the conversations taking place there remain behind the scenes. In 2010, Google founded Google Ideas (now Jigsaw), a multidisciplinary think tank to tackle challenges associated with defending against global security threats and protecting vulnerable populations. Meanwhile, a virtual cottage industry of internet helplines the world over has cropped up, such as Zo&euml; Quinn&rsquo;s Crash Override Network (an online abuse helpline), helplines for schools, and &#8220;revenge porn&#8221; hotlines in the United States and the United Kingdom.</p> <p>As these policy debates move forward in private, platforms are also taking steps to protect their front-line moderation workers. 
Medina told us that representatives from Google, Microsoft, Yahoo, Facebook, and Kik have recently attended SHIFT&rsquo;s resilience trainings. Facebook&rsquo;s hundreds of moderators &mdash; located in Menlo Park, Austin, Dublin, and Hyderabad &mdash; says Silver, head of Facebook&rsquo;s Community Support Team, now receive regular training, detailed manuals, counseling and other support. The company brings on experts or trains specialists in specific areas such as suicide, human trafficking, child exploitation, violence against women, and terrorism. Moderator decisions on reported pieces of content are regularly audited, she says, to ensure that moderation is consistent and accurately reflects guidelines. &#8220;We have one set of content standards for the entire world,&#8221; explains Facebook&rsquo;s Bickert. &#8220;This helps our community be truly global, because it allows people to share content across borders. At the same time, maintaining one set of standards is challenging because people around the world may have different ideas about what is appropriate to share on the Internet.&#8221;</p> <p>Many US-based companies, however, continue to consign their moderators to the margins, shipping their platforms&rsquo; digital waste to &#8220;special economic zones&#8221; in the Global South. As Roberts recounts in her paper &#8220;Digital Refuse,&#8221; these toxic images trace the same routes used to export the industrial world&rsquo;s physical waste &mdash; hospital hazardous refuse, dirty adult diapers, and old model computers. Without visible consequences here and largely unseen, companies dump child abuse and pornography, crush porn, animal cruelty, acts of terror, and executions &mdash; images so extreme those paid to view them won&rsquo;t even describe them in words to their loved ones &mdash; onto people desperate for work. 
And there they sit in crowded rooms at call centers, or alone, working off-site behind their screens and facing cyber-reality as it is being created. Meanwhile, each new startup begins the process, essentially, all over again.</p> <aside class="float-right q-animated" id="scene17" data-scalar-x="20" data-scalar-y="20"> <span class="layer left sstyle2" data-depth="1.0"></span> <q class="center style2">Meanwhile, each new startup begins the process, essentially, all over again</q></aside> <p>And even industry leaders continue to rely on their users to report and flag unacceptable content. This reliance, <a href="http://boundary2.org/2015/12/16/dewandre-on-pascal/">says</a> Nicole Dewandre, an <a href="https://ec.europa.eu/digital-single-market/en/nicole-dewandre-biography%E2%80%A6.can">advisor</a> to the European Commission on Information and Communication Technology policy, is &#8220;an exhausting laboring activity that does not deliver on accountability.&#8221; The principle of <a href="http://www.demos.co.uk/project/counter-speech/">counter speech</a>, by which users are expected to actively contradict hateful or harmful messages, firmly puts the burdens and risks of action on users, not on the money-making platforms that depend on their content. Perhaps for that reason, counter speech has become an industry buzzword. Susan Benesch, founder of the <a href="http://dangerousspeech.org/">Dangerous Speech Project</a>, which tracks inflammatory speech and its effects, notes that while counter speech is an important tool, &#8220;it cannot be the sole solution. We don&rsquo;t yet understand enough about when it&rsquo;s effective, and can often put the counter-speaker at risk of attack, online or offline.&#8221;</p> <p>Sarah T. Roberts, the researcher, cautions that &#8220;we can&rsquo;t lose sight of the baseline.&#8221; The platforms, she notes, &#8220;are soliciting content. It&rsquo;s their solicitation that invites people to upload content. 
They create the outlet and the impetus.&#8221; If moderators are, in Dave Willner&rsquo;s estimation, platforms&rsquo; emotional laborers, users are, in the words of labor researcher <a href="http://www.amazon.com/s/ref=dp_byline_sr_book_1?ie=UTF8&amp;text=Kylie+Jarrett&amp;search-alias=books&amp;field-author=Kylie+Jarrett&amp;sort=relevancerank">Kylie Jarrett</a>, their &#8220;<a href="http://www.amazon.com/Feminism-Labour-Digital-Media-Cyberculture/dp/1138855790">digital housewives</a>&#8221; &mdash; volunteering their time and adding value to the system while remaining unpaid and invisible, compensated only through affective benefits. The question, now, is how can the public leverage the power inherent in this role? Astra Taylor, author of <a href="http://www.nytimes.com/2014/07/20/books/review/the-peoples-platform-by-astra-taylor.html?_r=0"><em>The People&rsquo;s Platform</em></a>, says, &#8220;I&rsquo;m struck by the fact that we use these civic-minded metaphors, calling Google Books a &lsquo;library&rsquo; or Twitter a &lsquo;town square&rsquo; &mdash; or even calling social media &lsquo;social&rsquo; &mdash; but real public options are off the table, at least in the United States.&#8221; Though users are responsible for providing and policing vast quantities of digital content, she points out, we then &#8220;hand the digital commons over to private corporations at our own peril.&#8221;</p> <br> <hr class="roots_div3"> <br> <p class="pt-dropcap">On January 19th, 2015, Mora-Blanco packed up her desk and left Twitter.</p> <p>From her last position in Public Policy, she no longer had to screen violent images day after day and, after months of laying groundwork, she told us, &#8220;things had started to shift and move along lines I&rsquo;d always wanted.&#8221; She and her coworkers &mdash; from Legal, Safety, Support, Public Policy, Content &#8220;all the pieces of the puzzle all across the platform&#8221; &mdash; had started meeting weekly to tackle online 
harassment and threat. &#8220;We got to sit in the room together,&#8221; she said, &#8220;and everyone felt really connected and that we could move forward together.&#8221;</p> <p>But she&rsquo;d finally earned enough money to take a year to train as a hair stylist. Just as important, she felt she could finally leave without betraying her colleagues, especially those working on the front lines.</p> <p>Today, Mora-Blanco is studying cosmetology at the Cinta Aveda Institute in downtown San Francisco. She loves her new work, she told us, because it allows her the freedom to innovate. &#8220;I love to think big, but I am really passionate about being creative.&#8221;</p> <p>&#8220;What are you doing here?&#8221; her instructor sometimes asks. &#8220;You could go out and make so much money.&#8221;</p> <p>Occasionally she takes an online safety consulting gig &mdash; she hasn&rsquo;t completely ruled out jumping back in. &#8220;But things would need to change,&#8221; she said. &#8220;I&rsquo;d like to be a part of that change, but I need to be convinced that the attitude toward this advocacy work is going to be treated seriously, that the industry is going to be different.&#8221;</p> </div><div class="m-snippet">  <p>* *</p>  <em><p><strong>Correction:</strong> A previous draft of this piece suggested Google acquired YouTube in October of 2005. In fact, that acquisition took place in October of 2006.</p></em>  <p>* *</p> <p><strong>Catherine Buni</strong> is a writer and editor whose work has appeared in the <em>New York Times</em>, theAtlantic.com, the <em>Los Angeles Review of Books</em>, and <em>Orion</em>, among others.</p> <p><strong>Soraya Chemaly</strong> is a media critic, writer and activist whose work focuses on free speech and the role of gender in culture. 
Her work appears regularly in <em>Salon</em>, the <em>Guardian</em>, the <em>Huffington Post</em>, CNN.com,<em> Ms.</em>, and other outlets, and she is a frequent media commentator on related topics.</p>  </div><!-- ######## END SNIPPET ######## --><div class="m-snippet">  <p>* *</p> <p>Design by <a href="http://www.theverge.com/users/Happicamp">James Bareham</a></p> <p>Product by <a href="http://www.theverge.com/users/Frank%20Bi">Frank Bi</a></p> <p>Edited by <a href="https://www.thenation.com/authors/esther-kaplan/">Esther Kaplan</a> and <a href="http://www.theverge.com/users/mvzelenko">Michael Zelenko</a></p>  </div>
						]]>
									</content>
			
					</entry>
	</feed>
