<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Simon Hurtz | The Verge</title>
	<subtitle type="text">The Verge is about technology and how it makes us feel. Founded in 2011, we offer our audience everything from breaking news to reviews to award-winning features and investigations, on our site, in video, and in podcasts.</subtitle>

	<updated>2023-09-15T17:44:59+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.theverge.com/author/simon-hurtz" />
	<id>https://www.theverge.com/authors/simon-hurtz/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.theverge.com/authors/simon-hurtz/rss" />

	<icon>https://platform.theverge.com/wp-content/uploads/sites/2/2025/01/verge-rss-large_80b47e.png?w=150&amp;h=150&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Simon Hurtz</name>
			</author>
			
			<title type="html"><![CDATA[X continues to throttle links to competitors]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/9/15/23875251/x-twitter-links-throttling-facebook-instagram-threads" />
			<id>https://www.theverge.com/2023/9/15/23875251/x-twitter-links-throttling-facebook-instagram-threads</id>
			<updated>2023-09-15T13:44:59-04:00</updated>
			<published>2023-09-15T13:44:59-04:00</published>
			<category scheme="https://www.theverge.com" term="Elon Musk" /><category scheme="https://www.theverge.com" term="Facebook" /><category scheme="https://www.theverge.com" term="Instagram" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Tech" /><category scheme="https://www.theverge.com" term="Twitter - X" />
							<summary type="html"><![CDATA[The platform formerly known as Twitter still takes a surprisingly long time to load a few bytes of data &#8212; at least, if that data leads to a platform that Elon Musk might consider a competitor. An analysis by The Markup has found that X makes users wait about two and a half seconds to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Twitter is called X now, and the logo still looks weird to anyone who was used to the blue bird. | Illustration: The Verge" data-portal-copyright="Illustration: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24805887/STK160_X_Twitter_005.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Twitter is called X now, and the logo still looks weird to anyone who was used to the blue bird. | Illustration: The Verge	</figcaption>
</figure>
<p>The platform formerly known as Twitter still takes a surprisingly long time to load a few bytes of data &mdash; at least, if that data leads to a platform that Elon Musk might consider a competitor. <a href="https://themarkup.org/investigations/2023/09/15/twitter-is-still-throttling-competitors-links-check-for-yourself">An analysis by <em>The Markup</em></a> has found that X makes users wait about two and a half seconds to access links to Facebook, Instagram, Threads, Bluesky, and Substack.</p>

<p>If you click a link on X, you get redirected via X&rsquo;s link shortener, t.co. Most sites load within 30 to 40 milliseconds. Meta&rsquo;s platforms, Bluesky, and Substack take more than 60 times longer. <em>The Markup</em> has built a tool that lets you check load times for links to any domain yourself.</p>
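
<p>For readers who want to verify the numbers themselves, the measurement is simple in principle: request a t.co link, follow the redirect, and time how long it takes to resolve. Below is a minimal Python sketch of that idea &mdash; an illustration using the <code>requests</code> library, not <em>The Markup</em>&rsquo;s actual methodology; the t.co URL is a placeholder you would swap for a real shortened link copied from a post on X.</p>

<pre><code># Rough sketch: time how long a t.co link takes to resolve to its destination.
# Not The Markup's methodology; the URL below is a placeholder.
import time
import requests

def time_redirect(short_url: str) -> float:
    """Return the seconds elapsed until the redirect chain resolves."""
    start = time.perf_counter()
    requests.get(short_url, allow_redirects=True, timeout=30)
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = time_redirect("https://t.co/EXAMPLE")  # placeholder link
    print(f"Resolved in {elapsed * 1000:.0f} ms")
</code></pre>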

<p>This isn&rsquo;t the first time X has throttled competitors. In August, <a href="https://www.washingtonpost.com/technology/2023/08/15/twitter-x-links-delayed/">an analysis by <em>The Washington Post</em></a> revealed that <a href="https://www.theverge.com/2023/8/15/23833314/x-twitter-throttling-traffic-competitors-news-sites-elon-musk">X put a delay in place</a> for both other social media sites and news organizations that Musk had <a href="https://twitter.com/elonmusk/status/1689257914362445824">publicly</a> and <a href="https://twitter.com/elonmusk/status/1642395451209940993">frequently attacked</a>. The affected domains included <em>The New York Times</em> and <em>Reuters</em>. Back then, users had to wait five seconds to get redirected.</p>

<p>After that analysis was published, X reversed the throttling for some sites; the <em>Times</em> and <em>Reuters</em>, at least, aren&rsquo;t affected by the current slowdown. It&rsquo;s unclear if X ever stopped throttling competing social platforms; Musk <a href="https://www.theverge.com/2023/8/15/23832701/elon-musk-lies-mark-zuckerberg-fight-creepy">dislikes Meta and Mark Zuckerberg</a> even <a href="https://www.theverge.com/2023/4/21/23692449/elon-musk-twitter-government-funded-media-labels-removed">more than the media</a>.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“Writers cannot build sustainable businesses if their connection to their audience depends on unreliable platforms.”</p></blockquote></figure>
<p>In 2017, <a href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/page-load-time-statistics/">a Google study found</a> that slow load times can harm the companies that run the sites. As page load times went from one to three seconds, the probability of users abandoning the link increased by 32 percent. Substack told <em>The Markup</em>: &ldquo;Writers cannot build sustainable businesses if their connection to their audience depends on unreliable platforms that have proven they are willing to make changes that are hostile to the people who use them.&rdquo;</p>

<p>For Musk, that&rsquo;s a common tactic. X temporarily disabled likes, replies, and reposts if a post <a href="https://www.theverge.com/2023/4/7/23674185/substack-twitter-retweet-like-disabled-block">contained any links to Substack</a>. Users were also <a href="https://www.theverge.com/2022/12/18/23515221/twitter-bans-links-instagram-mastodon-competitors">briefly banned from linking</a> to Mastodon, Instagram, Facebook, and other competitors.</p>

<p><em>The Markup</em> spoke to Max von Thun of the Open Markets Institute, who researches antitrust and competition issues. He considers the slowdown &ldquo;an anti-competitive tactic designed to undermine X&rsquo;s rivals and keep users on its platform.&rdquo;</p>

<p>The behavior would likely be illegal for a powerful &ldquo;gatekeeper&rdquo; under <a href="https://www.theverge.com/2023/9/6/23859570/european-union-commission-digital-markets-act-gatekeepers-apple-google-meta-microsoft">the Digital Markets Act</a>, a regulation the EU put in place to ensure fair competition. So-called gatekeepers have to comply with all the provisions by March 2024. X is too small to qualify for gatekeeper status, but von Thun still thinks that regulators should launch antitrust investigations into its link throttling. &ldquo;If proven, then those authorities could fine Twitter and force it to end the practices in question,&rdquo; he told <em>The Markup</em>.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Simon Hurtz</name>
			</author>
			
			<title type="html"><![CDATA[OpenAI wants GPT-4 to solve the content moderation dilemma]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/8/15/23833406/openai-gpt-4-content-moderation-ai-meta" />
			<id>https://www.theverge.com/2023/8/15/23833406/openai-gpt-4-content-moderation-ai-meta</id>
			<updated>2023-08-15T16:50:15-04:00</updated>
			<published>2023-08-15T16:50:15-04:00</published>
			<category scheme="https://www.theverge.com" term="AI" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="News" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Speech" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[OpenAI is convinced that its technology can help solve one of tech&#8217;s hardest problems: content moderation at scale. GPT-4 could replace tens of thousands of human moderators while being nearly as accurate and more consistent, claims OpenAI. If that&#8217;s true, the most toxic and mentally taxing tasks in tech could be outsourced to machines. In [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Illustration: The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24390406/STK149_AI_03.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>OpenAI is convinced that its technology can help solve one of tech&rsquo;s hardest problems: <a href="https://www.techdirt.com/2019/11/20/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well/">content moderation at scale</a>. GPT-4 could replace tens of thousands of human moderators while being nearly as accurate and more consistent, claims OpenAI. If that&rsquo;s true, the <a href="https://www.theverge.com/2019/12/16/21021005/google-youtube-moderators-ptsd-accenture-violent-disturbing-content-interviews-video">most toxic and mentally taxing tasks</a> in tech could be outsourced to machines.</p>

<p><a href="https://openai.com/blog/using-gpt-4-for-content-moderation">In a blog post</a>, OpenAI claims that it has already been using GPT-4 for developing and refining its own content policies, labeling content, and making decisions. &ldquo;I want to see more people operating their trust and safety, and moderation [in] this way,&rdquo; OpenAI head of safety systems Lilian Weng <a href="https://www.semafor.com/article/08/15/2023/can-chatgpt-become-a-content-moderator#room-for-disagreement">told <em>Semafor</em></a>. &ldquo;This is a really good step forward in how we use AI to solve real world issues in a way that&rsquo;s beneficial to society.&rdquo;</p>

<p>OpenAI sees three major benefits compared to traditional approaches to content moderation. First, it claims people interpret policies differently, while machines are consistent in their judgments. Those guidelines can be as long as a book and change constantly. While it takes humans a lot of training to learn and adapt, OpenAI argues large language models could implement new policies instantly.</p>

<p>Second, GPT-4 can allegedly help develop a new policy within hours. The process of drafting, labeling, gathering feedback, and refining usually takes weeks or several months. Third, OpenAI mentions the well-being of the workers who are continually exposed to harmful content, such as videos of child abuse or torture.</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>OpenAI might help with a problem that its own technology has exacerbated</p></blockquote></figure>
<p>After nearly two decades of modern social media and even more years of online communities, content moderation is still one of the most difficult challenges for online platforms. Meta, Google, and TikTok rely on armies of moderators who have to look through dreadful and often traumatizing content. Most of them are located in developing countries with lower wages, <a href="https://www.theverge.com/interface/2019/11/1/20941952/cognizant-content-moderation-restructuring-facebook-twitter-google">work for outsourcing firms</a>, and struggle with their mental health while receiving only minimal psychological support.</p>

<p>However, OpenAI itself <a href="https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots">heavily relies on clickworkers and human work</a>. Thousands of people, many of them in African countries such as Kenya, annotate and label content. The texts can be disturbing, the job is stressful, <a href="https://time.com/6247678/openai-chatgpt-kenya-workers/">and the pay is poor</a>.</p>

<p>While OpenAI touts its approach as new and revolutionary, AI has been used <a href="https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation">for content moderation for years</a>. Mark Zuckerberg&rsquo;s vision of a perfect automated system hasn&rsquo;t quite panned out yet, but Meta uses algorithms to moderate the vast majority of harmful and illegal content. Platforms like YouTube and TikTok use similar systems, so OpenAI&rsquo;s technology might appeal to smaller companies that don&rsquo;t have the resources to develop their own.</p>

<p>Every platform openly admits that perfect content moderation at scale is impossible. Both humans and machines make mistakes, and while the percentage might be low, there are still millions of harmful posts that slip through and as many pieces of harmless content that get hidden or deleted.</p>

<p>In particular, the gray area of misleading, wrong, and aggressive content that isn&rsquo;t necessarily illegal poses a great challenge for automated systems. Even human experts struggle to label such posts, and machines frequently get it wrong. The same applies to satire or images and videos that <a href="https://www.theverge.com/2022/9/15/23353593/meta-facebook-oversight-board-decisions-automated-image-takedowns-extremist-groups">document crimes or police brutality</a>.</p>

<p>In the end, OpenAI might help to tackle a problem that its own technology has exacerbated. Generative AI such as ChatGPT or the company&rsquo;s image creator, DALL-E, makes it much easier to create misinformation at scale and spread it on social media. Although OpenAI has promised to make ChatGPT more truthful, <a href="https://www.theverge.com/2023/8/15/23825056/chatgpt-and-bard-still-willingly-spit-out-lies">GPT-4 still willingly produces</a> news-related falsehoods and misinformation.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Simon Hurtz</name>
			</author>
			
			<title type="html"><![CDATA[Meta accused of ignoring reports on dangerous content]]></title>
			<link rel="alternate" type="text/html" href="https://www.theverge.com/2023/8/7/23820422/meta-internews-trusted-partners-human-rights-report" />
			<id>https://www.theverge.com/2023/8/7/23820422/meta-internews-trusted-partners-human-rights-report</id>
			<updated>2023-08-07T11:38:45-04:00</updated>
			<published>2023-08-07T11:38:45-04:00</published>
			<category scheme="https://www.theverge.com" term="Facebook" /><category scheme="https://www.theverge.com" term="Instagram" /><category scheme="https://www.theverge.com" term="Meta" /><category scheme="https://www.theverge.com" term="Policy" /><category scheme="https://www.theverge.com" term="Security" /><category scheme="https://www.theverge.com" term="Tech" />
							<summary type="html"><![CDATA[According to Meta, its Trusted Partner program &#8220;is a key part of our efforts to improve our policies, enforcement processes, and products, to help keep users safe on our platforms.&#8221; According to some trusted partners, though, Meta neglects its flagship initiative &#8212; leaving it significantly underresourced, understaffed, and allegedly prone to &#8220;operational failures&#8221; as a [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Photo by Grayson Blackmon / The Verge" data-has-syndication-rights="1" src="https://platform.theverge.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/23057878/VRG_VRP_Metaverse_Thumbnail_textless.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p><a href="https://transparency.fb.com/policies/improving/bringing-local-context/">According to Meta</a>, its Trusted Partner program &ldquo;is a key part of our efforts to improve our policies, enforcement processes, and products, to help keep users safe on our platforms.&rdquo; According to some trusted partners, though, Meta neglects its flagship initiative &mdash; leaving it significantly underresourced, understaffed, and allegedly prone to &ldquo;operational failures&rdquo; as a result.</p>

<p>That&rsquo;s one of the core accusations of a report that the media nonprofit <em>Internews</em> <a href="https://internews.org/new-study-key-meta-system-for-reporting-harmful-content-has-serious-flaws/">published on Wednesday</a>. The Trusted Partner program consists of 465 global civil society and human rights groups. It aims to provide them with a designated channel to alert Facebook and Instagram of dangerous and harmful content such as death threats, hacked accounts, and incitement to violence. Meta promises to prioritize those reports and escalate them quickly.</p>

<p>But <em>Internews</em> claims that some participating organizations experience the same treatment as regular users: they wait months for replies to a report, get ignored, and are alienated by poor and impersonal communication. According to the report, response times are erratic, and in some cases, Meta doesn&rsquo;t react at all or offer any explanation. That allegedly applies even to highly time-sensitive content, like serious threats and calls for violence.</p>

<p>&ldquo;Two months plus. And in our emails we tell them that the situation is urgent, people are dying,&rdquo; one anonymous trusted partner said. &ldquo;The political situation is very sensitive, and it needs to be dealt with very urgently. And then it is months without an answer.&rdquo;</p>
<figure class="wp-block-pullquote alignleft"><blockquote><p>“Two months plus. And in our emails we tell them that the situation is urgent, people are dying.”</p></blockquote></figure>
<p>For the report, <em>Internews</em> gathered assessments from 23 trusted partners across every major global region and added its own observations as a partner of the program. Most organizations reported similar experiences, but there was one exception: Ukraine, where responsiveness was far above average. Ukrainian partners can expect a response within 72 hours, while in Ethiopia, reports relating to the Tigray War can go unanswered for several months.</p>

<p>The report&rsquo;s conclusions fit with previous leaks and reports on Meta&rsquo;s global priorities. Trusted partners are particularly important outside of North America and Europe, where users can&rsquo;t rely on content being constantly checked by AI and thousands of human Meta moderators. Yet, two years ago, former Facebook employee Frances Haugen <a href="https://www.theverge.com/22740969/facebook-files-papers-frances-haugen-whistleblower-civic-integrity">published internal documents</a> that revealed how <a href="https://www.theverge.com/22743753/facebook-tier-list-countries-leaked-documents-content-moderation">little Meta cares about the global south</a>. In countries such as Ethiopia, Syria, Sri Lanka, Morocco, and Myanmar, Facebook and Instagram <a href="https://restofworld.org/2021/why-facebook-keeps-failing-in-ethiopia/">fail to stop extremists from inciting violence</a>. The alleged failure of the Trusted Partner program may be part of the reason why.</p>

<p>In May 2023, nearly 50 human rights and tech accountability groups <a href="https://deadlyby.design/letter/">signed an open letter</a> to Mark Zuckerberg and Nick Clegg after Meareg Amare, a Tigrayan professor, was doxxed in a racist attack on Facebook and <a href="https://www.businessinsider.com/facebooks-local-partners-say-hate-speech-stays-on-the-platform-2023-4">murdered shortly afterward</a> in Ethiopia. His son, Abrham, tried in vain to get Facebook to take the posts down. &ldquo;By failing to invest in and deploy adequate safety improvements to your software or employ sufficient content moderators, Meta is fanning the flames of hatred, and contributing to thousands of deaths in Ethiopia,&rdquo; the letter reads.</p>

<p>&ldquo;Trusted flagger programs are vital to user safety, but Meta&rsquo;s partners are deeply frustrated with how the program has been run,&rdquo; said Rafiq Copeland, platform accountability advisor at <em>Internews</em> and author of the report. Copeland thinks that more investment is needed to ensure Meta&rsquo;s platforms are safe for users. &ldquo;People&rsquo;s lives depend on it.&rdquo;</p>

<p>The review was originally set up as a collaboration with Meta. In 2022, the company withdrew its participation. Meta claims that &ldquo;the reporting issues of the small sample of Trusted Partners who contributed to the report do not, in our view, represent a full or accurate picture of the program.&rdquo; <em>Internews</em> says that it requested Meta&rsquo;s assistance with notifying its partners of the review, but Meta declined.</p>

<p>Meta does not reveal its average and target response times or the number of employees that work on the program full time. A spokesperson declined to comment on the report.</p>
						]]>
									</content>
			
					</entry>
	</feed>
