The Supreme Court hears arguments for two cases that could reshape the future of the internet
We’re now in the final stretch. Eric Schnapper, who also presented arguments for the plaintiffs yesterday in Gonzalez v. Google, focused a lot on the potential of recommendation functions to cause harm. In today’s case, plaintiffs allege Twitter recommendations generally helped ISIS recruit more fighters. Schnapper concedes this doesn’t have anything to do with a specific attack — a standard Twitter is claiming plaintiffs would have to meet.
We’re deep in the dictionary definitions of things with Justice Gorsuch, who is exploring personal identity. I will admit I was not expecting this line of inquiry, but if you’ll excuse me, I’m off to rewatch I Heart Huckabees.
“Let’s say a known terrorist walks into a bank,” says Justice Kagan, before getting into the weeds of a “knowing your customer” hypothetical.
The government expresses skepticism that the ability to tweet is as valuable as having a place to store money. As a former power user of Twitter, I would have to agree.
(Incidentally, Elon Musk has expressed plans to turn Twitter into an actual bank, so this distinction might not work in future lawsuits.)
Edwin Kneedler, on behalf of the government, faces a theoretical question on pager companies providing services to terrorists. It’s not clear how this act of technological paleontology from Thomas will provide a comparison to Twitter. But it’s interesting because, while pagers could be used directly to plan an attack, pager companies probably knew far less about their customers’ beliefs and actions than a modern social platform does.
That concludes Twitter’s arguments. Now it’s the US government’s turn.
Barrett suggests to Waxman that it’s obvious what ISIS is about and what it will do in the future: “If you know ISIS is using [Twitter], you know ISIS is going to be doing bad things… what work does turning your focus on the specific act do? Aiding ISIS is aiding the commission of specific acts in the future.”
Building on previous questioning from Justice Kagan, it seems like the court is trying to get Twitter to draw a line about how willfully dumb it can play about certain accounts and users on the platform.
Receiving laughs from the gallery, Gorsuch presses Waxman on whether Twitter has read the law incorrectly. “I can’t help but wonder if some of the struggle you’ve had this morning … comes from your reading of the text.” Waxman has been arguing that they have to support an act, not just the person behind it — “that seems a pretty abstract way to read the statute.”
Waxman tries to explain, but Gorsuch isn’t impressed: “Maybe we oughta just stop.”
Justice Kagan alludes to the Elon Musk school of Twitter: is Twitter liable if its policy is “let a thousand flowers bloom?” Waxman still says no. “If they said, we don’t want our platforms to be used to support terrorist groups or terrorist acts, but they don’t do anything to enforce it,” he claims, they’re not aiding and abetting.
Kagan seems extremely unconvinced. “You’re helping by providing your service to those people with the explicit knowledge that those people are using it to advance terrorism.”
Sotomayor:
“There is a focus on how much your platform helped ISIS, and less on how you actually helped them, and there is a difference between the two things. … [Your argument is that] in a neutral business setting, using something that is otherwise not criminal, a platform, to communicate with people, and you’re doing it not by as in the bank situation or pharmaceutical situation, to help this particular person to commit a crime, but in a general business situation that others are coming to you and you can’t find them ahead of time, that that doesn’t constitute substantiality.”
But that doesn’t get Twitter off the hook here, as the court is questioning Waxman about whether the company’s conduct meets the Halberstam standard for liability. Here are the basics of that standard:
(1) “the party whom the defendant aids must perform a wrongful act that causes an injury”; (2) “the defendant must be generally aware of his role as part of an overall illegal or tortious activity at the time that he provides the assistance”; and (3) “the defendant must knowingly and substantially assist the principal violation.”
Twitter’s counsel, Seth Waxman, is up first in oral arguments. If his first performance is a preview of what’s to come, we’re going to hear a lot of weird metaphors about criminal activity. Waxman is clearly angling toward the idea that Twitter had to specifically know what criminals on the platform were going to do to be liable for their actions.
Justice Thomas and Waxman opened with dueling hypotheticals about, respectively, giving your friend a gun and breaking a padlock to steal your neighbor’s sheep. Sure.
It’s entirely fair that the Supreme Court doesn’t want to broadcast video from its hearing room, but the American people deserve a serious upgrade to its audio livestream capabilities. Unless you’re an expert on the court it’s often difficult to tell who is speaking at any given time, because there’s no live transcript or indication of the current speaker.
Twitter’s moderation practices are a subject of today’s case before the Supreme Court, and things have only gotten sketchier and more chaotic since a new Chief Twit took over the company.
Musk may have inherited this legal mess when he bought Twitter for $44 billion, but now he literally owns it. It will be interesting to see if the company’s new reputation under Musk will give the plaintiffs an edge in arguments.
The justices will reconvene at 10AM ET and oral arguments will begin shortly after in Twitter v. Taamneh. If you need to catch up on the action in Gonzalez v. Google, check out yesterday’s coverage from Adi.
You can listen along live to today’s arguments here as we post updates throughout the hearing:
That’s the last word from the plaintiffs in rebuttal, and it’s a statement worth chewing on. There’s a lot to think about here, but the court is now adjourned until tomorrow’s arguments in Twitter v. Taamneh. Stay tuned for more coverage, and thanks for joining us!
We agree, Google. Check us out on the web: www.theverge.com.
This might be too obvious to point out, but companies operating across national and international borders often say that fragmented laws create substantial hardship. That’s why California and the EU have been so instrumental in leading the way on internet regulation; it can be easier for platform giants to simply harmonize their rules everywhere around the strictest regulation in one place, rather than forking their platforms and policies to comply with a bunch of localities.
Google and every other big platform does not want to be subject to an even greater patchwork of laws, which could be an outcome of 230 being weakened.
Google has come out swinging, pushing back fiercely against the court for “incorrect” premises in its questioning. One colorful moment that just happened: Blatt offering a hypothetical about what would happen if 230 gets overturned.
According to Google, it’s a land of extremes. We’ll either live in The Truman Show, where everyone moderates everything into oblivion, or a horror show, where nobody moderates anything. These are not hyperbolic examples — it’s exactly the question at the heart of 230 protections.
Google’s attorney Lisa S. Blatt is now up in the final hour of arguments, and she’s already getting some pointed questions — off the bat, who is really responsible for one of YouTube’s recommendations? The court suggests it’s not the user, who merely uploaded content and is not responsible for how the overall system works.
DOJ pointed out during arguments that when a computer is doing things there is “no live human being” making a choice, at least on an individual basis. And that’s true even when large teams of people are making distributed and collective decisions about how the system behaves.
In the case of Twitter, however, we now have an example of what happens when one man explicitly turns the knobs in a certain direction.
Yes, Elon Musk created a special system for showing you all his tweets first
The DOJ is threading a needle here between respecting the expansive possibilities of Section 230 on one side and fields like antitrust law on the other. When speaking about algorithmic recommendations, DOJ says “I don’t know if we would call it the platform’s own speech but the platform’s own conduct.” I’m very curious to hear more about the overlap of “speech” and “conduct” here since a distinction has been drawn!
We’ve heard this a few times already: the court referring to an algorithm operating on “neutral terms.” Justice Gorsuch just poked a big hole in that idea by noting “some [algorithms] might even favor one point of view over another,” for example, by privileging revenue motives.
Indeed, there is no such thing as a “neutral” algorithm. They are all built by human beings with various and competing motivations and intents.
This is one of the more exciting Supreme Court oral argument sessions on tech in a while!
And yeah, that’s the whole point of this case: what does Section 230 really protect? Does it have limits? What are the limits? Still, it’s helpful that Justice Sotomayor said it out loud: “let’s assume we’re looking for a line, because it’s clear from our questions that we are.”
She also added that the court is “uncomfortable” with a line that says “merely recommending something without adornment” could constitute defamation.
The court is now getting into the weeds of what it means to “post” something. DOJ is doing a decent job of unpacking this, but it’s still more nuanced than the conversation suggests so far. The question is really: if someone posts something to YouTube, and YouTube knows what it is explicitly, and refuses to take it down, is YouTube also “posting” it?
I’m calling this The Poster’s Dilemma.
Justice Kavanaugh, questioning Malcolm Stewart from the DOJ:
I don’t know how many employment decisions are made in the country every day, but I know that hundreds of millions, billions of responses to inquiries on the internet are made every day. … under your view, every one of those would be the possibility of a lawsuit.
