The DOJ is threading a needle here between respecting the expansive protections of Section 230 on one side and preserving fields like antitrust law on the other. Speaking about algorithmic recommendations, the DOJ says, “I don’t know if we would call it the platform’s own speech but the platform’s own conduct.” I’m very curious to hear more about the overlap of “speech” and “conduct” here now that a distinction has been drawn!
T.C. Sottek

Executive Editor
We’ve heard this a few times already: the court referring to an algorithm operating on “neutral terms”. Justice Gorsuch just poked a big hole in that idea by noting “some [algorithms] might even favor one point of view over another,” for example, by privileging revenue motives.
Indeed, there is no such thing as a “neutral” algorithm. They are all built by human beings with various and competing motivations and intents.
This is one of the more exciting Supreme Court oral argument sessions on tech in a while!
And yeah, that’s the whole point of this case: what does Section 230 really protect? Does it have limits? What are the limits? Still, it’s helpful that Justice Sotomayor said it out loud: “let’s assume we’re looking for a line, because it’s clear from our questions that we are.”
She also added that the court is “uncomfortable” with a line that says “merely recommending something without adornment” could constitute defamation.
The court is now getting into the weeds of what it means to “post” something. DOJ is doing a decent job of unpacking this, but it’s still more nuanced than the conversation suggests so far. The question is really: if someone posts something to YouTube, and YouTube knows what it is explicitly, and refuses to take it down, is YouTube also “posting” it?
I’m calling this The Poster’s Dilemma.
Justice Kavanaugh, questioning Malcolm Stewart from the DOJ:
I don’t know how many employment decisions are made in the country every day, but I know that hundreds of millions, billions responses of inquiries on the internet are made every day. … under your view, every one of those would be the possibility of a lawsuit.
Supreme Court justices are notoriously clever about the questions they ask, and they’ll often ask questions during oral arguments that belie their true feelings about the subject matter. But, so far today, each member of the court who has asked questions has seemed pretty skeptical about the idea that Section 230 should be obliterated because of YouTube’s thumbnails.
We’ll see what happens, of course, but today’s arguments have been exceptional in the sense that the government seems to be employing more wisdom than we usually see when interrogating technology. (Adi says she’s reserving judgment until she sees how weird their questions to Google are.)
Kavanaugh notes that the court received a lot of concern in amicus curiae briefs that meddling with Section 230 would have devastating effects on the economy — something he says the court needs to take quite seriously. Plaintiffs didn’t have a great answer for this, vaguely noting that lots of things would still be protected if they get their way.
Plaintiffs:
Most recommendations just aren’t actionable. There is no cause of action for telling someone to look at a book that has something defamatory in it.
The Supreme Court is likely to face battles over AI search in the future, and today we’ve gotten our first signal that it’s already on the court’s radar. Justice Gorsuch noted that AI is already capable of creating new things based on the wealth of content already available on the internet.
I love this honesty from Justice Kagan, who is expressing extreme skepticism on the suggestion that the court ought to strip protection from companies operating on the internet.
“Isn’t that something for Congress, not the court?”
Correction: The line in question was from Justice Kagan, not Justice Sotomayor, as originally attributed.
Justice Sotomayor asks:
If you write an algorithm for someone that in its structure ensures the discrimination between people — a dating app, for example. … Someone says “I’m going to create an algorithm that inherently discriminates against people.” You would say that internet provider is discriminating, correct?
Apparently this stumped the plaintiffs, who declared this hypothetical too abstract to respond to. Strange, considering the YouTube algorithm is probably more complicated than this scenario.