The Supreme Court has shown unusual self-awareness in addressing Section 230, with Justices suggesting that since they are far from internet experts, deciding the future of the web should probably be left to Congress.
The Supreme Court is this week hearing two cases that concern Section 230, which gives platforms immunity from liability for their users' content. In Gonzalez v. Google, the case concerns Nohemi Gonzalez, who died in an ISIS attack that the suit claims was aided by Google allowing inciting videos to be published on YouTube.
Section 230 is a very brief law that immunizes websites and services against liability for content generated by their users, provided a "good faith" effort is made to moderate illegal content. The crux of the suit is that since Google didn't filter out ISIS videos, it is responsible for Gonzalez's death.
Beyond the specifics of this case, the hearing potentially has a wider impact on all issues surrounding Section 230, a point that Supreme Court Justice Elena Kagan addressed.
"Every other industry has to internalize the costs of misconduct. Why is it that the tech industry gets a pass?" Kagan said. "It's a little bit unclear."
"On the other hand, I mean, we're a court we really don't know about these things," she continued. "These are not like the nine greatest experts on the internet."
However, Justice Kagan also noted that she doesn't "have to accept 'the sky is falling' stuff [in order] to accept... there is a lot of uncertainty about going the way [internet companies] would have us go, in part because of the difficulty of drawing lines in this area."
"And just because of the fact that once we go with you, all of a sudden, we're finding that Google isn't protected, and maybe Congress should want that system," said Justice Kagan. "But isn't that something for Congress to do? Not the court."
Arguing for the Gonzalez plaintiffs, attorney Eric Schnapper said that YouTube's algorithms direct users to certain videos. "And the underlying substantive claim is encouraging people to go look at ISIS videos would be aiding and abetting ISIS," he said.
Justice Kagan pointed out that "every time anybody looks at anything on the Internet, there is an algorithm involved... whether it's a Google search engine or whether it's this YouTube site..."
She took that point further, asking whether the fact that "everything involves ways of organizing and prioritizing material" means the plaintiff's case sends "us down the road such that 230 really can't mean anything at all?"
"[We're] focusing on the recommendation function," said Schnapper, "that they are affirmatively recommending or suggesting ISIS content... You turn on your computer and the... the computers at YouTube send you stuff you didn't ask them for."
"I'm afraid I'm completely confused by whatever argument you're making at the present time," responded Justice Alito.
Google on the future of the internet
Google attorney Lisa Blatt was asked whether sites are encouraged to moderate without having to fear punishment for problems. Blatt argued that if sites "take down anything that anyone might object to, and then you basically have — and I'm speaking figuratively and not literally — but you have the Truman Show versus a horror show."
"You have only anodyne, you know, cartoon-like stuff that's very happy talk," she said, "otherwise you just have garbage on the Internet."
The hearing also dwelt on the issue of the thumbnail images that YouTube displays as posters for videos.
"They are intended to encourage the viewer to click on them and go see a video," said Schnapper. "It's the use of algorithms to generate these — these thumbnails that's at issue, and the thumbnails, in turn, involve a — involve content created by the defendant."
So the argument is partly that it is YouTube that makes the poster thumbnails, not the uploader or content creator. (If a YouTuber does not provide their own poster image, YouTube takes a frame from the video.)
Schnapper appeared to argue further that the choice of which video's thumbnail is shown to a user is also part of the case. The Supreme Court Justices were not convinced, with Justice Kagan commenting that the videos shown are "based upon what the algorithm suggests the user is interested in."
What happens next
US Deputy Solicitor General Malcolm Stewart argued in the hearing that the Supreme Court should direct the lower courts to consider the case further. However, the Supreme Court was not convinced.
"You're asking us right now to make a very precise predictive judgment that 'Don't worry, that it's really not going to be that bad,'" Justice Kavanaugh replied. "I don't know that that's at all the case. And I don't know how we can assess that in any meaningful way."
The hearing lasted almost three hours. A similar hearing is to be held on Wednesday concerning Twitter v. Taamneh.
Comments
I'm glad the Supreme Court finally realized, at least in this instance, that they don't know anything about the internet. As for Congress being better suited, I highly doubt that as well. Congress, made up of politicians, makes many decisions based on who's paying them the most money. Going after Google for something posted on YouTube is hilarious when there are purported news agencies who constantly make things up and have rarely been held accountable. It takes a lawsuit to stop them, and even then that is subject to our judicial system, which is suspect at best.
It's nice to see a court at least give lip service to the notion that it's the job of Congress to create or amend laws, and not that of the Court.
You understand that a single Justice's remarks during oral arguments do not represent the Supreme Court collectively admitting anything, right? Not only does a comment by Justice Kagan not represent an opinion of the whole Court, it also doesn't represent what her own opinion may ultimately be. So the headline "Supreme Court admits" is grossly incorrect and misleading.
They should recategorize the concept of the algorithm as "promotion" to build a stronger case that gets at the core of the issue. Forcing a change to algorithms will not make the internet anodyne; the bad stuff is still out there. The tech companies already police and remove content we all find objectionable, like child porn, so why not at a minimum steer people away from ISIS content? Beheadings seem as bad to me.
After reading Mindf*ck by Christopher Wylie (the Cambridge Analytica whistleblower), it's clear that this is not a free speech issue anymore. These companies knowingly spread false information. The creation of content is a free speech issue, but once Google and Facebook share it and spread it through their algorithms, they should be held to account.