The Supreme Court has maintained that internet platforms such as social media are not responsible for users' content and actions, even when that content or conduct leads to criminal acts or death.
On Thursday, the Supreme Court ruled that Twitter could not be held liable for aiding and abetting an ISIS-executed terrorist attack, leaving the immunity granted by Section 230 intact.
The original lawsuit was filed in California by the family of Nawras Alassaf, who was among 39 people killed in a January 2017 ISIS attack in Istanbul. It was brought under the Antiterrorism Act, which allows U.S. nationals to sue anyone who "aids and abets, by knowingly providing substantial assistance," an act of terrorism.
The plaintiffs' logic was that because Twitter and others knew ISIS used their services, the companies did not work hard enough to remove that content from view. Twitter countered that general awareness of terrorist activity is not the same as knowing about "specific accounts that substantially assisted" in the attack, which it could have acted on directly.
Section 230 refers to the part of the 1996 Communications Decency Act that immunizes websites and services from liability for content generated by their users, assuming a "good faith" effort is made to moderate illegal content.

In short, platforms like Facebook, YouTube, and Twitter cannot be treated as the publisher of content posted there by someone else.
One problem with Section 230 is that some parties believe it is too protective of service providers. In addition, the law is deliberately broad, which has led to accusations that it is being overused.
Part of the dispute is over what counts as objectionable content that may be removed. When political speech is taken down, the removal itself can be read as political commentary or, worse, censorship.
While Section 230 remains the status quo, growing bipartisan support for changes suggests it may not remain so.
A full repeal of Section 230 is unlikely; tweaking is more plausible. However, while support for change is bipartisan, each party wants different, incompatible revisions.
Comments
This is a good ruling. It's impossible for social media platforms to police all content, particularly when it comes to "objectionable" content.
The people who wanted to abolish or rewrite Section 230 were so zealous in their fervor to prevent "suppression" of certain views that they did not seem to understand that if Section 230 were substantially altered, sites would simply eliminate any opportunity for users to comment at all, for fear of litigation.
I'm headed out the door, so I won't get lost in the details of this decision or too far into the procedural posture of the case as it reached the Supreme Court. But I did want to point out that Section 230 had nothing to do with this decision. The Court found that the plaintiffs hadn't sufficiently made out a claim under the Justice Against Sponsors of Terrorism Act, which was the basis for what remained of the suit.
Had the Court ruled the other way, the case would likely have gone back to the district court, where Section 230 might have become an issue. As it was, the district court didn't reach the Section 230 issue because it didn't need to. And when the Ninth Circuit reversed the district court, it didn't address the Section 230 issue because the district court hadn't done so. Without the petition to the Supreme Court (and cert grant), the district court - after being reversed on the JASTA claim - would likely have addressed the Section 230 issue. Its decision on that might then have been appealed to the Ninth Circuit before possibly being appealed to the Supreme Court.
At any rate, this Supreme Court decision tells us nothing about how Section 230 might protect Twitter and others in similar situations.
I think it's reasonable to shield social platforms from liability for terrorist activities. They're not the ones posting these messages.

The individual posting is the responsible party, full stop.