The U.S. Department of Justice has unveiled new draft legislation that would reform Section 230, a key law that shields technology companies from liability for illicit content their users post.
In June, the DOJ announced a set of reforms that would hold technology companies more legally responsible for the content that their users posted. That followed President Donald Trump signing an executive order that sought to weaken social media platform protections.
The legislation proposed on Wednesday would narrow the criteria that online platforms must meet to earn Section 230 liability protections. One of the main reforms carves out exceptions to immunity in cases such as child sexual abuse. The legislation would need to be passed by Congress.
It also includes a "Bad Samaritan" provision that could deny immunity to platforms that don't take action on content that violates federal law, or that fail to report illegal material. That's similar to a variant of the EARN IT Act that passed the Judiciary Committee in July.
The proposal also states that nothing in the statute should prevent enforcement under separate laws, such as antitrust regulations.
Section 230 of the Communications Decency Act shields online platforms from liability for the content their users post. It also protects them from liability when they moderate or remove harmful content in good faith.
Those protections allowed technology platforms and the internet to flourish in their nascent years, but they have since come under scrutiny. Trump signed the executive order in May, for example, after Twitter fact-checked one of his tweets.
That scrutiny has bled into other areas of technology oversight. At what was supposed to be an antitrust hearing in July, many members of the U.S. Senate Judiciary Committee criticized companies like Facebook for alleged "censorship" of political views.
The Justice Department, specifically, has been looking into Section 230 reforms for the better part of a year. Attorney General William Barr said in December 2019 that the department was "thinking critically" about Section 230, and later invited experts to debate the law in February.
Here are the DOJ’s proposed changes.
I think the changes the Administration is trying to make are bad. But I don’t think they’ve worded the legislation such that it actually does make those changes.
Seems as though what the current administration apparently wants would lead to MORE censorship, not less. Companies will err on the side of caution and simply remove anything that might have the appearance of being untrue, since they'd now be responsible for what the public posts on their platforms. Geesh, that sounds totally unmanageable, and the only bulletproof solution would be blocking all comments.
Sigh - politicians get it wrong, yet again.
I don't think it's unreasonable to allow platforms to moderate content. Personally I'd rather they not censor anything and let me decide after a warning (and being able to disable "warnings" would be better still - the amount of ideological "warnings" is getting nuts on YouTube).
What's unreasonable is the completely oblique and inconsistent way they moderate content, often with guidelines that are non-existent or unpublished.
Sunlight is the best disinfectant. I think if they want to keep their 230 safe harbor, companies should be required to:
1) Publicly publish their specific rules. No more "we reserve the right for any reason". You want to hide behind "any reason"? No problem - no more 230 for you.
2) Publicly publish all moderation decisions, citing specifically how the rules from #1 led to the moderated outcome.
It won't be perfect, but it should lead to a LOT more consistency. Twitter is rife with left-wing violence, but if any other group posts violent content it's removed immediately. Either it should all be removed, or none of it.
As others pointed out, these proposed changes are just going to encourage companies to err on the side of caution even more, which isn't good.
Now that I've read more from the DOJ, I don't think it intended to do what I previously thought it intended to do - i.e., remove the immunity which providers have for content they allow if they choose to censor some material in objectionable ways.
The wording of the proposed changes wouldn't have that effect, and it seems that wasn't the intended effect. These changes aren't as substantial - or as bad - as I had originally thought. Even if this proposal were enacted, providers (and users, for that matter) would still enjoy unconditioned immunity for (i.e., not be treated as publishers of) the speech of others which they left up.

EDIT: I should be clear, the immunity provided by Section 230(c)(1) wouldn't be unconditional. But it wouldn't be conditioned on the provider not censoring certain material. The proposed changes actually place some meaningful conditions on Section 230(c)(1) immunity.
It's sad that both Biden and Trump want to do away with 230. Both have been very vocal about it, especially Biden. I wish they would leave it alone. The one positive is that with Trump in office it probably will never pass.