Posted by hn_acker 10/28/2024
Some material you can be required to remove by law, or required to take down pending review under the DMCA's safe harbor provisions. But when you remove content beyond that, where does the liability end?
If you have a cat forum and you remove dog posts, are you then also required to remove defamatory posts in general? If you have a news forum and you remove misinformation, does that removal constitute actionable defamation of the poster?
I generally dislike section 230, as I feel like blanket immunity is too strong -- I'd prefer that judges and juries make this decision on a case-by-case basis. But the cost of litigating these cases could be prohibitive, especially for small or growing companies. It seems like this would lead to an equilibrium where there was no content moderation at all, or one where you could only act on user reports. Maybe this wouldn't even be so bad.
Taken to the extreme it obviously leaves a window for a crazy abuse where you let people upload individual letters, then you remove letters of your choice to create new sentences and claim the contributors of the letters are the speakers, not the editor.
However, as far as I know, nobody has yet been accused of that level of moderation-as-editorializing. Subreddits, however, ARE similar to that idea. Communities with strict points of view are allowed to purge anything not aligned with their community values. Taking away that protection basically eliminates those communities' ability to exist.
Twitter was sued for this, because they attached a note to a user's post. But note that this was not a user-generated community note. It was authored directly by Twitter.
Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service becoming liable for all user-generated content. I think even acting on user reports would still result in liability. The two court cases that established this are here:
https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.
Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
> It is argued that ... the power to censor, triggered the duty to censor. That is a leap which the Court is not prepared to join in.
And
> For the record, the fear that this Court's finding of publishers status for PRODIGY will compel all computer networks to abdicate control of their bulletin boards, incorrectly presumes that the market will refuse to compensate a network for its increased control and the resulting increased exposure
It is a tough needle to thread, but it leaves the door open to refining the factors and the specific conditions under which a service provider is liable for posted content -- it is neither a shield of immunity nor an absolute assumed liability.
Prodigy specifically advertised its boards to be reliable sources as a way of getting adoption, and put in place policies and procedures to try to achieve that, and, in doing so, put itself in the position of effectively being the publisher of the underlying content.
I personally don't agree with the decision based on the facts of the case, but to me it is not black and white, and I would have preferred to stick with the judicial regime until it became clearer what the parameters of moderation can be without incurring liability.
Because CompuServe, the defendant in Cubby, did zero moderation.
> Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
What gives you the impression that this was because the moderation was "heavy handed"? The description in the Wikipedia page reads:
> The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting content guidelines for users; 2) by enforcing those guidelines with "Board Leaders"; and 3) by utilizing screening software designed to remove offensive language.
Posting civility rules and filtering profanity seems like pretty straightforward content moderation. This isn't "heavy-handed" moderation; this is extremely basic moderation.
These cases directly motivated Section 230:
> Some federal legislators noticed the contradiction in the two rulings,[4] while Internet enthusiasts found that expecting website operators to accept liability for the speech of third-party users was both untenable and likely to stifle the development of the Internet.[5] Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) co-authored legislation that would resolve the contradictory precedents on liability while enabling websites and platforms to host speech and exercise editorial control to moderate objectionable content without incurring unlimited liability by doing so.
[1] https://www.dmlp.org/sites/citmedialaw.org/files/1995-05-24-...
This is the relevant section of your link:
> Plaintiffs further rely upon the following additional evidence in support of their claim that PRODIGY is a publisher:
> (A)promulgation of "content guidelines" (the "Guidelines" found at Plaintiffs' Exhibit F) in which, inter alia, users are requested to refrain from posting notes that are "insulting" and are advised that "notes that harass other members or are deemed to be in bad taste or grossly repugnant to community standards, or are deemed harmful to maintaining a harmonious online community, will be removed when brought to PRODIGY's attention"; the Guidelines all expressly state that although "Prodigy is committed to open debate and discussion on the bulletin boards,
> (B) use of a software screening program which automatically prescreens all bulletin board postings for offensive language;
> (C) the use of Board Leaders such as Epstien whose duties include enforcement of the Guidelines, according to Jennifer Ambrozek, the Manager of Prodigy's bulletin boards and the person at PRODIGY responsible for supervising the Board Leaders (see Plaintiffs' Exhibit R, Ambrozek deposition transcript, at p. 191); and
> (D) testimony by Epstien as to a tool for Board Leaders known as an "emergency delete function" pursuant to which a Board Leader could remove a note and send a previously prepared message of explanation "ranging from solicitation, bad advice, insulting, wrong topic, off topic, bad taste, etcetera." (Epstien deposition Transcript, p. 52).
So they published content guidelines prohibiting harassment, they filtered out offensive language (presumably slurs, maybe profanity), and the moderation team deleted offending content. This is... bog-standard internet forum moderation.
> In one article PRODIGY stated:
> "We make no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less when it chooses the type of advertising it publishes, the letters it prints, the degree of nudity and unsupported gossip its editors tolerate."
The judge goes on to note that while Prodigy had since ceased its initial policy of direct editorial review of all content, they did not make an official announcement of this, so were still benefitting from the marketing perception that the content was vetted by Prodigy.
I don't know if I would have ruled the same way in that situation, and honestly, it was the NY Supreme Court, which is not even an appellate jurisdiction in NY, and was settled before any appeals could be heard, so it's not even clear that this would have stood.
A situation where each individual case was decided on its merits, until a reasonable de facto standard could evolve, would I think have been more responsible and flexible than a blanket immunity standard, which has led to all sorts of unfortunate dynamics that significantly damage the ability to have an online public square for discourse.
To be perfectly pedantic, the history lesson ends in 1991 and the CDA was passed in 1996.
Also, I am not sure that this author really understands where things stand anymore. He calls the 3rd circuit TikTok ruling "batshit insane" and says it "deliberately ignores precedent". Well, it is entirely possible (likely, even?) that it will get overturned. But that ruling is based on a Supreme Court ruling earlier this year (Moody v. NetChoice). Throw your precedent out the window; the Supreme Court just changed things (or maybe we will find out that they didn't really mean it like that).
The First Amendment protects you from the consequences of your own speech (with something like 17 categories of exceptions).
Section 230 protects you from the consequences of publishing someone else's speech.
Where we are right now is deciding if the algorithm is pumping out your speech or it is pumping out someone else's speech. And the 3rd circuit TikTok ruling spends paragraphs discussing this. You can read it for yourself and decide if it makes sense.
Right.
"Where we are right now is deciding if the algorithm is pumping out your speech or it is pumping out someone else's speech."
????????????????????????????? Read what you wrote. It's the user who created it.
Things have changed. Hold on to your previous belief at your own peril.
It does no such thing. It protects your right to speak, but does not protect you from the consequences of what you say.
There can always be social consequences for speech that is legal under the First Amendment.
> This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo becomes a bigger challenge, since it will cost much more to defend (even if you’d win on First Amendment grounds years later).
The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.
Realistically, the entire web ecosystem, and thus a significant part of our economy, relies on Section 230's protections for companies. IMO, regulation that gives users of large social networks greater transparency into, and control over, what their algorithms are showing them personally would be a far more fruitful discussion.
Should every human have the right to understand that an algorithm has classified them in a certain way? Should we, as a society, have the right to understand to what extent any social media company is classifying certain people as receptive to content regarding, say, specific phobias, and showing them content that is classified to amplify those phobias? Should we have the right to understand, at least, exactly how a dial turned in a tech office impacts how children learn to see the world?
We can and should iterate on ways to answer these complex questions without throwing out companies' ability to moderate content.
It's not "recommendation" that's the issue. Even removing offensive content resulted in liability for user generated content prior to Section 230. Recommendation isn't the issue with section 230. Moderation is.
Cubby, Inc. vs. CompuServe established that a non-moderated platform evaded liability for user-generated content. https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
Stratton Oakmont vs. Prodigy Services established that if an internet company did moderate content (even if it was just removing offensive content) it became liable for user-generated content. https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
If we just removed Section 230, we'd revert to the status quo before Section 230 was written into law. Companies wouldn't be more careful about moderation and recommendation. They straight up just wouldn't do any moderation. Because even the smallest bit of moderation results in liability for any and all user generated content.
People advocating for removal of Section 230 are imagining some alternate world where "bad" curation and moderation results in liability, but "good" moderation and curation does not. Except nobody can articulate a clear distinction between the two. People often just say "no algorithmic curation". But even just sorting by time is algorithmic curation, and sorting by upvotes minus downvotes is an algorithm too.
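To make that concrete, here is a minimal sketch (in Python, with made-up post data of my own) of the point that even a "plain" chronological feed or an upvotes-minus-downvotes sort is already an algorithm deciding what gets visibility:

```python
# Minimal sketch with hypothetical data: even "simple" orderings are algorithms.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    upvotes: int
    downvotes: int
    created_at: datetime

posts = [
    Post("alice", "first!", upvotes=3, downvotes=1, created_at=datetime(2024, 10, 1)),
    Post("bob", "hot take", upvotes=50, downvotes=40, created_at=datetime(2024, 10, 2)),
    Post("carol", "useful answer", upvotes=20, downvotes=0, created_at=datetime(2024, 10, 3)),
]

# "Chronological feed" -- still an algorithmic ordering of someone else's speech.
newest_first = sorted(posts, key=lambda p: p.created_at, reverse=True)

# "Upvotes minus downvotes" -- also an algorithm, and it already embeds a
# judgment about which posts deserve visibility.
by_score = sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)
```

Any liability rule that turns on "algorithmic curation" would have to explain why the second sort triggers it and the first doesn't (or why both do).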
A lot of people talk about a requirement to explain why someone was given a particular recommendation. Okay, so Google, Facebook, et al. provide a mechanism that supplies you with a CSV of tens of thousands of entries describing the weights used to give you a particular recommendation. What problem does that solve?
Conservatives often want to amend Section 230 to limit companies' ability to down-weight and remove conservative content. This directly runs afoul of the First Amendment; the government can't use the threat of liability to coerce companies into hosting speech they don't want to host. Not to mention, the companies could just attribute the removal or down-ranking to other factors, like inflammatory speech or negative user engagement.
I think the big problem with "Should every human have the right to understand that an algorithm has classified them in a certain way" is just that they flat out can't. You cannot design a trash can that every human can understand but a bear can't. There is a level of complexity that your average person won't be able to follow.
I hope you view his writing and POV as independent from his work with us. On matters like 230 you can find archives of very consistent writing from well before joining.
However, my initial feelings are correct. The NYT article is bemoaning Section 230; Mike seems to ignore why those feelings are coming up, and to bury the fact that there might be a conflict of interest here, since I guess BlueSky runs algorithms to help users? Again, admitting I know nothing about BlueSky. In any case, I don't think a consistent PoV should substitute for disclosure of that.
His arguments about why Section 230 should be left intact are solid, and I agree with some of them. I also think he misses the point that letting algorithms run wild with 100% Section 230 protection may not be the best idea. Whether Section 230 can be reformed without destroying the internet, or whether the First Amendment gets involved here, I personally don't know.
This is what most of the Section 230 fight is about. Some people, myself included, would say, "No, Facebook is selecting content in a way that doesn't involve user choice; they are drifting into publisher territory and thus should not be 100% immune from liability."
EDIT: I forgot, Section 230 has also been used by online ad publishers to excuse their lack of moderation of scam ads.
I have had reasonable success with YouTube's built-in algorithm-massaging features. It almost always respects "Do Not Recommend Channel", and understands "Not Interested" after a couple of repetitions.
That doesn’t work as well for stuff that you wish other people would find noxious but they don’t: neo-Nazis probably would rather see more neo-Nazi things rather than fewer, even if the broader social consensus is (at least on the pages of the New York Times) that those things fall under the uselessly vague header of “toxic.”
Even if the recommenders only filter out some of what you want less of, ditching them entirely means you'll see more of the slop they were already deprioritizing the way you wanted them to.
No, they're incentivized to increase the amount of content advertisers find acceptable.
Plus, YouTube isn't strictly about watching; many people make a living from the platform, and these algorithmic controls are not available to them in any way at all.
> ditching them entirely
Which is why I implied that user-oriented control is the factor to care about. Nowhere did I suggest you had to do this, just remove _corporate_ control of that list.