
Posted by hn_acker 10/28/2024

NY Times gets 230 wrong again (www.techdirt.com)
149 points | 128 comments | page 2
syndicatedjelly 10/29/2024|
Section 230 discussions seem to always devolve into a bunch of CS people talking way outside their depth about “free speech”, and making universal claims about the First Amendment as if the literal reading of the amendment is all that matters. I wish some actual lawyers would weigh in here
p3rls 10/28/2024||
Wrong-- section 230 is whatever some 80-year-old judge wants to interpret it as. Didn't you guys watch the LiveJournal court case? Those guys had to run away to Russia while Twitter, Reddit, Discord, etc. all are completely flooded with pirated content and it's cool.
wodenokoto 10/29/2024||
Maybe we can add "Section" before 230 in the title? I honestly thought this was a numerical thing.
andrewla 10/28/2024||
The question raised by Section 230, and the Communications Decency Act in general, is the same one that plagued the court cases leading up to it: to what degree does the voluntary removal of some content imply an endorsement of the content that remains?

Some material you can be required to remove by law, or required to suspend pending review under the various DMCA safe harbor provisions. But when you remove content in excess of that, where does the liability end?

If you have a cat forum and you remove dog posts, are you also required to remove defamatory posts in general? If you have a news forum and you remove misinformation, does that removal constitute an actionable defamation against the poster?

I generally dislike section 230, as I feel like blanket immunity is too strong -- I'd prefer that judges and juries make this decision on a case-by-case basis. But the cost of litigating these cases could be prohibitive, especially for small or growing companies. It seems like this would lead to an equilibrium where there was no content moderation at all, or one where you could only act on user reports. Maybe this wouldn't even be so bad.

basch 10/28/2024||
That is the entire point of 230. You can remove whatever you want for whatever reason, and what you leave up doesn't make you the speaker or endorser of that content.

Taken to the extreme it obviously leaves a window for a crazy abuse where you let people upload individual letters, then you remove letters of your choice to create new sentences and claim the contributors of the letters are the speakers, not the editor.

However, as far as I know, nobody has yet been accused of quite that level of moderation-as-editorializing. Subreddits, however, ARE similar to that idea. Communities with strict points of view are allowed to purge anything not aligned with their community values. Taking away their protection would basically make it impossible for such communities to exist.

Manuel_D 10/28/2024||
> If you have a news forum and you remove misinformation, does that removal constitute an actionable defamation against the poster?

Twitter was sued for this, because they attached a note to a user's post. But note that this was not a user-generated community note. It was authored directly by Twitter.

Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service taking liability for all user generated content. I think even acting on user reports would still result in liability. The two court cases that established this are here:

https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....

https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

andrewla 10/29/2024||
> Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service taking liability for all user generated content

Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.

Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:

> It is argued that ... the power to censor, triggered the duty to censor. That is a leap which the Court is not prepared to join in.

And

> For the record, the fear that this Court's finding of publishers status for PRODIGY will compel all computer networks to abdicate control of their bulletin boards, incorrectly presumes that the market will refuse to compensate a network for its increased control and the resulting increased exposure

It is a tough needle to thread, but it leaves the door open to refining the specific conditions under which a service provider is liable for posted content -- it is neither a shield of immunity nor an absolute assumed liability.

Prodigy specifically advertised its boards to be reliable sources as a way of getting adoption, and put in place policies and procedures to try to achieve that, and, in doing so, put itself in the position of effectively being the publisher of the underlying content.

I personally don't agree with the decision based on the facts of the case, but to me it is not black and white, and I would have preferred to stick with the judicial regime until it became clearer what the parameters of moderation could be without incurring liability.

Manuel_D 10/29/2024||
> Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.

Because CompuServe did zero moderation.

> Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:

What gives you the impression that this was because the moderation was "heavy handed"? The description on the Wikipedia page reads:

> The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting content guidelines for users; 2) by enforcing those guidelines with "Board Leaders"; and 3) by utilizing screening software designed to remove offensive language.

Posting civility rules and filtering profanity seems like pretty straightforward content moderation. This isn't "heavy handed moderation"; this is extremely basic moderation.

These cases directly motivated Section 230:

> Some federal legislators noticed the contradiction in the two rulings,[4] while Internet enthusiasts found that expecting website operators to accept liability for the speech of third-party users was both untenable and likely to stifle the development of the Internet.[5] Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) co-authored legislation that would resolve the contradictory precedents on liability while enabling websites and platforms to host speech and exercise editorial control to moderate objectionable content without incurring unlimited liability by doing so.

andrewla 10/29/2024||
Wikipedia's characterization is too broad. You can read the decision here [1] and decide for yourself.

[1] https://www.dmlp.org/sites/citmedialaw.org/files/1995-05-24-...

Manuel_D 10/29/2024||
What are you reading in that decision that suggests Prodigy was doing moderation beyond what we'd expect a typical internet forum to do?

This is the relevant section of your link:

> Plaintiffs further rely upon the following additional evidence in support of their claim that PRODIGY is a publisher:

> (A) promulgation of "content guidelines" (the "Guidelines" found at Plaintiffs' Exhibit F) in which, inter alia, users are requested to refrain from posting notes that are "insulting" and are advised that "notes that harass other members or are deemed to be in bad taste or grossly repugnant to community standards, or are deemed harmful to maintaining a harmonious online community, will be removed when brought to PRODIGY's attention"; the Guidelines all expressly state that although "Prodigy is committed to open debate and discussion on the bulletin boards,

> (B) use of a software screening program which automatically prescreens all bulletin board postings for offensive language;

> (C) the use of Board Leaders such as Epstien whose duties include enforcement of the Guidelines, according to Jennifer Ambrozek, the Manager of Prodigy's bulletin boards and the person at PRODIGY responsible for supervising the Board Leaders (see Plaintiffs' Exhibit R, Ambrozek deposition transcript, at p. 191); and

> (D) testimony by Epstien as to a tool for Board Leaders known as an "emergency delete function" pursuant to which a Board Leader could remove a note and send a previously prepared message of explanation "ranging from solicitation, bad advice, insulting, wrong topic, off topic, bad taste, etcetera." (Epstien deposition Transcript, p. 52).

So they published content guidelines prohibiting harassment, they filtered out offensive language (presumably slurs, maybe profanity), and the moderation team deleted offending content. This is... bog standard internet forum moderation.

andrewla 10/30/2024||
"additional evidence" your quote says. Just before that, we have:

> In one article PRODIGY stated:

> "We make no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less when it chooses the type of advertising it publishes, the letters it prints, the degree of nudity and unsupported gossip its editors tolerate."

The judge goes on to note that while Prodigy had since ceased its initial policy of direct editorial review of all content, they did not make an official announcement of this, so were still benefitting from the marketing perception that the content was vetted by Prodigy.

I don't know if I would have ruled the same way in that situation, and honestly, it was the NY Supreme Court, which is not even an appellate court in NY, and the case was settled before any appeals could be heard, so it's not even clear that the ruling would have stood.

I think a situation where each individual case was decided on its merits, until a reasonable de facto standard could evolve, would have been more responsible and flexible than a blanket immunity standard, which has led to all sorts of unfortunate dynamics that significantly damage the ability to have an online public square for discourse.

Kon-Peki 10/28/2024||
> Note that the issue of Section 230 does not come up even once in this history lesson.

To be perfectly pedantic, the history lesson ends in 1991 and the CDA was passed in 1996.

Also, I am not sure that this author really understands where things stand anymore. He calls the 3rd Circuit TikTok ruling "batshit insane" and says it "deliberately ignores precedent". Well, it is entirely possible (likely even?) that it will get overruled. But that ruling is based on a Supreme Court ruling earlier this year (Moody v. NetChoice). Throw your precedent out the window, the Supreme Court just changed things (or maybe we will find out that they didn't really mean it like that).

The First Amendment protects you from the consequences of your own speech (with something like 17 categories of exceptions).

Section 230 protects you from the consequences of publishing someone else's speech.

Where we are right now is deciding if the algorithm is pumping out your speech or it is pumping out someone else's speech. And the 3rd circuit TikTok ruling spends paragraphs discussing this. You can read it for yourself and decide if it makes sense.

whoitwas 10/28/2024||
"Section 230 protects you from the consequences of publishing someone else's speech."

Right.

"Where we are right now is deciding if the algorithm is pumping out your speech or it is pumping out someone else's speech."

????????????????????????????? Read what you wrote. It's the user who created it.

Kon-Peki 10/28/2024||
I strongly suggest reading the actual 3rd circuit TikTok ruling as well as the actual Moody vs NetChoice Supreme Court ruling, both from the year 2024.

Things have changed. Hold on to your previous belief at your own peril.

SoftTalker 10/28/2024||
> The First Amendment protects you from the consequences of your own speech

It does no such thing. It protects your right to speak, but does not protect you from the consequences of what you say.

Nasrudith 10/29/2024|||
Under that standard North Korea has free speech.
ideashower 10/28/2024|||
Bingo. If you threaten, or promote harm/hate speech, you're not suddenly immune from the consequences of that. It's that the platform is (generally) immune from those same consequences.
TheCleric 10/29/2024|||
It’s not just that. If I say something that people find derogatory, and they do not want to associate with me because of that, that’s THEIR first amendment right.

There can always be social consequences for speech that is legal under the first amendment.

SoftTalker 10/29/2024||
Yes, this is really what I meant. The First Amendment does protect you from government retaliation for critical speech. It does not prevent you from becoming a pariah in your community or more widely for what you say. Just look at any celebrity who lost multi-million dollar sponsorship contracts over a tweet.
AStonesThrow 10/29/2024|||
Free Speech https://xkcd.com/1357/
btown 10/28/2024||
It's important to note that this article has its own biases; it's disclosed at the end that the author is on the board of Bluesky. But, largely, it raises very good points.

> This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo becomes a bigger challenge, since it will cost much more to defend (even if you’d win on First Amendment grounds years later).

The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.

Realistically, the entire web ecosystem and thus a significant part of our economy rely on Section 230's protections for companies. IMO, regulation that provides users of large social networks with greater transparency and control into what their algorithms are showing to them personally would be a far more fruitful discussion.

Should every human have the right to understand that an algorithm has classified them in a certain way? Should we, as a society, have the right to understand to what extent any social media company is classifying certain people as receptive to content regarding, say, specific phobias, and showing them content that is classified to amplify those phobias? Should we have the right to understand, at least, exactly how a dial turned in a tech office impacts how children learn to see the world?

We can and should iterate on ways to answer these complex questions without throwing the ability for companies to moderate content out the window.

Manuel_D 10/28/2024||
> The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.

It's not "recommendation" that's the issue. Even removing offensive content resulted in liability for user generated content prior to Section 230. Recommendation isn't the issue with section 230. Moderation is.

Cubby, Inc. v. CompuServe established that a non-moderated platform avoided liability for user-generated content. https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

Stratton Oakmont vs. Prodigy Services established that if an internet company did moderate content (even if it was just removing offensive content) it became liable for user-generated content. https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....

If we just removed Section 230, we'd revert to the status quo before Section 230 was written into law. Companies wouldn't be more careful about moderation and recommendation. They straight up just wouldn't do any moderation. Because even the smallest bit of moderation results in liability for any and all user generated content.

People advocating for removal of Section 230 are imagining some alternate world where "bad" curation and moderation results in liability but "good" moderation and curation does not. Except nobody can articulate a clear distinction between the two. People often just say "no algorithmic curation". But even just sorting by time is algorithmic curation, and sorting by upvotes minus downvotes is an algorithm too (see the sketch below).
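
To make that concrete, here is a minimal sketch (hypothetical Post fields, not any real platform's code) showing that "just sort by newest" and "just sort by votes" are already algorithmic curation:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        text: str
        created_at: datetime
        upvotes: int = 0
        downvotes: int = 0

    def sort_by_time(posts):
        # "Just show newest first" is still an algorithm: a sort keyed on timestamp.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

    def sort_by_score(posts):
        # "Just show the most upvoted" is an algorithm too: a sort keyed on net votes.
        return sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

Any line you try to draw between these and a "recommendation algorithm" ends up being about how complicated the sort key is, not about whether curation is happening.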

zajio1am 10/28/2024||
I guess most people who think Section 230 is excessive are not advocating for its complete removal, but rather for adding some requirements that platforms have to adhere to in order to claim such immunity.
Manuel_D 10/28/2024||
Sure, but I find that few people are able to articulate in any detail what those requirements are and explain how it will lead to a better ecosystem.

A lot of people talk about a requirement to explain why someone was given a particular recommendation. Okay, so Google, Facebook, et al. provide a mechanism that supplies you with a CSV of tens of thousands of entries describing the weights used to give you a particular recommendation. What problem does that solve?

Conservatives often want to amend Section 230 to limit companies' ability to down-weight and remove conservative content. This directly runs afoul of the First Amendment; the government can't use the threat of liability to coerce companies into hosting speech they don't want to. Not to mention, the companies could just attribute the removal or down-ranking to other factors like inflammatory speech or negative user engagement.

lesuorac 10/28/2024||
IIUC, most large ad providers allow you to see and tailor what they use in their algorithms (ex. [1]).

I think the big problem with "Should every human have the right to understand that an algorithm has classified them in a certain way" is just that they flat out can't. You cannot design a trash can that every human can understand but a bear can't. There is a level of complexity that your average person won't be able to follow.

[1]: https://myadcenter.google.com/controls

jmpetroske 10/28/2024||
Yes but it _appears_ that there are very different algorithms/classifications used for which ads to recommend vs what content to recommend. Opening up this insight/control for content recommendations (instead of just ads) would be a good start.
lesuorac 10/28/2024||
Have at it.

https://support.google.com/youtube/answer/6342839?hl=en&co=G...

jmpetroske 10/30/2024||
Yeah there’s some controls, but they are much less granular than what ad-tech exposes. I’ve just never really been sure why Google/meta/etc. choose to expose this information differently for ads vs content.
freejazz 10/28/2024||
[flagged]
stackskipton 10/28/2024||
[flagged]
tedunangst 10/28/2024||
Techdirt's coverage of section 230 has been pretty consistent from before bluesky existed.
pfraze 10/28/2024||
In fact, the causality is reversed; he's on the board due to his influence on us. Masnick wrote the Protocols not Platforms essay which inspired Dorsey to start the Bluesky project. Then Bluesky became the PBC, we launched, became independent, etc etc, and Masnick wasn't involved until the past year when we invited him to join our board.

I hope you view his writing and POV as independent from his work with us. On matters like 230 you can find archives of very consistent writing from well before joining.

stackskipton 10/28/2024||
TIL. I'll admit, I'm not an avid reader of Techdirt, a follower of Mike Masnick, or someone who cares that much about Bluesky, since I don't interact a ton with social media.

However, my initial feelings still stand. The NYT article is bemoaning Section 230, Mike seems to ignore why those feelings are coming up, and he buries the fact that there might be a conflict of interest here, since I guess Bluesky runs algorithms to help users? Again, I admit I know nothing about Bluesky. In any case, I don't think a consistent PoV should bypass disclosure of that.

His arguments about why Section 230 should be left intact are solid, and I agree with some of them. I also think he misses the point that letting algorithms go insane with 100% Section 230 protection may not be the best idea. Whether Section 230 can be reformed without destroying the internet, or whether the First Amendment gets involved here, I personally don't know.

TheCleric 10/29/2024||
BlueSky’s big selling point is no algorithm by default. Its default timeline is the users you follow, in descending chronological order. You can use an algorithm (called a feed) if you like. They provide a few, but they are also open about their protocol and allow anyone who wants to write an algorithm to do so, and any user who wants to opt in to using it can.
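
As a rough sketch of that default (hypothetical field names, not Bluesky's actual AT Protocol types), the "Following" timeline is basically a filter plus a reverse-chronological sort, with custom feeds swapping in their own selection and ordering logic:

    from datetime import datetime

    def following_feed(posts, followed):
        # Default timeline, sketched: only accounts you follow, newest first,
        # with no engagement-based ranking or reweighting.
        mine = [p for p in posts if p["author"] in followed]
        return sorted(mine, key=lambda p: p["created_at"], reverse=True)

    # Hypothetical usage:
    posts = [
        {"author": "alice.example", "text": "hi", "created_at": datetime(2024, 10, 28, 9, 0)},
        {"author": "bob.example", "text": "yo", "created_at": datetime(2024, 10, 28, 10, 0)},
    ]
    print(following_feed(posts, {"alice.example"}))
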
whoitwas 10/28/2024||
The user who made the bonkers content is liable.
stackskipton 10/28/2024||
Sure, but with other forms of media, the publisher is liable for bonkers content as well, with certain exceptions.

This is what most of the Section 230 fight is about. Some people, myself included, would say "No, Facebook is selecting content in ways that don't involve user choice; they are drifting into publisher territory and thus should not be 100% immune to liability."

EDIT: I forgot, Section 230 has also been used by online ad publishers to excuse their lack of moderation of scam ads.

vundercind 10/28/2024||
Reading the law, it sure seems to be aimed at protecting services like ISPs, web hosts, CDNs/caches, email hosts, etc., not organizations promoting and amplifying specific content they’ve allowed users to post. It’s never seemed to me that applying 230 to, say, the Facebook feed or maybe even to Google ads is definitely required by or in the spirit of the law, but more like something we just accidentally ended up doing.
whoitwas 10/28/2024||
I thought safe harbor was the relevant statute here (section 512 of DMCA)?
vundercind 10/28/2024||
That’s narrowly concerned with copyright infringement, no?
whoitwas 10/28/2024||
Yeah. It's been a while. This is interesting
thot_experiment 10/28/2024||
[flagged]
readthenotes1 10/28/2024|
[flagged]
akira2501 10/28/2024|
[flagged]
kurisufag 10/28/2024||
>zero control

I have had reasonable success with Youtube's built-in algorithm-massaging features. It almost always respects "Do Not Recommend Channel", and understands "Not Interested" after a couple of repetitions.

alwa 10/28/2024||
I seem to remember part of TikTok’s allure being that it learned your tastes well from implicit feedback, no active feedback required. We around here probably tend to enjoy the idea of training our own recommenders, but it’s not clear to me that the bulk of users even want to be bothered with a simple thumbs-up/thumbs-down.
alwa 10/28/2024||
If they want to make money, don’t they need you to stick around? As a side effect of making money, then, aren’t they incentivized to reduce the amount of large categories of content you’d find noxious?

That doesn’t work as well for stuff that you wish other people would find noxious but they don’t: neo-Nazis probably would rather see more neo-Nazi things rather than fewer, even if the broader social consensus is (at least on the pages of the New York Times) that those things fall under the uselessly vague header of “toxic.”

Even if the recommenders only filter out some of what you want less of, ditching them entirely means you’ll see more of the slop they were already deprioritizing the way you wanted them to.

akira2501 10/28/2024||
> aren’t they incentivized to reduce the amount of large categories of content you’d find noxious?

No, they're incentivized to increase the amount of content advertisers find acceptable.

Plus, YouTube isn't strictly about watching; many people make a living from the platform, and these algorithmic controls are not available to them in any way at all.

> ditching them entirely

Which is why I implied that user-oriented control is the factor to care about. Nowhere did I suggest you had to do this, just remove _corporate_ control of that list.
