Posted by hn_acker 10/28/2024

NY Times gets 230 wrong again (www.techdirt.com)
149 points | 128 comments
harshreality 10/28/2024|
I think the author is right that harms caused by incorrect content aren't—and shouldn't be—the fault of section 230, and are instead the fault of the original producers of the content.

I think the author is wrong in claiming that modern attention-optimizing recommendation algorithms are better than more primitive, poorer recommendation algorithms. Appearing to be more engaging/addictive does not imply more value. It's a measurement problem.

johnnyanmac 10/28/2024||
> Appearing to be more engaging/addictive does not imply more value.

For modern-day businesses it sadly does. But that misalignment over how to define "quality" is part of why we're in this real-time divide over whether social media is good or bad to begin with.

basch 10/28/2024||
I'd like to propose something akin to the Ship of Theseus Paradox: let's call it the Ransom Letter Paradox.

At what point do newspaper clippings arranged together become the work of the arranger and not of the individual newspapers? If I take one paragraph from the NYT and one paragraph from the WSJ, am I the author, or are the NYT and WSJ the authors? If I take 16 words in a row from each and alternate, am I the author? If I alternate sentences, am I the author?

At some point, there is a higher-order "creation" of context between individually associated videos played together in a sequence. If I arrange one-minute clips into an hour-long video, I can say something the original authors never intended. If I, algorithmically, start following up videos with rebuttals, but only rebuttals that support my viewpoint, I am ADDING context by making suggestions. Sure, people can click next, but in my ransom note example above, people can speed-read and skip words as well. Current suggestion algorithms may not be purposely "trying to say something," but they effectively BECOME speakers almost accidentally.

Ignoring that a well-crafted sequence of videos can create new meaning leaves us with a disingenuous interpretation of what suggestion algorithms either are doing or can do. I'm not saying that Google is purposely radicalizing children into, let's say, white nationalists, but there may be something akin to negligence going on if they can always point to a black-box algorithm, one with a mind of its own, as the culprit. Winter v. GP Putnam giving them some kind of amnesty from their own "suggestions" rubs me the wrong way. Designing systems to give people "more of what they want" rubs me the wrong way because it narrows horizons rather than broadening them. That lets me segue into again linking to my favorite internet article ever (which the BBC has somehow broken the link to, so here is the real link, and an archive: https://www.bbc.co.uk/blogs/adamcurtis/entries/78691781-c9b7... https://archive.ph/RoBjr ). I'm not sure I have an answer, but current recommendation engines are the opposite of it.

fluoridation 10/28/2024|||
If one treats the order of content as a message unto itself, then wouldn't an attempt to regulate or in some way restrict recommendation algorithms infringe upon freedom of speech? If I decide to tweak my site's recommendation algorithm to slightly more often show content in favor of a particular political party, isn't that my right?
BobaFloutist 10/29/2024|||
Section 230 is about who's liable if speech either breaks the law or causes damages in some way unprotected by the First Amendment. That's the table-stakes of the discussion. It's a little silly to bring up the First Amendment given that context.
basch 10/30/2024|||
You have it backwards.

230 is about who is NOT liable. Platforms are NOT liable for what they don't moderate, just because they moderated other things. It protects them from imperfect moderation being used to claim endorsement.

fluoridation 10/30/2024||
Surely they're one and the same. By explicitly stating that someone is not liable, you're indirectly defining more of the set of people who are liable.
fluoridation 10/29/2024|||
How could you break the law by merely reordering the content a user will see?
basch 10/28/2024|||
That's why I proposed the paradox. At what tipping point does the arranger become the speaker?

Read 230.

https://www.law.cornell.edu/uscode/text/47/230

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

"No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.

230 says you can moderate however you like, and what you choose to leave up doesn't become your own speech through endorsement osmosis.

I agree with 230 to a point, but at some extreme it can be used to misrepresent speech as "someone else's." Similar to how the authors of the newspapers wouldn't be the speakers of a ransom note because they contributed one letter or word, and it would be absurd to claim otherwise.

If the arranger is the speaker, restrictions on free speech apply to their newly created context. Accountability applies.

kelseyfrog 10/30/2024||
The way the problem is phrased makes it reducible to the sorites problem.

There are better ways of formulating the question that avoid this paradox, such as "what are the necessary and sufficient conditions for editorial intervention to dominate an artifact?"

wbl 10/29/2024||||
And for better or worse spreading white nationalist propaganda isn't illegal. It's not good, but we have the first amendment in this country because we don't want the government to decide who can speak.
blahlabs 10/29/2024|||
That first BBC link loaded the article for me and then threw a 404... what is that about?
basch 10/30/2024||
They broke something. Very odd.

I conjecture they had some kind of page move and archival process that is broken.

maxlybbert 10/28/2024||
I’m sure that the difficulty the New York Times editors have summarizing laws related to online publishing shouldn’t make you wonder what glaring mistakes are in their other reports about topics the newspaper wouldn’t be expected to know as deeply.
htk 10/28/2024|
Related to that, there's the "Gell-Mann Amnesia" effect[1], where an expert can see numerous mistakes in his area of expertise being reported on the news, but somehow takes the rest as being accurate.

[1]: https://www.epsilontheory.com/gell-mann-amnesia/

shadowmanifold 10/29/2024||
Murray Gell-Mann's "The quality of information" lecture on YouTube from 1997 is also worth a listen.

Murray basically points out how much communication in general is really just exchanging errors for no reason.

I suspect this is where the idea of "Gell-Mann Amnesia" came from.

jjmarr 10/28/2024||
I don't like Section 230 because "actual knowledge" no longer matters, as tech companies willfully blind themselves to the activities on their platforms.

~~As an example, there are subreddits like /r/therewasanattempt or /r/interestingasfuck that ban users that post in /r/judaism or /r/israel (there used to be a subreddit /r/bannedforbeingjewish that tracked this but that was banned by reddit admins). This isn't the First Amendment, since it's just a ban based on identity instead of posting content.

According to the US legal system, discrimination based on religion is wrong. I should be able to fix this by complaining to reddit and creating actual knowledge of discrimination. In practice, because there is no contact mechanism for reddit, it's impossible for me to create actual knowledge.~~

edit: I still believe the above behaviour is morally wrong, but it isn't an accurate example of the actual knowledge standard as others have pointed out. I'm leaving it here for context on the follow-up comments.

The TechDirt article doesn't engage with this. It asserts that:

>> Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.

> None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable.

However, if you try to use Facebook's defamation form[1] and list the United States as an option:

> Facebook is not in a position to adjudicate the truth or falsity of statements made by third parties, and consistent with Section 230(c) of the Communications Decency Act, is not responsible for those statements. As a result, we are not liable to act on the content you want to report. If you believe content on Facebook violates our Community Standards (e.g., bullying, harassment, hate speech), please visit the Help Center to learn more about how to report it to us.

[1] https://www.facebook.com/help/contact/430253071144967

Moto7451 10/28/2024||
> This isn't the First Amendment, since it's just a ban based on identity instead of posting content.

This is not why the First Amendment does not apply. The First Amendment does not apply to private entities. It restricts the government…

> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

At least from a First Amendment standpoint, Reddit can do what it feels like to establish a religion, ban all religious discussion, tell reporters to go to hell, start a news feed, etc. There are other laws they do need to deal with of course.

More commentary on this here:

https://www.cnn.com/2021/01/12/politics/first-amendment-expl...

jjmarr 10/28/2024||
I agree with you. If Reddit wants to ban certain types of posts, it's entitled to under the First Amendment.

They're not entitled to discriminate against certain races, ethnicities, or religions though.

https://harvardlawreview.org/print/vol-133/white-v-square/

riskable 10/28/2024|||
They most certainly are allowed to discriminate based on religion. The only laws on the books regarding religious discrimination are for employers who could potentially discriminate against their employees (or in hiring) based on their religion:

https://www.eeoc.gov/religious-discrimination

There's no law that says you can't, say, run a website that only allows atheists to participate, or non-Jews, or non-Christians, or non-Muslims, or whatever religion or religious classification you want.

Discriminating based on race or ethnicity is a different topic entirely. You can choose your religion (it's like, just your opinion, man) but you can't choose your race/ethnicity. There are much more complicated laws in that area.

tzs 10/28/2024|||
> The only laws on the books regarding religious discrimination are for employers who could potentially discriminate against their employees (or in hiring) based on their religion.

You've overlooked prohibitions on religious discrimination in public accommodations [1].

[1] https://www.law.cornell.edu/uscode/text/42/2000a

Manuel_D 10/28/2024||
Reddit is not a place of public accommodation. It's a private web company. And furthermore, Reddit isn't the one doing the banning. It's Reddit users that are blocking members from their subreddit. The rest of Reddit is free to be browsed by said users.

If I create a Google chat group, and I only invite my church members to said group is Google violating anti-discrimination laws? No.

~~You're~~ The commenter 3 layers above is trying to interpret anti-discrimination laws to cover the actions of users of a service, not the service itself.

tzs 10/28/2024||
> You're trying to interpret anti-discrimination laws to cover the actions of users of a service, not the service itself

You are failing to take into account context. My comment had nothing whatsoever to do with anything about Reddit.

My comment is responding to a comment that asserted that the only laws regarding religious discrimination are for employers potentially discriminating against employees.

I provided an example of a law prohibiting religious discrimination in something other than employment.

pc86 10/28/2024|||
There absolutely is such a law, the Civil Rights Act of 1964, specifically Title II. If you provide public accommodations you may not discriminate based on religion. You can't stick a "No Muslims allowed" sign on your restaurant because it's open to the public.

Reddit is publicly available even if they require registration, and neither Reddit nor subreddit mods may legally discriminate based on anything covered under the CRA.

https://www.justice.gov/crt/title-ii-civil-rights-act-public...

riskable 10/28/2024|||
The CRA only covers physical spaces (places of "public accommodation"). Not services (like Reddit).
Manuel_D 10/28/2024|||
My understanding is that businesses cannot deny service based on protected class. E.g. Reddit couldn't put "Catholics are barred from using Reddit" in their TOS.

But subreddit bans are done by users of Reddit, not by Reddit itself. If someone on Xbox Live mutes the chat of Catholics and kicks them from the lobbies they're hosting, you can't go complain to Microsoft because these are the actions of a user not the company.

pc86 10/29/2024||||
Websites are public accommodations; this has been litigated over and over and over again. That's why you can sue the owner of a website over ADA violations if they offer services or products to the public.

I don't know why you keep doubling down when you're verifiably, provably wrong. I understand you may not want this to be the case, but it is.

Manuel_D 10/28/2024||||
But Reddit isn't discriminating against certain races, ethnicities, or religions. Individual subreddit admins are discriminating on the basis of identity. This is no different than creating a Discord server or IRC chat channel where you only let in your church friends. Reddit isn't refusing service on the basis of protected class. Reddit users are doing so.
jjmarr 10/28/2024||
The issue is that individual subreddit moderators each control hundreds of subreddits with millions of users. If 10% of the top subreddits ban anyone that participates in /r/Judaism or /r/Israel, that's a much bigger impact than a ban happy Discord mod.

If one friend group is racist and you can't eat dinner at their house, that's qualitatively different than systemic discrimination by the restaurant industry.

In this case, Reddit's platform has enough systemic discrimination that you have to choose between full participation in front-page posts or participation in Jewish communities.

Manuel_D 10/28/2024||
If you're talking about your opinion of what is morally right, or about a healthy social media ecosystem, I'm not really disagreeing with you - I don't think it's good for the subreddit mods to do this. But as per your comments, it does sound like you're making the claim that this activity is running afoul of nondiscrimination laws. This is incorrect.

> If 10% of the top subreddits ban anyone that participates in /r/Judaism or /r/Israel, that's a much bigger impact than a ban happy Discord mod.

The impact is not what matters. What matters is that the banning is done by users, not by the company. Non-discrimination laws prohibit businesses from denying business to customers on the basis of protected class. It doesn't dictate what users of internet platforms do with their block button.

> that's qualitatively different than systemic discrimination by the restaurant industry.

Right, but a restaurant refusing a customer is a business denying a customer. If Discord or Reddit put "We don't do business with X race" in their ToS that's direct discrimination by Reddit. If subreddit moderators ban people because they do or don't belong to a protected class, that's an action taken by users. You're free to create your own /r/interestingasfuckforall that doesn't discriminate.

A bar can't turn away a customer for being Catholic. But if a Catholic sits down at the bar, and the people next to him say "I don't want to sit next to a Catholic" and change seats to move away from the Catholic patron, that's their prerogative. Subreddit bans are analogous to the latter.

jjmarr 10/29/2024|||
> But as per your comments, it does sound like you're making the claim that this activity is running afoul of nondiscrimination laws.

I edited my original comment because others have pointed out it doesn't run afoul of antidiscrimination laws. You acknowledge that this behaviour is morally wrong, but don't say whether or not a platform should have a responsibility to prevent this behaviour. I believe they should.

While the mechanism by which systemic discrimination occurs is different because it's individual users instead of the business, the impact is the same as businesses/public spaces discriminating against individuals.

This is because common social spaces are barred off to people of certain ethnicities and that means they can't fully engage in civic life.

Here's another question. Let's say a mall is composed entirely of pop up stores that close at the end of every month. These stores invariably ban visible minorities from using their services. Attempts to sue the stores fail, because the store no longer exists by the time you gather the information necessary to sue it. While the mall itself does not promote racial discrimination, it is impossible for visible minorities to shop at the mall.

Should the mall have an obligation to prevent discrimination by its tenants?

I would say "yes". In your bar example, it is still possible for a Catholic to get a drink at the bar. In the mall example, it is impossible for a minority to shop.

On Reddit, if it is impossible for someone to participate while being openly Jewish or Israeli because they are banned on sight from most subreddits, that should be illegal.

Manuel_D 10/29/2024||
> Here's another question. Let's say a mall is composed entirely of pop up stores that close at the end of every month. These stores invariably ban visible minorities from using their service

Again, this is already illegal because the stores are businesses and they can't deny service on the basis of protected class.

The issue at hand is that the government cannot compel individual users to interact with other users. How would this work? You try to mute someone on Xbox Live and you get a popup, "Sorry, you've muted too many Catholics in the last month, you can't mute this player." Likewise, would Reddit force the moderators to allow posts and comments from previously banned users? And what would prevent their content from just getting downvoted to oblivion and being automatically hidden anyways?

jjmarr 10/29/2024||
It's a similar situation because the laws aren't enforceable against the pop-up stores, in the same way you can't sue an anonymous subreddit moderator for being discriminatory.

> Likewise, would Reddit force the moderators to allow posts and comments from previously banned users?

In Reddit's case, moderators are using tools that automatically ban users that have activity in specific subreddits. It's not hidden bias; the bans are obviously because of a person's religion.
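
To be concrete about the mechanism, here is a minimal sketch of how such an auto-ban tool could work. This is not any specific bot's code; the fake history dict stands in for whatever Reddit API call the real tools make, and the subreddit names are just the ones discussed above.

    # Minimal sketch of the mechanism; not any specific bot's code.
    # A real tool would query the Reddit API here; this stub stands in for it.
    def get_recent_subreddit_activity(username):
        fake_history = {"alice": {"python", "israel"}, "bob": {"gardening"}}
        return fake_history.get(username, set())

    SCREENED_SUBREDDITS = {"judaism", "israel"}  # subreddits the tool screens for

    def should_ban(username):
        # The ban is triggered purely by where the user has posted,
        # not by anything they posted in the banning subreddit.
        return bool(get_recent_subreddit_activity(username) & SCREENED_SUBREDDITS)

    print(should_ban("alice"))  # True  - has activity in a screened subreddit
    print(should_ban("bob"))    # False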

Manuel_D 10/29/2024||
Correct, and what's to keep users of the subreddit from tagging posters who've posted in the Israel subreddit and down voting them until they're hidden? There's no effective way to force users to interact with other users they don't want to interact with.
ars 10/29/2024|||
Reddit can (and sometimes does) control who can be, and who is a moderator.

So your distinction doesn't exist: this is effectively the business, Reddit, engaging in discrimination.

Plus when I read Reddit I'm not interacting with a moderator, I'm interacting with a business.

An actual analogous example would be if individual people use a tool to block anyone Jewish from seeing their comments and replying to them. It would be pretty racist of course, but not illegal. A subreddit though, is not the personal playing area of a moderator.

Manuel_D 10/29/2024||
Reddit bans subreddits whose moderators do not remove content that breaks the ToS. They do not require that communities refrain from banning certain people or content. Basically, you can only get sanctioned by reddit as a moderator for not banning and removing content from your subreddit.

> A subreddit though, is not the personal playing area of a moderator.

Oh, yes. Yes it is.

Many of the better communities have well organized moderation teams. But plenty do not. And the worst offenders display the blunt reality that a subreddit is indeed the play thing of the top mod.

Moto7451 10/28/2024||||
None of this has to do with the First Amendment including the legal review you linked to.

The Unruh Civil Rights Act that is discussed does not extend the First Amendment: the First Amendment does not restrict the actions of businesses, and the Unruh Civil Rights Act does not restrict the actions of Congress or other legislatures.

Freedom of Speech in the Amendment also has specific meaning and does not fully extend to businesses.

https://constitution.findlaw.com/amendment1/freedom-of-speec...

fallingknife 10/28/2024|||
People need to understand that the only entity that can violate the constitution is the government. Citizens and companies are not restricted in their actions by the constitution, only the law.
singleshot_ 10/28/2024||
False. See generally the state actor doctrine. Courts have ruled extensively in the context of criminal investigations and FedEx; railroads and drug testing; NCMEC and CSAM hashes; and informant hackers and criminal prosecution.
fallingknife 10/29/2024||
Which is just the government acting through proxies
ideashower 10/28/2024|||
> I don't like Section 230 because "actual knowledge" no longer matters, as tech companies willfully blind themselves to the activities on their platforms.

This is misleading. It seems like you're predicating your entire argument on the idea that there is a version of Section 230 that would require platforms to act on user reports of discrimination. But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.

Section 230 immunity doesn't depend on "actual knowledge." The law specifically provides immunity regardless of whether a platform has knowledge of illegal content. Providers can't be treated as publishers of third-party content, period.

It's not that "'actual knowledge' no longer matters," it's that it never mattered. Anti-discrimination law is usually for things like public accommodations, not online forums.

jjmarr 10/28/2024||
My point is that platforms should have more of a responsibility when they currently have none.

> But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.

I understand that this is the purpose of the law, and I disagree with it. Section 230 has led to large platforms outsourcing most of their content to users because it shields the platform from legal liability. A user can post illegal content, engage in discrimination, harassment, etc.

> Anti-discrimination law is usually for things like public accommodations, not online forums.

Anti-discrimination law should be applicable to online forums. The average adult spends more than 2 hours a day on social media. Social media is now one of our main public accommodations.

If one of the most-used websites in the USA has an unofficial policy of discriminating against Jewish people, and that policy isn't covered by current law because it is enforced solely by users, then the law isn't achieving its objective of preventing discrimination.

ideashower 10/28/2024||
> platforms should

> Anti-discrimination law should

I don't disagree with you. But you must distinguish between what the law does, and what it should do, in your view. Otherwise you are misleading people.

jjmarr 10/28/2024||
You're correct (as you pointed out elsewhere), so I edited my original comment.
criley2 10/28/2024|||
I'm not sure where you learned that the US legal system is against religious discrimination in private organizations, but it's not strictly true.

Many religious organizations in the US openly discriminate against people who are not of their religion, from Christian charities and businesses requiring staff to sign contracts stating that they agree with/are members of the religion, to Catholic hospitals openly discriminating against non-Catholics based on their own "religious freedom to deny care". One way they can do this is through an exemption called the bona fide occupational qualification, which allows requiring that only certain people can do the job.

In a more broad sense, any private organization with limited membership (signing up vs allowing everyone) can discriminate. For example some country clubs discriminate based on race to this day. One reason for this is that the constitution guarantees "Freedom of Association" which includes the ability to have selective membership.

jjmarr 10/28/2024||
It's on a state-by-state basis.[1] In California in particular, courts have ruled online businesses that are public accommodations cannot discriminate:[2]

> The California Supreme Court held that entering into an agreement with an online business is not necessary to establish standing under the Unruh Act. Writing for a unanimous court, Justice Liu emphasized that “a person suffers discrimination under the Act when the person presents himself or herself to a business with an intent to use its services but encounters an exclusionary policy or practice that prevents him or her from using those services,” and that “visiting a website with intent to use its services is, for purposes of standing, equivalent to presenting oneself for services at a brick-and-mortar store.”

I'm not sure which Catholic hospitals refuse non-Catholics care. My understanding is they refuse to provide medical treatments, such as abortion, that go against Catholic moral teachings, and this refusal is applied to everyone.

[1] https://lawyerscommittee.org/wp-content/uploads/2019/12/Onli...

[2] https://harvardlawreview.org/print/vol-133/white-v-square/

ideashower 10/28/2024||
> In California in particular, courts have ruled online businesses that are public accommodations cannot discriminate.

Yes.

But there is a legal distinction between a business website that offers goods/services to the public (like an online store), and a social media platform's moderation decisions or user-created communities.

Prager University v. Google LLC (2022)[1] - the court specifically held that YouTube's content moderation decisions didn't violate the Unruh Act. There's a clear distinction between access to services (where public accommodation laws may apply), and content moderation/curation decisions (protected by Section 230).

[1] https://law.justia.com/cases/california/court-of-appeal/2022...

jjmarr 10/28/2024||
You are right, thank you for the citation.

edit: there's another comment chain you might be interested in about whether the federal civil rights act is applicable.

ideashower 10/28/2024||
Thanks :-)
fallingknife 10/28/2024||
That's a good thing. We don't want Meta to be adjudicating defamation. Just look at the mess DMCA takedown notices are. When you tell companies to adjudicate something like copyright or defamation, they are just going to go with an "everybody accused is guilty" standard. (The only exception is large and well known accounts that bring in enough ad revenue to justify human involvement.) This will just turn into another mechanism to force censorship by false reporting.
chomp 10/28/2024||
Mike Masnick is a treasure. He posts on Bluesky here: https://bsky.app/profile/mmasnick.bsky.social
consumer451 10/29/2024|
I just found out that he replaced Dorsey on the board of Bluesky as well.

https://bsky.app/profile/mmasnick.bsky.social/post/3l7lnkz6f...

janalsncm 10/28/2024||
I create recommender systems for a living. They are powerful and also potentially dangerous. But many people fall into the trap of thinking that just because a computer recommends something it’s objectively good.

It is math but it’s not “just math”. Pharmaceuticals is chemistry but it’s not “just chemistry”. And that is the framework I think we should be thinking about these with. Instagram doesn’t have a God-given right to flood teen girls’ feeds with anorexia-inducing media. The right is granted by people, and can be revoked.

> Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.

Let’s flag for a moment that this is a value judgement. The author is using “can’t” when they really mean “should not”. I also think it is a strawman to suggest anyone is requiring absolute certainty.

When dealing with baby food manufacturers, if their manufacturing process creates poisoned food, we hold the manufacturer liable. Someone might say it’s unfair to require that a food manufacturer guarantee none of their food will be poisoned, and yet we still have a functioning food industry.

> The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”

Sure. But “relevant” is fuzzy and not quantifiable. Computers like things to be quantifiable. So instead we might use a proxy like clicks. Optimizing for clicks will boost clickbait content. So maybe you include text match. Now you boost websites that are keyword stuffing.

If you continue down the path of maximal engagement somewhere down the line you end up with some kind of cesspool of clickbait and ragebait. But choosing to maximize engagement was itself a choice, it’s not objectively more relevant.
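
A toy sketch of that point (made-up item fields and weights, not any platform's actual ranker): "relevance" is just whichever weighted proxy you decide to score by, and changing the weights changes which item wins.

    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        click_rate: float  # historical click-through rate (engagement proxy)
        text_match: float  # 0..1 keyword overlap with the query

    items = [
        Item("You won't BELIEVE what happened next", click_rate=0.30, text_match=0.10),
        Item("How Section 230 interacts with the First Amendment", click_rate=0.05, text_match=0.90),
    ]

    def score(item, w_click, w_text):
        # "Relevance" here is whatever weighted mix of proxies we choose.
        return w_click * item.click_rate + w_text * item.text_match

    # Rank purely by engagement: the clickbait item wins.
    print(max(items, key=lambda i: score(i, 1.0, 0.0)).title)
    # Rank purely by text match: the substantive item wins.
    print(max(items, key=lambda i: score(i, 0.0, 1.0)).title)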

hn_acker 10/28/2024||
The original title of the article is:

> NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment

mikestew 10/28/2024|
Four chars too long, though.

And it's "Section 230", if you were not parsing the integer value in the title.

nickthegreek 10/28/2024||
They know, they are the one who submitted the article to HN. I believe they were posting the comment just to denote that they had to change it.

The following preserves all info under the title limit: NYT Gets 230 Wrong Again; Misrepresenting History, Law, And The 1st Amendment

> NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment

mikestew 10/28/2024||
I get that. It was more directed at the dork that downvoted the comment, to let them know why OP might have elaborated.
jay_kyburz 10/28/2024||
I can't speak to the legality or meaning of section 230, but I can share my somewhat controversial opinions about how I think the internet should operate.

The author points out that the publisher of a book of mushrooms cannot be held responsible for recommending people eat poisonous mushrooms. This is OK I guess because the author _can_ be held responsible.

If the author had been anonymous and the publisher could not accurately identify who should be responsible, then I would like to live in a society where the publisher _was_ held responsible. I don't think that's unreasonable.

Over on the internet, content is posted mostly anonymously and there is nobody to take responsibility. I think big tech needs to be able to accurately identify the author of the harmful material, or take responsibility themselves.

kaibee 10/28/2024||
Yeah, it's illegal to shout "fire" in a crowded theater, but if you hook up the fire-alarm to a web-api, the responsibility for the ensuing chaos disappears.
pessimizer 10/28/2024||
It is not illegal to shout "fire" in a crowded theater. That was from an argument about why people should be jailed for passing out fliers opposing the US draft during WWI.

https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...

yunwal 10/29/2024||
It essentially is, you’ll get a disorderly conduct charge (the Wikipedia article confirms this). You’ll also be held liable for any damages caused by the ensuing panic.

You can contrive situations where it falls within the bounds of the law (for instance, if you do it as part of a play in a way that everyone understands it’s not real) but if you interpret it the way it’s meant to be interpreted it’s breaking a law pretty much anywhere.

freejazz 10/28/2024|||
>The author points out that the publisher of a book of mushrooms cannot be held responsible for recommending people eat poisonous mushrooms.

I don't think this is true at all. If a publisher publishes a book that includes information that is not only incorrect, but actually harmful if followed, and represents it as true/safe, then they would be liable too.

jay_kyburz 10/28/2024||
From the article.

>I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. GP Putnam, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushroom was, in fact, inedible.

"We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs."

freejazz 10/29/2024||
How odd, that's not the case for defamation, where publishers have a duty to investigate the truth of the statements they publish. What's going on in the ninth circuit?
phkahler 10/28/2024||
>> Over on the internet, content is posted mostly anonymously and there is nobody to take responsibility.

I sometimes suggest that the internet should start from strongly verifiable identity. You can strip identity in cases where it makes sense, but trying to establish identity on top of an anonymous system is very hard. When people can be identified it makes it possible to track them down and hold them accountable if they violate laws. People will generally behave better when they are not anonymous.

yunwal 10/29/2024||
“We should dox everyone on the internet” is definitely not a take I expected.
Nasrudith 10/29/2024||
That take has been around since Facebook's real-name policy, as what seemed like a good idea at the time. It failed to make people behave any better. Yet absolute idiots keep on thinking that if we just make the internet less free it will solve all of our problems and deliver us into a land of rainbows, unicorns, and gumdrops where it rains chocolate. For god's sake, stop doing the work for a dystopia for free!
blackeyeblitzar 10/28/2024||
It isn’t surprising that they get details wrong. It’s the same NY Times that called the constitution “dangerous”(https://www.nytimes.com/2024/08/31/books/review/constitution...), fanning the flames of a kind of uncivil line of thinking that has unfortunately been more and more popular.

But this article itself makes mistakes - it does not seem to understand that the first amendment is about protecting free speech principles, which are actually much bigger than just what the first amendment says. The author makes an illogical claim that there is a category of speech that we want to delegitimize and shield platforms from. This is fundamentally opposed to the principles of free speech. Yes, there is the tricky case of spam. But we should not block people based on political views or skepticism about science or anything else that is controversial. The censorship regime of big social media platforms should be viewed as an editorial choice, under law and in principle.

Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.

pessimizer 10/28/2024||
> the first amendment is about protecting free speech principles, which are actually much bigger than just what the first amendment says.

The First Amendment definitely is not about "free speech principles." It's the first of a short list of absolute restraints on the previous text, which is a description of US government, insisted upon by interests suspicious of federalization under that government. Free speech writ large is good and something to fight for, but the First Amendment is not an ideology, it is law.

The reason (imo) to talk about the First Amendment in terms of these giant social media platforms is simply because of their size, which was encouraged by friendly government acts such as Section 230 in the first place, without which they couldn't scale. Government encouragement and protection of these platforms gives the government some responsibility for them.

kaibee 10/28/2024||
> But we should not block people based on political views or skepticism about science or anything else that is controversial. The censorship regime of big social media platforms should be viewed as an editorial choice, under law and in principle.

> Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.

This is all well and good, but maybe the place to rehash a debate about whether vaccines work is not in fact, the center town square. I would say that a person who has no idea about any of the underlying science and evidence, but is spreading doubt about it anyway (especially while benefiting financially), is not in fact, 'sharing their ideas', because they don't meet the minimum standard to actually have an opinion on the topic.

pests 10/28/2024|||
Everyone is equal.

Just because they don't "meet the minimum standard" doesn't mean their view or opinion is irrelevant.

There are people spouting crazy ideas in actual public town squares all the time, and they have done so forever. You don't have to go there and you don't have to listen.

blackeyeblitzar 10/28/2024|||
> they don't meet the minimum standard to actually have an opinion on the topic

Who should judge that and why? I think that’s what makes free speech a basic right in functional democracies - there is no pre judging it. Challenging authority and science is important if we want to seek truth.

In the case of vaccines, for example, people were getting censored for discussing side effects. Myocarditis is now officially acknowledged as a (rare) side effect of the MRNA based COVID vaccines. But not long ago it was labeled as a “conspiracy theory” and you would get banned on Twitter or Reddit for mentioning it.

kaibee 10/29/2024||
> Myocarditis is now officially acknowledged as a (rare) side effect of the MRNA based COVID vaccines.

Sure. But the MRNA vaccines were the first ones to be available and the Myocarditis risk from getting COVID-19 while unvaccinated is still multiple times higher than the risk from the MRNA vaccine. So, all that telling people about the risk of myocarditis does is dissuade some portion of people from getting the MRNA vaccine... which leads to more cases of myocarditis/deaths.

riskable 10/28/2024||
The main point the author is making is that algorithms represent the opinion of the corporation/website/app maker and opinions are free speech. That is, deciding what to prioritize/hide in your feed is but a mere manifestation of the business's opinion. Algorithms == Opinions.

This is a fine argument. The part where I think they get it wrong is the assumption/argument that a person or corporation can't be held accountable for their opinions. They most certainly can!

In Omnicare, Inc. v. Laborers District Council Construction Industry Pension Fund the Supreme Court found that a company cannot be held liable for its opinion as long as that opinion was "honestly believed". Though:

    the Court also held, however, that liability may result if the company omitted material facts about the company's inquiry into, or knowledge concerning, the statement of opinion, and those facts conflict with what a reasonable investor would understand as the basis of the statement when reading it.
(from: https://www.jonesday.com/en/insights/2015/03/supreme-court-c...)

That is, a company can be held liable if it intentionally misled its client (presumably also a customer or user). For that standard to be met, the claimant would have to prove that the company was aware of the facts that proved its opinion wrong and decided to mislead the client anyway.

In the case of a site like Facebook--if Meta was aware that certain information was dangerous/misleading/illegal--it very well could be held liable for what its algorithm recommends. It may seem like a high bar but probably isn't, because Meta is made aware of all sorts of dangerous/misleading information every day but only ever removes/de-prioritizes individual posts and doesn't bother (as far as I'm aware) with applying the same standard to re-posts of the same information. It must be manually reported and reviewed again, every time (though maybe not? Someone with more inside info might know more).

I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents. Since all software is 100% algorithms (aside from comments, I guess) that would mean all software is simply speech and speech isn't patentable subject matter (though the SCOTUS should've long since come to that conclusion :anger:)

fallingknife 10/28/2024||
> That is, a company can be held liable if it intentionally misled its client

But only in a case where it has an obligation to tell the truth. The case you cited was about communication to investors, which is one of the very few times that legal obligation exists.

Furthermore, you would be hard pressed to show that an algorithm is intentionally misleading unless you can show that it has been explicitly designed to show a specific piece of information. And recommendation algorithms don't do that. They are designed to show the user what he wants. And if what he wants happens to be misinformation, that's what he will get.

johnnyanmac 10/28/2024||
Yeah, that's the motte-and-bailey argument about 230 that makes me more skeptical of tech companies by the day:

motte: "we curate content based on user preferences, and are hands off. We can't be responsible for every piece of (legal) content that is posted on your platform .

bailey: "our algorithm is ad-friendly, and we curate content or punish it based on how happy or mad it makes out adverts, the real customers for our service. So if adverts don't like hearing the word "suicide" we'll make creators who want to be paid self-censor".

If you want to take a hands-on approach to what content is allowed at that granular a level, I don't see why 230 should protect you.

>I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents.

I'm sure they'd word it very carefully to prevent that, or limit it only to software defined as "social media".

aidenn0 10/28/2024||
This is the actual reason for s230 existing; without 230, applying editorial discretion could potentially make you liable (e.g. if a periodical uncritically published a libelous claim in its "letters to the editor"), so the idea was to allow some amount of curation/editorial discretion without also making them liable, lest all online forums become cesspools. Aiding monetization through advertising was definitely one reason for doing this.

We can certainly decide that we drew the line in the wrong place (it would be rather surprising if we got it perfectly right that early on), but the line was not drawn blindly.

johnnyanmac 10/29/2024||
I'd say the line was drawn badly. I'm not surprised it was drawn in a way to basically make companies the sorts of lazy moderators that are commonly complained about, all while profiting billions from it.

Loopholes would exist, but the spirit of 230 seemed to be that moderation of every uploaded piece of content was bound not to represent the platform. Enforcing private rules that represent the platform's will seems to go against that point.

Nasrudith 10/29/2024||
Remember your history for one. Most boards at the time were small volunteer operations or side-jobs for an existing business. They weren't even revenue neutral, let alone positive. Getting another moderator depended upon a friend with free time on their hands who hung around there anyway. You have been downright spoiled by multimillion dollar AI backed moderation systems combined with large casts of minimum wage moderators. And you still think it is never good enough.

Your blatant ignorance of history shines further. Lazy moderation was the starting point. The courts fucked things up, as they are wont to do, by making lazy moderation the only way to protect yourself from liability. There was no goddamned way that they could keep up instantly with all of the posts. Section 230 was basically the only constitutional section of a censorship bill and was designed specifically to 'allow moderation' as opposed to 'lazy moderation'. Not having Section 230 means lazy moderation only.

God, people's opinions on Section 230 have been so poisoned by propaganda from absolute morons. The level of knowledge of how moderation works has gone backwards!

johnnyanmac 10/29/2024||
You say "spoiled", I say "ruined". Volunteer moderation of the commons is much different from a platform claiming to deny liability for 99% of content but choosing to more or less take the roles of moderation themselves. Especially with the talks of AI

My issue isn't with moderation quality so much as with claiming to be a commons while in reality managing it as if you're a feudal lord. My point is that they WANT to try to moderate it all now, removing the point of why 230 shielded them.

And insults are unnecessary. My thoughts are my own from some 15 years of observing the landscape of social media dynamics change. Feel free to disagree but not sneer.

echoangle 10/28/2024|
> It being just a suggestion or a recommendation is also important from a legal standpoint: because recommendation algorithms are simply opinions. They are opinions of what content that algorithm thinks is most relevant to you at the time based on what information it has at that time.

Is that really how opinions work in US law? Isn't an opinion something a human has? If Google builds a machine that does something, is that protected as an opinion, even if no human at Google ever looks at it? "Opinion" sounds to me like it's something a human believes, not the approximation a computer generates.

TheCleric 10/29/2024|
Not a lawyer, but that’s how I understand it. In the Citizens United case the Supreme Court gave corporations free speech rights. Included in that right is the right to state an opinion. And when you develop an algorithm that promotes content, you're implicitly saying it is "better" or "more relevant" or however the algorithm ranks it; that's you providing an opinion via that algorithm.
esbranson 10/29/2024||
Not a lawyer either, but I believe regulations on speech, as here, are scrutinized ("strictly") such that regulations are not allowed to discriminate based on content. Moderation is usually based on content. Whether something is fact or not, opinion or not, is content-based.

Citizens United held that criminalizing political films is illegal. Allowing Fahrenheit 9/11 to be advertised and performed but criminalizing Hillary: The Movie is exactly why Citizens United was correct and the illiberal leftists are wrong.
