Posted by hn_acker 10/28/2024
I think the author is wrong in claiming that modern attention-optimizing recommendation algorithms are better than more primitive, poorer recommendation algorithms. Appearing to be more engaging/addictive does not imply more value. It's a measurement problem.
For modern-day businesses it sadly does. But that misalignment over how to define "quality" is part of why we're in this real-time divide over whether social media is good or bad to begin with.
At what point do newspaper clippings arranged together become the work of the arranger and not the individual newspapers? If I take one paragraph from the NYT and one paragraph from the WSJ, am I the author, or are the NYT and WSJ the authors? If I take 16 words in a row from each and alternate, am I the author? If I alternate sentences, am I the author?
At some point, there is a higher-order "creation" of context between individually associated videos played together in a sequence. If I arrange one-minute clips into an hour-long video, I can say something the original authors never intended. If I algorithmically start following up videos with rebuttals, but only rebuttals that support my viewpoint, I am ADDING context by making suggestions (a toy sketch of such a follow-up picker is below). Sure, people can click next, but in my ransom note example above, people can speed-read and skip words as well. Current suggestion algorithms may not be purposely "trying to say something," but they effectively BECOME speakers almost accidentally.
Ignoring that a well-crafted sequence of videos can create new meaning leaves us with a disingenuous interpretation of what suggestion algorithms either are doing or can do. I'm not saying that Google is purposely radicalizing children into, let's say, white nationalists, but there may be something akin to negligence going on if they can always point to a black-box algorithm, one with a mind of its own, as the culprit. Winter v. GP Putnam giving them some kind of amnesty from their own "suggestions" rubs me the wrong way. Designing systems to give people "more of what they want" rubs me the wrong way because it narrows horizons rather than broadening them. That lets me segue into again linking to my favorite internet article ever (the BBC has somehow broken the link, so here is the real link, and an archive: https://www.bbc.co.uk/blogs/adamcurtis/entries/78691781-c9b7... https://archive.ph/RoBjr ). I'm not sure I have an answer, but current recommendation engines are the opposite of it.
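To make the point above concrete, here is a deliberately toy sketch of such a follow-up picker. Everything in it (field names, data shape) is invented for illustration; it is not any real platform's recommender.

```python
# Purely hypothetical sketch -- made-up field names, not any real platform's
# code -- of a "neutral-looking" follow-up picker that only ever surfaces
# rebuttals aligned with one viewpoint.

def pick_next_video(current_video, candidates, preferred_stance):
    # Keep only videos that rebut the one just watched...
    rebuttals = [v for v in candidates if v["rebuts"] == current_video["id"]]
    # ...and among those, only the rebuttals that agree with the arranger's viewpoint.
    aligned = [v for v in rebuttals if v["stance"] == preferred_stance]
    pool = aligned or rebuttals  # fall back if nothing aligned exists
    # Pick whichever remaining video keeps people watching the longest.
    return max(pool, key=lambda v: v["watch_time"], default=None)
```

No single suggestion here "says" anything, but the sequence it produces does.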
230 is about who is NOT liable. Platforms are NOT liable for what they don't moderate, just because they moderated other things. It protects them from imperfect moderation being used to claim endorsement.
Read 230.
https://www.law.cornell.edu/uscode/text/47/230
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
"No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.
230 says you can moderate however you like, and what you choose to leave up doesn't become your own speech through endorsement osmosis.
I agree with 230 to a point, but at some extreme it can be used to misrepresent speech as "someone else's." Similar to how the authors of newspapers wouldn't be the speakers of a ransom note because they contributed one letter or word, and it would be absurd to claim otherwise.
If the arranger is the speaker, restrictions on free speech apply to their newly created context. Accountability applies.
There are better ways of formulating the question that avoid this paradox, such as "what are the necessary and sufficient conditions for editorial intervention to dominate an artifact?"
I suspect they did some kind of page move and archiving that broke it.
Murray basically points out how much communication in general is really just exchanging errors for no reason.
I suspect this is where the idea of "Gell-Mann Amnesia" came from.
~~As an example, there are subreddits like /r/therewasanattempt or /r/interestingasfuck that ban users who post in /r/judaism or /r/israel (there used to be a subreddit, /r/bannedforbeingjewish, that tracked this, but it was banned by reddit admins). This isn't a First Amendment issue, since it's a ban based on identity rather than on the content posted.
According to the US legal system, discrimination based on religion is wrong. I should be able to fix this by complaining to reddit and creating actual knowledge of discrimination. In practice, because there is no contact mechanism for reddit, it's impossible for me to create actual knowledge.~~
edit: I still believe the above behaviour is morally wrong, but it isn't an accurate example of the actual knowledge standard as others have pointed out. I'm leaving it here for context on the follow-up comments.
The TechDirt article doesn't engage with this. It asserts that:
>> Section 230 now has been used to shield tech from consequences for facilitating deadly drug sales, sexual harassment, illegal arms sales and human trafficking. And in the meantime, the companies grew to be some of the most valuable in the world.
> None of those cases would have succeeded under the First Amendment, again because the companies had no actual knowledge of the underlying issues, and thus could not be held liable.
However, if you try to use Facebook's defamation form[1] and select the United States, you get:
> Facebook is not in a position to adjudicate the truth or falsity of statements made by third parties, and consistent with Section 230(c) of the Communications Decency Act, is not responsible for those statements. As a result, we are not liable to act on the content you want to report. If you believe content on Facebook violates our Community Standards (e.g., bullying, harassment, hate speech), please visit the Help Center to learn more about how to report it to us.
This is not why the First Amendment does not apply. The First Amendment does not apply to private entities. It restricts the government…
> Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
At least from a First Amendment standpoint, Reddit can do what it feels like to establish a religion, ban all religious discussion, tell reporters to go to hell, start a news feed, etc. There are other laws they do need to deal with of course.
More commentary on this here:
https://www.cnn.com/2021/01/12/politics/first-amendment-expl...
They're not entitled to discriminate against certain races, ethnicities, or religions though.
https://www.eeoc.gov/religious-discrimination
There's no law that says you can't say, run a website that only allows atheists to participate, or non-Jews, or non-Christians, or non-Muslims, or whatever religion or religious classification that you want.
Discriminating based on race or ethnicity is a different topic entirely. You can choose your religion (it's like, just your opinion, man), but you can't choose your race/ethnicity. There are much more complicated laws in that area.
You've overlooked prohibitions on religious discrimination in public accommodations [1].
If I create a Google chat group and only invite my church members to said group, is Google violating anti-discrimination laws? No.
~~You're~~ The commenter 3 layers above is trying to interpret anti-discrimination laws to cover the actions of users of a service, not the service itself.
You are failing to take into account context. My comment had nothing whatsoever to do with anything about Reddit.
My comment is responding to a comment that asserted that the only laws regarding religious discrimination are for employers potentially discriminating against employees.
I provided an example of a law prohibiting religious discrimination in something other than employment.
Reddit is publicly available even if they require registration, and neither Reddit nor subreddit mods may legally discriminate based on anything covered under the CRA.
https://www.justice.gov/crt/title-ii-civil-rights-act-public...
But subreddit bans are done by users of Reddit, not by Reddit itself. If someone on Xbox Live mutes the chat of Catholics and kicks them from the lobbies they're hosting, you can't go complain to Microsoft because these are the actions of a user not the company.
I don't know why you keep doubling down when you're verifiably, provably wrong. I understand you may not want this to be the case, but it is.
If one friend group is racist and you can't eat dinner at their house, that's qualitatively different than systemic discrimination by the restaurant industry.
In this case, Reddit's platform has enough systemic discrimination that you have to choose between full participation in front-page posts or participation in Jewish communities.
> If 10% of the top subreddits ban anyone that participates in /r/Judaism or /r/Israel, that's a much bigger impact than a ban happy Discord mod.
The impact is not what matters. What matters is that the banning is done by users, not by the company. Non-discrimination laws prohibit businesses from denying service to customers on the basis of protected class. They don't dictate what users of internet platforms do with their block button.
> that's qualitatively different than systemic discrimination by the restaurant industry.
Right, but a restaurant refusing a customer is a business denying a customer. If Discord or Reddit put "We don't do business with X race" in their ToS that's direct discrimination by Reddit. If subreddit moderators ban people because they do or don't belong to a protected class, that's an action taken by users. You're free to create your own /r/interestingasfuckforall that doesn't discriminate.
A bar can't turn away a customer for being Catholic. If a Catholic sits down at the bar, and the people next to him say "I don't want to sit next to a Catholic" and change seats to move away from him, that's their prerogative. Subreddit bans are analogous to the latter.
I edited my original comment because others have pointed out it doesn't run afoul of anti-discrimination laws. You acknowledge that this behaviour is morally wrong, but don't say whether or not a platform should have a responsibility to prevent this behaviour. I believe it should.
While the mechanism by which systemic discrimination occurs is different because it's individual users instead of the business, the impact is the same as businesses/public spaces discriminating against individuals.
This is because common social spaces are barred off to people of certain ethnicities and that means they can't fully engage in civic life.
Here's another question. Let's say a mall is composed entirely of pop up stores that close at the end of every month. These stores invariably ban visible minorities from using their services. Attempts to sue the stores fail, because the store no longer exists by the time you gather the information necessary to sue it. While the mall itself does not promote racial discrimination, it is impossible for visible minorities to shop at the mall.
Should the mall have an obligation to prevent discrimination by its tenants?
I would say "yes". In your bar example, it is still possible for a Catholic to get a drink at the bar. In the mall example, it is impossible for a minority to shop.
On Reddit, if it is effectively impossible to participate while being openly Jewish or Israeli because you are banned on sight from most subreddits, that should be illegal.
Again, this is already illegal because the stores are businesses and they can't deny service on the basis of protected class.
The issue at hand is that the government cannot compel individual users to interact with other users. How would this work? You try to mute someone on Xbox Live and you get a popup, "Sorry, you've muted too many Catholics in the last month, you can't mute this player." Likewise, would Reddit force the moderators to allow posts and comments from previously banned users? And what would prevent their content from just getting downvoted to oblivion and being automatically hidden anyways?
> Likewise, would Reddit force the moderators to allow posts and comments from previously banned users?
In Reddit's case, moderators are using tools that automatically ban users who have activity in specific subreddits (a rough sketch of how such a tool works is below). It's not hidden bias; the bans are obviously because of a person's religion.
So your distinction doesn't exist: this is effectively the business Reddit engaging in discrimination.
Plus when I read Reddit I'm not interacting with a moderator, I'm interacting with a business.
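For anyone unfamiliar with these tools, here is a rough sketch of how a participation-based auto-ban bot of the kind described above might work. The helper names (`fetch_history`, `ban`) are placeholders, not a real moderation API.

```python
# Hypothetical sketch of a participation-based auto-ban bot; helper
# functions are stand-ins for whatever moderation API the bot actually uses.

BAN_IF_ACTIVE_IN = {"judaism", "israel"}  # subreddits that trigger a ban

def should_autoban(history):
    """history: iterable of (subreddit_name, item) pairs from a user's public activity."""
    return any(sub.lower() in BAN_IF_ACTIVE_IN for sub, _ in history)

def on_new_comment(commenter, fetch_history, ban):
    # The ban is keyed entirely off where the user has posted,
    # not what they posted in the moderated subreddit.
    if should_autoban(fetch_history(commenter)):
        ban(commenter, reason="participation in listed subreddits")
```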
An actual analogous example would be if individual people use a tool to block anyone Jewish from seeing their comments and replying to them. It would be pretty racist of course, but not illegal. A subreddit though, is not the personal playing area of a moderator.
> A subreddit though, is not the personal playing area of a moderator.
Oh, yes. Yes it is.
Many of the better communities have well-organized moderation teams. But plenty do not. And the worst offenders display the blunt reality that a subreddit is indeed the plaything of the top mod.
The Unruh Civil Rights Act being discussed does not extend the First Amendment, because the First Amendment does not restrict the actions of businesses; it restricts the actions of Congress and other legislatures.
Freedom of Speech in the Amendment also has specific meaning and does not fully extend to businesses.
https://constitution.findlaw.com/amendment1/freedom-of-speec...
This is misleading. It seems like you're predicating your entire argument on the idea that there is a version of Section 230 that would require platforms to act on user reports of discrimination. But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.
Section 230 immunity doesn't depend on "actual knowledge." The law specifically provides immunity regardless of whether a platform has knowledge of illegal content. Providers can't be treated as publishers of third-party content, period.
It's not that "'actual knowledge' no longer matters," it's that it never mattered. Anti-discrimination law is usually for things like public accommodations, not online forums.
> But you're fundamentally misunderstanding the law's purpose: to protect platforms from liability for user content while preserving their right to moderate that content as they choose.
I understand that this is the purpose of the law, and I disagree with it. Section 230 has led to large platforms outsourcing most of their content to users because it shields the platform from legal liability. A user can post illegal content, engage in discrimination, harassment, etc.
> Anti-discrimination law is usually for things like public accommodations, not online forums.
Anti-discrimination law should be applicable to online forums. The average adult spends more than 2 hours a day on social media. Social media is now one of our main public accommodations.
If one of the most-used websites in the USA has an unofficial policy of discriminating against Jewish people that isn't covered by the current laws as that policy is enforced solely by users, that means the law isn't achieving its objectives of preventing discrimination.
> Anti-discrimination law should
I don't disagree with you. But you must distinguish between what the law does, and what it should do, in your view. Otherwise you are misleading people.
Many religious organizations in the US openly discriminate against people who are not of their religion, from Christian charities and businesses requiring staff to sign contracts stating they agree with/are members of the religion, to Catholic hospitals openly discriminating against non-Catholics based on their own "religious freedom to deny care." One way they can do this is an exemption to discrimination law called a bona fide occupational qualification, which holds that only certain people can do the job.
In a broader sense, any private organization with limited membership (signing up vs. allowing everyone) can discriminate. For example, some country clubs discriminate based on race to this day. One reason for this is that the constitution guarantees "Freedom of Association," which includes the ability to have selective membership.
> The California Supreme Court held that entering into an agreement with an online business is not necessary to establish standing under the Unruh Act. Writing for a unanimous court, Justice Liu emphasized that “a person suffers discrimination under the Act when the person presents himself or herself to a business with an intent to use its services but encounters an exclusionary policy or practice that prevents him or her from using those services,” and that “visiting a website with intent to use its services is, for purposes of standing, equivalent to presenting oneself for services at a brick-and-mortar store.”
I'm not sure which Catholic hospitals refuse non-Catholics care. My understanding is they refuse to provide medical treatments, such as abortion, that go against Catholic moral teachings, and this refusal is applied to everyone.
[1] https://lawyerscommittee.org/wp-content/uploads/2019/12/Onli...
[2] https://harvardlawreview.org/print/vol-133/white-v-square/
Yes.
But there is a legal distinction between a business website that offers goods/services to the public (like an online store), and a social media platform's moderation decisions or user-created communities.
Prager University v. Google LLC (2022)[1] - the court specifically held that YouTube's content moderation decisions didn't violate the Unruh Act. There's a clear distinction between access to services (where public accommodation laws may apply), and content moderation/curation decisions (protected by Section 230).
[1] https://law.justia.com/cases/california/court-of-appeal/2022...
edit: there's another comment chain you might be interested in about whether the federal civil rights act is applicable.
https://bsky.app/profile/mmasnick.bsky.social/post/3l7lnkz6f...
It is math but it’s not “just math”. Pharmaceuticals is chemistry but it’s not “just chemistry”. And that is the framework I think we should be thinking about these with. Instagram doesn’t have a God-given right to flood teen girls’ feeds with anorexia-inducing media. The right is granted by people, and can be revoked.
> Because you can’t demand that anyone recommending anything know with certainty whether or not the content they are recommending is good or bad. That puts way too much of a burden on the recommender, and makes the mere process of recommending anything a legal minefield.
Let’s flag for a moment that this is a value judgement. The author is using “can’t” when they really mean “should not”. I also think it is a strawman to suggest anyone is requiring absolute certainty.
When dealing with baby food manufacturers, if their manufacturing process creates poisoned food, we hold the manufacturer liable. Someone might say it’s unfair to require that a food manufacturer guarantee none of their food will be poisoned, and yet we still have a functioning food industry.
> The whole point of a search recommendation is “the algorithm thinks these are the most relevant bits of content for you.”
Sure. But “relevant” is fuzzy and not quantifiable. Computers like things to be quantifiable. So instead we might use a proxy like clicks. Clicks will lead to boosting clickbait content. So maybe you include text match. Now you boost websites that are keyword stuffing.
If you continue down the path of maximal engagement, somewhere down the line you end up with some kind of cesspool of clickbait and ragebait. But choosing to maximize engagement was itself a choice; it’s not objectively more relevant.
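As a concrete illustration (invented weights and field names, not any real ranking system), the editorial choice lives entirely in which proxy you decide to score:

```python
# Hypothetical ranking sketch: "relevance" becomes whatever proxy you score.
# Field names and weights are invented for illustration only.

def engagement_score(item):
    # Optimizing predicted clicks/watch time ends up rewarding clickbait and ragebait.
    return 0.7 * item["predicted_click_rate"] + 0.3 * item["predicted_watch_time"]

def keyword_score(item, query_terms=("best", "laptop")):
    # Optimizing raw term matches ends up rewarding keyword stuffing instead.
    return sum(item["text"].lower().count(t) for t in query_terms)

def rank(items, scorer):
    # Swap the scorer and the "most relevant" results change entirely;
    # choosing the proxy is itself the choice being made.
    return sorted(items, key=scorer, reverse=True)
```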
> NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment
And it's "Section 230", if you were not parsing the integer value in the title.
The following preserves all info under the title limit: NYT Gets 230 Wrong Again; Misrepresenting History, Law, And The 1st Amendment
> NY Times Gets 230 Wrong Again; Misrepresenting History, Law, And The First Amendment
The author points out that the publisher of a book about mushrooms cannot be held responsible for recommending that people eat poisonous mushrooms. This is OK, I guess, because the author _can_ be held responsible.
If the author had been anonymous and the publisher could not accurately identify who should be responsible, then I would like to live in a society where the publisher _was_ held responsible. I don't think that's unreasonable.
Over on the internet, content is posted mostly anonymously and there is nobody to take responsibility. I think big tech needs to be able to accurately identify the author of the harmful material, or take responsibility themselves.
https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...
You can contrive situations where it falls within the bounds of the law (for instance, if you do it as part of a play in a way that everyone understands it’s not real) but if you interpret it the way it’s meant to be interpreted it’s breaking a law pretty much anywhere.
I don't think this is true at all. If a publisher publishes a book that includes information that is not only incorrect, but actually harmful if followed, and represents it as true/safe, then they would be liable too.
>I know I’ve discussed this case before, but it always gets lost in the mix. In Winter v. GP Putnam, the Ninth Circuit said a publisher was not liable for publishing a mushroom encyclopedia that literally “recommended” people eat poisonous mushrooms. The issue was that the publisher had no way to know that the mushroom was, in fact, inedible.
"We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs."
I sometimes suggest that the internet should start from strongly verifiable identity. You can strip identity in cases where it makes sense, but going the other direction and trying to establish identity after the fact is very hard. When people can be identified, it makes it possible to track them down and hold them accountable if they violate laws. People generally behave better when they are not anonymous.
But this article itself makes mistakes - it does not seem to understand that the First Amendment is about protecting free-speech principles, which are actually much bigger than just what the First Amendment says. The author makes an illogical claim that there is a category of speech that we want to delegitimize and that platforms should be shielded for removing. This is fundamentally opposed to the principles of free speech. Yes, there is the tricky case of spam. But we should not block people based on political views or skepticism about science or anything else that is controversial. The censorship regime of big social media platforms should be viewed as an editorial choice, under law and in principle.
Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.
The First Amendment definitely is not about "free speech principles." It's the first of a short list of absolute restraints on the previous text, which is a description of US government, insisted upon by interests suspicious of federalization under that government. Free speech writ large is good and something to fight for, but the First Amendment is not an ideology, it is law.
The reason (imo) to talk about the First Amendment in terms of these giant social media platforms is simply because of their size, which was encouraged by friendly government acts such as Section 230 in the first place, without which they couldn't scale. Government encouragement and protection of these platforms gives the government some responsibility for them.
> Lastly - large social media platforms are utility communication services and public squares. They need to be regulated and treated like a government agency, restricting their ability to ban users and content. After all, so much of today’s speech is on these platforms. Not being able to share your ideas there is similar to not having free speech rights at all.
This is all well and good, but maybe the place to rehash a debate about whether vaccines work is not in fact, the center town square. I would say that a person who has no idea about any of the underlying science and evidence, but is spreading doubt about it anyway (especially while benefiting financially), is not in fact, 'sharing their ideas', because they don't meet the minimum standard to actually have an opinion on the topic.
Just because they don't "meet the minimum standard" doesn't mean their view or opinion is irrelevant.
There are people spouting crazy ideas in actual public town squares all the time, and they have done so forever. You don't have to go there and you don't have to listen.
Who should judge that, and why? I think that’s what makes free speech a basic right in functional democracies - there is no pre-judging it. Challenging authority and science is important if we want to seek truth.
In the case of vaccines, for example, people were getting censored for discussing side effects. Myocarditis is now officially acknowledged as a (rare) side effect of the mRNA-based COVID vaccines. But not long ago it was labeled a “conspiracy theory” and you would get banned on Twitter or Reddit for mentioning it.
Sure. But the mRNA vaccines were the first ones to be available, and the myocarditis risk from getting COVID-19 while unvaccinated is still multiple times higher than the risk from the mRNA vaccine. So all that telling people about the risk of myocarditis does is dissuade some portion of people from getting the mRNA vaccine... which leads to more cases of myocarditis/deaths.
This is a fine argument. The part where I think they get it wrong is the assumption/argument that a person or corporation can't be held accountable for their opinions. They most certainly can!
In Omnicare, Inc. v. Laborers District Council Construction Industry Pension Fund, the Supreme Court found that a company cannot be held liable for its opinion as long as that opinion was "honestly believed." Though:
the Court also held, however, that liability may result if the company omitted material facts about the company's inquiry into, or knowledge concerning, the statement of opinion, and those facts conflict with what a reasonable investor would understand as the basis of the statement when reading it.
(from: https://www.jonesday.com/en/insights/2015/03/supreme-court-c...) That is, a company can be held liable if it intentionally misled its client (presumably also a customer or user). For that standard to be met, the claimant would have to prove that the company was aware of the facts that proved its opinion wrong and decided to mislead the client anyway.
In the case of a site like Facebook--if Meta was aware that certain information was dangerous/misleading/illegal--it very well could be held liable for what its algorithm recommends. It may seem like a high bar but probably isn't, because Meta is made aware of all sorts of dangerous/misleading information every day but only ever removes/de-prioritizes individual posts and doesn't bother (as far as I'm aware) with applying the same standard to re-posts of the same information. It must be manually reported and reviewed again, every time (though maybe not? Someone with more inside info might know more).
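For what it's worth, "applying the same standard to re-posts" could, mechanically, be as simple as remembering what was already actioned. A minimal sketch, assuming exact duplicates and an in-memory store; this is not Meta's actual pipeline, and real systems would need normalization or perceptual hashing:

```python
# Minimal sketch, assuming exact re-posts; not any real platform's system.
import hashlib

actioned_hashes = set()  # fingerprints of content already reviewed and removed

def fingerprint(text: str) -> str:
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def mark_removed(text: str) -> None:
    actioned_hashes.add(fingerprint(text))

def is_known_repost(text: str) -> bool:
    # If this returns True, the platform arguably already has "actual
    # knowledge" of an identical post it previously chose to remove.
    return fingerprint(text) in actioned_hashes
```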
I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents. Since all software is 100% algorithms (aside from comments, I guess) that would mean all software is simply speech and speech isn't patentable subject matter (though the SCOTUS should've long since come to that conclusion :anger:)
But only in a case where it has an obligation to tell the truth. The case you cited was about communication to investors, which is one of the very few times that legal obligation exists.
Furthermore, you would be hard pressed to show that an algorithm is intentionally misleading unless you can show that it has been explicitly designed to show a specific piece of information. And recommendation algorithms don't do that. They are designed to show the user what he wants. And if what he wants happens to be misinformation, that's what he will get.
motte: "We curate content based on user preferences and are hands-off. We can't be responsible for every piece of (legal) content that is posted on our platform."
bailey: "Our algorithm is ad-friendly, and we curate content or punish it based on how happy or mad it makes our advertisers, the real customers for our service. So if advertisers don't like hearing the word 'suicide,' we'll make creators who want to be paid self-censor."
If you want to be that hands-on about what content is allowed, at that granular a level, I don't see why 230 should protect you.
>I'd also like to point out that if a court sets a precedent that algorithms == opinions it should spell the end of all software patents.
I'm sure they'd word it very carefully to prevent that, or limit it only to software defined as "social media".
We can certainly decide that we drew the line in the wrong place (it would be rather surprising if we got it perfectly right that early on), but the line was not drawn blindly.
Loopholes would exist, but the spirit of 230 seemed to be that moderation decisions about individual pieces of uploaded content were bound not to represent the platform. Enforcing private rules that represent the platform's will seems to go against that point.
Your blatant ignorance of history shines further. Lazy moderation was the starting point. The courts fucked things up, as they are wont to do, by making lazy moderation the only way to protect yourself from liability. There was no goddamned way that platforms could keep up instantly with all of the posts. Section 230 was basically the only constitutional section of a censorship bill and was designed specifically to 'allow moderation,' as opposed to forcing 'lazy moderation.' Not having Section 230 means lazy moderation only.
God, people's opinions on Section 230 have been so poisoned by propaganda from absolute morons. The level of knowledge of how moderation works has gone backwards!
My issue isn't with moderation quality so much as with claiming to be a commons but in reality managing it as if you're a feudal lord. My point is that they WANT to try to moderate it all now, removing the point of why 230 shielded them.
And insults are unnecessary. My thoughts are my own from some 15 years of observing the landscape of social media dynamics change. Feel free to disagree but not sneer.
Is that really how opinions work in US law? Isn't an opinion something a human has? If Google builds a machine that does something, is that protected as an opinion, even if no human at Google ever looks at it? "Opinion" sounds to me like it's something a human believes, not the approximation a computer generates.
Citizens United held that criminalizing political films is unconstitutional. Allowing Fahrenheit 9/11 to be advertised and shown but criminalizing Hillary: The Movie is exactly why Citizens United was correct and the illiberal leftists are wrong.