Posted by mrjaeger 4 hours ago
Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like Geocities (back in the day), where the platform provider isn't liable for the content as long as they satisfy certain requirements about having processes for taking down content when required. A bit like the Cox decision today: you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.
A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.
I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.
Example: let's just say that you, as Twitter, FB, IG, or YouTube, were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.
How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.
I believe that all these platforms will end up being treated like publishers for this reason.
So, with today's ruling about platforms creating addiction, (IMHO) it's no different from surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different from changing their views on something.
I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box but every behavior by an algorithm represents a choice made by humans who created that algorithm.
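To make the point concrete, here's a minimal, purely hypothetical feed-ranking sketch. Every name and number in it is an assumption I've invented for illustration (no real platform's code looks like this), but it shows why "the algorithm" is never neutral: every weight and boost below is a value some human chose.

```python
# Hypothetical ranking sketch. All weights and field names are invented
# for illustration; none of this reflects any real platform's code.

def score(post):
    # A human decided engagement counts 3x more than recency,
    # and that a per-topic multiplier exists at all. Neither is "neutral".
    ENGAGEMENT_WEIGHT = 3.0
    RECENCY_WEIGHT = 1.0
    TOPIC_BOOST = {"favored_topic": 2.0, "suppressed_topic": 0.25}

    base = (ENGAGEMENT_WEIGHT * post["engagement"]
            + RECENCY_WEIGHT * post["recency"])
    return base * TOPIC_BOOST.get(post["topic"], 1.0)

def rank_feed(posts):
    # Sort descending by score: the "editorial" decision happens here.
    return sorted(posts, key=score, reverse=True)
```

Change `TOPIC_BOOST` and otherwise-identical posts swap places in the feed, which is exactly the kind of deliberate choice the comment above is describing.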
> (c) Protection for “Good Samaritan” blocking and screening of offensive material
> (1) Treatment of publisher or speaker
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This is a protection for being a platform for third-party (including user-generated) content.
Some more discussion on this distinction [2]:
> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.
It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".
By the way, my view here isn't a fringe view [3]:
> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.
This is exactly my view.
[1]: https://www.law.cornell.edu/uscode/text/47/230
[2]: https://bipartisanpolicy.org/article/section-230-online-plat...
[3]: https://www.naag.org/attorney-general-journal/the-future-of-...
Just look at the Cox decision from the Supreme Court today. As long as the (Internet) service isn't designed for or sold as a method of downloading copyrighted material, the provider isn't responsible for any actions by its users. In other words, intent matters.
I find that technical people really get stuck on this aspect of the law. They look for technical compliance or an absolute proof standard because we're used to doing things like proving something works mathematically. But the law is subjective and holistic. It looks at the totality of evidence and applies a subjective test.
And intent here is fairly easy to establish. We could take an issue like Russia and look at all the posts and submissions and see how many views and interactions those posts got. We then divide them into pro-Russian and pro-Ukraine and establish a clear bias. We also look at any modifications made to the algorithm to achieve those goals.
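A sketch of the tallying step described above, in Python. The field names (`side`, `views`) and the two-way split are my own assumptions for illustration; a real analysis would also need a defensible way to classify posts in the first place.

```python
# Illustrative sketch of the bias test described above: tally total
# impressions by side and report one side's share. Field names are
# assumptions, not any real platform's API.

def impression_share(posts, side="pro_russia"):
    totals = {}
    for p in posts:
        totals[p["side"]] = totals.get(p["side"], 0) + p["views"]
    total = sum(totals.values())
    if total == 0:
        return None
    return totals.get(side, 0) / total

posts = [
    {"side": "pro_russia", "views": 900},
    {"side": "pro_ukraine", "views": 100},
]
share = impression_share(posts)  # 0.9
```

If the pool of *submitted* posts were roughly balanced but impressions came out 90/10, that skew would point at the ranking rather than the users, which is the kind of totality-of-evidence showing the comment describes.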
This is nothing like Cloudflare DDoS protection.
I’ve argued in the past that the right way to create the change in corporations we want is to change the laws, and people have made valid points that Congress has basically given up on doing that. But even so, civil cases with fines don’t seem like the way to make lasting change. In the analogous tobacco fights, there are LAWS that regulate tobacco company behavior as a result. The civil case here isn’t going to result in any law.

So what are companies supposed to do? Tiptoe around some ill-defined social boundary and hope they don’t get sued? Because apparently the defense of, “no, I didn’t target that person and I didn’t break any laws” is still going to get you fined. What happens when a company from a conservative location gets sued in a liberal location for causing a social ill? Oh, we’re cool with that. But what if a company from a liberal location gets sued in a conservative location for the same thing? Oh, maybe we don’t like that as much.

I’m taking the libertarian side here. I know plenty of people who don’t watch TV, don’t use Facebook, and I know plenty of people who recognized that they were spending too much time on digital platforms and decided to quit or cut back. So a healthy person can self-regulate on these apps; I’ve seen it and done it. I’m just not sure how much responsibility Meta and YouTube bear in my mind.

If they’re getting fined $3M plus some TBD punitive amount, are we saying that this 20-year-old person lost out on earning that much money in their life, or would need to spend $3M on therapy, because of Meta or YouTube? It feels a little steep of a fine for one person.
If Meta and YouTube really were/are making addictive products, wouldn’t a lot more people be harmed? Shouldn’t this be a class action suit where anyone with mental trauma or depression can be included?
I don’t know the details of the case, but I highly doubt that this one plaintiff was targeted specifically, and I doubt their case is that unique. I read tons of news articles about cyber bullying, depression, suicide attempts, and tech addiction. Does every one get to sue Meta and YouTube for $3M now?
If I sell you a gizmo, and I know, or should know, that using the gizmo could seriously harm you, and I don't tell you or do anything about it, I am liable for damages you incur.
Jury finds Meta liable in case over child sexual exploitation on its platforms
Even if they do what you're saying, lots of people who've used any Meta property in the last 15 years have a potentially viable case, and no amount of future work can swat those away.
Maybe the social media companies could do more to combat all of this. They certainly earn a level of profit, relative to what they provide the average person, that makes people squirm.
But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction? It's like saying cable television is responsible for people who binge watch TV.
It's hard to square this circle while sports gambling apps and Polymarket / Kalshi are tearing through the landscape right now with no real pushback
Yes? Is there an algorithm or not?
What about the "infinite" broadcasts found on all television channels?
This is ridiculous and pathetic.