Posted by mrjaeger 3 hours ago

Meta and YouTube Found Negligent in Landmark Social Media Addiction Case(www.nytimes.com)
213 points | 90 comments
mikece 2 hours ago|
A good time to (re-)recommend the movie "The Social Dilemma".
dlcarrier 2 hours ago||
This is the kind of stuff that is causing them to push for mandatory identity verification laws. If they are being held liable for the desires of their users, they're being forced to micromanage the affairs of their customers, which precludes anonymous usage.
13415 1 hour ago||
Not only that, in my opinion the many positive reactions to this decision are a sign of a decline of personal responsibility and a desire of people to be managed by the government and treated like cattle. Blaming everyone else but themselves for personal problems and failures has become the default for many people.
intended 1 hour ago||
Meta is not pushing for mandatory age verification laws. They are pushing for age verification burdens to be pushed to the OS / App Store layer.
ChrisArchitect 3 hours ago||
Notably a different case from the other one in New Mexico:

Jury finds Meta liable in case over child sexual exploitation on its platforms

https://news.ycombinator.com/item?id=47509984

SpicyLemonZest 2 hours ago|
And one with much deeper implications on how they operate. It's easy for Meta to just hire more moderators or treat reports of exploitation with higher priority; if this verdict stands, I think they have no realistic choice but to abandon usage targets.
aprilthird2021 2 hours ago||
Realistically they will hire expensive lawyers, pay out hundreds of millions to billions in settlements, fire lots of people (workforce is predominantly American), etc.

Even if they do what you're saying, lots of people who've used any Meta property in the last 15 years have a potentially viable case, and no future change can swat those away

jmyeet 2 hours ago||
I believe social media is on a collision course with an iceberg called Section 230.

Broadly speaking, Section 230 differentiates between publishers and platforms. A platform is like Geocities (back in the day) where the platform provider isn't liable for the content as long as they satisfy certain requirements about having processes for taking down content when required. A bit like the Cox decision today, you're broadly not responsible for the actions of people using your service unless your service is explicitly designed for such things.

A publisher (in the Section 230 sense) is like any media outlet. The publisher is liable for their content but they can say what they want, basically. It's why publishers tend to have strict processes around not making defamatory or false statements, etc.

I believe that any site that uses an algorithmic news feed is, legally speaking, a publisher acting like a platform.

Example: let's just say that you, as Twitter, FB, IG or Youtube were suddenly pro-Russian in the Ukraine conflict. You change your algorithm to surface and distribute pro-Russian content and suppress pro-Ukraine content. Or you're pro-Ukrainian and you do the reverse.

How is this different from being a publisher? IMHO it isn't. You've designed your algorithm knowingly to produce a certain result.

I believe that all these platforms will end up being treated like publishers for this reason.

So, with today's ruling about platforms creating addiction, (IMHO) it's no different from surfacing content. You are choosing content to produce a certain outcome. Intentionally getting someone addicted is functionally no different from changing their views on something.

I actually blame Google for all this because they very successfully sold the idea that "the algorithm" ranks search results like it's some neutral black box, when in fact every behavior of an algorithm represents a choice made by the humans who created it.

timdev2 1 hour ago|
Why do you believe that "Section 230 differentiates between publishers and platforms"?
jmyeet 49 minutes ago||
Section 230(c)(1) [1]:

> (c) Protection for “Good Samaritan” blocking and screening of offensive material

> (1) Treatment of publisher or speaker

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

This is a protection for being a platform for third-party (including user-generated) content.

Some more discussion on this distinction [2]:

> Section 230’s legal protections were created to encourage the innovation of the internet by preventing an influx of lawsuits for user content.

It goes on to talk about publishers, distributors and Internet Service Providers, the last of which I characterize as "platforms".

By the way, my view here isn't a fringe view [3]:

> One argument advanced by those who want to limit immunity for platforms is that these algorithms are a form of content creation, and should therefore be outside the scope of Section 230 immunity. Under this theory, social media companies could potentially be held liable for harmful consequences related to content otherwise created by a third party.

This is exactly my view.

[1]: https://www.law.cornell.edu/uscode/text/47/230

[2]: https://bipartisanpolicy.org/article/section-230-online-plat...

[3]: https://www.naag.org/attorney-general-journal/the-future-of-...

Dracophoenix 22 minutes ago||
This isn't good reasoning. According to your analysis, any website, ISP, or hosting provider that uses a firewall or Cloudflare is by definition a publisher, since they algorithmically shape traffic to block suspicious IP addresses from accessing the site.
jmyeet 13 minutes ago||
Not at all. Intent matters. Is Cloudflare trying to shape user behavior or push a particular position or content? No.

Just look at the Cox decision from the Supreme Court today. As long as the (Internet) service isn't designed for or sold as a method of downloading copyrighted material, the provider isn't responsible for any actions by its users. In other words, intent matters.

I find that technical people really get stuck on this aspect of the law. They look for technical compliance or an absolute proof standard because we're used to doing things like proving something works mathematically. But the law is subjective and holistic. It looks at the totality of evidence and applies a subjective test.

And intent here is fairly easy to establish. We could take an issue like Russia and look at all the posts and submissions and see how many views and interactions those posts got. We then divide them into pro-Russian and pro-Ukraine and establish a clear bias. We also look at any modifications made to the algorithm to achieve those goals.

This is nothing like Cloudflare DDoS protection.
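The bias test described above can be sketched as a simple calculation. This is a hypothetical illustration only: the post data and stance labels are assumed inputs, not anything a platform actually exposes.

```python
# Hypothetical sketch: measure whether a feed's distribution skews
# exposure toward one stance. Posts are (stance, impressions) pairs;
# the stance labels are assumed to come from some external classification.
def exposure_share(posts, stance):
    """Fraction of total impressions that went to posts with `stance`."""
    total = sum(views for _, views in posts)
    if total == 0:
        return 0.0
    return sum(views for s, views in posts if s == stance) / total

# Toy data: equal numbers of posts per side, but the feed gave
# one side far more reach.
feed = [("pro_a", 900), ("pro_a", 800), ("pro_b", 100), ("pro_b", 200)]
print(exposure_share(feed, "pro_a"))  # 0.85 -> a clear distribution skew
```

The point of the sketch is that the skew is a property of the distribution decisions, not of the underlying content mix.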

Handy-Man 2 hours ago||
IMO, parents share just as much blame here, if not more. Giving your kids independence doesn't mean being oblivious to what they're doing online. Too many parents confuse hands-off parenting with not parenting at all.
bluedevil2k 2 hours ago||
Have you met kids? They’re devious, tech knowledgeable, and scheming and can find ways around any rule. Plus, no matter how good of a parent you are, you’re somewhat at the mercy of their friends’ parents as well. I can block TikTok from my daughter’s phone, but can’t block her from watching her friend’s phone while she’s out of the house.
intended 1 hour ago||
I don't think parents going up against psychologists, data scientists, product managers, and software engineers with the best pay in the world is any kind of fair fight.
2OEH8eoCRo0 2 hours ago||
Huge if upheld. This was the bellwether case for thousands of other similar cases.
apopapo 2 hours ago||
Will they also find liable all the companies that produce addictive food by injecting sugar into everything?

What about the "infinite" broadcasts found on all television channels?

This is ridiculous and pathetic.

btmiller 1 hour ago|||
A full sentence answer for you: yes.
BoredPositron 2 hours ago|||
In other countries that's the case, so I don't know why it shouldn't be applicable in the US.
richwater 1 hour ago||
Please provide proof that other countries apply punitive damages to food companies knowingly adding sugar to food
BoredPositron 1 hour ago||
8 countries in Europe, 4 in South America and 3 in Asia have an added sugar tax. So yes they did.
pixl97 2 hours ago||
"Libertarian demands companies have unlimited freedom until a corporation with unlimited freedom repeatedly eats their face with no consequences, wonders why the face eating leopards they voted for are actually allowed"
aprilthird2021 2 hours ago|
I can't help but feel these are "revenge" verdicts. Public perception of these companies is dirt low, and there are so few levers the average person has to change what they feel is an increase in atomization, loneliness, breakdown of civic discourse, Cambridge Analytica level political targeting, misinformation, etc.

Maybe the social media companies could do more to combat all these. They certainly have a level of profit compared to what they provide to the average person that makes people squirm.

But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction? It's like saying cable television is responsible for people who binge watch TV.

It's hard to square this circle while sports gambling apps and Polymarket / Kalshi are tearing through the landscape right now with no real pushback

bitwank 1 hour ago|
>But does anyone believe for a second that YouTube is responsible for a person's internet / video watching addiction?

Yes? Is there an algorithm or not?
