Posted by computerliker 3 hours ago
The idea would be that devices could "opt in" to safety rather than opt out. Allow parents to purchase a locked-down device that always includes a "kids" flag whenever it requests online information, and simply require online services to not provide kid-unfriendly information if that flag is included.
I know a lot of people believe that this is all just a secret ploy to destroy privacy. Personally, I don't think so. I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.
Even better, make the flags granular: <recommended age>, <content flag>, <source>, <type>
13+, profane language, user, text
17+, violence, self, video
18+, unmoderated content, user, text
13+, drug themes, self, audio
and so on...
Foreign sites, and places that aren't trying to publish things for children? The default state should be unrated content, served to consumers (adults) prepared to see the content they asked for.
0+, kid friendly, self, interactive content
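As a sketch of how a locked-down device could enforce such granular ratings entirely client-side (the tuple format and field names here are just the ones proposed above, not any real standard, and the function names are hypothetical):

```python
# Hypothetical rating tuple: (minimum age, content flag, source, media type),
# e.g. "13+, profane language, user, text". A "kids" device filters responses
# locally against a parent-chosen policy, so the server never learns the
# user's age.

def parse_rating(header: str):
    """Parse a rating like '13+, profane language, user, text'."""
    age_part, flag, source, media = [p.strip() for p in header.split(",")]
    min_age = int(age_part.rstrip("+"))
    return min_age, flag, source, media

def allowed(header: str, max_age: int, blocked_flags: set) -> bool:
    """True if the content passes this device's local policy."""
    min_age, flag, _source, _media = parse_rating(header)
    return min_age <= max_age and flag not in blocked_flags

# A parent who doesn't mind swearing but blocks drug themes:
policy = {"drug themes"}
print(allowed("13+, profane language, user, text", max_age=13, blocked_flags=policy))  # True
print(allowed("13+, drug themes, self, audio", max_age=13, blocked_flags=policy))      # False
```

The point of doing the check on the device rather than the server is exactly the granularity complained about below: each household picks its own `max_age` and `blocked_flags` instead of accepting one global definition of "kid-friendly".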
It doesn't even matter if you can get something that technically works. Half the "age appropriate" content targeted at children is horrifying brainrot. Hardcore pornography would be less damaging to them.
Just supervise your damn children, people.
Also, not all 13-year-olds are at the same level of maturity, or ready for the same material. I find it very annoying that I can’t just set limits like: no drug references, but idgaf about my kid hearing swear words.
On other machines: I do not want certain content to ever be displayed on my work machine. I’d like to have the ability to set that. Someone with a particular background may not want to see things like children in danger. This could even be applied to their Netflix algorithm. The website Does the Dog Die does a good job of categorizing these kinds of content.
- It's much easier for web sites to implement, potentially even on a page-by-page basis (e.g. using <meta> tags).
- It doesn't disclose whether the user is underage to service providers.
- As mentioned, it allows user agents to filter content "on their own terms" without the server's involvement, e.g. by voluntarily displaying a content warning and allowing the user to click through it.
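A minimal sketch of that page-by-page approach: the site declares a rating in a `<meta>` tag and the user agent reads it and decides client-side what to do. The `content-rating` name is an assumption for illustration; no such standard tag exists today.

```python
# Hypothetical page-level rating via a <meta> tag, e.g.:
#   <meta name="content-rating" content="17+, violence, self, video">
# The tag name "content-rating" is invented for this sketch.
from html.parser import HTMLParser

class RatingFinder(HTMLParser):
    """Extract the first content-rating <meta> tag from a page."""
    def __init__(self):
        super().__init__()
        self.rating = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "content-rating":
            self.rating = a.get("content")

page = ('<html><head>'
        '<meta name="content-rating" content="17+, violence, self, video">'
        '</head><body>...</body></html>')
finder = RatingFinder()
finder.feed(page)
print(finder.rating)  # 17+, violence, self, video
```

Once the user agent has the rating string, it can filter silently, show a click-through warning, or ignore it entirely, all without the server ever learning who is browsing.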
That's why I have a hard time crediting the theory that today's proposals are just harmlessly clueless and well intentioned (as dynm suggests). There are many possible ways to make a child-safe internet, and it's been a concern for a long time. But just in the last year there are simultaneous pushes in many regions to enact one specific technique, which just happens to pipe a ton of money to a few shady companies, eliminate general-purpose computing, and be tailor-made for social control and political oppression; on top of that, it isn't even any better at keeping porn away from kids! I think Hanlon's razor has to give way to Occam's here; malice is the simpler explanation.
This could pretty easily be solved by just giving sites some incentive to actually provide a rating.
Some people have enough self control to do that and quit cold turkey. Other people don't even consciously realize what they are doing as they perform that maladaptive action without any thought at all, akin to scratching a mosquito bite.
If someone could figure out why some people are more self-aware than others, a whole host of the world's problems would be better understood.
But I strongly prefer my solution!
Parent's proposal is better in that it would only take away general purpose computing from children rather than from everyone. A sympathetic parent can also allow it anyway, just like how a parent can legally provide a teen with alcohol in most places. As a society we generally consider that parents have a right to decide which things are appropriate for their children.
I wouldn't say it's a lack of understanding, but that any compromise is seen as weakness by other members of their party. That needs to end.
Give it 20+ years and you'll be called a kook for thinking otherwise.
Maybe I will have more energy for it tomorrow. I've been through this probably a couple dozen times on HN, and I don't have the energy to go through the whole rigmarole today, because usually it results in 2-3 days of someone fiercely disagreeing down some long chain; in the end I provide all the evidence, and by that point no one is paying attention, and it becomes a pyrrhic victory where I get drained dry just for no one to give a shit. I should probably consolidate it into a blog post or something.
It's incredibly sad as an optimistic person trying to find any silver lining here.
William Tong, Anne E. Lopez, Dave Yost, Jonathan Skrmetti, Gwen Tauiliili-Langkilde, Kris Mayes, Tim Griffin, Rob Bonta, Phil Weiser, Kathleen Jennings, Brian Schwalb, Christopher M. Carr, Kwame Raoul, Todd Rokita, Kris Kobach, Russell Coleman, Liz Murrill, Aaron M. Frey, Anthony G. Brown, Andrea Joy Campbell, Dana Nessel, Keith Ellison, Lynn Fitch, Catherine L. Hanaway, Aaron D. Ford, John M. Formella, Jennifer Davenport, Raúl Torrez, Letitia James, Drew H. Wrigley, Gentner Drummond, Dan Rayfield, Dave Sunday, Peter F. Neronha, Alan Wilson, Marty Jackley, Gordon C. Rhea, Derek Brown, Charity Clark, and Keith Kautz
--
Always operate under the assumption that the people serve the state, not the other way around. There are some names in that list that are outwardly infamous for this behavior, and none are surprising considering what type of person seeks to be an AG. Maybe fighting fire with fire is appropriate: no such thing as a private life for any of these people, all their communications open to the public 100% of the time, with precisely zero exceptions. It's only fair, considering that is their goal for everyone not of the state.
The worst that can happen is you don't change things.
The best? Maybe you'll find a receptive ear. Your lawmaker stops co-sponsoring KOSA. Your state AG stops pushing for it.
You need to make it easier for your lawmakers to be on that list too. Show them there are people who won't rake them over the coals for bowing out.
Putting the conspiracy hat on: the exploit is to direct as many installed AGs as possible to push for such bills, with no big letdown if they don't pass. Why? Because the demographics on dissent are valuable, and are passed to a hostile federal government.
So the worst that can happen could be worse than nothing.
[] https://www.aclu.org/press-releases/department-of-homeland-s...
No.
I said your state Attorney General's office and your elected federal Senators and members of the House.
So I reiterate: the worst that can happen is that you don't change where things are headed.
The best? Your elected officials bow out of this.
"Many social media platforms deliberately target minors, fueling a nationwide youth mental health crisis."
"These platforms are intentionally designed to be addictive, particularly for underaged users, and generate substantial profits by monetizing minors’ personal data through targeted advertising. These companies fail to adequately disclose the addictive nature of their products or the well-documented harms associated with excessive social media use. Increasing evidence demonstrates that these companies are aware of the adverse mental health consequences imposed on underage users, yet they have chosen to persist in these practices. Accordingly, many of our Offices have initiated investigations and filed lawsuits against Meta and TikTok for their role in harming minors."
Yet the companies aren't being regulated, nor are the algorithms, the marketing, or even their existence. It's the users that are the problem, so everyone has to submit their identity to use the Internet if this passes.
Here's the actual title of the article, which is much more concerning than the HN title.
That doesn't mean they should get what they might want, or that it's Constitutional.
"The attorneys general argue that social media companies deliberately design products that draw in underage users and monetize their personal data through targeted advertising. They contend that companies have not adequately disclosed addictive features or mental health risks and point to evidence suggesting firms are aware of adverse consequences for minors."
Okay, so why aren't they going after the social media companies?