Posted by smartmic 16 hours ago
In the short term, the way it will be implemented is this: age verification will not be binary; it will also want to push your DoB, name, location, etc. They say "the choice is with the user," but the default will be to send everything. Very soon there will be services that require a DoB or name or something else to gate new or existing functionality. That is the slippery slope it will be built on, and that is how they win the game.
No chips for you, random government. No chips for you either, or you. And you.
(Spring is nearly here and my excuse about my rig also heating my house is about to end. Soon I will be paying extra to run my a/c as my rig pumps out a steady 1000w under load.)
Of course you're probably going to say something about how guns and bank accounts are crucial components of crime, in which case the same holds for AI in the Mexican telecoms hack.
That sounds reasonable. A bank can just be an institution that holds money for people; they don't need to be all over their customer's business. It is like a telecom not being responsible for what their customers say. In a simple sense banks don't need KYC.
Nope. That is a storage locker. A bank uses the money it holds for other purposes, such as loans or its own investments, possibly returning interest to the depositor. But, most importantly, a bank disburses money. It therefore needs to know who deposits what so that it doesn't eventually release funds to the wrong person. And then there are the lengthy procedures for handing out money without customer permission. People die. Governments garnish wages. Courts order payments for child support. If you hold money you have to be prepared for this stuff. So you need to be absolutely confident in the identity of everyone you deal with.
Want a simple bank? A bank that doesn't ask for ID? Keep your cash under your mattress. Or put it all in a crypto wallet.
Given that the criminals aren't going to be using the banks it would make sense for the banks to not have mandatory administrative overhead that is easy to avoid.
> Nope. That is a storage locker.
Again, sounds good to me. Let people have a storage locker with a plastic debit card attached. If people had the option of a bank that was a little bit more responsible and didn't roll the dice of total collapse every financial crisis there'd be many that would go for that. Prepper types for example. The discourse glosses over how crazy it is that full-reserve or near-full-reserve banks are soft-banned.
Once a common technology that everyone has access to becomes powerful enough to alter the lives of others on command, do we as a society just need to do away with the concept of anonymity? Are we all just too powerful in isolation, and too much of a threat to the collective, to reasonably expect not to have some governing body watching at all times?
Today, you can buy parts/print a completely untraceable firearm, so do we license sales of steel tubing and 3D printers?
Considering most places do direct deposit, and that requires a bank account (so KYC), I don't see what's particularly new here. Many places also do background and/or work eligibility checks, which again is a form of KYC.
>Today, you can buy parts/print a completely untraceable firearm, so do we license sales of steel tubing and 3D printers?
Fortunately, 3D-printed guns are bad enough that it's not really an issue, although the bigger threat is probably CNC machines. However, those will probably get a pass, because they're eye-wateringly expensive compared to black-market guns; nobody would bother.
Slippery slope is a fallacy, they said.
> Many places also do background and/or work eligibility checks, which again is a form of KYC.
Except that it isn't KYC at all, both because employees aren't customers (most people are the employees of one company but the customers of hundreds or more), and because the majority of people don't have that requirement imposed on them by the government. There are many jobs you can get without a background check.
The issue with centrally registering guns is that when your country is taken over by hostile forces (whether an invading army or a democratically elected abuser who turns it into a dictatorship), they know who has the guns and can force those people to surrender them (politely at first; authoritarians always use a salami-slicing technique).
The issue with no controls is that even anti-social and mentally ill people can get them.
I wonder if the right middle ground could be:
- Sellers have to do their due diligence - require ID, proof of psychological examination, whatever else is deemed the right balance.
- Not doing due diligence means they get punishment equal to that for any offense committed with that gun.
- They might be required to mark/stamp the gun so that it can be traced back to them or have witnesses for the transfer.
The first is the mentally ill. Intuitively it seems desirable to say that someone undergoing treatment for e.g. depression shouldn't buy a gun. The problem here is the massive perverse incentive. If you're pretty depressed but you're not inclined to forfeit your ability to buy firearms, you now have a significant incentive to avoid seeking treatment. At which point you can still buy a gun, but now your mental illness is going untreated, which is worse than where we started.
The second is career criminals, i.e. people who have already been convicted of a crime and want to commit another one. The problem here is that career criminals... don't follow laws. If they want a gun they steal one or recruit someone without a criminal record into their gang etc., both of which are actually worse than just letting them buy one.
On top of that, when people get caught, prosecutors generally try to get them to testify against other criminals in exchange for a deal, who are then going to be pretty mad at them. Which gives them a much higher than average legitimate need to exercise their right to self-defense once they get back out. And then you get three independent bad outcomes: If they can't defend themselves they get killed for snitching, if they acquire a gun anyway so they don't then they could go back to prison even if they were otherwise trying to reform themselves, and if they think about this ahead of time or are advised of it by their lawyers then they'll be less likely to cooperate with prosecutors because the other two scenarios that are both bad for them only happen if they snitch.
Meanwhile the proposal was only ever expected to address a minority of the problem to begin with because plenty of the people who do bad things can pass the background check. And if you have a policy that doesn't even solve most of the original problem while creating several new ones, maybe it's just a bad idea?
Occasionally, gun owners are THE hostile force, buying guns explicitly to bully and threaten. But that is about it, really.
One glance and I have your full name, home address, SSN, all online handles and aliases, employment history, email, and phone number, instantaneously on a HUD. It doesn't even need to be marketed as "doxxing as a service;" it can just be marketed as "professional networking" or "social media." That way people will voluntarily submit their information and all rights over it to the platform.
Until people feel their privacy being viscerally raped on a minute to minute basis nothing will change.
1> Auto-nude. Today we can "nudify" photos and videos. Soon, augmented reality glasses will be able to nudify everyone in real time. (This is totally possible today.)
2> Auto-translation. Cool. Everyone can talk to everyone, but users will have censorship options. I don't much like hearing Australians, so I will just have the glasses make them all sound like proper Texans. And the sound of people with views alternative to my own will be replaced with calming country music.
3> Lie detection. Glasses will look for facial, vocal, and body tics suggestive of deception. Good luck talking your way out of a ticket, or explaining to your boss how you were "sick", when they have a lie detector online 24/7.
4> Censorship of "bad" objects. Signs with ads or news that I do not agree with will be blocked and replaced with more appropriate text. Mosques will appear as churches. Garbage and pollution will become happy birds and clear blue skies. Homeless people will be replaced with attractive young people (see #1 above).
5> Race replacement. I don't like certain races. So my glasses now make everyone Chinese. So long as I don't turn off the glasses, I can live my custom racist utopia.
Live translation seems either better than autonude or worse, but not in the middle of the pack I’d assume? Am I missing something here?
This is what they want to happen with the initial round of "it's just a DOB field bro" legislation. It'll be completely useless, easy to bypass, and annoying to adults. But, everyone will be warming up to this government mandated prompt in their OS. Perfect, now legislators know they have a foundation to work with to introduce "reasonable" amendments to this prompt that require you to upload ID, for example. Frogs in a pot.
Now - wouldn't a government LOVE to know who's saying what? Rather than shutting down the entire $$$$$ international corporate internet.
Money concerns as usual.
I think people who say this should back it up by posting their full name, date of birth, SSN or other ID number, and address. A phone number would also be helpful so we can call and verify that they made the post. Otherwise they're not being honest.
But this isn't (intellectually) honest, either?
Maybe you can justify asking that they post under their real name, but asking for the kind of information that's required to steal their identity isn't the same as asking them who they are.
Do please be specific about those. Provide concrete examples and justify for the class why those involved couldn't have voluntarily done away with anonymity for that particular interaction.
Hypothetically someone can browse a tor site in one tab, post on 4chan in a second one, all while accessing online banking in a third. The bank can use hardware backed 2FA to verify you. Where's the issue here?
Here is one example: It's likely that we will never know who was behind the attempted backdoor in the xz library, which was almost successful in making a huge number of Linux installations worldwide vulnerable to remote exploitation. [1]
That malicious contributor is protected by online anonymity. Now, I know that it's probable that a state actor was behind "Jia Tan", meaning they could have been supplied with a fake ID as well, but that's still a higher barrier.
I don't think (and have not stated) that anonymity is worthless - it definitely has value, especially if you're a persecuted minority or under other kinds of threat. I just don't think it's helpful to pretend that it is completely unproblematic.
The project in question could have chosen to verify identities if they deemed it worthwhile to do so.
In the USA, small businesses, small banks, and credit unions are often used as an excuse to push back against these kinds of rules.
So you are afraid of minor information leaks getting you killed, but you’re also trying to tell us that online anonymity is a bad thing?
Come on. This argument isn’t even coherent from paragraph to paragraph.
> I don't think it's reasonable to keep dreaming of the 90s or 00s when the internet was a comparatively innocent place
This is such a strange argument as the internet was most definitely NOT an innocent place, even relatively speaking, in that period.
I think there is a lot of nostalgic history rewriting in these claims. Much like political movements that claim that the past was a better time, it’s easy to only remember the good parts of how things were in the past.
I directly quoted your beliefs that minor information leaks on the internet can lose your house and get you killed, as well as your claim that the internet was significantly more innocent in the past.
These were the points you were putting forward along with your insistence that we have to “be real” about the problems of anonymity on the internet.
It's hard for me to believe that you don't recognize the dissonance between the two points you were putting forward.
Your silly “Are you an American” attempt at an insult or rebuttal reveals the level of conversation you’re having, though.
> So you are afraid of minor information leaks getting you killed, but you’re also trying to tell us that online anonymity is a bad thing?
Which is a really severe misrepresentation of my argument.
My argument is that anonymity has drawbacks, and that it's bad to just ignore those drawbacks.
> It's hard for me to believe that you don't recognize the dissonance between the two points you were putting forward.
But there absolutely is a dissonance? This is what's called a dilemma: Online anonymity protects some people, and puts other people at risk. If competent people ignore the latter, incompetent people will be trying to solve it instead, so we get these laws.
> Your silly “Are you an American” attempt at an insult or rebuttal reveals the level of conversation you’re having, though.
Sorry about the accusation, it was somewhat flippant. It just seems you and others read an opinion that goes slightly against your own, and immediately you assume that I actually hold the polar opposite opinion, which I don't.
How about this is actually the real problem? Online banking is not worth an omniscient global surveillance state, let alone the immense amount of leverage gained by this digitization.
Online anonymity has significant, real-world benefits which every doxxed person ever will list for you.
The solution is called a durable power of attorney and then moving significant assets to different financial institutions with e-statements. Or the heavyweight option is a living trust.
Mandatory identity verification or locking down software really have no bearing on this problem. Scammers leverage generic apps in the app stores just fine.
This problem most certainly is a part of the global turn towards fascism, which is ultimately based on frustrated people demanding easy answers and then empowering those who are able to give them easy answers by lying to them.
To show you that I'm maybe not just blowing smoke out of my ass on this topic, here is me personally dealing with a scammer-adjacent problem: https://news.ycombinator.com/item?id=47125550
Conceptually, if a proof was truly zero knowledge and there were no restrictions on generating it, there would also be nothing stopping someone from launching a website where you clicked a button and were given a free token generated from their ID. If it was truly a zero knowledge proof it would be impossible to revoke the ID that generated it, so there is no disincentive to freely share IDs.
So every real-world "zero knowledge" proof eventually restricts something. Some require you to request your tokens from a government entity. Others try to do hardware attestation chains so that, theoretically, you can't generate them outside of the approved means.
But the hacker fantasy of truly zero knowledge proofs is impossible because 1 hour after launch there would be a dozen “Show HN” posts with vibe coded websites that dispense zero knowledge tokens.
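The failure mode described above can be sketched with a deliberately oversimplified toy model (all names hypothetical; keyed hashing here is only a stand-in for real zero-knowledge cryptography, which works very differently): if tokens derived from a credential are unlinkable to the holder, one verified adult can hand out valid tokens to anyone, and the issuer has nothing to revoke.

```python
import hashlib
import hmac
import secrets

# Toy model (NOT real ZK crypto): the issuer hands a verified adult a
# credential once; the holder can then derive per-site tokens offline.
ISSUER_KEY = secrets.token_bytes(32)  # held by the ID issuer

def issue_credential() -> bytes:
    # Done once, after real-world identity verification.
    return hmac.new(ISSUER_KEY, b"over18", hashlib.sha256).digest()

def derive_token(credential: bytes, site_nonce: bytes) -> bytes:
    # The holder runs this locally, as often as they like.
    return hmac.new(credential, site_nonce, hashlib.sha256).digest()

def site_verifies(token: bytes, site_nonce: bytes) -> bool:
    # The site can only check "derived from *some* valid credential";
    # it cannot tell whose, so it cannot revoke or rate-limit the source.
    cred = hmac.new(ISSUER_KEY, b"over18", hashlib.sha256).digest()
    expected = hmac.new(cred, site_nonce, hashlib.sha256).digest()
    return hmac.compare_digest(token, expected)

# One verified adult runs a public "free token" dispenser:
cred = issue_credential()
nonce = secrets.token_bytes(16)            # challenge from an age-gated site
token_for_stranger = derive_token(cred, nonce)
assert site_verifies(token_for_stranger, nonce)  # indistinguishable from the adult's own use
```

Real deployments break exactly this property somewhere: issuance is metered, tokens are bound to a device, or attestation hardware refuses to export the credential.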
You need some kind of proof system if you need a central authority to certify something, but why is that required? The parents know the age of their kids. They don't need the government to certify that to them. And then the parents can get the kids a device that allows them to set age restrictions.
Whether those restrictions are imposed by the device on content it displays (which is the correct way to do it) or by the device telling the service the approximate age of the user (which needlessly leaks information), you don't actually need a central authority to certify anything to begin with because either way it's just a configuration setting in the child's device.
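That "configuration setting in the child's device" model can be sketched in a few lines (hypothetical names throughout): the parent sets a birth year once, the service ships only a rating label, and the allow/deny decision never leaves the device.

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    birth_year: int  # set by the parent at device setup; never transmitted

    def age_in(self, current_year: int) -> int:
        return current_year - self.birth_year

def device_allows(profile: DeviceProfile, content_min_age: int, current_year: int) -> bool:
    # The decision is made on-device; the remote service learns nothing
    # about the user, not even an approximate age.
    return profile.age_in(current_year) >= content_min_age

kid = DeviceProfile(birth_year=2014)
assert device_allows(kid, content_min_age=7, current_year=2026)
assert not device_allows(kid, content_min_age=16, current_year=2026)
```

No central authority appears anywhere in this flow; the only trust relationship is between the parent and the device.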
The Brazilian government passed a law requiring age verification for every site categorized as 16+. It can't be self-declared, so companies usually resort to facial scans and ID verification. I DO NOT want photos of our Brazilian children going to foreign agents who are PROVEN to profit from and do God-knows-what with our biometric data. And the funniest part? The same law says 'regulation shall not, under any circumstances, authorize or result in the implementation of mass surveillance mechanisms,' but also mandates that these measures must be 'AUDITABLE.' In other words, someone needs access to that data. It’s all so stupid and incoherent.
People who are less tech-literate FIERCELY support the measure, and whenever someone opposes it, they claim that person supports digital child abuse...
Anyway... the responsibility of protection should come from the parents, not from companies that profit off your biometric data.
It takes incredible conviction and force of will to keep your kids off the phone till they’re 16. Fewer than 1% of parents manage it. The problem is that the teenager wants a thing that everyone else has and it’s hard to keep saying no.
I think internet connected smartphones should be illegal for kids under 16 to own or use. It’s a tough sell tho.
But I also think the internet has more potential for harm now. Widespread social media makes it easy for predators. YouTube actively incentivises content creators to produce brain numbing shit instead of the more amateur and educational content I was exposed to. Instagram creates vicious dopamine hooks that children have no mental defense against.
Also sorry to sound egotistical but I think I was an outlier that drifted into doing educational things, many or most kids will spend every moment they get just playing video games.
That being said, I’m in favour of parents doing the parenting, not the government.
This aspect of parenting is really hard. If your kid is 10 years old and all their classmates have Roblox, saying 'no' to your kid does isolate them socially, because all the other kids are talking about what they did in Roblox at school and play Roblox together after school. To make it worse, some primary schools even allow kids to play Roblox at school during breaks or the teachers make TikTok videos, making kids want to have Tik Tok as well (TikTok-teachers are a real phenomenon), etc. So, even when you are trying, it gets undermined by others. Trying to fight it is kind of pointless, because most other parents don't see the issue.
Same for e.g. instant messaging, it is basically Sophie's choice: you allow them into these addiction machines or you isolate them socially. It would be much easier if social media and certain types of addictive games were just not allowed under 16. Just like we don't sell cigarettes or alcohol to kids.
I also completely agree with the counterpoint that age verification on the internet is generally bad.
Luckily, some things can be done without grave privacy violations. E.g. where high schools 10-15 years ago would gloat about being iPad or laptop schools, more and more are completely banning smart phones and laptops during school time.
At any rate, it's perfectly possible to hold both views at the same time: social media and addictive games should be forbidden under 16 and the age verification initiatives are terrible for privacy.
Maybe we should just ban Facebook, TikTok, etc. no more addiction, no more age verification needed :).
I am in the same predicament as both of you, having grown up with unfiltered internet access, and not wishing it had gone any other way (I love my life, actually!).
There is a condescending tendency when people hear what I said above, to tell me that I am an outlier, or, God forbid, a "genius", and other equally worrying conclusions regarding my character.
I agree that, today, there are millions more ways that children can fall for objectively negative things that have been completely and intentionally engineered to be terrible in a way which can be exploited for profit.
But also, I simply think that, with enough access to mind-numbing content, for long enough... people will simply realize that, actually, they don't want that. At least, not just that.
Adults are not a good comparison in the matter of less aggressive addictions, like social media, because they already have lives they want to escape from, with responsibilities and whatnot.
These are not scientifically sourced claims, but, in my experience, children have a lot more time, energy, curiosity, and will/intent to create, for one reason or another, and they have been doing those things since time immemorial.
This is just a consequence of having access to ~the entirety of all human knowledge at their fingertips, with no restrictions, and with an incredible amount of free time at their disposal.
So instead every time I got a new Linux or FreeBSD CD-ROM set, I would go through all the documentation and try everything out, and read source code. I got Pascal and C books through the local library, where you had to order the book and usually wait two or three weeks.
But I also didn't have the omnipresent cameras (you could still do dumb stuff as a kid and not get filmed/photographed). No pressure to show a fake version of yourself on social media. No pressure to be always available through instant messaging.
I feel like it was the best time to be a kid. Access to information was relatively easy (albeit slower than on the internet), but without all the terrible downsides for kids. Without all the dopamine shots and highly addictive social media and games. Without the all-ways present tracking of your every move.
Though even the kids slightly after me probably still had a good time. Early 2000s, Internet access became more ubiquitous, but it still took almost 10 years for the worst of addictive websites, etc. to rise. I sure miss the early web.
I'm roughly the same as you in terms of information access, though whether I was a child is debatable; was 14 when I got my first dialup connection. My family wasn't tech-adjacent so it was me who pushed for it; the only control in place was the amount of time I'd spend there.
The only control I have in place on my son in terms of content is whether something is scary or if he won't be able to understand most of it, because arguably he's still too young for many things.
But once he's 12 I don't think I want to restrict most things in terms of content, and by 16 I personally don't care if he watches hardcore midget porn, as long as I have the chance to contextualise and explain the industry.
But.
What I'd rather control (or ban, even) is rather all ML-driven doomscrolling platforms and the "social media" that turned people no longer social. The Internet you and I grew up in no longer exists (or it's a small hidden fraction of it), and now it's a wasteland of engagement traps and corporate revenue directed dark patterns.
You and I learnt to separate wheat from chaff, research, deep dive, and what not. Internet is now, by and large, instant gratification loops and user tracking. I don't want my son (or myself, actually) pulled into that. Porn is literally healthier: you bust a nut and go on with your day, but I see some people wasting hours on end, reel-after-reel, with increasingly targeted ads shoved to their face. Hard pass on that.
Age control, if any, should lie in the hands of the parent/guardian. Make it by law a setting on the routers (new devices are <18 until admin approves them), or the ISPs for mobiles. I'm okay with that. Absolutely not on random third parties handling personal information filling the gap for every random website.
All of that leaving aside the fact that zero knowledge proofs solve this problem without sharing any sensitive information.
But of course, the corporations benefiting from this are not interested in pushing those, IMO reasonable, age controls.
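The router-side default proposed above ("new devices are <18 until admin approves them") could be as simple as this sketch (hypothetical names; a real router would key on the MAC address or DHCP lease and enforce the level in its DNS/traffic filter):

```python
# Devices not yet approved by the router admin get the restricted profile.
approved_adult_devices: set[str] = set()

def filter_level(mac: str) -> str:
    # Default-deny: an unknown device is treated as a minor's device.
    return "unrestricted" if mac in approved_adult_devices else "under18"

assert filter_level("aa:bb:cc:dd:ee:ff") == "under18"  # unknown device
approved_adult_devices.add("aa:bb:cc:dd:ee:ff")        # parent/admin approves it
assert filter_level("aa:bb:cc:dd:ee:ff") == "unrestricted"
```

The point of the design is that the gatekeeper is local hardware the guardian controls, so no website or third-party verifier ever handles identity documents.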
I saw very quickly that what separates a live person from a very deceased flat person is a moment of silliness/forgetfulness/stupidity. "I didn't SUSPECT that was even possible to happen to a person!" - "We're... fragile?!" - "Ah, bike helmets... I think they're a REALLY GOOD idea..."
PSAs just aren't listened to by teenagers. But something that's real - that happened, with the security camera timestamp in the corner - teaches kids safety.
I mean, is that good?
Isn’t another way of looking at that to say that it poisoned an innocent time and left you aware and afraid of death when you might otherwise have been enjoying the end of your childhood without that burden?
In general parents might want their kids to be a little more mindful, but not grow up too soon.
Yeah now they know they might die, but they also know they will die.
Cool. What now? You might have a kid thinking that they are going to die tomorrow, for the next 70 years.
If you expect admins of edgelord websites to respect the laws of different countries or even care about kids, I suggest checking out 4Chan’s response to various attempts to regulate them.
The uncensored internet taught me more than I could ever have been taught in school, and I'll be forever grateful for that. It didn't take me long to understand that I could generally hate no ethnicity or people or country, and the people who do are manipulated by their government or other powerful figures in their life (or disproportionately swayed by experiences in their life). Humans are pretty much all the same, we all have far far more in common than we do differences. I have a stronger perspective of this than my immediate ancestors (demonstrated over and over throughout my life) and I do credit my exposure to the open internet for a huge amount of that.
There is one huge and problematic difference now, though: the uncensored internet of the 90's is nothing like the disinformation-saturated internet of today.
I've heard this a few times, but what was so bad? And, sorry to break it you, reality has some bad bits to it - do you think being ignorant of these is useful, or that it just sets you up for a bigger fall?
Why do you think removing independence (nannying) from another human being is the answer? Would you want to be nannied for ever, by corporations and governments?
Oftentimes the answer is "nobody". There's just nobody you can rely on to get the level of care you require. There are lots of arguments like Bowling Alone for how the breakdown of community has contributed to this separate issue.
In my view, by constructing and supporting legislation like this, people are implicitly admitting that parents, teachers, schools, communities, and all the rest are failing at their job of keeping moderation local and raising the next generation.
But the thing is, unfortunately this is a true statement in too many cases, including mine. My parents failed to parent me well enough, and my counselors were either instrumental in my own trauma or failed to address my issues soon enough, and as such I developed a sex addiction in adolescence fueled by persistent ongoing stress from my upbringing that I continue to seek treatment for to this day. Could content moderation laws have cured my parents' narcissism? Nope. Could they have prevented me from needing to act out to relieve the stress of my early relational trauma? Nope. Could they have helped match me with more competent therapists? Nope.
Could they have caused me to go to rehab for alcohol abuse instead of porn? Maybe. For all his statements I disagree with, I subscribe to Gabor Mate's view that traumatized individuals are compelled to be addicted to something. At that point, there are a lot of things to become addicted to other than the ones you can content moderate, given the (false) assumption that it's possible moderate enough of it.
Pornography was necessary but not sufficient for me to have it that bad coming out of childhood. Early exposure to it was only incidental. My upbringing was far more significant a cause in this. But unlike which websites I was allowed to visit as a child, a 100% chance of having emotionally involved parents isn't something you can legislate into existence.
What I feel isn't being talked about enough in this discussion is that this implicit realization that the world just sucks sometimes leads to justification that someone else needs to step in to protect children's fragile minds if the formerly trusted institutions aren't. The big option left is the platforms and systems hosting the tech themselves so they're targeted instead.
My opinion? If your parents aren't able to raise you to be free of significant trauma spawning "hungry ghosts" that you will need to turn to your unfettered internet access to feed, whether TikTok or LiveLeak or elsewhere, lest you are bombarded by stress every waking moment... then the situation was hopeless to begin with. You can't fix that problem with laws. You should have just had better parents, as awful as that sounds. And because of nothing more than bad luck, you're just going to have to unpack that problem with the healthcare system for years/decades, because there's not much else we know of that can meaningfully address childhood trauma that severe.
However, I don't think the medical establishment will necessarily help. Or looking outside generally - this will probably only compound or defer the problem. You will have to deal with it yourself in the end. I believe everyone already has all they need in themselves to do this.
What makes me extremely sad and concerned is that more recent generations simply have no idea or expectation of privacy online anymore. There will never be more of a fight against all this Orwellian behavior.
Above all, the LLM panopticon will watch us all.
Technology will not save us. Nothing will save us but ourselves and we're busy making rent and doomscrolling.
The information asymmetry between individuals and the powerful is permanently reversed.
Thinking about it in terms of the monopoly of violence being the root of power negotiations: typically a resistance movement has more information about the state/colonizer than vice versa, because power has to be visible - guerrilla warfare thrives on this.
That's gone. The powerful will have complete detailed information and automatic analysis.
The medium is the message.
What's different is that, for a while, the early Internet age (and a little bit earlier - Usenet etc) made that underground very accessible. Now we're reverting back to the original situation where it was very much shunned and criminalized.
What is it you mean by this?
I see so many offhand comments about the dystopian UK here but AFAICT there’s a lot of noise and very little meat. The only thing I can think you mean is the UK is currently debating a bill to limit jury trials to more serious offences. While I do find that pretty offensive, there’s nothing fast track about any of its justice system at the moment.
On the contrary, people are waiting years for trial, which is bad for the accused because they have it hanging over them, and bad for victims who get no swift resolution.
For example:
>Courts will sit for 24 hours to fast-track sentencing under government plans to crack down on far-Right riots that swept Britain on Saturday.[1]
[1]https://www.telegraph.co.uk/news/2024/08/03/courts-open-24-h...
There is also this:
>Only Australia arrested climate and environmental protesters at a higher rate than UK police. One in five Australian eco-protests led to arrests, compared with about 17% in the UK. The global average rate is 6.7%.
>The UK’s Police, Crime, Sentencing and Courts Act 2021 and the Public Order Act 2022 transformed the relationship between protesters and the state, handing police extensive new powers to curtail protests and criminalising a range of protest activities. [2]
[2] https://www.theguardian.com/environment/2024/dec/11/britain-...
Boot, face, forever, etc
And given the Southport riots were, well, riots, it's unsurprising they were dealt with harshly.
That said, I agree that what’s happening with protest in both the UK and Australia is deeply wrong. New South Wales in particular seems to be awful on this front.
It’s a shame that guardian article doesn’t link to the actual study.
It’s not especially surprising that there is a high rate of arrests in the subcategory of protests they picked - environmental (not climate) protests often involve things like blockading mine sites and blocking roads here in Aus. In some of the countries mentioned in the article you may just be physically moved, beaten or even shot for that behaviour. Which is not to say that the higher arrest rates aren’t concerning, but the picture isn’t exactly clear after reading the article, particularly as it mentions that over 2000 environmental protestors were killed during that period - I’d hope none in the UK or Aus. So even though arrest rates are higher in these countries, to imply that they are the worst in their treatment of protest is probably wrong.
You didn't read or care to understand my argument at all, which is not about the target of the process, but the existence of the process and the process itself. Looks like I have to spell it out: next time it won't be race rioters, it will be protesters protesting a Farage government crackdown on immigrants and minorities.
>It’s not especially surprising that there is a high rate of arrests in the subcategory of protests they picked
The article mentions the rate of arrests is high COMPARED with other countries. And again you're getting lost in the details; this wasn't about what the protests were about, but about the brutal, swift crackdown AND the laws passed giving police more powers.
Yes, this time they hit your out-group, so all is well, fine. Next time (and this is the crux of my argument), _using the exact same tools_, it's your group, you, that will be targeted.
Yes, I know you think it’s bad that it exists. I don’t.
So long as it is carried out with proper oversight and people have time to prepare, it actually appears preferable to endless delay which is the current hallmark of the British justice system. Do you disagree? Why?
Do you have a reason to think that justice served this way is less fair or rigorous?
Because frankly I’d rather get in the express lane at that point if I was on the receiving end, than have to live with the process over my head for 2-3 years.
> the article mentions the rate of arrests is high COMPARED with other countries.
Yes, and it also says some of those other countries are killing environmental protestors, so the picture is not as clear cut as you might like.
Seriously, maybe read it again if you think this is entirely un-nuanced.
I agree with you that giving the police extra powers is bad.
I disagree that faster justice is bad.
I disagree that a higher arrest rate than other countries on a subset of protests is as black and white as you think.
Or perhaps our current Home Sec in 2024 declaring "Rioters face years in prison as Home Secretary Yvette Cooper promises ‘swift justice’" [1]
[0] https://www.telegraph.co.uk/news/uknews/crime/8695988/London...
[1] https://www.standard.co.uk/news/crime/riots-prison-justice-l...
It's all part of making effective protesting illegal. You can justify each little step as you clutch your pearls (even me, to an extent, if I don't think of the bigger picture), but then you realise that the sum of all that is permitted is standing alone, creating no disturbance for anyone, effecting no change - and that effective protesting is banned.
And the second article is about people setting fire to cars and buildings.
This is not “effective protest”, it’s criminal damage and arson and would be prosecuted as such in any western nation.
Are you seriously arguing you should be able to get away with setting fire to a community library because you reckon you’ve got a legit grievance?
Yeah nah, no thanks.
Which is funny, as that's what I heard from my older family growing up. Except it's a lie and they have plenty to hide!
And once you step outside HN, forget it. You can save yourself, but there are thousands of people that do respond to the "think of the children!" nonsense and will call you a creep for objecting to it. It's game over now, you will fight against this for the rest of your life.
Algorithmic feeds are propaganda tools. A foreign government being able to propagandize your citizens is a legitimate threat, not handwaving.
If anybody says that propaganda is a valid reason for censorship I say, censor thyself first.
> Restricted Stock Units (RSUs) are a form of equity compensation where employers promise company shares, typically vesting over time, offering a way to align employee interests with company performance
For example, it seems to me there is a whole lot of worry around megacorporations, often related to capitalism and the inequalities it brings.
In that context, if you don't place privacy as a priority, how are you not either stupid or ignorant? Is my premise just wrong?
It's really more just concluding that those corporations should be liable for their behavior. It also has nothing to do with "the Internet" which is largely unaffected. Except of course ideas for forcing OS behavior coming out of California which are obviously bad.
I actually think things could be a lot simpler if we just made the laws like alcohol: it's illegal (with criminal liability) for a non-parent adult to provide <restricted thing> to a child. Simple enough. Seems to work fine as-is for Internet alcohol purchases. Businesses dealing in restricted industries can figure out how to avoid that liability. That's entirely compatible with making it illegal for businesses to stalk everyone, which we should also do!
The best way (and only way) to prevent retaining information is to not share it in the first place.
> You can be in favor of privacy while simultaneously thinking porn, gambling, and advertisers shouldn't be targeting children.
There are other methods to achieve this without mandatory identification. You can force this content to be served with an HTTP header declaring its legal minimum viewing age or content type, and block it browser-side. Governments could maintain filter lists for different age brackets and release them to everyone, allowing easy compliance via the device's parental control settings.
You can make intentional targeting illegal without criminalizing the accidental. And mandating self categorization of content by service providers would enable standardized filtering that was broadly effective.
The above won't get kids off of social media and it won't serve the purposes of the surveillance state but it will meet the stated goals of those pushing these measures.
Keeping children off of social media is a much trickier problem. I think we'd be better served by banning certain sorts of algorithmic feeds.
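The header-based self-categorization proposed above could be sketched roughly as follows. The `Content-Min-Age` header name and the fail-closed policy for malformed labels are illustrative assumptions, not an existing standard:

```python
# Hypothetical client-side filter: the site labels itself with a
# "Content-Min-Age" response header (an assumed name, not a real standard),
# and the blocking decision happens on the user's own device.

def should_block(headers: dict, viewer_age: int) -> bool:
    """Return True if parental controls should block this response."""
    raw = headers.get("Content-Min-Age")
    if raw is None:
        # Unlabeled site: allow by default; a stricter profile could
        # instead block anything that carries no label at all.
        return False
    try:
        min_age = int(raw)
    except ValueError:
        # The site claimed a restriction but the label is malformed:
        # fail closed rather than show possibly restricted content.
        return True
    return viewer_age < min_age
```

A government-published filter list would then just be a set of (domain, minimum age) pairs consumed by the same device-side check.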
They're not actually owed a solution for how to make their business model work. They can just be told that what they're doing is unacceptable, and they can figure out what they'd like to do next. If you're worried they might react with some other unacceptable thing, we can clarify that that's not okay either.
> They're not actually owed a solution for how to make their business model work. They can just be told that what they're doing is unacceptable,
You listed a few different things previously. Which one are we talking about here?
I think the rest of us are owed a solution where we can still do what we want without having our privacy violated. Regulations need to take the end user into account.
I already proposed what I think would be a workable solution to achieve the stated goals without unduly eroding the status quo. Do you have any response to it?
We don't give kids special debit cards that detect and block purchases of cigarettes and alcohol and say "make sure your kids don't get cash". We make it a crime to sell those things to a child.
Why is online ID verification a problem for e.g. porn and gambling but it's fine for alcohol? Why should it be fully anonymous? Should we also allow anonymous porn and cigarette vending machines in person? Why is online special?
This whole idea of anonymous access can't even work in a world where you actually pay for things online, which makes the whole proposition even more dubious. If you're an adult and spending money online, you already told them who you are (modulo darknet markets with crypto). Or you could buy a porn gift card in person with an ID flash like other restricted physical items if you're uncomfortable with online payments. And treat the gift card as restricted as well: giving it to a minor is a crime. So then what's the problem exactly? Ad supported porn specifically somehow is important enough to be special?
More to the point: as far as I know, if you perform a sex act in plain view inside of a private establishment that's open to the general public with no restrictions, then that's public indecency/lewd conduct, a criminal act, even if the owner consents. If children are present it can become a felony and you're going on the sex offender list along with jail time. Why is an unrestricted public website different?
Why are you "owed" this privacy online when someone running an open to all, fully anonymous, unchecked porn theatre in person would be arrested? How about the privacy you are owed is that your business stays between you and whomever you interact with, and even they can be asked/required not to keep or share notes about you? But they can still be expected to know you are an adult before they sell you adult services.
Also I think you have this entire thing exactly backwards. It's not on the rest of us to convince the other camp that ID shouldn't be required. Rather it's on the other camp to put forward a convincing case that ID should be required - that there is no realistic alternative and that the tradeoffs are worth the cost. Otherwise the current status quo wins out.
> Self categorization has been the status quo since the 90s and has been proven to be insufficient.
What are you on about? Legally mandated self categorization has never been tried and would presumably work if there were penalties for violations. You don't even need 100% compliance, you just need high enough compliance that the default becomes to filter out any site that fails to do so.
Voluntary self categorization isn't particularly useful because almost no operators bother to do it. So you're left with no (workable) option other than whitelist filtering.
> have a third party come up with a solution that people can buy to filter us
I never suggested anything of the sort.
> The liability belongs on the people dealing in the restricted item.
The items are not currently restricted and I don't agree with you that they should be. However I would agree to changing things to make all providers liable for accurately self categorizing the content they serve up by means of a standardized header format or some other protocol.
> Why is online ID verification a problem for e.g. porn and gambling but it's fine for alcohol?
Presumably because you have to take receipt of the shipment so the vendor is already going to collect your PII.
Why is legally requiring that a gambling website send a header categorizing itself as such unworkable yet somehow it's all going to work out just fine if we require them to do the much more complicated thing of securely handling and accurately verifying identification documents? That seems like an obvious contradiction to me.
> Why should it be fully anonymous? Should we also allow anonymous porn and cigarette vending machines in person?
Don't we effectively do exactly that? There's no requirement for ID retention on sale of alcohol or cigarettes and until recently the norm was for the clerk to briefly eyeball your license. They also didn't used to bother checking ID if you looked old enough. (That's changed at the major retailers around here lately but that's a different matter.)
Anyway I never claimed the brick and mortar way of doing things was ideal so arguing as though I've agreed to that seems rather disingenuous.
> If you're an adult and spending money online, you already told them who you are
But I did not give them a copy of my ID or any otherwise unnecessary PII and do not want to be required to do so. Also there are plenty of ways to pay for things online without readily revealing your identity to the counterparty. I expect you are well aware of that fact.
> Why is an unrestricted public website different?
For practical reasons I'd imagine. Analogies are great and all but at the end of the day a global electronic communication network has rather different properties than a physical brick and mortar location that you walk into.
Regardless, the reputable services all seem to agree with you (as do I) and thus go out of their way to send headers marking them as adult only. It's roughly equivalent to a shop hanging a “no under 18s allowed” sign on the door but then not bothering to ID anyone. If parents can't be bothered to configure even the most basic of controls on their children's devices, why should the rest of society be made to suffer for that?
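For reference, the adult-only self-labeling mentioned above is typically done with the RTA ("Restricted To Adults") label, served either as a `Rating` HTTP header or a `<meta name="rating">` tag. A rough detector, where the substring scan is a simplification (a real filter would parse the HTML):

```python
# The published RTA label value; labeled sites include it verbatim.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict, html: str) -> bool:
    """Heuristic check for the RTA adult-content self-label."""
    # Header variant: a "Rating" response header carrying the label.
    if headers.get("Rating", "").strip() == RTA_LABEL:
        return True
    # Meta-tag variant: crude substring scan, good enough for a sketch.
    return RTA_LABEL in html
```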
There's no requirement for ID retention online either. In fact, unlike in person, it is banned. And a framework where you just say "you are liable for what you provide to children" actually allows for a site employee to briefly eyeball your ID or just look at you and decide you look old enough (though that doesn't really work with realtime video generation).
Record retention is a different question from checking. I think I and the actual relevant laws have been pretty clear that we should disallow that. No, we do not have anonymous cigarette vending machines (at least anywhere I've been in the US). They are always behind a counter with an ID check.
Except for crypto, I don't think I am familiar with any way to pay for something online without revealing my identity. I'm pretty sure 100% of online purchases I've made over the last 20 years have required name/address and usually phone number along with payment details. Even with crypto, as far as I know common wisdom on darknet markets is (or was?) to use your real name/address as that's the least suspicious. I don't actually know a single place where you don't give that info to your counterparty. I can't imagine it's common.
What parental controls? As far as I know, Safari is the only modern browser that checks RTA headers (if it still does). There are no options for Chrome or more importantly Firefox, which is the only browser that's fit for purpose with malware blocking (especially for children). Similarly Android has no controls.
The early Internet users weren't people who subscribed to AOL to look at porn in the 90's. They were the people who were granted access to the ARPANET to work in the 80's. The Internet was an exclusive community back then. You had government employees, knowledge workers, and elite university students who had all passed institutional screening processes. You were only allowed to use the ARPANET if you were using it to do something useful and aligned. Therefore you could feel reasonably assured that anyone you talked to online was going to be better than the average person you'd find going outside and walking down the street. If you wanted to know who they were, you could just finger their username. If you wanted to know who owned a domain, you could whois it, get their name and then even write them mail or call them.
People have wanted that old Internet back for a long time, i.e. the one that existed before Eternal September. Those are the people who run your tech companies - the ones who remember what it was like. These people understand that what people actually want isn't always the same thing as what they say they want. They understand why the only truly successful spaces on the modern Internet are the ones like Facebook that got people to be non-anonymous. Another example: the best places to work, the ones folks desperately want to get into, are companies like Google whose intranets are much more like the old Internet. These are really the only Internet spaces that normal people want to use. Because people want to interact with other people who are similar to them. Because people want to know who other people are. Otherwise we can't operate as the social creatures that evolution designed us to be. I don't think any civilization in history has operated its public square as a gigantic red light district where everyone is required to wear a mask. So why should we?
Overcoming the anonymous religion problem that somehow glommed onto the hacker and cyberpunk movements is more important and urgent now than it's ever been, because the Internet has been filling up with billions of AI agents. It's gonna be Eternal September in overdrive. Humanity is really facing a tradeoff where you'll have to have gatekeeping again and won't be allowed to conceal who you are, or you can be gaslit by machines forever in your own robot fantasy.
Of course, I don’t blame them. They haven’t lived in a context where they need to care. All of the reasons they’ve heard to care have come from stories of people who lived before them. But ignoring warnings for no good reason is still dumb.
A better thing to engage with is whether we can meaningfully change the situation. It might still be possible, but it requires an effective immune response from everybody on this particular topic. I’m not sure we can, but it’s worth trying to.
You might believe you don't need opsec, and then new laws are passed, or your national supreme court overturns the case that gave you your rights, or someone invades; and now suddenly you're wanted for anything from overstaying a visa, outright murder, or simply existing.
In the USA, right now, people's lives are being destroyed because the wrong people got their data. Lethal consequences exist in Russia, Ukraine, Israel, Palestine, Lebanon, Iran.
Certain professions, by definition: Journalists, Lawyers, Intelligence, Military.
Certain Ethnicities. (Jewish, Somali) ; Faiths...
It doesn't need to be quite this dramatic though. You might have accidentally broken some laws and not even know about it yet. Caught a fish? Released a fish? Gave the wrong person a bowl of soup [1]. Opened the door; refused to open the door. Signed a register; didn't sign a register. The list of actual examples is endless. The less people know about you, the less they can prosecute.
[1] A flaw in the Dutch Asylum Emergency Measures Act (2025) that would have criminalized offering even a bowl of soup to an undocumented person. The Council of State confirmed this reading. A follow-up bill was needed to fix it.
But it turns out that if your opsec is decent, even using mostly publicly available tools like Snowden did, you might survive even that.
It would seem to follow that, in the more nuanced cases, normal people applying more normal opsec can handle more normal threats.
What bearing does that have on anything?
I don’t intend to give up or accept limitations on these rights because you consider yourself to have “privacy rights” or ownership interests in my records, my memories, my perceptions, or the reality in front of me. I find the notion of the government or another person interfering in this process, the perception and recollection of reality, to be creepy and totalitarian by itself.
In 1984, it is not only that the government is aware of Winston, but that it routinely tampers with or destroys evidence of the past & demands to control the perception of the present. I do not think we should let a government do that, even for a good reason like “protect your privacy” any more than we should let it destroy general purpose computing “for the children.”
> This obtains even when using electronic tools and even when working in association with others.
I think it is reasonable to place limits on public "speech" (e.g. uploading videos of people) without interfering with private communications (for electronic communications, E2EE).
It didn't take long for the CIA to sniff everything on everyone - by the early 2000s.
Maybe you're referring to the 90s, but at that time the internet wasn't really that popular; it was a niche thing.
The digital age brings with it novel forms of danger for children, and for adults for that matter, and there is absolutely no way to effectively address these risks without some reduction in privacy. And before someone inevitably says “where were the parents?” and washes their hands of the situation: a healthy society should care for and protect all children, especially those whose parents do not.
It’s one thing to hold the opinion “I am willing to sacrifice some number of lives, in order to preserve privacy”. That is an honest and potentially justifiable opinion someone may hold. But declaring the situation to simply be a facade to harvest people’s data seems to me like a reflexive response to avoid uncomfortable truths regarding the situation.
Well, your example wouldn't be solved by age verification in any way. They could still legally access Roblox or a Discord private chat (or another private chat method) after this law.
So the example shows how this is about irrational fear and not protection in any way.
And this is a tragic edge case. If you want to take this kind of edge case into consideration, you also have to consider the tragic edge cases that age verification itself would imply.
After all, dating apps are an even more extreme version of this. If you're attractive enough, you get to have many one night stands and many murder opportunities.
People are occasionally hospitalized because they, their family, or their friends handled food improperly. That doesn't warrant legal intervention, whereas dining establishments do.
> before someone inevitably says “where were the parents?” and wash their hands of the situation
Nope, that's exactly what I say. The law cannot reasonably replace responsible parenting if society is to remain a pleasant place to live.
Many of us are pretty damn okay at beating back the flame and controlling the flow of the worst of things away from homes, but nobody is perfect.
We don't expect every family and parent in these areas to have fire fighting skills, self evacuation is recommended.
Parents everywhere now find themselves surrounded by the deliberately laid gasoline of addictive social media, grooming risks, et al., and it's infeasible to expect every parent to be skilled in defensive cybersecurity.
It's reasonable to expect communities to want simple barriers and means of protection - the existence of reasonable control and throttling options for parents.
> It's reasonable to expect communities to want simple barriers and means of protection - the existence of reasonable control and throttling options for parents.
I agree 100%! However ID verification is not a reasonable (or even particularly effective) solution to that. I apologize if I've misconstrued your intended meaning but given the broader context that's what it seems like you're implying.
Realistically there's no way to prevent grooming other than keeping tabs on your child. The least labor intensive (but also most intrusive) way to do that is probably whitelist parental controls and watching for unauthorized devices. It is not even remotely realistic to expect a communication platform to detect that a child is speaking with an adult they don't know (as opposed to one they do) and also that it isn't a benign interaction (such as a gaming group or etc) and then somehow act on that information (how?) without manufacturing an absurd dystopia in the process.
When it comes to filtering I think it would be reasonable to impose a standard self categorization protocol on all website operators. That would make non-whitelist filtering much more reliable (a boon to parents, educators, and employers) without negatively impacting privacy or personal freedoms.
* there are very few urban population clumps on the planet that don't face the threat of child grooming and exploitation, both before and after the digital device explosion.
* that threat vector significantly increased and morphed with the spread of personal digital devices for children; the threat no longer comes only from people with contact in real life, it has expanded to include potentially the entire digital world and can now be automated via groomGPT
* A simple "where were the parents" response on a per-parent basis is unfair, in the sense that spotting grooming in a digital-device world is a difficult challenge ... even a simple constrained playground with stock babytalk language construction can be socially backdoored (See: "I want to stick my long-necked Giraffe up your fluffy white bunny")
* Concerned parents will look for solutions; communities at local, state, and federal levels should devote resources to providing solutions in informed contexts and at graduated levels.
* Unaware parents will exist, and will likely dominate the demographics, or not?
* Is the correct _default_ social policy here (answer varies by country and culture) to shield the less cyber aware from the worst of the worst with filters ... that the better informed can bypass or deselect?
I guess where we diverge in PoV is on where the perimeter of Swiss-cheese protection should extend to.
Roblox could straightforwardly require ID verification on their own, of both the parent responsible for the account and the children directly (requesting documentation from their school, a birth certificate, etc. - yes, high-touch to verify these documents, but we're talking about protecting children here, right?).
If anything, this type of legislation is about absolving them of the responsibility of doing so! Imagine a company making its offering "for adults only", with de facto kid usage as parents relent and just let their kid enter an older age on the computer.
And this is why these arguments never translate well to mainstream politics.
By declaring a priori that it is not about the children, and leaping straight to a deeper, more sinister motive that you're sure is there, you ignore that a lot of the general public and a lot of the political class genuinely do see this as a child protection issue - even if you're right that there are people behind the scenes agitating for those sinister reasons.
If you can't even concede this, then you're missing large parts of the picture and your attempts to resist it will be that much harder.
At points Louis and whatever absolute scumbag he's with walk around the streets while the guy is filming his own content.
There are kids, literally 11/12 year olds, walking up to these predatory, evil scammers on the street going "oh my god it's MC" or whatever their name is. Multiple times.
And he hardly gets to spend any time with these men because they clock pretty quickly they're not going to come off well.
In the space of like 3 days, Louis caught on camera at least 10/20 young kids recognizing these toxic people from videos they had watched. Even the ones who'd been banned from most platforms, because their videos get reshared under different accounts and insta/tiktok/facebook aren't bothering to catch these reshares.
It really is about the kids.
And it all comes down to these people convincing young men to spend money on scam courses or invest in scam brokerages by getting them to join telegram group chats. And suddenly it's really clear to me why telegram's under scrutiny.
Therefore, the push to ID everyone using the internet (even down to the hardware) is a way to prove that ads are being served to real humans in their target demographic.
The goal has always been identification. And the goal of identification is control.
Never be fooled by the 'easy to evade' part. That is always just a first step to get you to care less to oppose the introduction. Once in place, the enforcement and compliance mechanisms rapidly change to the real system.
As the Heritage Foundation admitted:
> Keeping trans content away from children is protecting kids. No child should be conditioned to think that permanently damaging their healthy bodies to try to become something they can never be is even remotely a good idea.
https://arstechnica.com/tech-policy/2024/07/kids-online-safe...
Can you explain what you mean by trans ideology? Because AFAIK 'ideology' means a collection of beliefs or philosophies, and I don't really understand how that applies to a physiological condition the way it does to, say, a religion.
If you think you could be convinced by anyone that you're not living out your true gender identity, I have news for you... Most people, children too, are not having those thoughts unless there's actually a journey waiting for them.
And FYI, I've seen it happen with one of my own family members - someone who so far as I can tell isn't 'a man in a woman's body', but rather just someone who never fit in and was always a bit of a social outcast.
Their struggle was never their bloody gender, it was their struggle to find a way to fit into the world.
And that's what a lot of transitioning actually is. Because human psychology works such that when we're not fitting in, when we feel insecure and out of place, a subconscious pressure emerges to reinvent ourselves due to the current formula not working for us.
It's offensive to me that you'd make such claims whilst clearly so naive about it.
While you may think they are being affected by certain kinds of messaging, you also have to interrogate that you yourself are being affected by other kinds of messaging, shaping your view on how gender functions.
This is the exact same reasoning behind anti-gay laws around the world: That telling people it's OK to be gay will cause them to somehow "choose" to be gay, which to anyone who is actually either straight or gay is completely absurd - they couldn't change that aspect of themselves even if they wanted to.
This isn’t an actual risk to anybody, and I can’t believe I have to say that.
I do wonder how many would detransition if it weren’t too embarrassing for them, or if they weren’t effectively stuck that way after bottom surgery. Certainly I’m sure there are many more who quietly do it without being part of a community around it - I know of one person who did.
I’m sure there are many reasons people transition. For instance, there used to be a subreddit collecting comments from (many) trans people about how much transitioning, or being the other gender, turned them on (Reddit being Reddit, it was banned). Some might do it because it feels like a way to get a new start. Some might do it because it’ll get them attention. Some, I’m sure, do it because they genuinely feel like they have the wrong body.
All of those could (of course) be affected by social interactions, with only the last one being positive. Unfortunately it would be really hard for us to ever know the true statistics, as that sort of thing is hard for even the person experiencing it to suss out.
Orders of magnitude more people are maimed, disfigured, or outright killed by:
1. Guns.
2. Vehicles.
3. Alcohol.
Weirdly, suspiciously weirdly, the people who are vehemently for age verification to protect potential trans-indoctrination victims from any risk of bodily harm are the very same people who are very much in favour of those three things. Or at least, they show zero apparent interest in using age verification to block 2nd Amendment nuts spreading their propaganda - and I use the word very literally, because the NRA is funded by Russia [1] - or blocking young impressionable kids from watching NASCAR and being influenced to engage in dangerous speeding, or blocking alcohol advertising from ever being seen by a minor.
[1] https://www.npr.org/2019/09/27/764879242/nra-was-foreign-ass...
Go read some threads on the detrans subreddit.
Look, it's cool to be trans, no problem. These women I know are good people and net contributors to society. But they are off the ideological deep end, and would happily spend 3 hours at the family BBQ lecturing an impressionable 13 year old about how those weird body feelings are very likely gender dysphoria. They're just as drunk on their flavor of delusional social media as any other religious nut is crazy about God.
But identity as a whole is a very murky thing - if you ask me it's largely an adaptive abstraction that our minds invent.
The purpose of said adaptation is to adopt a role that functions within the tribe/society for purposes of survival.
I think we way over-simplify the whole thing by making it about gender and gender roles.
And it's that over-simplification that I would label as the ideology. Because that's what ideologies do: they take the complex ambiguities of the world and try to cram them into a simplistic box.
>> Keeping trans content away [...]
Isn't it a stretch to round off "trans content" to "LGBT+ content"? I mean, from a pure logical point of view the statement is correct, because "trans content" is a subset of "LGBT+ content", and therefore "suppressing LGBT+ content" is technically correct, but it's at least misleading. The left's version of this would be something like "twitter is suppressing anti-immigration content!", where the actual example is some alt-right commenter saying that immigrants should be lynched. Immigrants being lynched is certainly a subset of "anti-immigration", but it's still misleading.
Just one of the many, many, many reasons that trans rights are human rights.
It quite literally is the slippery slope argument. You just don't want to call it that because the term is almost always used in the context of a fallacy, and you think you're right. It's like "freedom fighters" vs "terrorists". Nobody calls themselves terrorists, even terrorists.
Moreover the "it's effective because it sounds 'reasonable' on its face, but it's a ploy" argument works equally well for any side, e.g. it's not hard to imagine someone on the right saying "today it's Jan 6th protesters and that might seem reasonable, but tomorrow it's anyone at unite the right protests, and when president AOC's in power it's anyone who's protesting against trans surgery for minors".
Not really. Do you think the people attempting to ban trans content are otherwise fine with kids being gay/lesbian/etc? Do you think they view gay/lesbian identities as legitimate, rather than as unnatural perversion? It's the same rhetoric in my experience: we're all just deviants making choices. It seems like casual, uninvested people just got used to gays being in the public eye, and anti-gay people lost the ability to get anyone to care about that position. Turns out they're just normal people trying to live their lives.
> Immigrants being lynched is certainly a subset of "anti-immigration", but it's still misleading
I don’t think your analogy works unless you believe that transgender people are uniquely extreme compared to other identities. If true, I think that shows your prejudice more than anything. Maybe if enough trans people end up in the public eye, casual uninvested people will stop thinking negatively about trans people generally too. Maybe one day they’ll realize we’re just people trying to live our lives.
If you want access control, the appropriate point for regulation is with ISPs and cellular providers, and the appropriate mode of regulation is requiring these companies to provide choice and education for families, and awareness of liability.
Require ISPs and cellular network providers to offer a standard set of controls to their customers informing the common person (in common language) who is using those connections and what they are doing with them. For ISPs, this looks like an option for a router with robust access controls, designating some devices (based on MAC address) as belonging to children and filtering those devices' network requests at the network gateway, or filtering one hop up onto the provider's infrastructure (e.g. the ONT for fiber connections). For cellular providers, it looks like an app available to parents' devices and similar filtering for devices designated as belonging to children (based on IMEI).
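That gateway-side classification is conceptually tiny. A minimal sketch of the decision logic, in Python (the MAC addresses, category names, and function are all hypothetical illustrations; a real router would express this in firewall rules such as nftables `ether saddr` matches plus a DNS category filter):

```python
# Hypothetical gateway-side filter: devices a parent has designated as a
# child's (identified by MAC address) get category filtering at the
# network gateway; all other devices pass through untouched.
CHILD_DEVICES = {
    "aa:bb:cc:dd:ee:01",  # e.g. a kid's tablet (made-up MAC)
}

BLOCKED_CATEGORIES = {"adult", "gambling"}  # illustrative category labels

def should_filter(mac: str, site_category: str) -> bool:
    """Return True if the gateway should block this request."""
    if mac.lower() not in CHILD_DEVICES:
        return False  # adult/unlisted device: no filtering
    return site_category in BLOCKED_CATEGORIES
```

The key design property is that the list of "child" devices lives on equipment the family controls, not in a central identity database.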
When a family signs up for Internet service, either at-home access or cellular data, the provider must give both parents a presentation about these tools, and about the liability the parents face for allowing their children unsupervised, latchkey access to adult content, no different than allowing children to drink alcohol.
It may even make sense to require ISPs and cellular providers to track MAC addresses and IMEIs of devices their own customers designate as "for children" and make those providers liable for not filtering Internet for those devices, and also liable for allowing targeted advertising against those devices.
I don't think achieving that setup is likely, but it's fundamentally the right way to solve this problem, and parents are pushing for a solution one way or another. I don't love it, but if it's coming almost inevitably we should at least push to do it right. It's a dead-end, losing strategy to blanket oppose one solution to legislators and provide no alternative. I write all of that as someone who values privacy and liberty, both in meatspace and cyberspace.
If my kid takes their tablet to grandma and grandpa's, I want the preferences and signals to carry forward, even when connected to a network at a household that nominally has only adults.
These technologies don't need to be bulletproof to be effective, and they don't need to send more information than "treat all requests from this device as coming from someone under 8/13/18." The ills these age verification efforts are trying to address (and they are real problems) come from excessive, not casual or incidental, use. Yes, many kids will get around any reasonable control, but just making it less convenient will reduce harm.
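To make that "minimal signal" concrete: the device or router could attach a single coarse bracket to outgoing requests and nothing else. A sketch, assuming a hypothetical header name (no such standard header exists today):

```python
# Hypothetical: the ONLY age signal sent upstream is a coarse bracket,
# attached per-device by the home router or a parental-controls app.
# No DoB, no name, no government ID ever leaves the household.
def age_bracket_header(max_age):
    """Build the (hypothetical) request header for a device's designation.

    max_age -- 8, 13, or 18 for a child's device; None for an adult device.
    """
    if max_age is None:
        return {}  # adult device: send no age signal at all
    return {"X-Age-Bracket": f"under-{max_age}"}
```

Sites could then serve an age-appropriate experience from that one bit of context, without ever learning who the user is.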
I have various content controls on at my house. I'm the admin, so I can turn them off whenever I want to. I almost never do, because 1) the block reminds me I probably shouldn't be going to whatever site I'm headed to, and 2) for the most part, my experience is better with the "restricted" search engines/youtube/social media.
These things are not possible with any reliability; we spent two decades encrypting everything.
> Guardianship is something else. It is the contextual responsibility of parents, teachers, schools, and other trusted adults to decide what is appropriate for a child, when exceptions make sense, and how supervision should evolve over time. Moderation is partly technical. Guardianship is relational, local, and situated in specific contexts.
But there is no mention of how this guardianship is supposed to work in practice when unsupervised internet access is pushed everywhere: kids are expected to have their own devices (or will use a friend's), and school WhatsApp groups are simultaneously essential for communication and potentially dangerous. Even with a page filter set on a phone, which pages exactly would you block or unblock?