Posted by mdhb 5 days ago
Whether one wants to work in that industry is a personal ethical question, but 20 years from now we’ll probably look at folks working at these companies the way we’d look at someone who worked as a tobacco executive. Made good money, but maybe not leaving a legacy of an ethical career.
This is the world that software developers create. Any society which rewards less laborious work with significantly greater pay will eventually find reasons to reward "profits over people." Whether the justifications are Neokantian or free-market liberal doesn't matter. Thankfully you people have to put up with Forever Trump, which almost makes the thing bearable.
- Silicon Valley before the '80s
Personally I do somewhere between one and three strikes with companies. Of course I still must use certain things at certain times, but generally a lot of them can be avoided if you develop the habit of looking for other solutions. It's great fun, actually, once you accept the challenge.
It's only a small action, but on a personal level it's good to practice any kind of resistance.
"As individuals? If your friends and family all use meta products, are you suggesting to get new friends and family, or to convince them all to use other products?"
I think it is perfectly reasonable to try to convince friends and family to use other products if you believe that a Meta monopoly is harmful.
Ideally they could all switch to a different platform but getting everyone in a group to make that switch is difficult.
My point is that social networks and real-world interaction aren't exclusive. These products facilitate a lot of real-world social interaction as well, and the network effect of most people having an account there makes it hard to move away from.
That's it. It hasn't let me down yet in my many long years of life.
- who provides your utilities?
- who provides your food, medications, other stuff that goes in your body?
- where do you get financial services, insurance, etc?
- do you drive? who made your car? do you ever fly?
For many of these categories there are likely a few local governments, co-ops, or small/mid-size companies offering alternatives, but not in a comprehensive way -- i.e. you can get some of your food from a local CSA but likely not your whole diet, you might get much of your medical care from a Direct Primary Care practice until you need something outside its capabilities, etc.
It's pretty sensible. You wouldn't advise people the opposite, would you?
If the behavior is identical between party A who uses the insulin but somehow doesn't "trust" the producer, and party B who both uses it and "trusts" the producer, what has party A achieved through their mistrust?
So even though there exist people at Facebook with the human attributes of empathy and "let's not fuck up half of society" – as a company, they don't behave that way, because company behavior is driven by more abstract, non-human concerns: the survival of the organization, or profit motives detached from any individual (like the stock price behind an employee's compensation, or yearly bonuses).
I've seen acquaintances share fact sheets about times when drug makers were sued/fined for lying to the FDA, harming customers, manipulating prices, etc. All true! So people reasonably ask why they should be "forced" to have their products injected into them. And then they can get into all the reasons not to trust the FDA too...
Logically, just because a company has done some bad things doesn't mean their vaccines are unsafe, or that the risks are worse than the disease; sometimes mistakes just happen. And of course in their own lives people are hypocrites, break rules, do things like go back to cheating partners, etc.
I don't have a point here except to lament that things are complicated. Of course people are looking for justification of their beliefs. But maybe we should have held these companies to higher standards, and by allowing them to persist we were unwittingly eroding public trust to a tipping point that is now putting all of us at risk.
Vaccine manufacturers are not special. They are for-profit corporations, and the importance of the product they make gives them tremendous power.
For example, take a look at Hep B vaccination. I spent hours one night trying to dig up primary source material and research from the 70s to justify it and the three-dose recommendation. It's obvious that Hep B is a serious illness for babies that can lead to problems much later in life; we know that. But how prevalent was it in the USA before the standard vaccine schedule was rolled out? Has anyone actually gone and looked through VAERS over the past 40 years and compared the rate of serious side effects like Guillain–Barré to a counterfactual base rate of Hep B? That's not a trivial statistics project, and nobody that I'm aware of has done it (although I'm bad at searching), yet we continue to vaccinate every single baby with three doses of Hep B vaccine.

It's probably not a big deal, and I'm willing to believe that the people at the CDC probably know what they're doing (pre-2024) and have/had access to the right data and the right decision-making tools to set a good vaccine schedule. But if it came out that Hep B vaccination actually wasn't all that useful and we should probably stop doing it, it would certainly be inconvenient for the vaccine manufacturer. So there is absolutely an incentive to steer legitimate scientific inquiry toward some directions and away from others.
All that is to say, trusting the science and being a supporter of evidence-based public health requires skepticism, precisely because for-profit corporations are always going to act like for-profit corporations regardless of what business they are in.
I mean, of course we don't trust big corporations.
It couldn't possibly be because developers in general have proved themselves untrustworthy as well... right?
It couldn't possibly be because users have proven education and countless warnings are ineffective... right?
Common sense outside of our HN bubble asks: if merely serving me food is regulated, and merely giving me a haircut requires registration and licensing, why is building apps that can steal my data, my money, and my reputation... not regulated? Surely it's easier for most people to discern the quality of their food, or the quality of a barber, than of an app! Yet even for food, and freaking haircuts, we as a society don't trust people to understand warnings and use common sense. Either fix tech (even with laws that make HN furious)... or admit that those laws regarding haircuts are stupid too.
One difference here is the tool that you own is built to undermine your authority and instead do whatever Google says. It'd be like if scissors required biometric validation with Great Clips to open "to protect people from unlicensed haircutters".
In my home state, unlicensed barbering is up to $2,000 per incident. So sure, nothing is stopping you. Just as even now, nothing is stopping you from installing a custom ROM and running your own code, even if you might not be able to run other people's code.
> One difference here is the tool that you own is built to undermine your authority and instead do whatever Google says. It'd be like if scissors required biometric validation with Great Clips to open "to protect people from unlicensed haircutters".
This is also a thing in the real world: there's licensing required to purchase key-fob reprogrammers. It's a real pain, even if the tools (illegally) end up on eBay. That's because the risk of a stolen car is seen as extremely high... but the potential damage from an app makes that look quaint.
Locking down car repair tools is another obviously abusive practice that primarily benefits the manufacturer and harms the owner, justified through some weak appeal to security, yes.
These days, though? Yeah, it's kind of obvious that you can't have a space-faring civilization with the Internet and social media weighing you down. Honestly, the Eugenics Wars probably get kick-started by social media.
Like, IRL we can't fire modern artillery over the horizon without a computer assisting us, and that's only a few hundred miles; a starship within range of their transporters (up to three times the diameter of this planet) is just an invisible dot on an invisible dot if you're looking for it out of a window. (IRL you can see ISS flybys because it's only a few hundred km up; last I heard, nobody can see any of the geostationary satellites.)
Or comms: Uhura was written in an era when telephone switchboards were still around, manually connecting your phone calls by plugging and unplugging cords. (Did any later shows even have a comms officer?)
Even later, VOY tries to show how fancy the ship is with "bio-neural gel packs", but even when that show was written, silicon transistors were already faster (by response time) than biological synapses by the same degree to which going for a walk is faster than continental drift.
The horizon is in mortar range: roughly 10 km for an observer at 10 m elevation.
The horizon is not very far usually.
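As a sanity check on that number: the standard geometric-horizon approximation is d ≈ √(2·R·h), ignoring refraction. A minimal sketch in Python, where the observer heights are just illustrative assumptions:

    import math

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

    def horizon_distance_km(observer_height_m: float) -> float:
        # Distance to the geometric horizon, ignoring atmospheric refraction.
        return math.sqrt(2 * EARTH_RADIUS_M * observer_height_m) / 1000

    for h in (1.7, 10, 100):  # eye level, small mast, tall hill (assumed heights)
        print(f"h = {h:>5} m -> horizon ~ {horizon_distance_km(h):.1f} km")

    # prints roughly 4.7 km, 11.3 km, and 35.7 km respectively

So roughly 11 km at 10 m of elevation, which matches the figure above.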
I may have overestimated the maximum range even then, but the core point was that you need computer assistance even for relatively short distances on the ground, let alone in space.
Or maybe the old adage of "a station wagon hurtling down the highway has more bandwidth than the biggest network links" would apply here -- send little storage modules at warp speed around the universe.
But also, in the show, they have clearly solved this problem, given that they can be out in Beta quadrant and still have live conversations with Starfleet back in San Francisco.
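For scale, a quick back-of-the-envelope on that adage; every number below is an assumed figure for illustration, not something from the thread or the show:

    # Effective bandwidth of physically shipping storage ("sneakernet").
    # All quantities are illustrative assumptions.
    payload_tb = 1000      # a car full of high-capacity drives: ~1 PB
    distance_km = 3000     # a long highway drive
    speed_kmh = 100

    transit_s = distance_km / speed_kmh * 3600       # 30 hours, in seconds
    payload_bits = payload_tb * 1e12 * 8              # terabytes -> bits
    throughput_gbps = payload_bits / transit_s / 1e9

    print(f"~{throughput_gbps:.0f} Gbit/s sustained")  # ~74 Gbit/s

The latency is terrible, of course, which is the whole joke; a warp-speed courier would shrink the transit time and make the numbers even sillier.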
Don’t they also have ways of sending messages wirelessly in real time, just bounded by the speed of light? That’s a darn sight better than what we have now, as we basically just blast radio signals in all directions at roughly the speed of light, and the signal degrades very rapidly with distance.
My view is coloured largely by Voyager, but I don’t see any technology we have now that they don’t have, certainly nothing that would work at those distances, or without the infrastructure to make it work.
Honestly, I don't know what the conversation is about either.
My favorite part: just-in-time ad delivery to your suicidal teen for products they might need
Facebook will not try to show your suicidal teen stuff that could help them. Facebook will only show your suicidal teen things that keep your suicidal teen doomscrolling.
Facebook WILL put a small textbox of "Here's the suicide hotline" and then overshadow it with a huge ad for "You aren't pretty enough, buy this body deodorant" that autoplays and includes sound and can take over part of your screen.
Facebook WILL show your suicidal teen stuff that makes them really angry. They do this on purpose. They do this knowingly. That's what "optimizing for engagement" means.
Tobacco has zero utility, meanwhile Facebook is heavily used for connecting families and sharing small life events.
Saying it is the same as tobacco isn't useful. It's an exaggeration, which makes it hard to take the argument seriously.
I don't see social media being a whole lot more useful. Cool you can share some photos, and organize some events, but you can do that without Facebook and all the unnecessary shit that goes along with it.
It's pretty obvious that they surface rage-bait content on purpose, for example.
Self-regulation is a complete and utter joke.
You don't have to bury the report if it is never written. The only reason you would write it is if you think you are actually doing God's work, think you can whitewash it and manipulate the outcome to say you are, or are grossly incompetent.
It seems to me possible solutions could be a mix of:
a) company monitors all conversations (privacy tradeoff)
b) validates age
c) product not available to kids
d) product available to kids, leave up to parents to monitor
e) the product records a window on behalf of each customer, and the customer can report an incident like this to both Meta and legal authorities including such a recording. Strangers who sexually proposition kids get removed from the platform and may face legal consequences. The virtual space is like a public physical space where anyone else can report your crimes.
If this were a physical space (e.g. a park?) and your pre-teen kids were able to hang out there, the analogs to a-c would all sound crazy. Being carded upon entry to a park, or knowing that everything you say there will be monitored by a central authority would both be really weird. Saying "parents must watch their kids" seems less practical in a VR space where you can't necessarily just keep line-of-sight to your kids.
It's like saying Amazon's business is not scalable because they need warehouse workers.
This is the whole point. Amazon had to hire hundreds of thousands of warehouse workers to scale. They have 1.5 million employees. Facebook is capable of doing the same. The idea that they "can't scale" if they have to stop unloading their negative externalities is absurd. Amazon scaled, while hiring 1.5 million employees. Meta can scale and do the same.
1. With aggressive, noisy referrals to prosecution, and banning people who report others in bad faith, can you get these people to stop approaching kids on the platform? Can you get the human review burden to a tractable level b/c the rate of real issues and the rate of false reports is sufficiently low?
2. Can better moderation / safety measures _facilitate_ growth b/c people won't be scared or disgusted away from your product? We have plenty of people whose advice is "don't let your kids use their products unsupervised" and assuming you don't have the free time to _watch_ your kids use their product that quickly turns into "don't let your kids use their products". A safe platform that people _believe_ is safe might experience faster growth.
2. I don't think the scalability issues are related to the size of the social network, so I don't think this is ever a relevant question, at least from my perspective. My point is that it would not be commercially reasonable for Meta to actually employ the number of people required to run down, verify and then forward reports.
Sorry, but from my point of view, they serve pedos to police on a silver platter. If the police don't take action, that's not Facebook's fault.
That's a bit of a strawman. I've never seen it suggested that the problem is that governments don't prosecute enough of what Facebook reports, and that that's why so much of it happens on Facebook. I certainly wasn't making that suggestion.

My point is that a lot of child solicitation does happen on Facebook, despite phone verification, so I'm not sure what point you are really making. It seems more like you are coming at it from an abstract privacy perspective, which is valid, but not what you are claiming.

Facebook is an oasis for pedos. They are all over Facebook and Instagram trying to interact with kids. There are plenty of articles about it and about how Meta takes very few, if any, simple precautionary steps, and sometimes even connects these people through its social algorithms. You are acting like children hang out on the dark web or something. They don't. They are on Facebook. They are on Instagram, YouTube, and in video games.
How odd, I wonder if there's a reason for that.
I remember that in one transparency report, FB itself sent over 12 million referrals to NCMEC, yet we don't see stories about all those people being rounded up for justice.
How. Odd.
This is what legislators are generally going for; but it turns out there’s plenty of other stuff on the Internet deserving age restrictions by the same logic.
I’m at the point where I know we’re not going back; that battle is already lost. The question is how to implement it in the most privacy preserving manner.
I’m also at the point where I believe the harm to children exceeds the harm of losing a more open internet. Kids are online now, parental controls are little used and don’t work; that’s our new reality.
For anyone who responds that this is a “think of the children” argument: that ignores that we already have tons of laws that think of the children, because sometimes you do need to think of the children. One glance at teens’ mental health right now proves that this is one of those times. Telling parents to do better after a decade of trying is not a realistic solution.
My friends with healthy attachments to social media had healthy and present parents. You have to make sure your kid doesn’t want to drop out of society because you’re too overbearing, and obviously you need to be there to tell them about the pitfalls of addiction and superficiality that only experience can reveal. Walking this line every day while your kid is kicking and screaming at you is way harder if you’ve already been kicked and screamed at for 8 hours at work, so you just put them on the iPad and hope for the best, and that’s how we get here. It begins and ends with capitalism’s productivity fetish.
If parents only had to work 20 hours… watch half care more about their kids, while the other half gets a second job anyway to buy a boat, or immediately goes into an addiction spiral, their job previously being the constraint on their time. The jobs that keep us from our hobbies are also checks on the darker sides of human nature.
On that note, even this doesn’t fix the problem, as the iPad is still an all-or-nothing device unless the parent knows how to fluently manage multiple endpoints across multiple operating systems, and that is such a universal skill that the law can safely consider it handled. I think that’s less likely to work than a genocide-free communist state.
The reason your argument is wrong is that it’s a restatement of Hobbes, who was a pessimist and can be refuted in many, many ways. Moreover, it ignores the very real economic reality that many parents face, which is simply that they have less money or time to provide quality care for their children than they did before, as evidenced by rising wealth inequality among iPad-owning populations.
I do agree that parents can sometimes be ill-equipped to raise children, but you seem to be saying that decreasing the amount of work they have to do outside of raising children would make it harder for them to raise their kids well, and I can’t really agree with that.