For a second I thought this employee was talking about what's healthy for the user. Certainly not, though; they mean what's healthy for the "user-base". I find it very interesting how this sort of language leads to certain employee behaviour. Using the concept of "health" to mean retention and engagement can overshadow thinking about health from the user's perspective: the terminology is similar, but the goals are very different, and sometimes even opposite.
If we don't push back, these narratives get normalized. A society is on a curve of collective behavior; there is no stable point, only direction.
It was like a daily ritual, and I couldn't escape it for a while. I decided to go cold turkey, since it felt like the only option. All my friends moaned and complained for a while. They even tried to revive the 'streak', but I persisted. It feels really silly when I look back, but 700 days means I was sending snaps every day for nearly 2 years straight.
I still have the app, and there are still a few friends of mine who send me snaps about their whereabouts, but I have stopped using it. Blocking the notifications was one of the best decisions I could have made, since that was the single biggest factor in not opening the app itself.
I've done this for all social media, and more recently deleted all social apps. I'll go on Facebook sometimes through the web browser, mainly for Marketplace.
Facebook was the first app I tested disabling notifications on. This had to be about 10 years ago; I noticed they would give me a new notification every 5-10 minutes. I was addicted to checking what the notification was. It was usually garbage, and the less I used Facebook, the more garbage the notifications became. Since I stopped using Facebook for anything but Marketplace, my entire feed is now garbage. The algorithm doesn't know what to do with me now, given its former history of me.
Having no social apps has been a hard change to get used to. But I feel so much better not feeling like I need to scroll.
I only scroll on Hacker News now… which is easy, because the front page doesn't get that many updates in a day, and after several minutes of browsing "new" I'm satisfied I've seen all I might want to see.
I think every social media platform with an "age limit" should be required to do this as well. And open it up, so that anyone can create their own disabling geofence on their property. How great would it be to have a Snapchat-free home zone? Or FB, or TikTok?
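For the curious, here is a minimal sketch of what such a parent-defined disabling geofence could look like under the hood: just a radius check around a registered home coordinate. Everything here (the coordinates, the 75 m radius, the function names) is an illustrative assumption on my part, not any platform's actual API.

    import math

    # Hypothetical sketch of a "disabling geofence": a simple radius check
    # around a home coordinate registered by the property owner.
    # All names, coordinates, and the radius are made-up examples.

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in meters."""
        r = 6371000  # mean Earth radius in meters
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # A "no-Snapchat home zone": center point plus radius.
    HOME_ZONE = {"lat": 40.7128, "lon": -74.0060, "radius_m": 75}

    def app_disabled_here(device_lat, device_lon):
        """True if the device's reported location is inside the disabling geofence."""
        d = haversine_m(device_lat, device_lon, HOME_ZONE["lat"], HOME_ZONE["lon"])
        return d <= HOME_ZONE["radius_m"]

    print(app_disabled_here(40.7129, -74.0061))  # a few meters from center -> True
    print(app_disabled_here(40.7200, -74.0060))  # ~800 m away -> False

The hard part isn't the math, of course; enforcement has to live in the app or the OS, and reported locations can be spoofed, which is why the platforms would have to be required to support it.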
Thing is, there was a comma between "people" and "a smile", which made his poorly thought-out joke read a lot differently. Dumb way to throw away your education.
Edit for clarity: /s - I went to the same university which had the above slogan.
So he got kicked out because of an extra comma, which he added to make it even more edgy, at the cost of reducing plausible deniability to nearly zero.
Personally, I think he just flubbed it. At the time, memes like "I'm gonna cut you <line break> up some vegetables" were popular. Can't expect a dumbass edgelord to have good grammar.
Either way, it was a stupid thing to do and he paid for it.
Depending on the location of the comma, the speaker is either planning to make happy gestures at people, or to happily kill people with a firearm.
We have different profiles for different devices to allow, for example, YouTube on the television but not on the kids' tablets or phones.
Apple devices would still have parental controls in that case, though, I think?
Cellphone companies should really step up here.
Lots of comments: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
https://theweek.com/culture-life/third-places-disappearing
But of course, social media companies will pour incredible amounts of money into political campaigns long before they let anything close to this happen.
Some $EVIL technology being fashioned to harm individuals isn't to blame - the companies behind that technology are. You can pile up your geofencing rules, but the real solution lies somewhere between you deleting the app and your government introducing better regulation.
Which of course it can, so why can't a part of the solution be technological?
People clearly want the product, and I would clearly stand to make a lot of money from it.
Ehhh, that's just a poorly thought-out slogan whose "truth" comes from endless repetition. Societal problems can have technical origins or technical enablers, in which case a technical solution might work to make things better.
So no, there's no technical solution to "people being mean to each other," but there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."
That was my point.
> [...] there is a technical solution to, say, "people being meaner to each other because they can cloak themselves with anonymization technology."
I've never used (or even heard of) YikYak before, but what solution are you suggesting exactly? De-anonymisation? How would you achieve that? Suppose you have a magical^W technological de-anonymising wand, how would that not cut both ways?
So YikYak enabled geofencing, to alleviate the problem they've caused in the first place? But let's suppose they didn't do that.
How could I, as an average parent trying to protect my child, employ such a solution on my own? Could my tech-savvy neighbor help me somehow? Is there a single person outside of YikYak who can build a solution that any parent could use?
> As one internal report put it: [...damning effects...]
I recall hearing of related embarrassing internal reports from Facebook.
And, earlier, the internal reports from big tobacco and big oil, showing they knew the harms, but chose to publicly lie instead, for greater profit.
My question is... Why are employees, who presumably have plush jobs they want to keep, still writing reports that management doesn't want to hear?
* Do they not realize when management doesn't want to hear this?
* Does management actually want to hear it, but with overwhelming intent bias? (For example, hearing that it's "compulsive" is good, and the itemized effects of that are only interpreted as emphasizing how valuable a property they own?)
* Do they think the information will be acted upon constructively, rather than for evil?
* Are they simply trying to be honest researchers, knowing they might get fired or career stalled?
* Is it job security, to make themselves harder to fire?
* Are they setting up a CYA paper trail for themselves, in case the scandal becomes public?
* Are they helping their immediate manager to set up CYA paper trails?
We did that work because our mandate was to understand the users and how to serve them.
We did that with fully good-natured, ethical intent.
We turned the findings into project proposals and MVPs.
The ones that were revenue-negative were killed by leadership after all that work; repeat cycle.
Or was it not as conscious, but more an accident of following industry conventions for corporate roles, plus corporate inefficiency and miscommunication?
Meta is doing this thousands of times per month, all the time.
They hire people on the autism spectrum who are inclined to say things out loud without much regard/respect for whether they are "supposed to" say it. *cough* James Damore.
There are plenty of autistic people who wouldn't say what Damore did, and there are non-autistic people who would.
I also know autistic people who are very highly-valued in key roles, including technical expert roles interfacing directly with customer senior execs in high-profile enterprise deals.
People are individuals, and we tend to end up treating individuals unfairly because of labels and biases, so we should try to correct for that when we can.
(Note my indifference to your discomfort with my comment.)
In my personal experience, kids and young adults, particularly those who grew up immersed in social media (born after ~1995–2000), seem to struggle with recognizing appropriate, undistorted social cues and understanding the real-world consequences of their actions.
As for Snapchat harming kids, I think it is about more than just evil people inflicting "five key clusters of harms".
Even adults often expect the same instant reactions and flexible social dynamics found online, which blinds them to the more permanent, harsher outcomes that exist outside of digital spaces.
Anecdotally, the utter shock that shows on some people's faces when they realize this is sad and very disconcerting to witness. (At an extreme, think "pranksters" who get shot or punched in the face and are confused about why that happened, when "everyone loves it online".)
How to fix this? The suggested solutions will not solve this problem, as it does not fit neatly into the "clusters of harms".
Basic ethics, and more importantly the law, need to catch up.
Surveilling, analyzing, then manipulating people psychologically to mine them for advertisers is just as real a poison as fentanyl.
And when it scales, that means billions of dollars in revenue and actual trillions of dollars in market value unrelentingly demanding growth; playing whack-a-mole with the devastating consequences isn't going to work.
Conflicts of interest are illegal in many forms. Business models incorporating highly scalable conflicts of interest need to be illegal.
We could still have social media in healthier forms. They wouldn’t be “monetizing” viewers, they would be serving customers.
Facebook's army of servers isn't required to run a shared scrapbook. All those servers, and most of Facebook's algorithms and now AI, are there to manipulate people to the maximum extent possible.
Advertising: banned. Smoking indoors: banned. And, most importantly, taxing the hell out of them (every 10% increase in cigarette prices results in a 4% decrease in adult consumption and a 7% decrease in youth consumption).
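As a rough worked example of what those cited figures imply if you read them as simple linear elasticities, here's a purely illustrative back-of-envelope sketch (real demand responses aren't linear over large price changes):

    # Back-of-envelope reading of the cited figures: a 10% price increase ->
    # -4% adult consumption, -7% youth consumption, extrapolated linearly.
    ADULT_ELASTICITY = -0.4
    YOUTH_ELASTICITY = -0.7

    def consumption_change_pct(price_increase_pct, elasticity):
        """Approximate % change in consumption for a given % price increase."""
        return price_increase_pct * elasticity

    for hike in (10, 25, 50):
        print(f"{hike}% price increase -> "
              f"adults {consumption_change_pct(hike, ADULT_ELASTICITY):+.0f}%, "
              f"youth {consumption_change_pct(hike, YOUTH_ELASTICITY):+.0f}%")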
There isn't really a directly comparable policy for taxing these free social media platforms, however, and the whole thing is a bit stickier. Before any policies can stick, the public needs to be aware of the issues. That is tough when most people's 'awareness of issues' comes directly from social media.
Until this country gets serious about this stuff - and don't hold your breath on that - this will remain the accepted norm.
I find all of these "social media is bad" articles (for kids or adults) basically boil down to: let humans communicate freely, and some of them will do bad things.
This presents a choice: Monitor everyone Orwell-style, or accept that the medium isn't going to be able to solve the problem. Even though we tolerate a lot more monitoring for kids than adults, I'm still pretty uncomfortable with the idea that technology platforms should be policing everyone's messages.
So I sleep just fine knowing that some kids (and adults) are going to have bad experiences. I send my kid to the playground knowing he could be hurt. I take him skiing. He just got his first motorcycle. We should not strive for a risk-free world, and I think efforts to make it risk-free are toxic.
The damning part is that these companies know the harm they are doing, and choose to lean into it for more $$$.
Thanks for your response. Your open source contributions are perhaps less damned than those of an actual Snap employee ;)
I am the parent. The ski resort provides the mountain, the snow, and the lifts.
He's a bit too young to be interested in taking pictures of his wang but I'd like to think this is a topic I can handle. Teaching him to navigate a dangerous world is sort of my job. I'm not losing sleep over it.
I also do everything correctly, but one time a drunk driver still almost killed me.
Oh really? I'd love to hear a few examples.
That’s just normal phone calls - no one is complaining about those.
But social networks have algorithms that promote one kind of content over another.
I keep getting recommended gross and mostly fake pimple-removal videos on YouTube, AI-generated fake videos of random crap like barnacle removal on Facebook, and Google ads for an automated IoT chicken coop.
I have never searched for these things, and no living person has ever suggested them to me. The algorithm lives a life of its own, and none of it is good.
Maybe you're starving the algorithm and it's trying random things? Look up how to reset the YT algo; I'm sure it's possible. Then try subscribing to or liking a few things that you actually like.
If you're within a standard deviation or two of the typical HNer, look up "Practical Engineering" and like a few of his videos. That should get you started.
It makes no sense to group these things together; "youtube leads to sexploitation" is nonsense. What I think I'm hearing is ennui about technology in general, which I can understand, but keep your arguments straight.
There is no need for location-based recommendations, streaks, nudges, etc. They should be building their social networks in the real world. And if they need friends outside of school, that can come through parentally facilitated activities like sports, clubs, etc. Later, you start playing Magic: The Gathering at the nerd shop or going to "shows" at the VFW hall.
I did some support work for the Trust & Safety team around the time of the whole Section 230 debate, and from what I can tell Snap has some quite good flagging mechanisms for people selling firearms, drugs, and especially puberty blockers.
What I can say is that a lot of parents are asleep at the wheel with their teenagers, not following what is going on with their child.
This generation is failing at recognizing the dangers of social media.
Teenagers and even children are being radicalized online, sold dangerous diets, manipulated by state-sponsored creators, lied to by companies, taught anti-science, and the list goes on and on.
How is all this not heavily regulated? Even adults need protection from scammers, fake products, misleading ads, hidden product promotions that look like personal opinions...
We have gone back 100 years when it comes to consumer rights, and children are the ones paying the highest price.
I just wasn't able to do anything about it.
You were a teenager once, I'm sure you can remember how little influence your parents actually had over how you actually spent your time. Or at least saw that in your friends.
This is a society wide thing. Parents are pretty much powerless.
So yes, regulation. But you'll see how discussion of any proposal for this goes down in this forum. Just imagine across the whole polis.
Kids don't need cellphones. We want them to have one often because of our own insecurities.
Kids are… resourceful.
https://www.cbsnews.com/news/teen-goes-viral-for-tweeting-fr...
Last week, a 15-year-old girl named Dorothy looked at the smart fridge in her kitchen and decided to try and talk to it: "I do not know if this is going to tweet I am talking to my fridge what the heck my Mom confiscated all of my electronics again." Sure enough, it worked. The message Dorothy said out loud to her fridge was tweeted out by her Twitter account.
(And before that, she used her DS, her Wii, and a cousin's old iPod. There's always a friend's house, too.)
$50 (or a hand-me-down from a friend) will buy you an Android burner phone that can hop on the neighbor's wifi.
People in prisons manage to conceal contraband (including cell phones) in their cells, and they have substantially fewer hiding spots.
Turning your house into a prison with random room tossings has consequences, too.
The prison warden doesn't care if the prisoners love him 20 years from now.
I took the mobo, CPU, RAM, hard drive and PSU out of the case, put them in my backpack and went to my friends house. He never noticed.
That said, I still couldn't use the PC when I was at home. Physically taking away the machine wasn't really the punishment.
This would apply to cell phones and such too. Sure they might figure out some workaround that works sometimes, but it won't be the same.
Actually, I remember the opposite. I had problems with screen time so my parents put a password on the computer. It wasn't 100% effective, of course, but it was closer to 90% than 0%.
There might be bias here if one remembers one's own teenage years, because I'm sure many teenagers _think_ their parents don't have influence over them. If you ask the parents, though, I'm sure many would agree they aren't fully in control, but would note that they still have a lot of influence.
Personally, the older I grow, the more I realize how much influence in general my parents actually had over me.
Makes no difference -- it's completely unenforced by the teachers. They're practically physically adults, teachers don't want to risk the confrontation, etc. And the kids suffer for it.
And my youngest uses no social media but their mind is still eaten by constant phone usage.
More than social media, the problem is the device. The form factor.
The "smartphone" is a malevolent technology.
Neither gives a shit about their phone, and we have to force them to take it if they are going out so we can call them if we need them.
Sure, there are parental-control systems and all that you could set up, but that requires you to be a pretty tech-savvy parent. I think you are already doing great if you keep your child away from phones and tablets until they are of school age, but keeping teenagers away from smartphones seems very unrealistic unless you live in a remote commune or something.
I really, really wish it weren't the case.
That'd already be much, much better than using it at every possible moment.
Why do people just give up proactively? Yes, you can't prevent it 100%, but you can still try to restrict it as much as possible.
Because we're up against trillion dollar companies that employ armies of experts with the goal of inducing addictive behavior. We're deeply outgunned.
Because kids have a genuine need for socialization, and being the one without a phone means you just don't get invited to shit. Birthday parties, hangouts, random trips to the ice cream shop.
Because kids are smart. I'm very technical - I had a pfSense firewall, Pi-hole, and Apple's Screen Time on my kids' devices. They found ways around that within hours; kids at school swap VPN/proxy instructions and whatnot.
Because kids these days get a school laptop, on which I have zero admin rights.
Because I don't want to be a jail warden, I want to be a parent.
Whack-a-mole is fun at an arcade. It's not fun when it's your kids.
The goal of parenting is to raise good kids. Unfortunately, it's not always going to be fun.
But we've learned things like "no Snapchat at all" make for a social pariah, which is frequently worse than the problem it's trying to solve.
It isn't properly regulated because the CEOs and founders just moan that it isn't possible to regulate so much user-generated content. I'm of the opinion that, in that case, their sites shouldn't exist, but people seem to have convinced themselves that Facebook et al. provide too much value to stand up to.
I totally agree with this.
If, for example, hydrogen cars exploded all the time, that would not be a reason not to regulate them but a reason for a complete ban.
Surely if a CEO with a billion dollar budget can’t regulate it, neither can a parent?
(I finished watching the last episode just as you posted this comment, still giddy about it. :D
I tried to spread out watching the season for the first time over more than a week, and failed miserably...)