Posted by andsoitis 5 days ago

Stop big tech from making users behave in ways they don't want to (economist.com)
307 points | 182 comments
1vuio0pswjnm7 4 days ago|
Alternative to archive.ph

No JavaScript, no CAPTCHA, no DDoS^1, no geo-blocking, no other nonsense^2

    echo '
    # curl config read from stdin via -K /dev/stdin
    url https://www.economist.com/by-invitation/2026/04/29/stop-big-tech-from-making-users-behave-in-ways-they-dont-want-to
    # spoof a mobile Chrome user agent
    user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Liskov"
    # send an empty Accept header
    header accept:
    output 1.htm
    '|curl -K/dev/stdin

   firefox ./1.htm
1. https://gyrovague.com/2026/02/01/archive-today-is-directing-...

2. What's up with the LinkedIn reCAPTCHA sitekey in the page source?

cortesoft 5 days ago||
While I agree with the premise, I do wonder how you can write a law that would stop the behavior we want to stop without hurting beneficial features or allowing the law to be too easily bypassed.

How do you describe in a legal way the difference between a useful feature people want and an addictive feature they don’t want?

general1465 5 days ago||
Very simple: force companies into data interoperability. That would allow users to move to a competitor without any data loss. E.g. nobody actually cares that GitHub is constantly down, because you can move your repos to a different git provider or to your own server.
Aurornis 5 days ago||
> E.g. nobody actually cares that GitHub is constantly down, because you can move your repos to a different git provider or to your own server.

I honestly can't tell if this is serious or satire, so apologies if I missed the joke.

Pushing a git repo to a new server is built into git itself.
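For example, a full move is a mirror clone plus a mirror push; a minimal sketch, where the repo URLs below are placeholders:

    # bare-clone everything: all branches, tags, and other refs
    git clone --mirror https://github.com/example/repo.git
    cd repo.git
    # push the complete mirror to the new host
    git push --mirror https://git.example.org/example/repo.git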

GitHub project data is easy to export: https://docs.github.com/en/issues/planning-and-tracking-with...

There are import tools for many competing projects that will transfer it over in various ways.

octoberfranklin 5 days ago||
> GitHub project data is easy to export: https://docs.github.com/en/issues/planning-and-tracking-with...

Only the project owner can do that.

Aurornis 5 days ago||
As a project owner, I don't want random individuals exporting my project data and cloning it somewhere else
donaldjbiden 5 days ago||
too bad because it's easy to get it through the API
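For instance, any public repo's issues can be pulled anonymously through the REST API (OWNER/REPO below are placeholders):

    # list the repo's issues (GitHub includes PRs here too) as JSON, 100 per page
    curl -H "Accept: application/vnd.github+json" \
      "https://api.github.com/repos/OWNER/REPO/issues?state=all&per_page=100"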
ryandrake 5 days ago|||
> How do you describe in a legal way the difference between a useful feature people want and an addictive feature they don’t want?

I don't know how you'd write it in a law either, but if you're in a meeting at your tech company, and the product owner or tech lead uses language like "We need to get users to do..." and "We need to incentivize..." and "It should be easy to do X and hard to do Y...", then do whatever is in your power to steer or stop it. You're not really building a product users want; you're pushing a behavior-modification scheme onto users.

pbasista 5 days ago|||
> It should be easy to do X and hard to do Y

> you're pushing a behavior-modification scheme onto users

In general I think your comment is reasonable. I would just like to point out that such "behavior-modification" schemes are sometimes introduced for genuinely good and ethical reasons.

For instance, it is in my opinion desirable to make it more difficult for users to delete all their photos, by e.g. having to confirm their decision in a dialog first, because it prevents them from accidentally doing something they might not want to do and which is potentially impossible to revert.

cortesoft 5 days ago|||
I feel like they will just frame it differently: “Users aren’t getting the full value from product x, so let’s change the workflow to help enable them to get more value with no additional effort” or “Users are losing out on a ton of value by cancelling their subscriptions without realizing what they are losing out on, so let’s implement feature x to make them less likely to mistakenly cancel”
traderj0e 5 days ago|||
One way is intent. If a company's internal communications show that they're intentionally making it addictive, or worse they know it causes harm, you have the smoking gun. This of course doesn't catch all the abuse, but at least it makes it much harder to do this down an entire reporting chain. They have to get really good at winking.

One famous case was Apple suing Samsung over patents. Hard to prove until internal comms surfaced showing intent to copy the iPhone.

cortesoft 5 days ago||
Companies are onto this, though, and do training with their staff about how to phrase things in emails to make it look better.
traderj0e 5 days ago||
Yeah I've done those trainings. That's expected. Even if people learn to say things without saying them, it's a lot harder to communicate across multiple people. And some people are still loudmouths, like at Samsung evidently.
conductr 5 days ago|||
Agree. My first thought is that most people in the early days didn't even want to start using PCs for work to begin with. The businesses generally had to mandate it. I imagine many people are facing this today with AI.
akersten 5 days ago|||
> How do you describe in a legal way the difference between a useful feature people want and an addictive feature they don’t want?

For laws like this it always boils down to "I'll know it when I see it" which is such a shockingly poor way to write legislation that I'm flabbergasted it doesn't immediately fail any amount of rudimentary scrutiny. Not to mention the latitude it grants for selective enforcement. It's basically Washington asking (through the Economist) for a leash on platforms that host their critics that they can yank at any time the population gets too rowdy, with the convenient justification that the algorithm is too good and our attention spans are in danger or whatever.

intended 5 days ago||
A large portion of the laws you live under are exactly like this. Norms make up the major chunk of governance.
Seattle3503 5 days ago|||
You create an agency and give it a mandate that requires it to balance concerns.
octoberfranklin 5 days ago||
This answer can be applied to pretty much any social question.

If it were so easy, we'd do this all the time. We already do it a lot, and there are heaps of examples where it goes wrong.

Seattle3503 5 days ago||
And examples where it goes right: the Federal Reserve, FDA, SEC, etc.
y0eswddl 5 days ago|||
Dark patterns are pretty well documented and understood at this point. I don't think identifying them is all that hard.

Infinite scroll is one obvious one, as is forcing algorithmic feeds of accounts we don't follow.

thaumasiotes 5 days ago||
Well, you could look to the gambling market for inspiration and let people voluntarily sign up for a blacklist on that feature.

That would be a lot of extra work for the platforms, but I think the results would be interesting. It amounts to legislating that certain features have to be optional and configurable.

securicat 5 days ago||
It takes five minutes to delete your TikTok, Meta, and Instagram accounts. Setting up forwarding rules from Gmail to Fastmail or another provider takes maybe a little longer; after three months of changing your addresses over, hopefully all your emails are going to the new account. These companies can’t manipulate you if you don’t use their products.

Edit: I know what network effects are, I was talking about steps individual users can (and should IMO) take. We should be helping our friends, family and neighbors find safe and healthy alternatives like Signal for comms. Build different networks that are actually social and not doomscrolling.

dakiol 5 days ago||
The same can be said about Claude, Codex, etc. These tools are amazing (technically speaking) but they don't play in our favor (most of us are regular, replaceable employees). Only the usual suspects benefit from AI (the executive layer, investors, etc.).

Still amazes me how engineers on HN are in awe of AI and LLMs, knowing that 90% of us will be affected (we won't be able to bring money to the table) once the higher-ups start to normalize the usage of AI to reduce headcount even more. Not everything is about the technical details, people. Grow up.

fragmede 5 days ago|||
It's an iterated prisoner's dilemma with all the other developers in the world, and some are vocally choosing to defect. The only rational strategy then is to also defect.
dakiol 5 days ago||
Right. It seems then that all these "elite" engineers on HN aren't as smart as we thought (and yeah, I include myself in that bag).

It's deeply sad to see how our most beloved work (those side projects we pour ourselves into purely for the joy of it) will, in the end, be the very reason most of us lose our jobs (not all of us, but the majority). OpenAI/Anthropic/etc. simply took all of that and turned it to their advantage. It's capitalism, sure, but it's heartbreaking... I wouldn't mind being out of a job for another reason, but not for that one, please.

apsurd 5 days ago||
All is not lost though, is it? We can invest our efforts into local models and frontier competitors.

I'm not blind; I have Claude Pro (not Max) and a Cursor subscription. But I'm really hesitant to go balls to the wall on the most powerful models because it isn't sustainable; I don't want it to be. So how much can I get from the older models, the smaller, cheaper ones that will hopefully be commoditized? I think the harness improvements are making headway. I continue to think Cursor Composer 2 is more than adequate.

Then again if one believes it's a race to the singularity, then that's another story. I don't.

fragmede 5 days ago||
Why not?
apsurd 5 days ago||
The most concise answer as of now is because AI has no "will".

LLMs are objectively smarter than any one person, so by some definition we've already created super-intelligence. The problem is they just sit there. They have all the answers already, if you think about it. Whenever we ask one something, it gives us the answer; it's amazing, and we can even say it can synthesize new information. We can agree with all the claims.

But what does it do with that super-intelligence? Nothing. It can't. It doesn't have will. Or interest. Curiosity? Biological imperative. Who knows.

So we create loops and introspection and set them free. Does giving AI a goal make the AI conscious? That's plainly silly if you ask me.

(I'm trying really hard not to make this philosophy. I really like the philosophy aspect, but this is my 30 second answer to the question)

fragmede 5 days ago||
The singularity won't happen because sticking a cron job in front of an LLM and telling it to do something (make money) is "silly"?

I am no philosopher but https://poc.bcachefs.org/ seems conscious.

apsurd 5 days ago||
It's not.

It's no more conscious than running that cron job to send you today's weather. That's as far as I understand what this link is. The agent is posting blog updates and such. Because it was told to. It has no will. LLM generative output is incredible. It's also not conscious.

securicat 5 days ago||||
As if Claude and Instagram are remotely similar products. But again, these products make it incredibly easy to cancel. If work requires that you use it, make the next job you get not require it or just use it on the job.
AvAn12 5 days ago|||
Both try to maximize engagement. Both are (soon to be) ad-supported. Both are driven by algorithms that show the user what they want to see.
dakiol 5 days ago|||
I see engineers addicted to Claude the same way non-tech people (friends of mine) are addicted to Instagram. In the end it's all the same: making multibillion-dollar companies richer every day.
joe_mamba 5 days ago|||
>These tools are amazing (technically speaking) but they don't play in our favor (most of us are regular, replaceable employees).

I'm a mid programmer at best compared to the top guys in the industry, who built stuff like OpenClaw, or those prodigy 16-year-old coders who became millionaires, and yet I don't fear the LLM-assisted coding future. I'm at peace knowing that I will adapt to the LLM programming world using my knowledge in my favor, or adapt to a world where I will no longer be a SW engineer but something else.

Also, I find it ironic and poetic how some SW devs here want us to rise up and fight LLMs and the companies making them for disrupting this profession, when the SW dev profession was so well paid precisely because the SW products they wrote disrupted other people's professions, moving the savings from labor costs into the pockets of employers, who used SW to optimize processes and repetitive labor and not have to hire as many people; yet they never saw an issue with other people losing their jobs. "Learn to code", eh?

Oh how the turntables.

LPisGood 5 days ago||
I haven’t looked at OpenClaw but I get the impression anyone could build it. It doesn’t do anything technically impressive, does it?
joe_mamba 5 days ago||
>anyone could build it

Then why hasn't anyone else done it before?

With hindsight, it's always easy to say anyone could have done it too, but there's more to product success than just coding and shipping an app out the door.

The first iPhone was built using COTS (commercial off-the-shelf) parts that Nokia, Ericsson and Motorola also had access to, and SW tools they also had access to, yet Apple won and buried the other companies because their end product was way more popular with the customer base. I'm sure engineers from Nokia, Ericsson and Motorola also said "we could have done exactly the same thing with the right leadership" when they saw that.

I also say "I could have done that" when I see how the maker of Flappy Bird became a multi millionaire, or how any other top 100 AppStore slop app has 100+ million downloads.

Coding skills are a dime a dozen these days. A lot of people can do 95% of these things now. The differentiator between failure and success comes from the remaining 5%: network effects, market know-how, promotion, timing, outreach, UI, UX, luck, etc.

LPisGood 5 days ago|||
I agree it was a good idea and there’s more to product success, but you were specifically talking about coding skill level.

There are some things I could easily say I (and many others) could not build even in retrospect. SolidWorks, for example, is beyond a lot of people's skill level and very difficult to build.

Flappy Bird and OpenClaw, not so much.

gavmor 5 days ago||||
Many people have! Nanoclaw, LocalGPT, Moltis, Thoth, Q-Claw... the list goes on.
Dylan16807 5 days ago|||
Well, your previous comment sure made it sound like you were talking about level of coding skill.
afavour 5 days ago|||
It's frustrating to see this response so often, as if it weren't blindingly obvious.

After years of near-monopoly status these companies have a lock on many people's social lives. To give up Instagram is akin to giving up text messaging. "Just stop using it" isn't helpful advice to those people.

If Instagram disappeared tomorrow it would be different, because everyone would be in the same position. But preaching personal responsibility in an area subject to network effects doesn't work.

securicat 5 days ago||
Give me a break. No one says “I can’t live without Instagram” literally. There are even studies that show that it makes their users depressed. From inside the company that _makes the product_.

Now, would it be inconvenient to stop? Sure. But people need better self-control. Put that cookie down!

afavour 5 days ago||
> No one says “I can’t live without Instagram” literally.

That's a straw man argument. I never said they did.

> There are even studies that show that it makes their users depressed.

What percentage of the population do you think are in the habit of reading academic studies about the effects of the products they use?

It all feels reminiscent of cigarette smoking. The damage was very well known yet people continued to do it. It took extensive government regulation to wean people off their addiction, not a "buck up, chump" motivational message.

pixl97 5 days ago|||
I don't believe the above user is here to have a well thought out discussion, they just want to tell the world how much better they are than the social media addicts.
securicat 5 days ago|||
I never said you did say that.
idle_zealot 5 days ago|||
You can and should do that, but it's not sufficient to individually avoid harm. You still have to live in a world where most people have their behavior manipulated, and that will impact you. Even from a purely selfish perspective you should support efforts to stop this sort of control broadly with legal action.
securicat 5 days ago|||
Fair point, and nothing would make me happier than TikTok and Instagram being shut down, at least for minors.
bdangubic 5 days ago|||
Exactly. I did all of OP's suggestions a decade ago (never had TikTok to begin with) and still live in a sick society surrounded by the influence of these platforms.
toasty228 5 days ago|||
It takes five minutes to just stop being depressed; it takes five minutes to just stop being addicted.

What works for you (and me, actually) doesn't work for most people; humans are complex things.

dinfinity 5 days ago|||
> It takes five minutes to just stop being depressed; it takes five minutes to just stop being addicted.

Would you place all the responsibility for drug addiction on drug dealers?

Yes, their practices are predatory, but it is essential to remind the addicts that ultimately change comes from within themselves. They need to change something.

securicat 5 days ago||||
That’s pretty insensitive to people suffering from mental illness. To compare sitting and doomscrolling on social media with something that’s chemically out of balance in a person's body is… a choice.
peanut_merchant 5 days ago||
The parent was obviously being sarcastic to prove the point through comparison.
macintux 5 days ago|||
They're still distorting our political and social worlds, whether we're participants or not.
sonicvroooom 5 days ago|||
Yeah, but that's a way they want you to behave, in order to set up a control group within the target group that continues to behave as expected. The questions to be answered are not which parts of that control group nudge which parts of the target group slightly off the predicted and/or confirmed results, and how; they answered that way back when. The question is: how can we react to the unexpected results that we ourselves forced? They can't just go on doing the opposite of what's good for them and bad for the users, or vice versa; they have 50 years of data on that, some of which, it should be noted, was accidentally burned or bombed along with a bunch of incriminating evidence shortly before investigators arrived... which should make even the last sus person understand it wasn't on purpose.
logickkk1 5 days ago|||
"just delete your account" assumes the exit was designed to work. this is the company that called its own prime cancellation flow "the iliad."
Bengalilol 5 days ago|||
[dead]
fsflover 5 days ago||
https://en.wikipedia.org/wiki/Network_effect
Seattle3503 5 days ago||
> The burden of proof should fall on the platform, not the victim. The question is not whether a harmed user can show specific damage. The question is whether the company can show, before rolling a product out to billions of people, that it is not predatory by design.

That's asking every company to prove a negative before rolling out new features.

Could we have a regulatory agency that keeps an eye on dark patterns and deals with them as evidence emerges that something is harmful?

Pet_Ant 5 days ago||
> That's asking every company to prove a negative before rolling out new features.

That’s not as ridiculous as it seems. That’s sort of the model that drug manufacturers follow. It would also mean that if they see troubling behaviour internally, they know they have to stop.

Practically, though, it would lead to corporate cover-ups. And applied earnestly, it would make these businesses unviable.

intended 5 days ago||
Wait, how?

Internal testing showed these features were addictive. They had resources allocated to creating addictive experiences for tweens.

The underlying behavioral science is well studied, down to the causal level.

Dark patterns are designed to make it hard to exit and unsubscribe. The language is purposefully obtuse, the options buried behind menu choices. We have enough A/B testing data to know how effective friction is at dissuading people from following a path.

How are we proving a negative here?

Seattle3503 5 days ago||
Proving something is addictive is not proving a negative
intended 5 days ago||
OK. So are you saying that the quoted section is setting up a situation where the firm has to prove a negative?
Seattle3503 4 days ago||
Yes

> The question is whether the company can show, before rolling a product out to billions of people, that it is not predatory by design.

"Not predatory" is a negative

JohnMakin 5 days ago||
There are still supposedly serious people, who should know better, insisting that "dark patterns" are not real and are just a mechanism to attack tech companies. I don't know how anyone these days can honestly reach that conclusion. Some of these sites use strategies similar to those the old tobacco companies used; all of this stuff is already known to marketers.
jp57 5 days ago|
But are they actually serious people? I had corporate astroturf accounts arguing with me on my otherwise-ignored blog as early as 2004. All this time later, I just assume that every serious corporation employs PR firms using sock-puppet accounts to shill in favor of whatever dark shit they're doing, acting like it's all just really great and good for us.
ddtaylor 5 days ago||
We've seen this on HN before as well: companies targeting blogs and Reddit with LLM-generated content that "subtly" name-drops products or services, fake praise, and even meaningless "support" requests on discussion boards.
andai 5 days ago||
> SOMEWHERE IN META’S servers sat a slide deck marked “Confidential”. Written in 2019, its conclusion was blunt: “Teens can’t switch off from Instagram even if they want to.”

Found this document:

https://www.economist.com/by-invitation/2026/04/29/stop-big-...

Headlines (quote):

Instagram is an inevitable and unavoidable component of teens lives. Teens can’t switch off from Instagram even if they want to.

Instagram has become the ID card of this generation. It is the go-to tool for both measuring and gathering social prestige.

Instagram sets the standards not only for how teens should look and act but also for how they should think and feel.

Teens feel themselves to be at the forefront of new social behaviours to which there is no consensus on how to behave or cope. They sorely lack empathetic voices to whom they can turn for support.

Teens talk of Instagram in terms of an ‘addicts narrative’ spending too much time indulging in a compulsive behaviour that they know is negative but feel powerless to resist.

The pressure to ‘be present and perfect’ is a defining characteristic of the anxiety teens face around Instagram. This restricts both their ability to be emotionally honest and also to create space for themselves to switch off.

Anxiety around what to post and the potential cost involved in posting the wrong thing means teens are switching from proactive to passive engagement with the platform.

tedd4u 3 days ago||
The link you shared is the link to TFA. I don't see the text you posted here in that article. Is there another link that contains this text? Thank you.
andai 5 days ago||
Kinda sounds like the older generations have abandoned them, and now they settle for IG.
smarm52 5 days ago||
The author is a "Master of Laws" (lawyer) writing about technology and psychology. Read with some skepticism.
ajb 5 days ago||
What's ironic is that originally one of the advantages of automation was that it was more impartial than human-delivered services. The inventor of the automated telephone exchange, Strowger, designed it because he was concerned that the local telephone operators were directing his calls to a competitor. We had several decades during which machines had only very limited decision-making ability, and so their ability to manipulate or discriminate was minimal.

That's gone. It went years ago, but it's taken a while for the public's intuition to catch up. People are starting to get angry, but are still somewhat baffled. Industry believes that it can continue to get away with it since it has done so for 10-20 years, but I think this underestimates how strong the backlash can get.
LanceH 5 days ago||
Everyone calling for government intervention, when unsubscribing or taking a device away from a child would work, is just hastening the end of free (libre) general-purpose computing.

Insert credit card and two forms of ID to log on...

siliconc0w 5 days ago|
On one hand, this is insidious when targeting children... On the other, these kinds of metrics are what pretty much every company tries to optimize.

We're going to get better and better at hacking the human brain, for good and for evil, and we're going to have to trade some free will and personal liberty to really keep the worst of it in check. The dark-pattern bullshit is the easiest thing to regulate, but I don't have a lot of hope for even that.

intended 5 days ago|
Which is fine. As the article points out, we value markets because they find ways to allocate resources effectively. Insider trading is illegal because it breaks the market.

Firms can optimize as they like, but if the net result is that the market ceases to function, then those behaviors get penalized.
