
Posted by JumpCrisscross 1 day ago

Meta's embrace of AI is making its employees miserable(www.nytimes.com)
453 points | 520 comments
joenot443 1 day ago|
https://archive.is/JUPmz
Havoc 1 day ago||
I think there is a bit of wider social norms piece missing as well on AI use in knowledge work context.

Someone forwarded an enormous amount of text over Teams the other day at work. From someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted ChatGPT.

For, say, the HN gang that thinks in terms of context shifts, information load and things on THAT wavelength, the problem with that situation is obvious, but I realised then that it is not at all obvious to the average public. She genuinely seemed to think she's helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

There is zero understanding or consensus of acceptable practices around that sort of thing baked into societal norms right now.

erentz 1 day ago||
Seems AI has made it cheap to produce information but now you have to spend more time parsing the information. And it’s now the less competent/useful people spending less time producing more information with the more useful people spending more of their valuable time parsing that information. This is why I’m skeptical of LLMs ever becoming a net benefit in most organizations.
butlike 10 minutes ago|||
It also doesn't help that the creation portion is the dopamine-releasing aspect, and the consumption portion is the "work" part of it.
anonymars 1 day ago||||
Intellectual denial of service
scruple 1 day ago||||
LLMs are Brandolini's Law taken to an entirely different plane of existence.
jimbokun 1 day ago||||
Calling it “information” is generous.
butlike 9 minutes ago||
In the particle-physics sense it is: a photon is a discrete piece of "information".
cactacea 23 hours ago||||
That is pretty much my existence at $MAJOR_TECH_COMPANY now. Inexperienced security engineers running bots against my codebase and sending me pages long tickets with their "findings". There might be a couple of interesting nuggets here and there but by and large the reports are just noise. This churn is actively taking away from my ability to actually respond to customer-impacting issues because "security is always our top priority".
whattheheckheck 19 hours ago||
You basically have to open up a channel for them to contribute if they want to play that game
Bombthecat 1 day ago||||
You don't parse the information. You paste it back into the AI to get the bullet points the first person put in.
butlike 9 minutes ago||
And as biology has taught us: eating shit makes you sick
trollbridge 1 day ago|||
Well, you can use LLMs to parse LLM-generated slop. They make nice summaries. I have taken this approach with people who send me obviously generated LLM text; I simply run it through an LLM, paste the summary, and ask them "Is this an accurate summary?" and then I ask them for their original prompt.
solid_fuel 1 day ago|||
This puts the LLM providers in a great position of:

    <prompt text> -> [PROVIDER] -> <lots of output text> -> [PROVIDER] -> <prompt text, mangled>

They're getting paid to encode some inane prompt into paragraphs of text, and then they're getting paid again to summarize that back into something with even less value than the original prompt. And they're making money hand over fist because people are happier to play that game rather than just pushing back on the jerks sending them pages of generated garbage in the first place.
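The round trip described above can be sketched as a toy simulation. Everything here is invented for illustration — `FILLER`, `inflate`, and `deflate` are stand-ins for what would really be two separate paid LLM API calls:

```python
# Toy model of the expand-then-summarize pipeline: the sender pays to
# inflate a one-liner into a "professional" email, and the receiver
# pays again to deflate it back to roughly the original prompt.

FILLER = (
    "I hope this message finds you well. "
    "I wanted to proactively reach out and touch base. "
)

def inflate(prompt: str) -> str:
    """Stand-in for 'turn my one-liner into a polite email' (paid call #1)."""
    return FILLER + prompt + " Please let me know your thoughts."

def deflate(text: str) -> str:
    """Stand-in for 'summarize this email in one line' (paid call #2).
    Here we simply strip the boilerplate we know was added."""
    body = text.replace(FILLER, "")
    return body.replace(" Please let me know your thoughts.", "")

prompt = "Can we meet at 3pm to discuss the project?"
wire = inflate(prompt)       # what actually crosses the wire
recovered = deflate(wire)    # what the recipient ends up reading

assert recovered == prompt   # two bills later, back where we started
print(len(wire), "chars sent to convey", len(prompt), "chars of intent")
```

In this toy the summary recovers the prompt exactly; with real models the recovery is lossy, which is the point of the "telephone game" comparison below.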
trollbridge 1 day ago||
I would agree with you, except right now the walls of text come from people using the free or very cheap versions of ChatGPT, et al. So there's not even anyone making money off of it.
dodu_ 1 day ago||||
Ah yes, take my single sentence, blow it up to 3 paragraphs with LLMs, and then the person reading it can have an LLM summarize it in a single sentence.

What the fuck are we even doing anymore?

mbac32768 11 hours ago|||
We're so close to realizing the answer was with us the entire time.

midwit meme template

guy on left: katie u want meet 3pm discuss project

midwit: Hi Katie, I hope this message finds you well and that your week has been off to a productive start. I wanted to reach out and proactively touch base regarding an opportunity to align on some of the ongoing project-related workstream...

guy on right: katie u want meet 3pm discuss project

ua709 1 day ago||||
I wonder if that even works. Kinda like when kids play telephone, I think it's unlikely the input and output sentences actually match.
philipswood 1 day ago||||
He's describing a 4 step process:

1) >I simply run it through an LLM,

2) >paste the summary,

3) >and ask them "Is this an accurate summary?"

4) >and then I ask them for their original prompt.

Agreed that just step 1, or steps 1 and 2, would be depressingly pointless, but steps 3 and 4 make this the equivalent of sending someone a let-me-google-that-for-you kind of link, do they not?

Caught out like this, I imagine many people will kind of get the fact that you'd rather have their direct input.

(Or just get mad at you, but that's fair I guess)

trollbridge 1 day ago||
Worse still, the person that is the most egregious about doing this seems to appreciate it and responds with "Yes, that's right!" and just ignores (or has no idea what I'm talking about) when I ask for the original prompt.

I simply ask for a positive affirmation of the summary so that I can act on that, instead of other things.

anon84873628 1 day ago|||
The thing is, eventually these products will be more integrated into business workflows and have access to all the context, so the three paragraph expansion probably will be a significant improvement upon the original input.

And either that person won't be employed anymore, or the thing they were asking for in the first place will be automated for them.

I've already got my agent building a dossier for everyone we interact with. I haven't started training it on their writing style so I can mirror back to them... yet.

footy 1 day ago|||
This is a pretty gross privacy violation but it's also just... So depressing.
anon84873628 1 day ago||
My employer already records every scrap of communications, I'm running everything on corporate infrastructure, and they sent the information to me.

Giving the AI knowledge of the org chart, who works on what, how they prefer to communicate, what their goals/biases are, is no different than what every ape implicitly collects in their own head.

coffeefirst 23 hours ago||||
Oh I know. In the past month I’ve moved several thousand dollars in spending away from companies that turned their support into a useless understaffed AI program.

The disease has spread to six figure enterprise contracts hallucinating about their own APIs.

BlackFly 1 day ago||||
As these products improve, one person sending the output and not the prompt will remain useless. The prompt captures the intent and level of real consideration of the person sending it, the receiver can augment that with additional information if they want to.
anon84873628 1 day ago||
That's like saying I should just send the English teacher a description of what my essay will be about, instead of actually writing it.

It seems like no one responding to this understands scoped context retrieval.

rurp 1 day ago||
Professional communication has a completely different goal than a student essay, and it's weird you conflate the two. A student paper is useless as an artifact, the actual value is for the student to learn how to write the paper. If a coworker sends me a long email for me to read it should provide some actual value.
anon84873628 23 hours ago||
I'm arguing against people who essentially say that running the LLM is useless; just send the prompt. Obviously that is true if the person adds zero additional value, but then that person probably sucked as a colleague before LLMs anyway. When you use an LLM agent correctly you are adding value beyond just the prompt, and those three additional paragraphs won't just be extra noise. Especially if the agent is automatically fed your personal context.

An essay states a hypothesis and then uses first and second party sources to validate it. I'm not conflating anything, it's just a good abstract example of the type of knowledge synthesis work, which is why we make kids do them.

A business strategy proposal is nothing more than a specific type of essay where the research sources are internal research results, market trend analysis, etc.

A technical design doc is an essay about the best way to implement a feature.

An "executive summary" is just an abstract, and the MBR puts the latest research citations and raw results in bullet points.

dijksterhuis 1 day ago||||
> I've already got my agent building a dossier for everyone we interact with. I haven't started training it on their writing style so I can mirror back to them... yet.

have you asked these people how they feel about this? have you asked them for permission, for their consent to do this with their communications to you?

what you’re doing sounds incredibly creepy. like, meta/facebook kind of creepy. granted, it’s at a more limited scale, but it’s still creepy af dude.

fwiw, if i was your colleague and you asked me how i felt about you doing this with me, i’d be seeing about getting HR involved.

anon84873628 1 day ago|||
Um, I absolutely expect my colleagues to update their internal model of me every time we communicate, to a greater or lesser degree depending on how much that communication deviates from their expectations, or how much new information it contains. In fact, that is essentially the purpose of communication.

Do you think you are not constantly being "influenced" to do what people want from you?

What do you think happens during a peer review or promotion decision?

What do you think the pile of data in SharePoint / GDrive represents?

You think HR will care about someone taking prolific detailed notes at work?

I did phrase my comment in a glib way to draw out this type of reaction. But this type of stuff is what "intelligence augmentation" will include, and the corporate panopticon is already alive and well anyway.

dijksterhuis 23 hours ago|||
their mental model. the human being’s mental model. the one in our private head. not some model on a corporate server, some secret “dossier” on every interaction you’ve ever had with them. you’re basically creating your own black book / surveillance tool on everyone you interact with dude.

just because the corporations do this to us doesn’t make it okay to do it to each other. just because your employer does it doesn’t mean it’s okay to do to your co-workers. like, there has to be a degree of trust between colleagues dude.

compiling a record of every single thing anyone has ever said to you, an individual human being who is not a corporation or a machine, all for the purposes of “it makes my emails better” is just plain fucking creepy.

i think you might need some time away from the screen. seriously.

> i did phrase my comment in a glib way to draw out this type of reaction.

maybe, just maybe, it would be a good idea to take a bit of time to seriously think about why being glib about this super creepy thing you’re doing is not a good thing.

bit of self-reflection. the thing us humans are supposedly still capable of doing and the machines are not.

whattheheckheck 19 hours ago|||
Really makes you wonder about freewill and information determinism
trollbridge 1 day ago|||
Well, implicit in the TOS of things like Gmail, etc. is already permission to do this.
drawfloat 1 day ago|||
That’s not how the real world works. You will be kicked out of the workplace and rightly so.
dijksterhuis 1 day ago|||
does that make it morally okay to do with your colleagues?

like, jfc, these are fucking people we’re talking about building “dossiers” of. people the person works with, where a degree of trust and bonding is necessary. people they probably spend at least a quarter of their waking hours interacting with.

and your defence for it is “well, google does it”.

the best engineers know what not to build. they don’t build every single thing under the sun because they can.

also, don’t you have to explicitly agree to google’s terms for that stuff to use their services?

watwut 1 day ago|||
Currently they are inferior.
stoorafa 1 day ago||||
LLMs are great at decompression [1]

[1] https://jabde.com/2026/02/02/utilizing-llms-as-a-data-decomp...

dr_sausages 1 day ago||
Nice article! I wrote something similar this year too, after seeing five bullet points of information stretched out with homogeneous AI slop too many times.

https://www.vaines.org/posts/2026-01-26-jpeg-compression-for...

Sgt_Apone 1 day ago||||
Might as well donate money to the AI companies at this point.
erentz 1 day ago||||
But now even this is just producing more information, and requires more work both from you and from the original sender.
insane_dreamer 1 day ago||||
Telephone game
throw310822 1 day ago|||
> and then I ask them for their original prompt.

Original prompt: "Please rewrite this information in a nice format for my insufferable asshole colleague".

Avicebron 1 day ago|||
My default is that I won't copy and paste anything that's AI generated in communications. I kind of think that's the line. Use whatever you want in the background, but I want to communicate with the synthesis of your thoughts.

I think this is a reasonable standard to hold; otherwise, like many before have said... send me the prompt. It's actually more interesting/better if I know a coworker is struggling to communicate about something.

threecheese 1 day ago|||
I follow the same strategy, but loosely - I need those emdashes to signal that I’m using the tools.
nytesky 1 day ago|||
I actually regularly used em dashes in my writing — my kids complain I write like AI in fact — and now I have to consciously remove them.

Likewise, I often used literary flourishes and pleasantries like those in the above article about email decompression; I'm from the south, so I think structured formality comes with the territory.

I do think using an LLM to turn notes and bullets into narratives should be considered no different than rendering CSV text into an Excel format, just making it more digestible by the recipient.

rdtsc 1 day ago|||
That’s my latest joke — that we’ll have to pretend like we used the tools so they can feel validated they’ve spent all this money on hyped up technology. So, yes, it’s em-dashes and “it’s not just this, it’s that …” so they can hopefully leave us alone.
xp84 1 day ago||
I remember feeling embarrassed one time that I used a very early GPT thing to help organize perf reviews for employees from the various bullet points I had written for each (I had a lot of direct reports). But in current world, I assume I’d be praised for doing so.
avidiax 1 day ago||||
My rule is more about expansion ratio and effort.

I will generate an LLM output that organizes scattered information and thoughts, resulting in 1.25x the text. I then read and edit it, generate executive summaries, and send it.

It saves effort for me in organizing, formatting and summarizing, and the LLM is producing more structure than content.
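The "expansion ratio" rule above can be made concrete with a tiny check. This is a minimal sketch, not any real tool: the `expansion_ratio` helper, the example notes/draft, and the 1.5 cutoff are all invented here (the comment itself aims for about 1.25x):

```python
# Compare raw notes to the LLM-polished draft by word count and flag
# drafts that ballooned, i.e. where the model added content rather
# than just structure.

def expansion_ratio(notes: str, draft: str) -> float:
    """Ratio of polished-draft length to raw-notes length, in words."""
    return len(draft.split()) / max(1, len(notes.split()))

notes = "shipping friday, blocked on QA signoff, need 2 more reviewers"
draft = "Shipping Friday. Blocked on QA signoff; we need two more reviewers."

ratio = expansion_ratio(notes, draft)
print(f"expansion ratio: {ratio:.2f}")
assert ratio < 1.5, "draft ballooned: the LLM added content, not just structure"
```

A word-count ratio is crude (it can't tell padding from genuine detail), but it is a cheap first-pass guardrail before sending.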

pinkgolem 1 day ago||||
I mean, I struggle with spelling/wording and ask the LLM to proofread a lot.

I often send out the LLM version, but still check if it contains the original thoughts correctly.

It's not a bad way to extend your vocabulary & catch spelling mistakes.

stingraycharles 1 day ago|||
> I often send out the LLM version, but still check if it contains the original thoughts correctly.

Please don’t do this. You probably aren’t aware of how badly this can land. It’s not just about containing your original thoughts; it’s about the verbosity, repetitiveness, and absurdity of it all.

Grammarly is a much better tool for these kinds of purposes, and it actually guides and teaches you to improve your writing along the way.

__mharrison__ 1 day ago|||
The irony that this response has a very common LLMism...
adastra22 1 day ago||||
Grammarly the honeypot?
stingraycharles 1 day ago||
You seem to be referring to something specific I’m not aware of, could you elaborate?

A Google search didn’t reveal anything specific other than them using famous author names for expert review.

adastra22 1 day ago||
It's the nature of the product itself. It's keylogger software. That's literally what it does: take every input on your computer and route it to their servers.
stingraycharles 1 day ago||
Right, I was just confused by your use of the word honeypot.

“keylogger mode” is optional, and to my understanding, you always see a visual indicator in the text area.

it doesn’t take every input as far as I know, and security firms don’t consider it a threat.

but point taken, it’s not for people with privacy concerns.

adastra22 18 hours ago||
It installs itself as an accessibility tool, which requires special user permissions. With these permissions, it sees literally every keystroke you make (except, in some cases on some OSs, system password prompts). The visual indicator is just their UI.

Regarding "honeypot" -- that's also what a honeypot is. They provide a service you want, then collect data. We have to take their word that they're only using this data to train their AI (which, btw, they are upfront about -- they log everything and feed it into their training. it's in their TOS).

stingraycharles 13 hours ago||
Isn’t a honeypot some decoy website / service / whatever that presents itself as legit, and then once you register / interact with it you’re caught in whatever they want to do?

Eg FBI putting up fake “buy drugs online” sites and logging your info once you place the (fake) order.

adastra22 11 hours ago||
It is deception, but it doesn't have to be decoy or fraudulent. It could actually provide the service or deliver goods. The point is that the operator isn't running it for the reason they say they are, but rather to gather info or whatever. Specifically in cyber defense a honeypot is sometimes a fake server that serves as an intrusion detection alarm, but that's actually the odd one out when you look at how the term is used more broadly.
anon84873628 1 day ago|||
Verbosity and repetitiveness? Which tools are you using?

Tell it that you want a succinct professional email and it will do that. Give it examples of your own writing and it will match that style. If there's something you don't like, tell it to rewrite the part differently.

These are literally the things language models are best at.

stingraycharles 1 day ago||
> Tell it that you want a succinct professional email and it will do that. Give it examples of your own writing and it will match that style.

This is not what the parent I replied to indicated, nor what people usually do.

anon84873628 1 day ago||
There's no reason to assume that their output is as bad as whatever you have come to expect.
mkl 1 day ago||||
You don't need a fake extended vocabulary. Just communicate directly and honestly. Underlining spelling errors as you type has been a standard feature of email software for nearly three decades.
pinkgolem 23 hours ago||
Nah man, it has been career limiting before and I have gotten bad feedback for it my whole life...

I keep the LLM close to the original text content wise, but feedback was/is fantastic

teeray 22 hours ago||||
> I often send out the LLM version

The problem is that you lose your voice and adopt one that your audience knows all too well (and knows it isn’t yours). It makes your audience feel like you aren’t listening to them (even though you are!), because they feel like they’re talking to an LLM.

unD 1 day ago|||
I'm pretty sure most non-native speakers (one here) do the same. I'm not talking about a three-paragraph-long Slack thread, but even a single message where I feel otherwise unable to convey what I need _the way_ I want.
jrumbut 1 day ago|||
My typical practice is to write a reply using my own brain and whatever practices are called for, then attach any interesting chatbot responses that were generated as documents.

So there's a clear separation, a reply from me which I stand by and then some interesting chatbot stuff if you're into that.

nlawalker 1 day ago|||
You have to call it out when you see it, politely and charitably.

"Hey, thanks! This is a great overview, and I actually asked ChatGPT before asking here and got a lot of the same information, but what I'm really looking for is..."

stingraycharles 1 day ago|||
This is what I do, slightly more explicitly saying “just be the real you”. About 50% of colleagues take it well. The other 50% don’t understand the problem, and don’t understand when (and when not) to use AI.

They are at high risk.

Employees using ChatGPT to renegotiate their salary are showing a serious lack of cognitive awareness.

vasco 1 day ago|||
"If I wanted to receive copy paste from a bot I wouldn't message you, why are you trying to sneak this in?"

You reminded me of American colleagues that lie and say things are good when they are bad lol. Unable to be straight to the point. You're upset at the waste of time yet you thank them?

zeroonetwothree 1 day ago||
Just curious: would you consider yourself autistic?
sph 1 day ago|||
No, perhaps continental European. When I moved to Britain I had an adjustment period at work because English-speaking countries are terrified of disagreement and confrontation, and I am not used to dancing around the point, especially in stressful settings where efficiency is key. Mind you, I was always polite and respectful to everybody.

I got better at it, but I can’t say I ever got to like the pervasive hypocrisy. To my understanding the American/West Coast is even more fake on this aspect.

justsid 1 day ago||||
Not everyone not conforming to your preferred style of communication is autistic. What is up with the internet trying to diagnose people?!

The parent is right. The reason society as a whole is way too comfortable with overstepping social boundaries, is because people think it’s somehow rude to confront others. It makes no sense. Sometimes you gotta say it how it is, because quite frankly the real rude person is the one copy and pasting a ton of AI output into your communication so you have to parse that and then try and figure out the original intent between the lines. How is that acceptable but saying “don’t do that to me?” is not?

vasco 1 day ago||||
No, just like to work with eastern Europeans and got used to their communication. Now this stuff just jumps out at me.
luckylion 1 day ago|||
Is that a tell? Sounds very much like my reaction to that behavior, but I always assumed it's because I'm german.
sunrunner 1 day ago||
Perhaps a quick visit to https://german.millermanschool.com could sort this out
vasco 1 day ago||
Funny! I'm autistic enough that I went to do it and got 51% German and 27% autistic. In reality am portuguese and never diagnosed (outside of internet comment sections).
Aurornis 1 day ago|||
> She genuinely seemed to think she's helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

This is the root frustration spreading across workplaces everywhere. Before AI, there was no way for someone to generate a design document, Jira ticket, or pull request without investing a lot of their own time and effort into producing what you saw.

LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible. You have to actually read and process the work, which takes 100 times more effort than it took them to make it.

For people in the working world who saw the workplace as a game of min-maxing their effort against the appearance of being a valuable contributor, LLMs are the perfect shortcut: They can now generate the appearance of doing a lot of work with no more than a few lines of asking an LLM to produce documents.

If anyone spends the 30 minutes to review the AI slop from their 15-second prompt, they'll copy your feedback into ChatGPT and send another document over with the fixes. Now they've even captured you into doing their work for them!

For teams or even entire companies that were relying on appearances of activity as a proxy for contributions, this is going to be a difficult transition. Every e-mail job worker in the world just received a tool that will generate the appearance of doing their job for them, and even possibly be plausibly correct most of the time. One person can generate volumes of design documents and Jira tickets, and even copy and paste witty responses into the company Slack, and appear to be the most engaged and dedicated employee by volume while doing less actual work than ever before.

I think teams that already had good review cultures with managers who cared about the output rather than the metrics are doing fine because anyone even a little bit engaged can spot the AI copy-and-paste employees with even a little inspection. The lazy managers who relied on skimming documents and plotting number of PRs or lines of code changed are in for a rude awakening when they discover the employees dominating their little games are the ones doing the most damage to the team.

ceejayoz 1 day ago|||
> LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible.

Oh, we know. It's pretty clear in many cases.

Terr_ 1 day ago|||
Perhaps a less-brittle version would be to replace "we don't know X" with "we can't easily prove X to the extent needed to deter it."
watwut 1 day ago||
We don't need to prove it to deter it.

If in doubt, treat it as bad writing and give that feedback. It is bad writing.

2wdfsd 1 day ago|||
lol yeah... it's obvious as hell.

And frankly the best signal now is: the shorter it is, the greater the likelihood it was at least expensive for the human to produce. Said another way: a shorter thing is easier to make sense of completely, and if it's garbage, it's garbage. At least the cost borne by you was minimised!

trusche 1 day ago||
If I had more time, I would have written a shorter letter. From funny quote to litmus test.
alexandre_m 1 day ago||||
> This is the root frustration spreading across workplaces everywhere. Before AI, there was no way for someone to generate a design document, Jira ticket, or pull request without investing a lot of their own time and effort into producing what you saw.

That’s not really the point. Engineering has always operated on trust networks, not just artifacts.

Your review naturally adapts based on the level of trust you have in the author. If someone has consistently produced high-quality work, whether they used AI or not becomes mostly irrelevant.

xp84 1 day ago||||
Insightful take.

What’s funny to me is your last paragraph. A lot of companies are so gung-ho about “AI ALL the things!” that I’m not sure as a manager if I’d get in trouble for “spotting the AI copy paste” junk. I’m supposed to make sure everyone is using AI as much as possible, after all. So, rejecting someone’s output for being low-effort AI slop and asking for a “less AI” version of it might mark me as a silly old fashioned guy who doesn’t believe in AI.

anon84873628 1 day ago||
Why not coach people to use the AI correctly and keep rewriting until it is the correct length and level of detail? This whole thread is full of people talking as if you can only one-shot these things, or as if they are incapable of being succinct.
ulfw 1 day ago|||
If it's 12 pages of bullshit I know it's AI and I don't bother reading it. Simple as that.

The world is turning stupid and the tech world is at the forefront of it.

gumby271 1 day ago|||
I've run into a similar thing where I'll be cc'd on support tickets with one of our customer support agents, and they'll then reply to me with what is clearly an AI summary of the single email from the customer, which I can already read. I do think they're trying to be helpful, but it's hard not to feel like they think I'm a child or an idiot. Back in the day we agreed that Googling something for someone was rude (letmegooglethatforyou.com being a good example); I don't know why AI summaries and slop aren't understood in the same way.
eloisius 1 day ago|||
I think it’s related to the same kind of psychology responsible for road rage. When we use a tool enough, we start to perceive it as an extension of ourselves, for better or worse. I think people who use AI to do all their writing or revision have legitimately lost the sense that it’s the output of a tool. They feel like they are helping and don’t see the difference between personally writing a message for you and copying the output of Claude to you.
asib 1 day ago||||
That’s not the intent of letmegooglethatforyou. It’s a pointed way of telling the recipient they should do the bare minimum research on their own before asking someone else for help. It’s not about being angry that someone told you something they found from a cursory Google search.
notatoad 1 day ago||
You’re right, but Lmgtfy links are incredibly similar in tone to sending somebody ai output.

Lmgtfy was a passive-aggressive (but not really passive) way to say “hey, are you too dumb to google this?”. Sending somebody ai output feels the same to me - the message you’re sending to the recipient is “here, you’re obviously too dumb to ask an LLM about this yourself”. Except some people don’t seem to realize that’s the message they’re sending

furyofantares 1 day ago|||
letmegooglethatforyou.com was to let someone know that not searching for themselves is rude - it was not because it was rude to search for someone else (it wasn't and isn't)
milkshakes 1 day ago|||
you could always do this: https://marketoonist.com/wp-content/uploads/2023/03/230327.n...
figassis 1 day ago|||
And it’s too soon to have these norms. Employers today are willing to part with them at the hint of the slimmest efficiency gains, so you’ll waste your time. I think the correct response today is to wait for it to settle. Norms will form on their own.
andai 1 day ago|||
Well no, you're supposed to copy-paste it into ChatGPT, ask for an executive summary, and recover an approximation of the original input. Duh :)
heresie-dabord 1 day ago|||
> helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

Only 30 minutes? You have it good! ^_^

This person is creating more work for an FTE who now has both a) the original job, and b) the additional load of purging corruption from the inputs for (a). This is happening at scale.

Your tolerance for this depends on how close to capacity you are for (a). It's a tale as old as corporate time, well-documented by Office Space and Dilbert.

Work is Work. Pantomime is Pantomime, whether it's with “frontier” or low-tier LoLMs.

> There is zero understanding or consensus of acceptable practices around that sort of thing baked into societal norms right now.

What is acceptable right now is to believe that corrupting the inputs to the work of serious FTEs is somehow beneficial. You are expected to sing the revised words of the corporate anthem with your customary passion and obedience. Layoffs will continue until the morale of data centres has improved.

analog31 1 day ago|||
In an ideal workplace, one could sit down with the colleague and have her experience untangling the slop, perhaps by a process akin to pair programming.

Sometimes I wonder if we're letting people graduate from school with no real grasp of the purpose of written communication. School strips writing of purpose, and creates artificial purposes such as using AI to combine words in order for AI to assign it a good grade. Even before the AI era, most human generated text was not worth reading.

Mars008 1 day ago|||
I've seen a manager in meetings obviously reading Copilot's advice off as his own thoughts.
adastra22 1 day ago|||
copilot?!
fg137 1 day ago||
Microsoft Copilot is used as the default "general" AI tool at most companies you have heard of
FireBeyond 23 hours ago|||
At a previous position I got frustrated at a manager who'd characterize multiple team members, including myself, I suspect, as "creating AI slop" when he was posting multiple times a week about how his Jira tickets, PRDs etc, were all being "supercharged" (ugh) with AI.

Meanwhile, I was absolutely using AI, but not to write documents but to do first pass critical reviews, the "what am I missing here, what haven't I accounted for here?" but the writing was all my own.

scruple 1 day ago|||
Yeah I write prompts asking it to misspell a few words, break a few grammar rules, forget to capitalize once in a while, miss some punctuation once in a while. No one will ever catch on.
Forgeties79 1 day ago|||
My current bar is “if you know I’m expecting to hear from a person don’t paste unedited ChatGPT outputs and hit send.” Everybody wants to send out the efforts of their corner-cutting, but nobody wants to receive them.

Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.

notatoad 1 day ago|||
I’d go a step further and say there is never a good reason to share unedited ai output.

The closest acceptable thing to share is the full chat, including your prompts. If the output is useful enough to share, then the human thought process that led to the ai output is almost always more useful than the output itself.

bandrami 1 day ago||||
The asymmetry is that lots of people want to use LLMs to produce things, and nobody wants to consume the things LLMs produce.

The Nash equilibrium here is that the market has to find a way for the people producing things with LLMs to pay people to consume them, and the market always finds a way.

2wdfsd 1 day ago|||
Not quite. Ultimately the lion's share of model producers' income is coming from firms.

Firms are only going to pay model producers if they are getting returns in excess of the cost of financing projects over time. If a firm does not see this happen, it reduces its spend on tokens. Simple.

It's a whole lot more nuanced than some shitty game theory.

somewhatgoated 1 day ago||
“Firms are only making perfectly rational decisions that result in meaningful real outcomes” - not my experience.

Firms waste literally billions on some bullshit that gets them nothing.

Forgeties79 1 day ago|||
>the market always finds a way

That may be the case but every day LLMs feel less like the next big thing and more like 3D printing. Here to stay, but not nearly as ubiquitous and earth-shattering as people made it out to be.

If I had to guess right now, I would say LLMs are more significant than 3D printers, but less significant than the Internet.

bandrami 1 day ago|||
I've thought the 3D-printing analogy was pretty apt for about a year now. It had a lot of promise at first but it never quite had the impact people thought it would. There are still 3D printers for sale, and people still prototype with them, but nobody's printing out a dustpan when they need one.
anon84873628 1 day ago||
Um, have you heard about the drone warfare in Ukraine?
watwut 1 day ago||
There is a lot more than 3D printing going into that.
anon84873628 1 day ago||
Yes, but would they have been able to develop the production capacity for resistance without it?

Certainly seems like the advantages of 3D printing came in clutch exactly when they were needed.

Forgeties79 21 hours ago||
I think you’re missing the thrust of my comment and the responses. Nobody is saying 3-D printers are worthless, but if you remember what it was like when they were first emerging into the mainstream, you would think we would all have one in our living rooms by now just spitting out everything we need constantly. We would all be building our own furniture and repairing every niche thing in our house with them. We’d all be on some magical network sharing files with each other. We’d have a massive surge in printed guns.

Everything was theorized and it all was a variation of “nothing will be the same for anyone ever again,” not “some specific areas will be really different.”

trollbridge 1 day ago|||
I'd say that's a pretty accurate analysis. Something that is easily generated by an LLM obviously has low value and there is no moat.

Agentic coding is a bit different, particularly if a great deal of effort and intelligence goes into it, but that's a quite different thing than just cranking out slop apps.

Forgeties79 1 day ago||
Yeah there is no doubt that some companies are going to radically change their operations because of agentic coding in particular. But the revolution that is being promised, and the investment that has gone along with it, is going to smash against some pretty nasty shoals of reality sooner rather than later
bandrami 1 day ago||
Some are going to radically change their operations, but we have yet to actually see if the ROI on that comes through for them. It will be an interesting thing to watch.
Forgeties79 1 day ago||
Fair point. My implication (though I completely failed to indicate it lol) is that for some companies it will be a huge, mostly positive change I imagine. But it won’t be the majority of the companies trying to make that happen right now that’s for sure. Unless we want to consider every company deploying a chat bot for user support I guess…though I wouldn’t exactly say that is the massive leap in technology AI is promising
jimbokun 1 day ago|||
A lot of the time I will just say “Gemini/Claude is telling me…” just like I would for a Google search result. It's sometimes helpful to use the common wisdom embedded in the LLMs as a starting point for the discussion.
somewhatgoated 1 day ago||
As soon as I read this phrase my eyes glaze over and I skip everything that comes after it.

If I want the LLM answer I freaking ask it myself

878654Tom 1 day ago|||
Indeed, am I talking to a person or to a proxy-prompt?
Forgeties79 1 day ago|||
I also don’t get why people keep saying “who cares so long as it’s correct?”

That’s a huge assumption! And I care a lot, because I want to know a person looked at the result and decided it was correct. If you don’t do that, you’re dumping that work on to me and ignoring that I asked you for a reason.

Anyone can open up ChatGPT and ask for a quick answer. What on earth makes people think I want them to just do that for me when I ask them a question?

scotty79 1 day ago|||
> She genuinely seemed to think she's helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.

What you are seeing is a seed of the future of communication.

Communication is one of the hardest things people do. The goal is to pass an idea from the sender to the recipient in a manner that is least lossy. Look at how many things need to be aligned for that to even barely succeed. You have to speak the same language and dialect, have similar enough personal vocabularies, have sufficiently aligned mindsets in the domain you are communicating about, have the same current context and the ability to convey context updates; then the sender must serialize their ideas into actual words with correct enough spelling and in correct order to get the idea into the recipient's mind. All that while knowing only very little about the recipient's mind and having to predict what effect the words will have on it, assuming they don't misread the text.

In the future barely anyone will produce raw text themselves. At least not in professional contexts. The world will be way more mixed. People will come from very different cultures and use very different languages. Most people you will encounter in a professional setting will not be sufficiently aligned with you to communicate anything beyond the simplest ideas. And neither you nor they are going to be willing to align with others.

You know what will align with you? Your AI. So any message from a human will go through your AI, and any message crafted by you for a human will go through your AI as well. And when it's received, through theirs. Messages will not be written. They will be constructed in a dialog with the sender's AI. And they won't be read. They will be interrogated in a dialog between the recipient and their AI.

The future is going to be way more diverse. People will use their own communication styles they were taught when they grew up. But the bulk of out-of-family communication will be done through AI. And the AI language will be verbose. Not really fit for routine human consumption. Because words are cheap for AI and not losing details is a communication priority. It's starting as a corporatese English. But I think it will evolve rapidly to increase signal to noise ratio (while still being impractically voluminous for humans).

The issue now is just that you are trying to read the rudimentary machine code of future human communication directly.

otabdeveloper4 1 day ago|||
You can use an LLM to fix spelling and grammar errors. You don't need to generate slop. (Cloud providers sell LLMs as "robot information workers" when they're actually "calculators for text".)
stavros 1 day ago|||
Well, sure, it's very new. Soon we'll adapt and it'll be just another tool we're using.
Rekindle8090 1 day ago||
[dead]
1vuio0pswjnm7 1 day ago||
"Many workers immediately revolted. In online comments, they blasted the tracking as a privacy violation, ..."

“How do we opt out?” - Meta employee

Poetic justice, or "dogfooding"

throwaway7356 1 day ago||
I wonder what the thought process is? "I only work here because I like to violate everyone else's privacy. Mine has to be respected"?

It's not like Meta/Facebook ever had moral concerns about privacy violations or surveillance (or many other things).

hnfong 1 day ago||
Come on, we all know why they work there...

"I only work here because I like money and Meta pays more than $(INSERT_OTHER_FAANG)"

chistev 1 day ago||
I've heard the rebuttal that it's not necessarily that, but that getting a new job if they quit the one they already have is hard. So they stay.

Check out this Ask HN thread, for example, about how bad the job market is -

https://news.ycombinator.com/item?id=47988268

financetechbro 23 hours ago||
I think the broader issue is that Meta has been evil for pretty much all of history so you can’t rationalize remaining employed there as simply a factor of the current job market (ofc this primarily applies to more tenured Meta employees)
chistev 22 hours ago||
Fair point.
zeroonetwothree 1 day ago|||
Most Meta employees do not support invasive features without opt-outs in their products. Some try to argue against them. But ultimately there is only one person who gets to decide.

You might as well blame the entire US population for certain problematic actions of the president.

bjohnson225 1 day ago|||
> You might as well blame the entire US population for certain problematic actions of the president.

The equivalent would be blaming those who chose the current president. Which would be an entirely reasonable thing to do.

sbochins 1 day ago||
By that logic, you should blame the people that use meta’s products.
xigoi 1 day ago||||
Being a USA citizen is often not a choice. Being a Metaslop employee is a choice.
Conscat 1 day ago||
That feels reductive. It's actionable to get OCI, renounce US citizenship, and become Indian after a decade. It's just not very practical for most people. I know many people who don't work their dream job, but switching to a favorable company isn't currently practical for them.
gloxkiqcza 1 day ago||||
> You might as well blame the entire US population for certain problematic actions of the president

You actively decide to work for Meta, which has been known to dishonestly violate privacy since basically day one [1]. Most US citizens were passively born as one. It’s also much easier to leave a company than to move out of the USA.

[1] https://www.businessinsider.com/embarrassing-and-damaging-zu...

thrance 1 day ago||||
The infamous "Nuremberg defense".

Meta employees chose to work there. Each day, they choose to keep working there. With Meta on their resume, they could find work pretty much anywhere else.

https://en.wikipedia.org/wiki/Superior_orders

reshlo 1 day ago||||
Getting another job when you have Meta on your CV is a lot easier than moving to another country.
pertymcpert 1 day ago||||
[flagged]
Sh0000reZ 1 day ago|||
"Country by of and for the people."

A nation is what its people tolerate.

An economics euphemism; what the market will bear.

Americans bear their government and neighbors providing zero assurances of food, shelter, healthcare.

Millions support the Prez and the rest, even though they have power in numbers, well, not doing anything is a choice.

Good luck out there. My fellow Americans and I don't have to care if you end up homeless in your car. Murica!

Sh0000reZ 22 hours ago||
What? I thought this forum was all about technical correctness.

It's technically correct you all have no assurances your assets continue to hold value and you won't end up homeless. Lol

Censorship of truth!

Bombthecat 1 day ago|||
You can opt out by clicking through those 9 pages, each with a confusing form and buttons.

And on the next update we just enable it again and make you go through the process again :)

mdavid626 1 day ago||
...and at the end of the 9 pages you'll be added to the list of employees to let go next time layoffs happen.
1vuio0pswjnm7 1 day ago||
"What goes around comes around"

"You reap what you sow"

Karma (not the HN kind)

DragonStrength 1 day ago||
Well, yeah, management sees a weak labor market and imagines the ability to fire all those troublesome engineers. Remember, especially in recent years, tech management is made up predominantly of grads from a select set of "elite" universities, whose caliber is determined mostly by how rich the parents are. It's no surprise we're in a moment of extreme labor disdain. The idea that engineers with years of education are as fungible as manual labor has been tried again and again with the same results. LLMs won't change that.
whyenot 1 day ago||
> It's no surprise we're in a moment of extreme labor disdain.

So sad to think that a generation or two ago, everyone wanted to emulate the HP Way. Now all of that is gone and unless you are a superstar, you're just a commodity to be managed, and extinguished when the time comes.

_doctor_love 1 day ago||
Sorry, going to have to disagree with you there friend. It is not the case that everyone wanted to emulate the HP Way. The HP Way represented the best of Silicon Valley thinking, and if you read the book, you will see that even those guys were an outlier.

I remember that there is a passage in the book where the HP guys go and meet with other leaders of American corporations, and most of them felt that they did not have any kind of obligation back to society. I am a huge fan of the HP Way, but they were unusual, and not the norm.

jimbokun 1 day ago||
That, and the large technology companies don’t really have many ideas for new software or features that will make them more money. They can only increase profits by reducing costs.
menloshark 1 day ago||
Here's how things play out: Zuck gets some idea, he's surrounded by a bunch of yes men who say "yes, this will definitely change the world", and then it turns into this optics game of kissing the ring. You ask yourself "how could they blow 80B on the Metaverse like that"? This is how.

DON'T JOIN META, no matter how fast the recruiters reply to your messages. No matter how cool the work sounds (the managers lie in team matching). There's a reason why the average tenure is <2 years.

It's a toxic and fear-based culture. You join, and the people around you are already thinking about how to scapegoat you. People gatekeep actual work and save it for political favorites, and everyone else on the outside is stuck cooking up bullshit projects. If you do manage to find work on your own, people will immediately start scheming to steal it.

zmmmmm 1 day ago||
It is hard to judge culture during a period of serial downsizing because it will always be toxic in that context. But what you describe aligns with what I have inferred over many years of observing, even during times when they were growing: at a high level, Zuck gives the right signals of a successful tech CEO. He's smart, insightful, talks well (now) and appears decisive and willing to back long-term bets into the future. And he makes money like crazy.

But looking at the track record there's a very concerning lack of execution around critical strategic objectives. Take the metaverse - I know most people laugh at it because they think it was a bad idea to start with. I push that aside and look at the execution. They poured a startling amount of money into it, and the end result - technically - sucks. This is not good execution of a bad idea. This is incompetent execution of an untested idea. After 5 years of huge investment the characters in Horizon Worlds still look like cartoons. All the advertised features of hyper-realistic worlds, generative world building etc failed to materialise. They made a face-saving pivot to mobile where they claim it is successful, but I have literally never heard of anyone using it. I think it will be entirely synthetic traffic driven from their existing properties.

Then you can look at AI. You can say the jury is still out on their AI reboot, but it has been out a long time now, and it seems like at best they are just grading into being at par with leading AI labs. But I think that's being generous because so little has been released. What is certain is they went from a leading position right up to 2022-2023 to falling completely off the radar. Despite still holding the undisputed leading AI framework in PyTorch.

I have to conclude there's a genuine culture and execution problem that probably centers on the fact that Zuck is simply not a good people manager. And his relationship with the next level down (Andrew Bosworth etc) is such that he doesn't enable them to be either. And this all permeates through to an organization that delivers at a fraction of what it should given the resources it is expending.

Animats 1 day ago|||
The low execution quality of Meta's metaverse effort surprised me, too.

But they wanted it to run on their relatively weak headgear. A good metaverse needs a decent gamer PC, a serious GPU, and a few hundred megabits per second of Internet bandwidth. (I've written a Second Life client in Rust, so I'm very aware of the system requirements.) Facebook needs to serve a user base which is mostly phones and people with weak PCs. Not Steam users.

If you have to squeeze it onto underpowered hardware, you get something like Decentraland or R2 or Horizon - low rez, very limited detail, small contained areas. Roblox has made some progress on this problem, but it took them two decades, even with a lot of money.

The real problem with metaverses is that a big, realistic virtual world is a technical achievement, but not particularly fun. It's a world in which you can spend time and meet people, but the world is not a game. It has no plot or agenda. This throws many new Second Life users. They find themselves in a virtual world the size of Los Angeles, with thousands of options, and are totally lost. It's not passive entertainment. As Ted Turner (CNN, TBS, etc.) used to say, "the great thing about television is that it's so passive."

duskwuff 1 day ago||
I think the problem goes beyond that. Meta never had a particularly coherent story for what "Horizon Worlds" was supposed to be to users - it was variously pitched as an online conference room, a social hangout, a way to explore 3D models, a video game... it felt as if they were throwing ideas at the wall to see what stuck, and nothing really did.
zmmmmm 1 day ago|||
Ultimately yes, that was the issue. In theory they built a viable product, even if it still was cartoonish etc. But it was enough to see that even if it was perfected - there simply wasn't a killer app for what to actually do in there. The vast majority of the worlds that got any traction were just kids playgrounds with silly or trivial games. Some of them were quite fun. But none of them represented a serious value proposition to anybody with actual money.

The crazy thing is, they built a half-decent app called Horizon Workrooms. You could go in there with colleagues and co-work. With so many people WFH it was an actually useful thing to be able to share a room with your colleagues, and anybody could throw up a shared screen on the projector while having your own display in front of you that nobody could see. I did this with folks from my team and it became a regular Friday-afternoon thing for us all to hang out. This was actually useful. But they managed to screw it up and eventually canceled it as well.

Animats 1 day ago|||
That's what metaverses are like - big spaces in which users can do things. What to do is largely up to the users.
ffsm8 1 day ago||||
He is the owner though.

If Zuck wanted, he could solve it. Decimate middle management, downsize at the level of what Musk did to Twitter, and then _slowly rebuild_ in order to pay attention to the culture this time, removing anyone who takes part in such behavior...

The company would be worth more (because smaller headcount) and likely even ship more, because the culture would be better. I've never worked at Facebook though; I'm just an armchair analyst being judgmental from reading some comments.

zmmmmm 1 day ago|||
Interesting wording, because he's not the owner. What he owns is enough voting rights that nobody can challenge his decisions.

And also interesting in the sense that, this is what he claimed to actually do a few years ago. He had a "year of efficiency" where he significantly flattened and restructured the org, losing tens of thousands of staff. At that time I even defended him precisely due to this reasoning - if execution is failing you need a reboot. Well he did the reboot and it is still failing.

kelnos 1 day ago|||
> Interesting wording, because he's not the owner. What he owns is enough voting rights that nobody can challenge his decisions.

So he's the owner, for the definition that matters for GP's argument.

ffsm8 1 day ago|||
Again, I'm just an armchair analyst, but in that year of efficiency, his aim was to reduce wastage, removing low performers etc.

That kind of trimming entrenches previous culture even more, which can be desirable - but not in this particular case where the culture itself is the issue.

At that point you can't trim, you need to decimate. The layoffs at that time were several waves of around 10% - unless I misremember? If he had instead done two waves of 40% each and slowly rebuilt from scratch, it'd be a different story.

bennyelv 1 day ago|||
Why is the problem assumed to be middle management? Maybe middle management is the only thing preventing the company going from successful dumpster fire to unsuccessful dumpster fire…
ffsm8 1 day ago||
Because the issue is the culture, and the culture is entirely the responsibility of middle management.

If an IC behaved like this, it would've been the responsibility of middle management to let them go when it started. So it'd still be on them.

And that's ignoring that issues like this have historically always started in middle management.

Also I suspect you're looking at it from an individual level: one middle manager on their own obviously cannot have enough impact to change this culture, so it's not the "fault" of any one manager. And that's the reason why the heavy-handed approach is necessary, because the bad culture has settled in. Anything any one manager tries in order to improve their ICs' work life will inevitably get soured by the next level.

marcosdumay 1 day ago|||
> This is incompetent execution of an untested idea.

VR will be huge some day. Maybe not as huge as the Metaverse hype, but huge nonetheless.

But did you expect Facebook to have any competence on making it? Even if the timing was correct, what differentiator do they have?

And then the CEO throws a world-changing amount of money without even an idea (because "a VR world!" isn't an idea). Did you expect any of that money not to be wasted? That's not how products are made.

The Metaverse wasn't an organization failure. It was all Zuckerberg's incompetence; Facebook didn't even get the chance to try.

The AI started different, but it's becoming the same thing again.

somewhatgoated 1 day ago|||
VR won’t be huge someday. We won’t live to see it at least. We also won’t experience quantum computing having a real world impact. We also won’t see humanoid robots doing any meaningful real world work. There also won’t be a Mars base in our lifetime or datacenters in space or underwater. There won’t be any flying cars either.
zmmmmm 1 day ago|||
I can't tell how serious you are.

But I'm curious - thinking of your past self (depending how old you are), what would have said about the current AI revolution 10 years ago? Eg: the chances that fully agentic generalised automated software engineering would become orthodoxy? What chance would you have given it happening by 2026?

somewhatgoated 18 hours ago|||
I’m like 85% serious. And you are right, I never would have dreamed of some of the things that we have now, including LLMs. So it might well be true that I’m very wrong on all of these.
rwmj 1 day ago|||
We're still waiting for "fully agentic generalised automated software engineering".
johnfn 1 day ago||||
I would definitely bet against the humanoid robot thing on good odds.
somewhatgoated 18 hours ago||
You mean that we’ll have robots that can do the same (more or less) things that humans can?

I think the field made great advances in the last decades but still so far away from a meaningful human robot.

Personally I also think it doesn’t make sense - we can already produce humans at much cheaper cost than robots; they grow, repair themselves, can learn all kinds of stuff, etc.

I would rather invest in more humans than humanoid robots.

Specialised non-humanoid robots are a great idea on the other hand.

umeshunni 1 day ago|||
Underwater data centers: https://www.scientificamerican.com/article/china-powers-ai-b...

"Flying cars" https://www.reuters.com/business/aerospace-defense/joby-flie...

somewhatgoated 18 hours ago||
Sure there have been attempts, but nothing that regular humans actually use. Once I can book a flying taxi like I can book a regular one I’ll admit defeat.
Eufrat 1 day ago||||
> VR will be huge some day. Maybe not as huge as the Metaverse hype, but huge nonetheless.

I really doubt this. Too many people suffer from motion sickness for this to pay off. 33% of the population suffers from motion sickness to varying degrees, and current mitigations, including blowing a fan at suffering users, are an unrealistic barrier to casual usage.

senexox 1 day ago|||
I love the quest and was just using it about an hour ago. Even beyond motion sickness, it is not the same experience as it was when I first got the quest.

There is a habituation that happens: the entire experience becomes far less immersive-feeling. I have used the quest so much I don't really feel the immersion anymore at all. I had just found youtube 360 videos of the sphinx and great pyramid last night. I wish I had watched these a year ago, as it would have been so mind blowing. It is still fun but it is nothing like what it used to be. I don't feel like I "go" to the places anymore.

It reminds me quite a bit of the way marijuana was such a different experience the first few times vs the 500th time.

So even if you don't get sick, the magic wears off in about a month and people stop bothering. The experience is so consistent with people getting bored after a month. I can say from experience that this has nothing to do with the lack of content but something to do with the way the brain adjusts.

zmmmmm 1 day ago||||
I think the key is that about half of that 33% can tolerate certain elements of it (stationary experiences etc), another slice suffer in a way that will be resolvable or at least somewhat mitigated by technology improvements, and another slice will accommodate it if exposed early enough.

Put it all together and you probably are talking more like 10% of people residual. It is still a lot but I think it's just bearable to not be a death blow to mainstream use.

avidiax 1 day ago||
You can't have a modality that leaves someone out for a mass social or business product.

It's the vegetarians that constrain a shared restaurant choice.

doublerabbit 1 day ago|||
I have the Valve Index and had to buy prescription lenses to put inside to allow me to play without my glasses.

The first company to make lenses that auto-adjust to my eyesight will get my money. When I can use a headset with my current eyesight and without having to buy accessories, I'll root for VR.

I am tired of this hypocrisy world.

HWR_14 1 day ago||||
> Even if the timing was correct, what differentiator do they have?

Being willing to put $80 billion on the line is a differentiator. It can subsidize hardware, hire talent, acquire companies, etc.

There were definitely ideas beyond just "VR good". But frankly, giving some of the high-level employees he had (Bosworth and Luckey and Carmack, among others) $10 billion each to make the VR products they think should exist is something that would probably have worked.

intended 1 day ago|||
No.

VR is not going to be huge, and it misses the entire point of tech.

Think of something like a Bloomberg terminal. Ugly as sin, and incomprehensible to anyone who hasn’t practiced using it. It also gets work done faster, and has a keyboard with multiple keys to get to menus faster.

BB terminals save calories. VR does not.

VR is cool, it is aspirational, but it is not saving experts, let alone the average person, time and energy.

giancarlostoro 1 day ago|||
> DON'T JOIN META, no matter how fast the recruiters reply to your messages. No matter how cool the work sounds (the managers lie in team matching). There's a reason why the average tenure is <2 years.

I would be surprised if I even got through the interview hellscape that these companies put people through. I'm not interested in talking about algorithms and things that no dev in my decade+ in the industry ever talks about, ever. To make matters worse, nobody seems to screen developers for the things you actually should screen for, except exceptional shops that care about quality (ironically enough!). The only thing the algo questions do is push out "older" candidates who may not remember every little nuance anymore, because... they don't have to hand-craft algorithms; every language worth its salt has sorting built in, or lambdas (thinking of C#) to make sorting effortless.

omgitspavel 1 day ago||
A decade+ is plenty of time to spend a few weeks brushing up on CS basics. There is really only a handful of algorithms and data structures, and none of them are rocket science.

And what's the alternative? Quizzing people on some random C# framework methods? The "I don't use algos in a day to day job" argument has been around forever, but nobody making it ever proposes a better filter.

dxdm 1 day ago|||
The better filter is to spend the precious interview time talking about actual experience solving real work problems. It has a high signal-to-noise ratio because it gives you information on many independent axes.

I guess for candidates fresh out of school, you have to fall back to things they should know out of school as a proxy.

financltravsty 1 day ago|||
Strong no hire for Staff+ signal from this post.

Meta's leetcode gambit includes leetcode Hards and Mediums which aren't just "remember your hash maps and trees!" They're incredibly hard to brute force under time pressure if you haven't practiced similar problems before. Now do that for every interview -- exhausting.

Alternative? Lol? System design. "Walk me through systems you've built." Have a conversation. If you can't then maybe you don't have the skill for interviewing or dare I say the skill to be an engineer.

VygmraMGVl 17 hours ago|||
Meta's interview process does include leetcode Hard and Medium problems (although the Hards tend to be on the easier side), but you don't actually have to write working code, just talk your way through an algorithm. I actually was surprised at how it didn't feel like I was being asked to brute force my way through a tough algorithm problem, but more felt like a whiteboarding session with my interviewer. YMMV but I found Meta's interview process to be the most humane "big tech" interview I went through.
omgitspavel 22 hours ago|||
But there is a system design interview, for staff+ there should be even two system design rounds if I remember correctly.

When I interviewed at faang I was only once asked a leetcode hard question. Mediums in 99% of cases are manageable with just "remembering your hash maps and trees".

I'm in no way saying there aren't people who ask hard questions, but most of the times it is not the case. Also, how would you check that the person can code and solve problems with only checking their past system design experience?

kraf 1 day ago|||
They create toxic products that make the entire world more toxic. How they still manage to not have any responsibility while being editors and publishers is beyond me. I couldn't imagine how their insides wouldn't be toxic as well. Nice people don't do this.
mathgladiator 1 day ago|||
OR join meta, sell your soul, stay for 7 years, then retire and be done with work forever!!!
jimbokun 1 day ago|||
Will they still be offering enough compensation over the next 7 years for that to be true?

Not sure their stock price will continue to rise as it has in the past.

whateveracct 1 day ago||||
7 years at a toxic workplace is tough
somewhatgoated 1 day ago||
I’d rather be poor
voidfunc 1 day ago|||
I'd rather not.

I've never known poverty in my life and I will do _anything_ to avoid it.

kelnos 1 day ago|||
Fortunately there are many other options than "work at Meta" or "be poor".
shimman 1 day ago||||
I'm sorry, but if you can work at Meta you can work at any other company in the US. You're clearly making a choice. Let's not forget "I'm just following orders" wasn't a valid excuse.
whateveracct 21 hours ago|||
there's a middle ground

i make good money but not FAANG. like quarter million a year + equity that is sometimes liquid for more.

i do it remote and for a company that isn't so brutally antagonistic as meta. remote also means i don't commute, don't get trapped in an office for 40+ hrs/week, and can spend more time during the workday on my personal life than work itself.

so i make less money in an absolute sense, but i am not in any pain or being surveilled or being bullied to work hard.

and honestly i make more money per hour worked than a meta employee. so lower salary, higher effective hourly wage.

teaearlgraycold 1 day ago||||
I 100% understand the appeal of freedom from external pressures that retirement offers. But at the same time all the (many) people I know that retired early mostly just goof off and struggle to complete any of their many projects. And don’t get me wrong, I love goofing off. Been doing plenty of it. But given my inevitable death I have to appreciate a little external pressure forcing me to do good work.
toast0 1 day ago|||
> But at the same time all the (many) people I know that retired early mostly just goof off and struggle to complete any of their many projects.

I retired early and ended up going back to work part time. I didn't complete many of my projects, but that's not why I went back. Most of my projects were things I wanted to play with, not things I expected to finish.

Working part time is nice because of external pressure, but really, most of the pressure is because I'll feel bad if I disappoint the people that are letting me work with them.

I don't feel bad if I don't get my personal projects done, because nobody is going to use them anyway.

Seattle3503 1 day ago|||
Are you a dev? What does part time look like?
toast0 1 day ago||
Yes, I write software. The company is 100% remote with an annual team meetup and an annual company meetup, but I only go to the team one.

4 days a week, online at 9-10 am, offline 2-3 pm most days. Sometimes I'm working a sticky problem and stay online later. Or if I start a deploy in late afternoon, I'll stick around to finish it, etc.

Still on group chats, may or may not mute them on my day off.

teaearlgraycold 1 day ago|||
I have picked up a project that helps out a nonprofit and it’s making a nice financial impact. And then there are artistic projects that I hope positively impact others.
jimbokun 1 day ago||||
What corporations even offer “good” work any more? In the sense of not making the world a net shittier place.
teaearlgraycold 1 day ago||
I don’t know what your values are but I’m sure you can find some company that is at least morally neutral in its mission. However you might have to accept lower pay.

But to clarify I meant “work you can be proud of” when I said “good”.

xp84 1 day ago||||
If I were fortunate enough to be in that position, I think I’d partner up with a buddy to build something cool (that is unlikely to be a big moneymaker) and rely on each other for that pressure.
mathgladiator 1 day ago||||
After early retirement, it took me a solid 3 years to undo the mentality of needing to work. Now, I ride my bike with my wife as we fight her MS. I show up for her and myself.
somewhatgoated 1 day ago||||
Idk external pressure is mostly forcing me to participate in the corporate hellscape - would love to leave this and goof off as a goat farmer somewhere.

Let’s face it - most businesses don’t produce anything meaningful and just exist to realise the infinite growth fallacy of capitalism

mathgladiator 1 day ago|||
I'm going to raise cattle.
dinkumthinkum 1 day ago|||
What is the infinite growth fallacy? Are you familiar with the concept of dividends?
somewhatgoated 1 day ago||
The infinite growth fallacy is the belief that industrial economies can expand exponentially forever (rising GDP, consumption, and population) on a planet with finite resources.

What do dividends have to do with it?
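To put "exponential forever" in quantitative terms, here is a tiny compounding sketch. The 3% annual growth rate is an arbitrary illustrative assumption, not a claim about any real economy:

```python
# Illustrative only: steady 3% annual growth compounds dramatically.
rate = 0.03
size = 1.0
for year in range(100):
    size *= 1 + rate
print(f"after 100 years: {size:.1f}x")   # ~19.2x
print(f"after 1000 years: {(1 + rate) ** 1000:.2e}x")
```

Even a modest-sounding rate implies roughly a 19-fold economy in a century, which is the tension with finite physical resources that the comment is pointing at.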

Mars008 1 day ago||||
> and struggle to complete any of their many projects

Hmm.. I don't struggle, I enjoy it. The goal isn't to start glossy product production; it's to learn how to do it. As soon as that's obvious, the project is usually shelved. Except for the 'main line' projects, which together can result in something significant.

wotsdat 1 day ago|||
[dead]
menloshark 1 day ago|||
Maybe if you joined 10 years ago lmao
dlandis 1 day ago|||
> People gatekeep actual work and save it for political favorites and everyone else on the outside is stuck cooking up bullshit projects. If you do manage to find work on your own, people will immediately start scheming to steal it

So this applies to even, say, mid-level developers? Wouldn't you get work assigned to you after you're hired, or do you actually have to hunt for your own projects, like you might in some consulting firms?

menloshark 1 day ago||
> or do you actually have to hunt for your own projects, like you might in some consulting firms?

This is how the company works on a fundamental level.

On healthy teams, having something assigned to you (for levels under staff/6) is normal. On unhealthy teams, you're just a sitting duck and it's better to find your own work. Or else you'll be forced to work on bullshit projects with no upside.

Side note: the "they" who does the assigning is not a manager, it's another IC. The ones that go out and find their own work. That could be at any level technically, but usually staff+ because they form little political mafias.

gerdesj 1 day ago|||
Is this supposition or first hand experience?
menloshark 1 day ago||
The latter. I've seen so much unethical shit here, I'd love to give more detail but I'd probably dox myself
janussunaj 1 day ago||
Don't let it break you. Take whatever money you made and run.

The rest of big tech isn't much better. Big G is less stressful, but you'll see vicious and cringey behavior left and right. Hyped large startups are cults and 100% cringe. Meta is kind of the worst of both worlds though. "But they pay so well". Yeah, also: life is short.

cindyllm 1 day ago|||
[dead]
poopiokaka 1 day ago|||
[dead]
pfannkuchen 1 day ago|||
> There's a reason why the average tenure is <2 years.

Companies that hire a lot or hired a lot recently always have this. The 3 month people drag down the average. It isn’t necessarily due to turnover.

Not disagreeing with the overall point, I’ve just seen people say this same thing about a lot of companies and it doesn’t always mean something.

voidfunc 1 day ago|||
Im joining meta for the total comp not because I give a shit about the company or products. Same as every company.
menloshark 1 day ago|||
The total comp is a lie because the average tenure is <2 years, statistically speaking you won't get the full 4yr initial grant by the time you leave.

Just one suggestion: don't stop interviewing and be very observant of whatever team you land in, be ready to jump ship if there are too many red flags. Also don't trust any of the managers. Don't take anything people say at face value. Be very discerning in team matching, where you land determines everything.

You might be thinking "oh if I just work 7 days a week, I'll be safe". That's not true, it's all about where you land.

torton 1 day ago||
> Also don't trust any of the managers. Don't take anything people say at face value.

"Did you enjoy Game of Thrones? You'll love working here!"

BobbyJo 1 day ago|||
OP was saying not to join because you'll have a shitty time, not because the products aren't inspirational enough.
voidfunc 1 day ago||
I'll join purely for the comp. I can take a lot of abuse, trust me.
jimbokun 1 day ago|||
This certainly fits with everything in the article.
udswagz 1 day ago|||
100% true, absolutely nailed it
Balgair 1 day ago||
https://www.amazon.com/Careless-People-Cautionary-Power-Idea...

I mean, the book is just over a year old here people. It's not like this is new or out of date stuff.

loeg 1 day ago||
Mark hates leakers, so it is kind of intensely funny that the NYT seems to have a direct line to probably dozens of ICs. Ultimately, it's hard to keep secrets shared with 70,000 employees.
asveikau 1 day ago||
Years ago when following what Zuckerberg did occupied more space in my brain, it struck me that he can "hate leakers" but not look inward and change his behavior in a way that doesn't upset people and make them want to leak. He is a very reactionary guy, and not a "how can I be the change" or "what did I do to cause this" kind of guy.

I thought of this during his various scandals at the end of the 2010s. Everything was a PR reaction for him, rather than looking inward. The best PR is not being an asshole. I wonder if he's thought about it.

loeg 1 day ago|||
I don't think there's any possible way to behave to satisfy every single employee of tens of thousands.
asveikau 1 day ago||
Can't please every human alive, so I might as well not try to do any better. This is a very Zuckerbergian take.
loeg 1 day ago|||
Do you want to prevent leaks, or achieve some other goal? If you want to prevent leaks, this isn't an effective approach. If you want to achieve a different goal, that's fine, too, but orthogonal to the stated goal. For leaks, it's probably better to just restrict communications to the necessary distribution and understand that anything widely distributed is more or less public.
Henchman21 1 day ago||
You can prevent all the leaks by having nothing bad to leak. That is what is being suggested.
loeg 1 day ago|||
I don't think leaking is limited to "bad" things.
noisy_boy 1 day ago||
Leaking of good things is just good "PR" - people even pay good money for that to "accidentally" happen. Wonder why nobody thinks about actually doing the good thing and not bother with the rest.
kortilla 1 day ago||
No, Apple famously doesn’t want stuff leaked even if it would be good.
shimman 1 day ago||
Apple isn't good, it's just materialistic. Let's not conflate the two, especially since both have no issues working with authoritarians of all stripes.
SpicyLemonZest 1 day ago|||
I get why people have this idea, but it doesn't work. A culture of "don't have anything bad to leak" quickly and inevitably leads to "keep your mouth shut so there's nothing bad to leak".
Henchman21 23 hours ago||
You're saying it is impossible for good people to exist. I can't accept that?
SpicyLemonZest 22 hours ago||
It's impossible for someone to be so good that nothing they say would ever look bad when leaked out of context. Sometimes you have to make hard tradeoffs, or hypothetically evaluate something to understand whether it's bad or not.
alex1138 1 day ago||||
He'd please a lot more people if the feed hadn't been filled with crap - out of order, at that - that nobody ever subscribed to in the first place, causing them to miss actual posts from friends (whatever you think of 'social media', his website is fucking broken) for YEARS AND YEARS AND YEARS
senordevnyc 1 day ago|||
[flagged]
CamperBob2 1 day ago|||
If you go out of your way to get people addicted to your site, you don't get to complain when they take your rug-pull a little too personally.
senordevnyc 1 day ago||
lol, I don’t think Zuckerberg gives two shits about people complaining.
_DeadFred_ 1 day ago|||
If that one change requires every user to click through to their friends' profiles to see updates, then 2.1 billion daily users × say 4 family/friend profiles checked × 1 second per click is roughly 291,000 eight-hour workdays lost per day to humanity. That's around 100,000,000 workdays per year that humanity loses out on putting to productive use. And I am REALLY underestimating the time lost to this. Facebook is stealing, low end, 100 million days of productivity a year from humanity on this one thing.

Or another way: 850,000,000 hours. It took 5-15 billion human hours of work to go to the moon. They steal one moon program's worth of human time from humanity every 6 or so years. At the scales they operate, we need to judge them on that scale. Mark gets paid/rewarded at that scale. He needs to be judged on the same scale, not on 'the impact per individual'.

Meta has stolen multiple moon programs from humanity (again, I am way under-measuring) for that one change, in order to increase their billions of dollars.

https://www.quora.com/How-many-man-hours-went-into-the-Apoll...
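A quick sanity check of the arithmetic above. All inputs are the comment's own assumptions (user count, clicks, seconds per click), not measured data:

```python
# Back-of-envelope check of the comment's numbers.
daily_users = 2_100_000_000   # assumed daily users
clicks_per_user = 4           # assumed friend/family profiles checked
seconds_per_click = 1         # assumed time per click

seconds_per_day = daily_users * clicks_per_user * seconds_per_click
hours_per_day = seconds_per_day / 3600
workdays_per_day = hours_per_day / 8          # 8-hour workdays
hours_per_year = hours_per_day * 365

print(f"{workdays_per_day:,.0f} workdays lost per day")   # 291,667
print(f"{hours_per_year:,.0f} hours lost per year")       # 851,666,667
```

So the "291,000 workdays per day" and "850,000,000 hours per year" figures are internally consistent, given the stated assumptions.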

senordevnyc 1 day ago|||
How can you steal time from humanity when they freely chose to use your product? You don’t owe people a perfect product that doesn’t “waste” their time when compared to some arbitrary standard of how it should be.
dag100 1 day ago|||
Your argument is effectively saying "how can lowering the quality of my product affect customers when they freely use it?".

If you use Facebook regularly, you are locked into it because unless you manage to convince your entire friend network to move to some other social media with you, you will have to "leave them behind".

senordevnyc 1 day ago||
Or you could do what I, and many millions of others have, and just…stop using it.
ceejayoz 1 day ago||||
> How can you steal time from humanity when they freely chose to use your product?

By employing psychologists who figure out how to make it addictive?

socialcommenter 1 day ago|||
Ethics and morals are not "arbitrary".
alex1138 1 day ago|||
It's worse than that: people have reported that even when you go to someone's page, FB determines the order of posts. There are also the psychological experiments FB has done. And it's kind of the definition of addiction: in the beginning, when you first friend someone, FB shows their posts. These then subsequently drop off. You can post something, assuming it's been read, and it might show up for nobody.

It's been said before that it's interesting that Zuckerberg, who made a social site, is pretty introverted. It's because he stole it, and he's always been stealing things. He did it to WhatsApp. He copied Snapchat multiple times. He thinks people are "dumb fucks" rather than "look, people shouldn't give info away, but now that I have it I'll do everything I can to keep it secure" (I DON'T like Google, but my understanding is they have far fewer data problems). That's the mark of a certain kind of person which I'll, I suppose, not name. It's insulting to the web, what he does.

dingaling 1 day ago|||
FB works perfectly well for me.

80% of posts in my FB feed are groups or people to which I've subscribed or followed.

10% are interesting things it suggests outside that core, which I then follow.

10% are suggestions that I don't find interesting and which I mark as such.

j-bos 1 day ago|||
> This is a very Zuckerbergian take.

No, it's just a common fallacy. If you don't like the guy, isn't "zuckerbergian" an example of helping him live rent free in people's heads?

asveikau 1 day ago||
I'm actually not kidding when I say that Zuckerberg likes that particular fallacy a lot and I've seen him use it. You're right that it's not at all exclusive to him.
cheschire 1 day ago||||
Jesse Eisenberg captured this perfectly.
georgemcbay 1 day ago||||
> The best PR is not being an asshole. I wonder if he's thought about it.

There are a lot of people in the world who lack basic human empathy to such an extent that it is nearly impossible for them to just not be an asshole.

I don't know for sure if this applies to Mark Zuckerberg but based on all the second-hand anecdotal information I've heard about him "empathy" as he understands it is a product branding feature rather than a human emotion.

cybercatgurrl 1 day ago||
hard to do anything about when it’s in your genetics. it’s a form of neurodivergence just like any other. and to deny it is just furthering the stigma against people with high cognitive empathy and low affective empathy
kelnos 1 day ago||
Then perhaps people like that shouldn't be in charge of a company like Meta.
sleight42 1 day ago||||
He's wealthy enough that he doesn't have to care. Sadly, he's one of many, perhaps most, who demonstrate what happens when you have unbelievable wealth: you do whatever you want to whoever you want without feeling remorse.

And why not? What does he have to fear? He controls the stock. He's not going to lose his company. He's not going to lose his wealth.

He's all but invulnerable as long as he doesn't do wrong (enough) by whatever government he lives under and sucks up enough that he can get away with the rest.

This is not an excuse. This is disgust. He, and most billionaires, are rotten bastards. It's not whether they're awful, it's how awful.

giancarlostoro 1 day ago|||
He probably has the same thing as Elon Musk, Asperger's, to be honest. Eh, I just looked it up, and apparently he does. Come to think of it, maybe Steve Jobs as well; he was insanely eccentric.
kelnos 1 day ago|||
"Eccentric" != "Asperger's"

(As an aside, Asperger's is not diagnosed anymore; it's been folded into autism spectrum disorder.)

jimbokun 1 day ago|||
Employees seeing wave after wave of their coworkers laid off. You won’t win much loyalty that way.

This latest one releasing the NUMBER and DATE of the layoffs a month in advance without naming WHO is a whole new level of stupid. Let’s deliberately maximize the level of anxiety in our employees and reduce their trust in us to zero.

loeg 1 day ago||
> This latest one releasing the NUMBER and DATE of the layoffs a month in advance

This, too, was leaked to the press. Their plan wasn't to announce a month in advance.

hibikir 1 day ago||
Given Facebook's current size, and how many people are relatively disgruntled but work there because the pay is quite good, the chances of leaks for wide comms approach 100%. The level of internal, upward trust you need to have few leaks left Facebook at least 10 years ago.

No amount of hate will fix it, and no amount of tracking will hide all but the most hidden secrets, so he better get over it. In his situation, hating leakers is like Garfield hating Mondays.

softwaredoug 1 day ago||
I noticed a lot more joy using AI from people at smaller companies or working by themselves :)

I say this as someone self-employed who burned almost $1000 on tokens last month. And had a lot of fun doing it.

munificent 1 day ago||
No surprise. People like being more productive when they reap some of the benefits of that increased productivity. If you're expected to be 10x more productive but don't get a raise, all you're doing is stuffing money in some executive's pocket while your job security goes down.
zmmmmm 1 day ago|||
I'm being heavily consulted to advise management on culture change towards AI. And my number one message is this: make the number one, first, and potentially only beneficiary of AI use the individual staff members themselves. If they have more time now, DO NOT start filling that with more work for them to do. If they do more all by themselves, accept it as a bonus (experience says this is overwhelmingly what will happen anyway). Whichever way it goes, let them experience the benefit directly, and let the culture change happen organically downstream from that.

I think all these companies front-loading staff reductions are actively sabotaging themselves in the worst possible way in this regard.

uzername 1 day ago||
I would love to hear more about your advice and the coaching you are giving to management. We also have a strong push to prove evidence of climbing productivity, with clearly stated future staffing goals. I would like to advocate for this as, at least partially, an enhancement and quality-of-life improvement for IC folks.
zmmmmm 1 day ago||
It starts with the generic pitch around culture change ("culture eats strategy for breakfast" style). Then a bit of shock and awe around how extensively AI is going to redesign business processes in the long run, leading into an argument that it's a marathon, not a sprint: at the moment everyone is treating it like a sprint, and the real winners will be those gearing up for endurance. Then structuring the pathway: personal productivity as a cornerstone, flowing into pilots of implementation in areas highly aligned with AI capabilities and with minimized risk, all as preparation for the main game, which will ultimately redesign core business processes in an AI-first way.

I will say I am a bit of an outlier. I see others mostly pitching for things like small teams of "AI Champions" etc. I don't favor this because I think it will lead to dysfunctional outcomes (people trying to make the initiatives fail because they weren't "chosen" etc). So I pitch for the broad based, whole organization journey etc. But it does require a strong argument for acceptance of a slower pace of externally visible adoption.

dzhiurgis 1 day ago|||
This.

I’m in a dreadful situation right now. Everyone on the team got a Claude account, but I’m a contractor, so not for me (the only dev in a team of 25 consultants). Someone on the team assigned me a task to review a Claude skill that opens up tickets for me. I’m not even using Claude, and official policy is no AI use for development…

Otherwise it’s been a mixed bag. The pace has definitely picked up, and things that I actually enjoyed doing (UI) it does very well. Things that are actually hard (backend logic) it sucks at, and it has painted me into a corner too many times.

Aurornis 1 day ago|||
Meta is on the extreme other end of this. The article opens with how they're now using AI to monitor how everyone uses their computers.

It's still insane to me that Meta thought this would be a good idea, or that employees would be comfortable with it even though they claim it's only used for anonymous AI training.

loeg 1 day ago|||
> using AI to monitor how everyone uses their computers

It's the other way around -- they're monitoring the computers to train AI.

sterlind 1 day ago|||
probably both, to be fair.

Meta may know that their employees will put up with it, given how depressing the job market is right now, but unhappy, cynical, resentful employees do not produce good software and innovations.

there's a real financial cost to treating devs like cage-raised livestock.

loeg 1 day ago||
It's unclear how you would use LLMs to monitor clicks. Unless you just mean they're authoring the monitoring software with LLM assistance (which is probably right).
saratogacx 1 day ago||
The LLM generates context based on what's on the screen and associates it with the action taken by the user. It is less "point in time" and more "charting the flow".

For example: the page content of a PR with open comments, where the next action is to focus on the first comment; when a new PR with no open comments is shown, the approve/push button is the next action. That starts a reinforcement loop.
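A minimal sketch of what that pairing could look like. Every name and structure here is a hypothetical illustration of the idea, not any real internal tooling:

```python
# Hypothetical sketch: turning screen-capture logs into supervised
# (screen context, next action) training pairs, per the flow described above.
from dataclasses import dataclass

@dataclass
class Event:
    screen_text: str  # LLM-generated summary of what's on screen
    action: str       # the action the user took next

def to_training_pairs(events: list[Event]) -> list[tuple[str, str]]:
    # Each pair teaches: "given this screen state, predict the next action."
    return [(e.screen_text, e.action) for e in events]

log = [
    Event("PR #42: 2 open review comments", "focus first comment"),
    Event("PR #43: 0 open comments, checks green", "click approve"),
]
print(to_training_pairs(log)[1])
# ('PR #43: 0 open comments, checks green', 'click approve')
```

The "charting the flow" framing is just this: the supervision signal is the user's next action given the summarized screen state, not an isolated click.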

stasomatic 1 day ago|||
Could this be a vector to poison the AI? I am not one for sabotage, just bad karma all in all, but not all are like that, and if one knows their days at ACME are numbered, the sirens start singing.
jimbokun 1 day ago|||
If they were competently evil they would have just done it quietly.
abalashov 1 day ago|||
I work by myself and feel no joy in using AI.
stavros 1 day ago|||
I work by myself and feel great joy. Today I talked to the AI about a feature I want to add to this week's project (https://www.writelucid.cc) and it had some good feedback. Later I refactored a big part of the code to simplify it (though I had to explain to Claude why this was possible), and it came out great.

I've never been happier, I can now build everything I've been wanting to build, really fast, with very few bugs.

echelon 1 day ago|||
I work for myself and I absolutely love AI.

I'm able to get 3x the work done. Greenfield stuff appears almost immediately.

My job is providing value to customers, not worshipping at the cathedral of software that will last forever. Nothing lasts forever.

Start treating software as ephemeral. It'll click.

This doesn't mean write low quality, unmaintainable software. It just means focus on getting stuff to your customer.

Writing in super typesafe languages with the highest level of strictness helps a lot. My AI stack is Rust and Typescript.

Thanemate 1 day ago|||
I tried using it last week to make a simple Yu-Gi-Oh! website that shows decks, lets you rate them, registers users, etc., kinda like masterduelmeta.com, and I enjoyed using it, but I definitely did not enjoy making it. I didn't feel a sense of ownership or dopamine from nailing the styles just right, or making the cards shimmer when you hover them.

All jobs can generate income. What led me to follow this job in particular was the joy of turning nothing into something, and it now feels that the most effective way to do that is for only $99.99/month, and that price needle is only going to move further upwards as capabilities increase.

echelon 1 day ago||
> it now feels that the most effective way to do that is for only $99.99/month, and that price needle is only going to move further upwards as capabilities increase.

That's not how economics works.

That can happen briefly with monopolies and ossified markets, but there is typically always an alternative that will seek to break in and grab market share.

Chinese tokens are pretty cheap and they'll gladly undercut US hyperscalers.

saltyoldman 1 day ago|||
This is the right way to look at things now. It might not always have the right track record, but AI-built code is more likely to have all the right permissions in place by default, most likely to copy existing patterns in your codebase, most likely to use the highest-performance patterns, and on top of all that, the spec will match what was asked of it.
codemog 1 day ago|||
What magical AI are you using? That’s not my experience at all.
loeg 1 day ago|||
Claude with the 4.7 model is getting pretty good.
saltyoldman 20 hours ago||||
I use Cursor on auto mode almost all the time. I switch to Opus 4.7 when I know it will go off the rails. But generally Auto mode "just works".

For my personal work on my own projects, Codex 5.5 because it's cheap at $20 / month and I get in about 10 prompts during my work day (would be more like 40 if I was not focused on work though)

sterlind 1 day ago|||
there is a significant learning curve to using AI well: learning to stay skeptical and keep your brain on, developing an intuition for how much free rein to give it, writing ironclad specs and design docs and keeping them updated, making work easy to inspect, the tone you use talking to it, using one agent to critique another's work, etc.

basically, AI will produce slop if left unattended. but it's not really its fault.. it's a process failing, like not supervising the interns. using AI the Right Way(tm) is a mental workout, quite a bit slower, but extremely rewarding (ime.)

crooked-v 1 day ago|||
I can't even get LLMs to reliably use tool calls instead of bash, let alone follow existing patterns in a codebase.
echelon 1 day ago||
What do your prompts look like?

Mine are pretty robust and articulate. I tend to write very lengthy instructions and include snippets of code, file paths, struct names, etc.

j-bos 1 day ago|||
Been feeling that energy too, trying so hard to stay at my current big co job for the health insurance. But the draw is pulling me hard.
foota 1 day ago||
I've generally assumed that AI would make developers get lower compensation because of the lowered quantity of developers required for the same output, but this raises the possibility of it actually increasing if more developers end up doing their own things instead of entering the broader labor market :)
loeg 1 day ago|||
It could increase compensation by growing the economy. (E.g., perhaps counterintuitively, skilled immigration has this effect.)
bdangubic 1 day ago|||
the problem is that very few, if any, SWEs “doing their own thing” will ever make a penny out of it. whatever they do, if it actually gains a little traction, will be cloned and copied in a week by someone else. this whole idea that “we’ll see a 1-person billion dollar startup” is as silly as it gets
jimbokun 1 day ago|||
What’s your ROI for that $1000?
amelius 1 day ago||
Just wait until Big AI copies those businesses.
stephc_int13 1 day ago||
I believe that any kind of partial automation is going to make the job more soul-crushing.

Ford-style assembly lines made the work of factory workers more miserable. Partially automated checkouts did the same to cashiers.

I don't think there is any point in trying to resist automation, as the efficiency benefits are too important.

layer8 1 day ago||
Efficiency gains are more important than people not having to spend their working life with soul-crushing tasks? I don’t quite follow.
stephc_int13 1 day ago||
The assumption is that orders of magnitude more people will benefit from the efficiency gains, as was the case with agricultural automation or factory automation.

In those cases, that led to a transition period. Nowadays only a small fraction of the human population works to produce food, and their job is more about planning, finance, and orchestration of machine work, but many specialised jobs were lost or made miserable in the process.

IMHO any job that can be done by a machine should not be done by a human; the tricky part is getting there with as few undesirable effects as possible.

daveguy 1 day ago||||
Yes, eventually we will all be able to enjoy our delicious algorithm, attention, and data sandwiches on our lunch breaks.
archagon 22 hours ago|||
All this skips the fundamental question: can this job be done by a machine? Or does the job just have a vaguely machine-doable look, particularly to those outside the trenches?

Because the latter is how you get the software engineering equivalent of collapsing bridges, en masse.

stephc_int13 19 hours ago||
Almost nothing is fully automated yet, so the answer is tricky.

In the beginning, edge cases are irrelevant, but at some point they are everything you have to deal with.

themafia 1 day ago|||
> Ford style assembly lines

The ones with 10 hour shifts and mandatory overtime? Yea, I don't think it's the _line_ that's making them miserable.

> Partially automated cashier did the same thing.

I've not once heard anyone in the service industry make this complaint.

> as the efficiency benefits are too important.

You can squeeze every last drop of productivity from your employees. In the short term this may even show up as profit. In the long term it only works if you hold a monopoly position.

stephc_int13 1 day ago||
"The ones with 10 hour shifts and mandatory overtime? Yea, I don't think it's the _line_ that's making them miserable."

The whole innovation was about making the jobs as simple and repetitive as possible so humans would basically work like robots.

Once you're there, having removed any agency and freedom, pushing the hours to the limits of human exhaustion is just one logical step.

3fsd 1 day ago|||
You've described the Amazon warehouse. I've worked there and, trust me, I did not see people displaying exhaustion, etc. There were many there who did the job purely because of how simple it was and were ok with it. Perhaps they got conditioned to it.

Yes it was jarring for me to experience that.

themafia 1 day ago|||
> was about making the jobs as simple and repetitive as possible so humans would basically work like robots.

So they make fewer mistakes. Not that they become zombies that you are then able to abuse.

> pushing the hours to the limits of human exhaustion is just one logical step.

There's nothing logical about ignoring consequences. Which is probably why the "union strike" even exists. It's fighting illogic with illogic.

wat10000 1 day ago||
We’ve had partial automation in programming since the first assembler was written. I don’t think we’re more miserable than we would be if we still had to write machine code by hand.
stephc_int13 1 day ago||
People who enjoyed programming at this level (myself included) were not really that happy about it: most had to transition into jobs that didn't value some of the skills they had patiently acquired, and in which they never attained the same level of mastery.

I would have been happy writing Z80 and 68000 assembly code for an entire career.

wat10000 1 day ago||
Same here, but I think even you and I would get annoyed if we had to write machine code directly. Some people like assembly, but I've never encountered someone who eschewed even that.

If we look at automation beyond assemblers (e.g. compilers), even if you or I might be content without it, I think it's safe to say that the vast majority of programmers are glad they don't have to write assembly.

epsteingpt 1 day ago||
Meta employees as a whole are highly insecure, both from a job security and a 'status' perspective. People jump ship to the latest thing (which was the Metaverse, now is the internal AI lab).

There's a massive restructuring going on - layoffs, reorgs - and an even more ruthless performance bar.

The internal spying is common across companies. The extent would shock most 'big company' employees.

The realization and angst, IMO, is more that the days of extremely good comp, job security (even if you're good), and career progression are over at Meta unless they figure this pivot out.

What will social media become when influencers aren't a thing and "creators" are no longer a moat?

You'd imagine Mark must be sweating bullets right now, along with the rest of the media industry.

smrtinsert 1 day ago|
From a third-party perspective, it feels like gambling to me. I can't imagine being ok with knowing every keystroke you make is being tracked to train a model, most likely to replace you. I can't imagine ever wanting to work there; I've thought that for a long time. "It's full of brilliant people". Well, so is the profession, quite honestly; there are plenty of places to work.
epsteingpt 1 day ago||
That's changing quickly, and Meta pays extremely well.

Not so easy for kids 3-4 years out of school to make $500K-$600K.

The supergenius quanty ones go to Jane Street and the smart product-y ones jump ship to OpenAI or Anthropic (e.g. Boris) but there just aren't 20,000 high paying roles out there.

Anyone saying otherwise is kidding themselves.

cadamsdotcom 1 day ago|
Uber burning its whole AI budget in 4 months instead of 12, companies everywhere pressing employees to use AI whether or not it makes sense...

My cofounder and I get to “only” pay $200/mo to build our product while the hyperscalers burning tokens like crazy stave off price rises for people like us - thanks Zuck!

More comments...