Posted by JumpCrisscross 1 day ago
Someone forwarded an enormous amount of text over Teams the other day at work, from someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted from ChatGPT.
For, say, the HN gang that thinks in terms of context shifts, information load and things on THAT wavelength, the problem with that situation is obvious, but I realised then that it is not at all obvious to the general public. She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.
There is zero understanding or consensus of acceptable practices around that sort of thing baked into societal norms right now.
<prompt text> -> [PROVIDER] -> <lots of output text> -> [PROVIDER] -> <prompt text, mangled>
They're getting paid to encode some inane prompt into paragraphs of text, and then they're getting paid again to summarize that back into something with even less value than the original prompt. And they're making money hand over fist because people are happier to play that game rather than just pushing back on the jerks sending them pages of generated garbage in the first place. What the fuck are we even doing anymore?
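To make the round-trip concrete, here's a minimal sketch in Python; complete() is a hypothetical stand-in for whichever metered API sits in the [PROVIDER] boxes, not any real library:

    def complete(prompt: str) -> str:
        # Hypothetical stand-in for a metered chat-completion call;
        # swap in a real provider client here.
        return "<model output for: " + prompt[:40] + "...>"

    def inflate(ask: str) -> str:
        # Sender pays for tokens to pad a one-line ask into paragraphs.
        return complete("Rewrite this as a polished, professional message: " + ask)

    def deflate(message: str) -> str:
        # Recipient pays again to boil the paragraphs back down.
        return complete("Summarize this message in one sentence: " + message)

    # Two paid API calls later, the only artifact is a lossy copy of the
    # original ask.
    mangled = deflate(inflate("katie u want meet 3pm discuss project"))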
midwit meme template
guy on left: katie u want meet 3pm discuss project
midwit: Hi Katie, I hope this message finds you well and that your week has been off to a productive start. I wanted to reach out and proactively touch base regarding an opportunity to align on some of the ongoing project-related workstream...
guy on right: katie u want meet 3pm discuss project
1) >I simply run it through an LLM,
2) >paste the summary,
3) >and ask them "Is this an accurate summary?"
4) >and then I ask them for their original prompt.
Agreed that just step 1, or steps 1 and 2, would be depressingly pointless, but steps 3 and 4 make this the equivalent of sending someone a let-me-google-that-for-you kind of link, does it not?
Caught out like this, I imagine many people will kind of get the fact that you'd rather have their direct inputs.
(Or just get mad at you, but that's fair I guess)
I simply ask for a positive affirmation of the summary so that I can act on that, instead of other things.
And either that person won't be employed anymore, or the thing they were asking for in the first place will be automated for them.
I've already got my agent building a dossier on everyone we interact with. I haven't started training it on their writing style so I can mirror it back to them... yet.
Giving the AI knowledge of the org chart, who works on what, how they prefer to communicate, what their goals/biases are, is no different than what every ape implicitly collects in their own head.
The disease has spread to six figure enterprise contracts hallucinating about their own APIs.
It seems like no one responding to this understands scoped context retrieval.
An essay states a hypothesis and then uses first- and second-party sources to validate it. I'm not conflating anything; it's just a good abstract example of that type of knowledge-synthesis work, which is why we make kids write them.
A business strategy proposal is nothing more than a specific type of essay where the research sources are internal research results, market trend analysis, etc.
A technical design doc is an essay about the best way to implement a feature.
An "executive summary" is just an abstract, and the MBR puts the latest research citations and raw results in bullet points.
have you asked these people how they feel about this? have you asked them for permission, for their consent to do this with their communications to you?
what you’re doing sounds incredibly creepy. like, meta/facebook kinda creepy. granted, it’s at a more limited scale, but it’s still creepy af dude.
fwiw, if i was your colleague and you asked me how i felt about you doing this with me, i’d be seeing about getting HR involved.
Do you think you are not constantly being "influenced" to do what people want from you?
What do you think happens during a peer review or promotion decision?
What do you think the pile of data in SharePoint / GDrive represents?
You think HR will care about someone taking prolific detailed notes at work?
I did phrase my comment in a glib way to draw out this type of reaction. But this type of stuff is what "intelligence augmentation" will include, and the corporate panopticon is already alive and well anyway.
just because the corporations do this to us doesn’t make it okay to do it to each other. just because your employer does it doesn’t mean it’s okay to do to your co-workers. like, there has to be a degree of trust between colleagues dude.
compiling a record of every single thing anyone has ever said to you, an individual human being who is not a corporation or a machine, all for the purposes of “it makes my emails better” is just plain fucking creepy.
i think you might need some time away from the screen. seriously.
> i did phrase my comment in a glib way to draw out this type of reaction.
maybe, just maybe, it would be a good idea to take a bit of time to seriously think about why being glib about this super creepy thing you’re doing is not a good thing.
bit of self-reflection. the thing us humans are supposedly still capable of doing and the machines are not.
like, jfc, these are fucking people we’re talking about building “dossiers” on. people the person works with, where a degree of trust and bonding is necessary. people they probably spend at least a quarter of their waking hours interacting with.
and your defence for it is “well, google does it”.
the best engineers know what not to build. they don’t build every single thing under the sun because they can.
also, don’t you have to explicitly agree to google’s terms for that stuff to use their services?
[1] https://jabde.com/2026/02/02/utilizing-llms-as-a-data-decomp...
https://www.vaines.org/posts/2026-01-26-jpeg-compression-for...
Original prompt: "Please rewrite this information in a nice format for my insufferable asshole colleague".
I think this is a reasonable standard to hold; otherwise, like many before have said... send me the prompt. It's actually more interesting/better if I know a coworker is struggling to communicate about something.
Likewise, I often use literary flourishes and pleasantries like those in the above article about email decompression; I'm from the South, so I think structured formality comes with the territory.
I do think using an LLM to turn notes and bullets into narratives should be considered no different than rendering CSV text into an Excel format, just making it more digestible for the recipient.
I will generate an LLM output that organizes scattered information and thoughts, resulting in 1.25x the text. I then read and edit it, generate executive summaries, and send it.
It saves effort for me in organizing, formatting and summarizing, and the LLM is producing more structure than content.
I often send out the LLM version, but still check if it contains the original thoughts correctly.
It's not a bad way to extend your vocabulary & catch spelling mistakes
Please don’t do this. You probably aren’t aware of how bad this can land. It’s not just about containing your original thoughts, it’s about the verbosity, repetitiveness, and absurdity of it all.
Grammarly is a much better tool for these kinds of purposes, and it actually guides and teaches you to improve your writing along the way.
A Google search didn’t reveal anything specific other than them using famous author names for expert review.
“keylogger mode” is optional, and to my understanding, you always see a visual indicator in the text area.
it doesn’t take every input as far as I know, and security firms don’t consider it a threat.
but point taken, it’s not for people with privacy concerns.
Regarding "honeypot" -- that's also what a honeypot is. They provide a service you want, then collect data. We have to take their word that they're only using this data to train their AI (which, btw, they are upfront about -- they log everything and feed it into their training. it's in their TOS).
Eg FBI putting up fake “buy drugs online” sites and logging your info once you place the (fake) order.
Tell it that you want a succinct professional email and it will do that. Give it examples of your own writing and it will match that style. If there's something you don't like, tell it to rewrite the part differently.
These are literally the things language models are best at.
This is not what the parent I replied to indicated, nor what people usually do.
I keep the LLM close to the original text content-wise, but the feedback was/is fantastic.
The problem is that you lose your voice and adopt one that your audience knows all too well (and knows it isn’t yours). It makes your audience feel like you aren’t listening to them (even though you are!), because they feel like they’re talking to an LLM.
So there's a clear separation, a reply from me which I stand by and then some interesting chatbot stuff if you're into that.
"Hey, thanks! This is a great overview, and I actually asked ChatGPT before asking here and got a lot of the same information, but what I'm really looking for is..."
They are at high risk.
Employees using ChatGPT to renegotiate their salary are showing a serious lack of cognitive awareness.
You reminded me of American colleagues that lie and say things are good when they are bad lol. Unable to be straight to the point. You're upset at the waste of time yet you thank them?
I got better at it, but I can’t say I ever got to like the pervasive hypocrisy. To my understanding, the American West Coast is even more fake in this respect.
The parent is right. The reason society as a whole is way too comfortable with overstepping social boundaries is that people think it’s somehow rude to confront others. It makes no sense. Sometimes you gotta say it how it is, because quite frankly the real rude person is the one copy-pasting a ton of AI output into your communication so that you have to parse it and then try to figure out the original intent between the lines. How is that acceptable, but saying “don’t do that to me” is not?
This is the root frustration spreading across workplaces everywhere. Before AI, the only way for someone to generate a design document, Jira ticket, or pull request was to invest a lot of their own time and effort into producing what you saw.
LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible. You have to actually read and process the work, which takes 100 times more effort than it took them to make it.
For people in the working world who saw the workplace as a game of min-maxing their effort against the appearance of being a valuable contributor, LLMs are the perfect shortcut: they can now generate the appearance of doing a lot of work with nothing more than a few lines asking an LLM to produce documents.
If anyone spends the 30 minutes to review the AI slop from their 15-second prompt, they'll copy your feedback into ChatGPT and send another document over with the fixes. Now they've even roped you into doing their work for them!
For teams or even entire companies that were relying on appearances of activity as a proxy for contributions, this is going to be a difficult transition. Every e-mail-job worker in the world just received a tool that will generate the appearance of doing their job for them and even possibly be plausibly correct most of the time. One person can generate volumes of design documents, Jira tickets, and even copy and paste witty responses into the company Slack, and appear to be the most engaged and dedicated employee by volume while doing less actual work than ever before.
I think teams that already had good review cultures with managers who cared about the output rather than the metrics are doing fine because anyone even a little bit engaged can spot the AI copy-and-paste employees with even a little inspection. The lazy managers who relied on skimming documents and plotting number of PRs or lines of code changed are in for a rude awakening when they discover the employees dominating their little games are the ones doing the most damage to the team.
Oh, we know. It's pretty clear in many cases.
If in doubt, treat it as bad writing and give that feedback. It is bad writing.
And frankly the best signal now is: the shorter it is, the greater the likelihood it was at least expensive for the human to produce. Said another way - a shorter thing is easier to make sense of completely, and if it's garbage, it's garbage. At least the cost borne by you was minimised!
That’s not really the point. Engineering has always operated on trust networks, not just artifacts.
Your review naturally adapts based on the level of trust you have in the author. If someone has consistently produced high-quality work, whether they used AI or not becomes mostly irrelevant.
What’s funny to me is your last paragraph. A lot of companies are so gung-ho about “AI ALL the things!” that I’m not sure as a manager if I’d get in trouble for “spotting the AI copy paste” junk. I’m supposed to make sure everyone is using AI as much as possible, after all. So, rejecting someone’s output for being low-effort AI slop and asking for a “less AI” version of it might mark me as a silly old fashioned guy who doesn’t believe in AI.
The world is turning stupid and the tech world is at the forefront of it.
Lmgtfy was a passive-aggressive (but not really passive) way to say “hey, are you too dumb to google this?”. Sending somebody ai output feels the same to me - the message you’re sending to the recipient is “here, you’re obviously too dumb to ask an LLM about this yourself”. Except some people don’t seem to realize that’s the message they’re sending
Only 30 minutes? You have it good! ^_^
This person is creating more work for an FTE who now has both a) the original job, and b) the additional load of purging corruption from the inputs for (a). This is happening at scale.
Your tolerance for this depends on how close to capacity you are for (a). It's a tale as old as corporate time, well-documented by Office Space and Dilbert.
Work is Work. Pantomime is Pantomime, whether it's with "frontier" or low-tier LoLMs.
> There is zero understanding or consensus of acceptable practices around that sort of thing baked into societal norms right now.
What is acceptable right now is to believe that corrupting the inputs to the work of serious FTEs is somehow beneficial. You are expected to sing the revised words of the corporate anthem with your customary passion and obedience. Layoffs will continue until the morale of data centres has improved.
Sometimes I wonder if we're letting people graduate from school with no real grasp of the purpose of written communication. School strips writing of purpose, and creates artificial purposes such as using AI to combine words in order for AI to assign it a good grade. Even before the AI era, most human generated text was not worth reading.
Meanwhile, I was absolutely using AI, but not to write documents: to do first-pass critical reviews, the "what am I missing here, what haven't I accounted for here?" pass. The writing was all my own.
Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.
The closest acceptable thing to share is the full chat, including your prompts. If the output is useful enough to share, then the human thought process that led to the ai output is almost always more useful than the output itself.
The Nash equilibrium here is that the market has to find a way for the people producing things with LLMs to pay people to consume them, and the market always finds a way.
Firms are only going to pay out to model producers if they are getting returns in excess of the cost of financing projects over time. If a firm does not see this happen, it reduces its spend on tokens. Simple.
It's a whole lot more nuanced than some shitty game theory.
Firms waste literally billions on some bullshit that gets them nothing.
That may be the case, but every day LLMs feel less like the next big thing and more like 3D printing. Here to stay, but not nearly as ubiquitous and earth-shattering as people made them out to be.
If I had to guess right now, I would say LLMs are more significant than 3D printers, but less significant than the Internet.
Certainly seems like the advantages of 3D printing came in clutch exactly when they were needed.
Everything was theorized and it all was a variation of “nothing will be the same for anyone ever again,” not “some specific areas will be really different.”
Agentic coding is a bit different, particularly if a great deal of effort and intelligence goes into it, but that's a quite different thing than just cranking out slop apps.
If I want the LLM answer I freaking ask it myself
That’s a huge assumption! And I care a lot, because I want to know a person looked at the result and decided it was correct. If you don’t do that, you’re dumping that work on to me and ignoring that I asked you for a reason.
Anyone can open up ChatGPT and ask for a quick answer. What on earth makes people think I want them to just do that for me when I ask them a question?
What you are seeing is a seed of the future of communication.
Communication is one of the hardest things people do. The goal is to pass an idea from the sender to the recipient in the least lossy manner possible. Look at how many things need to be aligned for that to even barely succeed. You have to speak the same language and dialect, have similar enough personal vocabularies, have sufficiently aligned mindsets in the domain you are communicating about, and share the same current context with the ability to convey context updates; then the sender must serialize their ideas into actual words, with correct enough spelling and in the correct order, to get the idea into the recipient's mind. All that while knowing only very little about the recipient's mind and having to predict what effect the words will have on it, assuming they don't misread the text.
In the future, barely anyone will produce raw text themselves, at least not in professional contexts. The world will be way more mixed. People will come from very different cultures and use very different languages. Most people you encounter in a professional setting will not be sufficiently aligned with you to communicate anything beyond the simplest ideas. And neither you nor they are going to be willing to align with others.
You know what will align with you? Your AI. So any message from a human will go through your AI, and any message crafted by you for a human will go through your AI as well. And when it's received, through theirs. Messages will not be written; they will be constructed in a dialog with the sender's AI. And they won't be read; they will be interrogated in a dialog between the recipient and their AI.
The future is going to be way more diverse. People will use their own communication styles, the ones they were taught growing up, but the bulk of out-of-family communication will be done through AI. And the AI language will be verbose, not really fit for routine human consumption, because words are cheap for AI and not losing details is a communication priority. It's starting as corporatese English, but I think it will evolve rapidly to increase the signal-to-noise ratio (while still being impractically voluminous for humans).
The issue now is just that you are trying to read the rudimentary machine code of future human communication directly.
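A minimal sketch of that flow, assuming a hypothetical my_ai() helper on each side (all names illustrative):

    def my_ai(prompt: str) -> str:
        # Hypothetical stand-in for each party's personal model.
        return "<model output>"

    def encode(idea: str) -> str:
        # The sender's AI serializes the idea into the verbose wire format.
        return my_ai("Encode for transmission, preserving every detail: " + idea)

    def interrogate(wire_text: str, question: str) -> str:
        # The recipient never reads the wire text directly; they question it.
        return my_ai("Given this message:\n" + wire_text + "\nAnswer: " + question)

    wire = encode("we should move the launch to March")
    answer = interrogate(wire, "what date change is being proposed?")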
“How do we opt out?” - Meta employee
Poetic justice, or "dogfooding"
It's not like Meta/Facebook ever had moral concerns about privacy violations or surveillance (or many other things).
"I only work here because I like money and Meta pays more than $(INSERT_OTHER_FAANG)"
Check out this Ask HN thread, for example, about how bad the job market is -
You might as well blame the entire US population for certain problematic actions of the president.
The equivalent would be blaming those who chose the current president. Which would be an entirely reasonable thing to do.
You actively decide to work for Meta, which has been known to dishonestly violate privacy since basically day one [1]. Most US citizens were passively born as such. It’s also much easier to leave a company than to move out of the USA.
[1] https://www.businessinsider.com/embarrassing-and-damaging-zu...
Meta employee chose to work there. Each day, they choose to keep working there. With Meta on their resume, they could find work pretty much anywhere else.
A nation is what its people tolerate.
An economics euphemism; what the market will bear.
Americans bear their government and neighbors providing zero assurances of food, shelter, healthcare.
Millions support the Prez, and as for the rest, even though they have power in numbers, well, not doing anything is a choice.
Good luck out there. My fellow Americans and I don't have to care if you end up homeless in your car. Murica!
It's technically correct you all have no assurances your assets continue to hold value and you won't end up homeless. Lol
Censorship of truth!
And on the next update we just enable it again and make you go through the process again :)
"You reap what you sow"
Karma (not the HN kind)
So sad to think that a generation or two ago, everyone wanted to emulate the HP Way. Now all of that is gone and unless you are a superstar, you're just a commodity to be managed, and extinguished when the time comes.
I remember that there is a passage in the book where the HP guys go and meet with other leaders of American corporations, and most of them felt that they did not have any kind of obligation back to society. I am a huge fan of the HP Way, but they were unusual, and not the norm.
DON'T JOIN META, no matter how fast the recruiters reply to your messages. No matter how cool the work sounds (the managers lie in team matching). There's a reason why the average tenure is <2 years.
It's a toxic and fear based culture. You join, the people around you are already thinking how to scapegoat you. People gatekeep actual work and save it for political favorites and everyone else on the outside is stuck cooking up bullshit projects. If you do manage to find work on your own, people will immediately start scheming to steal it
But looking at the track record, there's a very concerning lack of execution around critical strategic objectives. Take the metaverse - I know most people laugh at it because they think it was a bad idea to start with. I push that aside and look at the execution. They poured a startling amount of money into it, and the end result - technically - sucks. This is not good execution of a bad idea. This is incompetent execution of an untested idea. After 5 years of huge investment, the characters in Horizon Worlds still look like cartoons. All the advertised features of hyper-realistic worlds, generative world building etc. failed to materialise. They made a face-saving pivot to mobile, where they claim it is successful, but I have literally never heard of anyone using it. I think it will turn out to be entirely synthetic traffic driven from their existing properties.
Then you can look at AI. You can say the jury is still out on their AI reboot, but it has been out a long time now, and it seems like at best they are just gradually pulling level with the leading AI labs. But I think that's being generous, because so little has been released. What is certain is they went from a leading position right up to 2022-2023 to falling completely off the radar, despite still holding the undisputed leading AI framework in PyTorch.
I have to conclude there's a genuine culture and execution problem that probably centers on the fact that Zuck is simply not a good people manager. And his relationship with the next level down (Andrew Bosworth etc) is such that he doesn't enable them to be either. And this all permeates through to an organization that delivers at a fraction of what it should given the resources it is expending.
But they wanted it to run on their relatively weak headgear. A good metaverse needs a decent gamer PC, a serious GPU, and a few hundred megabits per second of Internet bandwidth. (I've written a Second Life client in Rust, so I'm very aware of the system requirements.) Facebook needs to serve a user base which is mostly phones and people with weak PCs. Not Steam users.
If you have to squeeze it onto underpowered hardware, you get something like Decentraland or R2 or Horizon - low rez, very limited detail, small contained areas. Roblox has made some progress on this problem, but it took them two decades, even with a lot of money.
The real problem with metaverses is that a big, realistic virtual world is a technical achievement, but not particularly fun. It's a world in which you can spend time and meet people, but the world is not a game. It has no plot or agenda. This throws many new Second Life users. They find themselves in a virtual world the size of Los Angeles, with thousands of options, and are totally lost. It's not passive entertainment. As Ted Turner (CNN, TBS, etc.) used to say, "the great thing about television is that it's so passive."
The crazy thing is, they built a half-decent app called Horizon Workrooms. You could go in there with colleagues and co-work. With so many people WFH, it was an actually useful thing to be able to share a room with your colleagues, and anybody could throw up a shared screen on the projector while having your own display in front of you that nobody could see. I did this with folks from my team and it became a regular Friday-afternoon type thing for us all to hang out. This was actually useful. But they managed to screw it up and eventually canceled it as well.
If Zuck wanted, he could solve it. Decimate middle management, downsize at the level of what Musk did to Twitter, and then _slowly rebuild_ in order to pay attention to the culture this time, removing anyone who takes part in such behavior...
The company would be worth more (because of the smaller headcount) and would likely even ship more, because the culture would be better. I've never worked at Facebook though; I'm just an armchair analyst being judgemental from reading some comments.
And also interesting in the sense that, this is what he claimed to actually do a few years ago. He had a "year of efficiency" where he significantly flattened and restructured the org, losing tens of thousands of staff. At that time I even defended him precisely due to this reasoning - if execution is failing you need a reboot. Well he did the reboot and it is still failing.
So he's the owner, for the definition that matters for GP's argument.
That kind of trimming entrenches previous culture even more, which can be desirable - but not in this particular case where the culture itself is the issue.
At that point you can't trim, you need to decimate. The layoffs at that time were several waves of around 10% - unless I misremember? If he had instead done two waves of 40% each and slowly rebuilt from scratch, it'd be a different story.
If an IC behaved like this, then it would've been the responsibility of the middle management to let them go when it started. So it'd still be on them.
And that's ignoring that issues like this have historically always started in middle management.
Also I suspect you're looking at it from an individual level: one middle manager on their own obviously cannot have enough impact to change this culture, so it's not the "fault" of any one manager. And that's the reason why the heavy-handed approach is necessary, because the bad culture has settled in. Anything any one manager may try in order to improve their ICs' work life will inevitably get soured by the next level.
VR will be huge some day. Maybe not as huge as the Metaverse hype, but huge nonetheless.
But did you expect Facebook to have any competence on making it? Even if the timing was correct, what differentiator do they have?
And then the CEO throws a world-changing amount of money at it without even an idea (because "a VR world!" isn't an idea). Did you expect any of that money not to be wasted? That's not how products are made.
The Metaverse wasn't an organizational failure. It was all Zuckerberg's incompetence; Facebook didn't even get the chance to try.
The AI started different, but it's becoming the same thing again.
But I'm curious - thinking of your past self (depending on how old you are), what would you have said about the current AI revolution 10 years ago? E.g.: the chance that fully agentic, generalised, automated software engineering would become orthodoxy? What odds would you have given it happening by 2026?
I think the field has made great advances in the last decades, but it is still so far away from a meaningful humanoid robot.
Personally I also think it doesn’t make sense - we can already produce humans at much cheaper cost than robots; they grow, repair themselves, can learn all kinds of stuff, etc.
I would rather invest in more humans than humanoid robots.
Specialised non-humanoid robots are a great idea on the other hand.
"Flying cars" https://www.reuters.com/business/aerospace-defense/joby-flie...
I really doubt this. There are too many people who suffer from motion sickness for this to pay off. 33% of the population suffers from motion sickness to varying degrees, and the current mitigations, including blowing a fan at suffering users, are an unrealistic barrier to casual usage.
There is a habituation that happens; the entire experience comes to feel far less immersive. I have used the Quest so much I don't really feel the immersion anymore at all. I had just found YouTube 360 videos of the Sphinx and Great Pyramid last night. I wish I had watched them a year ago, as it would have been so mind-blowing. It is still fun, but it is nothing like what it used to be. I don't feel like I "go" to the places anymore.
It reminds me quite a bit of the way marijuana was such a different experience the first few times vs the 500th time.
So even if you don't get sick, the magic wears off in about a month and people stop bothering. The pattern is remarkably consistent: people get bored after a month. I can say from experience that this has nothing to do with a lack of content and everything to do with the way the brain adjusts.
Put it all together and you are probably talking about a residual of more like 10% of people. That is still a lot, but I think it's only just enough to not be a death blow to mainstream use.
It's the vegetarians that constrain a shared restaurant choice.
The first company to ship lenses that auto-adjust to my eyesight will get my money. When I can use it with my current eyesight and without having to buy accessories, I'll root for VR.
I am tired of this hypocritical world.
Being willing to put $80 billion on the line is a differentiator. It can subsidize hardware, hire talent, acquire companies, etc.
There were definitely ideas beyond just "VR good". But frankly, giving some of the high-level employees he had (Bosworth and Luckey and Carmack, among others) $10 billion each to make the VR products they think should exist is something that would probably have worked.
VR is not going to be huge, and it misses the entire point of tech.
Think of something like a Bloomberg terminal. Ugly as sin, and incomprehensible to any one who hasn’t practiced using it. It also gets work done faster, and has a keyboard with multiple keys to get to menus faster.
BB terminals save calories. VR does not.
VR is cool, it is aspirational, but it is not saving experts, let alone the average person, time and energy.
I would be surprised if I even got through the interview hellscape that these companies put people through. I'm not interested in talking about algorithms and things that no dev in my entire decade-plus time in the industry ever talks about, ever. To make matters worse, nobody seems to screen developers for the things you actually should, except exceptional shops that care about quality (ironically enough!). The only thing the algo questions do is push out "older" candidates who may not remember every little nuance anymore, because... they don't have to hand-craft algorithms; every language worth its salt has sorting algorithms or lambdas (thinking of C#) to make sorting effortless.
And what's the alternative? Quizzing people on some random C# framework methods? The "I don't use algos in a day to day job" argument has been around forever, but nobody making it ever proposes a better filter.
I guess for candidates fresh out of school, you have to fall back to things they should know out of school as a proxy.
Meta's leetcode gambit includes leetcode Hards and Mediums which aren't just "remember your hash maps and trees!" They're incredibly hard to brute force under time pressure if you haven't practiced similar problems before. Now do that for every interview -- exhausting.
Alternative? Lol? System design. "Walk me through systems you've built." Have a conversation. If you can't then maybe you don't have the skill for interviewing or dare I say the skill to be an engineer.
When I interviewed at faang I was only once asked a leetcode hard question. Mediums in 99% of cases are manageable with just "remembering your hash maps and trees".
I'm in no way saying there aren't people who ask hard questions, but most of the time that is not the case. Also, how would you verify that the person can code and solve problems by only checking their past system design experience?
Not sure their stock price will continue to rise as it has in the past.
I've never known poverty in my life and I will do _anything_ to avoid it.
i make good money but not FAANG. like a quarter million a year + equity that is sometimes liquid for more.
i do it remote and for a company that isn't so brutally antagonistic as meta. remote also means i don't commute, don't get trapped in an office for 40+ hrs/week, and can spend more time during the workday on my personal life than work itself.
so i make less money in an absolute sense, but i am not in any pain or being surveilled or being bullied to work hard.
and honestly i make more money per hour worked than a meta employee. so lower salary, higher effective hourly wage.
I retired early and ended up going back to work part time. I didn't complete many of my projects, but that's not why I went back. Most of my projects were things I wanted to play with, not things I expected to finish.
Working part time is nice because of external pressure, but really, the most of the pressure is cause I'll feel bad if I disappoint the people that are letting me work with them.
I don't feel bad if I don't get my personal projects done, because nobody is going to use them anyway.
4 days a week, online at 9-10 am, offline 2-3 pm most days. Sometimes I'm working a sticky problem and stay online later. Or if I start a deploy in late afternoon, I'll stick around to finish it, etc.
Still on group chats, may or may not mute them on my day off.
But to clarify I meant “work you can be proud of” when I said “good”.
Let’s face it - most businesses don’t produce anything meaningful and just exist to realise the infinite growth fallacy of capitalism
What do dividends have to do with it?
Hmm.. I don't struggle, I enjoy it. The goal isn't to start glossy product production; it's to learn how to do it. As soon as that's obvious, the project is usually shelved. Except for the 'main line' projects, which together can result in something significant.
So this applies to even, say, mid-level developers? Wouldn't you get work assigned to you after you're hired, or do you actually have to hunt for your own projects, like you might in some consulting firms?
This is how the company works on a fundamental level.
On healthy teams, having something assigned to you (for levels under staff/6) is normal. On unhealthy teams, you're just a sitting duck and it's better to find your own work. Or else you'll be forced to work on bullshit projects with no upside.
Side note: the "they" who does the assigning is not a manager, it's another IC. The ones that go out and find their own work. That could be at any level technically, but usually staff+ because they form little political mafias.
The rest of big tech isn't much better. Big G is less stressful, but you'll see vicious and cringey behavior left and right. Hyped large startups are cults and 100% cringe. Meta is kind of the worst of both worlds though. "But they pay so well". Yeah, also: life is short.
Companies that hire a lot or hired a lot recently always have this. The 3 month people drag down the average. It isn’t necessarily due to turnover.
Not disagreeing with the overall point, I’ve just seen people say this same thing about a lot of companies and it doesn’t always mean something.
Just one suggestion: don't stop interviewing and be very observant of whatever team you land in, be ready to jump ship if there are too many red flags. Also don't trust any of the managers. Don't take anything people say at face value. Be very discerning in team matching, where you land determines everything.
You might be thinking "oh if I just work 7 days a week, I'll be safe". That's not true, it's all about where you land.
"Did you enjoy Game of Thrones? You'll love working here!"
I mean, the book is just over a year old here people. It's not like this is new or out of date stuff.
I thought of this during his various scandals at the end of the 2010s. Everything was a PR reaction for him, rather than looking inward. The best PR is not being an asshole. I wonder if he's thought about it.
Or another way, 850,000,000 hours. It took 5-15 billion human hours of work to go to the moon. They steal one moon program's worth of human time from humanity every 6 or so years (850 million hours a year over 6 years is about 5 billion hours, the low end of that estimate). At the scale they operate, we need to judge them on that scale. Mark gets paid/rewarded at that scale. He needs to be judged on the same scale. Not on 'the impact per individual'.
Meta has stolen multiple moon programs from humanity (again, I am way under-measuring) for that one change, in order to increase their billions of dollars.
https://www.quora.com/How-many-man-hours-went-into-the-Apoll...
If you use Facebook regularly, you are locked into it because unless you manage to convince your entire friend network to move to some other social media with you, you will have to "leave them behind".
By employing psychologists who figure out how to make it addictive?
It's been said before that it's interesting that Zuckerberg, for someone who made a social site, is pretty introverted. It's because he stole it, and he's always been stealing things. He did it to WhatsApp. He copied Snapchat multiple times. He thinks people are "dumb fucks", rather than "look, people shouldn't give info away, but now that I have it I'll do everything I can to keep it secure" (I DON'T like Google, but my understanding is they have far fewer data problems). That's the mark of a certain kind of person which I'll, I suppose, not name. It's insulting to the web, what he does.
80% of posts in my FB feed are groups or people to which I've subscribed or followed.
10% are interesting things it suggests outside that core, which I then follow.
10% are suggestions that I don't find interesting and which I mark as such.
No, it's just a common fallacy. If you don't like the guy, isn't "zuckerbergian" an example of helping him live rent free in people's heads?
There are a lot of people in the world who lack basic human empathy to such an extent that it is nearly impossible for them to just not be an asshole.
I don't know for sure if this applies to Mark Zuckerberg but based on all the second-hand anecdotal information I've heard about him "empathy" as he understands it is a product branding feature rather than a human emotion.
And why not? What does he have to fear? He controls the stock. He's not going to lose his company. He's not going to lose his wealth.
He's all but invulnerable as long as he doesn't do wrong (enough) by whatever government he lives under and sucks up enough that he can get away with the rest.
This is not an excuse. This is disgust. He, and most billionaires, are rotten bastards. It's not whether they're awful, it's how awful.
(As an aside, Asperger's is not diagnosed anymore; it's been folded into autism spectrum disorder.)
This latest one, releasing the NUMBER and DATE of the layoffs a month in advance without naming WHO, is a whole new level of stupid. Let’s deliberately maximize the level of anxiety in our employees and reduce their trust in us to zero.
This, too, was leaked to the press. Their plan wasn't to announce a month in advance.
No amount of hate will fix it, and no amount of tracking will hide all but the most hidden secrets, so he better get over it. In his situation, hating leakers is like Garfield hating Mondays.
I say this as someone self-employed who burned almost $1000 on tokens last month. And had a lot of fun doing it.
I think all these companies front-loading staff reductions are actively sabotaging themselves in the worst possible way in this regard.
I will say I am a bit of an outlier. I see others mostly pitching for things like small teams of "AI Champions" etc. I don't favor this because I think it will lead to dysfunctional outcomes (people trying to make the initiatives fail because they weren't "chosen" etc). So I pitch for the broad based, whole organization journey etc. But it does require a strong argument for acceptance of a slower pace of externally visible adoption.
I’m in a dreadful situation right now. Everyone on the team got a Claude account, but I’m a contractor, so not me (the only dev in a team of 25 consultants). Someone on the team assigned me a task to review a Claude skill that opens up tickets for me. I’m not even using Claude, and the official policy is no AI use for development…
Otherwise it’s been a mixed bag. The pace definitely picked up, and the things that I actually enjoyed doing (UI) it does very well. The things that are actually hard (backend logic) it sucks at, and it has painted me into a corner too many times.
It's still insane to me that Meta thought this would be a good idea, or that employees would be comfortable with it even though they claim it's only used for anonymous AI training.
It's the other way around -- they're monitoring the computers to train AI.
Meta may know that their employees will put up with it, given how depressing the job market is right now, but unhappy, cynical, resentful employees do not produce good software and innovations.
there's a real financial cost to treating devs like cage-raised livestock.
For example: the page content shows a PR with open comments, and the next action is to focus on the first comment; when a new PR with no open comments is shown, the approve/push button is the next action. That starts a reinforcement loop.
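If that description is right, the captured data would look something like this sketch (field names are illustrative guesses, not Meta's actual schema):

    from dataclasses import dataclass

    @dataclass
    class Capture:
        page_state: str   # serialized view of what was on screen
        next_action: str  # what the employee did next

    log = [
        Capture("PR view: 2 unresolved comments", "focus_first_comment"),
        Capture("PR view: 0 unresolved comments", "click_approve"),
    ]
    # Train a model to imitate the logged human action for each state;
    # every new capture reinforces the state-to-next-action mapping.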
I've never been happier, I can now build everything I've been wanting to build, really fast, with very few bugs.
I'm able to get 3x the work done. Greenfield stuff appears almost immediately.
My job is providing value to customers, not worshipping at the cathedral of software that will last forever. Nothing lasts forever.
Start treating software as ephemeral. It'll click.
This doesn't mean write low quality, unmaintainable software. It just means focus on getting stuff to your customer.
Writing in super typesafe languages with the highest level of strictness helps a lot. My AI stack is Rust and Typescript.
All jobs can generate income. What led me to follow this one job in particular was the joy of turning nothing into something, and it now feels that the most effective way to do that costs only $99.99/month, and that price needle is only going to move further upwards as capabilities increase.
That's not how economics works.
That can happen briefly with monopolies and ossified markets, but there is typically always an alternative that will seek to break in and grab market share.
Chinese tokens are pretty cheap and they'll gladly undercut US hyperscalers.
For my personal work on my own projects, Codex 5.5 because it's cheap at $20 / month and I get in about 10 prompts during my work day (would be more like 40 if I was not focused on work though)
basically, AI will produce slop if left unattended. but it's not really its fault.. it's a process failing, like not supervising the interns. using AI the Right Way(tm) is a mental workout, quite a bit slower, but extremely rewarding (ime.)
Mine are pretty robust and articulate. I tend to write very lengthy instructions and include snippets of code, file paths, struct names, etc.
Ford style assembly lines made the work of the factory workers more miserable. Partially automated cashier did the same thing.
I don't think there is any point in trying to resist automation, as the efficiency benefits are too important.
In those cases, that led to a transition period; nowadays only a small fraction of the human population works to produce food, and their job is more about planning, finance and orchestration of machine work, but many specialised jobs were lost or made miserable in the process.
IMHO any job that can be done by a machine should not be done by a human, the tricky part is going there with as little undesirable effects as possible.
Because the latter is how you get the software engineering equivalent of collapsing bridges, en masse.
In the beginning, they are irrelevant, but at some point, edge cases are everything you have to deal with.
The ones with 10 hour shifts and mandatory overtime? Yea, I don't think it's the _line_ that's making them miserable.
> Partially automated cashier did the same thing.
I've not once heard anyone in the service industry make this complaint.
> as the efficiency benefits are too important.
You can squeeze every last drop of productivity from your employees. In the short term this may even evidence profits. In the long term it only works if you hold a monopoly position.
The whole innovation was about making the jobs as simple and repetitive as possible so humans would basically work like robots.
Once you're there, having removed any agency and freedom, pushing the hours to the limits of human exhaustion is just one logical step.
Yes it was jarring for me to experience that.
So they make fewer mistakes. Not that they become zombies that you are then able to abuse.
> pushing the hours to the limits of human exhaustion is just one logical step.
There's nothing logical about ignoring consequences. Which is probably why the "union strike" even exists. It's fighting illogic with illogic.
I would have been happy writing Z80 and 68000 assembly code for an entire career.
If we look at automation beyond assemblers (e.g. compilers), even if you or I might be content without it, I think it's safe to say that the vast majority of programmers are glad they don't have to write assembly.
There's a massive restructuring going on - layoffs, reorgs - and an even more ruthless performance bar.
The internal spying is common across companies. The extent would shock most 'big company' employees.
The realization and angst IMO is more that the days of extremely great comp, job security (even if you're good) and career progression are over at Meta, unless they figure this pivot out.
What will social media become when influencers aren't a thing and "creators" are no longer a moat?
You'd imagine Mark must be sweating bullets right now, along with the rest of media.
Not so easy for kids 3-4 years out of school to make $500K-$600K.
The supergenius quanty ones go to Jane Street and the smart product-y ones jump ship to OpenAI or Anthropic (e.g. Boris) but there just aren't 20,000 high paying roles out there.
Anyone saying otherwise is kidding themselves.
My cofounder and I get to “only” pay $200/mo to build our product while the hyperscalers burning tokens like crazy stave off price rises for people like us - thanks Zuck!