Posted by mips_avatar 5 hours ago

Everyone in Seattle hates AI (jonready.com)
565 points | 526 comments
pkasting 5 hours ago|
Ex-Google here; there are many people, both currently and formerly at Google, who feel the same way as the composite coworker in the linked post.

I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.

Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I were more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)

CSMastermind 4 hours ago||
My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this; I assumed Google would internally be one of the first places to adopt these tools.
yoyohello13 3 hours ago|||
Google has good engineers. Generally I've noticed the better someone is at coding the more critical they are of AI generated code. Which makes sense, honestly: it's easier to spot flaws the more expert you are. This doesn't mean they don't use AI-generated code, just that they are more careful about when and where.
venturecruelty 3 hours ago|||
Yes, because they're more likely to understand that the computer isn't this magical black box, and that just because we've made ELIZA marginally better doesn't mean it's actually good. Anecdata, but the people I've seen be most dazzled by AI are people with little to no programming experience. They're also the ones most likely to look on computer experts with disdain.
josephg 2 hours ago|||
Well yeah. And because when an expert looks at the code chatgpt produces, the flaws are more obvious. It programs with the skill of the median programmer on GitHub. For beginners and people who do cookie-cutter work, this can be incredible, because it writes code as good as or better than they could write, fast and for free. But for experts, the code it produces is consistently worse than what we can do. At best my pride demands I fix all its flaws before shipping. More commonly, it's a waste of time to ask it to help, and I need to code the solution from scratch myself anyway.

I use it for throwaway prototypes and demos. And whenever I’m thrust into a language I don’t know that well, or to help me debug weird issues outside my area of expertise. But when I go deep on a problem, it’s often worse than useless.

ethbr1 2 hours ago|||
This is why AI is the perfect management Rorschach test.

To management (out of IC roles for long enough to lose their technical expertise), it looks perfect!

To ICs, the flaws are apparent!

So inevitably management greenlights new AI projects* and behaviors, and then everyone is in the 'This was my idea, so it can't fail' CYA scenario.

* Add in a dash of management consulting advice here, and note that management consultants' core product was already literally 'something that looks plausible enough to make execs spend money on it'

torginus 1 hour ago||||
My experience (with ChatGPT 5.1 as of late) is that the AI follows a problem->solution internal logic and doesn't stop to think about how to structure its code.

If you ask for a CRUD API endpoint, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for each use case.

A dev wouldn't do this; they would figure out the common parts of the code, pull them out into helpers, and leave as little duplicated code as possible, something like the sketch below.
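
For illustration, here's that refactor in an Express-style app. Everything in it is hypothetical: the in-memory store, the resource names, and the mountCrud helper are made up, and a real version would add get/update/delete and error handling.

  // One generic helper instead of five near-identical endpoint blocks.
  import express from "express";

  const app = express();
  app.use(express.json());

  // Hypothetical in-memory store; stands in for whatever persistence layer you use.
  function inMemoryStore() {
    const items = new Map<string, object>();
    let nextId = 1;
    return {
      list: () => [...items.values()],
      create: (body: object) => {
        const id = String(nextId++);
        const item = { id, ...body };
        items.set(id, item);
        return item;
      },
    };
  }

  // The common CRUD shape, extracted once.
  function mountCrud(name: string, store = inMemoryStore()) {
    app.get(`/${name}`, (_req, res) => res.json(store.list()));
    app.post(`/${name}`, (req, res) => res.status(201).json(store.create(req.body)));
  }

  // Adding a fifth resource is now one line, not a fifth modified copy.
  for (const name of ["users", "posts", "comments", "tags", "likes"]) mountCrud(name);

  app.listen(3000);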

I feel like the AI has a strong bias towards adding things rather than removing them. The most obvious case is CSS: when I try to do some styling, it gets 90% of the way there, but there's almost always something that's not quite right.

Then I tell the AI to fix a style, since that div is getting clipped or not correctly centered etc.

It almost always keeps adding properties, and after 2-3 tries and an incredibly bloated style, I delete the thing, take a step back, and think logically about how to properly lay this out with flexbox.
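
(For what it's worth, the fix I usually land on is a couple of declarations on the container rather than a pile on the child. A generic sketch in DOM terms; the .card selector is made up:)

  // Hypothetical sketch: center with flexbox on the parent instead of
  // stacking ever more properties on the clipped/misaligned child.
  const container = document.querySelector<HTMLElement>(".card")!;
  container.style.display = "flex";
  container.style.alignItems = "center";      // vertical centering
  container.style.justifyContent = "center";  // horizontal centering
  container.style.minWidth = "0";             // let flex children shrink instead of clipping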

BurningFrog 1 hour ago||
Bad code is only really bad if it needs to be maintained.

If your AI reliably generates working code from a detailed prompt, the prompt is now the source that needs to be maintained. There is no important reason to even look at the generated code.

panarky 1 hour ago|||
> It programs with the skill of the median programmer on GitHub

This is a common intuition but it's provably false.

The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.

Eighteen months ago GPT-4 was outperforming 85% of human participants in coding contests. And people who participate in coding contests are already well above the median skill level on Github.

And capability has gone way up in the last 18 months.

usefulcat 1 hour ago|||
The best argument I've yet heard against the effectiveness of AI tools for SW dev is the absence of an explosion of shovelware over the past 1-2 years.

https://mikelovesrobots.substack.com/p/wheres-the-shovelware...

Basically, if the tools are even half as good as some proponents claim, wouldn't you expect at least a significant increase in simple games on Steam or apps in app stores over that time frame? But we're not seeing that.

Miraste 9 minutes ago||
Interesting approach. I can think of one more explanation the author didn't consider: what if software development time wasn't the bottleneck to what he analyzed? The chart for Google Play app submissions, for example, goes down because Google made it much more difficult to publish apps on their store in ways unrelated to software quality. In that case, it wouldn't matter whether AI tools could write a billion production-ready apps, because the limiting factor is Google's submission requirements.
sethherr 1 hour ago||||
Trying to figure out how to align this with my experiences (which match the parent's comment), I have an idea:

Coding contests are not like my job at all.

My job is taking fuzzy human things and writing code that solves them. Frankly, AI isn't good at closing open issues on open source projects either.

incrudible 1 hour ago|||
> The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.

It does, by default. Try asking ChatGPT to implement quicksort in JavaScript; the result will be dogshit. Of course it can do better if you guide it, but that implies you can recognize dogshit, or at least that you use some prompting technique that veers it off the beaten path.
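
(To make that concrete: the default output tends to look like the sketch below. This is my illustration, not a quote from any model; it's a correct sort, but it copies the array at every level of recursion and goes quadratic on already-sorted input because the pivot is the first element.)

  // Typical "median GitHub" quicksort: works, but allocates fresh arrays at
  // every level and degrades to O(n^2) when the input is already sorted.
  function quicksort(xs: number[]): number[] {
    if (xs.length <= 1) return xs;
    const [pivot, ...rest] = xs;
    return [
      ...quicksort(rest.filter((x) => x < pivot)),
      pivot,
      ...quicksort(rest.filter((x) => x >= pivot)),
    ];
  }

  console.log(quicksort([5, 3, 8, 1])); // [1, 3, 5, 8]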

teaearlgraycold 1 hour ago||||
I saw an ad for Lovable. The very first thing I noticed was an exchange where the promoter asked the AI to fix a horizontal scroll bar that was present on his product listing page. This is a common issue with web development, especially so for beginners. The AI’s solution? Hide overflow on the X axis. Probably the most common incorrect solution used by new programmers.

But to the untrained eye the AI did everything correctly.
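
(For contrast, the boring correct approach starts with finding the element that actually overflows, rather than hiding the symptom. A generic console sketch, not from the ad:)

  // Find what's actually poking past the viewport, instead of papering over
  // the symptom with `overflow-x: hidden`.
  const viewportWidth = document.documentElement.clientWidth;
  for (const el of Array.from(document.querySelectorAll<HTMLElement>("*"))) {
    const { left, right } = el.getBoundingClientRect();
    if (right > viewportWidth || left < 0) {
      console.log(el.tagName, el.className, Math.round(right - viewportWidth));
    }
  }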

rhetocj23 1 hour ago||||
Yes. The people who are amazed by AI were never that good at a particular subject area in the first place; I don't care who you are, you were not good enough. How do I know this? Well, I know economics, corporate finance, accounting, et al. very deeply. I've engaged with LLMs for years now and they still cannot get below the surface level, and they are not improving past this point.

It's easy to recall information, but something entirely different to do something with that information. Which is what those subject areas are all about: taking something (like a theory) and applying it in a disciplined manner given the context.

That's not to diminish what LLMs can do. But let's get real.

dingnuts 3 hours ago|||
[dead]
codethief 28 minutes ago||||
> Generally I've noticed the better someone is at coding the more critical they are of AI generated code.

I'm having déjà vu of yesterday's discussion: https://news.ycombinator.com/item?id=46126988

RajT88 1 hour ago||||
I am not a great (some would argue, not even good) programmer, and I find a lot of issues with LLM generated code. Even Claude pro does really weird dumb stuff.
CyberDildonics 1 hour ago||||
It starts to make you realize how unaware many people must be of what their programs are doing, for them to accept AI stuff wholesale.
scotty79 1 hour ago||||
It works both ways. If you are good, it's also easier to spot the moments of brilliance from an AI agent, when it saves you hours of googling, reading docs, and trial and error while you pour yourself a cup of coffee and ponder the next steps. You can spot when a single tab press saved you minutes.
newAccount2025 6 minutes ago||
Yes. Love it for quick explorations of available options, reviewing my work, having it propose tests, getting its help with debugging, and all kinds of general subject matter questions. I don’t trust it to write anything important but it can help with a sketch.
throwaway29812 1 hour ago|||
[dead]
gipp 3 hours ago||||
Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more about incremental, carefully measured changes to mature, complex software stacks, done within the Google ecosystem, which diverges heavily from the OSS-focused world of startups, where most training data comes from.
karmasimida 3 hours ago|||
That is the problem.

AI is optimized to solve a problem no matter what it takes. It will try to solve one problem by creating 10 more.

I think long-term agentic AI is just snake oil at this point. AI works best if you can segment your task into 5-10 minute chunks, including the AI generation time, correction time, and engineer review time. To put it another way, a 10-minute sync with a human is necessary, otherwise it will go astray.

Then it just turns software engineering into a bothersome supervisor job. Yes, I typed less, but I didn't feel the thrill of doing the work.

citizenpaul 1 hour ago||
> it just turns software engineering into a bothersome supervisor job.

I'm pretty sure this is the entire C-level enthusiasm for AI in a nutshell. Until AI, SWE resisted being mashed into a replaceable-cog job that they don't have to think or care about. AI is the magic beans that are just tantalizingly out of reach, and boy do they want it.

spwa4 1 hour ago||
But every version of AI for almost a century had this property, right down to the first vocoders that were going to replace entire call centers, and the convolutional AI that was going to give us self-driving cars. Yes, a century: vocoders were 1930s technology, though essentially all they could do was read the time aloud.

... except they didn't. In fact most AI tech was good for a nice demo and little else.

In some cases, really unfairly. For instance, convnet map matching failed not because it doesn't work well, but because you can't explain to humans when it won't work. It's unpredictable, like a human. If you ask a human to map a building in heavy fog, they may come back with "sorry". SLAM with lidar is "better", except no, it's a LOT worse; but when it fails, it's very clear why, because it's a very visual algorithm.

People expect AIs to replace humans, but that doesn't work, because people also demand that AIs never say no and never fail, like the Star Trek computer (the only problem the Star Trek computer ever has is that it is misunderstood or follows policy too well). If you have a human delivery person, occasionally they will radically modify the process, or refuse to deliver. No CEO is ever going to allow an AI drone to change the process, and no CEO will ever accept "no" from an AI drone. More generally, no business person seems to ever accept a 99% AI solution, and all AI solutions are 99%, or actually mostly less.

AI winters. I get the impression another one is coming, and I can feel it's going to be a cold one. But in 10 years, LLMs will be in a lot of stuff, as after every other AI winter. A lot of stuff ... but a lot less than CEOs are declaring it will be in today.

groby_b 2 hours ago||||
There are plenty of good tasks left, but they're often one-off/internal tooling.

Last one at work: "Hey, here are the symptoms for a bug, they appeared in <release XYZ> - go figure out the CL range and which 10 CLs I should inspect first to see if they're the cause"

(Well suited to AI, because worst case I've looked at 10 CLs in vain, and best case it saved me from manually scanning through several thousand CLs; the EV is net positive.)

It works for code generation as well, but not in a "just do my job" way, more in a "find which haystack the needle is in, and what the rough shape of the new needle is". Blind vibecoding is a non-starter. But... it's a non-starter for greenfields too, it's just that the FO of FAFO is a bit more delayed.

ethbr1 2 hours ago||
My internal mnemonic for targeting AI correctly is 'It's easier to change a problem into something AI is good at, than it is to change AI into something that fits every problem.'

But unfortunately the nuances in the former require understanding strengths and weaknesses of current AI systems, which is a conversation the industry doesn't want to have while it's still riding the froth of a hype cycle.

Aka 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'

groby_b 2 hours ago||
> 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'

I see we've met the same product people :)

ethbr1 1 hour ago||
I had a VP of a revenue cycle team tell me that his expectation was that they could fling their spreadsheets and Word docs on how to do calculations at an AI powered vendor, and AI would be able to (and I direct quote) "just figure it all out."

That's when I realized how far down the rabbit hole marketing to non-technical folks on this was.

kccqzy 2 hours ago||||
Yeah, but Google won't expect you to use AI tools developed outside Google and trained primarily on OSS code. It would expect you to use the internal Google AI tools trained on google3, no?
almostdeadguy 1 hour ago|||
I think it's a fair point that Google has more stakeholders with a serious investment in some flubbed AI-generated code not tanking their share value, but I'm not sure the rest of it is all that different from what an engineer at $SOME_STARTUP does after the first ~8 months the company is around. Maybe some folks throwing shit at a wall to find PMF are really getting a lot out of this, but most of us are maintaining and augmenting something we don't want to break.
3vidence 55 minutes ago||||
Googler, opinion is my own.

Working in our mega-huge codebase, with lots of custom tooling and bleeding-edge stuff, hasn't been the best fit for AI-generated code compared to most companies.

I do think AI as a rubber ducky / research assistant has been helpful overall for me as a SWE.

agumonkey 3 hours ago|||
So I would love to be a fly on the wall in their office and hear all their convos.
icedchai 2 hours ago|||
My experience is the productivity gains are negative to neutral. Someone else basically wrote that the total "work" was simply being moved from one bucket to another. (I can't find the original link.)

Example: you might spend less time on initial development, but more time on code review and rework. That has been my personal experience.

hectdev 4 hours ago|||
It's the latest tech holy war: Tabs vs Spaces, but more existential. I'm usually anti-hype, and I've been convinced of AI's usefulness over and over when it comes to coding. Whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that; online I get a lot of pushback despite having tangible examples of how it has been useful.
suprjami 4 hours ago|||
I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.

I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?

AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.

dwaltrip 4 hours ago|||
You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.

Rinse and repeat for many "one-off" tasks.

It's not going away, you need to learn how to use it. shrugs shoulders

bigger_cheese 2 hours ago|||
The issue is people trying to use these AI tools to investigate complex data, not the throwaway-UI part.

I work as the non-software kind of engineer at an industrial plant. There is starting to emerge a trend of people who just blindly trust the output of AI chat sessions without understanding what the chatbot is echoing at them, which is wasteful of their time and in some cases my time.

This is not new; in the past I have experienced engineers who use (abuse) statistics/regression tools etc. without understanding what the output was telling them. But it is getting worse now.

It is not uncommon to hear something like: "Oh I investigated that problem and this particular issue we experienced was because of reasons x, y and z."

Then when you push back, because what they've said sounds highly unlikely, it boils down to: "I don't know, that is what the AI told me."

Then if they are sufficiently optimistic, they'll go back and prompt it with "please supply evidence for your conclusion" or some similar prompt, and it will supply paragraphs of plausible-sounding text, but when you dig into what it is saying there are inconsistencies or made-up citations. I've seen it say things that were straight-up incorrect and went against the laws of thermodynamics, for example.

It has become the new "I threw the kitchen sink into a multivariate regression and X emerged as significant - therefore we should address x"

I'm not a complete skeptic; I think AI has some value. For example, if you use it as a more powerful search engine, asking something like "What are some suggested techniques for investigating x" or "What are the limitations of Method Y", it can point you to the right place and assist you with research; it might find papers from other fields, or similar. But it is not something you should be relying on to do all of the research for you.

area51org 5 minutes ago||||
One thing people often don't realize or ignore: these LLMs are trained on the internet, the entire internet.

There's a shit-ton of bad and inefficient code on the internet. Lots of it. And it was used to train these LLMs as much as the good code.

In other words, the LLMs are great if you're OK with mediocrity at best. Mediocrity is occasionally good enough, but it can spell death for a company when key parts of it are mediocre.

I'm afraid a lot of the executives who fantasize about replacing humans with AI are going to have to learn this the hard way.

icedchai 2 hours ago||||
I have found value in one-off tasks. I forget the exact situation, but I wanted to do some data transformation, something that would normally take me a half hour of awk/sed/bash or Python scripting. AI spit it out right away.
dwaltrip 3 hours ago||||
It’s kind of fun watching this comment go up and down :)

There’s so much evidence out there of people getting real value from the tools.

Some questions you can ask yourself are “why doesn’t it work for me?” and “what can I do differently?”.

Be curious, not dogmatic. Ignore the hype, find people doing real work.

mattgreenrocks 2 hours ago|||
I'm an AI skeptic. I like seeing what UIs it spits out, though, which nicely defeats the fear of the blank page staring into my soul. I don't even use the code, just take inspiration from the layouts.
scotty79 1 hour ago||
Yeah, it helps a lot with taking the first steps, overcoming writer's block, making you put into words what you'd like to have built.

At some point you might take over and ask it for the specific refactors you'd do yourself but are too lazy to. Or even toss it all away and start fresh with better understanding, by yourself or again with an agent.

SpicyLemonZest 3 hours ago||||
They're good questions! The problem is that I've tried to talk to the people who are getting real value from it, and often the answer ends up being that the value is not as real as they think. One guy gave an excited presentation about how AI let him write 7k LOC per day, expounded for an entire session about how the rest of us should follow in his shoes, and then clarified only in Q&A that reviewers couldn't keep up so he exempted himself from code review.
adriand 1 hour ago|||
I'm starting to believe there are situations where human code review is genuinely not necessary. Here's a concrete example of something that's been blowing my mind. I have 25 years of professional coding experience, but it's almost all web, with a few years of iOS in the Objective-C era. I'm also an amateur electronic musician. A couple of weeks ago I was thinking about this plugin that I used to love until the company that made it went under. I've long considered trying to make a replacement, but I don't know the first thing about DSP or C++.

You know where this is going. I asked Claude if audio plugins were well represented in its training data, it said yes, off I went. I can’t review the code because I lack the expertise. It’s all C++ with a lot of math and the only math I’ve needed since college is addition and calculating percentages. However, I can have intelligent discussions about design and architecture and music UX. That’s been enough to get me a functional plugin that already does more in some respects than the original. I am (we are?) making it steadily more performant. It has only crashed twice and each time I just pasted the dump into Claude and it fixed the root cause.

Long story short: if you can verify the outcome, do you need to review the code? It helps that no one dies or gets underpaid if my audio plugin crashes. But still, you can’t tell me this isn’t remarkable. I think it’s clear there will be a massive proliferation of niche software.

deltaburnt 11 minutes ago|||
I don’t think I’ve ever seen someone seriously argue that personal throwaway projects need thorough code reviews of their vibe code. The problem comes in when I’m maintaining a 20 year old code base used by anywhere from 1M to 1B users.

In other words you can’t vibe code in an environment where evaluating “does this code work” is an existential question. This is the case where 7k LOC/day becomes terrifying.

Until we get much better at automatically proving correctness of programs we will need review.

SpicyLemonZest 1 hour ago|||
I agree that's remarkable, and I do expect a proliferation of LLM-assisted development in similar niches where verification is easy and correctness isn't critical. But I don't think most software developers today are in such niches.
stocksinsmocks 1 hour ago|||
Most enterprise software I use has serious defects. Professional CAD software for infrastructure is awful. Many products are just incremental improvements piled upon software from the 1990s. Bugs last for decades because nobody can understand how the program works, so they just work on one more little VBA plugin at a time. Meanwhile, the capabilities of these programs have fallen completely behind game studios with no budget and no business plan.

Where are the results of this human excellence and code quality process? There are tens of thousands of new CVEs every year from code hand-crafted by artisans on their very own MacBooks. How? Perhaps there is the tiny possibility that code quality is mostly an aesthetic judgment that nobody can really define, and maybe this effort is mostly spent on vague concepts like maintainability, or on preferential decisions, instead of the basics: does it meet the specification? Is the performance getting better or worse?

This is the game changer for me: I don't have to evaluate tens or hundreds of market options that fit my problem. I tell the machine to solve it, and if it works, then I’m happy. If it doesn’t I throw it away. All in a few minutes and for a few cents. Code is going the way of the disposable diaper and, if you ever washed a cloth diaper, you will know that's a good thing.

SpicyLemonZest 1 hour ago||
> I tell the machine to solve it, and if it works, then I’m happy. If it doesn’t I throw it away.

What happens when it seems to work, and you walk away happy, but discover three months later that your circular components don't line up because the LLM-written CAD software used an over-rounded PI = 3.14? I don't work in industrial design, but I faced a somewhat similar issue where an LLM-written component looked fine to everyone until final integration forced us to rewrite it almost entirely.

samdoesnothing 3 hours ago||||
Most people don't have a problem with using genAI for stuff like throwaway UIs. That's not even remotely relevant to the criticisms. People reject having it forced down their throats by companies who are desperate to make us totally reliant on it to justify their insane investments. And people reject the evangelists who claim that it's going to replace developers because it can spit out mostly-working boilerplate.
pydry 1 hour ago|||
It's like watching somebody argue that code linting is going to change the face of the world and the rebuttals to the skeptics are arguing that akshually code linting is quite useful....
xorcist 2 hours ago||||
Perhaps. But does it matter? There are a million tools to investigate complex data already. Are you suggesting it is more useful to develop a new tool from scratch, using LLM-type tools, than it is to use a mature tool for data analysis?

If you don't know how to analyze data, and flat out refuse to invest in learning the skill, then I guess that could be really useful. Those users are likely the ones most enthusiastic about AI. But are those users close to as productive as someone who learns a mature tool? Not even close.

Lots of people appreciate an LLM generating boilerplate code and establishing frameworks for their data structures. But that's code that probably shouldn't be there in the first place. Vibe coding a game can be done impressively quickly, but have you tried using a game construction kit? That's much faster still.

thewebguyd 2 hours ago||||
> You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.

I think the throwaway part is important here and people are missing it, particularly for non-programmers.

There are a lot of roles in the business world that would make great use of ephemeral little apps like this: do a specific task, then throw the app away. Usually just running locally on someone's machine, or at most shared with a couple other folks in your department.

Code doesn't have to be good, hell it doesn't even have to be secure, and certainly doesn't need to look pretty. It just needs to work.

There's not enough engineering staff or time to turn every manager's pet excel sheet project into a temporary app, so LLMs make perfect sense here.

I'd go as far as to say more effort should be put into ephemeral apps as a use case for LLMs, over trying to use them in areas where a more permanent, high-quality solution is needed.

Improve them for non-developers.

gigel82 53 minutes ago||||
Except when your AI psychosis PM / manager sees your throwaway vibe-coded garbage and demands it gets shipped to customers.

It's infinitely worse when your PM / manager vibe-codes some disgusting garbage, sees that it kind of looks like a real thing that solves about half of the requirements (badly) and demands engineers ship that and "fix the few remaining bugs later".

uhkbuteuter 3 hours ago|||
[dead]
hectdev 4 hours ago||||
I would say it is like that. No one HAS to use AI. But the shared goal is to get a change to the codebase to achieve a desired outcome. Some will outsource a significant part of that to AI, some won't.

And it's tricky, because I'm trying not to appeal to emotion despite being fascinated with how this tool has enabled me to do things in a short amount of time that would have taken me weeks of grinding, and with how it improves my communication with stakeholders. That feels world-changing. Specifically, my world and the day-to-day role I play when it comes to getting things done.

I think it is fine that it fell short of your expectations. It often does for me as well, but when it gets me 80% of the way there in less than a day's work, my mind is blown. It's an imperfect tool and, I'm sorry for saying this, but so are we. Treat its imperfections the same way you would a junior developer's: feedback, reframing, restrictions, and iteration.

Freak_NL 4 hours ago|||
> No one HAS to use AI.

Well… That's no longer true, is it?

My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.

And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them, because online contact means haggling with an AI chatbot first to finally give up and shunt you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not exactly as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.

AI is already unavoidable.

palmotea 3 hours ago|||
> My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.

My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.

ryandrake 3 hours ago|||
Nothing says "this product is useful" quite like forcing people to use it and punishing people who don't. If it was that good, there'd be organic demand to use it. People would be begging to use it, going around their boss's back to use it.

The fact that companies have to force you to use it with quotas and threats is damning.

groby_b 2 hours ago|||
Yeah. Well. There are companies that require TPS reports, too.

It's mostly a sign leadership has lost reasoning capability if it's mandatory.

But no, reporting isn't necessarily the problem. There are plenty of places that use reporting to drive a conversation on what's broken, and why it's broken for their workflow, and then use that to drive improvement.

It's only a problem if the leadership stance is "Haha! We found underpants gnome step 2! Make underpants number go up, and we are geniuses". Sadly not as rare as one would hope, but still stupid.

oarsinsync 2 hours ago||||
> And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them, because online contact means haggling with an AI chatbot first to finally give up and shunt you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not exactly as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person

All of this predates LLMs (what “AI” means today) becoming a useful product. All of this happened already with previous generations of “AI”.

It was just even shittier than the version we have today.

pxc 1 hour ago||
It was also shittier than the version we had before it (human receptionists).

This is what I always think of when I imagine how AI will change the world and daily life. Automation doesn't have to be better (for the customer, for the person using it, for society) in order to push out the alternatives. If the automation is cheap enough, it can be worse for everyone and still change everything. Those are the niches I'm most certain are here to stay, because sometimes it hardly matters whether it's any good.

hectdev 3 hours ago||||
It isn't a universal thing. I have no doubt there is a job out there where that isn't a requirement. I think the issue is that C-level folks are seeing how much more productive someone might be and making it a demand. That to me is the wrong approach. If you demonstrate and build interest, the adoption will happen.
kentm 3 hours ago||||
> where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.

If you're lucky. I've had LLMs that just repeatedly hang up on me when they obviously hit a dead end.

groby_b 2 hours ago|||
As opposed to reaching, say, somebody in an offshored call center with an utterly undecipherable accent reading a script at you? Without any room for deviation?

AI's not exactly a step down from that.

moduspol 4 hours ago|||
> But the shared goal is to get a change to the codebase to achieve a desired outcome.

I'd argue that's not true. It's more of a stated goal. The actual goal is to achieve the desired outcome in a way that has manageable, understood side effects, and that can be maintained and built upon over time by all capable team members.

The difference between what business folks see as the "output" of software developers (code) and what (good) software developers actually deliver over time is significant. AI can definitely do the former. The latter is less clear. This is one of the fundamental disconnects in discussions about AI in software development.

hectdev 3 hours ago||
In my personal use case, I work at a company that has SO MUCH process and documentation for coding standards. I made an AI agent that knows all of it and used it to update legacy code to the new standard in a day, something that would have taken weeks if not more. If your desire is manageable code, make that a requirement.

I'm going to say this next thing as someone with a lot of negative bias about corporations. I was laid off from Twitter when Elon bought the company and at a second company that was hemorrhaging users.

Our job isn't to write code, it's to make the machine do the thing. All the effort for clean, manageable, etc. is purely in the interest of the programmer, but at the end of the day, launching the feature that pulls in money is the point.

moduspol 53 minutes ago|||
It's not just about coding standards. It's about, over time, having a team of people with a built-up set of knowledge about how things work and how they're expected to work. You don't get that by vibe coding and reviewing numerous PRs written by other people (or chatbots).

If everyone on your team is doing that, it's not long before huge chunks of your codebase are conceptually like stuff that was written a long time ago by people who left the company. Except those people may have actually known what they were doing. The AI chatbots are generating stuff that seems to plausibly work well enough based on however they were prompted.

There are intangible parts of software development that are difficult to measure but incredibly valuable beyond the code itself.

> Our job isn't to write code, it's to make the machine do the thing. All the effort for clean, manageable, etc. is purely in the interest of the programmer, but at the end of the day, launching the feature that pulls in money is the point.

This could be the vibe coder mantra. And it's true on day one. Once you've got reasonably complex software being maintained by one or more teams of developers who all need to be able to fix bugs and add features without breaking things, it's not quite as simple as "make the machine do the thing."

SpicyLemonZest 3 hours ago|||
How did you verify that your AI agent performed the update correctly? I've experienced a number of cases where an AI agent made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it.
spopejoy 1 hour ago|||
> made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it

Maybe I'm not understanding your point, but this is the kind of thing that happens on software teams all the time and is one of those "that's why they call it work" realities of the job.

If something "seems right/passed review/fell apart", then that's the reviewer's fault, right? Which happens, all the time! Reviewers tend to fall back on tropes, on "are there tests? OK, great", and on whatever their hobbyhorses are, ignoring everything else. It's OK because "at least it's getting reviewed", and the sausage gets made.

If AI slashed the amount of time to get a solution past review, it buys you time to retroactively fix too, and a good attitude when you tell it that PR 1234 is why we're in this mess.

hectdev 3 hours ago|||
Unit tests, manual testing the final product, PR with two approvals needed (and one was from the most anal retentive reviewer at the company who is heavily invested in the changes I made), and QA.
Loughla 4 hours ago|||
>AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.

I mean this in sincerity, and not at all snarky, but - have you considered that you haven't used the tools correctly or effectively? I find that I can get what I need from chatbots (and refuse to call them AI until we have general AI just to be contrary) if I spend a couple of minutes considering constraints and being careful with my prompt language.

When I've come across people in my real life who say they get no value from chatbots, it's because they're asking poorly formed questions, or haven't thought through the problem entirely. Working with chatbots is like working with a very bright lab puppy. They're willing to do whatever you want, but they'll definitely piss on the floor unless you tell them not to.

Or am I entirely off base with your experience?

dwoldrich 4 hours ago|||
It would be helpful if you would relate your own bad experiences and how you overcame them. Leading off with "do it better" isn't very instructive. Unfortunately there's no useful training for much of anything in our industry, much less AI.

I prefer to use an LLM as a sock puppet to filter out implausible options in my problem space and to help me recall how to do boilerplate things. Like you, I think, I tend to write multi-paragraph prompts, repeating myself and calling back to every aspect, to continuously home in on the true subject I am interested in.

I don't trust LLMs enough to operate on my behalf agentically yet. And LLMs are uncreative and hallucinatory as heck whenever they stray into novel territory, which makes them a dangerous tool.

kentm 3 hours ago||||
> have you considered that you haven't used the tools correctly or effectively?

The problem is that this comes off just as tone-deaf as "you're holding it wrong." In my experience, when people promote AI, it's sold as just having a regular conversation and then the AI does the thing. And when that doesn't work, the promoter goes into system prompts, MCP, agent files, etc., and the entire workflows required to get it to do the correct thing. It ends up feeling like you're being lied to, even if there's some benefit out there.

There's also the fact that not all programming workflows are the same. I've found some areas where AI works well, but it does not for a lot of my work. It's pretty spotty on anything that wouldn't have shown up in a simple Google search back before search was enshittified.

mattgreenrocks 2 hours ago|||
I suspect AI appeals very strongly to a certain personality type who revels in all the details of getting a proper agentic coding environment bootstrapped for AI to run amok in, and then supervises/guides the results.

Then there’s people like me, who you’d probably term as an old soul, who looks at all that and says, “I have to change my workflow, my environment, and babysit it? It is faster to simply just do the work.” My relationship with tech is I like using as little as possible, and what I use needs to be predictable and do something for me. AI doesn’t always work for me.

Lerc 2 hours ago|||
> In my experience, when people promote AI, it's sold as just having a regular conversation and then the AI does the thing.

This is almost the complete opposite of my experience. I hear expressions of improvement and optimism about the future, but almost all of the discussion from people productively using AI is about identifying the limits and seeing what benefits you can find within them.

They are not useless, and they are also not a panacea. It feels like a lot of people consider those the only available options.

jandrewrogers 2 hours ago||||
AI is okay (not great) at generating low- to mid-skill code. If you are working in a high-skill software domain that requires pervasive state-of-the-art or first-principles implementation then AI produces consistently terrible code. It frequently is flatly incorrect about esoteric technical details that really matter.

It can't reason from first principles and there isn't training data for a lot of state-of-the-art computer science and code implementations. Nothing you can prompt will make it produce non-naive output because it doesn't have that capability.

AI works for a lot of things because, if we are honest, AI generated slop is replacing human generated slop. But not all software is slop and there are software domains where slop is not even an option.

uhkbuteuter 3 hours ago|||
[dead]
gaigalas 44 minutes ago||||
I think it's more a continuation of IDE versus pure editor.

More precisely:

On one side, it's the "tools that build up critical mass" philosophy. AI firmly resides here.

On the other, it's the "all you need is brain and plain text" philosophy. We don't see much AI in this camp.

One thing I learned is that you should never underestimate the "all you need is brain and plain text" camp. That philosophy has survived many, many "fatal blows" and has come out on top several times. It has one unique feature: resilience to bloat, something the current smart-tools camp is obviously overlooking.

sulicat 4 hours ago||||
I'm probably one of the people who would say AI (at least LLMs) isn't all it's cracked up to be, and even I have examples where it has been useful to me.

I think the feeling stems from the exaggeration of the value it provides combined with a large number of internal corporate LLMs being absolute trash.

The overvaluation's effects are visible everywhere: the stock market, the price of RAM, the cost of energy, the IP theft issues, etc. AI has taken over, and yet it still feels like just a really good fuzzy search. Like, yeah, I can search something 10x faster than before, but I might get a bad answer every now and then.

Yeah its been useful (so have many other things). No it's not worth building trillion dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.

rr808 3 hours ago||
Lol, you made me realize my power bill has gone up but I didn't get a pay raise for my increased productivity.
postalrat 2 hours ago||||
I see LLMs as kinda the new hotness in IDEs. And some people will use vi forever.
anthem2025 4 hours ago||||
[dead]
jimbokun 4 hours ago||||
Most of the people against “AI” are not against it because they think it doesn’t work.

It’s because they know it works better every day and the people controlling it are gleefully fucking over the rest of the world because they can.

The plainly stated goal is TO ELIMINATE ALL HUMAN EMPLOYEES, with no plan for how those people will feed, clothe, or house themselves.

The reactions the author was getting were the reactions of a horse talking to someone happily working for the glue factory.

IAmBroom 4 hours ago||
I don't think you're qualified to speak for most of the people against AI.
throwout4110 4 hours ago|||
Right, this is what I can't quite understand. A lot of HN folks appear to have been burned by e.g. horrible corporate or business ideas from non-technical people who don't understand AI; that is completely understandable. What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and everything is AI slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago. The progress has moved astoundingly fast, and the sheer amount of capital and competition and pressure means the train is not slowing down. Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
mjr00 4 hours ago|||
> Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…

... but maybe not in the way that these CEOs had hoped.[0]

Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug reports instead of a clear and concise explanation.

I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.

[0] https://github.com/ocaml/ocaml/pull/14369

bigstrat2003 4 hours ago||||
> What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and everything is AI slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago.

I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it those claims are completely baseless.

ben_w 4 hours ago|||
> I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1.

One of the tests I sometimes do of LLMs is a geometry puzzle:

  You're on the equator facing south. You move forward 10,000 km along the surface of the Earth. You then rotate 90° clockwise. You move another 10,000 km forward along the surface of the Earth. Rotate another 90° clockwise, then move another 10,000 km forward along the surface of the Earth.

  Where are you now, and what direction are you facing?
They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make.)
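
(For reference, my intended answer, worked on an idealized sphere of 40,000 km circumference so that each leg is a quarter of a great circle:)

  % Each 10,000 km leg = 90° of arc on a 40,000 km great circle.
  \begin{aligned}
  \text{Leg 1: } & (0^{\circ},\ 0^{\circ}\mathrm{E}),\ \text{facing south}
      \;\xrightarrow{90^{\circ}}\; \text{South Pole} \\
  \text{Leg 2: } & \text{turn clockwise, depart along the } 90^{\circ}\mathrm{W} \text{ meridian}
      \;\xrightarrow{90^{\circ}}\; (0^{\circ},\ 90^{\circ}\mathrm{W}),\ \text{facing north} \\
  \text{Leg 3: } & \text{turn clockwise to face east}
      \;\xrightarrow{90^{\circ}\text{ east along the equator}}\; (0^{\circ},\ 0^{\circ}\mathrm{E}),\ \text{facing east}
  \end{aligned}

That is, you end up back where you started, facing east.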

Anecdotes are of course a bad way to study this kind of thing.

Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiad.

Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.

Smaug123 1 hour ago||
Out of interest, was your intended answer "where you started, facing east"?

FWIW, Claude Opus 4.5 gets this right for me, assuming that is the intended answer. On request, it also gave me a Mathematica program which (after I fixed some trivial exceptions due to errors in units) informs me that using the ITRF00 datum the actual answer is 0.0177593 degrees north and 0.168379 west of where you started (about 11.7 miles away from the starting point) and your rotation is 89.98 degrees rather than 90.

(ChatGPT 5.1 Thinking, for me, get the wrong answer because it correctly gets near the South Pole and then follows a line of latitude 200 times round the South Pole for the second leg, which strikes me as a flatly incorrect interpretation of the words "move forward along the surface of the earth". Was that the "usual error they all used to make"?)

hectdev 4 hours ago||||
This fascinates me. Just observing, but because it hasn't worked for you, everyone else must be lying? (I'm assuming that's what you mean by baseless.)

How does that bridge get built? I can provide tangible real-life examples, but I've found pushback on that in other online conversations.

ggerni 4 hours ago||
[flagged]
shepherdjerred 3 hours ago|||
What have you tried? How much time have you spent? Using AI is its own skill set, separate from programming.
aisengard 4 hours ago||||
There is zero guarantee that these tools will continue to be there. Those of us who are skeptical of their value may find them somewhat useful, but are quite wary of ripping up the workflows we've built for ourselves over decade(s)(+) in favor of something that might be 10-20% more useful but could be taken away, charge greater fees, or literally collapse in functionality at any moment, leaving us suddenly crippled. I'll keep the thing I know works, that I know will always be there (because it's open source, etc.), even if it means I'm slightly less productive over the next X amount of time.
throwout4110 4 hours ago||
What plausible scenario do you imagine where your tools would be taken away or would “collapse in functionality”? I would say Claude right now has probably produced worse code and wasted more time than if I had coded things myself, but that's because this is like the first few hundred days of this. Open-weight models are also worse, but they will never go away and they improve steadily as well. I am all for people doing whatever works for them; I just don't get the negativity or the skepticism when you look at the progress over what has been almost zero time. It's crappy now in many respects, but it's like saying "my car is slow" one millisecond after I floor the gas pedal.
takluyver 3 hours ago|||
My understanding is that all the big AI companies are currently offering services at a loss, following the classic Silicon Valley playbook of burning investor cash to get big and then hoping to make a profit later. So any service you depend on could crash out of the race, and if one emerges as a victorious monopoly and you rely on it, it can charge you almost whatever it likes.

To my mind, the 'only just started' argument is wearing thin. It's software, it moves fast anyway, and all the giants of the tech world have been feverishly throwing money at AI for the last couple of years. I don't buy that we're still just at the beginning of some huge exponential improvement.

ben_w 3 hours ago||
My understanding is they make a loss overall due to the spending on training new models, but that the API costs are profit making if considered in isolation. That said, this is based on guesstimates from the hosting costs of open-weight models, owing to a lack of financial transparency everywhere for the secret-weights models.
blibble 3 hours ago||
> that the API costs are profit making if considered in isolation.

no, they are currently losing money on inference too

watwut 3 hours ago|||
> What plausible scenario do you imagine where your tools would be taken away or would “collapse in functionality”?

Simple. The company providing the tool suddenly needs actual earnings. Therefore, they need to raise prices. They also need users to spend more tokens, so they will make the tool respond in a way that requires more refinement. After all, the latter is exactly what happened with Google search.

At this point, that is the pretty normal software cycle: attract a crowd by being free or cheap, then lock features behind a paywall, then simultaneously raise prices more and more while making the product worse.

This literally NEEDS to happen, because these companies do not have any other path to profitability. So, it will happen at some point.

throwout4110 2 hours ago||
Sure, but you're forgetting that competition exists. If Anthropic's investors suddenly say "enough" and demand positive cash flow, it wouldn't be that hard; everyone is capturing users for flywheels and spending capex on model improvements, because if they don't, they are guaranteed to lose.

It's definitely going to get crappy. Remember Google in 2003, with relevant results and no endless SEO, or Amazon reviews being reliable, or Uber being simple and cheap, etc. Once the growth phase ends, monetization begins and the experience declines, but this is guardrailed by the fact that there are many players.

elictronic 4 hours ago||||
AI is in a hype bubble that will crash just like every other bubble. The underlying uses are there, but just like the dot-com era, tulips, subprime mortgages, and even Sir Isaac Newton's failings with the South Sea Company, the financial side will fall.

This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.

throwout4110 4 hours ago||
OK, sure, the bubble/non-bubble stuff, fine. But in terms of "things I'd like to be a part of", it's hard to imagine a more transformative technology (not to again turn off the anti-hype crowd). Say it's 1997 and you don't like the valuations you see. As a tech person, you're not excited by browsers, the internet, the possibilities? You don't want to be a part of that even if it means a bubble pops? I also hear a lot of people argue that the "finances don't make a lick of sense", but I don't think things are that cut and dried, and I don't see this as obvious. I don't think many people really know how things will evolve, or what size a market correction or bubble would be.
zdragnar 4 hours ago||
What precisely about AI is transformative, compared to the internet? E-mail replaced so much of faxing, phoning and physical mail. Online shopping replaced going to stores and hoping they have what you want, and hoping it is in stock, and hoping it is a good price. It replaced travel agents to a significant degree and reoriented many industries. It was the vehicle that killed CDs and physical media in general.

With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.

It's popping up in my music streams now and then, and I generally hate it. Mushy-mouthed fake vocals over fake instruments. It pops up online and aside from the occasional meme I hate it there too. It pops up all over blogs and emails and I profoundly hate it there, given that it encourages the actual author to silence themselves and replaces their thoughts with bland drivel.

Every single software product I use begs me to use their AI integration, and instead of "no" I'm given the option of "not now", despite me not needing it, and so I'm constantly being pestered about it by something.

It has, thus far, made nearly everything worse.

throwout4110 2 hours ago||
> With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.

I think this is probably the disconnect; this seems so wildly different from my experience. I'll grant that there are a ton of limitations still, but surely you'd concede that there has been an incredible amount of progress in a very short time? I can't imagine someone who sits down with Claude like I do getting up and saying "this is crap and a fad and won't go anywhere".

As for generated content, I again agree with you, and you'd be surprised to learn that _execs_ agree with you too. But look at models from 1, 2, 3 years ago and tell me you don't see a frightening progression in quality. If you want to say “I'll believe it when I see it,” that's fine, but my god, just look at the trajectory.

For AI slop text, once again I agree, and once again I think we all have to figure out how to use it. But it is great for e.g. helping me rewrite a wordy message quickly, making a paper or a doc more readable, combining my notes into something polished, etc., and it's getting better and better and better.

So I disagree that it has made everything worse, but I definitely agree it has made a lot of things worse, and we have a lot of Pets.com ideas that are totally not viable today. The point I think people are maybe missing (?) is that it's not about where we are, it's about the velocity and the future. You may be terrified and nauseated by $1T in capex on AI infra, fine, but what that tells you is that the scale is going to grow even further _in addition_ to the methodological/algorithmic improvements to tackle things like continual learning, robustness, higher-quality multimodal generation with e.g. true narrative consistency, etc. In 5 years I don't think many people will think of “slop” so negatively.

zdragnar 2 hours ago||
Where you see exponential growth in capability and value, I see the early stages of logarithmic growth.

A similar thing played out a bit with IoT and voice controlled systems like Alexa. They've got their places, but nobody needs or wants the Amazon Dash buttons, or for Alexa to do your shopping for you.

Setting an alarm or adding a note to a list is fine, remote monitoring is fine, but when it comes to things that really matter like spending money autonomously, it completely falls flat.

Long story short, I see a fad that will fall into the background of what people actually do, rather than becoming the medium that they do it by.

throwout4110 1 hour ago||
I could not disagree more, but you are far from alone, and I respect many of the reasons I've gathered for why you and others hold this belief.
skywhopper 3 hours ago|||
Maybe those people do different work than you do? Coding agents don’t work well in every scenario.
throwout4110 2 hours ago||
Yet people imply that because it doesn't work in their scenario, it's not good?
Xeronate 2 hours ago|||
Is it true that it's bad for learning new skills? My gut tells me it's useful as long as I don't use it to cheat the learning process, and I mainly use it for things like follow-up questions.
deaux 1 hour ago||
It is, it can be an enormous learning accelerator for new skills, for both adults and genuinely curious kids. The gap between low and high performers will explode. I can tell you that if I'd had LLMs, I would've finished schooling at least 25% quicker, while learning much more. When I say this on HN, some are quick to point out the fallibility of LLMs, ignoring that the huge majority of human teachers are many times more fallible. Now, this is a privileged place where many have been taught by what is indeed the global top 0.1% of teachers and professors, so it makes more sense that people would respond this way. Another source of these responses is simply fear.

In e.g. the US, it's a huge net negative, because kids probably aren't taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.

I can't tell you if this is the same inside e.g. China. I'm fairly sure it's not nearly as bad, though, as kids there derive much less benefit from cheating on homework and the learning process; they're more singularly judged on standardized tests where AI is not available.

mips_avatar 4 hours ago|||
The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in a lot of trouble (severe food sickness, stolen items, missed flights), and I don't know how I would have solved those problems without ChatGPT helping.
buildsjets 3 hours ago|||
This is one of the most depressing things I have ever read on Hacker News. You claim to have become so de-actualized as a human being that you cannot fulfill the most basic items of Maslow’s Hierarchy of Needs (food, health, personal security, shelter, transportation) without the aid of an LLM.
ohyoutravel 2 hours ago|||
I saw a post on Reddit the other day where the user was posting screenshots from ChatGPT about how they were using ChatGPT as a “Human OS” and outsourcing all decisions and information to ChatGPT. It made me queasy.
mips_avatar 3 hours ago||||
IDK, I got really sick in a foreign country, I wasn't sure how to get to the hospital, and I was alone in a hotel room. I don't really see how using ChatGPT to help me isn't actualizing.
erentz 1 hour ago|||
We used to have Google Search and Google Maps, which solved this problem of finding information about symptoms and finding medical centers near you. An LLM doesn't make anything better; it just confidently asserts things about medicine that may be wrong and always need to be verified against real sources anyway.
mips_avatar 1 hour ago||
Well, Google Search is a little nerfed since it went fully ad-revenue-focused.
ajkjk 3 hours ago||||
If you are operating under the constraint that talking to strangers is impossible then I could see why ChatGPT feels like a godsend...
blibble 3 hours ago|||
did you try asking at the reception desk?
simplyluke 35 minutes ago|||
Growing up in the internet age (I'm 28 now) it took me until well into my 20s to realize how many classes of problems can be solved in 30 seconds on a phone call vs hours on a computer.
mips_avatar 50 minutes ago|||
The hotel owner eventually half carried me to the hospital because I got so weak from dehydration. I'm glad I left my hotel room when I did; I was having difficulty not fainting.
canjobear 1 hour ago||||
Extremely uncharitable reading. Plausibly they were in a foreign country where they didn't speak any of the language and didn't know how anything worked. This kind of situation was never easy for anyone.
turtlesdown11 2 hours ago|||
Idiocracy (movie) is being speedrun before our eyes here on HN
nostrademons 2 hours ago||||
This is what people used to use Google for; I remember so many times between 2000-2020 that Google saved my bacon for exactly those things (travel plans, self-diagnosis, navigating local bureaucracies, etc.)

It's a sad commentary on the state of search results and the Internet now that ChatGPT is superior, particularly since pre-knowledge-panel/AI-overview Google was superior in several ways (not hallucinating, for one, and being able to triangulate multiple sources to tell the truth).

miltonlost 3 hours ago|||
Severe food sickness? I know WebMD rightly gets a lot of hate, but this is one thing where it would be good for. Stolen items? Depending on the items and the place, possibly police. Missed flights? Customer service agent at the airport for your airline or call the airline help line.
mips_avatar 3 hours ago||
Well I got so weak I needed to go to the hospital, and that was tough.
wkat4242 1 hour ago|||
I also hope it crashes and burns. The real value-added use cases will remain. The overhyped crap won't.

But the shockwave will cause a huge recession, and all those investors who put up trillions will not take their losses. Rich people never get poorer. One way or another, us consumers will end up paying for their mistakes, whether through huge inflation, job losses, energy costs, or service enshittification. We're already seeing the memory crisis having huge knock-on effects, with next year's phones being much more expensive. That's one of the ways we are going to be paying for this circus.

I really see value in it too, sure. But the amount of investment that goes into it is insane; it's not nearly that valuable. LLMs are not good for everything, and the next big thing is still a big question mark. AI is dragged in by the hair into use cases where it doesn't belong. The same shit we saw with blockchains, but now at a world-crashing scale. It's very scary seeing so much insanity.

But anyway whatever I think doesn't matter. Whatever happens will happen.

WhyOhWhyQ 4 hours ago|||
I also would like to see AI end up dying off except for a few niches, but I find myself using it more and more. It is not a productivity boost in the way I end up using it, interestingly. Actually I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it if everything I do is a good idea. Even in the SO era I would try to find a reference for every little choice I made to determine if it was a good or bad practice.
SpaceNoodled 2 hours ago||
That honestly sounds like addiction.
xg15 2 hours ago||
Not sure if that's also category #2 or a new one, but there are also places where AI is at risk of effectively becoming a drug and being actively harmful to the user: virtual friends/spouses, delusion-confirming sycophants, etc.
sirreal14 4 hours ago||
As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I work with, but I keep catching mistakes in their code that they didn't use to make. Large suites of elegant-looking unit tests, but the tests include large amounts of code duplicating the test framework's functionality for no reason, and I've even seen unit tests that mock the actual function under test. New features that already exist with saner APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed, but then their code isn't making it past code review. I worry about teams with less stringent code review cultures; modifying or improving these systems is going to be a major pain.
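
To make that antipattern concrete, here's a minimal hypothetical sketch (the module and function names are invented for illustration, not from any real review):

    # test_pricing.py -- hypothetical example of the antipattern
    from unittest.mock import patch

    import pricing  # hypothetical module containing apply_discount()

    # The "test" patches out the very function it claims to test, so the
    # assertion only ever exercises the mock's canned return value.
    @patch("pricing.apply_discount", return_value=90.0)
    def test_apply_discount(mock_apply):
        # Stays green no matter how broken the real apply_discount() is.
        assert pricing.apply_discount(100.0, 0.10) == 90.0
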
sudoshred 2 hours ago||
As someone on a team with a less stringent code review culture: AI-generated code creates more work when used indiscriminately. It's good enough to get approved, but full of non-obvious errors that cause expensive rework, which only gets prioritized once the shortcomings become painfully obvious, usually months after the original work was “completed” and once the original author has forgotten the details or, worse, left the team entirely. Not to say AI-generated code isn't occasionally valuable, just not for anything that is intended to be correct and maintainable indefinitely by other developers. The real challenge is people using AI-generated code as a mechanism to avoid fully understanding the problem that needs to be solved.
psyclobe 4 hours ago|||
> and I've even seen unit tests that mock the actual function under test

Yup. AI is so fickle it'll do anything to accomplish the task. But AI is just a tool; it's all about what you allow it to do. Can't blame the AI, really.

dpark 4 hours ago|||
In fairness I've seen humans make that mistake. We had a complete outage in the testing of a product once, and a couple of tests were still green. Turns out they tested nothing and never had.
sirreal14 2 hours ago||
> In fairness I’ve seen humans make that mistake

These were (formerly) not the kinds of humans who regularly made these kinds of mistakes.

lbrito 2 hours ago||||
Leverage.

That slop already existed, but AI scales it by an order of magnitude.

I guess the same can be said of any technology, but AI is just a more powerful tool overall. Using languages as an example: let's say duck typing allowed a 10% productivity boost, but also introduced 5% more mistakes/problems. AI (claims to) allow a 10x productivity boost, but also brings ~10x the mistakes/problems.

mehagar 3 hours ago|||
If a tool makes it easy to shoot yourself in the foot, then it's not a good tool. See C++.
jahsome 2 hours ago||
I'm no apologist, but this statement doesn't ring true for me. It's easy to shock yourself with electricity; is it a bad tool?
doyougnu 4 hours ago|||
I've interfaced with some AI-generated code, and after several examples of finding subtle yet very wrong bugs, I now digest code that I suspect comes from AI (or an AI-loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in any kind of care for quality or due diligence from some coworkers.

I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?

teej 3 hours ago||
Your coworkers were probably writing subtle bugs before AI too.
munificent 3 hours ago||
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
teej 3 hours ago||
Easier to skim 1000 flies from a single drum than 100 flies from 100 bowls of soup.
munificent 3 hours ago|||
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
Vegenoid 3 hours ago||||
No, I think it would be far easier to pick one fly each out of 100 bowls of soup than to pick all 1,000 flies out of a 50-gallon drum.

You don’t get to fix bugs in code by simply pouring it through a filter.

mips_avatar 4 hours ago|||
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
elzbardico 3 hours ago||
If AI replaces software engineers, people outside tech don't have much chance of surviving it either.
LPisGood 39 minutes ago||
Exactly. I think it's pretty clear that software engineering is an “intelligence complete” problem. If you can automatically solve SWE, then you can automatically solve pretty much all knowledge work.
jfalcon 4 hours ago||
I see it like the hype around js/node and whatever module tech was glued to it when it was new, from the perspective of someone who didn't code js. The sum of F's given is still zero.

-206dev

bccdee 5 hours ago||
> Engineers don't try because they think they can't.

This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude they're pointing to.

There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.

I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.

So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

averyvery 4 hours ago||
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).

A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.

There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.

New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.

layer8 3 hours ago|||
> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.

"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.

One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.

cs02rm0 3 hours ago|||
> the other engineers have lost all ambition for anything else

Worse, they've lost all funding for anything else.

zwnow 2 hours ago||
Industries are built upon shit people built in their basements. Get hacking.
r14c 2 hours ago||
I think it should be noted that a garage or basement in California costs like a million dollars.
FridgeSeal 3 hours ago|||
“Lost all ambition for anything else” is a funny way for the article to frame “hate being forced to run on the hamster wheel of AI, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it”
badbird33 3 hours ago||
To add another layer to it, the reason execs are foaming at the mouth is that they are hoping to fire as many people as possible, including those who implemented whatever AI solution in the first place.
gedy 3 hours ago||||
Maybe not what the OP or article is talking about, but it's super frustrating recently dealing with non/less technical mgrs, PMs, etc. who now think they have this Uno card to bypass technical discussion just because they vibe coded some UI demo. Like, no shit, that wasn't the hard part. But since they don't see the real/less visible parts like data/auth/security, etc., they act like engineers "aren't trying," are less innovative, anti-AI or whatever when you bring up objections to the "whole app" they made with their AI Snoopy snow cone machine.
spaniard89277 3 hours ago|||
My experience too. They are so convinced that AI is magical that pushing back makes you look bad.

Then things don't turn out as they expected and you have to deal with a dude thinking his engineers are messing with him.

It's just boring.

anonymars 2 hours ago|||
It reminds me of how we moved from "mockups" to "wireframes" -- in other words, deliberately making the appearance not look like a real, finished UI, because that could give the impression that the project was nearly done

But now, to your point: they can vibe-code their own "mockups" and that brings us back to that problem

senordevnyc 3 hours ago||||
I’ve been an engineer for 20 years, for myself, small companies, and big tech, and now working for my own saas company.

There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.

mjr00 3 hours ago|||
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?

There's certainly potential but a lot of the market is hot air right now.

> Either way, the market is going to punish them accordingly.

I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.

afavour 53 minutes ago|||
IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.

The skeptics are the ones that have tried AI coding agents and come away unimpressed because it can’t do what they do. If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.

micromacrofoot 2 hours ago||||
What's "there," though, is that despite being a wrapper of ChatGPT, the product itself is so compelling that it's essentially got a grip on the entire American economy. That's why everyone's crabs-in-a-bucket about it; there's something real that everyone wants to hitch onto. People compare crypto or NFTs to this in terms of hype cycle, but it's not even close.
thewebguyd 3 hours ago|||
> simply because the market has never really punished people for being less efficient at their jobs

In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.

Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).

In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.

jacquesm 3 hours ago||||
Punishment eh? Serves them right for being skeptical.

I've been around long enough that I have seen four hype cycles around AI-like coding environments. If you think this is new, you should have been there in the '80s (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the '60s (which I did not personally witness, on account of being a toddler), when COBOL, the language for managers, was all the rage.

In between there was LISP, the AI language (and a couple of others).

I've done a bit more than look at this and say 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool, they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive, and they might be a little more productive, but the AI has its own envelope of expertise, and the closer you are to the top of the field, the smaller your returns in that particular setting will be.

In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.

Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than Wikipedia for a quick overview of some subfield (but you had better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers who are not using AI are failing at their jobs, and it is at best, for me, a very mild accelerator in some use cases. I've seen enough AI-driven coding projects run hopelessly aground by now to know that there are downsides to that golden acorn you are seeing.

The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer by way of verification, the answers were so laughably incorrect that it was embarrassing.

dgacmu 2 hours ago||
I'm not a big LLM booster, but I will say that they're really good for proofs of concept, for turning detailed pseudocode into code, and sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuck) and lived through a few attempts at visual programming (ugh), and... LLM assistance is different. It's not magic, and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.

And for the better. I've honestly not had this much fun programming applications (as opposed to student stuff and inner loops) in years.

jacquesm 2 hours ago||
> but it does quite well with boring stuff that's still a substantial amount of programming.

I'm happy that it works out for you, and probably this is a reflection of the kind of work that I do: I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools, even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD, but I am never limited by my typing speed, much more by thinking about what it is that I actually want to make.

dgacmu 37 minutes ago||
I've used it a little for OpenSCAD with mixed results; sometimes it worked. But I'm a beginner at OpenSCAD and suspect that if I were better, it would have been faster to just code it. It took a lot of English to describe the shape, quite possibly more than it would have taken to just write it in OpenSCAD. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([3, 2, 5])... and as you say, the hard part is before the OpenSCAD anyway.
handstitched 3 hours ago||||
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

I would argue that the "actual job" is simply to solve problems. The client / customer ultimately do not care what technology you use. Hell, they don't really care if there's technology at all.

And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.

senordevnyc 3 hours ago||
Again, AI isn’t the right tool for every job, but that’s not the same thing as a shallow dismissal.
bigstrat2003 2 hours ago||
What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".
eschaton 3 hours ago||||
Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.
afavour 3 hours ago||||
> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction

Or, and stay with me on this, it’s a reaction to the actual experience they had.

I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD type stuff, or making a web page to display some data). But when I try to use it with the core parts of my job that actually require my specialist knowledge they fall apart. I spend more time correcting them than if I just write it myself.

Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.

bluGill 3 hours ago||||
I use AI all the time, but the only gain they have over me is better spelling and grammar; spelling and grammar have long been my weak point. I can write the same code they write just as fast without them, since typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code AI writes since it is incorrect rather often, so it isn't saving any effort, other than the time to look up the middle word of some long variable name.
josephg 2 hours ago||||
It really depends on what you're doing. AI models are great at junior-level programming tasks. They have very broad but often shallow knowledge, so if your job involves jumping between 18 different tools and languages you don't know very well, they're a huge productivity boost. “I don't write much SQL, or much Python. Make a query using SQLAlchemy which solves this problem. Here's our schema …”
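
For illustration, a minimal sketch of the kind of glue code that sort of prompt tends to yield, written against SQLAlchemy 2.0 with an invented one-table schema (every name here is an assumption for the example, not anyone's real code):

    from sqlalchemy import create_engine, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):  # stand-in for whatever the real schema holds
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]

    engine = create_engine("sqlite:///:memory:")  # throwaway in-memory DB
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(name="alice"))
        session.commit()
        # the generated query itself: broad-but-shallow boilerplate,
        # exactly the kind of thing the models have seen 1000 times
        rows = session.scalars(select(User).where(User.name == "alice")).all()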

AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months experience in the language. For anything more complex than a react component, I just wouldn’t ship what it writes.

I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.

spamizbad 3 hours ago||||
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,

I don't understand why people seem so impatient about AI adoption.

AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better (Only true for a subset of tasks). In areas where either of these cases apply, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy - AI needs to be a better product and good products take time.

ElijahLynn 3 hours ago||||
Well said!
gishh 3 hours ago||||
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

I have solved more problems with tools like sed and awk, you know, actual tools, than I've entered tokens into an LLM.

Nobody seemed to give a fuck as long as the problem was solved.

This is getting out of hand.

Aeolun 2 hours ago|||
Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.
DonHopkins 2 hours ago|||
But sed and awk are problems.
awesome_dude 3 hours ago|||
I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)

I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)

FTR, the market is currently punishing people who DO use it (CVs are routinely dumped at the merest hint of AI being used in their construction/presentation, interviewers dumping anyone they think is using AI for "help", code reviewers dumping any take-home assignments that have even COMMENTS massaged by AI)

palmotea 3 hours ago||||
> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.

There's a lot of disconnected-from-reality hustling (a.k.a lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think it's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.

SV_BubbleTime 4 hours ago|||
This isn’t “unfair”, but you are intentionally underselling it.

If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.

Edit: lol this forum :)

nosianu 4 hours ago|||
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right

I AM very impressed, and I DO use it and enjoy the results.

The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.

Again, I am VERY impressed by what was achieved. I even enjoy the Google AI summaries for some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.

But I'm already done getting used to what is possible now. Changes since then have been incremental, nice-to-haves, and I take them. I've found a place for the tool, but to match the hype, another equally large step in actual intelligence would be necessary for the tool to truly be able to replace humans.

So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can and can't do, and are already using it where appropriate. It's just a tool, though, one that has to be watched over when you use it, requiring attention. And it does not learn: I can teach a newbie and they will learn and improve; I can only tweak the AI with prompts, with varying success.

I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.

I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.

I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.

jandrese 3 hours ago||||
The big problem is that AI is amazing at the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues, it would be hopeless. Once your system gets complex enough, AI effectiveness drops off rapidly, and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.

In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.

I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.

DrewADesign 3 hours ago||
I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we've never seen it attributed):

The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.

So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.

sydd 3 hours ago||||
The more I use AI for coding, the more I realize it's a toy for vibe coding and fun projects. It's not for serious work.

When you work with a large codebase with a very high complexity level, the bugs AI puts in there will cost more than the easily added features are worth.

Libidinalecon 2 hours ago||
Many people also program and have no idea what a giant codebase looks like.

I know I don't. I have never been paid to write anything beyond a short script.

I actually can't even picture what a professional software engineer actually works on day to day.

From my perspective, it is completely mind-blowing to write my own audio synth in Python with Librosa, a library I didn't know existed before LLMs. Now I have a full-blown audio mangling tool that I would never have been able to figure out on my own.

It seems to me professional software engineering must be at least as different to vibe coding as my audio noodlings are to being a professional concert pianist. Both are audio and music related but really two different activities entirely.

rconti 4 hours ago||||
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Or your job isn't what AI is good at?

AI seems really good at greenfield projects in well known languages or adding features.

It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

perardi 3 hours ago|||
> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

This is precisely my experience.

Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.

Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.

Aeolun 2 hours ago|||
> It's been pretty awful, IME, at working with less well-known languages

Well, there’s your problem. You should have selected React while you had the chance.

bigstrat2003 4 hours ago||||
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
Jblx2 3 hours ago||||
I wonder if this issue isn't caused by people who aren't programmers, who can now churn out AI-generated stuff they couldn't before. So to them, this is a magical new ability, whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they wouldn't have bothered with before. But the real artisan cat memeists just roll their eyes.
etempleton 2 hours ago||
AI is better than you at what you aren't very good at. But once you are even mediocre at something, you realize AI is wrong or pretty bad at doing most things and every once in a while makes a baffling mistake.

There are some exceptions where AI is genuinely useful, but I have employees who try to use AI all the time for everything and their work is embarrassingly bad.

Jblx2 25 minutes ago||
>AI is better than you at what you aren’t very good at.

Yes, this is better phrased.

ggerni 4 hours ago||||
post portfolio I wanna see your bags
ModernMech 3 hours ago||||
> If you haven’t had a mind blown moment with AI yet...

Results are stochastic. Some people the first time they use it will get the best possible results by chance. They will attribute their good outcome to their skill in using the thing. Others will try it and will get the worst possible response, and they will attribute their bad outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.

antonvs 3 hours ago|||
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.

> Edit: lol this forum :)

Indeed.

pjmlp 4 hours ago|||
In European consulting agencies the trend now is to make AI part of each RFP reply; you won't get past the sales team if AI isn't crammed in there as part of the solution being delivered, and we get evaluated on it.

This takes all the joy away; even traditional maintenance projects at big corps seem attractive nowadays.

mr_toad 4 hours ago||
I remember when everything had to have the word 'digital' in it. And I'm old enough to remember when 'multimedia' was a buzzword that was crammed in anywhere it would fit.
Fordec 3 hours ago|||
You know what, this clarifies something for me.

PC, Web and Smartphone hype was based on "we can now do [thing] never done before".

This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"

It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.

burningChrome 2 hours ago|||
>> This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"

I was doing RPA (robotic process automation) 8 years ago. Nobody wanted it in their departments. Whenever we would do presentations, we were told to never, ever, ever talk about this technology replacing people - it only removes the mundane work so teams can focus more on the bigger scope stuff. In the end, we did dozens and dozens of presentations and only two teams asked us to do some automation work for them.

The other leaders had no desire to use this technology because they were not only fearful of it replacing people on their teams, they were fearful it would impact their budgets negatively so they just quietly turned us down.

Unfortunately, you're right because as soon as this stuff gets automated and you find out 1/3rd of your team is doing those mundane tasks, you learn very quickly you can indeed remove those people since there won't be enough "big" initiatives to keep everybody busy enough.

The caveat was even on some of the biggest automations we did, you still needed a subset of people on the team you were working with to make sure the automations were running correctly and not breaking down. And when they did crash, since a lot of these were moving time sensitive data, it was like someone just stole the crown jewels and suddenly you need two war rooms and now you're ordering in for lunch.

james_marks 3 hours ago||||
Yes and no. PC, Web, etc. advancements were also about lowering cost. It's not that no one could do a given thing, it's that it was too expensive for most people, e.g. having a mobile phone in the '80s.

Or hiring a mathematician to calculate what is now done in a spreadsheet.

munificent 3 hours ago|||
100%.

"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.

pjmlp 4 hours ago||||
Same, doesn't make this hype phase more bearable though.
Paianni 4 hours ago|||
or 'interactive' or 'cloud' (early 2010s).
bwfan123 4 hours ago|||
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product

I think there is a broader dichotomy between the people-persuasion-plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something; hype plays here, and marketing, religion, and political persuasion too. In the real-world plane, it is all about tangible outcomes; working code or results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two. I chose the engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.

balamatom 3 hours ago||
>a broader dichotomy between the people-persuasion-plane and the real-world-facts plane

This right here is the real thing which AI is deployed to upset.

The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.

The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.

That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.

My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended, and the productivity/wages gap began to grow. Something pertaining to the shared-culture-plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact how nobody seems to trust it very much any more.

saubeidl 3 hours ago||
This sounds a lot like the Marxist concept of alienation: https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation
hinkley 4 hours ago|||
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
jacquesm 3 hours ago|||
> But moving toward one pole moves you away from the other.

My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope - and in some cases they are right - that those two poles overlap. People make and lose fortunes on placing those bets and you don't necessarily have to be right or wrong in an absolute sense, just long enough that someone else will take over your load and hopefully at a higher valuation.

Engineers are not usually the ones placing the bets, which is why they try to stay away from hype-driven tech (to them the upside is neutral, but in case of a failure they lose their job, so it is simply safer to work on things that are not hyped). But as soon as engineers are placing bets, they are just as irrational as every other class of investor.

mips_avatar 4 hours ago|||
I do assume that; I legitimately think it's the most important thing happening in tech in the next decade. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.), and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
layer8 3 hours ago|||
> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.

bartread 2 hours ago|||
I've never worked at Microsoft. However, I do have some experience with the company.

I worked building tools within the Microsoft ecosystem, both on the SQL Server side, and on the .NET and developer tooling side, and I spent some time working with the NTVS team at Microsoft many years ago, as well as attending plenty of Microsoft conferences and events, working with VSIP contacts, etc. I also know plenty of people who've worked at or partnered with Microsoft.

And to me this all reads like classic Microsoft. I mean, the article even says it: whatever you're doing, it needs to align with whatever the current key strategic priority is. Today that priority is AI, 12 years ago it was Azure, and on and on. And, yes, I'd imagine having to align everything you do to a single priority regardless of how natural that alignment is (or not) gets pretty exhausting, and I'd bet it's pretty easy to burn out on it if you're in an area of the business where this is more of a drag and doesn't seem like it delivers a lot of value. And you'll have to dogfood everything (another longtime Microsoft pattern) core to that priority even if it's crap compared with whatever else might be out there.

But I don't think it's new: it's simply part and parcel of working at Microsoft. And the thing is, as a strategy it's often served them well: Windows[0], Xbox, SQL Server, Visual Studio, Azure, Sharepoint, Office, etc. Doesn't always work, of course: Windows Phone went really badly, but it's striking that this kind of swing and a miss is relatively rare in Microsoft's history.

And so now, of course, they're doing it with AI. And, of course, they're a massive company, so there will be plenty of people there who really aren't having a good time with it. But, although it's far from a foregone conclusion, it would not be a surprise for Microsoft to come from behind and win by repeating their usual strategy... again.

[0] Don't overread this: I'm not necessarily saying I'm a huge fan. In fact I do think Windows, at its core, is a decent operating system, and has been for a very long time. On the back end it works well, and I have no complaints. But I viscerally despise Windows 11 as a desktop operating system. That's right: DESPISE. VISCERALLY. AT A MOLECULAR LEVEL.

pico303 2 hours ago|||
This somewhat reflects my sentiment about this article. It felt very condescending: the "self-limiting beliefs" stuff and the implication that Seattle engineers are lesser than San Francisco engineers because they haven't bought into AI... well, neither have all the SF engineers.

One interesting takeaway from the article and the discussion is that there seem to be two kinds of engineers: those who buy into the hype and call it "AI," and those who see it for the fancy search engine it is and call it an "LLM." I'm pretty sure these days when someone mentions "AI" to me I roll my eyes. But if they say "LLM," ok, let's have a discussion.

antonvs 3 hours ago|||
> often companies with real products will mix in tidbits of hype

The wealthiest person in the world relies entirely on his ability to convince people to accept hype that surpasses all reason.

balamatom 4 hours ago|||
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

Spot. Fucking. On.

Thank you.

zzzeek 4 hours ago|||
The list of people who write code, use high-quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators" and displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc.). On top of that, the way corporate America is absolutely doing the very familiar "blockchain" dance on this and insisting everyone has to do AI all the time everywhere is a huge problem that hopefully will shake out some in the coming years.

But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game-changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype", that's not possible.

senordevnyc 3 hours ago||
The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It's sad; I always thought of my fellow engineers as more open-minded.
blibble 3 hours ago||
> The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions.

so, people with experience?

zzzeek 3 hours ago|||
I've been programming for more than 40 years.
senordevnyc 3 hours ago|||
Obviously. Turns out experience can be self-limiting in the face of paradigm-shifting innovation.

In hindsight it makes sense, I’m sure every major shift has played out the same way.

bigstrat2003 2 hours ago||
> Turns out experience can be self-limiting in the face of paradigm-shifting innovation.

It also turns out that experience can be what enables you to not waste time on trendy stuff which will never deliver on its promises. You are simply assuming that AI is a paradigm shift rather than a waste of time. Fine, but at least have the humility to acknowledge that reasonable people can disagree on this point instead of labeling everyone who disagrees with you as some out of touch fuddy-duddy.

throwout4110 5 hours ago|||
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).

AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?

> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.

What do you view as the potential that’s been stated?

Fraterkes 4 hours ago|||
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.

In those cases the actual "new" technology (i.e., not necessarily the underlying AI) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) LLM.

(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.

When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM's, but obviously that bespoke code is much more interesting to me as a fellow programmer.)

throwout4110 4 hours ago|||
Yes ok then I definitely agree
pydry 4 hours ago|||
Shells around ChatGPT are fine if they provide value.

Way better than AI jammed into every crevice for no reason.

mingus88 4 hours ago|||
Not OP but for starters LLMs != AI

LLMs are not an intelligence, and people who treat them as if they are infallible Oracles of wisdom are responsible for a lot of this fatigue with AI

pixl97 4 hours ago||
>Not OP but for starters LLMs != AI

Please don't do this; don't make up your own definitions.

Pretty much anything and everything that uses neural nets is AI. Just because you don't like what the definition has been since the beginning doesn't mean you get to reframe it.

In addition, by your definition humans wouldn't be an intelligence either, since they are not infallible oracles of wisdom.

ponector 4 hours ago||
Why then is there an AI-powered dishwasher, but no AI car?
fedsocpuppet 3 hours ago||
https://www.tesla.com/fsd ?

I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
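For reference, here's roughly what that kind of "game AI" is. A minimal sketch of A* grid pathfinding in Python (the grid, 4-way movement, and unit step costs are illustrative assumptions, not any particular game's code):

    import heapq

    def astar(grid, start, goal):
        # A* on a 2D grid: 0 = free, 1 = wall; 4-way moves with unit cost,
        # Manhattan distance as the admissible heuristic.
        rows, cols = len(grid), len(grid[0])
        def h(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_heap = [(h(start), 0, start)]
        came_from = {start: None}
        g_cost = {start: 0}
        while open_heap:
            f, g, node = heapq.heappop(open_heap)
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            if g > g_cost[node]:
                continue  # stale heap entry; a cheaper route was already found
            r, c = node
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = ng
                        came_from[nxt] = node
                        heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
        return None  # goal unreachable

    # Tiny demo: route around a wall.
    grid = [[0, 1, 0],
            [0, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (0, 2)))
    # -> [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]

Heuristic-guided search like this shipped in games for decades under the name "AI", and nobody objected.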

bigstrat2003 2 hours ago|||
It's because nobody was trying to take video game behavior scripts and declare them the future of all things technology.
fedsocpuppet 2 hours ago||
Ok? I'm not going to change the definition of a 70 year old field because people are annoyed at chatgpt wrappers.
ponector 2 hours ago|||
Cannot find any mention of AI there.

Also it's funny how they add (supervised) everywhere. It looks like "Full self driving (not really)"

fedsocpuppet 2 hours ago||
Yes, one needs some awareness of the technology. Computer vision: unambiguously AI. Motion planning: there are classical algorithms, but I believe Tesla and Waymo both use NNs here too.

Look, I don't like the advertising of FSD, or Musk himself, but without a doubt we have cars using significant amounts of AI that work quite well.

binary132 4 hours ago||
Bitcoin is at 93k so I don’t think it’s entirely accurate to say blockchain is insubstantive or without value
skybrian 3 hours ago|||
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.

Bitcoin is probably unkillable. Even if it were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.

OvidNaso 3 hours ago||
True, but then so is a lot of "tech". There were certainly at least equivalent social applications before and throughout Facebook's dominance, but as with Bitcoin, the network effect becomes primary after a minimum feature set.
skybrian 3 hours ago||
For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.

Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.

rpcope1 4 hours ago||||
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash, that doesn't mean dogshit is particularly valuable or substantive either.
triceratops 1 hour ago||
I'd argue even dogshit has more practical use than Bitcoin, if no one paid money for Bitcoin. You can throw it for self-defence, compost it (under high heat to kill the germs), put it on your property to scare away raccoons (it works sometimes).
venturecruelty 3 hours ago||||
Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.
mgaunard 4 hours ago||||
Very little of the trading actually happens on the blockchain; it's only used to move assets between trading venues.

The values of bitcoin are:

- easy access to trading for everyone, without institutional or national barriers

- high leverage to effectively easily borrow a lot of money to trade with

- new derivative products that streamline the process and make speculation easier than ever

The blockchain plays very little part in this. If anything it makes borrowing harder.

blibble 3 hours ago||
I agree with "easy access to trading for everyone, without institutional or national barriers"

how on earth does bitcoin have anything to do with borrowing or derivatives?

in a way that wouldn't also work for beanie babies

airstrike 4 hours ago||||
If you can't point to real use cases at scale, it's hard to argue it has intrinsic value even though it may have speculative value.
ejoso 4 hours ago||||
Uh… So the argument here is that anticipated future value == meaningful value today?

The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn’t directly create value. It’s a store of it - again, assuming enough people continue to buy into the narrative so that it doesn’t dramatically deflate when you need to recover your assets. States and other investors are helping make stability happen to maintain it as a value store, but you still need the story to keep propagating to achieve those ends.

adastra22 3 hours ago||||
With almost zero fundamentals. That’s the part you are glossing over.
senordevnyc 3 hours ago|||
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
shepardrtc 5 hours ago||
Ok so a few thoughts as a former Seattleite:

1. You were a therapy session for her. Her negativity was about the layoffs.

2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.

3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead within a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.

4. I don't think people hate AI, they hate the hype.

Anyways, your app actually does sound interesting so I signed up for it.

hexator 4 hours ago||
Some people really do hate AI, it's not entirely about the layoffs. This is a well insulated bubble but you can find tons of anti-AI forums online.
zubiaur 1 hour ago|||
Some people find their life's meaning through craft and work. When that craft suddenly becomes less scarce and less special, so does the meaning tied to it.

I wonder if these feelings are what scribes and amanuenses felt when the printing press arrived.

I do enjoy programming, I like my job and take pride in it, but I actively try for it not to be the activity that gives my life meaning. I'm just a mercenary of my trade.

nateglims 1 hour ago|||
Outside of tech, I think the opinion is generally negative. AI has lost a lot of the narrative due to things like energy prices and layoffs.
deaux 1 hour ago||
Globally, the opinion isn't generally negative. It's localized.
Karrot_Kream 1 hour ago|||
I was an early employee at a unicorn and I saw this culture take hold once we started hiring from Big Tech talent pools and offering Big Tech comp packages, though before AI hype took off. There's a crazy lack of agency that kicks in for Big Tech folks that's really hard to explain. This feeling that each engineer is this mercenary trying really hard to not get screwed by the internal system.

Most of it is because there's little that ties actual output to organizational outcomes. AI mandates, after all, are just a blunt way to force engineers to use AI, where if you were at a startup or smaller company you would probably organically find how much an LLM helps you, and where. It may not even help your actual work even if it helps your coworkers. That market feedback is sorely missing from the Big Techs, and so ham-fisted engineering mandates have to do in order to force engineers to become more efficient.

In these cases I always try to remind friends that you can always leave a Big Tech. The thing is, from what I can tell, a lot of these folks have developed lifestyle inflation from working in Big Tech and some of their anger comes from feeling trapped in their Big Tech role due to this. While I understand, I'm not particularly sympathetic to this viewpoint. At the end of the day your lifestyle is in your hands.

mips_avatar 4 hours ago|||
I think these companies would benefit from honesty. If they're right and their new AI capabilities are really powerful, then poisoning their workforce against AI is the worst thing they could do right now. A generous severance approach and compassionate layoffs would go a long way.
pier25 1 hour ago||
It’s not only the hype though.

What about the complete lack of morality some (most?) AI companies exhibit?

What about the consequences in the environment?

What about the enshittification of products?

What about the usage of water and energy?

Etc.

assemblyman 3 hours ago||
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.

I feel fatigued by AI. To be more precise, this fatigue includes several factors. The first one is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things in a given stretch of time. Ideally I would like to focus and go deep on those things. Often, I need to learn something new and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.

Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive, and insightful to actually think hard about and understand a topic before using them.

Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.

Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.

I am actually impressed with, and heavily use, the models. The tiresome part now is some of the humans around the technology who participate in the behaviors listed above.

mkoubaa 1 hour ago|
I get excited by new model releases, try each one, switch it to default if I feel it's better, and then I move on. I don't understand why any professional SWE should engage in weird cultish behavior about these models; it's a better mousetrap as far as I'm concerned.
groos 5 hours ago||
It's not just that AI is being pushed on to employees by the tech giants - this is true - but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing is due to a self-fulfilling prophecy of eliminating jobs in the tech industry and outside by CEOs who have bet too much on AI. Everyone currently agrees that there is no return on all the money spent on AI. Some players may survive and do well in the future, but for a majority there is only the prospect of pain, and this is what all the negativity is about.
pnathan 4 hours ago|
As a layoff justification and a hurry-up tool, it is pretty loathsome. People rely on their jobs for their housing, food, etc.
elzbardico 3 hours ago||
More than this, man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could easily be extended in new forms to the labor situation in an AI-based economy. FWIW, humans derive a lot of their self-evaluation as people from labor.
adastra22 3 hours ago||
Marx was correct in his identification of the problem (the communist manifesto still holds up today). Marx went off the rails with his solution.
decimalenough 3 hours ago||
While everybody else is ranting about AI, I'll rant about something else: trip planning apps. There have been literally thousands of attempts at this and AFAICT precisely zero have ever gotten any traction. There are two intractable problems in this space.

1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:

2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.

b_e_n_t_o_n 1 hour ago||
There just isn't much friction in having a few tabs open (maps, booking site, airline site, Google search) plus a notepad. The friction of searching for an app, downloading it, and then learning how to use it is just higher.

So many products are like this - it sounds good on paper to consolidate a bunch of tasks in one place but it's not without costs and the benefit is just not very high.

brokencode 2 hours ago|||
I use and pay for Wanderlog. Idk how their business is doing, but I love it as a user. They use an embedded Google Maps viewer for locations, so coverage is not a problem.
cube00 1 hour ago||
> They use an embedded Google Maps viewer for locations

If they become popular they'll have to move to OSM; Google's steep Maps API pricing at high usage is well known for bringing companies to their knees [1].

[1]: https://news.ycombinator.com/item?id=35089776

debesyla 3 hours ago||
It's just another business/service niche that is solved until the current Big Provider becomes Evil or goes under.

Similar to "made for everyone" social networks and video upload platforms.

But there are trip-planning niches where no one is solving the pain! For example Geocaching. I always dreamed about an easy way to plan Geocaching routes for travel and find interesting caches along the way. Currently you've got to filter them out and then eyeball the map for what seems to be nearby, even though there may not be any real roads there, or the cache may actually be lost, or it can only be accessed at a specific time of day.

So... No one wants apps that are already solved + boring.

paxys 4 hours ago||
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
artifaxx 1 hour ago||
Definitely. AI sentiment is positive among most people at the small startup I work at in the Seattle area. I do see the "AI fatigue" too; I bet the majority of it comes from AI being used as a repeated layoff rationalization. Personally, AI is a tool, and one of the more useful ones (e.g. Claude and Gemini thinking models make quite helpful code reviewers once given a checklist). The hype often overshadows these benefits.
mips_avatar 4 hours ago||
That's probably the difference
vunderba 5 hours ago||
From the article:

> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.

I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.

mips_avatar 4 hours ago||
I grew up in Norway, and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
59nadir 2 hours ago|||
In Swedish the G wouldn't be silent so it wouldn't really be all that much like "wonderful"; "vanderfugel" is the closest thing I could come up with for how I'd pronounce it with some leniency.
throw-qqqqq 3 hours ago|||
Same in Danish FWIW.

In English, I’d pronounce it very similar to “wonderful”.

adastra22 2 hours ago||
If OP dropped the g, it would be a MUCH better product name.
throw-qqqqq 2 hours ago|||
Solid advice. Seeing how many here would pronounce it differently, I totally agree hahah
mips_avatar 2 hours ago|||
I actually own wanderfull.ai
adastra22 2 hours ago||
Dropping an l would be better, I think.
littlekey 2 hours ago|||
The weird thing is that half of the uses of the name on that landing page spell it as "Wanderfull". All of the mock-up screencaps use it, as does the bit at the bottom with "Be one of the first people shaping Wanderfull" etc.

So even the creator can't decide what to call it!

epolanski 4 hours ago|||
Also, do it assuming different linguistic backgrounds. It could sound dramatically different to people who speak English as a second language, who are going to be a whole lot of your users, even if the application is in English.
Ekaros 4 hours ago|||
If there is a g in there I will pronounce a g there. I have some standards and that is one. Pronouncing every single letter.
basscomm 4 hours ago|||
> Pronouncing every single letter.

Now I want to know how you pronounce words like: through, bivouac, and queue.

adastra22 2 hours ago||
You don’t pronounce all the letters?
badc0ffee 3 hours ago||||
That's a gnarly standard you have there.
paddleon 2 hours ago|||
obviously not a native French speaker
mips_avatar 4 hours ago|||
It's pronounced wanderfull in Norwegian
epolanski 4 hours ago|||
And how many of your users are going to have Nordic backgrounds?

I personally thought it was wander _fughel_ or something.

Let alone how difficult it is to remember how to spell it and look it up on Google.

thinkling 1 hour ago||||
The one current paying user of the app I've seen in this discussion called it "Wanderlog". FYI on the stickiness of the current name.
richiebful1 1 hour ago||
wanderlog is a separate web service

https://wanderlog.com/

hbosch 2 hours ago||||
"Wanderful" would be a better name.
quickthrowman 2 hours ago|||
Just FYI, I would read it out loud in English as “wander fuggle”. I would assume most Americans would pronounce the ‘g’.

I thought ‘wanderfugl’ was a throwback to ~15 years ago when it was fashionable to use a word but leave out vowels for no reason, like Flickr/Tumblr/Scribd/Blendr.

adastra22 2 hours ago|||
And if you manage to say it out loud, say it to someone else and ask them to spell it. If they can’t spell it, they can’t type it into the URL bar.
efskap 4 hours ago|||
Maybe that's why they didn't go with the English cognate i.e. Wanderfowl, since being foul isn't great branding
isomorphic 4 hours ago||
What? You don't want travel tips from an itinerant swinger? Or for itinerant swingers?
somekyle2 5 hours ago|
Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do. But, enough of the people in tech have their future tied to AI that there are lot of vocal boosters.
tptacek 5 hours ago||
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
wk_end 5 hours ago|||
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.

Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.

tptacek 4 hours ago|||
The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).
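(For the curious, a classification pass like that is only a few lines today. A minimal sketch, assuming the official OpenAI Python client; the label set and the per-minute chunking are illustrative stand-ins, not the exact prompt I used:)

    # Tag each minute-long chunk of a meeting transcript with one broad subject.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    LABELS = ["zoning", "budget", "public safety", "parks", "procedure", "other"]

    def classify_minute(text: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Classify this meeting excerpt into exactly one of: "
                            + ", ".join(LABELS) + ". Reply with the label only."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip().lower()

    # One call per chunk:
    chunks = ["Motion to approve the FY25 road resurfacing budget passes 4-1..."]
    print([classify_minute(c) for c in chunks])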

Or, drive through Worth and Bridgeview in IL, where all the middle eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just, billboards obviously made with GenAI.

I think it's just not true that non-tech people are especially opposed to AI.

sleepybrett 4 hours ago|||
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what the people they are managing are actually building. Their job is to get a question from management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting to make sure it stays on track, not understand the content. AI can do all that shit without a problem.
vasvir 2 hours ago||
MANNA https://milweesci.weebly.com/uploads/1/3/2/4/13247648/mannap...
somekyle2 5 hours ago||||
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
tptacek 4 hours ago||
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
pesus 4 hours ago||
Frankly, tech deserves its bad reputation in SF (and worldwide, really).

One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.

I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.

treis 4 hours ago|||
There's a long list of things that have "replaced" humans all the way back to the ox-drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
GuinansEyebrows 3 hours ago|||
it's plenty sane to be angry when the benefits of those technical innovations are not distributed equally.
pesus 3 hours ago|||
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
Hammershaft 3 hours ago||
The plough also made the rich richer, but in the long run the productivity gains it enabled drove improvements to common living standards.
tptacek 4 hours ago|||
I don't agree with any of this. I just think it's aggravating to live in a company town.
majormajor 5 hours ago||||
Non-technical people that I know have rapidly embraced it as "better google where i don't have to do as much work to answer questions." This is in a non-work context so i don't know how much those people are using it to do their day job writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they just are like "oh, the internet got even better."

There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.

neutronicus 4 hours ago||||
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.

At least, that's my wife's experience working on a contract with a state government at a big tech vendor.

tptacek 1 hour ago||
Not talking about government employees, for whatever that's worth.
kg 5 hours ago|||
EDIT: Removed part of my post that pissed people off for some reason. shrug

It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.

tptacek 5 hours ago||
The claim I was responding to implied that non-techies distinctively hate AI. You're a techie.
tokioyoyo 3 hours ago|||
It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” things. There are a lot of vocal boosters and vocal anti-boosters, but the general population is using it in a Google fashion and moving on. Not everyone is thinking about AI-apocalypse every day.

Personally, I’m in between the two opinions. I hate when I’m consuming AI-generated stuff, but I can see the use for myself for work, or for asking a bunch of not-so-important questions to get a general idea of things.

themafia 3 hours ago|||
> enough of the people in tech have their future tied to AI that there are lot of vocal boosters

That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.

IAmBroom 4 hours ago|||
Most of my FB contacts are not in tech. AI is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly being pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
mips_avatar 5 hours ago|||
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI
_keats 5 hours ago||
I think this comment (and TFA) is really just painting with too broad a brush. Of course there are going to be people in tech hubs who are very pro-AI, either because they work with it directly and have had legitimately positive experiences, or because they work with it and begrudgingly see the writing on the wall for what it means for software professionals.

I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.

Forgeties79 5 hours ago|||
What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
lambchoppers 4 hours ago||
Health and safety seems irrelevant to me. I complain about cars; I point out "obscure" facts, like that they are a major cause of lung-related health problems for innocent bystanders; I don't actually ride in cars on any regular basis; in fact, I use them less than I use AI. There were people at the car's introduction who made all the points I would make today.

The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their lives convenient, hopefully without attending the relevant funerals themselves.

Forgeties79 4 hours ago||
> health and safety seems irrelevant to me

Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.

lambchoppers 3 hours ago|||
Do you think the industry will stop because of your concern? If for example, AI does what it says on the box but causes goiters for prompt jockeys do you think the industry will stop then or offshore the role of AI jockey?

It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to progress measured in consumer convenience.

adastra22 2 hours ago|||
I for one have no idea what you mean by health and safety with respect to AI. Do you have an OSHA concern?
65 3 hours ago|||
Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, with of course poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.