Posted by saigrandhi 9 hours ago

Is it a bubble? (www.oaktreecapital.com)
147 points | 223 comments
sp4cec0wb0y 8 hours ago|
> In many advanced software teams, developers no longer write the code; they type in what they want, and AI systems generate the code for them.

What a wild and speculative claim. Is there any source for this information?

sethammons 7 hours ago||
At $WORK, we have a bot that integrates with Slack that sets up minor PRs. Adjusting tf, updating endpoints, adding simple handlers. It does pretty well.

Also in a case of just prose to code, Claude wrote up a concurrent data migration utility in Go. When I reviewed it, it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess and could not be gracefully killed. I would have written it faster by hand, no doubt. I think I know more now and the calculus may be shifting on my AI usage. However, the following day, my colleague needed a nearly identical temporary tool. A 45 minute session with Claude of "copy this thing but do this other stuff" easily saved them 6-8 hours of work. And again, that was just talking with Claude.
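
To make concrete what was missing, here's roughly the shape the tool should have had. This is a minimal sketch, not the actual utility, with a hypothetical migrateRecord standing in for the real per-record work:

    package main

    import (
        "context"
        "fmt"
        "os/signal"
        "sync"
        "syscall"
    )

    func migrateRecord(ctx context.Context, id int) {
        // Placeholder: the real tool would read, transform, and write one record,
        // checking ctx for cancellation around any slow step.
        _ = ctx
        _ = id
    }

    func main() {
        // Cancel the context on Ctrl-C / SIGTERM so workers can stop cleanly.
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
        defer stop()

        jobs := make(chan int)
        var wg sync.WaitGroup

        // Fixed pool of workers, all tracked by a single WaitGroup.
        for w := 0; w < 4; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for id := range jobs {
                    migrateRecord(ctx, id)
                }
            }()
        }

        // Producer stops feeding work as soon as the context is cancelled.
    feed:
        for id := 0; id < 1000; id++ {
            select {
            case jobs <- id:
            case <-ctx.Done():
                break feed
            }
        }
        close(jobs)
        wg.Wait() // let in-flight records finish before exiting
        fmt.Println("done:", ctx.Err())
    }

A shared WaitGroup plus a cancellable context is the whole trick for "can be gracefully killed"; that's the part the generated version kept getting wrong.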

I am doing a hybrid approach, really. I write much of my scaffolding, I write example code, I modify quick things the AI made to be more like I want, I set up guard rails and some tests, then have the AI go to town. Results are mixed but trending up still.

FWIW, our CEO has declared us to be AI-first, so we are to leverage AI in everything we do, which I think is misguided. But you can bet they will be reviewing AI usage metrics, and lower won't be better at $WORK.

yellow_lead 2 hours ago|||
You should periodically ask Claude to review random parts of code to pump your metrics.
giancarlostoro 43 minutes ago||
Has the net benefit that it points out things that are actually wrong and overlooked.
shuckles 45 minutes ago||||
It took me a while to realize you were using "$WORK" as a shell variable, not as a reference to Slack's stock ticker prior to its acquisition by $CRM.
chickensong 6 hours ago||||
> it wasn't managing goroutines or waitgroups well, and the whole thing was a buggy mess and could not be gracefully killed

First pass on a greenfield project is often like that, for humans too I suppose. Once the MVP is up, a refactoring pass with Opus ultrathink to look for areas of weakness and improvement usually tightens things up.

Then as you pointed out, once you have solid scaffolding, examples, etc, things keep improving. I feel like Claude has a pretty strong bias for following existing patterns in the project.

sbuttgereit 1 hour ago|||
I think your experience matches well with mine. There are certain workloads and use cases where these tools really do well and legitimately save time; these tend to be more concise, well-defined tasks with good context to draw from. With the wrong tasking, the results can be pretty bad and a time sink.

I think the difficulty is exercising the judgement to know where that productive boundary sits. That's more difficult than it sounds because we're not used to adjudicating machine reasoning which can appear human-like ... So we tend to treat it like a human, which is, of course, an error.

throwaway2037 44 minutes ago|||
I completely agree. This guy is way outside his area of expertise. For those unaware, Howard Marks is a legendary investment manager with a decades-long impressive track record. Additionally, these "insights" letters are also legendary in the money management business. Personally, I would say his wisdom is one notch below Warren Buffett. I am sure he is regularly asked (badgered?) by investors what he thinks about the current state and future of AI (LLMs) and how it will impact his investment portfolio. The audience of this letter is investors (real and potential), as well as other investment managers.
throwaway2037 40 minutes ago||
Follow-up: This letter feels like a "jump the shark" moment.

Ref: https://blog.codinghorror.com/has-joel-spolsky-jumped-the-sh...

kscarlet 7 hours ago|||
The line right after this is much worse:

> Coding performed by AI is at a world-class level, something that wasn’t so just a year ago.

Wow, finance people certainly don't understand programming.

mcv 6 hours ago|||
World class? Then what am I? I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea. I am impressed by its ability to generate and analyse code, but its code almost never works the first time, unless it's trivial boilerplate stuff, and its analysis is wrong half the time.

It's very useful if you have the knowledge and experience to tell when it's wrong. That is the absolutely vital skill to work with these systems. In the right circumstances, they can work miracles in a very short time. But if they're wrong, they can easily waste hours or more following the wrong track.

It's fast, it's very well-read, and it's sometimes correct. That's my analysis of it.

skydhash 1 hour ago|||
> I frequently work with Copilot and Claude Sonnet, and it can be useful, but trusting it to write code for anything moderately complicated is a bad idea

This sentence and the rest of the post read like horoscope advice. Like "it can be good if you use it well, it may be bad if you don't". It's pretty much the same as saying a coin may land on heads or tails.

hatthew 1 minute ago||
saying "a coin may land on head or on tail" is useful when other people are saying "we will soon have coins that always land on heads"
malfist 6 hours ago||||
Is this why AI is telling us our every idea is brilliant and great? Because their code doesn't stand up to what we can do?
AmericanOP 1 hour ago||
Whichever PM sold glazing as a core feature should be ejected into space.
formerly_proven 6 hours ago|||
Copilot is easily the worst (and probably slowest) coding agent. SOTA and Copilot don't even inhabit similar planes of existence.
selectodude 6 hours ago||||
They don't. I've gone from rickety, slow Excel sheets and maybe some Python functions to automate small things I could figure out, to building out entire data pipelines. It's incredible how much more efficient we've gotten.
clickety_clack 6 hours ago|||
Ask ChatGPT “is AI programming world class?”
projektfu 38 minutes ago|||
I have heard many software developers confidently tell me "pilots don't really fly the planes anymore" and, well, that's patently false, but jetliners' autopilots do handle much of the busy work during cruise, and sometimes during climb-out and approach. And they can sometimes land themselves, but not efficiently enough for a busy airport.
whoknowsidont 7 hours ago|||
It's not. And if your team is doing this you're not "advanced."

Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.

Which is great! But it's not a +1 for AI, it's a -1 for them.

XenophileJKO 58 minutes ago|||
I'm beginning to think most "advanced" programmers are just poor communicators.

It really comes mostly down to being able to concisely and eloquently define what you want done. It also is important to understand what the default tendencies and biases of the model are so you know where to lean in a little. Occasionally you need to provide reference material.

The capabilities have grown dramatically in the last 6 months.

I have an advantage because I have been building LLM-powered products, so I know mechanically what they are and are not good with. For example: want it to wire up an API with 250+ endpoints with a harness? You'd better create (or have it create) a way to cluster and audit coverage.

Generally the failures I hear often with "advanced" programmers are things like algorithmic complexity, concurrency, etc., and these models can do this stuff given the right motivation/context. You just need to understand what "assumptions" the model is making and know when you need to be explicit.

Actually, one thing most people don't understand is they try to say "Do (A), don't do (B)", etc., defining granular behavior, which is fundamentally a brittle way to interact with the models.

Far more effective is defining the persona and motivation for the agent. This creates the baseline behavior profile for the model in that context.

Not "don't make race conditions", more like "You value and appreciate elegant concurrent code."

XenophileJKO 52 minutes ago||
One added note. This rigidness of instruction is a real problem that the models themselves will magnify, and you need to be aware of it. For example, if you ask a Claude-family model to write a sub-agent for you in Claude Code, 99% of the time it will define a rigid process with steps and conditions instead of creating a persona with motivations (and, if you need it, suggested courses of action).
NewsaHackO 7 hours ago||||
Part of the issue is that I think you are underestimating the number of people not doing "advanced" programming. If it's around ~80-90%, then that's a lot of +1s for AI
whoknowsidont 5 hours ago|||
Why do you feel like I'm underestimating the # of people not doing advanced programming?
NewsaHackO 5 hours ago||
Theoretically, if AI can do 80-90% of programming jobs (the ones not in the "advanced" group), that would be an unequivocal +1 for AI.
whoknowsidont 4 hours ago||
I think you're crossing some threads here.
NewsaHackO 4 hours ago||
"It's not. And if your team is doing this you're not "advanced." Lots of people are outing themselves these days about the complexity of their jobs, or lack thereof.

Which is great! But it's not a +1 for AI, it's a -1 for them.

" Is you, right?

whoknowsidont 3 hours ago||
Yes. You can see my name on the post.
NewsaHackO 3 hours ago||
OK, just making sure. Have a blessed day :)
9rx 2 hours ago|||
It's true for me. I type in what I want and then the AI system (compiler) generates the code.

Doesn't everyone work that way?

zahlman 2 hours ago||
Describing a compiler as "AI" is certainly a take.
parliament32 1 hour ago|||
Compilers are probably closer to "intelligence" than LLMs.
agumonkey 2 hours ago|||
It's something that crossed my mind too, honestly: natural-language-to-code translation.
skydhash 1 hour ago||
You can also do search-query-to-code translation by using GitHub or StackOverflow.
its_ethan 2 hours ago|||
Is it not sort of implied by the stats later: "Revenues from Claude Code, a program for coding that Anthropic introduced earlier this year, already are said to be running at an annual rate of $1 billion. Revenues for the other leader, Cursor, were $1 million in 2023 and $100 million in 2024, and they, too, are expected to reach $1 billion this year."

Surely that revenue is coming from people using the services to generate code? Right?

Windchaser 2 hours ago|||
A back-of-the-napkin estimate of software developer salaries:

There are some ~1.5 million software developers in the US per BLS data, or ~4 million if using a broader definition. Median salary is $120-140k. Let's say $120k to be conservative.

This puts total software developer salaries at $180 billion.

So, that puts $1 billion in Claude revenue in perspective; only about 0.5% of software developer salaries. Even if it only improved productivity 5%, it'd be paying for itself handily - which means we can't take the $1 billion in revenues to indicate that it's providing a big boost in productivity.
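
Spelled out as a trivial sketch (same rough inputs as above, nothing new):

    package main

    import "fmt"

    func main() {
        devs := 1_500_000.0       // BLS narrow estimate
        salary := 120_000.0       // conservative median
        total := devs * salary    // ~= $180B in total salaries
        claude := 1_000_000_000.0 // reported annual revenue run rate
        fmt.Printf("total salaries: $%.0fB\n", total/1e9)
        fmt.Printf("Claude revenue as a share of salaries: %.2f%%\n", 100*claude/total)
    }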

browningstreet 2 hours ago||||
Generating code isn’t the same as running it, running it on production, and living with it over time.

In time I'm sure it will be, but it's still early days, land-grab time.

halfcat 2 hours ago|||
> Surely that revenue is coming from people using the services to generate code? Right?

Yes. And all code is tech debt. Now generated faster than ever.

brulard 7 hours ago|||
I'm on a team like that and I see it happening in more and more companies around. Maybe "many" is doing heavy lifting in the quoted text, but it is definitely happening.
no_wizard 1 hour ago|||
Here's the lede they buried:

>The key is to not be one of the investors whose wealth is destroyed in the process of bringing on progress.

They are a VC group. Financial folks. They are working largely with other people's money. They simply need not hold the bag to be successful.

Of course they don't care if it's a bubble or not; at the end of the day, they only have to make sure they aren't holding the bag when it all implodes.

loloquwowndueo 8 hours ago|||
Probably their googly-eyed vibe coder friend told them this and they just parroted it.
RajT88 7 hours ago||
Right. The author is non-technical and said so up front.
interstice 7 hours ago|||
If true I’d like to know who is doing this so I can have exactly nothing to do with them.
20after4 5 hours ago|||
I've had Claude Code compose complex AWS infrastructure (using Pulumi IaC) that mostly works from a one-shot prompt.
agumonkey 2 hours ago|||
Seen it first hand: scan your codebase, plan an extension or rewrite or both, iterate with some hand-holding, and off you go. And it was not even an advanced developer driving the feature (which is concerning).
9rx 3 hours ago|||
It's not exactly wrong. Not since the advent of AI systems (a.k.a. compilers) have developers had to worry about code. Instead they type in what they want and the compiler generates the code for them.

Well, except developers have never had to worry about code: even in the pre-compiler days, coders, a different job done by a different person, were responsible for producing the code. Development has always been about writing down what you want and letting someone or something else generate the code for you.

But the transition from human coders to AI coders happened like, what, 60-70 years ago? Not sure why this is considered newsworthy now.

IceDane 2 hours ago|||
I'm wondering: do you genuinely not understand how compilers work at all or is there some deeper point to your AI/compiler comparison that I'm just not getting?
9rx 2 hours ago||
My understanding is that compilers work just like originally described. I type out what I want. I feed that into a compiler. It takes that input of what I want and generates code.

Is that not your understanding of how compilers work? If a compiler does not work like that, what do you think a compiler does instead?

IceDane 1 hour ago||
A compiler does so deterministically and there is no AI involved.
9rx 1 hour ago||
A compiler can be deterministic in some cases, but not necessarily so. A compiler for natural language cannot be deterministic, for example. It seems you're confusing what a compiler is with implementation details.

Let's get this topic back on track. What is it that you think a compiler does if not take in what you typed out for what you want and use that to generate code?

bonaldi 1 hour ago|||
This doesn't feel like good-faith. There are leagues of difference between "what you typed out" when that's in a highly structured compiler-specific codified syntax *expressly designed* as the input to a compiler that produces computer programs, and "what you typed out" when that's an English-language prompt, sometimes vague and extremely high-level

That difference - and the assumed delta in difficulty, training and therefore cost involved - is why the latter case is newsworthy.

9rx 1 hour ago||
> This doesn't feel like good-faith.

When has a semantic "argument" ever felt like good faith? All it can ever be is someone choosing what a term means to them and trying to beat down others until they adopt the same meaning. Which will never happen because nobody really cares.

They are hilarious, but pointless. You know that going into it.

IceDane 1 hour ago|||
I've written more than one compiler, so I definitely understand how compilers work.

It seems you're trying to call anything that transforms one thing into another a compiler. We all know what a compiler is and what it does (except maybe you? It's not clear to me) so I genuinely don't understand why you're trying to overload this terminology further so that you can call LLMs compilers. They are obviously and fundamentally different things even if an LLM can do its best to pretend to be one. Is a natural language translation program a compiler?

9rx 1 hour ago||
> Is a natural language translation program a compiler?

We have always agreed that a natural language compiler is theoretically possible. Is a natural language translation program the same as a natural language compiler, or do you see some kind of difference? If so, what is the difference?

wakawaka28 1 hour ago|||
Compilers are not AI, and code in high-level languages is still code in the proper sense. It is highly dishonest to call someone who is not a competent software engineer a "developer" even if their job consists entirely of telling actual software engineers or "coders" what to do.
johnfn 7 hours ago|||
I only write around 5% of the code I ship, maybe less. For some reason when I make this statement a lot of people sweep in to tell me I am an idiot or lying, but I really have no reason to lie (and I don't think I'm an idiot!). I have 10+ years of experience as an SWE, I work at a Series C startup in SF, and we do XXMM ARR. I do thoroughly audit all the code that AI writes, and often go through multiple iterations, so it's a bit of a more complex picture, but if you were to simply say "a developer is not writing the code", it would be an accurate statement.

Though I do think "advanced software team" is kind of an absurd phrase, and I don't think there is any correlation between how "advanced" the software you build is and how much you need AI. In fact, there's probably an anti-correlation: I think that I get such great use out of AI primarily because we don't need to write particularly difficult code, but we do need to write a lot of it. I spend a lot of time in React, which AI is very well-suited to.

EDIT: I'd love to hear from people who disagree with me or think I am off-base somehow about which particular part of my comment (or follow-up comment https://news.ycombinator.com/item?id=46222640) seems wrong. I'm particularly curious why when I say I use Rust and code faster everyone is fine with that, but saying that I use AI and code faster is an extremely contentious statement.

MontyCarloHall 7 hours ago|||
>I only write around 5% of the code I ship, maybe less.

>I do thoroughly audit all the code that AI writes, and often go through multiple iterations

Does this actually save you time versus writing most of the code yourself? In general, it's a lot harder to read and grok code than to write it [0, 1, 2, 3]. For me, one of the biggest skills for using AI to efficiently write code is a) chunking the task into increments that are both small enough for me to easily grok the AI-generated code and also aligned enough to the AI's training data for its output to be ~100% correct, b) correctly predicting ahead of time whether reviewing/correcting the output for each increment will take longer than just doing it myself, and c) ensuring that the overhead of a) and b) doesn't exceed just doing it myself.

[0] https://mattrickard.com/its-hard-to-read-code-than-write-it

[1] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

[2] https://trishagee.com/presentations/reading_code/

[3] https://idiallo.com/blog/writing-code-is-easy-reading-is-har...

johnfn 6 hours ago|||
Yes, I save an incredible amount of time. I suspect I’m likely 5-10x more productive, though it depends exactly what I’m working on. Most of the issues that you cite can be solved, though it requires you to rewire the programming part of your brain to work with this new paradigm.

To be honest, I don’t really have a problem with chunking my tasks. The reason I don’t is because I don’t really think about it that way. I care a lot more about chunks an AI could reasonably validate. Instead of thinking “what’s the biggest chunk I could reasonably ask AI to solve” I think “what’s the biggest piece I could ask an AI to do that I can write a script to easily validate once it’s done?” Allowing the AI to validate its own work means you never have to worry about chunking again. (OK, that's a slight hyperbole, but the validation is most of my concern, and a secondary concern is that I try not to let it go for more than 1000 lines.)

For instance, take the example of an AI rewriting an API call to support a new db library you are migrating to. In this case, it’s easy to write a test case for the AI. Just run a bunch of cURLs on the existing endpoint that exercise the existing behavior (surely you already have these because you’re working in a code base that’s well tested, right? right?!?), and then make a script that verifies that the result of those cURLs has not changed. Now, instruct the AI to ensure it runs that script and doesn’t stop until the results are character for character identical. That will almost always get you something working.
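
For the curious, such a script can be tiny. A minimal sketch (hypothetical endpoints and golden/ directory, not from a real project):

    // golden_check.go -- sketch of a "responses must not change" gate.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        endpoints := []string{"/api/users/1", "/api/orders?limit=10"} // hypothetical
        base := "http://localhost:8080"
        failed := 0

        for _, ep := range endpoints {
            resp, err := http.Get(base + ep)
            if err != nil {
                fmt.Println("FAIL", ep, err)
                failed++
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()

            // Golden files were captured by running the same cURLs against the old code.
            name := strings.NewReplacer("/", "_", "?", "_").Replace(ep)
            golden, err := os.ReadFile(filepath.Join("golden", name+".json"))
            if err != nil || string(golden) != string(body) {
                fmt.Println("FAIL", ep, "(response differs from golden file)")
                failed++
                continue
            }
            fmt.Println("OK  ", ep)
        }
        if failed > 0 {
            os.Exit(1) // non-zero exit so the agent knows it isn't done yet
        }
    }

The non-zero exit is the important part: it's what lets the AI loop until the responses really are character-for-character identical.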

Obviously the tactics change based on what you are working on. In frontend code, for example, I use a lot of Playwright. You get the idea.

As for code legibility, I tend to solve that by telling the AI to focus particularly on clean interfaces, and being OK with the internals of those interfaces being vibecoded and a little messy, so long as the external interface is crisp and well-tested. This is another very long discussion, and for the non-vibe-code-pilled (sorry), it probably sounds insane, and I feel it's easy to lose one's audience on such a polarizing topic, so I'll keep it brief. In short, one real key thing to understand about AI is that it makes the cost of writing unit tests and e2e tests drop significantly, and I find this (along with remaining disciplined and having crisp interfaces) to be an excellent tool in the fight against the increased code complexity that AI tools bring. So I deal with legibility by having a few really, really clean interfaces/APIs that are extremely readable, and then testing them like crazy.

EDIT

There is a dead comment that I can't respond to that claims that I am not a reliable narrator because I have no A/B test. Behold, though: I am the AI-hater's nightmare, because I do have a good A/B test! I have a website that sees a decent amount of traffic (https://chipscompo.com/). Over the last few years, I have tried a few times to modernize and redesign the website, but these attempts have always failed because the website is pretty big (~50k loc) and I haven't been able to fit it in a single week of PTO.

This Thanksgiving, I took another crack at it with Claude Code, and not only did I finish an entire redesign (basically touched every line of frontend code), but I also got in a bunch of other new features, too, like a forgot-password feature and a suite of moderation tools. I then IaC'd the whole thing with Terraform, something I only dreamed about doing before AI! Then I bumped React a few major versions, bumped TS about 10 years, etc, all with the help of AI. The new site is live and everyone seems to like it (well, they haven't left yet...).

If anything, this is actually an unfair comparison, because it was more work for the AI than it was for me when I tried a few years ago, because my dependencies became more and more out of date as the years went on! This was actually a pain for the AI, but I eventually managed to solve it.

no_wizard 1 hour ago|||
Use case mapping matters. I use AI tools at work (have for a few years now, first Copilot from GitHub, now I use Gemini and Claude tools primarily). When the use case maps well, it is great. You can typically assume anything with a large corpus of fairly standard problems will map well in a popular language. JavaScript, HTML, CSS, these have huge training datasets from open source alone.

The combination of which, deep training dataset + maps well to how AI "understands" code, it can be a real enabler. I've done it myself. All I've done with some projects is write tests, point Claude at the tests and ask it to write code till those tests pass, then audit said code, make adjustments as required, and ship.
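
Concretely, "point Claude at the tests" means the tests exist first. A hypothetical sketch (ApplyDiscount is invented; a correct body is included only so the snippet compiles on its own, but normally that's the part I'd leave for the agent):

    package pricing

    import "testing"

    // ApplyDiscount is the function I'd ask the agent to write; a working body is
    // included here purely so the sketch is self-contained.
    func ApplyDiscount(subtotalCents int, code string) int {
        if code == "SAVE10" {
            return subtotalCents * 90 / 100
        }
        return subtotalCents
    }

    func TestApplyDiscount(t *testing.T) {
        cases := []struct {
            name     string
            subtotal int
            code     string
            want     int
        }{
            {"no code", 10000, "", 10000},
            {"ten percent off", 10000, "SAVE10", 9000},
            {"unknown code is ignored", 10000, "BOGUS", 10000},
        }
        for _, c := range cases {
            c := c
            t.Run(c.name, func(t *testing.T) {
                if got := ApplyDiscount(c.subtotal, c.code); got != c.want {
                    t.Errorf("ApplyDiscount(%d, %q) = %d, want %d", c.subtotal, c.code, got, c.want)
                }
            })
        }
    }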

That has worked well and sped up development of straightforward (sometimes I'd argue trivial) situations.

Where it falls down is complex problem sets and major refactors that cross-cut multiple interdependent pieces of code. It's also less robust with less popular languages (we have a particular set of business logic in Rust due to its sensitive nature and need for speed, and it does not do a great job with that), along with a host of other areas where I have hit limitations with it.

Granted, I work in a fairly specialized way and deal with a lot of business logic / rules rather than boilerplate CRUD, but I have hit walls on things like massive refactors in large codebases (50K is small to me, for reference).

n8cpdx 6 hours ago||||
Did you do 5-10 years of work in the year after you adopted AI? If you started after AI came into existence 3 years ago (/s), you should have achieved 30 years of work output - a whole career of work.
johnfn 6 hours ago||
I think AI only "got good" around the release of Claude Code + Opus 4.0, which was around March of this year. And it's not like I sit down and code 8 hours a day 5 days a week. I put on my pants one leg at a time -- there's a lot of other inefficiencies in the process, like meetings, alignment, etc, etc.

But yes, I do think that the efficiency gain, purely in the domain of coding, is around 5x, which is why I was able to entirely redesign my website in a week. When working on personal projects I don't need to worry about stakeholders at all.

jimbokun 2 hours ago|||
Ah, I was going to say it’s impossible to get a 5x increase in productivity, because writing code takes up less than 20% of a developer’s time. But I can understand that kind of improvement on just the coding part.

The trick now is deciding what code to write quickly enough to keep Claude and friends busy.

XenophileJKO 31 minutes ago||
I will say, for example, now at work: if I see a broken window I have an AI fix it. This is a recent habit for me, so I can't say it will stick, but I'm fixing issues in many more adjacent code bases than I normally would.

It used to be "hey, I found an issue..", now it is "here is a PR to fix an issue I saw". The net effort to me is only slightly more. I usually have to identify the problem, and that is like 90% of fixing it.

Add to that the fact that now I can have an AI take a first pass at identifying the problem, with probably an 80%+ success rate.

Esophagus4 2 hours ago||||
I'm not sure why, but it seems like your comment really brought out the ire in a few commenters here to discredit your experience.

Is it ego? Defensiveness? AI anxiety? A need to be the HN contrarian against a highly visible technology innovation?

I don't think I understand... I haven't seen the opposite view (AI wastes a ton of time) get hammered like that.

At the very least, it certainly makes for an acidic comments section.

n8cpdx 2 hours ago||
It’s because people turn off their critical thinking and make outrageous claims.

That’s why when folks say that AI has made them 10x more productive, I ask if they did 10 years worth of work in the last year. If you cannot make that claim, you were lying when you said it made you 10x more productive. Or at least needed a big asterisk.

If AI makes you 10x more productive in a tiny portion of your job, then it did not make you 10x more productive.

Meanwhile, the people claiming 10x productivity are taken at face value by people who don’t know any better, and we end up in an insane hype cycle that has obvious externalities. Things like management telling people that they must use AI or else. Things like developer tooling making zero progress on anything that isn’t an AI feature for the last two years. Things like RAM becoming unaffordable because Silicon Valley thinks they are a step away from inventing god. And I haven’t scratched the surface.

Esophagus4 1 hour ago|||
> That’s why when folks say that AI has made them 10x more productive, I ask if they did 10 years worth of work in the last year.

What makes you think one year is the right timeframe? Yet you seem to be so wildly confident in the strength of what you think your question will reveal… in spite of the fact that the guy gave you an example.

It wasn’t that he didn’t provide it, it was that you didn’t want to hear it.

n8cpdx 44 minutes ago||
It’s a general question I ask of everyone who claims they are 10x more productive. Year/month/day/hour doesn’t matter. Did you do 10 days of work yesterday? 10 weeks of work last week?

It is actually a very forgiving metric over a year because it is measuring only your own productivity relative to your personal trend. That includes vacation time and sick time, so the year smooths over all the variation.

Maybe he did do 5 weeks of work in 1 week, and I’ll accept that (a much more modest claim than the usual 10-100x claimed multiplier).

johnfn 2 hours ago||||
But I really did do around 4 to 5 weeks of work in a single week on my personal site. At this point you just seem to be denying my own reality.
n8cpdx 34 minutes ago||
If you read my comments, you’ll see that I did no such thing. I asked if you did 5-10 years of work in the last year (or 5-10 weeks of work in the last week) and didn’t get a response until you accused me of denying your reality.

You’ll note the pattern of the claims getting narrower and narrower as people have to defend them and think critically about them (5-10x productivity -> 4-5x productivity -> 4-5x as much code written on a side project).

It’s not a personal attack, it is a corrective to the trend of claiming 5,10,100x improvements to developer productivity, which rarely if ever holds up to scrutiny.

rhetocj23 1 hour ago|||
[dead]
IceDane 2 hours ago||||
Your site has waterfalls and flashes of unstyled content. It loads slowly and the whole design is basically exactly what every AI-designed site looks like.

All of the work you described is essentially manual labor. It's not difficult work - just boring, sometimes error prone work that mostly requires you to do obvious things and then tackle errors as they pop up in very obvious ways. Great use case for AI, for sure. This and the fact that the end result is so poor isn't really selling your argument very well, except maybe in the sense that yeah, AI is great for dull work in the same way an excavator is great for digging ditches.

johnfn 1 hour ago||
> This and the fact that the end result is so poor isn't really selling your argument very well

If you ever find yourself at the point where you are insulting a guy's passion project in order to prove a point, perhaps take a deep breath and step back from the computer for a moment. And maybe you should look deep inside yourself, because you might have crossed the threshold to being a jerk.

Yes, my site has issues. You know what else it has? Users. Your comments about FOUC and waterfalls are correct, but they don't rank particularly high on what's important to people who use the site. I didn't instruct the AI to fix them, because I was busy fixing a bunch of real problems that my actual users cared about.

As for loading slowly -- it loads in 400ms on my machine.

IceDane 1 hour ago||
Look, buddy. You propped yourself up as an Experienced Dev doing cool stuff at Profitable Startup and don't understand Advanced Programming, and your entire argument is that you can keep doing the same sort of high quality(FSOV) work you've been doing the past 10 years with AI, just a lot faster.

I'm just calling a spade a spade. If you didn't want people to comment on your side project given your arguments and the topic of discussion, you should just not have posted it in a public forum or have done better work.

johnfn 55 minutes ago|||
If I were to summarize the intent of my comments in a single sentence, it would be something like "I have been an engineer for a while, and I have been able to do fun stuff with AI quickly." You somehow managed to respond to that by disparaging me as an engineer ("Experienced Dev") and saying the fun stuff I did is low quality ("should have [...] done better work"). It's so far away from the point I was making, and so wildly negative - when, again, my only intent was to say that I was doing fun AI stuff - that I can't imagine where it originated from. The fact that it's about a passion project is really the cherry on top. Do you tell your kids that their artwork is awful as well?

I can understand to some degree it would be chafing that I described myself as working at a SF Series C startup etc. The only intent there was to illustrate that I wasn't someone who started coding 2 weeks ago and had my mind blown by typing "GPT build me a calculator" into Claude. No intent at all of calling myself a mega-genius, which I don't really think I am. Just someone who likes doing fun stuff with AI.

And, BTW, if you reread my initial comment, you will realize you misread part of it. I said that "Advanced Programming" is the exact opposite of the type of work I am doing.

samdoesnothing 1 hour ago||||
Is your redesign live for chipscompo? Because if so, and absolutely no offence meant here, the UI looks like it was built by an intern. And fair enough, you sound like a backend guy so you can't expect perfection for frontend work. My experience with AI is that it's great at producing intern-level artifacts very quickly and that has its uses, but that certainly doesn't replace 95% of software development.

And if it's producing an intern-level artifact for your frontend, what's to say it's not producing similar quality code for everything else? Especially considering frontend is often derided as being easier than other fields of software.

johnfn 51 minutes ago||
Yes, it is live. I never claimed to be a god-level designer - but you should have seen what it looked like before. :)
dingnuts 6 hours ago||||
> Yes, I save an incredible amount of time. I suspect I’m likely 5-10x more productive

The METR paper demonstrated that you are not a reliable narrator for this. Have you participated in a study where this was measured, or are you just going off intuition? Because METR demonstrated beyond doubt that your intuition is a liar in this case.

If you're not taking measurements it is more likely that you are falling victim to a number of psychological effects (sunk cost, Gell-Manns, slot machine effect) than it is that your productivity has really improved.

Have you received a 5-10x pay increase? If your productivity is now 10x mine (I don't use these tools at work because they are a waste of time in my experience) then why aren't you compensated as such? And if it's because of pointy-haired bosses, you should be able to start a new company with your 10x productivity to shut them and me up.

Provide links to your evidence in the replies

Esophagus4 2 hours ago|||
Jeez... this seems like another condescending HN comment that uses "source?" to discredit and demean rather than to seek genuine insight.

The commenter told you they suspect they save time, it seems like taking their experience at face value is reasonable here. Or, at least I have no reason to jump down their throat... the same way I don't jump down your throat when you say, "these tools are a waste of time in my experience." I assume that you're smart enough to have tested them out thoroughly, and I give you the benefit of the doubt.

If you want to bring up METR to show that they might be falling into the same trap, that's fine, but you can do that in a much less caustic way.

But by the way, METR also used Cursor Pro and Claude 3.5/3.7 Sonnet. Cursor had smaller context windows than today's toys and 3.7 Sonnet is no longer state of the art, so I'm not convinced the paper's conclusions are still as valid today. The latest Codex models are exponential leaps ahead of what METR tested, by even their own research.[1]

[1]https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

mekoka 1 hour ago||||
As they said, it depends on the task, so I wouldn't generalize, but based on the examples they gave, it tracks. Even when you already know what needs done, some undertakings involve a lot of yak shaving. I think transitioning to new tools that do the same as the old but with a different DSL (or newer versions of existing tools) qualifies.

Imagine that you've built an app with libraries A, B, and C and conceptually understand all that's involved. But now you're required to move everything to X, Y, and Z. There won't be anything fundamentally new or revolutionary to learn, but you'll have to sit and read those docs, potentially for hours (cost of task switching and all). Getting the AI to execute the changes gets you to skip much of the tedium. And even though you still don't really know much about the new libs, you'll get the gist of most of the produced code. You can piecemeal the docs to review the code at sensitive boundaries. And for the rest, you'll paint inside the frames as you normally would if you were joining a new project.

Even as a skeptic of the general AI productivity narrative, I can see how that could squeeze a week's worth of "ever postponed" tasks inside a day.

skydhash 1 hour ago||
> but you'll have to sit and read those docs, potentially for hours (cost of task switching and all).

That is one of the assumptions that pro-AI people always bring up. You don't read the new docs to learn the domain. As you've said, you've already learned it. You read them for the gotchas. Because most (good) libraries will provide examples that you can just copy-paste and be done with it. But we all know that things can vary between implementations.

> Even as a skeptic of the general AI productivity narrative, I can see how that could squeeze a week's worth of "ever postponed" tasks inside a day.

You could squeeze a week inside a day the normal way too. Just YOLO it, by copy-pasting from GitHub, StackOverflow and the whole internet.

johnfn 3 hours ago|||
> Have you received a 5-10x pay increase?

Does Amazon pay everyone who receives "Not meeting expectations" in their perf review 0 dollars? Did Meta pay John Carmack (or insert your favorite engineer here) 100x that of a normal engineer? Why do you think that would be?

jimbokun 2 hours ago|||
I wouldn’t be surprised to find out Carmack was paid 100x more than the average engineer once equity from the acquisition of his company is taken into account.

Does anyone know how much he made altogether from Meta?

keeda 12 minutes ago||
The unfortunate reality of engineering is that we don't get paid proportional to the value we create, even the superstars. That's how tech companies make so much money, after all.

If you're climbing the exec ladder your pay will scale a little bit better, but again, not 100x or even 10x. Even the current AI researcher craze is for an extremely small number of people.

For some data points, check out levels.fyi and compare the ratio of TCs for a mid-level engineer/manager versus the topmost level (Distinguished SWE, VP etc.) for any given company.

3rodents 2 hours ago|||
I disagree with the parent’s premise (that productivity has any relationship to salary) but Facebook, Amazon etc do pay these famous genius brilliant engineers orders of magnitude more than the faceless engineers toiling away in the code mines. See: the 100 million dollar salaries for famous AI names. And that’s why I disagree with the premise, because these people are not being paid based on their “productivity”.
overfeed 3 hours ago|||
> I am the AI-hater's nightmare...

I-know-what-kind-of-man-you-are.jpeg

You come off as a zealot by branding people who disagree as "haters".

Edit: AI excels at following examples, or simple, testable tasks that require persistence, which is intern-level work. Doing this narrow band of work quickly doesn't result in 10x productivity.

I'm yet to find a single person who has shown evidence to go through 10x more tasks in a sprint[1], or match the output of the rest of their 6-10-member team by themselves.

1. Even for junior level work

johnfn 3 hours ago||
Did you see the comment that I was responding to? It said "your intuition is a liar" and said they would only believe me if I was compensated 10x a normal engineer. If that's not the comment of a hater, I'm not sure what qualifies.

> I'm yet to find a single person who has shown evidence to go through 10x more tasks in a sprint[1], or match the output of the rest of their 6-10-member team by themselves.

If my website, a real website with real users, doesn't qualify, then I'm not sure what would. A single person with evidence is right in front of you, but you seem to be denying the evidence of your own eyes.

lowbloodsugar 6 hours ago|||
a) is exactly what AI is good at. b) is a waste of time: why would you waste your precious time trying to predict a result when you can just get the result and see?

You are stuck in a very low local maximum.

You are me six months ago. You don’t know how it works, so you cannot yet reason about it. Unlike me, you’ve decided “all these other people who say it’s effective are making it up”. Instead ask: how does it work? What am I missing?

foobarian 3 hours ago||||
I'm on track to finish my current gig having written negative lines of code. It's amazing how much legacy garbage long running codebases can accumulate, and it's equally amazing how much it can slow down development (and, conversely, how much faster development can become if legacy functionality is deleted).
skydhash 1 hour ago||
Pretty much the same. And it's not even about improving the code (which I did), but mostly about removing dead code and duplicated code. Or worse, half-finished redesigns of some subsystem which led to very bizarre code.

When people say coding is slow, that usually means they're working on some atrocious code (often of their own making), while using none of the tools for fast feedback (tests, linters, ...).

3rodents 7 hours ago||||
I regularly try to use various AI tools and I can imagine it is very easy for it to produce 95% of your code. I can also imagine you have 90% more code than you would have had you written it yourself. That’s not necessarily a bad thing, code is a means to an end, and if your business is happy with the outcomes, great, but I’m not sure percentages of code are particularly meaningful.

Every time I try to use AI it produces endless code that I would never have written. I’ve tried updating my instructions to use established dependencies when possible but it seems completely averse.

An argument could be made that a million lines isn’t a problem now that these machines can consume and keep all the context in memory — maybe machines producing concise code is asking for faster horses.

qsort 6 hours ago|||
Everyone is doing this extreme pearl clutching around the specific wording. Yeah, it's not 100% accurate for many reasons, but the broader point was about employment effects, it doesn't need to completely replace every single developer to have a sizable impact. Sure, it's not there yet and it's not particularly close, but can you be certain that it will never be there?

Error bars, folks, use them.

block_dagger 7 hours ago|||
I'm on a team like this currently. It's great when everyone knows how to use the tools and spot/kill slop and bad context. Generally speaking, good code gets merged and MUCH more quickly than in the past.
dboreham 2 hours ago|||
> What a wild and speculative claim. Is there any source for this information?

Not sure it's a wild speculative claim. Claiming someone had achieved FTL travel would fall into that category. I'd call it more along the lines of exaggerated.

I'll make the assumption that what I do is "advanced" (not React todo apps: Rust, Golang, distributed systems, network protocols...) and if so then I think: it's pretty much accurate.

That said, this is only over the past few months. For the first few years of LLM-dom I spent my time learning how they worked and thinking about the implications for our understanding of how human thinking works. I didn't use them except to experiment. I thought my colleagues who were talking in 2022 about how they had ChatGPT write their tests were out of their tiny minds. I heard stories about how the LLM hallucinated API calls that didn't exist. Then I spent a couple of years in a place with no easy code and nobody in my sphere using LLMs. But then around six months ago I began working with people who were using LLMs (mostly Claude) to write quite advanced code so I did a "wait what??..." about-face and began trying to use it myself. What I've found so far is that it's quite a bit better than I am at various unexpected kinds of tasks (finding bugs, analyzing large bodies of code then writing documentation on how it works, looking for security vulnerabilities in code) or at least it's much faster. I also found that there's a whole art to "LLM Whispering" -- how to talk to it to get it to do what you want. Much like with humans, but it doesn't try to cut corners nor use oddball tech that it wants on its resume.

Anyway, YMMV, but I'd say the statement is not entirely false, and surely will be entirely true within a few years.

dist-epoch 6 hours ago|||
source: me

I wrote 4000 lines of Rust code with Codex - a high throughput websocket data collector.

Spoiler: I do not know Rust at all. I discussed possible architectures with GPT/Gemini/Grok (sync/async, data flow, storage options, ...), refined a design and then it was all implemented with agents.

Works perfectly, no bugs.

mjr00 3 hours ago|||
Since when is a 4000 line of code project "advanced software"? That's about the scope of a sophomore year university CompSci project, something where there's already a broad consensus AI does quite well.
keeda 32 minutes ago||
I think you're parsing the original claim incorrectly. "Advanced software teams" does not mean teams who write advanced software, these are software teams that are advanced :-)
sefrost 6 hours ago|||
I would be interested in a web series (podcast or video) where people who do not know a language create something with AI. Then somebody with experience building in that technology reviews the code and gives feedback on it.

I am personally progressing to a point where I wonder if it even matters what the code looks like if it passes functional and unit tests. Do patterns matter if humans are not going to write and edit the code? Maybe sometimes. Maybe not other times.

rprend 7 hours ago||
AI writes most of the code for most new YC companies, as of this year.
nickorlow 7 hours ago|||
I think this is less significant b/c

1. Most of these companies are AI companies & would want to say that to promote whatever tool they're building

2. Selection b/c YC is looking to fund companies embracing AI

3. Building a greenfield project with AI to the quality of what you need to be a YC-backed company isn't particularly "world-class"

rprend 6 hours ago||
They’re not lying when they say they have AI write their code, so it’s not just promotion. They will thrive or die from this thesis. If present YC portfolio companies underperform the market in 5-10 years, that’s a strong signal for AI skeptics. If they overperform, that’s a strong signal that AI skeptics were wrong.

3. You are absolutely right. New startups have greenfield projects that are in-distribution for AI. This gives them faster iteration speed. This means new companies have a structural advantage over older companies, and I expect them to grow faster than tech startups that don’t do this.

Plenty of legacy codebases will stick around, for the same reasons they always do: once you’ve solved a problem, the worst thing you can do is rewrite your solution to a new architecture with a better devex. My prediction: if you want to keep the code writing and office culture of the 2010s, get a job internally at cloud computing companies (AWS, GCP, etc). High reliability systems have less to gain from iteration speed. That’s why airlines and banks maintain their mainframes.

tapoxi 6 hours ago||||
So they don't own the copyright to most of their code? What's the value then?
esafak 6 hours ago||
They do. Where did you get this? All the providers have clauses like this:

"4.1. Generally. Customer and Customer’s End Users may provide Input and receive Output. As between Customer and OpenAI, to the extent permitted by applicable law, Customer: (a) retains all ownership rights in Input; and (b) owns all Output. OpenAI hereby assigns to Customer all OpenAI’s right, title, and interest, if any, in and to Output."

https://openai.com/policies/services-agreement/

shakna 6 hours ago|||
The outputs of AI are most likely in the public domain, since the output of an automated process is public domain, and the companies claim fair use when scraping, making the input unencumbered, too.

It wouldn't be OpenAI holding copyright - it would be no one holding copyright.

macrolime 4 hours ago|||
So you're saying machine code is public domain if it's compiled from C? If not, why would AI generated code be any different?
fhd2 3 hours ago|||
That would be considered a derivative work of the C code, therefore copyright protected, I believe.

Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If you're prodding an LLM to give you a variety of results, probably not.

But significantly editing LLM generated code _should_ make it your copyright again, I believe. Hard to say when this hasn't really been tested in the courts yet, to my knowledge.

The most interesting question, to me, is who cares? If we reach a point where highly valuable software is largely vibe coded, what do I get out of a lack of copyright protection? I could likely write down the behaviour of the system and generate a fairly similar one. And how would I even be able to tell, without insider knowledge, what percentage of a code base is generated?

There are some interesting abuses of copyright law that would become more vulnerable. I was once involved in a case where the court decided that hiding a website's "disable your ad blocker or leave" popup was actually a case of "circumventing effective copyright protection". In this day and age, they might have had to produce proof that it was, indeed, copyright protected.

macrolime 3 hours ago||
"Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If that's not the case, probably not." Yes and no. It's possible in theory, but in practice it requires control over the seed, which you typically don't have in the AI coding tools. At least if you're using local models, you can control the seed and have it be deterministic.

That said, you don't necessarily always have 100% deterministic build when compiling code either.

shakna 1 hour ago||||
Derivatives inherit.

Public domain in, public domain out.

Copyright'd in, copyright out. Your compiled code is subject to your copyright.

You need "significant" changes to PD to make it yours again. Because LLMs are predicated on massive public data use, they require the output to PD. Otherwise you'd be violating the copyright of the learning data - hundreds of thousands of individuals.

tapoxi 4 hours ago|||
Per the Monkey Selfie case, setting the stage for an automated process is not enough to declare copyright over a work.
bcrosby95 5 hours ago||||
Courts have already leaned this way too, but who knows what'll happen when companies with large legal funds enter the arena.
robocat 1 hour ago||||
What about patents - if you didn't use cleanroom then you have no defence?

Patent trolls will extort you: the trolls will be using AI models to find "infringing" software, and then they'll strike.

¡There's no way AI can be cleanroom!

brazukadev 7 hours ago|||
That explains the low quality of all the Launch HNs this year.
block_dagger 7 hours ago||
Stats/figures to backup the low quality claim?
esseph 6 hours ago||
If you have them, post them.
f154hfds 7 hours ago||
The postscript was pretty sobering. It's kind of the first time in my life that I've been actively hoping for a technology to outright not deliver on its promise. This is a pretty depressing place to be, because most emerging technologies provide us with exciting new possibilities, whereas this technology seems exciting only for management stressed about payroll.

It's true that the technology currently works as an excellent information gathering tool (which I am happy to be excited about), but that doesn't seem to be the promise at this point. The promise is about replacing human creativity with artificial creativity, which... is certainly new and unwelcome.

stego-tech 12 minutes ago||
I'm right there with you, and it's been my core gripe since ChatGPT burst onto the stage. Believe it or not, my environmental concerns came about a year later, once we had data on how datacenters were being built and their resource consumption rates; I had no idea what things had very suddenly and violently exploded into, and that alone gave me serious pause about where things are going.

In my heart, I firmly believe in the ability of technology to uplift and improve humanity - and have spent much of my career grappling with the distressing reality that it also enables a handful of wealthy people to have near-total control of society in the process. AI promises a very hostile, very depressing, very polarized world for everyone but those pulling the levers, and I wish more people evaluated technology beyond the mere realm of Computer Science or armchair economics. I want more people to sit down, to understand its present harms, its potential future harms, and the billions of people whose lives it will profoundly and negatively impact under current economic systems.

It's equal parts sobering and depressing once you shelve personal excitement or optimism and approach it objectively. Regardless of its potential as a tool, regardless of the benefit it might bring to you, your work day, your productivity, your output, your ROI, I desperately wish more people would ask one simple question:

Is all of that worth the harm I'm inflicting on others?

stack_framer 6 hours ago|||
> It's kind of the first time in my life that I've been actively hoping for a technology to out right not deliver on its promise.

Same here, and I think it's because I feel like a craftsman. I thoroughly enjoy the process of thinking deeply about what I will build, breaking down the work into related chunks, and of course writing the code itself. It's like magic when it all comes together. Sometimes I can't even believe I get to do it!

I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code.

I never knew there was an entire subclass of people in my field who don't want to write code.

I want to write code.

rester324 24 minutes ago|||
I love to write code too. But what usually happens is that I run the gauntlet of proving what brilliant code I can write in a job interview, and then later, conversely, I get paid for listening to really dumb conversations among our stakeholders and sitting in project planning meetings, etc., just so that finally everybody can harass me to implement something that a million programmers have implemented before me a million times, at which point the only metric that matters to my fellow developers, my managers, or the stakeholders is the speed of churning the code out, quality or design be damned. So for this reason, in most cases in my work I use LLMs.

How any of that translates, for an investment portfolio manager, into LLMs writing "world class code" is a mystery to me.

zparky 6 hours ago||||
It's been blowing my mind reading HN the past year or so and seeing so many comments from programmers that are excited to not have to write code. It's depressing.
IanCal 2 hours ago|||
There are three takes that I think are not depressing:

* Being excited to be able to write the pieces of code they want, and not others. When you sit down to write code, you do not do everything from scratch, you lean on libraries, compilers, etc. Take the most annoying boilerplate bit of code you have to write now - would you be happy if a new language/framework popped up that eliminated it?

* Being excited to be able to solve more problems because the code is at times a means to an end. I don't find writing CSS particularly fun but I threw together a tool for making checklists for my kids in very little time using LLMs and it handled all of the CSS for printing vs on the screen. I'm interested in solving an optimisation issue with testing right now, but not that interested in writing code to analyse test case perf changes, so the latter I got written for me in very little time and it's great. It wasn't really a choice of me or machine, I do not really have the time to focus on those tasks.

* Being excited that others can get the outcomes I've been able to get for at least some problems, without having to learn how to code.

As is tradition, to torture a car analogy, I could be excited for a car that autonomously drives me to the shops despite loving racing rally cars.

wakawaka28 54 minutes ago||
Those are all good outcomes, up to a point. But if this stuff works TOO well, most or maybe all of us will have to start looking at other career options. Whatever autonomy you think you have in deciding what the AI does, that can ultimately be trained into it as well, and it will be, the more people use it.

I personally don't like it when others who don't know how to code are able to get results using AI. I spent many years of my life and a small fortune learning scarce skills that everyone swore would be the last to ever be automated. Now, in a cruel twist of fate, those skills are being automated and there is seemingly no worthwhile job that can't be automated given enough investment. I am hopeful because the AI still has a long way to go, but even with the improvements it currently has, it might ultimately destroy the tech industry. I'm hoping that Say's Law proves true in this case, but even before the AI I was skeptical that we would find work for all the people trying to get into the software industry.

zahlman 2 hours ago||||
I suspect, rather strongly, that what really specifically wears programmers down is boilerplate.

AI is addressing that problem extremely well, but by putting up with it rather than actually solving it.

I don't want the boilerplate to be necessary in the first place.

projektfu 42 minutes ago|||
Or, for me, yak shaving. I start a project with enthusiasm and then 8 hours later I'm debugging an nginx config file or something rather than working on the core project. AI gets a lot of that out of the way if you let it, and you can at least let it grind on that stuff while you think about other things.
zahlman 38 minutes ago||
For me, the yak shaving is the part where I get the next project idea...
DevDesmond 2 hours ago||||
Perhaps consider that coding by prompting is just another layer of abstraction on top of coding.

In my mind, writing the prompt that generates the code is somewhat analogous to writing the code that generates the assembly (albeit more stochastically, the way psychology research might be analogous to biochemistry research).

Different experts are still required at different layers of abstraction, though. I don't find it depressing when people show a preference for working at different levels of complexity or tooling, nor when they show excitement about the emergence of new tools that let their creativity build, automate, and research. I think scorn in any direction is vapid.

layer8 1 hour ago||
One important reason people like to write code is that it has well-defined semantics, allowing one to reason about it and predict its outcome with high precision. Likewise for changes that one makes to code. LLM prompting is the diametrical opposite of that.
seanmcdirmid 2 hours ago||||
It is fun. It takes some skill to organize a pipeline to generate code that would be tedious to write and maintain otherwise. You are still writing stuff to instruct the computer, but now you have something taking natural language instructions and generating code and code test assets.

There might have been people who were happy to write assembly and got bummed out by compilers. This AI stuff just feels like a new way to write code.

xnx 2 hours ago|||
Some carpenters like to make cabinets. Some just like to hammer nails.
doug_durham 5 hours ago||||
Writing code is my passion, and like you I'm amazed I get paid to do it. That said, in any new project there is a large swath of code that needs to be written that I've written many times before. I'm happy to let the LLM write the low-value code so I can work on the interesting parts. Examples of this kind of code are argument parsers and glue for talking to REST interfaces. I add no value there.
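
A minimal sketch of the kind of code I mean, purely for illustration (the flags and endpoint here are hypothetical, not from any real project):

    # Hypothetical illustration of low-value boilerplate an LLM can churn out:
    # a small argparse CLI for pushing records to a REST endpoint.
    import argparse

    def parse_args(argv=None):
        parser = argparse.ArgumentParser(
            description="Sync records to a REST endpoint")
        parser.add_argument("--endpoint", required=True,
                            help="Base URL of the REST API")
        parser.add_argument("--batch-size", type=int, default=100,
                            help="Records per request")
        parser.add_argument("--dry-run", action="store_true",
                            help="Print requests instead of sending them")
        return parser.parse_args(argv)

    if __name__ == "__main__":
        args = parse_args()
        print(args)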
averageRoyalty 5 hours ago||||
So write code.

Maybe post-Renaissance many artists no longer had patrons, but nothing was stopping them from painting.

If your industry truly is going in the direction where there's no paid work for you to code (which is unlikely in my opinion), nobody is stopping you. It's easier than ever; you have decades of personal computing at your fingertips.

Most people with a thing they love do it as a hobby, not a job. Maybe you've had it good for a long time?

tjr 5 hours ago||
From the GNU Manifesto:

I could answer that nobody is forced to be a programmer. Most of us cannot manage to get any money for standing on the street and making faces. But we are not, as a result, condemned to spend our lives standing on the street making faces, and starving. We do something else.

https://www.gnu.org/gnu/manifesto.en.html

marcosdumay 6 hours ago||||
I'm quite ok with only writing code in my personal time. In fact, if I could solve the problems there faster, I'd be delighted.

Instead, I've reacted to the article from the opposite direction. All those grand claims about stuff this tech doesn't do and can't do. All that trying to validate the investment as rational when it's absolutely obvious it's at least 2 orders of magnitude larger than any arguably rational value.

georgeecollins 5 hours ago|||
I also love to code, though it's not what people pay me to do anymore.

You should never hope for a technology to not deliver on its promise. Sooner or later it usually does. The question is, does it happen in two years or a hundred years? My motto: don't predict, prepare.

gspr 3 hours ago||
> You should never hope for a technology to not deliver on its promise. Sooner or later it usually does.

Really? Are you sure there isn't a lot of confirmation bias in this? Do you really have a good handle on 100-year-old tech hypes that didn't deliver? All I can think of is "flying everything".

mrdependable 46 minutes ago|||
What I don't understand is, will every company really want to be beholden to some AI provider? If they get rid of the workers, all of a sudden they are on the losing end of the bargaining table. They have incredible leverage as things stand.
Night_Thastus 3 hours ago|||
Don't worry that much about 'AI' specifically. LLMs are an impressive piece of technology, but at the end of the day they're just language predictors - and bad ones a lot of the time. They can reassemble and remix what's already been written but with no understanding of it.

It can be an accelerator: it gets extremely common boilerplate text work out of the way. But it can't replace any job that requires a functioning brain, since LLMs do not have one, and never will.

But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else.

DevDesmond 1 hour ago||
Text is an LLM's input and output, but under the hood the transformer network is capable of far more than mere reassembly and remixing of text. Transformers can approximate Turing completeness as their size scales, and they can encode entire algorithms in their weights. So I'd argue they can do far more than reassemble and remix. These aren't just Markov models anymore.

(I'd also argue that "understanding" and "functional brain" are unfalsifiable comparisons. What exactly distinguishes a functional brain from a Turing machine? Chess once required a functional brain to play, but has now been surpassed by computation. Saying "jobs that require a human brain" is tautological without any further distinction.)

Of course, LLMs are definitely missing plenty of brain skills like working in continuous time, with persistent state, with agency, in physical space, etc. But to say that an LLM "never will" is either semantic (you might call it something other than an LLM when next-generation capabilities are integrated), tautological (once it can do a human job, it's no longer a job that requires a human), or anthropocentric hubris.

That said, who knows what the time scale looks like for realizing such improvements (decades, centuries, millennia).

asdff 3 hours ago|||
I think it just reflects on the sort of businesses that these companies are vs others. Of course we worry about this in the context of companies that dehumanize us, reduce us to line item costs and seek to eliminate us.

Now imagine a different sort of company. A little shop where the owner's first priority is actually to create good jobs for their employees that afford a high quality life. A shop like that needn't worry about AI.

It is too bad that we put so much stock as a society in businesses operating in this dehumanizing capacity instead of ones that are much more like a family unit trying to provide for each other.

0manrho 2 hours ago|||
Regarding that PS:

> This strikes me as paradoxical given my sense that one of AI’s main impacts will be to increase productivity and thus eliminate jobs.

The allegation that an increase in productivity will reduce jobs has been proven false by history over and over again; the pattern is so well known it has a name, the "Jevons paradox" or "Jevons effect" [0].

> In economics, the Jevons paradox (sometimes Jevons effect) occurs when technological advancements make a resource more efficient to use [...] results in overall demand increasing, causing total resource consumption to rise.

The "increase in productivity" does not inherently result in less jobs, that's a false equivalence. It's likely just as false as it was in 1915 with the the assembly line and the Model T as it is in 2025 with AI and ChatGPT. This notion persists because as we go through inflection points due to something new changing up market dynamics, there is often a GROSS loss (as in economics) of jobs that often precipitates a NET gain overall as the market adapts, but that's not much comfort to people that lost or are worried about losing their jobs due to that inflection point changing the market.

The two important questions in that context for individuals in the job market during those inflection points (like today) are: "How difficult is it to adapt (to either not lose a job, or to benefit from or be a part of that net gain)?" and "Should you adapt?" After all, the skillsets that the market demands and the skillsets it supplies are not objectively quantifiable things; the presence of speculative markets is proof that this is subjective, not objective. Anyone who's ever been involved in the hiring process knows just how subjective this is. Which leads me to:

> the promise is about replacing human creativity with artificial creativity which.. is certainly new and unwelcome.

I disagree that that's what the promise is about. That IS happening, I don't disagree there, but it's not the promise that corporate is so hyped about. If we're being honest and not trying to blow smoke up people's asses to artificially inflate "value," AI is fundamentally about being more OBJECTIVE than SUBJECTIVE with regard to the costs and resources of labor and its outputs. Anyone who knows what OKRs are and has been subject to a "performance review" at a self-professed "data-driven company" knows how much modern corporate America, especially the tech market, loves its "quantifiables." It's less about how much better AI can allegedly do something than about the promise of how much "better" it can be quantified versus human labor. As long as AI has at least SOME proven utility (which it does), this promise of quantifiables, combined with its other inherent potential benefits (doesn't need time off, doesn't sleep, doesn't need retirement or health benefits, no overtime pay, no regulatory limits on hours worked, no "minimum wage"), means that as long as the monied interests perceive it as continuing to improve, they can dismiss its inefficiencies or ineffectiveness in X or Y with the promise of its potential to eventually overcome them.

That's the fundamental reason people are so concerned about AI replacing humans. Especially when you consider that one of the things AI excels at is quickly delivering an answer with confidence (people are impressed by speed and are suckers for confidence), and another big strength is its ability to deal with repetitive minutiae in known, solved problem spaces (a mainstay of many office jobs). It can also bullshit with the best of them, fluff your ego as much as you want (and even when you don't), and it almost never says "No" or "You're wrong" unless you ask it to.

In other words, it excels at performative and repetitive bullshit and at blowing smoke up your boss's ass, and it empowers them to do the same for their boss further up the chain, all while never once ruffling HR's feathers.

Again, it has other, much more practical and pragmatic utility too; it's not JUST a bullshit oracle, but it IS a good bullshit oracle if you want it to be.

0: https://en.wikipedia.org/wiki/Jevons_paradox

Joel_Mckay 6 hours ago||
LLM slop doesn't have aspirations at all; it's just clickbait nonsense.

https://www.youtube.com/watch?v=_zfN9wnPvU0

Drives people insane:

https://www.youtube.com/watch?v=yftBiNu0ZNU

And LLMs are economically and technologically unsustainable:

https://www.youtube.com/watch?v=t-8TDOFqkQA

These have already proven it will be unconstrained if AGI ever emerges.

https://www.youtube.com/watch?v=Xx4Tpsk_fnM

The LLM bubble will pass, as it is already losing money with every new user. =3

artur44 7 hours ago||
A lot of the debate here swings between extremes. Claims like "AI writes most of the code now" are obviously exaggerated, especially coming from a nontechnical author, but acting like any use of AI is a red flag is just as unrealistic. Early-stage teams do lean on LLMs for scaffolding, tests, and boilerplate, but the hard engineering work is still human. Is there a bubble? Sure, valuations look frothy. But as in the dot-com era, a correction doesn't invalidate the underlying shift; it just clears out the noise. The hype is inflated; the technology is real.
Daishiman 2 hours ago|
> Early stage teams do lean on LLMs for scaffolding, tests and boilerplate, but the hard engineering work is still human.

I no longer believe this. A friend of mine just did a stint at a startup doing fairly sophisticated finance-related coding, and LLMs allowed them to bootstrap a lot of new code, get it up and running on scalable infra with Terraform, onboard new clients extremely quickly, and write docs for them based on specs and plans elaborated by the LLMs.

This last week I extended my company's development tooling by adding a new service in a k8s cluster with a bunch of extra services, shared variables and configmaps, and new helm charts that did exactly what I needed after asking nicely a couple of times. I have zero knowledge of k8s, helm or configmaps.

xdc0 40 minutes ago|||
If you are in charge of that tooling, how do you ensure the correctness of the work? Or is it that, at this point, the responsibility moves one level higher, where implementation details aren't important or relevant at all and all that matters is that it behaves as described?
biophysboy 44 minutes ago|||
It depends on the task though, right? I promise I'm not in denial; I use these things all the time. Sometimes it works immediately; sometimes it doesn't. I have no way of predicting when it will or won't.
nadermx 1 hour ago||
I am shocked at the discourse over this. I'm either ahead of the curve or behind it, but it's undeniable that AI can and does write most of the code. These aren't trivial: if you spend some time and dig deep into simple-appearing web apps like https://microphonetest.com or https://internetspeed.my, you'd be amazed at how fast they went from MVP to full-featured. It's trivial to think anyone could pull off something like that in hours.
rester324 1 hour ago||
Internetspeed is a 3-year-old app. So what exactly are you talking about?
nadermx 1 hour ago||
This is what you remember: https://web.archive.org/web/20231106214450/https://www.inter.... What you see now, I did in an afternoon with AI; it's monumental. No way I could have done that in that time. At all.
no_wizard 1 hour ago|||
Looking at both of these, I'm struggling to see how AI exponentially increased the productivity or quality of either example. And since I don't see open-source code anywhere, I can't get a good gauge of quality either.

I've built tools like this on the web in the past. They were never more than a weekend's worth of work to begin with.

I am looking for exponential increases, with quality to back it up.

nadermx 1 hour ago||
Tools like this in the past? Open source isn't even necessary to prove the point. You want to see an exponential increase? Closing half of an open source project's year-long pending bugs in a span of minutes: https://github.com/nadermx/backgroundremover/commits?author=...
irishcoffee 1 hour ago||
I feel like comments like this don't consider non-webdev software engineering.
wfurney 3 minutes ago||
So is it a bubble or not?
wfurney 1 minute ago||
It sounds like Sam Altman is saying the bubble popping is AI's "big bang".
rglover 7 hours ago||
I've enjoyed Howard Marks's writing and thinking in the past, but this is clearly a person who thinks they understand the topic but doesn't have the slightest clue. Someone trying to be relevant and engaged before really thinking about what is fact vs. fiction.
cal_dent 23 minutes ago|
He clearly states he doesn't understand the topic.

But you don't need to understand to explore the ramifications which is what he's done here and it's an insightful & fairly even-handed take on it.

It does feel like AI chat here gets bogged down in "it's not that great, it's overhyped, etc." without trying to actually engage with it properly. Even if it's crap, if it eliminates 5-10% of companies' labour costs, that's a huge deal, and the second-order effects on the economy and society will be profound. And from where I'm standing, doing that is pretty possible without AI even being that good.

travisgriggs 1 hour ago||
What if...

there's an AI agent/bot someone wrote that has the prompt:

> Watch HN threads for sentiments of "AI Can't Do It". When detected, generate short "it's working marvelously for me actually" responses.

Probably not, but it's a fun(ny) imagination game.

Sprotch 7 hours ago||
He thinks "AI" "may be capable of taking over cognition", which shows he doesn't understand how LLM work...
ozten 6 hours ago|
Why is AI limited to just a raw LLM? Scaffolding, RL, multi-modal... there are so many techniques that can be applied. METR has shown that AI's time horizon for staying on task is doubling every 7 months or less.

https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-...
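
A back-of-envelope projection of what that doubling rate implies (the 1-hour starting horizon is an assumption for illustration; only the 7-month doubling period comes from the METR claim above):

    # Back-of-envelope projection of the doubling claim above.
    # Assumption for illustration: the current horizon is ~1 hour of
    # continuous work; only the 7-month doubling period is from METR.
    start_horizon_hours = 1.0
    doubling_period_months = 7

    for months in range(0, 61, 12):  # project 0 to 5 years out
        horizon = start_horizon_hours * 2 ** (months / doubling_period_months)
        print(f"+{months:2d} months: ~{horizon:.0f} hour task horizon")
    # Under these assumptions the horizon grows ~3.3x per year and reaches
    # roughly 380 hours (months of work) after 5 years.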

Night_Thastus 2 hours ago|||
Because LLMs are just about all that actually exists as a product, even if an inconsistent one.

Maybe some day a completely different approach could actually make AI, but that's vapor at the moment. IF it happens, there will be something to talk about.

marcosdumay 6 hours ago|||
Because all the money has been going into LLMs and "inference machines" (what a non-descriptive name). So when an investor says "AI", that's what they mean.
chasd00 54 minutes ago|
I bought a subscription to Claude Code to use at work. It's the first time I've paid out of my own pocket for a work tool rather than having my employer cover it. I have to admit, it may not just be a flash in the pan.