
Posted by GavinAnderegg 4 days ago

Spending Too Much Money on a Coding Agent (allenpike.com)
152 points | 176 comments | page 2
quonn 2 days ago|
Charging $200/month is only economically possible if there is no true market for LLMs, or if someone holds monopoly power. Currently there is no evidence that either will be the case. There are already multiple competitors; the barrier to entry is relatively low (compared to e.g. the car industry or other manufacturing); there are no network effects (as with social networks); there is no need to get the product 100% right (unlike compatibility with Photoshop or Office); and the price of training will keep dropping. Furthermore, $200 is not free (unlike Google).

Can anyone name one single widely-used digital product that does _not_ have to be precisely correct/compatible/identical to The Original and that everyone _does_ pay $200/month for?

Therefore, should prices that users pay get anywhere even close to that number, there will naturally be opportunities for competitors to bring prices down to a reasonable level.

lvl155 2 days ago||
Barrier to entry is actually very, very high. Just because we have “open source” models doesn’t mean anyone can enter. And the gap is widening now. I see Anthropic/OpenAI as clear leaders. Opus 4 and its derivative products have been irreplaceable for coders since spring 2025. Once you figure it out and have your revelation, it will be impossible to go back. This is an iPhone moment, and the network effect will be incredible.
mathiaspoint 2 days ago||
It's all text and it's all your text. There's zero network effect.
lvl155 2 days ago||
And that’s how it’s been forever. If your competitor is doing 10x your work, you will be compelled to learn. If someone has a nail gun and you’re using a hammer, no one’s saying “it’s all nails.” You will go buy a nail gun.
mathiaspoint 2 days ago||
Network effects come from people building on top of extra stuff. There's no special sauce in these models; as long as you have an inference endpoint, you can recreate anything yourself with any of the models.

As to the nail gun thing, that's an interesting analogy. I'm actually building my own house right now entirely with hand tools, and it's on track to finish in 1/5 the time some of these McMansions take, at 1/100th of the cost, because I'm building what I actually need and not screwing around with stuff for business reasons. I think you'll find software projects are more similar to that than you'd expect.

chis 2 days ago||
I think you forgot to consider the cost of providing the inference.
quonn 2 days ago||
Well, that could be an additional problem.

My point was not that AI will necessarily be cheaper to run than $200, but that there is not much profit to be made. Of course the cost of inference will form a lower bound on the price as well.
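
For a rough sense of where that lower bound could sit, here is a back-of-envelope sketch in Python; every number in it is an assumption for illustration, not a real provider price:

    # Back-of-envelope sketch of the inference-cost floor on pricing.
    # All figures are illustrative assumptions, not actual prices.
    price_per_m_input = 3.00    # $ per 1M input tokens (assumed)
    price_per_m_output = 15.00  # $ per 1M output tokens (assumed)

    # Assumed daily token volume for a heavy agentic-coding user.
    daily_input = 20_000_000
    daily_output = 1_000_000
    working_days = 22

    monthly_cost = working_days * (
        daily_input / 1e6 * price_per_m_input
        + daily_output / 1e6 * price_per_m_output
    )
    print(f"${monthly_cost:,.0f}/month")  # $1,650/month at these assumptions

At those (made-up) volumes, the raw inference bill alone already exceeds $200 per seat, which is the sense in which cost, and not just competition, bounds the price.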

pshirshov 2 days ago||
> Use boring technology: LLMs do much better with well-documented and well-understood dependencies than obscure, novel, or magical ones. Now is not the time to let Steve load in a Haskell-to-WebAssembly pipeline.

If we all go that way, there might be no new haskells and webassemblies in the future.

emrehan 2 days ago||
LLMs can read the documentation for a language and use it as well as human engineers.

"given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person who learned from the same content"

Source: Gemini 1.5's paper from March 2024 https://storage.googleapis.com/deepmind-media/gemini/gemini_...

mathiaspoint 2 days ago||
I think there certainly will be; it will just mean that only people who can function independently of the AI will have access to them a few years before everyone else.
georgeecollins 2 days ago||
I am blown away that you can get a founding engineer for $10k / month. I guess that is not counting stock options, in which case it makes sense. But I think if you include options the opportunity cost is much higher. IMO great engineers are worth a lot, no shade.
hoistbypetard 2 days ago||
> literally changing failing tests into skipped tests to resolve “the tests are failing.”

Wow. It really is like a ridiculous, over-confident, *very* junior developer.
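
For anyone who hasn't watched an agent do this, here is a hypothetical pytest example of the "fix" (the function and skip reason are made up):

    import pytest

    def apply_discount(price: float, rate: float) -> float:
        # Hypothetical buggy implementation the test was meant to catch.
        return price - price * rate + 5  # erroneously adds a flat fee

    # The agent's "fix": the assertion never runs, so the suite goes green.
    @pytest.mark.skip(reason="failing on CI")
    def test_discount_applies():
        assert apply_discount(100, 0.2) == 80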

mathiaspoint 2 days ago||
I can't imagine using something like this and not self hosting. Moving around in your editor costs money? That would completely crush my velocity.
tomjuggler 1 day ago||
I believe this story, but coming from a third-world country, it's not feasible for me to spend anywhere near that much. Also, as others have mentioned, I am wary of "rug-pulls" when it comes to proprietary models and services. That is why I am all-in on DeepSeek currently, with Aider (and Roo for MCP integration). When the main API is lagging, I just switch to the same model with a different provider on OpenRouter. Theoretically I could host my own if I got that busy.
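
Roughly, the fallback looks like this sketch; both providers speak the OpenAI-compatible API, but treat the model IDs and timeout as assumptions:

    import os
    from openai import OpenAI

    # (base_url, env var holding the key, model id on that provider)
    PROVIDERS = [
        ("https://api.deepseek.com", "DEEPSEEK_API_KEY", "deepseek-chat"),
        ("https://openrouter.ai/api/v1", "OPENROUTER_API_KEY",
         "deepseek/deepseek-chat"),
    ]

    def complete(prompt: str) -> str:
        last_error = None
        for base_url, key_env, model in PROVIDERS:
            try:
                client = OpenAI(base_url=base_url, api_key=os.environ[key_env])
                resp = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                    timeout=30,  # a lagging provider counts as a failure
                )
                return resp.choices[0].message.content
            except Exception as exc:  # fall through to the next provider
                last_error = exc
        raise RuntimeError(f"all providers failed: {last_error}")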

I'm a solo developer doing small jobs, but I code every day, and $10 per month would be a busy month for me. I still read every line of code, though.

tabs_or_spaces 2 days ago||
Since this is a business problem:

* It's not clear how much revenue or how many new customers are generated by using a coding agent.

* It's not clear how things are going in production. The article only talks about development.

I feel AI coding agents will give you an edge. It's just that this article doesn't talk about the revenue or P&L side of things, only the perceived cost saved by not employing an engineer.

v5v3 2 days ago|
Yes. A company needs measurable ROI and isn't going to spend $200 a month per seat on Claude.

It will instead sign a deal with Microsoft for AI that is 'good enough' and limit the expensive AI to a few seats. Or bring in the big consultancies, as usual, to do the projects.

nico 2 days ago||
The article reads almost like an ad for o3 and spending a lot of money on LLM APIs

In my experience, o4-mini-high is good enough, even just through the chat interface

Cursor et al. can be comfier because they have direct access to the files. But when working on a sufficiently large/old/complex code base, the main limitation is the human in the loop and managing context, so things end up evening out. Not only that, but a lot of the time it’s just easier/better to manually feed things to ChatGPT/Claude - that way you get to more carefully curate and understand the tasks and the changes.
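
For concreteness, the manual curation can be as simple as this sketch (the file paths and task are made up):

    from pathlib import Path

    # Hand-picked files that actually matter for the task (hypothetical).
    RELEVANT = ["billing/invoice.py", "billing/tests/test_invoice.py"]

    def build_prompt(task: str) -> str:
        # Concatenate the curated files into one block to paste into chat.
        parts = [task]
        for rel in RELEVANT:
            parts.append(f"--- {rel} ---\n{Path(rel).read_text()}")
        return "\n\n".join(parts)

    print(build_prompt("Fix the rounding bug in Invoice.total()."))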

I still haven’t seen any convincing real life scenario with larger in-production code bases in which agents are able to autonomously write most of the code

If anyone has a video/demo, would love to see it

cma 2 days ago|
It's faster than me at drilling through all the layers of abstraction in large codebases to answer questions about how something is implemented and where the actual calculation or functionality gets done, so with that alone it's much more useful than web chat interfaces.
nico 2 days ago|||
Yes, when initially starting out on the task/feature. But most likely the agent won’t be able to fully do it on its own, and at that point the agent is not necessarily that much more useful than the chat.

Also, local agents miss context all the time, at which point you need to manually start adding/managing the context anyway

And, if you are regularly working on the codebase, at some point you’ll probably have better in-brain initial context than the agent

jpc0 1 day ago|||
I know it is difficult to run software locally in some instances, but at this point I feel it is probably quicker to implement tracing or improve the local development flow.

A breakpoint in a debugger is much, much quicker than feeding the AI all the context it needs and then confirming it didn’t miss some flow in some highly abstract code.
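
A minimal sketch of the point, with made-up names; one built-in call drops you into pdb exactly where the behavior surprises you:

    def reconcile(orders, payments):
        matched = []
        for order in orders:
            if order["total"] < 0:
                # Inspect order, payments, and matched interactively in pdb,
                # instead of pasting the whole call chain into a chat window.
                breakpoint()
            matched.append((order["id"], order["total"]))
        return matched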

andrewstuart 2 days ago||
I must be holding OpenAI wrong.

Every time I try it, I find it to be useless compared to Claude or Gemini.

scrubs 1 day ago|
The one thing AI hates and doesn't want you to know:

Having typing skills >= 120 wpm will triple your efficacy.
