
Posted by cratermoon 12 hours ago

An AI coding agent, used to write code, needs to reduce your maintenance costs (www.jamesshore.com)
179 points | 43 comments
gitaarik 6 hours ago|
Yeah, but to be honest, I sometimes just tell Claude to clean up / refactor stuff; it finds a lot of things, discusses them with me, I approve the plan, and it churns through my tokens for a while. I do this once in a while, and I've been doing it for over 6 months, and I don't feel like my development has significantly slowed down. Yeah, my token usage is higher for sure, but so is my codebase, so I'm not worried about that. To me, AI seems to make maintenance very easy, like everything else. You just need to do it.

Edit: I make it sound a bit simple, maybe. I also do more extensive refactors, where I'm more involved and opinionated. But I don't feel the need to do that very often or very deeply. But yeah, sometimes it's definitely necessary to prevent the project from going off the rails.

tossandthrow 6 hours ago||
This is my experience exactly.

I have reduced our API response time from 80 ms to 30 ms and gotten a setup we can comfortably grow into.

I would not have had time to track down these optimizations without Claude Code.

gitaarik 1 hour ago||
I'm getting downvotes for this. Why exactly?
hamhamed 7 hours ago||
This is what I've been preaching to my team. With 5.5 and 4.7, the coding agents are now good enough to almost never take on any tech debt. Any new feature or fix should come with a cleanup or refactor, in the same PR.
esailija 4 hours ago|
That's better than 99.99999% humans. Where do I put my credit card details?
swiftcoder 3 hours ago||
> Your crowd might tell you that, for each month you spend writing code, you’ll spend... 10 days on maintenance in the first year; and 5 days on maintenance each year after that

Someone is an optimist! I'd estimate those significantly higher, and even worse if you are in a field that has to do any sort of SOC/HIPAA/GDPR audit

devinabox 5 hours ago||
Great article! I think ultimately we are heading towards a world where much better software will be created. This is the major roadblock we need to cross before that can be true, but I think it is a very tractable problem!

I created a video that talks about this in more detail:

https://www.youtube.com/watch?v=G3Q7Y-nrUbk

aetherspawn 9 hours ago||
I think AI is great for the soul-destroying boring stuff that makes me want to quit my job, like wrapping legacy code in test cases. Hey, I'll take on any idiot who's willing to do that job, even if he's artificial.
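"Wrapping legacy code in test cases" usually means characterization (golden master) tests: pinning down what the code *currently* does before anyone refactors it. A minimal sketch of the idea, where `legacy_price` is a hypothetical stand-in for some untested legacy function:

```python
def legacy_price(qty, unit_cost):
    # Stand-in for convoluted legacy logic nobody wants to touch.
    total = qty * unit_cost
    if qty >= 10:
        total *= 0.9  # mystery bulk discount found in the wild
    return round(total, 2)


def test_characterization():
    # Assert the *observed* behavior, not the intended behavior:
    # these expected values were captured by running the code once,
    # so any refactor that changes them gets flagged.
    cases = [(1, 5.0), (10, 5.0), (3, 19.99)]
    observed = [legacy_price(q, c) for q, c in cases]
    assert observed == [5.0, 45.0, 59.97]


test_characterization()
print("characterization tests pass")
```

The tedious part, and the part an agent can grind through, is generating and capturing those observed outputs for every code path; the human's job stays the same: deciding which behaviors are load-bearing.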
WhereIsTheTruth 4 hours ago|
You can only type at 50 WPM and read one file at a time; the LLM doesn't have those physical limits. Use it to your advantage so you can actually focus on the work that matters.
Jimmy0252 9 hours ago||
The maintenance-cost framing is the useful constraint. I’d rather see agents default to smaller diffs, test scaffolding, and explicit assumptions than maximize lines changed per prompt.
robotbikes 8 hours ago|
I think this is still the role of human oversight. These tools will forever be imperfect, and the instructions we give them as prompts will always be prone to inaccuracies and misinterpretation. I find it useful to evaluate the code and often ask for simpler solutions, and so far that has produced slightly more elegant ones. There's a tendency to spawn helper functions to solve every problem, or to do things in a slightly weird or at least unconventional way when there's an easier, standard way that would create less code. Your ideas, if automated, would definitely make things more maintainable, but even code produced by machines requires a human to be responsible for verifying that it works.
psychoslave 6 hours ago||
https://www.laws-of-software.com/laws/kernighan/ relates here.

The incentives for remote LLMs are at odds with providing defaults that optimize for maintainable, sound architecture, though. The same way Claude will produce overviews of the indexes of the summaries of comprehensive reports that no one is going to read. No doubt this feels like an excellent KPI for how much output was generated.

lovich 7 hours ago||
So what are all of these agentic based strategies going to do once the infinite money spigot of investment into AI ends and they need to start charging prices that actually make a profit?

I get that most of the cost is in training and not inference, but I don't see how models stay useful once the world's software updates in the months after training, since the models can't learn without said training.

Are we just going to have shops do the equivalent of old COBOL shops, where everything is built to one year's standards and the main language/framework is mostly set in stone?

tedbradley 5 hours ago|
Glad you asked. AI empowers people who couldn't do a job before to do a job. With more supply of qualified workers, these workers compete with each other by lowering the salary they'll take.

So:

* You get paid less.
* The company might pay a similar amount due to LLM costs, although it could be more or less, depending on how it works out.

A couple of years ago, I saw a story of a guy writing two articles a day for a website. The boss asked him if he wanted to transition to being an AI-assisted writer for less pay. He said no. After a couple of weeks, he got canned. He checked the website out, and it had a bunch of AI writing on it.

LLMs are there to reduce your salary and increase the business owner's profits. Wealth inequality is only going to grow more and more. Also, a ton of people fired across many different fields.

ehnto 4 hours ago||
That is one possibility (that is playing out). Another one worth contrasting is the idea of AI as leverage for the worker. If you can take a regular developer and augment their output by 25%, then they have become more valuable to you and you should pay them more. Why should you pay them more? Because the market rate will price in that they provide more value now and you'll lose those workers to competitors if you don't.

That's a pretty old economic idea, and it will be interesting to see if it holds up in this instance. I have no idea how this all plays out. I do think it won't be one size fits all though.

aroido-bigcat 1 hour ago||
[flagged]
claud_ia 2 hours ago|
[flagged]