
Posted by svara 9 hours ago

Ask HN: How is AI-assisted coding going for you professionally?

Comment sections on AI threads tend to split into "we're all cooked" and "AI is useless." I'd like to cut through the noise and learn what's actually working and what isn't, from concrete experience.

If you've recently used AI tools for professional coding work, tell us about it.

What tools did you use? What worked well and why? What challenges did you hit, and how (if at all) did you solve them?

Please share enough context (stack, project type, team size, experience level) for others to learn from your experience.

The goal is to build a grounded picture of where AI-assisted development actually stands in March 2026, without the hot air.

203 points | 336 comments
kreyenborgi 2 hours ago|
Net negative. I do find it genuinely useful for code review, as a better search engine, for snippets, and sometimes for rubber-ducking, but for agent mode and actual longer coding tasks I always end up rewriting the code it makes. Whatever it produces looks like the work of one of those students who constantly slightly misunderstands and only cares about narrow test objectives, never seeing the big picture. And I waste so much time on the hope that this time it will make me more productive, if only I can nudge it in the right direction; maybe I'm not holding it right, not using the right tools/processes/skills, etc. It feels like JavaScript frameworks all over again.
christophilus 14 minutes ago|
Same. I vacillate between thinking our profession will soon be over and thinking we're perfectly safe. Sometimes, it's brilliant. It is very good at exploring and explaining a codebase, finding bugs, and doing surgical fixes. It's sometimes good at programming larger tasks, but only if you really don't care about code quality.

The one thing I’m not sure about is: does code quality and consistency actually matter? If your architecture is sufficiently modular, you can quickly and inexpensively regenerate any modules whose low quality proves to be problematic.

So, maybe we really are fucked. I don’t know.

VoidWhisperer 2 hours ago||
It has definitely made me more productive. That said, that productivity isn't coming from using it to write business logic. (I prefer to have an in-depth understanding of the logical parts of the codebases I'm working on. I've also seen cases in my work codebases where code was obviously AI-generated and ended up with gaping security or compliance issues that no one caught at the time.)

The productivity comes from three main areas for me:

- Having the AI coding assistant write unit tests for my changes. This used to be by far my least favorite part of writing software: instead of solving problems, it was the monotonous process of gathering mock data to exercise specific pathways, trying to make sure I was covering all the cases, and then debugging the tests. An AI coding assistant lets me just review the tests to make sure they cover all the cases I can think of and that there aren't any overtly wrong assumptions

- Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't necessarily have the complete experience to architect them myself - I know enough to understand if the system is going to correctly accomplish the requirements, but not to have necessarily come up with architecture as a whole

- Quick test scripts. It has been extremely useful for generating quick SQL data for testing things, along with quick one-off scripts to test things like external provider APIs
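For concreteness, the mock-heavy tests being delegated in the first bullet tend to look something like this sketch: mock the collaborator, script one pathway, assert the outcome. The `charge` function and its retry behaviour are invented for illustration.

```python
from unittest.mock import Mock

# Hypothetical unit under test: charge() retries once on a transient timeout.
def charge(gateway, amount):
    try:
        return gateway.submit(amount)
    except TimeoutError:
        return gateway.submit(amount)  # single retry, then give up

# The monotonous part an agent can draft and a human can review:
# scripted mock data drives one specific pathway through the code.
def test_charge_retries_once_on_timeout():
    gateway = Mock()
    gateway.submit.side_effect = [TimeoutError(), "receipt-123"]
    assert charge(gateway, 42) == "receipt-123"
    assert gateway.submit.call_count == 2

test_charge_retries_once_on_timeout()
```

Reviewing a test like this for coverage and wrong assumptions is much faster than assembling the mock plumbing by hand, which is the trade-off described above.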

ivraatiems 1 hour ago|
> Research. It has been extraordinarily helpful in giving me insight into how to design some larger systems when I have extremely specific requirements but don't necessarily have the complete experience to architect them myself - I know enough to understand if the system is going to correctly accomplish the requirements, but not to have necessarily come up with architecture as a whole.

I agree, this is where coding agents really shine for me. Even if they get the details wrong, they often pinpoint where things happen and how quite well.

They're also great for rapid debugging, or assisted bug fixing. Often, I will manually debug a problem, then tell the AI, "This exception occurs in place Y because thing X is happening, here's a stack trace, propose a fix", and then it will do the work of figuring out where to put the fix for me. I already usually know WHAT to do, it's just a matter of WHERE in context. Saves a lot of time.

Likewise, if I have something where I want thing X to do Y, and X already does Z, then I'll say, "Implement a Y that works like Z but for A B C", and it'll usually get it really close on the first try.

seudxs 1 hour ago||
I started AI-assisted coding quite a while ago with the "query for code to copy and paste" approach, which was slow. That shifts dramatically when LLMs are used as agents: AIs with access to things like your project's source code, the internet, and technical docs that refine them. You can simply instruct the agent to change snippets of code by mentioning them in the chat that feeds it; this is how tools like cursor, antigravity, and llmanywhere work. An instruction can be limited to CRUD (create, read, update, delete). An update instruction looks like "change the code that does this to do that", or more precisely, "change the timeout of the request to ycombinator.com to 10". Having a good memory definitely helps here, but forgetting isn't the end of development, nor do you need to start reading the source code yourself to know where an instruction should target: you can ask for a goal summary of the project's interconnected source code if you've forgotten the big picture because you came back from a break or something. (I say "interconnected" because, in my experience with cursor for example, it generates lots of source files, such as test cases, that aren't used in production but are part of the project.) I only used an AI agent for my last langgraph solo project, which involved Python and Go, git, and cursor, so take my advice with a grain of salt :)
QuadrupleA 6 hours ago||
As a veteran freelance developer - aside from some occasional big wins, I'd say it's been net neutral or even net negative to my productivity. When I review AI-generated code carefully (and if I'm delivering it to clients I feel that's my responsibility) I always find unnecessary complexity, conceptual errors, performance issues, looming maintainability problems, etc. If I were to let it run free, these would just compound.

A couple "win" examples: add in-text links to every term in this paragraph that appears elsewhere on the page, plus corresponding anchors in the relevant page parts. Or, replace any static text on this page with any corresponding dynamic elements from this reference URL.

Lose examples: constant but minor edit-format glitches (edits not matching the searched text; even the venerable Opus 4.6 constantly screws this up), unnecessary intermediate variables, ridiculously over-cautious exception handling, failing to see opportunities to pull repeated code into a function, or to use an existing function that exactly implements said N lines of code, etc.

vemv 23 minutes ago||
Have you perceived a market shift for freelancers given the rise of AI coding?

It seems to me that sadly, paying for getting a few isolated tasks done is becoming a thing of the past.

slurpyb 3 hours ago||
It can only result in more work if you freelance, because if you disclose that you used LLMs, then you did it faster than usual and presumably at lower quality, so you have to deliver more to retain the same income. Except now you're paying all the providers for all the models because you start hitting usage limits, and Claude sucks on the weekends, and your drive is full of 'artifacts', which incurs mental overhead that is exacerbated by your crippling ADHD.

And then all of a sudden you're just arguing with the terminal all day: the specs are written by GPT, delivered in the email written by GPT. Sometimes they don't even have the time to slice their prompt from the edges of the paste, but the only thing I can think of is "I need to make the most of 0.5x off-peak Claude rates".

Fuck.

I got lots of pretty TUIs though so thats neat

simonw 6 hours ago||
The majority of code I've written since November 2025 has been created using agents, as opposed to me typing code into a text editor. More than half of that has been done from my iPhone via Claude Code for web (bad name, great software.)

I'm enjoying myself so much. Projects I've been thinking about for years are now a couple of hours of hacking around. I'm readjusting my mental model of what's possible as a single developer. And I'm finally learning Go!

The biggest challenge right now is keeping up with the review workload. For low stakes projects (small single-purpose HTML+JS tools for example) I'm comfortable not reviewing the code, but if it's software I plan to have other people use I'm not willing to take that risk. I have a stack of neat prototypes and maybe-production-quality features that I can't ship yet because I've not done that review work.

I mainly work as an individual or with one other person - I'm not working as part of a larger team.

mudkipdev 1 hour ago||
Are you saying you're learning Go because you've freed up time elsewhere, or is AI helping?
Cyphase 4 hours ago||
How often do you find issues during review? What kinds of issues?
simonw 3 hours ago||
Usually it's specification mistakes - I spot cases I hadn't thought to cover, or the software not behaving as usefully as if I had made a different design decision.

Occasionally I'll catch things it didn't implement at all, or find things like missing permission checks.

turlockmike 4 hours ago||
I stopped writing code a year ago. Claude code is a multiplier when you know how to use it.

Treat it like an intern: give it feedback, have it build skills, review every session, make it do unit tests. Red green refactor. Spend time up front reviewing the plan. Clearly communicate your intent and the outcomes you want. If you say "do x" it has to guess what you want. If you say "I want this behaviour and this behaviour, 100% branch unit tested, adhering to contributing guidelines and best practices, etc." it will take a few minutes longer, but the quality increases significantly.
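For readers unfamiliar with the term, "red green refactor" is the classic TDD loop: write a failing test first, write the minimal code that makes it pass, then clean up. A minimal sketch of the loop the agent is being asked to follow (the `slugify` helper is invented for illustration):

```python
import re

# Red: the test is written first and fails until slugify exists.
def test_slugify():
    assert slugify("  Hello   World ") == "hello-world"

# Green: the smallest implementation that makes the test pass;
# refactoring happens only after the test is green.
def slugify(text: str) -> str:
    return re.sub(r"\s+", "-", text.strip()).lower()

test_slugify()
```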

I uninstalled VS Code and built my own dashboard instead that organizes my work. I get instant notifications, and a PR kicks off a highly opinionated review using the Claude Code team features.

If you aren't doing this level of work by now, you will be automated soon. Software engineering is a mostly solved problem at this point; you need to embed your best practices in your agent, keep an eye on it, and refine it over time.

csto12 4 hours ago||
I have read comments about this on X, here, and other places, yet I have never seen proof that this is an actual productivity boost.

I use Claude Opus (4.5, 4.6) all the time and catch it making subtle mistakes, all the time.

Are you really being more productive (let's say 3x more), or do you just feel that way because you are constantly prompting Claude?

Maybe I’m wrong, but I don’t buy it.

NinjaTrance 3 hours ago|||
> I use Claude Opus (4.5, 4.6) all the time and catch it making subtle mistakes, all the time.

Didn't we make subtle mistakes without AI?

Why did we spend so much time debugging and doing code reviews?

> Are you really being more productive (let's say 3x more)

At least 2x more productive, and that's huge.

csto12 3 hours ago|||
I think you've forgotten the context of OP's post. He said he uninstalled VS Code and uses a dashboard for managing his agents. How are you going to do code review well when you don't even know what's going on in your own project? I catch the subtle bugs Claude emits because I'm actively working with Claude, not letting Claude do everything, so I know exactly what's happening.
turlockmike 3 hours ago||
The code is still visible if I want to review it.

But since I have a strong rule about always writing unit tests before code, my confidence is a lot higher.

https://simonwillison.net/2025/Dec/18/code-proven-to-work/

csto12 2 hours ago||
>The code is still visible if i want to review it.

I agree that the test harness is the most important part, which is only possible to create successfully if you are very familiar with exactly how your code works and how it should work. How would you reach this point using a dashboard and just reviewing PRs?

jplusequalt 3 hours ago|||
Are you getting paid 2x more?
wg0 4 hours ago||||
I agree. Even with a detailed spec, the code reveals bugs and edge cases upon inspection.

I'm talking Claude Opus 4.6 here.

cluckindan 2 hours ago|||
The spec needs to be explicit about edge and corner cases.
layer8 1 hour ago||
At some point such a spec converges to the actual code you’d have written.
dude250711 2 hours ago|||
For all we know, some important clients might just be getting better service out of Anthropic's/OpenAI's "black boxes".
baq 3 hours ago||||
typical experience when only using one foundational model TBH. results are much better if you let different models review each other.

the bottleneck now is testing. that isn't going away anytime soon; it'll get much worse for a bit while models are good at churning out code that's slightly wrong, or technically correct but solving a different problem than intended. it's going to be a relatively short-lived situation, I'm afraid, until the industry switches to most code being written to serve agents instead of humans.

turlockmike 3 hours ago||
The way LLMs work, different tokens can activate different parts of the network. I generally have 2-3 different agents review it from different perspectives. I give them identities, like Martin Fowler, or Uncle Bob, or whatever I think is relevant.
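A cheap way to get these "different perspectives" without special tooling is to vary the system prompt across independent review passes; a hypothetical sketch of the prompt construction only (the personas and template are illustrative, not any tool's real API):

```python
# Build persona-specific review prompts for multiple independent passes.
PERSONAS = [
    "a refactoring-focused reviewer in the style of Martin Fowler",
    "a clean-code purist in the style of Uncle Bob",
    "a paranoid security auditor",
]

def review_prompts(diff: str) -> list[str]:
    """One prompt per persona; each pass reviews the same diff."""
    return [
        f"You are {persona}. Review this diff and list concrete problems:\n{diff}"
        for persona in PERSONAS
    ]

prompts = review_prompts("+ eval(user_input)  # new line under review")
```

Each prompt would then be sent in its own fresh session, so the personas don't bias each other.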
cdelsolar 2 hours ago|||
i really don't understand why people keep thinking this. i'm easily 10x more productive since Claude Code came out. it's insane how much stuff you can build quickly, especially on personal projects.
csto12 2 hours ago||
Of course personal projects are much quicker because usually personal projects don't have high code standards... I'm talking about production code.
wcedmisten 4 hours ago|||
> Software engineering is a mostly solved problem at this point

I guess that's why Claude Code has 0 open issues on GitHub. Since software engineering is solved, their autonomous agents can easily fix their own software much better and faster than human devs. They can just add "make no mistakes" to their prompt and the model can solve any problem!

Oh wait, they have 5,000+ open issues on GitHub[1]. I've yet to be convinced that this is a solved problem.

[1]: https://github.com/anthropics/claude-code/issues

wg0 4 hours ago|||
The OP is probably the only person on their team. There's no other plausible explanation for this level of AI psychosis.

PS: I'm all in on AI agents and use them all the time, but sorry: SE is not a solved problem. Yet.

turlockmike 3 hours ago||
We have 80 engineers.
archagon 1 hour ago||||
Indeed. In my view, "software engineering is a solved problem" is a roughly equivalent statement to "writing is a solved problem." I'm convinced that people who say this were never serious engineers to begin with, viewing code entirely as a means to an end.

To me, code is both the canvas and deterministic artifact of deep thinking about program logic and data flow, as well as a way to communicate these things to other developers (including myself in the future). Outsourcing that to some statistical amalgam implies that the engineering portion of software engineering is no longer relevant. And maybe it's not for your run-of-the-mill shovelware, but as a profession I'd like to think we hold ourselves to a higher standard than that.

Also, does the sum total of software engineering done up to this point provide a sufficient training set for all future engineering? Are we really "done"? That sounds absurd to me.

I think people spouting absolutist statements like "software engineering is a solved problem" should largely be ignored.

throwaway613746 3 hours ago|||
[dead]
its-kostya 4 hours ago|||
> you need to embed your best practices in your agent and keep and eye on it and refine it over time.

Sincere question, how do beginners to the field (interns, juniors) do this when they don't have any best practices yet?

input_sh 2 hours ago|||
By working with other people, which I can't help but notice is missing from the parent comment.

Unless you want to be a solopreneur (terrible idea while you don't know what you're doing and don't have the means to hire someone that does), look at pretty much any other comment in this thread.

baq 3 hours ago||||
it's easier than ever to get started in a foreign code base: start up the agent and ask questions. more or less instant answers, basically zero confabulations nowadays.

...but since it's so easy to deliver stuff without actually knowing anything, learning means putting in the effort to resist temptation and use the agent as a teaching aid instead of an intern savant.

turlockmike 4 hours ago|||
My advice for juniors is that it's too late to get entry level jobs for software engineering, but AI Automation engineering is just starting. Get a Claude code sub and build whatever you can imagine and focus on improving your own coding agent. Automate one more thing every day.
its-kostya 4 hours ago|||
Software eng has always been automating repetitive decision making and processes. Code is just a series of steps computers/systems follow deterministically. Now we are automating the automation.

I don't necessarily disagree with your advice, but goodness, I don't look forward to using any of the low quality software in the next decade. I hope the shareholders remain happy.

GeoAtreides 1 hour ago|||
>improving your own coding agent

??????????

write a thousand md files with detailed prompts (and call them skills)?

is that what would get juniors hired? and paid real money? a stack of md files?

relativeadv 4 hours ago|||
Very cool. What have you built with this method? Do you mind sharing details about the kinds of projects?
turlockmike 4 hours ago||
I mostly do this for work. These days I'm mostly building tooling for other devs. Observable memory system for coding, PR automation, CLI apps, dev coding dashboard, email automation. All of it integrating AI at various points (where intelligence is useful). All of that in the last two months alone.

Claude code skills represent a new type of AI native program. Give your agent the file system, let it build tools to sync and manage data.

ex-aws-dude 24 minutes ago|||
So you think your experience building tools for other devs is the same as every other domain of software to the point that you would declare the whole field of software engineering is a solved problem?

Gamedev, systems programming, embedded development, 3D graphics, audio programming, mobile, desktop, physics/simulation programming, HPC, RTC, etc.. that’s all solved based on your experience?

IncreasePosts 22 minutes ago|||
Why are you building tools for other humans? Why are they programming in this world where you aren't programming but are also an insanely productive programmer?
devmor 4 hours ago|||
Do you have any kind of proof you can show us? This reads like every other AI hype post but I have still never seen anyone demonstrate anything but proof of concept apps and imaginary workloads.
IAmGraydon 2 hours ago|||
It looks like he has created a handful of very simple utilities, which isn't surprising. LLMs are great for that.

https://github.com/turlockmike

queenkjuul 2 hours ago||||
I'm sure they don't
cyanydeez 2 hours ago|||
yeah, you'd think these commercial organizations would sit down with like, one marketer, and just put a non-trivial app together in real time and screen cap it all...

like, we've had this technology for several decades now, and none of these AI tools are like: "This is so great, let me show everyone how to write a CRUD database with a notepad and calendar app" or whatever.

MarkusQ 35 minutes ago||
Several decades? Seriously?

Several decades ago, we barely had the internet, rockets were single use only, and smart phones were coming any day now. CRISPR was yet to be named, social media meant movies from Blockbusters or HBO that you watched with friends. GLP-1 was a meh option for diabetics.

I agree with your overall point but...your time frame is way off.

dvfjsdhgfv 4 hours ago|||
> If you aren't doing this level of work by now, you will be automated soon.

It's harder and harder to detect sarcasm these days but in case you're being serious, I've tested a similar setup and I noticed Claude produces perfectly plausible code that has very subtle bugs that get harder and harder to notice. In the end, the initial speedup was gone and I decided to rewrite everything by hand. I'm working on a product where we need to understand the code base very well.

rckclmbr 4 hours ago||
I keep hearing "Claude creates subtle bugs", but how is that different from human engineers? I've never worked in a bug-free codebase.
eeperson 3 hours ago|||
Everybody produces bugs, but Claude is good at producing code that looks like it solves the problem but doesn't. Developers worth working with grow out of this on a new project. Claude doesn't.

An example I have of this is when I asked Claude to copy some functionality from a front-end application to a back-end application. It got all of the function signatures right but then hallucinated the contents of the functions. Part of this functionality included a lookup map for some values. The new version had entirely hallucinated keys and values, but the values sounded correct if you didn't compare them with the original. A human would have literally copied the original lookup map.

NewsaHackO 3 hours ago|||
> Developers worth working with, grow out of this in a new project. Claude doesn't.

There is no way this is true. People make fewer bugs with time and guidance, but no human makes zero bugs. Also, bugs are not planned; it's always easy in hindsight to say "a human would have literally copied the original lookup map," but every bug is some mistake that deviates from the status quo. That's why it's a bug.

tehjoker 3 hours ago|||
I asked claude to help me figure out some statistical calculation in Apple Numbers. It helpfully provided the results of the calculation. I ignored it and implemented it in the spreadsheet and got completely different (correct) results. Claude did help me figure out how to do it correctly though!
jplusequalt 3 hours ago||||
When you write the code yourself you are slowly building up a mental model of how said thing should work. If you end up introducing a subtle bug during that process, at least you already have a good understanding of the code, so it shouldn't be much of an issue to work backwards to find out what assumptions turned out to be incorrect.

But now with Claude, the mental model of how your code works is not in your head; it resides behind a chain of reasoning from Claude Code that you are not privy to. When something breaks, you either have to spend much longer trying to piece together what your agent has made, or keep throwing Claude at it and hope it doesn't spiral into more subtle bugs.

GeoAtreides 1 hour ago|||
simple: people produce subtle subtle bugs, LLMs produce obvious subtle bugs.
sofixa 4 hours ago|||
Sounds like tech debt as a service. If the code review is automated, how can you be sure the code isn't full of security or maintainability issues?
turlockmike 3 hours ago||
https://simonwillison.net/2025/Dec/18/code-proven-to-work/
input_sh 2 hours ago||
I'll just quote the author of this blog from this very thread:

> I mainly work as an individual or with one other person - I'm not working as part of a larger team.

IncreasePosts 4 hours ago|||
Why exactly do you think people not doing that kind of work will be automated but your kind of work won't be automated?

If AI really is all that, then whatever "special" thing you are doing will be automated as well.

headcanon 4 hours ago|||
That's exactly what we as software engineers do. We are constantly automating ourselves out of a job. The trick is that we never actually accomplish that; there will always be things for humans to do.

We're discovering so much latent demand for software that Jevons paradox is in full effect, and we're working more than ever with AI (at least I am).

turlockmike 4 hours ago|||
Software engineering is being automated, but building intelligent automation is just starting. AI engineer will be the only job left in the future, as long as there are things to automate. It's really all the other jobs that will be automated before AI engineer.
headcanon 2 hours ago||
Most knowledge workers use computers today to do their work, but we don't necessarily call them computer or software engineers. I think it will be something like that, but the economy will need to adapt and grow in order to accommodate it.
IncreasePosts 31 minutes ago|||
OP compared AI to interns, and how they need to guide it and instruct it on simple things, like using unit tests. Well, what about when AI is actually more like an ultra-talented programmer? What exactly would OP bring to the table apart from being able to ask it to solve a certain problem?

Their comment about people who don't operate like them being out of a job might be true if AI doesn't progress past the current stage but I really don't see progress slowing down, at least in coding models, for quite some time.

So, whatever relevance OPs specific methods have right now will quickly be integrated into the models themselves.

turlockmike 4 hours ago||||
I don't disagree, aspects of that will be automated, but two things will remain: Intent and Judgement.

Building AI systems will be about determining the right thing to build and ensuring your AI system fully understands it. For example, I have a trading bot; I spent a lot of time refining the optimization statement for the AI. If you give it the wrong goal, or there's any ambiguity, it can go down the wrong path.

On the back end, I then judge the outcomes. As an engineer I can tell whether the work it did actually accomplished the outcomes I wanted. In the future, the job will be applying that judgement to every field out there.

AstroBen 4 hours ago|||
You're trusting AI to trade with your real money?
turlockmike 3 hours ago|||
Not a lot of money, because I haven't built enough confidence, but yes: it's the ultimate test of whether it can do economically useful work.
Karrot_Kream 3 hours ago|||
I mean, real algo trading shops use "AI" to do it all the time, they just don't use LLMs. While I'm not the GP I think the idea they're trying to express is that the nuts and bolts of structuring programs is going away. The engineer of today, according to this claim and similar to Karpathy's Software 3.0 idea, structures their work in terms of blocks of intelligence and uses these blocks to construct programs. Nothing stopping Claude Code or another LLM coding harness from generating the scaffolding for a time-series model and then letting the author refactor the model and its hyperparameters as needed to achieve fit.

Though I don't know of any algo trading shop that relies purely on algorithms as market regimes change frequently and the alpha of new edge ends up getting competed away frequently.

(And personally I'm a believer of the jagged intelligence theory of LLMs where there's some tasks that LLMs are great at and other tasks that they'll continue being idiotic at for a while, and think there's plenty of work left for nuts and bolts program writers to do.)

turlockmike 3 hours ago||
My trading agent builds its own models, does backtesting, and builds tools for real-time analysis and trading. I wrote zero of the code; I haven't even seen the code. The only thing I make sure of is that it's continuously self-improving (since I haven't been able to figure out how to automate that yet).
IncreasePosts 27 minutes ago||||
How technical do you need to be with your optimization statements and outcome checking? Isn't that moat constantly shrinking if AI is constantly getting better?
tehjoker 3 hours ago|||
Another way of saying this is most line engineers will be moving into management, but managing AIs instead of people.
georgemcbay 4 hours ago|||
I see variations of this non-stop these days, people who seem to be sure AI is going to automate everything right up to them.
analog31 4 hours ago|||
>>>> Software engineering is a mostly solved problem at this point...

I'll believe it when AI can tell me when a project will be done. I've asked my developer friends about this and I get a blank stare, like I'm stupid for asking.

IAmGraydon 3 hours ago|||
>Software engineering is a mostly solved problem at this point

You from 2 months ago:

>LLMs are great coders, but subpar developers". https://news.ycombinator.com/item?id=46434304

Interesting. That's a lot of progress in 2 months!

andrewmcwatters 4 hours ago||
[dead]
shmel 3 hours ago||
I got insanely more productive with Claude Code since Opus 4.5. Perhaps it helps that I work in AI research and keep all my projects in small prototype repos. I imagine all models are more polished for the AI research workflow because that's what frontier labs do. But yeah, I don't write code anymore. I don't even read most of it; I just ask Claude questions about the implementation, and sometimes ask it to show me the important bits verbatim. Obviously it makes mistakes sometimes, but so do I and everyone I have ever worked with. What scares me is that it makes fewer mistakes overall than I do. Plan mode helps tremendously; I skip it only for small things. Insisting on a strict verification suite is also important (kind of like an autoresearch project).
notatoad 2 hours ago||
It's completely inconsistent for me, and any time I start to think it's amazing, I'm quickly proven wrong. It has definitely done some useful things for me, but as it stands, any sort of "one shot" or vibecoding where I expect the AI to complete a whole task autonomously is still a long way off.

Copilot completions are amazingly useful. Chatting with the chatbot is a super useful debugging tool. Giving it a function or database query and asking the AI to optimize it works great. But true vibe coding is still, imho, more of a party trick than an actual productivity multiplier. It can do things that look useful, and it can solve immediate self-contained problems, but it can't create launchable products that serve the needs of multiple users.

piker 3 hours ago||
I am working on a sub 100KLOC Rust application and can't productively use the agentic workflows to improve that application.

On the other hand, I have tried them a number of times in greenfield situations with Python and the web stack and experienced the simultaneous joy and existential dread of others. They can really stand new projects up quick.

As a founder, this leaves me with what I describe as the "generation ship" problem. Is it possible that the architecture we have chosen for my project is so far out of the training data that it would be faster to ditch the project and reimplement it from scratch in a Claude-yolo style? So far, I'm convinced not because the code I've seen in somewhat novel circumstances is fairly mid, but it's hard to shake the thought.

I do find chatting with the models incredibly helpful in all contexts. They are also excellent at configuring services.

koyote 48 minutes ago||
If what you are doing is novel then I don't think yolo'ing it will help either. Agents don't do novel. I've even noticed this in meeting summaries produced by AI: A prioritisation meeting? AI's summary is concise, accurate, useful. A software algorithm design meeting, trying to solve a domain-specific issue? AI did not understand a word of what we discussed and the summary is completely garbled rubbish.

If all you're doing is something that already exists but you decided to architect it in a novel way (for no tangible benefit), then I'd say starting from scratch and making it look more like existing stuff is going to help AI be more productive for you. Otherwise you're on your own, unless you can give the AI a really good description of what you are doing, how things are tied together, etc. And even then it will probably go down the wrong path more often than not.

cyanydeez 2 hours ago||
I'm surprised there aren't more attempts to stabilize around a base model, like in Stable Diffusion, and then augment it with LoRAs for various frameworks and other routine patterns. So much is going into trying to build these omnimodels when the technology is there to mold the models into more useful paradigms around frameworks and coding patterns.

Especially, now that we do have models that can search through code bases.

wg0 4 hours ago|
I foresee that AI blindness at the CEO/CFO level, plus the general hype in our society (from the technical and non-technical press and media) that software engineering is over, will result in a severe talent shortage in 5-7 years, leading to bidding wars for talent that drive salaries up 3x or more.
ares623 2 hours ago|
Then we'll be back to the 2019/2020 cycle, and round and round the merry-go-round we go.