Posted by ColinWright 14 hours ago

We mourn our craft (nolanlawson.com)
414 points | 542 comments | page 5
frays 12 hours ago||
The majority of the code currently running in production at my company was written 5+ years ago. It was all "hand-written" and much lower quality than the AI-generated code I'm deploying these days.

Yet I feel much more connected with my old code. I really enjoyed actually writing all that code even though it wasn't the best.

If AI tools had existed 5 years ago when I first started working on this codebase, the code quality would obviously have been much higher. But I really loved writing my old code, and given the same opportunity to start over, I would want to rewrite it all myself again.

brandensilva 12 hours ago||
I'm probably in the minority, but I've never loved dealing with syntax. The code itself always felt like a hindrance, a reminder that my brain was slowed down by my fingers. I get it though: it was tactile, and it completed the loop. I felt it was essential for learning, even though it starts to slow you down the more senior you get.

AI has a ways to go before it's senior level, if it ever reaches that level, but I do feel bad for the juniors who survive this and will never have the opportunity to sculpt code by hand.

pzo 12 hours ago||
I didn't come into IT for the money (back in the day it wasn't as well paid as it is today), but if this craft were very poorly paid, I probably wouldn't have chosen the profession either. And I assume many people here wouldn't have, unless you're already semi-retired or debt-free.

I mourn a little that in 20 years possibly 50% of software jobs will be axed, and that unless you're an elite/celebrity dev, salaries will stagnate. I mourn that upward mobility into the upper middle class will be harder without becoming an entrepreneur.

techbrovanguard 2 hours ago||
"ai is inevitable, stop resisting" — claude marketing department. if it was so inevitable, why are you funding these psyops?
daedrdev 13 hours ago||
I feel like we are long into the twilight of mini blogs and personal sites. It's like people trying to protect automotive jobs: the vast majority were already lost.

Perhaps I'm a cynic, but I don't know.

ertucetin 13 hours ago||
That's why I'll only read source code written until 2024.
teeray 12 hours ago||
> because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike

Sure, maybe it takes me a little while to ride across town on my bike, but I can reliably get there and I understand every aspect of the road to my destination. The bazooka-powered jetpack might get me there in seconds, but it also might fly me across state lines, or to Antarctica, or the moon first, belching out clouds of toxic gas along the way.

naragon 9 hours ago|
Like the cab drivers in London who have to know the city inside and out? https://wheelchairtravel.org/london-black-cab-driver-knowled...
nubg 13 hours ago||
To the people who are against AI programming, honest question: why do you not program in assembly? Can you really say "you" "programmed" anything at all if a compiler wrote your binaries?

This is a 100% honest question. Because whatever your justification to this is, it can probably be used for AI programmers using temperature 0.0 as well, just one abstraction level higher.

I'm 100% honestly looking forward to finding a single justification that would not fit both scenarios.
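
(Editor's note: "temperature 0.0" just means greedy decoding: the sampler always takes the highest-probability token, which removes the randomness of sampling but not the ambiguity of the prompt. A toy sketch of the distinction, using made-up logits rather than a real model:)

```python
import math
import random

def sample_token(logits, temperature):
    """Pick the index of the next token from raw logits."""
    if temperature == 0.0:
        # Temperature 0 degenerates to greedy decoding: always take the argmax.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax over temperature-scaled logits, then sample.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# With temperature 0.0 the choice is fully deterministic.
logits = [1.0, 3.5, 2.0]
print(all(sample_token(logits, 0.0) == 1 for _ in range(100)))  # True
```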

crazygringo 13 hours ago||
I'm all for AI programming.

But I've seen this conversation on HN already 100 times.

The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.

I personally don't agree at all, in the sense that I don't think it matters. I've run into compiler bugs, and more library bugs than I can count. The real world is just as messy as LLMs are, and you still need the same testing strategies to guard against errors. Development is always a slightly stochastic process of writing stuff that you eventually get to work on your machine, and then fixing all the bugs that get revealed once it starts running on other people's machines in the wild. LLMs don't write perfect code, and neither do you. Both require iteration and testing.

godelski 11 hours ago|||

  > The answer they always give is that compilers are deterministic and therefore trustworthy in ways that LLMs are not.
I don't see this as a frequent answer tbh, but I do frequently see claims that this is the critique.

I wrote much more here[0], and honestly I'm on the side of Dijkstra; it doesn't matter whether the LLM is deterministic or probabilistic.

  It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.
  - Dijkstra: On the foolishness of "natural language programming"
His argument has nothing to do with deterministic systems[1] and everything to do with the precision of the language. It comes down to "we invented symbolic languages for a good reason".

[0] https://news.ycombinator.com/item?id=46928421

[1] If we want to be more pedantic we can actually codify his argument more simply by using some mathematical language, but even this will take some interpretation: natural language naturally imposes a one to many relationship when processing information.

stoneforger 9 hours ago||
Nah bro I'll just ask the LLM to do better next time /s
godelski 7 hours ago||
It amazes me that people say that seriously, given that in the next breath they'll complain about how their manager doesn't know shit and is leading blind. Or complain about how you don't understand. Maybe we need to install more mirrors.
haolez 13 hours ago|||
I just answered exactly that. I think that AI agents code better than humans and are the future.

But the parent argument is pretty bad, in my opinion.

Ronsenshi 12 hours ago||
Depends on which humans you're comparing the AI code to. I've seen and reviewed enough AI code in the last few months to have formed a solid impression that it's "ok" at best and relies heavily on who guides it: how well the spec is defined, what kinds of rules are set, coding styles, architecture patterns.
stoneforger 8 hours ago||
The prompt user is basically selecting patterns from latent space, so you kind of need to know what you're looking for. When you don't know what you're looking for, that's when the fun begins, but that's a problem for the next quarter.
Ronsenshi 7 hours ago||
That's true for a more guided development approach. But the further you go into vibecode territory, the less you need to know.
koiueo 13 hours ago|||
There's a big difference between deterministic abstraction over machine code, and probabilistic translation of ambiguous language into machine code.

Compiler is your interface.

If you treat an LLM as your interface... well, I wouldn't want to share a codebase with you.

TJTorola 13 hours ago|||
I'm not particularly against AI programming, but I don't think these two things are equivalent. A compiler translates code according to a specification in a deterministic way: the same compiler produces the same output from the same code; it is all completely controlled. AI is not at all deterministic: temperature is built into LLMs, and on top of that there's the lack of specificity in prompts and in our spoken languages. The difference in control is significant enough that I don't put compilers and AI coding agents in the same category, even though both take some text and produce other text/machine code.
pornel 12 hours ago|||
A chef can cook a steak better than a robo-jet-toaster, even though neither of them has raised the cow.

It's not about having abstraction levels above or below (BTW, in 21st century CPUs, the machine code itself is an abstraction over much more complex CPU internals).

It's about writing more correct, efficient, elegant, and maintainable code at whichever abstraction layer you choose.

AI still writes messier, sloppier, buggier, more redundant code than a good programmer can when they care about the craft of writing code.

The end result is worse for those who care about the quality of code.

We mourn, because the quality we paid so much attention to is becoming unimportant compared to the sheer quantity of throwaway code that can be AI-generated.

We're fine dining chefs losing to factory-produced junk food.

rdedev 13 hours ago|||
Even if you're not coding in assembly, you still need to think. Replace the LLM with a smart programmer: I don't want the other guy to do all the thinking for me. It's much better as a collaborative process, even if the other guy could have coded the perfect solution without my help. Otherwise, why am I even in the picture?
deredede 12 hours ago|||
I know how to review code without looking at the corresponding assembly and have high confidence in the behavior of the final binary. I can't quite say the same for a prompt without looking at the generated code, even with temperature 0. The difference is explainability, not determinism.
haolez 13 hours ago|||
Compilers are deterministic.
zajio1am 12 hours ago||
There is no requirement for compilers to be deterministic. The requirement is that a compiler produces something that is a valid interpretation of the program according to the language specification, but unspecified details (like the specific ordering of instructions in the resulting code) could in principle be chosen nondeterministically and differ between separate executions of the compiler.
koiueo 12 hours ago||
We are not talking about deterministic instructions, we are talking about deterministic behavior.

UB is actually a big deal, and is avoided for a reason.

I couldn't for the life of me guess what CC would do in response to "implement login form". For all I know, CC's response could depend more on the time of day or Anthropic's electricity bill last month than on the existing code in my app and the specific wording I use.

tines 12 hours ago|||
For me, the whole goal is to achieve Understanding: understanding a complex system, which is the computer and how it works. The beauty of this Understanding is what drives me.

When I write a program, I understand the architecture of the computer, I understand the assembly, I understand the compiler, and I understand the code. There are things that I don't understand, and as I push to understand them, I am rewarded by being able to do more things. In other words, Understanding is both beautiful and incentivized.

When making something with an LLM, I am disincentivized from actually understanding what is going on, because understanding is very slow, and the whole point of using AI is speed. The only time when I need to really understand something is when something goes wrong, and as the tool improves, this need will shrink. In the normal and intended usage, I only need to express a desire to achieve a result. Now, I can push against the incentives of the system. But for one, most people will not do that at all; and for two, the tools we use inevitably shape us. I don't like the shape into which these tools are forming me: the shape of an incurious, dull, impotent person who can only ask for someone else to make something happen for me. Remember, The Medium Is The Message, and the Medium here is, Ask, and ye shall receive.

The fact that AI use leads to a reduction in Understanding is not only obvious but borne out by studies. People who can't see this are refusing to acknowledge the obvious, in my opinion. They wouldn't disagree that having someone else do your homework for you means you didn't learn anything. But somehow when an LLM tool enters the picture, it's different. They're a manager now instead of a lowly worker. The problem with this thinking is that, in your example, moving from, say, assembly to C automates tedium to allow us to reason on a higher level. But LLMs are automating reasoning itself. There is no higher level to move to. The reasoning you do now while using AI is merely a temporary deficiency in the tool. It's not likely that you or I are the .01% of people who can create something truly novel that is not already sufficiently compressed into the model. So enjoy that bit of reasoning while you can, o thou Man of the Gaps.

They say that writing is God's way of showing you how sloppy your thinking is. AI tools discourage one from writing. They encourage us to prompt, read, and critique. But this does not result in the same Understanding as writing does. And so our thinking will be, become, and remain vapid, sloppy, inarticulate, invalid, impotent. Welcome to the future.

godelski 11 hours ago|||

  > why do you not program in assembly?
There's a balance of levels of abstraction. Abstraction is a great thing. Abstraction can make your programs faster, more flexible, and more easy to understand. But abstraction can also make your programs slower, more brittle, and incomprehensible.

The point of code is to write specification. That is what code is. The whole reason we use a pedantic and somewhat cryptic schema is that natural language is too abstract. This is the exact reason we created math. It really is even the same reason we created things like "legalese".

Seriously, just try a simple exercise and be adversarial to yourself. Describe how to do something and try to find loopholes. Malicious compliance. It's hard to defend against, and writing that spec becomes extremely verbose, right? Doesn't this actually start to become easier using coding techniques? Strong definitions? Have we all forgotten the old saying "a computer does exactly what you tell it to do, not what you intend to tell it to do"? Vibe coding only adds a level of abstraction to that. It becomes "a computer does what it 'thinks' you are telling it to do, not what you intend to tell it to do". Be honest with yourself: which paradigm is easier to debug?

Natural language is awesome because the abstraction really compresses concepts, but it requires inference of the listener. It requires you to determine what the speaker intends to say rather than what the speaker actually says.

Without that you'd have to be pedantic to even describe something as mundane as making a sandwich[1]. But inference also leads to misunderstandings and frankly, that is a major factor of why we talk past one another when talking on large global communication systems. Have you never experienced culture shock? Never experienced where someone misinterprets you and you realize that their interpretation was entirely reasonable?[2] Doesn't this knowledge also help resolve misunderstandings as you take a step back and recheck assumptions about these inferences?

  > using temperature 0.0
Because, as you should be able to infer from everything I've said above, the problem isn't actually about randomness in the system. Making the system deterministic only has one realistic outcome: a programming language. You're still left with the computer doing what you tell it to do, but have made this more abstract. You've only turned it into the PB&J problem[1] and frankly, I'd rather write code than instructions like those kids are. Compared to the natural language the kids are using, code is more concise, easier to understand, more robust, and more flexible.

I really think Dijkstra explains things well[0]. (I really do encourage reading the entire thing. It is short and worth the 2 minutes. His remark at the end is especially relevant in our modern world where it is so easy to misunderstand one another...)

  The virtue of formal texts is that their manipulations, in order to be legitimate, need to satisfy only a few simple rules; they are, when you come to think of it, an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.

  Instead of regarding the obligation to use formal symbols as a burden, we should regard the convenience of using them as a privilege: thanks to them, school children can learn to do what in earlier days only genius could achieve. 
[0] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...

[1] https://www.youtube.com/watch?v=FN2RM-CHkuI

[2] Has this happened to you and you've been too stubborn to realize the interpretation was reasonable?

simonwsucks 13 hours ago||
[dead]
karmasimida 13 hours ago|
> If you would like to grieve, I invite you to grieve with me.

I think we should move past this quickly. Coding itself is fun, but it is also labour; building something is what is rewarding.

notnullorvoid 11 hours ago|
By that logic prompting an AI is also labour.

It's not even always a more efficient form of labour. I've experienced many scenarios with AI where prompting it to do the right thing takes longer and requires writing/reading more text compared to writing the code myself.
