Posted by milkglass 22 hours ago
As it was said, the future is already here, it's just not evenly distributed, so some people are still sailing, manufacturing things, and writing code, and will be for some time.
Coding is different, though: coding doesn't have a cost barrier, it has an ability barrier. I think we will lose a lot of people who were never passionate about programming, and perhaps go back to a happy equilibrium. AI is only production-ready if you have someone who understands software development. AI will improve speed to market if you have the right team; it doesn't remove the need for someone to learn to code. You will of course end up with startups using exclusively AI, but they will be the ones who end up with major security breaches or simply cannot scale as the AI goes in the wrong direction. Tbh that's probably a positive, as it weeds out the startups that are focused on buzzwords for funding and not product.
Why is speed-to-market such an important metric? I do not understand the need to mimic the largest players in the industry, nor do I see any particularly profound long-term benefits to first-mover advantage.
Anecdotally, what I’m seeing right now is the opposite. People who don’t care about programming are joining, while those who do care are getting tired of the bullshit and leaving. The good programmers are the ones leaving, the hacks are extremely happy to use LLMs.
When shit hits the fan, there won't be many people left to clean it up.
Because it seems to me like there's a lot of coding-adjacent things they still need to be able to do even if they never look at a line of code.
Just like how we don't need to be able to open our kernels and manually update the OS, the software company does it for us. It helps knowing the kernel, but you can still get the security updates even if you don't know how to.
And I think you should avoid making assumptions about people you know nothing about. That is so far from the truth it’s not even funny.
> 90% of the world will not be able to even open a .py
Which is nowhere near my argument. I’d appreciate if you engaged with what I said or not at all.
I'm sure you have a problem with that statement too but keep it to yourself because I don't want to hear it.
But civilisations have always forgotten things and then had to re-engineer them. We only recently recreated Roman-equivalent concrete; the knowledge required to build the Saturn V rockets had to be re-engineered; we can't recreate medieval stained glass exactly, or Viking Ulfberht swords; we would struggle to make Betamax tape today.
Many of the examples I found (as expected) relate to military or commercially sensitive technology that did not get written down (for obvious reasons).
It also reminded me of when I read Thomas Thwaites' "The Toaster Project: Or a Heroic Attempt to Build a Simple Electric Appliance from Scratch", where, to make a smelter from scratch, he relied on a 450-year-old book ("De re metallica" by Georgius Agricola) as well as a friendly metallurgist.
We have already lost the widespread ability to write assembler in an artisanal way. Now that we have AI, we will also be lazy about how we write individual bits of artisanal code. So what? Yes, it will cost more (in time and money) when we need to re-engineer, but how much would it cost to keep alive all the knowledge and skills we might possibly need in the future?
We had better make sure we write down and preserve the recorded data though :)
How America developed shale oil into a viable industry so quickly is one example.
There will always be specialists who can really debug stuff. Mechanics, etc. Time moves on, and we need to move with it.
I’m amazed at this “end-of-world” crap. People use AI to write this shit, to make it even crazier.
I have an idea about what the difference between the groups might be. I think for the latter group the code is an important part of the goal. They see software as ends rather than means. Not entirely, of course.
And the first group considers the artifacts that the software produces to be the goal. So as long as AI-written software is capable of producing a valuable artifact, they are willing and eager to go with it. And AI does that.
If the result of my code is the fine-tuning of a neural network, I don't really care how it happened. I can benchmark it afterwards and know whether the code the AI wrote for this purpose was good or not. I can inspect the code, investigate it, pinpoint ideas I don't like, suggest ideas to try that I believe could give better results. I can restart, or try doing the same thing a few times in parallel with different harnesses and models. All in service of the result, which is not the code.
If you have a program that needs to do something and are willing to try AI to write it, think foremost about how you can rephrase the problem so that the output of the AI-written program becomes an artifact that can be independently verified; in other words, how to turn the desired behavior into an artifact you can evaluate.
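The approach above can be sketched in a few lines: instead of reviewing AI-written code line by line, you check measurable properties of its output. A minimal illustration in Python, where `normalize` stands in for hypothetical AI-generated code (all names here are made up for the example):

```python
def normalize(values):
    # Stand-in for AI-generated code whose internals we don't review closely.
    total = sum(values)
    return [v / total for v in values]

def verify(values):
    # Independent checks on the artifact (the output), not the code:
    out = normalize(values)
    assert len(out) == len(values)            # same shape as the input
    assert abs(sum(out) - 1.0) < 1e-9         # result sums to one
    assert all(v >= 0 for v in out)           # non-negative for non-negative input
    return out

print(verify([2.0, 3.0, 5.0]))
```

If the checks pass, the implementation served its purpose; if they fail, you regenerate or fix it, without ever having to trust the code on inspection alone.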
This is weird, but does seem a common result.
AI generates a ton of code fast, but then the human takes a long time to review, every time the prompt changes. The AI takes a few minutes to generate code that the human will take an hour to review.
The reviewing takes longer than if the human had just written the code. So why is it so difficult to go back to coding instead of prompting?