>This is the exact same feeling I’m left with after a few days of using Claude Code.
For me what matters is the end result, not the mere act of writing code. What I enjoy is solving problems and building stuff; writing code is only one part of that.
I would gladly use a tool to speed up that part.
But from my testing, unless the task is trivial, using AI isn't always a walk in the park: it's neither simple nor efficient.
Most professional software development hasn't been fun for years, mostly because of all the required ceremony around it. But it doesn't matter: for your hobby projects you can do what you want, and it's up to you how much you let AI change that.
Thought it was OK to use `new` for an object literal in JS.
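(A guess at what's being complained about here, with a made-up config object purely as illustration: the generated code reaching for `new Object()` where a plain literal is the idiomatic choice.)

    // Presumably the kind of thing the model produced:
    const config = new Object();
    config.retries = 3;
    config.timeout = 5000;

    // The idiomatic equivalent is a plain object literal:
    const config2 = { retries: 3, timeout: 5000 };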
Depends whether you're in it for the endgame or the journey.
For some the latter is a means to the former, and for others it's the other way around.
That's the opposite of what's happened over the past year or two. Now many more non-technical people can build (and are building) software.
Setting aside the fact that the author nowhere says this, it may in fact be plausible.
> That's the opposite of what's happened over the past year or two. Now many more non-technical people can build (and are building) software.
Meanwhile half[0] the students supposed to be learning to build software in university will fail to learn something important because they asked Claude instead of thinking about it. (Or all the students using LLMs will fail to learn something half the time, etc.)
[0]: https://www.anthropic.com/news/anthropic-education-report-ho...
> That said, nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement.
The author tells of his experience regarding the joy of programming things and figuring stuff out. In the end he says that AI made him lose this joy, and he compares it to cheating in a game. He does not say one word about societal impact or the number of engineers in the future; that is what you interpreted yourself.
Your comment is talking about the ability to build software, whereas the article (in the single sentence that references this topic, while the other 99% circles around something else) talks about the job market situation. If what you wanted to say was "The author is arguing that people will probably have a harder time getting a job in software development", that would have been correct.
> That's the opposite of what's happened over the past year or two. Now many more non-technical people can build (and are building) software.
You're (based on the new comment) explicitly saying that people without technical knowledge are getting jobs in the software development sector. Where did you get that info from? It would be an interesting read for sure, if it's actually true.
I feel the same about a lot of the points made here, but hadn't yet thought about the financial one.
When I started out with web development, that was one of the things I really loved. Anyone can just read about HTML, CSS and JavaScript and get started with any kind of free-to-use code editor.
Though you can still do just that, it seems like you would always lag behind the 'cool guys' using AI.
Without the punctuation, I first read it tautologically as "Most devs that use AI blindly, trust it instead of questioning what it produces". But even assuming you meant "Most devs that use AI, blindly trust it instead of questioning what it produces", there's still a negative feedback loop. We're still at the early experimentation phase, but if/when AI capabilities eventually settle down, people will adapt, learning when they can trust the AI coder and when to take the reins; that would be the skill people are hired for.
Alternatively, we could be headed towards an intelligence explosion, with AI growing in capabilities until it surpasses human coders at almost all types of coding work, except perhaps for particular tasks which the AI dev could then delegate to a human.
What makes you think that will be necessary?
With today's AI, driven by the code examples it was trained on, it seems more likely to do a good job of optimization in many cases than to have gleaned the principles of conquering complexity, writing bug-free code that is easy and flexible to modify, etc. To learn these "journeyman skills", an LLM would need access to a large number of LARGE projects (not just Stack Overflow snippets) and/or to the thought processes (typically not written down) behind why certain design decisions were made for a given project.
So, at least for the time being, as a developer wielding AI as a tool, I think we can still have the satisfaction of the higher-level design (which may be unwise to leave to the AI until it is better able to reason and learn), while leaving the drudgework (& a little bit of the fun) of coding to the tool. In any case we can still have the satisfaction of dreaming something up and making it real.
My (naive?) assumption is that all of this will come down: the price (eventually free) and the energy costs.
Then again, my daughters know I am a Pollyanna (someone has to be).
The hardware for AI is getting cheaper and more efficient, and the models are getting less wasteful too.
Just a few years ago GPT-3.5 was secret sauce running on the most expensive GPU racks, and now models beating it are available with open weights and run on high-end consumer hardware. A few iterations down the line, good-enough models will run on average hardware.
When that XCOM game came out, filmmaking, 3D graphics, and machine learning required super expensive hardware out of reach of most people. Now you can find objectively better hardware literally in the trash.
Moore's law is withering away due to physical limitations. Energy prices are going up because of the end of fossil fuels and rising climate-change costs. Furthermore, the global supply chain is under strain from rising geopolitical tension.
Depending on US tariffs, how the Taiwan situation plays out, and many other risks, it might be that compute will get MORE expensive in the future.
While there is room for optimization on the generative AI front, we still haven't reached the point where generative AI is actually good at programming. We have promising toys, but for real productivity we need orders of magnitude bigger models. Just look at how GPT-4.5 is barely economically viable already with its price per token.
Sure, if humanity survives long enough to widely employ fusion energy, it might become practical and cheap again, but that will be a long and rocky road.
The way we use LLMs is also primitive and inefficient. RAG is a hack, and in most LLM architectures the RAM cost grows quadratically with the context length, in a workload that is already DRAM-bound, on hardware that already doesn't have enough RAM.
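To put rough numbers on that quadratic growth: a back-of-the-envelope sketch of a naively materialized attention score matrix, assuming fp16 and a single head/layer (real kernels such as FlashAttention avoid storing this matrix, but the quadratic term is the point):

    // Naive attention materializes an n_tokens x n_tokens score matrix.
    // Assumes fp16 (2 bytes per value), one head and one layer; illustrative only.
    function attentionMatrixGiB(contextLength) {
      const bytesPerValue = 2; // fp16
      return (contextLength * contextLength * bytesPerValue) / 2 ** 30;
    }

    for (const n of [8192, 32768, 131072]) {
      console.log(`${n} tokens -> ~${attentionMatrixGiB(n).toFixed(1)} GiB`);
    }

Quadrupling the context multiplies that term by sixteen, which is one reason long contexts are so memory-hungry.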
> Depending on US tariffs […] end of fossil fuels […] global supply chain
It does look pretty bleak for the US.
OTOH China is rolling out more than a gigawatt of renewables a day, has the largest and fastest growing HVDC grid, a dominant position in battery and solar production, and all the supply chains. With the US going back to mercantilism and isolationism, China is going to have Taiwan too.
I'm not sure how much RAM is in the average smartphone owned by someone earning $5/day*, but it's absolutely not going to be the half a terabyte needed for the larger models whose weights you can just download (rough arithmetic sketched below).
It will change, but I don't know how fast.
* I kinda expect that to be around the threshold where they will actually have a smartphone, even though the number of smartphones in the world is greater than the number of people
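To show where a figure like "half a terabyte" comes from: a back-of-the-envelope sketch, with illustrative parameter counts and precisions rather than any particular model's real numbers:

    // Rough size of a model's weights: parameter count x bytes per parameter.
    // Ignores the KV cache, activations, and everything else that also needs RAM.
    const weightsGB = (params, bytesPerParam) => (params * bytesPerParam) / 1e9;

    console.log(weightsGB(8e9, 2));   // 16  GB: an 8B model at fp16, high-end consumer territory
    console.log(weightsGB(405e9, 1)); // 405 GB: a 405B model at 8-bit, the half-terabyte class
    console.log(weightsGB(405e9, 2)); // 810 GB: the same model at fp16

Quantization only shaves a constant factor off that, so phone-class hardware is still a long way from the largest open-weight models.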