Posted by martinald 5 days ago
It took a lot of convincing, but I finally got her to start using ChatGPT to help her write SQL and to walk her through setting up formulas in some SaaS accounting software.
It worked so well that now she's trying to find more applications at work. Claude Code is too scary for her, though. That will need to be in some web UI before she feels comfortable giving it a try.
Maybe it's not a big deal, or maybe it's a compliance model with severe financial penalties for non-compliance. I just personally don't like these tradeoffs being made implicitly.
Let's take the group of developers (to keep it simple) who have a deep understanding of LLMs and how they work. Even then, some don't care if it generates entire codebases for them; some know there will be bugs in it and just don't care. Some do care, but they know their job is to make their project managers happy. Others feel no such apathy or pressure, yet they'll still use it the same way, because for one reason or another it saves them time. I'm probably missing more examples, but the point is: same usage, different motivations, people, and environments.
I very much doubt that tinkering with a non-repeatable, probabilistic process is how most non-technical users will routinely use software.
I can imagine power users taking this approach to _create_ or extend productivity tools for themselves and others, just like they have been doing with Excel for decades. It will not _replace_ productivity tools for most non-technical users.
One tidbit I’d disagree with is the claim that only those using the bleeding-edge AI tools are reaping the benefits. There seem to be a lot of highly specialized tools and a lot of specific configurations (and mystical incantations) to get them to work, and those are constantly changing and being updated. The bleeding edge is a dangerous place to be if you value your time (and sanity).
Personally, as someone working on moderate-to-highly complex software (live inference of industrial IoT data), I can’t really open a merge / pull request for my colleagues to review unless I 100% understand what I’ve pushed and can explain it to them as well.
My killer app for AI would just be a CLI that gets me to a commit based on moderately technical input:
“Add this configuration variable for this entry point; split this class into two classes, one for each of the responsibilities that are currently crammed together; update the unit tests to reflect these changes, including splitting the tests for the old class into two different test classes; etc”
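In spirit that's a thin wrapper, and a minimal sketch is below. Everything in it is hypothetical: complete() stands in for whatever LLM backend you'd wire up, the tool name ai-commit is made up, and the unified-diff contract is an assumed design, not how Claude Code or Codex actually work.

    #!/usr/bin/env python3
    """Sketch of an 'instruction -> commit' CLI.

    Hypothetical throughout: complete() is a placeholder for a real
    LLM call, and the model is assumed to answer with a unified diff.
    """
    import subprocess
    import sys

    def complete(prompt: str) -> str:
        # Placeholder for a real LLM call (openai/anthropic SDK, etc.).
        raise NotImplementedError("wire up your model provider here")

    def main() -> None:
        instruction = " ".join(sys.argv[1:])
        if not instruction:
            sys.exit("usage: ai-commit '<refactoring instruction>'")

        # Give the model enough repo context to write a patch against.
        files = subprocess.check_output(["git", "ls-files"], text=True)
        prompt = (
            "Reply with a unified diff implementing this change, nothing else.\n"
            f"Instruction: {instruction}\n"
            f"Files in the repo:\n{files}"
        )
        patch = complete(prompt)

        # Apply the patch and stop at a commit; review it with `git show`.
        subprocess.run(["git", "apply", "-"], input=patch, text=True, check=True)
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", instruction], check=True)

    if __name__ == "__main__":
        main()

The hard part, of course, is the model reliably producing a correct patch, which is exactly the part the sketch punts on.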
But all the hype at the bleeding edge is around abstracting away the entire coding process until you don’t even understand what code is being generated? Hard to see that as anything but a pipe dream. AI is useful, but it’s not a panacea: you can’t fire it and replace it when it fucks up.
Granted, I'm way behind the curve, but is this not how actual engineers (and not influencers) are using it? I heavily micromanage the implementation because my manager still expects me to know the code.
That's the type of input I give to Claude / Codex. Works for me.
The less you understand about code to start with, the quicker you achieve this goal... and the less prepared you are for the consequences.
I do wonder how long they'll be able to use this to their advantage before something "else" comes along. Like how IE had the largest market share before Chrome and other alternatives started catching up.
Then again, some products, like YouTube, still haven't faced any real serious alternatives. Maybe ChatGPT will always be number one in consumers' eyes.
Small companies are more agile and innovative while corporations often just shuffle papers around. Wow, what a bold claim, never seen before in the entire history of economics.