Posted by delaugust 7 hours ago
IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.
Claude Code is incredible. Where I work, there's a huge number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.
I find it hard to buy into non-SWEs' opinions on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don't doubt that they don't yet have compelling AI tooling.
Kind of like how, for the longest time, Google was best at finding solutions to programming problems and programming documentation: a Google built by librarians, say, would have a totally different slant.
Perhaps that's why designers don't see it yet: no designers have built Claude's 'world-view'.
They have not necessarily changed the rate at which I produce valuable outputs (yet).
With subagents and A2A (agent-to-agent protocols) generally, you should be able to hook any of them into your preferred agentic interface.
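For what it's worth, here is a rough sketch of what that hook-up looks like with an A2A-style agent: discover it through its agent card, then hand it a task over JSON-RPC. The host, skill names, and payload shape are assumptions on my part (and the method name has shifted between spec revisions), so treat this as pseudocode with types rather than a verified client.

```python
# Rough sketch, not a verified client: discovering a remote agent via an
# A2A-style agent card and sending it one task. The base URL is made up,
# and the JSON-RPC method/payload names follow one revision of the spec.
import uuid
import requests

AGENT_BASE = "https://agent.example.com"  # hypothetical remote agent

# Discovery: A2A agents advertise a JSON "agent card" at a well-known path.
card = requests.get(f"{AGENT_BASE}/.well-known/agent.json", timeout=10).json()
print(card["name"], "-", [skill["name"] for skill in card.get("skills", [])])

# Delegation: tasks go to the endpoint named in the card as JSON-RPC calls,
# so any interface that speaks the protocol can drive any conforming agent.
task = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",  # renamed in later spec revisions
    "params": {
        "id": str(uuid.uuid4()),
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarise yesterday's build failures"}],
        },
    },
}
print(requests.post(card["url"], json=task, timeout=60).json())
```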
AGI is a lot of things, a lot of ever-moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, arguing against ASI being a thing when they're talking about AGI. "Human-level competence" is AGI. Super-human, ever-improving, infinite growth - that's ASI.
Whether and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would say that what we have today is "AGI"?
I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently.
There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts.
Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity gains; they could be throwing more bodies at the problem and we'd never know.
The challenge in competing with these products is not code. The challenge of competing in lucrative markets that need a fresh approach is also generally not code. So I'm not sure that's a good metric for evaluating LLMs for code generation.
How are we building _for_ ourselves when we literally automate away our jobs? This is probably one of the _worst_ things someone could do to me.
Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.
The declared goal of AI is to automate software engineering entirely. That is in no way comparable to building an assembler. So the question is mostly whether or not this goal will be achieved.
Still, nobody is building these systems _for_ me. They're building them to replace me, because my living is too much for them to pay.
This is the crux of the OP's argument, adding in that (in the meantime) the incumbents and/or bad actors will use it as a path to intensify their political and economic power.
But to me the article fails to:
(1) actually make the case that AI's not going to be 'valuable enough', which is a sweeping and bold claim (especially in light of its speed); and
(2) quantify AI's true value versus the crazy overhyped valuation, which is admittedly hard to do - but matters if we're talking 10% or 100x overvalued.
If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that that trend seems to be continuing - then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that).
Higher Ed is in crisis; VC has bet their entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech co's are paying crazy amounts for top AI talent... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change.
As for #2 - well, that's the whole rub, isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stops, then go make your fortune! "The Big Short 2: The AI Boogaloo".
My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain.
This might be the money quote, encapsulating the difference between people who say their work benefits from LLMs and those who don't. Expecting it to one-shot your entire module will leave you disappointed; using it for code completion, generating documentation, and small-scale agentic tasks frees you up from a lot of little trivial distractions (a sketch of one such task follows below).

I think one huge issue in my life has been: getting started.
If AI helps with this, I think it is worth it.
Even if getting started is incorrect, it sparks outrage and an "I'll fix this" momentum.
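To make the earlier "smaller the use case, larger the gain" point concrete, here is a minimal sketch of one such trivial-distraction task: generating a docstring for a single function. It assumes the OpenAI Python client with an API key in the environment; the model name is a placeholder and the target function is invented.

```python
# Minimal sketch of a small-scale LLM task: document one function rather
# than one-shotting a module. Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
import inspect

from openai import OpenAI

def shard_key(user_id: int, n_shards: int) -> int:  # invented example target
    return hash((user_id, "v2")) % n_shards

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{
        "role": "user",
        "content": "Write a concise docstring for this function. "
                   "Return only the docstring.\n\n" + inspect.getsource(shard_key),
    }],
)
print(response.choices[0].message.content)
```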
Worth what? I probably agree: the greenfield, rote mechanical tasks of putting together something like a basic interface, somewhat thorough unit tests, or a basic state container that maps to a complicated typed endpoint are things I'd procrastinate on, or that would otherwise drain my energy before I got started.
But that real, tangible value does need to have an agreeable *price* and *cost* depending on the context. For me, that price ceiling depends on how often and to what extent it's able to contribute to generating maximum overall value. In terms of personal economic value, though (the proportion of my fixed time I'm spending on which work), if it's on an upward trend of practical utility, that means I'm actually increasing the proportion of dull tasks I'm spending my time on... potentially.
Kind of like how having a car makes it so comfortable, easy, and ostensibly fast for an individual to get somewhere (theoretically freeing up time for all kinds of other activities) that some people justify endless amounts of debt to acquire one, and the parameters of where they're willing to live shift further and further out, to the point where nearly all of their free time, energy, and money is spent on driving, all of their kids depend on driving, and society accepts it as an unavoidable necessity; the deaths, the environmental damage, and the side effects of decreased physical activity and increased stress come along for the ride. Likewise, various chat platforms tried to make communication so frictionless that I now want to exchange messages with people far less than ever before. Effectively a footgun.
Maybe America is once again demolishing its cities so it can plow a freeway through, and before we know it every city will be Dallas and every road will feel like commuting from San Jose to anywhere else. Metaphorically, of course, but also literally in the case of infrastructure build-out. When will it be too late to realize that we should have just accepted the tiny bit of hardship of walking to the grocery store?
------
All of that might be a bit excessive lol, but I guess we'll find out
If it's a less-trodden path, expect it to hallucinate some settings.

Another thing I regularly see: it adds some random other settings without comment, and then when you ask it about them it goes: whoops, yeah, those aren't necessary.
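One cheap guard against this, sketched below: diff the generated settings against the keys the tool actually documents before applying anything. All the key names here are invented for illustration.

```python
# Sketch of a hallucination check for LLM-generated config: compare the
# generated keys against a known-good set. All names here are invented.
import json

KNOWN_KEYS = {"host", "port", "timeout_s", "retries", "log_level"}

generated = json.loads("""
{"host": "db.internal", "port": 5432, "timeout_s": 30,
 "connection_jitter": 0.2, "retries": 3}
""")

unknown = set(generated) - KNOWN_KEYS
if unknown:
    # "connection_jitter" plays the hallucinated setting here: a plausible
    # name that appears in no schema. Ask the model about it and it folds.
    raise SystemExit(f"unrecognised settings (possible hallucinations): {sorted(unknown)}")
```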
"This lump of code is producing this behaviour when I don't want to"
Is a quick way to find/fix bugs (IME)
BUT it requires me to understand the response (sometimes the AI hits the nail on the head, sometimes it says something that makes my brain - that's not it, but now I know exactly what it is
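The pattern scripts easily, too; a sketch using the same OpenAI client as above (the model name is a placeholder, and the buggy snippet is invented):

```python
# Sketch of the "here's the code, here's the unwanted behaviour" prompt.
# Assumes the OpenAI Python client; the model name is a placeholder.
from openai import OpenAI

snippet = '''
def moving_avg(xs, n):
    return [sum(xs[i:i + n]) / n for i in range(len(xs))]
'''
behaviour = "the last n-1 values are too small; I don't want the window to shrink"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{
        "role": "user",
        "content": "This lump of code is producing this behaviour when I "
                   f"don't want it to.\n\nBehaviour: {behaviour}\n\nCode:\n{snippet}\n"
                   "What is the likely cause?",
    }],
)
# You still have to judge the answer: sometimes it nails it, sometimes the
# wrong guess is exactly what points you at the real bug.
print(response.choices[0].message.content)
```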
We are a decade or two into having massive video coverage, such that you are probably on someone's camera for much of your day out in the world, with video feeds that are increasingly cloud-hosted.
But nobody could possibly watch all that video. Even for cameras specifically controlled by the police, the footage had already outstripped the ability to have humans monitoring it. At best you could refer to it when you had reason to think there'd be something on it, and even that was hugely expensive in human time.
Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."
The total surveillance society is coming.
I think that, in retrospect, this will be seen as the biggest impact AI had on society. I, for one, am not looking forward to it.
> But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.

Ideas are a penny a dozen - the bottleneck is the money/compute to test them at scale.
What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?
Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. That isn't going to accelerate AI progress. It just makes ChatGPT more profitable.
Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?
All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?
What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.
I am suspicious of the idea that the world can be modeled linearly. It also seems more logically sound that physical reality is non-linear, so why is there such a clear straight line drawn from compute to consciousness?
Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.
Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into, and we don't know how to deal with it. I don't have any answers. Seems like it'll get worse before it gets better.