Posted by delaugust 7 hours ago

AI is a front for consolidation of resources and power (www.chrbutler.com)
122 points | 91 comments
sockgrant 6 hours ago|
“As a designer…”

IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.

Claude Code is incredible. Where I work, there are an incredible number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.

I find it hard to buy into the opinions of non-SWEs on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don’t doubt that they don’t yet have compelling AI tooling.

a_bonobo 31 minutes ago||
I think that's also because Claude Code (and LLMs) is built by engineers who think of their target audience as engineers; they can only think of the world through their own lenses.

Kind of like how, for the longest time, Google used to be best at finding solutions to programming problems and programming documentation: a Google built by librarians, say, would have a totally different slant.

Perhaps that's why designers don't see it yet: no designers have built Claude's 'world-view'.

lumost 43 minutes ago|||
I think the question is whether those AI tools make you produce more value. Anecdotally, the AI tools have changed my workflow and allowed me to produce more tools etc.

They have not necessarily changed the rate at which I produce valuable outputs (yet).

ihaveajob 6 hours ago|||
I'm curious if you could share something about custom agents. I love Claude Code and I'm trying to get it into more places in my workflow, so ideas like that would probably be useful.
verdverm 6 hours ago||
I've been using Google ADK to create custom agents (fantastic SDK).

With subagents and A2A generally, you should be able to hook any of them into your preferred agentic interface.
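
For a sense of the shape of it, a minimal ADK agent looks roughly like this (a sketch, not from any real codebase; the lookup_ticket tool is a made-up stand-in for internal tooling):

  # Minimal sketch of a custom agent with Google ADK (pip install google-adk).
  from google.adk.agents import Agent

  def lookup_ticket(ticket_id: str) -> dict:
      """Fetch a ticket from a (hypothetical) internal tracker."""
      # A real version would call the internal API here.
      return {"id": ticket_id, "status": "open", "assignee": "someone"}

  root_agent = Agent(
      name="ticket_helper",
      model="gemini-2.0-flash",
      instruction="Answer questions about tickets using the lookup tool.",
      tools=[lookup_ticket],  # plain Python functions become callable tools
  )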

hagbarth 6 hours ago|||
If you read a little further in the article, the main point is _not_ that AI is useless, but rather that AI is a regular technology, not AGI god-building. A valuable one, but not infinite growth.
NitpickLawyer 6 hours ago||
> But rather that AI is a regular technology, not AGI god-building. A valuable one, but not infinite growth.

AGI is a lot of things, a lot of ever moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory / singularity and all that stuff. I see more and more people mixing the two, and arguing against ASI being a thing, when talking about AGI. "Human level competences" is AGI. Super-human, ever improving, infinite growth - that's ASI.

If and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would say that what we have today is "AGI"?

hagbarth 6 hours ago|||
Sam Altman has been beating[1] the ASI drum for a while now. I don't think it's a stretch to say that this is the vision he is selling.

[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...

xeckr 6 hours ago|||
Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.
HarHarVeryFunny 5 hours ago||
[dead]
monkaiju 42 minutes ago|||
All I see it doing, as a SWE, is limiting the speed at which my co-workers learn and worsening the quality of their output. Finally many are noticing this and using it less...
hollowturtle 5 hours ago|||
Where are the products? This site, and everywhere else around the internet, on X, LinkedIn and so on, is full of crazy claims, and I have yet to see a product that people need and that actually works. What I'm experiencing is a gigantic enshittification everywhere: Windows sucks, web apps are bloated, slow and uninteresting. Infrastructure goes down even with "memory safe Rust", burning millions and millions of compute for scaffolding stupid stuff. Such a disappointment
redorb 5 hours ago||
I think ChatGPT itself is an epic product, and Cursor has insane growth and usage. I also think they are both over-hyped and have too high a valuation.
layer8 4 hours ago|||
Citing AI software as the only example of how AI benefits software development has a touch of self-help books that describe how to attain success and fulfillment by writing self-help books.

I don’t disagree that these are useful tools, by the way. I just haven’t seen any discernible uptick in general software quality and utility either, nor any economic uptick that should presumably follow from being able to develop software more efficiently.

emp17344 4 hours ago||||
It doesn’t matter what you think. Where’s all the data proving that AI is actually valuable? All we have are anecdotes and promises.
hollowturtle 5 hours ago||||
ChatGPT is... a chat with some "augmentation" features, aka outputting rich HTML responses; nothing new except the generative side. Cursor is a VSCode fork with a custom model and a very good autocomplete integration. Again, where are the products? Where the heck is the Windows that works reliably, without the bloat, before it becomes totally agentic (and therefore idiotic, since it doesn't work reliably)?
oblio 4 hours ago|||
I agree with everyone else, where is the Microsoft Office competitor created by 2 geeks in a garage with Claude Code? Where is the Exchange replacement created by a company of 20 people?

There are many really lucrative markets that need a fresh approach, and AI doesn't seem to have caused a huge explosion of new software created by upstarts.

Or am I missing something? Where are the consumer-facing software apps developed primarily with AI by smaller companies? I'm excluding big companies because in their case it's impossible to prove the productivity gains; they could be throwing more bodies at the problem and we'd never know.

sfgvvxsfccdd 7 minutes ago||
> Office…Exchange

The challenge in competing with these products is not code. The challenge competing in lucrative markets that need a fresh approach is also generally not code. So I’m not sure that is a good metric to evaluate LLMs for code generation.

muldvarp 5 hours ago||
> IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.

How are we building _for_ ourselves when we literally automate away our jobs? This is probably one of the _worst_ things someone could do to me.

DennisP 5 hours ago||
Software engineers have been automating our own work since we built the first assembler. So far it's just made us more productive and valuable, because the demand for software has been effectively unlimited.

Maybe that will continue with AI, or maybe our long-standing habit will finally turn against us.

muldvarp 4 hours ago||
> Software engineers have been automating our own work since we built the first assembler.

The declared goal of AI is to automate software engineering entirely. This is in no way comparable to building an assembler. So the question is mostly whether or not this goal will be achieved.

Still, nobody is building these systems _for_ me. They're building them to replace me, because my living costs too much for them to pay.

corry 6 hours ago||
"The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe."

This is the crux of the OP's argument, adding in that (in the meantime) the incumbents and/or bad actors will use it as a path to intensify their political and economic power.

But to me the article fails to:

(1) actually make the case that AI is not going to be 'valuable enough', which is a sweeping and bold claim (especially in light of the speed of AI's progress), and

(2) quantify AI's true value versus the crazy overhyped valuation, which is admittedly hard to do, but matters whether we're talking 10% or 100x overvalued.

If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that the trend seems to be continuing, then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that).

Higher Ed is in crisis; VC has bet its entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech co's are paying crazy amounts for top AI talent... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change.

As for #2 - well, that's the whole rub, isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stops, then go make your fortune! "The Big Short 2: The AI Boogaloo".

Dilettante_ 6 hours ago||

  My experience with AI in the design context tends to reflect what I think is generally true about AI in the workplace: the smaller the use case, the larger the gain.
This might be the money quote, encapsulating the difference between people who say their work benefits from LLMs and those who don't. Expecting it to one-shot your entire module will leave you disappointed; using it for code completion, generating documentation, and small-scale agentic tasks frees you up from a lot of little trivial distractions.
m463 5 hours ago||
> frees you up from a lot of little trivial distractions.

I think one huge issue in my life has been: getting started

If AI helps with this, I think it is worth it.

Even if the start it gives me is incorrect, it sparks outrage and an "I'll fix this" momentum.

brailsafe 24 minutes ago|||
> If AI helps with this, I think it is worth it.

Worth what? I probably agree, the greenfield rote mechanical tasks of putting together something like a basic interface, somewhat thorough unit tests, or a basic state container that maps to a complicated typed endpoint are things I'd procrastinate on or would otherwise drain my energy before I get started.

But that real tangible value does need to have an agreeable *price* and *cost* depending on the context. For me, that price ceiling depends on how often and to what extent it's able to contribute to generating maximum overall value, but in terms of personal economic value (the proportion of my fixed time I'm spending on which work), if it's on an upward trend of practical utility, that means I'm actually increasing the proportion of dull tasks I'm spending my time on... potentially.

Kind of like how having a car makes it so comfortable and easy and ostensibly fast to get somewhere for an individual (theoretically freeing up time to do all kinds of other activities) that some people justify endless amounts of debt to acquire one, allowing the parameters of where they're willing to live to shift further and further, to the point where nearly all of their free time, energy, and money is spent on driving, all of their kids depend on driving, and society accepts it as an unavoidable necessity; all the deaths, environmental damage, and side-effects of decreased physical activity and increased stress come along for the ride. Likewise, various chat platforms tried to make communication so frictionless that I actually now want to exchange messages with people far less than ever before; effectively a footgun

Maybe America is once again demolishing its cities so it can plow through a freeway, and before we know it, every city will be Dallas and every road will be like commuting from San Jose to anywhere else (metaphorically of course, but also literally in the case of infrastructure buildout). When will it be too late to realize that we should have just accepted the tiny bit of hardship of walking to the grocery store?

------

All of that might be a bit excessive lol, but I guess we'll find out

judahmeek 16 minutes ago|||
If? Shouldn't you know by now whether AI does or doesn't help with that? ;D
kulahan 5 hours ago|||
An agentic git interface might be nice, though hallucinations seem like they could create a really messy problem. Still, you could just roll back in that case, I suppose. Anyways, it would be nice to tell it where I'm trying to get to and let it figure out how to get there.
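
A guardrail for that rollback case might look like this sketch (hypothetical; assumes you're inside a git repo and that the agent only runs after HEAD is snapshotted):

  # Hypothetical guard for an agentic git tool: snapshot HEAD to a backup
  # branch before the agent runs, so any hallucinated mess is recoverable.
  import subprocess
  import time

  def git(*args: str) -> str:
      """Run a git command and return its stdout."""
      return subprocess.run(
          ["git", *args], capture_output=True, text=True, check=True
      ).stdout.strip()

  def snapshot() -> str:
      """Create a backup branch at HEAD before handing control to the agent."""
      name = f"agent-backup-{int(time.time())}"
      git("branch", name)
      return name

  backup = snapshot()
  # ... the agent performs its commits/rebases here ...
  # If it went off the rails: git reset --hard <backup>
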
helterskelter 5 hours ago|||
Honestly, one of the best use cases I've found for it is creating configs. It used to be that I'd spend a week fiddling around with, say, nvim settings. Now I tell an LLM what I want and it basically gives it to me, without having to do trial and error or locate some obscure comment from 2005 that tells me what I need to know.
mattmanser 4 hours ago||
Depends what you're doing.

If it's a less-trodden path, expect it to hallucinate some settings.

Also, a regular thing I see is that it adds some random other settings without comment, and then when you ask it about them it goes: whoops, yeah, those aren't necessary.

awesome_dude 6 hours ago||
And bug fixes

“This lump of code is producing this behaviour and I don't want it to”

Is a quick way to find/fix bugs (IME)

BUT it requires me to understand the response (sometimes the AI hits the nail on the head, sometimes it says something that makes my brain go: that's not it, but now I know exactly what it is).

jrochkind1 30 minutes ago||
I'd like more people to talk about AI and surveillance. I think that is going to be one of its biggest impacts on society(ies).

We are a decade or two into having massive video coverage, such that you are probably on someone's camera for much of your day out in the world, and video feeds are increasingly cloud hosted.

But nobody could possibly watch all that video. Even for cameras specifically controlled by the police, the footage had already outstripped the ability to have humans monitoring it. At best you could refer to it when you had reason to think there'd be something on it, and even that was hugely expensive in human time.

Enter AI. "Find where Joe Schmoe was at 3:30pm yesterday and show me the video" "Give me a written summary of all the cars which crossed into the city from east to west yesterday afternoon." "Give me the names of everyone who entered the convenience store at 2323 Monument St last week." "Give me a written summary of Sue Brown's known activities in November."

The total surveillance society is coming.

I think it will be the biggest impact AI has on society in retrospect. I, for one, am not looking forward to it.

aynyc 6 hours ago||
A bit of sarcasm, but I think it's porn.
righthand 6 hours ago|
It’s at least about stimulating you to give richer data. Which isn’t quite porn.
faceball2000 6 hours ago||
What about surveillance? Lately I've been feeling that is what it's really for. Because our data can be queried in a much more powerful way when it has all been used to train LLMs.
exceptione 6 hours ago||
I think this is the best part of the essay:

  > But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?

  > There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.
xeckr 6 hours ago||
The AI race is presumably won by whoever can automate AI R&D first, thus everyone in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens.
HarHarVeryFunny 5 hours ago|
This notion of a hard takeoff, or singularity, based on self-improving AI, is based on the implicit assumption that what's holding AI progress back is lack of AI researchers/developers, which is false.

Ideas are a dime a dozen - the bottleneck is the money/compute to test them at scale.

What exactly is the scenario you are imagining where more developers at a company like OpenAI (or maybe Meta, which has just laid off 600 of them) would accelerate progress?

xeckr 5 hours ago||
It's not hard to believe that adding AI researchers to an AI company marginally increases the rate of progress, otherwise why would the companies be clamouring for talent with eye-watering salaries? In any case, I'm not just talking about AI researchers—AGI will not only help with algorithmic efficiency improvements, but will probably make spinning up chip fabs that much easier.
HarHarVeryFunny 4 hours ago||
The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta and not other companies? Because they are blaming poor Llama performance on the managers, it seems.

Algorithmic efficiency improvements are being made all the time, and will only serve to reduce inference cost, which is already happening. This isn't going to accelerate AI advance. It just makes ChatGPT more profitable.

Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?

All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?

block_dagger 6 hours ago||
> To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.

What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.

kmnc 6 hours ago|
It’s a funny analogy, because what’s missing for the rainbows with pots of gold is magic and fairytales… so is what’s missing for consciousness also magic and fairytales? I’ve yet to see any compelling argument for believing that enough compute wouldn’t allow us to code consciousness.
apsurd 6 hours ago||
Yes, that's just it though: it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality".

I'm suspicious of the idea that the world can be modeled linearly. That physical reality is non-linear is also more logically sound, so why is there such a clear straight line from compute to consciousness?

njarboe 6 hours ago|
Many people use AI as a source of knowledge. Even though it is often wrong or misleading, its advice is better on average than their own judgement or the judgement of people they know. When an AI is "smarter" than 95%? of the population, it will be a very big deal, even if it does not reach superintelligence.
apsurd 6 hours ago||
This means to me AI is rocket fuel for our post-truth reality.

Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.

Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into, and we don't know how to deal with it. I don't have any answers. Seems like it'll get worse before it gets better.

emp17344 6 hours ago|||
How is this different from a less reliable search engine?