
Posted by tanelpoder 5 hours ago

So where are all the AI apps?(www.answer.ai)
328 points | 303 comments
paxys 4 hours ago|
It is incredibly easy now to get an idea to the prototype stage, but making it production-ready still needs boring old software engineering skills. I know countless people who followed the "I'll vibe code my own business" trend, and a few of them did get pretty far, but ultimately not a single one actually launched. Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.
TeMPOraL 4 hours ago||
> It is incredibly easy now to get an idea to the prototype stage

Yup. And for most purposes, that's enough. An app does not have to be productized and shipped to general audience to be useful. In fact, if your goal is to solve some specific problem for yourself, your friends/family, community or your team, then the "last step" you mention - the one that "takes majority of time and effort" - is entirely unnecessary, irrelevant, and a waste of time.

The productivity boost is there, but it's not measured because people are looking for the wrong thing. Products on the market are not solutions to problems, they're tools to make money. The two are correlated, for a bunch of obvious reasons (people need money, solving a problem costs money, people are happy to pay for solutions, etc.), but they're still distinct. AI is dropping the cost of the "solving the problem" part much more than that of "making a product", so it's not useful to take the lack of the latter as evidence of the lack of the former.

PaulHoule 3 hours ago|||
In enterprise software there is an eternal discussion of "buy vs build" and most organizations go through a cycle of:

-- we had a terrible time building something so now we're only going to buy things

-- we had a terrible time buying something so now we're only going to build things

-- repeat...

Either way you can have a brilliant success and either way you fail abjectly, usually you succeed at most but not all of the goals and it is late and over budget.

If you build you take the risks of building something that doesn't exist and may never exist.

If you buy you have to pay for a lot of structure that pushes risks around in space and time. The vendor needs marketing people, not to figure out what you need, but to figure out what customers need in the abstract. Sales people are needed to help you match up your perception of what you need with the reality of the product. All those folks are expensive, not just because of their salaries but because a pretty good chunk of a salesperson's time is burned up on sales that don't go through, sales that take 10x as long as they really should because there are too many people in the room, etc.

When I was envisioning an enterprise product in the early 2010s, for instance, I got all hung up on the deployment model -- we figured some customers would insist on everything being on-premise, some would want to host in their own AWS/Azure/GCP, and others would be happy if we did it all for them. We found the phrase "hybrid cloud" would cause their eyes to glaze over, and maybe they were right, because in five years it became a synonym for Kubernetes. Building our demos, we just built things that were easy for us to deploy, and the same would be true for anything people build in-house.

To some extent I think AI does push the line towards build.

rurp 3 hours ago||||
> if your goal is to solve some specific problem for yourself, your friends/family, community or your team, then the "last step" you mention - the one that "takes majority of time and effort" - is entirely unnecessary, irrelevant, and a waste of time.

To a point, but I think this overstates it by quite a bit. At the moment I'm weighing some tradeoffs around this myself. I'm currently making an app for a niche interest of mine. I have a few acquaintances who would find it useful as well but I'm not sure if I want to take that on. If I keep the project for personal use I can make a lot of simplifying decisions like just running it on my own machine and using the CLI for certain steps.

To deploy this for non-tech users I need to figure out a whole deployment approach, make the UI more polished, and worry more about bugs and uptime. It sucks to get invested in some software that then starts constantly breaking or crashing. GenAI will help with this somewhat, but certainly won't drop the extra coding time cost to zero.

PaulHoule 3 hours ago||
People today say "web applications suck", "Electron sucks", etc. They weren't around in the 1990s, when IT departments were breaking under the load of maintaining desktop apps, when we were just getting on the security update treadmill, and when most shops that made applications for Windows had a dedicated InstallShield engineer and maybe even a dedicated tester for the install process.
steve1977 1 hour ago|||
Maintaining desktop apps was not really harder than maintaining the current Kubernetes-Web-App behemoths, at least in my experience.
PaulHoule 36 minutes ago||
Yeah, we traded managing files and registry entries on desktops for something that violates all the principles of the science-of-systems, the kind of thing Perrow warns about in his book Normal Accidents.
nogridbag 2 hours ago||||
I wish we had a dedicated InstallShield engineer! I had to design and burn my own discs for the desktop apps I built. And for some reason, the LightScribe drive was installed on the receptionist's computer. I have no idea why, but I was a new hire and I didn't question much.
jhatemyjob 1 hour ago|||
Windows was so bad that it made the web bad. Imagine the world we'd be in today if Internet Explorer never existed.
PaulHoule 38 minutes ago||
Well back in the 1990s Apple was on the ropes.

Classic MacOS was designed in 1984 to handle events from the keyboard, mouse and floppy, and adding events from the internet broke it. It was fun using a Mac and being able to get all your work done without touching a command line, but for a while it crashed, crashed and crashed when you tried to browse the web, until that fateful version where they added locks to stop the crashes -- but then it was beachball... beachball... beachball...

They took investment from Microsoft at their low point, and then they came out with OS X, which is as POSIXy as any modern OS and was able to handle running a web browser.

In the 1990s you could also run Linux, and at the time I thought Linux was far ahead of Windows in every way. Granted, there were many categories of software, like office suites, that were not available, but installing most software was

   ./configure
   make
   sudo make install

but if your system was unusual (Linux in 1994, Solaris in 2004) you might need to patch the source somewhere.
threetonesun 3 hours ago||||
I agree, although I'd also say that for the majority of problems, even the prototyping step is probably a waste of time, and most people would be better off asking a simple AI hooked up to search whether an appropriate solution already exists, or can be easily made with existing tools.
apsurd 3 hours ago||||
But the last "making a product" part does apply to nearly any tool, even a solution to a personal problem.

I've started tons of scratch-my-own-itch projects. There are adoption, UX, and onboarding costs even if you're the only audience.

TLDR: I don't even use my own projects. I churn.

j45 1 hour ago|||
True, but a basic production stage benefits from some amount of backups, dev/staging/production environments, and more than one database.
bwfan123 2 hours ago|||
> the "last step" is what takes the majority of time and effort

Having worked extensively with vibe-coded software, the main problem for me is that I have tuned out from the AI code, and I don't see any skin in the game for me. This is dangerous because it becomes increasingly harder to root-cause and debug problems as that muscle atrophies. Use-it-or-lose-it applies to cognitive skills (coding/debugging). Now I lean negatively toward AI code because, while it seduces us with fast progress in the first 80%, the end outcome is questionable in terms of quality. Finally, AI coding encourages a prompt-and-test, trial-and-error approach to software engineering, which is frustrating; those with experience would prefer to get it right by design.

janalsncm 2 hours ago|||
I also wonder about this for myself. My feeling is that my debug skills are also atrophied a bit. But I would split debugging into two buckets:

1. Debugging my own code or obvious behavior with other libraries.

2. Debugging pain-in-the-ass behavior with other libraries.

My patience with the latter is significantly lower now, and so, perhaps, is my skill in debugging them: libraries that change their APIs for no apparent reason, libraries that use nonstandard parameter names, libraries that aren't working as advertised.

raw_anon_1111 4 hours ago|||
Software engineering is not “coding” though.

For the last 8 or so years before AI, first at a startup and then in consulting, mostly with companies that were new to AWS or wanted a new implementation, it's been:

1. Gather requirements

2. Do the design

3. Present the design and get approval and make sure I didn’t miss anything

4. Do the infrastructure as code to create the architecture and the deployment pipeline

5. Design the schema and write the code

6. Take it through UAT and often go back to #4 or #5

7. Move it into production

8. Monitoring and maintenance.

#4 and #5 can be done easily with AI for most run of the mill enterprise SaaS implementations especially if you have the luxury of starting from the ground up “post AI”. This is something you could farm off to mid level ticket takers before AI.

lordmathis 4 hours ago|||
I also experienced this with my personal projects. It was really easy to just workshop a new feature. I'd talk to Claude and get a nice-looking implementation spec. Then I'd pass it on to a coding agent, which would get 80% of the way there, but the last 20% would actually take a lot more time. In the meantime I'd workshop more and more features, leading to an ever-growing backlog and an anxiety that an agent should always be doing something, otherwise I'm wasting time. I brought this completely on myself. I'm not building a business; nothing would happen if I just didn't implement another feature.
freedomben 3 hours ago||
Ha! I do this too and have also noticed it recently. When scope creep is relatively cheap, it becomes unending and I'm never satisfied. I've had a couple of projects that I would otherwise open source, but I've had to be realistic and accept that they're only going to be useful to me. Once I open one up, I feel a responsibility for maintenance and stability that adds a lot of extra work. I need to save that for the projects that might actually, realistically, be used.
DanHulton 3 hours ago|||
It's really thrown off some old adages. It's now "the first 90% takes 90% of the time, the last 10% takes the other 90,000,000% of the time."

Just doesn't have the same ring to it.

marginalia_nu 1 hour ago||
It's more like Zeno's paradox. You take one step and get 90% of the way to the finish line. You look ahead, and there's still a bunch of distance in front of you. You take another step and get 90% of the way there. You look ahead, and there's still more distance in front of you...
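To make the Zeno framing concrete (a toy sketch, nothing more): if each step closes 90% of whatever distance remains, the remainder shrinks geometrically but never actually reaches zero.

```python
# Zeno-style progress: every step completes 90% of the remaining work.
remaining = 1.0
for step in range(1, 6):
    remaining *= 0.10  # only 10% of the previous remainder is left
    print(f"after step {step}: {remaining:.0e} of the work still remains")
# after 5 steps, 1e-05 of the work remains -- small, but never zero
```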
williamcotton 49 minutes ago|||
I agree 100%. Boring old software skills are part of what it took to "write" this DSL, complete with a fully featured LSP:

https://github.com/williamcotton/webpipe

https://github.com/williamcotton/webpipe-lsp

(lots of animated GIFs to show off the LSP and debugger!)

While I barely typed any of this myself I sure as heck read most of the generated code. But not all of it!

Of course you have to consider my blog to be "in production":

https://github.com/williamcotton/williamcotton.com/blob/main...

The reason I'm mentioning this project is that the article questions where all the AI apps are. Take a look at the git history of these projects and ask whether this would have been possible to accomplish in such a relatively short timeframe! Or maybe it's totally doable? I'm not sure. I knew nothing about quite a few of the subsystems, e.g. the Debug Adapter Protocol, before implementing them.

dalenw 12 minutes ago||
I recently "vibe coded" a long-term background job runner service... thing. It's rather specific to my job, and no pre-existing solution existed. I already knew what I wanted the code to be, so it was just a matter of explaining explicitly what I wanted to the AI: software engineering concepts, patterns, all that stuff. And at the end of the day(s) it took about the same amount of time to code it with AI as it would've taken by hand.

It was a lot of reviewing and proofreading and just verifying everything by hand. The only thing that saved me time was writing the test suite for it.

Would I do it again? Maybe. It was kind of fun programming by explaining an idea in plain English rather than just writing the code itself. But I heavily relied on software engineering skills, especially those theory classes from university, to explain how it should best be structured and written. And, of course, on being able to understand what it outputs. I do not think that someone with no prior software engineering knowledge could do the same thing that I did.

Cthulhu_ 4 hours ago|||
Exactly. There have been loads of tools over time to make software development easier: Dreamweaver and FrontPage to build websites without coding, low/no-code platforms to click and drag software together, every framework ever, libraries that solve issues that often take time. And I'm sure they've had a cumulative effect on developer productivity and/or software quality.

But there's not one tool there that triggered a major boost in output or number of apps / libraries / products created - unless I missed something.

Sure, total output has increased, especially since the early 2010s, thanks both to GitHub becoming the social network of software development and (arguably) to Node/JS becoming one of the most popular languages/runtimes out there, attracting a lot of developers who published a lot of tools. But that's not down to productivity- or output-boosting developments.

adriand 4 hours ago|||
> Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.

This is true, and I bet there are thousands of people who are in this stage right now - having gotten there far faster than they would have without Claude Code - which makes me predict that the point made in the article will not age well. I think it’s just a matter of a bit more time before the deluge starts, something on the order of six more months.

lebuin 4 hours ago||
I'd argue that LLMs are not yet capable of the last step, and because most sufficiently large AI-generated codebases are an unmaintainable mess, it's also very hard for a human developer to take over and go the last mile.
danans 4 hours ago|||
> Anyone who has been doing this professionally will tell you that the "last step" is what takes the majority of time and effort.

That's true, but even the "last step" is being accelerated. The 10% that takes 90% of the time has itself been cut in half.

An example is turning debug logs and bug reports into bugfixes, and performance stats into infrastructure migrations.

The time required to analyze, implement, and deploy those has been reduced by a large amount.

It still needs to be coupled with software engineering skills - to decide between multiple solutions generated by an LLM, but the acceleration is significant.

adamrezich 4 hours ago||
So, how many years until we'll see results, then?
danans 3 hours ago||
> So, how many years until we'll see results, then?

-0.75 years.

Software development output (features, bugs, products) - especially at smaller companies like startups - has already accelerated significantly, while software development hiring has stayed flat or declined. So there has been a dramatic increase in per-engineer efficiency. To me, that seems like a result, although it's cold comfort as a software engineer.

You probably won't see this reflected as a multiplication of new apps because the app consumer's attention is already completely tapped. There's very little attention surface area left to capture.

zdc1 4 hours ago|||
Even if you have the app, you get to start the fun adventure of marketing it and actually trying to grow the damn thing
TeMPOraL 4 hours ago|||
Right. Which is something you neither need nor want if you just wanted to have an app.
parpfish 3 hours ago|||
And you need to find a way to market it that prevents people from thinking “cool, I bet I can get Claude to whip up something similar real quick”
roadside_picnic 1 hour ago|||
> not a single one actually launched.

I think this represents a fundamental misunderstanding of how these AI tools are used most effectively: not to write software but to directly solve the problem you were going to solve with software.

I used to not understand this and agreed with the "where is all the shovelware" comments, but now I've realized the real shift is not from automating software creation, but replacing the need for it in the first place.

It's clear that we're still a while away from this being really understood and exploited. People are still, confusingly, building webapps that aren't necessary. Here are two somewhat related examples I've come across (I spend a lot of time on image/video generation in my free time): a web service that automatically creates "headshots" for you, and another that automatically creates TikTok videos for you.

I have bespoke AI versions of both of these I built myself in an afternoon, running locally, creating content for prices that simply can't be matched by anyone trying to build a SaaS company out of these ideas.

What people are thinking: "I know, I can use AI to build a SaaS startup that sells content!" But building a SaaS company still requires real software, since it has to scale to multiple users and use cases. What more and more people are realizing is: "I can create the content for basically free on my desktop; now I need to figure out how to leverage that content." I still haven't cracked the code for creating a rockstar TikTok channel, but it's not because I'm blocked on the content end.

Similarly, I'm starting to see that we're still not thinking about how to reorganize software teams to maximally exploit AI. Right now I see lots of engineers doing software the old way with an AI-powered exoskeleton. We know what this results in: massive PRs that clog up the whole process, and small bugs that creep up later. So long as we divide labor into hyper-focused roles, this will persist. What I'm increasingly seeing is that to leverage AI properly we need to rethink how these roles actually work, since now one person can be responsible for a much larger surface area rather than just doing one thing (arguably) faster.

prhn 4 hours ago|||
Even beyond the engineering there are 100 other things to do.

I launched a vibe coded product a few months ago. I spent the majority of my time

- making sure the copy / presentation was effective on product website

- getting signing certificates (this part SUCKS and is expensive)

- managing release version binaries without a CDN (stupid)

- setting up LLC, website, domain, email, google search indexing, etc, etc

ryanbuening 3 hours ago|||
Agreed. However, I just recently "launched" a side project and Cloudflare made a lot of the stuff you mentioned easier. I also found that using AI helped with setting up my LLC when I had questions.
ryandrake 4 hours ago|||
Exactly. The "writing code" part is literally the easiest part of building a software business. And that was even before LLM assisted coding. Now it's pretty much trivial to just spew slop code until something works. The hard parts are still: making the right thing, making it good, getting feedback and idea validation, and the really hard part is turning it into a business.
balls187 1 hour ago|||
> and a few of them did get pretty far, but ultimately not a single one actually launched.

Having done this professionally for a very, very long time, software engineers aren't particularly good at launching products.

Technology has drastically lowered the barriers to bring software products to customers, and AI is a continuation of that trend.

autotune 2 hours ago|||
I launched a draw.io competitor to the point that it is in production, but there is little activity on the site as far as signups are concerned. Doesn't deliver enough business value.
chrisandchris 2 hours ago||
Out of curiosity: What is your USP? Why should I prefer your product over draw.io?

IMHO (this may not apply to you!) a lot of people launch a "competitor" that seems to be a clone of the original product, without improving on something the other product misses or is very bad at.

npilk 2 hours ago|||
How much longer will this be true, though? With improving computer use, it may be possible in the next ~year or so that agents will be able to wire up infrastructure and launch to production.
ericmcer 2 hours ago||
no

I don't think with LLMs as the foundation we will ever have something that can build and launch something end to end.

They just predict the next most likely token... no amount of clever orchestration can cover that up and make it into real intelligence.

bonoboTP 1 hour ago||
Nice bait
hermitcrab 2 hours ago|||
90% done, just the other 90% to do...
dominotw 4 hours ago|||
All they did was annoy their friends and family by sharing their vibeslop app and asking for "feedback".

I really don't know how to respond to these requests. I'm going to hide out and not talk to anyone till this fad passes.

Reminds me of the trend where everyone was a DJ wanting you to listen to the mixtape they made in Ableton Live.

highstep 4 hours ago|||
Is it really that big of a deal to help or encourage a friend or family member in these simple ways? Do you have no time in life to smell the flowers?
kevinsync 3 hours ago|||
Devil's advocate (because honestly I do agree with you, but..) -- help/encouragement often ends up turning into far more time and effort than it sounds like up front.

~18 months ago a friend of mine had a very viable, good idea for a physical product, but very fuzzy on the details of where to begin. My skillset backfilled everything he was missing to go from idea to reality to in-market.

I began at arm's length with just advice and validation, then slowly got involved with CAD and prototyping to make sure it kept moving forward, then infrastructure/admin, graphic design, digital marketing and support, etc, while he worked on manufacturing, physical marketing, networking, fulfillment, sales, etc.

Long story short, because I both deeply believe in the vision and know that teamwork makes the dream work, I am fully, completely, inextricably involved LOL -- and I don't have a single complaint about it either. But man, watch out: if you don't believe in the vision but do have the skills/expertise they're lacking, and you opt out, friends and family will be the quickest and most aggrieved people you'll ever meet to think you're gatekeeping them from success.

pak9rabid 2 hours ago||
I hope this at least resulted in some equity of this project for you.
kevinsync 2 hours ago||
Yeah it turned out to be very fair, I just initially wasn't expecting to get as involved as I have hahaha
pesus 2 hours ago||||
In this case, it's more like asking your friends to take time to smell some feces instead of flowers.

Or to be a little less pessimistic, it's like asking them to stop and smell the flowers, except the flowers are fake and plastic and it makes your friends question your sanity. Either way, it's not a normal or enjoyable flower smelling experience, and doesn't add any enjoyment or simple pleasure to one's life like normal flower smelling would.

moduspol 3 hours ago||||
I sure do. I hook up Claude to my browser via MCP and have it review and give feedback for my family and friends' projects. It's a win/win.
coffeebeqn 3 hours ago|||
AI slop is not the flowers
bogwog 4 hours ago||||
When someone sends me an AI generated project or proposal, I just send them an AI generated reply I know they're not going to bother reading either.
101008 4 hours ago|||
"I think it's great, you should deploy it! Let me know when it's in production"
koonsolo 47 minutes ago|||
I have a product that users wanted me to extend with a certain side product. Unfortunately, I don't have the time or resources for that.

But one day I thought, let's give this project to AI, as an experiment.

I gave some very generic instructions (it even created its own functional specs), and off it went. I was able to instruct it with the same prompt over 8 iterations, and it kept track of its progress in files. It generated the whole thing in an afternoon.

I was truly amazed! It basically wrote 90% of the code, way better than I anticipated. While I was telling my wife about it, I realized I needed another 90% to finish it. And that part I need to do myself. And since I'm not as fast as the AI, it will take me way more than an afternoon.

It made the development faster for sure. But in the grand scheme of things, coding is only a part of the effort. And only a part of the coding can be done by AI. Maybe writing the code would have taken me twice as long. But then there is testing, validating, fixing, releasing, etc. They all eat away at the % gain.

Needless to say, it's not released yet.

davmar 2 hours ago|||
succinct and accurate.
hintymad 8 minutes ago|||
[dead]
calvinmorrison 4 hours ago||
It's helping with that part too. With Claude's help, I was able to configure a Grafana stack for our Ansible scripts.
skeeter2020 4 hours ago||
That's nowhere near the end stage of launching a business.
ghywertelling 3 hours ago|||
It's already having an impact in triaging support tickets and enabling faster resolution using logs.
stronglikedan 4 hours ago||||
I used it to design my business cards!
calvinmorrison 2 hours ago|||
It's past the end stage; we are already in business. It's just something I'm not an expert in. I have used it in the past (by having real ops engineers build it for me), and now I have something that gives us insight into our production stack, alerts, etc., that isn't janky and covers my goals. So... yeah, that is valuable and improves my business.
hombre_fatal 4 hours ago||
Maybe the top 15,000 PyPi packages isn't the best way to measure this?

Apparently new iOS app submissions jumped by 24% last year:

> According to Appfigures Explorer, Apple's App Store saw 557K new app submissions in 2025, a whopping 24% increase from 2024, and the first meaningful increase since 2016's all-time high of 1M apps.

The chart shows stagnant new iOS app submissions until AI.

Here's a month by month bar chart from 2019 to Feb 2026: https://www.statista.com/statistics/1020964/apple-app-store-...

Also, if you hang out in places with borderline-technical people, they might do things like vibe-code a waybar app and proudly post it to r/omarchy, even though it's the first time in their life they've ever installed Linux.

Though I'd be super surprised if average activity didn't pick up big on GitHub in general. And if it hasn't, it's only because we overestimate how fast people develop new workflows. I'm just going by my own increase in software output and the projects I've taken on over the last couple of months.

Finally, December 2025 (Opus 4.5 and that new Codex one) was a big inflection point where AI suddenly became good enough to do all sorts of things for me without hand-holding.

contravariant 3 hours ago||
I can't really think of a polite way to phrase this, but I'm not surprised that throwaway mobile apps benefit while relatively mature Python packages do not. That matches my estimation of how much programming skill you can reasonably extract from the current LLMs.

Really, the one thing that has conclusively changed is that "ask it on Stack Overflow" has become "ask an LLM". Around 95% of Stack Overflow questions can be answered by an LLM with access to the documentation; I'm not sure what will happen to the other 5%. I don't think Stack Overflow will survive a 20-fold reduction in size, if only because their stance against repeat questions means that exponential growth was the main thing preventing them from becoming stale.

hombre_fatal 3 hours ago||
> I'm not surprised throwaway mobile apps do benefit, while relatively mature python packages do not.

Right.

I don't think you even need cynicism or whatever you felt you were having impolite thoughts about:

I'd expect the top mature libraries to be the most resistant to AI tool use for various reasons. They already have established processes, they don't accept drive-by PR spam, the developers working on them might be the least likely to be early adopters, and -- perhaps most importantly -- the todo list of those projects might need the most human comms, like directional planning rather than the sort of yolo feature impl you can do in a one-man greenfield.

All of which further buries the signals you might find elsewhere in broader ecosystems.

pipnonsense 4 hours ago|||
I was curious, but I need a Statista account to see it.
deaux 3 hours ago|||
https://cdn.statcdn.com/Statistic/1020000/1020964-blank-754....
hombre_fatal 3 hours ago|||
Seems they serve old data unless you craft a request with the Origin/Referer headers:

    curl 'https://cdn.statcdn.com/Statistic/1020000/1020964-blank-754.png' \
      -H 'Origin: https://www.statista.com' \
      -H 'Referer: https://www.statista.com/' \
      --output chart.png
Assuming it's a real chart, that will give you the image with the uptick in the last year.
philipphutterer 3 hours ago||||
https://archive.md/tM9Kg
hombre_fatal 4 hours ago|||
Heh, I got a solid five seconds with the chart until the paywall popped up.
klibertp 3 hours ago|||
But there are no labels on the X axis, and removing the popover with dev tools shows a chart that doesn't really support what OP says. So we might be looking at some sample chart instead of the real one.
robot-wrangler 57 minutes ago|||
> Heh, I got a solid five seconds with the chart until the paywall popped up.

Relevant! If the maximalist interpretation of AI capabilities were close to real, and if people tend to point their new superpowers at their biggest pain points, wouldn't it be a big blow to all things advertising / attention economy? "Create a driver or wrapper app that skips all ads on Youtube/Spotify", or "Make a browser plugin that de-emphasizes and unlinks all attention-grabbing references to pay-walled content".

If we're supposed to be in awe of how AI can do anything, and we notice repeatedly that nope, it isn't really empowering users yet, then we may need to reconsider the premise.

bigbadfeline 2 hours ago||
> Apparently new iOS app submissions jumped by 24% last year:

The amount of useless slop in the App Store doesn't matter. There are no new and useful apps made with AI, apps that contribute to the productivity of the economy as a whole. The trade and fiscal deficits are both high and growing, as is corporate indebtedness; these are the true measures of economic failure, and they all agree on it.

AI is a debt and energy guzzling endeavor which sucks the capital juice out of the economy in return for meager benefits.

I can't think of a reason for the present unjustified AI rush and hype other than war, and any success toward that goal is a total loss for the economy and the environment. That's the relation between economics and deadly destruction in a connected world; reality is the proof.

GorbachevyChase 7 minutes ago|||
Is the AI in the room with us now?

I get that people are upset that making a cool six figures off of stitching together React components is maybe not a viable long-term career path anymore. For those of us on the user side, the value is tremendous. I'm starting to replace what were paid enterprise software and plug-ins, tailoring them to my own taste. Our subject matter experts are translating their knowledge and workflows, which usually aren't that complicated, into working products on their own. They didn't have to spend six months or a year negotiating an agreement to build the software, or have to beg our existing software vendors, who could not possibly care less, for functionality to be added to software we are, for some reason, expected to pay for every single year, despite the absence of any operating cost to justify this practice.

deaux 2 hours ago|||
> There are no new and useful apps made with AI - apps that contribute to productivity of the economy as whole.

This is flat-earther level. It's like an environmentalist saying that nothing made with fossil fuels contributes to productivity. But they don't say that because they know it's not true.

There are so many valid gripes to have with LLMs, pick literally any of them. The idea that a single line of generated code can't possibly be productivity net positive is nonsensical. And if one line can, then so can many lines.

thinkharderdev 1 hour ago|||
> This is flat-earther level

Ok, so do you have a counterexample?

dash2 47 minutes ago|||
Here's mine. It's not big or important (at all!) but I think it is a perfectly valid app that might be useful to some people. It's entirely vibe-coded including code, art and sounds. Only the idea was mine.

https://apps.apple.com/us/app/kaien/id6759458971

planb 1 hour ago|||
Can you give me any new (i.e. released in 2026) app that does something useful? There just aren't many good app ideas left, after all...
croes 55 minutes ago||
That has some strong "Everything that can be invented has been invented" vibes.

If that were true, then all these AIs are useless. Who needs them to build something that already exists?

croes 57 minutes ago|||
Just show me one new killer app from the app store that is coded by AI and isn't an AI app itself.

Seems like the rest of the whole AI business: the only things going to the top are the AI tools themselves, not the things they are supposed to build.

fritzo 2 minutes ago||
They're private, that's the beauty. Code is so cheap now, we can wean ourselves off massive dependency chains.

200 years ago text was much more expensive, and more people memorized sayings and poems and quotations. Now text is cheap, and we rarely quote.

mlsu 9 minutes ago||
We have great software now!

YoloSwag (13 commits)

[rocketship rocketship rocketship]

YoloSwag is a 1:1 implementation of pyTorch, written in RUST [crab emoji]

- [hand pointing emoji] YoloSwag is Memory Safe due to being Written in Rust

- [green leaf emoji] YoloSwag uses 80% less CPU cycles due to being written in Rust

- [clipboard emoji] [engineer emoji] YoloSwag is 1:1 API compatible with pyTorch with complete ops specification conformance. All ops are supported.

- [recycle emoji] YoloSwag is drop-in ready replacement for Pytorch

- [racecar emoji] YoloSwag speeds up your training workflows by over 300%

Then you git clone yoloswag and it crashes immediately and doesn't even run. And you look at the test suite and every test just creates its own mocks to pass. And then you look at the code and it's a weird Frankenstein implementation: half of it is using Rust bindings for PyTorch, and the other half is random APIs that are named similarly but not identically.

turlockmike 4 hours ago||
I deleted vscode and replaced with a hyper personal dashboard that combines information from everywhere.

I have a news feed, work tab for managing issues/PRs, markdown editor with folders, calendar, AI powered buttons all over the place (I click a button, it does something interesting with Claude code I can't do programmatically).

Why don't I share it? Because it's highly personal, others would find it doesn't fit their own workflow.

camdenreslink 4 hours ago||
Technical people (which is by far the minority of people out there) building personal apps to scratch an itch is one thing.

But based on the hype (100x productivity!), there should be a deluge of high quality mobile apps, Saas offerings, etc. There is a huge profit incentive to create quality software at a low price.

Yet, the majority of new apps and services that I see are all AI ecosystem stuff. Wrappers around LLMs, or tools to use LLMs to create software. But I’m not really seeing the output of this process (net new software).

GorbachevyChase 2 minutes ago|||
Why on earth would you publish and monetize software anybody can reproduce with a $20 subscription and an hour of prompting? Why would you ever publish something you vibe coded to PyPI? Code itself isn’t scarce anymore. If there is not some proprietary, secret data or profound insight behind it, I just don’t think there is a good reason to treat it like something valuable.
physicsguy 4 hours ago||||
I worked in an industry for five years and I could feasibly build a competitor product that I think would solve a lot of the problems we had before, and which it would be difficult to pivot the existing ones into. But ultimately, I could have done that before, it just brings the time to build down, and it does nothing for the difficult part which is convincing customers to take a chance on you, sales and marketing, etc. - it takes a certain type of person to go and start a business.
amrocha 4 hours ago||
Nobody’s talking about starting businesses. The article is specifically about pypi packages, which don’t require any sales and marketing. And there’s still no noticeable uptick in package creation or updates.
physicsguy 2 hours ago||
My understanding reading it was that PyPI packages are just being used as a proxy variable
enraged_camel 12 minutes ago||
Yes, you are correct. The parent is not following the conversation. They probably didn't even read the article.
raw_anon_1111 4 hours ago||||
There is no money in mobile apps. It came out in the Epic trial that 90% of App Store revenue comes from in-app purchases for pay-to-win games. Most of the other money companies are making from mobile is from front ends for services.

If someone did make a mobile app, how would it get uptake? Coding has never been the hard part of a successful software product.

bdcravens 1 hour ago||||
> But based on the hype (100x productivity!), there should be a deluge of high quality mobile apps, Saas offerings, etc. There is a huge profit incentive to create quality software at a low price.

1. People aren't creating new apps, but enhancing existing ones

2. Companies are less likely to pay for new offerings when the barrier to entry is lowered due to AI. They'll just vibe code what they need.

camdenreslink 8 minutes ago||
I don't think the 2nd point will make a huge impact on software sales. Who is vibe coding? Software developers or business types? They aren't going to vibe code a CRM, or their own bespoke version of Excel, or their own Datadog APM.

Maybe they will vibe code small scripts, but nobody was really paying for software to do that in the first place. Saas-pocalypse is just people vibe investing, not really understanding the value proposition of saas in the first place (no maintenance, no deployments, SLAs, no database backups, etc).

aaroninsf 48 minutes ago||||
Profit is not everyone's goal.

Me, I'm not just chasing markets; I want to build things that create joy.

thewebguyd 4 hours ago||||
> Wrappers around LLMs, or tools to use LLMs to create software. But I’m not really seeing the output of this process

Because it's better to sell shovels than to pan for gold.

In the current state of LLMs, the average no-experience, non-techy person was never going to make production software with it, let alone actually launch something profitable. Coding was never the hard part in the first place, sales, marketing & growth is.

LLMs are basically just another devtool at this point. In the '90s, IDEs/Rapid Application Development were a gold rush. LLMs are today's version of that. Both made developers' lives better, but neither resulted in a huge rush of new, cheap software from the masses.

Foobar8568 3 hours ago||
And SQL was that version in the 80s...
morkalork 4 hours ago||||
Before LLMs, there were code sweatshops in India, Vietnam, Latin America, etc. and they've been pumping out apps and SaaS products for decades now.
the-smug-one 4 hours ago||
And it was all crap software, no? EDIT: If it was crap, then that is still good for AI.
morkalork 4 hours ago||
AI-powered devs are struggling to stand above it, so either it wasn't all crap, or AI-produced stuff is crap too
CodingJeebus 4 hours ago||||
I think this is the great conundrum with AI. I find it's most useful when I build my own tools from models. It's great for solving last-mile-problem types of situations around my workflow. But I'm not interested in trying to productize my custom workflow. And I've yet to encounter an AI feature on an existing app that felt right.

Problem is that all these companies trying to push AI experiences know that giving users unfettered access to their data to build further customization is corporate suicide.

oro44 4 hours ago||||
Well it’s mostly explained by the fact that most people lack imagination and can’t hold enough concepts about a particular experience to think about how to re-imagine it, to begin with.

Oh and sadly, LLMs are useless for the imaginative part too. Shucks, eh.

peteforde 4 hours ago||
I share this particular cynicism.

I have a list of ideas a mile long that gets longer every day, and LLMs help me burn through that list significantly faster.

However, the older I get, the more distraught I get that most people I meet "IRL" are simply not sitting on a list of problems they lack the time to solve. I have... a lot of emotions around this, but it seems to be the norm.

If someone doesn't see or experience problems and intuitively start working out how they would fix them if they only had time, the notion that they could effectively pair-program with an LLM on ideas they didn't previously have is absurd.

skeledrew 3 hours ago|||
Also one of those with a mile-long ideas list that I can finally now burn through. I gotta say, it feels good!
oro44 2 hours ago|||
Yeah, and frankly the innovation would occur irrespective of LLMs.

Would it be harder? Sure. And perhaps the difficulty adds an additional cost of passion being a necessary condition to embark on the innovation. Passion leads to really good stuff.

My personal fear is we get landfill sites of junk software produced. To some extent it should be costly to convert an idea to a concept - the cost being thinking carefully so what you put out there is somewhat legible.

skeledrew 4 hours ago|||
There really isn't much profit incentive actually, as everyone has access to the same capabilities now. It'd be like trying to sell ice to Eskimos.
camdenreslink 6 minutes ago||
Most businesses do not have the capacity to use LLMs to produce software. If you have an idea that you can turn into real, high-quality software that there is demand for, then you should absolutely do it.
sputknick 4 hours ago|||
This is probably my favorite gain from AI assisted coding: the bar for "who cares about this app" has dropped to a minimum of 1 to make sense. I recently built an app for grocery shopping that is specific to how and where I shop, would be useless to anyone other than my wife. Took me 20 minutes. This is the next frontier: I have a random manual process I do every week, I'll write an app that does it for me.
ElFitz 4 hours ago|||
More than that. Building a throwaway-transient-single-use web app for a single annoying use kind of makes sense now, sometimes.

I had to create a bunch of GitHub and Linear apps. Without me even asking, Codex whipped up a web page and a local server to set them up, collect the OAuth credentials, and forward them to the actual app.

Took two minutes, I used it to set up the apps in three clicks each, and then just deleted the thing.

Code as transient disposable artifacts.

BoneShard 4 hours ago||
I posted it recently, but now this works differently https://xkcd.com/1205/

You can get a throwaway app in 5 minutes; before, I wouldn't even bother.

JasperNoboxdev 2 hours ago||||
Same energy here. I was sitting on 50+ .env files across various projects with plaintext API keys and it always bothered me but never enough to actually fix it. AI dropped the effort enough that I just had a dedicated agent run at it for a few days — kept making iterations while I was using it day to day until it landed on a pretty solid Touch ID-based setup.

This mix of doing my main work on complex stuff (healthcare) with heavy AI input, and then having 1-2 agents building lighter tools on the side, has been surprisingly effective.

socalgal2 4 hours ago||||
Even if it’s only useful to you it would be super educational to see your prompts and the result.
MeetingsBrowser 4 hours ago||||
What exactly were you able to build in 20 minutes?
TeMPOraL 4 hours ago|||
Me: a photo editor tool to semi-automate the task of digitizing a few dozen badly scanned old physical photos for a family photo book. I needed something that could auto-straighten and auto-crop the photos, with the ability to quickly make manual adjustments. Gemini single-shotted a working app that, after a few minutes of back-and-forth as I used it and complained about the process, gained full four-point cropping (arbitrary lines) with snapping to lines detected in the image content for minute adjustments.

Before that, it single-shotted an app for me where I can copy-paste a table (or a subsection of it) from Excel and print it out perfectly aligned on label sticker paper; it does instantly what used to take me an hour each time, when I had to fight Microsoft Word (mail merge) and my Canon printer's settings to get the text properly aligned on labels, and not cut off because something along the way decided to scale content or add margins or such.

Neither of these tools is immediately usable for others. They're not meant to, and that's fine.

sedawkgrep 3 hours ago||||
My buddy and I are writing our own CRUD web app to track our gaming. I was looking at a ticketing system to use for us to just track bug fixes and improvements. Nothing I found was simple enough or easy enough to warrant installing it.

I vibe'd a basic ticketing system in just under an hour that does what we need. So not 20 mins, but more like 45-60.

stavros 4 hours ago|||
I built a small app to emit a 15 kHz beep (that most adults can't hear) every ten minutes, so I can keep time when I'm getting a massage. It took ten minutes, really, but I guess it's in the spirit of the question.

For 20 minutes of time, I had a simple TTS/STT app that allows me to have a voice conversation with my AI assistant.
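For anyone curious what the beep part of such a tool involves, here is a minimal stdlib-only sketch (my own assumptions, not the poster's code: a 16-bit mono WAV at 48 kHz with a 0.2 s tone; the every-ten-minutes loop and platform-specific playback are left out):

```python
import math
import struct
import wave

def write_beep(path, freq_hz=15000, duration_s=0.2, rate=48000):
    """Write a short sine beep to a WAV file using only the stdlib.

    A 15 kHz tone needs a sample rate comfortably above the
    2 * 15000 = 30000 Hz Nyquist limit, hence 48 kHz here.
    """
    n = int(rate * duration_s)
    # 16-bit signed samples at half amplitude, little-endian
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq_hz * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 2 bytes = 16-bit
        w.setframerate(rate)
        w.writeframes(frames)

write_beep("beep.wav")
```

Playing it on a timer is then just a loop with `time.sleep(600)` and whatever player the platform provides (e.g. `afplay` on macOS or `aplay` on Linux).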

shafyy 4 hours ago|||
That's fine and all, but how much are you ready to pay to Anthropic and OpenAI to be able to do this? Like, is it worth 100 bucks a month for you to have your own shopping app?
hamdingers 4 hours ago|||
It's easily worth the <$1 in tokens from a Chinese model. You don't need frontier reasoning capabilities to make a personalized grocery list app.
joshmarinacci 4 hours ago|||
That is an excellent question. For me the answer is yes, but I'm unusual.
ryandrake 3 hours ago|||
It's not worth 100 bucks a month for me to have my own shopping app, but maybe it's worth 100 bucks a month to have ready access to a software garden hose that I can use if I want to spew out whatever stupid app comes to my mind this morning.

I'd rather not pay monthly for something (like water) that I'm turning on and off and may not even need for weeks. But paying per-liter is currently more expensive so that's what we currently do.

I think the future is going to be local models running on powerful GPUs that you have on-prem or in your homelab, so you don't need your wallet perpetually tethered to a company just to turn the hose on for a few minutes.

shafyy 4 hours ago|||
Haha great. I guess my wider point is that most people won't be ready to pay for it, and in the end there will be only two ways for OpenAI et al to monetize: ads or B2B. And B2B will only work if they invest a lot into sales, or if business owners see real productivity gains once the hype has died down.
headcanon 4 hours ago|||
I've been getting close to that myself. I've been using VSCode + Claude Code as my "control plane" for a bunch of projects, but the current interface is getting unwieldy. I've tried superset + conductor and those have some improvements, but are opinionated towards a specific set of workflows.

I do think there would be value in sharing your setup at some point if you get around to it, I think a lot of builders are in the same boat and we're all trying to figure out what the right interface for this is (or at least right for us personally).

acessoproibido 4 hours ago|||
I would still be interested even if my personal workflow is different. These things can be very inspirational!
skyberrys 4 hours ago|||
This sounds chaotic and fun.
gear54rus 4 hours ago||
Sounds more like satire.
skyberrys 3 hours ago||
I am easily caught by satire and I have a weakness for buttons.
skydhash 4 hours ago|||
> I deleted vscode and replaced with a hyper personal dashboard that combines information from everywhere.

Emacs with Hyperbole[0]?

[0]: https://www.gnu.org/software/hyperbole/

Igrom 4 hours ago||
You can't mention Hyperbole and not say how you use it. I did not get past the "include the occasional action button in org-mode" phase.
neonnoodle 4 hours ago||
actually the rules say that no one can ever explain what Hyperbole is for
kylecazar 4 hours ago|||
... how did that replace vscode?

Do you never open a code editor?

headcanon 4 hours ago||
Kind of. I'm finding that my terminal window in VSCode went from being at the bottom 1/3rd of my screen to filling the whole screen a lot of the time, replacing the code editor window. If AI is writing all of your code for you based on your chat session, a lot of editing capabilities aren't needed as much. While I wouldn't want to get rid of it entirely, I'd say an AI-native IDE would deemphasize code editing in favor of higher-level controls.
Chris2048 4 hours ago|||
Wdym by "it does something interesting with Claude code I can't do programmatically"?
graeber_28927 4 hours ago|||
I'm guessing it's not a hard-coded function the button invokes. Instead it spawns a Claude Code session with perhaps some predefined prompts, maybe attaches logs, and lets Claude Code "go wild". In that sense the button's effect wouldn't be programmatic; it would be nondeterministic.

Not OP, just guessing.

actionfromafar 4 hours ago||||
I have had the thought to write little "programs" in text or markdown for things which would just be a chore to maintain as a traditional program. (I guess we call them "skills" now?) Think scraping a page which might change its output a bit every so often. If the volume or cadence is low, it may not be worth it to create a real program to do it.
bena 4 hours ago|||
It means he has a girlfriend. And she goes to a different school. In Canada. You've never heard of it.
vladostman 4 hours ago||
Perfect analogy actually
EGreg 4 hours ago|||
Well, I’m sharing it. If someone wants an early preview or to work w me on this, the calendly link is on the site:

https://safebots.ai

But it requires A LOT of work to make sure it is actually safe for people and organizations. And no, an .md file saying “PLEASE DONT PWN ME, KTHX” isn’t it at all. “Alignment” is only part of the equation.

If you’re not afraid to dive into rabbitholes, here is how it works: http://community.safebots.ai/t/layer-4-browser-extensions-pe...

tbeseda 4 hours ago||
Sorry, I'm not sure how this relates to the content of the article. Sounds like an interesting experience, but this is an analysis of the Python ecosystem pre+post ChatGPT.
causal 4 hours ago||
AI makes the first 90% of writing an app super easy and the last 10% way harder because you have all the subtle issues of a big codebase but none of the familiarity. Most people give up there.
skeeter2020 4 hours ago||
I spent about a week doing an "experiment" greenfield app. I saw 4 types of issues:

0. It runs way too fast and far ahead. You need to slow it down, force planning only and explicitly present a multi-step (i.e. numbered plan) and say "we'll do #1 first, then do the rest in future steps".

take-away: This is likely solved with experience and changing how I work - or maybe caring less? The problem is the model can produce much faster than you can consume, but it runs down dead ends that destroy YOUR context. I think if you were running a bunch of autonomous agents this would be less noticeable, but impact 1-3 negatively and get very expensive.

1. lots of "just plain wrong" details. You catch this developing or testing because it doesn't work, or you know from experience it's wrong just by looking at it. Or you've already corrected it and need to point out the previous context.

take-away: If you were vibe coding you'd solve all these eventually. Addressing #0 with "MORE AI" would probably help (i.e. AI to play/validate, etc).

2. Serious runtime issues that are not necessarily bugs. Examples: it made a lot of client-side API endpoints public that didn't even need to exist, or at least needed to be scoped to the current auth. It missed basic filtering and SQL clauses that constrained data. It hardcoded important data (but not necessarily secrets) like ports, etc. It made assumptions that worked fine in development but could be big issues in public.

take-away: AI starts to build traps here. Vibe coders are in big trouble because everything works but that's not really the end goal. Problems could range from 3am downtime call-outs to getting your infrastructure owned or data breaches. More serious: experienced devs who go all-in on autonomous coding might be three months from their last manual code review and be in the same position as a vibe coder. You'd need a week or more to onboard and figure out what was going on, and fix it, which is probably too late.

3. It made (at least) one huge architectural mistake (this is a pretty simple project so I'm not sure there's space for more). I saw it coming but kept going in the spirit of my experiment.

take-away: TBD. I'm going to try and use AI to refactor this, but it is non-trivial. It could take as long as the initial app did to fix. If you followed the current pro-AI narrative, you'd only notice it when your app started to intermittently fail - or when you got your cloud provider's bill.

Schiendelman 11 minutes ago||
I'm a product manager, and a lot of the things I see people do wrong are because they don't have any product management experience. It takes quite a bit of work to develop a really good theory of what should be in your functional spec. Edge cases come up all the time in real software engineering, and often handling all those cases is spread across multiple engineers. A good product manager has a view of all of it, expects many of those issues from the agent, and plans for coaching it through them.
SAI_Peregrinus 4 hours ago|||
And as we all know, the first 90% of writing an app takes the first 90% of the time, and the last 10% takes the other 90% of the time.
terrabitz 1 hour ago||
The 90-90 rule may need an update for a post-LLM world

"The first 90% of the code accounts for the first 9% of the development time. The remaining 10% of the code accounts for the other 9000% of the development time"

socalgal2 3 hours ago|||
Comprehension Debt

https://addyosmani.com/blog/comprehension-debt/

DougN7 4 hours ago|||
Well put. And that last 10% was always the hardest part, and now it’s almost impossible because emotionally you’re even less prepared for the slog ahead.
ing33k 4 hours ago|||
Agree. I’ve also noticed that feature creep tends to increase when AI is writing most of the code.
esafak 4 hours ago||
So the way is to read every line of code along the way.
Plutarco_ink 1 hour ago||
The article measures the wrong thing. PyPI package creation is a terrible proxy for AI-assisted software output because packages are published for reuse by others, which requires documentation, API design, and maintenance commitments that AI doesn't help with much.

The real output is happening in private repos, internal tools, and single-purpose apps that never get published anywhere. I've been building a writing app as a side project. AI got me from zero to a working PWA with offline support, Stripe integration, and 56 SEO landing pages in about 6 weeks of part-time work. Pre-AI that's easily a 6-month project for one person.

But I'm never going to publish it as a PyPI package. It's a deployed web app. The productivity gain is real, it just doesn't show up in the datasets this article is looking at.

The iOS App Store submission data (24% increase) that someone linked in the comments is a much better signal. That's where the output is actually landing.

droidjj 44 minutes ago|
Serious question: Did you use AI to write this or do you just sound like an LLM after having used them so much?
skeledrew 4 hours ago||
I think this article is making a pretty big assumption: that people making things with AI are also going to be publishing them. And that's just the opposite of what should be expected, for the general case.

Like I've been making things, and making changes to things, but I haven't published any of that because, well they're pretty specific to my needs. There are also things which I won't consider publishing for now, even if generally useful because, well the moat has moved from execution effort to ideas, and we all want to maintain some kind of moat to boost our market value (while there's still one). Everyone has reasonable access to the same capabilities now, so everyone can reasonably make what they need according to their exact specs easily, quickly and cheaply.

So while there are many things being made with AI, there is ever-decreasing reasons to publish most of it. We're in an era of highly personalized software, which just isn't worth generalizing and sharing as the effort is now greater than creating from scratch or modifying something already close enough.

chromacity 2 hours ago||
> I think this article is making a pretty big assumption: that people making things with AI are also going to be publishing them. And that's just the opposite of what should be expected, for the general case.

The premise is that AI has already fundamentally changed the nature of software engineering. Not some specific, personal use case, but that everything has changed and that if you're not embracing these tools, you'll perish. In light of this, I don't think your rebuttal works. We should be seeing evidence of meaningful AI contributions all over the place.

edgarvaldes 19 minutes ago||
Hard agree. A 10x productivity increase would bleed outside the personal or internal use cases, even without effort.
freedomben 4 hours ago||
Agree. There's also a weird ideological thing in open source right now, where anything touched by AI must be AI slop, and "no AI" is the only acceptable position. That has strongly disincentivized legitimate contributions from people. I have to imagine that's having an impact.

There's a very real problem of low effort AI slop, but throwing out the baby with the bathwater is not the solution.

That said, I do kind of wonder if the old model of open source just isn't very good in the AI era. Maybe when AI gets a lot better, but for now it does take real human effort to review and test. If contributors were reviewing and testing like they should be, it wouldn't be an issue, but far too many people just run AI and don't even look at the output before sending the PR. It's not the maintainer's job to do all the review and testing of a low-effort push. That's not fair to them, and even setting that aside, it's a terrible model for software that you share with anyone else.

skeledrew 3 hours ago|||
> where any AI must be AI slop, and no AI is the only solution

Yep, also a huge factor. Why publish something you built with an AI assistant if you know it's going to be immediately dunked on not because the quality may be questionable, but because someone sees an em-dash, or an AI coauthor, and immediately goes on a warpath? Heck I commented[0] on the attitude just a few hours ago. I find it really irritating.

[0] https://github.com/duriantaco/fyn/issues/4#issuecomment-4117...

kubanczyk 3 hours ago|||
You know what else strongly disincentivized legitimate contributions from people?

Having your code snatched and its copyright disregarded, to the benefit of some rando LLM vendor. People can just press "pause" and wait to see whether they're fueling something that brings joy to the world. (Which it might, in the end. Or not.)

freedomben 9 minutes ago|||
For sure, that's legit too. I've had to grapple with that feeling personally. I didn't get to a great place, other than hoping that AI is democratized enough that it can benefit humanity. When I introspected deep enough, I realized I contributed to open source for two reasons, nearly equally:

1. To benefit myself with features/projects

2. To benefit others with my work

1 by itself would mean not bothering with PRs, upstreaming modifications, etc. It's way easier to hoard your changes than to go through the effort of getting them merged upstream. 2 by itself isn't enough motivation to spend the effort getting up to speed on the codebase, testing, etc. Together, though, it's powerful motivation for me.

I have to remind myself that both things are a net positive with AI training on my stuff. It's certainly not all pros (there's a lot of cons with AI too), but on the whole I think we're headed for a good destination, assuming open models continue to progress. If it ends up with winner-takes-all Anthropic or OpenAI, then that changes my calculus and will probably really piss me off. Luckily I've gotten positive value back from those companies, even considering having to pay for it.

JasperNoboxdev 2 hours ago|||
Been going back and forth on this with open source tools I've built. The training data argument is valid, but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon.

The moat used to be effort: nobody wants to rewrite this from scratch (especially when it's free). What's left is actually understanding why the thing works the way it does. Not sure that's enough to sustain open source long-term? I guess we all have to get used to it?

freedomben 8 minutes ago||
> but honestly the more immediate version of the same problem is that someone can just take your repo, feed it to an agent, and have their own fork in an afternoon.

Indeed, I've got a few applications I've built or contributed to that are (A)?GPL, and for those I do worry about this AI-washing technique. For libraries that are MIT or permissively licensed anyway, I don't really care. (I default to *GPL for applications, MIT/Apache/etc. for libraries)

vjvjvjvjghv 4 hours ago||
This reminds me so much of the dot-com bubble in 2000. A lot of clueless companies thought they just needed to "do internet" without any further understanding or strategy. They burned a ton of money and got nothing out of it. Other companies understood that the internet is an enabling technology that can support a lot of business processes, so they quietly improved their business with the help of the internet.

I see the same with AI. Some companies will use AI quietly and productively without much fuss. Others are just using it as a marketing tool or an ego trip by execs, with no real understanding.

rzerowan 44 minutes ago|
Yep, and the LLM tools are giving flashbacks to the FrontPage/Dreamweaver-to-GeoCities pipeline for building sites.

Still early innings, but I bet this plays out the same way - not everyone will have the time to sink into vibe-coding all the software workflows they require. Maintenance-wise and security-wise, holes will still remain for the personal non-tech user. Devs and orgs will probably limit the usage to a helper sidecar rather than the hyped 100% LLM-generated apps. Reminds me of the hype.

peteforde 4 hours ago|
Not sure that I'd look at python package stats to build this particular argument on.

First, I find that I'm using a lot fewer libraries in general, because I am less constrained by the mental models imposed by library authors on what I'm actually trying to do. Libraries are often heavy and by nature abstract away low-level API calls. These days, I'm far more likely to have 2-3 functions that make those low-level calls directly, without any conceptual baggage.

Second, I am generalizing but a reasonable assertion can be made that publishing a package is implicitly launching an open source project, however small in scope or audience. Running OSS projects is a) extremely demanding b) a lot of pain for questionable reward. When you put something into the universe you're taking a non-zero amount of responsibility for it, even just reputationally. Maintainers burn out all of the time, and not everyone is signed up for that. I don't think there's going to be anything remotely like a 1:1 Venn for LLM use and package publishing.

I would counter-argue that in most cases, there might already be too many libraries for everything under the sun. Consolidation around the libraries that are genuinely amazing is not a terrible thing.

Third, one of the most recurring sentiments in these sorts of threads is that people are finally able to work through the long lists of ideas they had but would have never otherwise gotten around to. Some of those ideas might have legs as a product or OSS project, but a lot of them are going to be thought experiments or solve problems for the person writing them, and IMO that's a W not an L.

Fourth, once most devs are past the "vibe" party trick phase of LLM adoption, they are less likely to squat out entire projects and far, far more likely to return to doing all of the things that they were doing before; just doing them faster and with less typing up-front.

In other words, don't think project-level. Successful LLM use cases are commit-level.

More comments...