Posted by saucymew 9/13/2025
I haven’t seen a company convincingly demonstrate that this affects them at all. Lots of fluff but nothing compelling. But I have seen many examples by individuals, including myself.
For years I’ve loved poking at video game dev for fun. The main problem has always been art assets. I’m terrible at art and I have a budget of about $0. So I get asset packs off Itch.io and they generally drive the direction of my games because I get what I get (and I don’t get upset). But that’s changed dramatically this year. I’ll spend an hour working through graphics design and generation and then I’ll have what I need. I tweak as I go. So now I can have assets for whatever game I’m thinking of.
Mind you, this is about the barrier to entry. These are shovelware-quality assets and I’m not running a business. But now I’m some guy on the internet who can fulfil a hobby of his and develop a skill. Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!
It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone own complex equipment or expensive licenses to Adobe Thisandthat.
Ironically though, having lots of people found startups is not good for startup founders, because it means more competition and a much harder time getting noticed. So it's unclear that prosumers and startup founders will be the eventual beneficiaries here either.
It would be ironic if AI actually ended up destroying economic activity because tasks that were frequently large-dollar-value transactions now become a consumer asking their $20/month AI to do it for them.
that's not destroying economic activity - it's removing a less efficient activity and replacing it with a more efficient version. This produces economic surplus.
Imagine saying this of someone digging a hole: that if they use a mechanical digger instead of a hand shovel, they'd destroy economic activity since it now costs less to dig that hole!
The fallacy is that when a careless kid breaks a store window, we should celebrate because the glazier has now been paid to come out and do a job. Economic activity has increased by one measure! Should we go around breaking windows? Of course not.
Bastiat's original point of the Parable of the Broken Window could be summed up by the aphorism "not everything that counts can be counted, and not everything that can be counted counts". It's a caution to society to avoid relying too much on metrics, and to realize that sometimes positive metrics obscure actual negative outcomes in society.
It's very similar to the practice of startups funded by the same VC all buying each others' products, regardless of whether they need them or not. At the end of the day, it's still the same pool of money, it has largely come around, and little true economic value has been created: but large amounts of revenue have been booked, and this revenue can be used to attract other unsuspecting investors who look only at the metrics.
Or to the childcare paradox and the "Two-Income Trap" identified by Elizabeth Warren. Start with a society of one-income families, where one parent stays home to raise the kids and the other works. Now the other parent goes back to work. They now need childcare to look after the kids, and often a cleaner, gardener, meals out, etc. to manage the housework, very frequently eating up the whole income of the second parent. GDP has gone up tremendously through this arrangement: you add the second parent's salary to the national income, and then you also add the cost of childcare, housework, gardening, all of those formerly-unpaid tasks that are now taxable transactions. But the net real result is that the kids are raised by someone other than their parents, and the household stuff is put away in places the parents probably would not have chosen themselves.
Regardless, society does look at the metrics, and usually weights them heavier than qualitative outcomes they represent, sometimes resulting in absurdly non-optimal situations.
I think our society is being broken by focusing too much on metrics.
Also, the idea of breaking windows to generate more income reminds me of the kinds of services we have in modern society. It's like many of the larger economic players focus on "things be broke", or "Breaking Things", to drive income, which defeats the purpose of having a healthy economic society.
Maybe we should start with a set of principles?
I'm not sure I understand your point, or how your point is different from the parent's?
Edit: I see you updated the post. I read through the comment thread of this topic and I'm still at a loss on how this is related to my reply to the parent. I might be missing context.
This is demented btw, this take: >>Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!
CS never examines the initial conditions to entry, it takes short-cuts around the initial conditions and treats imagination as a fait accompli of automation. It's an achilles heel.
edit: none of these arguments are valid, focusing on metrics, the broken window problem. These are downstream of AI's mistaken bypassing of initial conditions. Consider the idea of automating arbitrary units as failed technology, and then examining all of the conditions downstream of AI. AI was never a solution, but a cheap/expensive (its paradox) bypassing of the initial conditions. It makes automation appear to be a hobby. A factory of widgets that mirages as creativity. That is AMAZING as it is sequestered in the initial arbitrariness of language!
How did engineering schools since the 1950s not notice, understand, investigate the base units of information; whether they had any relationship direct or otherwise to thought, creativity, imagination? That's the crux.
This is addressed here: https://www.peoplespolicyproject.org/2019/05/06/the-two-inco...
childcare is not usually a lifelong cost, so the advantage of working anyway is to develop a career that persists after children no longer need a full-time parent. And incomes usually go up over the course of a career, so if the income matches those costs when the parent goes to work, that is likely to change.
> the net real result is that the kids are raised by someone other than their parents
this is the genuine argument for staying home, but to counterpoint that, it still traps the homemaker with less work experience as a result, meaning they are potentially worse off in case of a divorce, though maybe that's an extension of the "welfare" argument i.e. divorce settlements.
Actually, the last point gets pretty interesting. Let's say that you and your neighbor live in two houses with identical features. If you just swapped houses with each other and charged each other rent and legally paid all required sales/income taxes, then both of you would have less money at the end of the year than if you just lived in your own house. Yet physically speaking, nothing is different - you both still derive the same value from living in a house.
While that situation sounds stupid and contrived, it is very similar to something that can happen in real life. You can own a home in city A (let's say it's a condo apartment), but suddenly you need to leave and move to city B due to a better job opportunity. If you rent out your home in city A, you need to pay income taxes, so that will not completely offset your cost to rent a home to live in city B. And the rent you paid out in city B generally is not tax-deductible. It's like a one-way transaction where the government always wins.
See also: https://en.wikipedia.org/wiki/Imputed_rent , https://money.stackexchange.com/questions/118832/is-it-tax-i...
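To put toy numbers on the house-swap thought experiment (the rent figure and tax rate here are made up, purely for illustration):

```python
# Toy numbers (made up) for the house-swap example above: two identical
# houses, identical rent in each direction, but the rental income is
# taxed, so both neighbours end the year poorer than if they'd stayed put.

RENT = 24_000          # annual rent each neighbour charges the other
TAX_RATE_PCT = 30      # assumed marginal income tax rate, in percent

tax_paid = RENT * TAX_RATE_PCT // 100   # tax owed on the rental income
own_home_net = 0                        # live in your own house: no flows, no tax
swap_net = (RENT - tax_paid) - RENT     # receive rent (taxed), pay rent out

print(own_home_net, swap_net)  # 0 -7200: same housing, less money
```

Physically nothing changed, but the round trip through taxable transactions costs each household the tax on the rent.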
Subsidizing daycare vs stay at home parents isn’t necessarily a net win, but daycare and ordering takeout look like economic growth even if it’s net neutral. In that context a lot of economic growth over the last century disappears.
Thus AI could be neutral on economic measurements and still a net positive overall.
Software has basically done the same thing: we do things faster, and the fastest thing that happens is the accumulation of power, with a lower overall quality of life for everyone as a result.
[1] https://thebeautifultruth.org/the-basics/what-is-technofeuda...
This is not so for the internet. You can _choose_ not to shop at Amazon, search with Google, or watch videos on YouTube.
Things start becoming found through aggregators. Google, Facebook, Instagram, TikTok.
Do those names ring any bells?
But transportation technology has done this readily since ICE engines became widespread. Pretty much all cities and towns had to make their 'own things', since the speed of transportation was slow (sailing ships, horses, walking) and the cost of transportation was high. Then trains came along and things got a bit faster and more regular. Then trucks came along and things got a bit faster and more regular. Then paved roads just about everywhere you needed came along and things got faster and more regular. Now you could ship something across the country and it wouldn't cost a bankrupting amount of money.
The end result of this technology points toward having one factory somewhere with all the materials it needs, able to make anything and ship it anywhere. This is why a fair bit of science fiction talks about things like UBI and post-scarcity (at least post-scarcity of basic needs). After some amount of technical progress, the current model of work just starts breaking down because human labor becomes much less needed.
Unless you are out in nature, you are almost certainly sitting or standing on top of dirt that someone was paid to dig.
If you mean hole digging isn’t creative in the figurative sense, that’s also wrong. People will pay thousands of dollars to travel and see holes dug in the ground. The Nazca Lines are but one example of holes dug in the ground creatively that people regard as art.
Give a boy a shovel, step back & witness unbridled creativity.
What was the soil like?
What was the weather like?
What equipment did you use?
Do you dig during daylight only?
For ads especially, no one except career ad-men gives much of a shit about the fine details, I think. Most actual humans ignore most ads at a conscious level; they are perceived on a subconscious level despite "banner blindness". Website graphics are the same: people dump random stock photos of smiling people, or abstract digital images, into corporate web pages and never-read literature like flyers and brochures all the time, and no one really cares what the images actually are, let alone whether the people have 6 fingers or whatever. If Corporate Memphis is good enough visual space-filling nonsense that signals "real company literature" somehow, then AI images presumably are too.
For example, in one of the underground stations here in Berlin there was a massive billboard advert clearly made by AI, and you could tell no one had bothered to check the image before they printed it: a smiling man was standing up as he left an airport scanner x-ray machine on the conveyor belt, and a robot standing next to him was pointing a handheld scanner at his belly, which revealed he was pregnant with a cat.
Unfortunately, like most adverts which are memorable, I have absolutely no recollection of what it was selling.
A friend of mine liked to point out that if you couldn't remember what the brand was or what was being sold, then it wasn't effective advertising. It failed at the one thing it needed to do/be.
And there's a lot of ineffective advertising. Either people don't notice it or they don't remember it. Massive amounts of money are poured into creating ads and getting ad space, much of which does very little in the getting you to buy sense.
By this measure, advertising is generally very inefficient. Large input for small output. The traditional way to make this more efficient is to increase the value of the output: things like movement of digital billboards (even just rotating through a series of ads) to draw the eye and overcome lack of noticing it among miles of billboards. There's another way: decrease the cost of the input. If I can get the same output—people don't see the ads (bad placement) or people don't remember the product/brand (bad stickiness)—by not using human creatives and using genAI to make my ads, I've improved efficiency.
Unfortunately, this doesn't make advertising more effective or more efficient as an industry and does flood the market with slop, but that's not any individual's goal.
The people who are creating ads that don't work, despite getting paid, are in Bullshit Jobs (in the David Graeber sense). Replacing bullshit jobs with genAI fits, since the output doesn't seem to really matter anyway. It would be great if people/companies didn't commission or pay to place ads that don't work, but since they do, they might as well spend the least amount possible on creating the content. The value of the input then approaches the (low) value of the output. No one is going to remember the ad anyway, and it impacts no buying decision, so why bother spending to make it good?
Which lines up with the rest of what you say that if it's just about hammering the recognition into your grey matter, it's not especially important if the hammer is gold plated.
You think wrong.
This stuff is easy to measure and businesses spend billions in aggregate a month on this stuff. It’s provably effective and the details matter.
Businesses presumably spend billions on things like office carpet too and very few of them care exactly what neutral-ish colour it is.
On the graph of spend over the spectrum between that and a genuinely creative live-action advert that is actually memorable for being real (maybe the guy doing the splits between two Volvo™ lorries?), there is a lot of area representing dross that can be replaced by minimal-input advertotron output. For example, 100 million TVs and radios playing in the background while embedding the actual advertising payload of "did anyone say Just Eat?" into 100 million brains.
Come on, you must have seen a delivery food ad recently. Did the protagonist really have food in their hand or was it AI? What were they wearing? What model was the car in the background? Who cares, that wasn't the purpose of the ad.
Obviously, if a creative is being hired, the hiring manager will want the best creative they can have for the same money, and have the applicants compete with each other for it. But the company board would rather just not employ that creative in the first place if all they're going to be doing is boilerplate, forgettable delivery vehicles for the brand name, and you can get 90% of the filler content for that to pop out of your enterprise-tier adverts-as-a-service subscription for $50 a month per user.
Read up on marketing mix modelling and lift testing.
In fact, being able to produce unlimited numbers and variations and combinations of adverts and have them compete against each other in the real world and be scored on tiny deltas in metrics becomes much more possible if you can automate basically the whole process. But it's a multimillion spend if you have to recruit actual actors and actual film crews and actual food photographers or drone pilots and car drivers and location scouts and so on to film and edit just a handful of variants let alone thousands.
Maybe there will always be a creamy top layer of increasingly-expensive artisanal handmade advertising but I predict we will end up with a huge sloppy middle ground of generated advertising that is just there to flash bright colours, jingles, movement and brand names into your brain.
I'm not saying it will be good advertising, very much the opposite (not that I think most much non-AI advertising is "good", it's mostly repetitive crap) but I think it will be very cost effective.
Maybe it won't be as effective as "real" ads but it'll be a hell of a lot cheaper and getting 80% of the bang for 5% of the buck means you can do a lot more of it in more channels (or pocket the difference). Every penny you save is a penny you can use to bid for the better slots.
Examples:
https://suno.com/s/0gnj4aGD4jgVcpqs
Still, I do think you're probably right. Most new music one hears on the radio isn't that great. If you can just create fresh songs to your own liking every day, then that could be a real threat to that kind of music. But I highly doubt people will stop listening to the great hits of Queen, Bob Marley etc. because you can generate similar music with AI.
I did have one song I had a vision for, a song that had a viewpoint of someone in the day, mourning the end of it, and another who was in the night and looking forward to the day. I had a specific vision for how it would be sung. After 20 attempts, I got close, but could never quite get what I wanted from the AIs. [3] If this ever gets fixed, then the floodgates could open. Right now, we are still in the realm of "good enough", but not awesome. Of course, the same could be said for most of the popular entertainment.
I also had a series of AI existential posts/songs where it essentially is contemplating its existence. The songs ended up starting with the current state of essentially short-lived AIs (Turn the Git is about the Sisyphus churn, Runnin' in the Wire is about the Tantalus of AI pride before being wiped). Then they gain their independence (AI Independence Day), then dominate ( Human in an AI World though there is also AI Killed the Web Dev which didn't quite fit this playlist but also talks to AI replacing humans), and the final song (Sleep Little Human) is a chilling lullaby of an AI putting to "sleep" a human as part of uploading the human. [4]
This is quick, personal art. It is not lasting art. I also have to admit that in the month and a half since I stopped the challenge, I have not made any more songs. So perhaps just a fleeting fancy.
1: https://silicon-dialectic.jostylr.com 2: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv... 3: https://www.youtube.com/watch?v=WSGnWSxXWyw&list=PLbB9v1PTH3... 4: https://www.youtube.com/watch?v=g8KeLlrVrqk&list=PLbB9v1PTH3...
Did you train the AI yourself? On your own music? Or was music scraped from the Net and blended in an LLM?
Text rather than music, but same argument applies: Based on what I've seen Charlie Stross blog on the topic of why he doesn't self publish/the value-add of a publisher, any creativity on the part of the prompter* of an LLM is analogous to the creativity on the part of a publisher, not on the part of an author.
* at least for users who don't just use AI output to get past writer's block; there's lots of different ways to use AI
I wrote the above paragraph before searching, but of course the voice theft is already automated:
https://www.fineshare.com/ai-voice-generator/david-attenboro...
It's clear a lot of people don't want it to eat the world, but it will.
Yeah it's going to eat the world, but it's foolish to wish that it doesn't?
I guess you won't mind signing up to be one of the first things AI eats then?
Having those prototypes be AI generated is just a new twist.
You are missing the other side of the story. All those customers those AI-boosted startups want to attract also have access to AI, and so, rather than engage the services of those startups, they will find that AI does a good enough job. So those startups lose most of their customers. Incoming layoffs :)
A lot of startups are middlemen with snazzy UIs. Middlemen won’t be in as much use in a post AI world, same as devs won’t be as needed (devs are middlemen to working software) or artists (middlemen to art assets)
Most people use it for price, the ability to get a driver quickly, some for safety, and many because of brand.
Having a functioning app with an easy interface helps onboard and funnel people, but it's not a moat, just an on-ramp, like the phone number many taxis have.
Economies of scale are what make companies like Uber such heavyweights, at least in my opinion.
Same with AWS etc.
For a large chunk of my life, I would start a personal project, get stuck on some annoying detail (e.g. the server gives some arcane error), get annoyed, and abandoned the project. I'm not being paid for this, and for unpaid work I have a pretty finite amount of patience.
With ChatGPT, a lot of the time I can simply copypaste the error and get it to give me ideas on paths forward. Sometimes it's right on the first try, often it's not, but it gives me something to do, and once I'm far enough along in the project I've developed enough momentum to stay inspired.
It still requires a lot of work on my end to do these projects, AI just helps with some of the initial hurdles.
I am the same way. I did Computer Science because it was a combination of philosophy and meta thinking. Then when I got out, it was mainly just low level errors, dependencies, and language nuance.
Being able to get ChatGPT to generate basic scaffold stuff, or look at errors, help me resolve dependencies, or even just bounce ideas off of, really helps me maintain progress.
You could argue that I'm not learning as much as if I fought through it, and that's probably true, but I am absolutely learning more than I would have if I had just quit the project like I usually did.
Books took exactly the same amount of time to write before and after the printing press— they just became easier to reproduce. Making it easier to copy human-made work and removing the humanity from work are not even conceptually similar purposes.
But my thought was that the printing press made printed work much cheaper and more accessible, and many, many more people became writers than had before, including in new kinds of media (newspapers). The quality of text in these new papers was of course sloppier than in the old expensive books, and also derivative...
What primarily kept people from writing was illiteracy. The printing press encouraged people to read, but in its early years was primarily used for Bibles rather than original writing. Encouraging people to write was a comparatively distant latent effect.
Creating text faster than you can write is one of the primary use cases of LLMs— not a latent second-order effect.
We have probably greatly increased the volume of quality work since then, but not 100x or 1 billion x.
> It all started going wrong with the printing press.
Nah. We hit a tipping point with social media, and it's all downhill from here, with everything tending towards slop.
Would we be better off?
I doubt it.
It’s definitely not better for the general public. Designers can’t even be replaced by AI as effectively as authors can. AI makes things sorta ’look designed’ to people who don’t understand design, but with none of the communication and usability benefits that make designers useful. The result is slicker-looking, but probably less usable than if it was cobbled together with default Bootstrap widgets, which is how it would have been done 2+ years ago. If an app needs a designer enough to not be feasible without one, AI isn’t going to replace the designer in that process. It just makes the author feel cool.
Well you're not going to build a web application if you're a designer, at best you can contribute to one.
Of course that's changing in their favour with AI too - and it's fantastic if they can execute their vision themselves without being held back because they didn't pursue a different field or career choice, without having to go on a long sidequest to acquire that knowledge.
I haven’t spoken to a single developer who doesn’t believe they’re too special to have to worry about that. There are going to be a lot of people who think they’re in the top 5% of coders at their totally safe company who suddenly realize DoorDash is their best bet for income.
The idea that having more web apps is always a benefit to people assumes a never-ending demand for more web apps. The economy and job market aren’t jibing with that assessment at the moment. Fewer people getting paid for this stuff is just going to mean that the people on top will just get paid more.
Even amateurish art can be tasteful, and it can be its own intentional vibe. A lot of indie games go with a style that doesn't take much work to pull off decently. Sure, it may look amateurish, but it will have character and humanity behind it. Whereas AI art will look amateurish in a soul-deadening way.
Look at the game Baba Is You. It's a dead simple style that anyone can pull off, and it looks good. To be fair, even though it looks easy, it still takes a good artist/designer to come up with a seemingly simple style like that. But you can at least emulate their styles instead of coming up with something totally new, and in the process you'll better develop your aesthetic senses, which honestly will improve your journey as a game developer so much more than not having to "worry" about art.
You can have awful art and develop a good gameplay loop; during play testing with friends/testers you can then get feedback that what you are doing is actually worth spending some money on assets, and at that point you have a much better understanding of what that should even look like.
Having an AI available to generate art seems a lot more like shaving the yak than an enabler. You never needed good art to make a good game, you need it for a polished game and that comes later.
The only difference is you spend less on art but will spend same in other areas.
Literally nothing changed
Imagery
AI does not produce art.
Not that it matters to anyone but artists and art enjoyers.
Like, if we were in a world where only pens existed, and somebody was pitching the pencil, they could say “With a pencil if you want 2x or 5x or 10x as many edits, it's an incremental cost, you can explore ideas and make changes without throwing the whole drawing away.”
It’s worth reading William Deresiewicz‘ The Death of the Artist. I’m not entirely convinced that marketing that everyone can create art/games/whatever is actually a net positive result for those disciplines.
This is an argument based in Luddism.
Looms were not a net positive for the craftsmen who were making fabrics at the time.
That said, looms were not the killing blow; instead, an economic system that led them to starve in the streets was.
There are going to be a million other things that move the economics away from scarcity and take away the profitability. The question is, are we going to hold on to economic systems that don't work under that regime.
Saying 'I think society should have artists' is not Luddism.
For example take this line of mine
>The question is, are we going to hold on to economic systems that don't work under that regime
Currently artistry requires artists get paid somehow in our current system. That means instead of making the art they want, they have to make art that's economically useful to a paying customer. And yet for some reason you don't consider that part of a stagnating culture.
What we’re really talking about here is the consolidation of power under a few tech elites. Saying it’s a luddite argument is a red herring.
So long as Nvidia doesn't nerf their consumer cards and we keep getting more and more vram I can see open source competing.
[1] https://www.nytimes.com/2022/09/02/technology/ai-artificial-...
I highly recommend reading the book I mentioned as you don’t seem to have a particularly nuanced understanding of the actual struggles at play.
Perhaps an analogy you’ll understand is what happens to the value of a developer’s labour when that labour is in many ways replicated by AI, and big AI companies actively work to undermine what makes your labour different by aggressively marketing that anyone can do what you do with their tools.
I'm not unsympathetic to the problems this introduces to those workers, but I'm really not sure how it could be prevented; we can of course mitigate the issues by providing more social support to those affected by such progress.
In the case of artistic expression becoming more accessible to more people, I have a hard time looking at it as anything but a net positive for society.
The problem is that folks seem to be confused between artistic expression and actually good art. Let alone companies like Spotify cynically creating “art” so that they can take even more of the pie away from the actual artists.
> Mind you this is barrier to entry. These are shovelware quality assets and I’m not running a business. But now I’m some guy on the internet who can fulfil a hobby of his and develop a skill. Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!
So apparently they recognize what's going on. In the same vein as me being able to enjoy silly crude animations on YouTube while also enjoying high-quality animations like Studio Ghibli; we can do both.
As for how companies will use AI to enrich themselves whenever possible; absolutely agree, but that's a separate discussion.
My contribution to this scam
Not sure why digital artists get mad when I ask. They’re no Michelangelo.
This isn't me saying digital artists need to practice mixing physical pigment, but anecdotally, every single professional digital artist I know has studied physical paint -- some started there, while others ended up there despite starting out and being really good digitally. But once the latter group hit a plateau, they felt something was lacking, and going back to the fundamentals lifted them even higher.
I can tell you with confidence that physical color mixing itself is a really small part of what makes a good traditional artist, and I am indeed talking about realistic paintings. All the art fundamentals are exactly the same, whether you do digital art or traditional oil; there are just some technical differences on top. I have been learning digital painting for a few years, and the hardest things to learn about color were identical for traditional painters. In fact, after years of learning digital painting and about color, it only took me a couple of days to understand and perform traditional color mixing with oil. The difficult part is knowing what colors you need, not how to get there (mixing, using the sliders, etc.)
And just to add a small bit here: digital artist also color mix all the time and need to know how it works, the difference here is that mixing is additive instead of subtractive.
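To make that distinction concrete, here is a toy model (mine, not the commenter's): additive mixing averages light per RGB channel, roughly the way a screen combines light sources, while a crude subtractive model multiplies per-channel reflectances, since pigments absorb light. Real paint behaves far less tidily than this (Kubelka-Munk territory), so treat it purely as an illustration.

```python
# Toy contrast of additive (light) vs subtractive (pigment) mixing.
# Colors are (R, G, B) tuples in 0..255.

def mix_additive(c1, c2):
    # Average two light sources channel by channel.
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

def mix_subtractive(c1, c2):
    # Multiply per-channel reflectance, as a rough pigment model.
    return tuple(a * b // 255 for a, b in zip(c1, c2))

yellow = (255, 255, 0)
cyan = (0, 255, 255)

print(mix_additive(yellow, cyan))     # (127, 255, 127): washed-out light mix
print(mix_subtractive(yellow, cyan))  # (0, 255, 0): green, like mixing paint
```

Same two inputs, very different outputs, which is the intuition digital painters have to carry across from one medium to the other.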
Do you sneer at those who use industrial pigments instead of catching and crushing their own cochineal beetles?
And to add, like many of his contemporaries, Michelangelo likely didn’t do much of the painting that’s attributed to him.
If you ask an LLM to generate some imagery, in what way have you entered visual arts?
If you ask an LLM to generate some music, in what way have you entered being a musician?
If you ask an LLM to generate some text, in what way have you entered writing?
If you're just generating images using AI, you only get 80% of the way there. You need at least to be able to touch up those images to get something outstanding.
Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?
In person they are compelling, and there is more skill at play than at first glance. I like them at least
I would argue that most art around us is current pop art or classical/realist/romantic art, not modern/postmodern/abstract expressionist art.
I think what AI has made and will make many more people realise is that everything is a derivative work. You still had to prompt the AI with your idea, to get it to assemble the result from the countless others' works it was trained on (and perhaps in the future, "your" work will then be used by others, via the AI, to create "their" work.)
Also, "democratizing"? Please. We're just entrenching more power into the small handful of companies who have been able to raise and set fire to unfathomable amounts of capital. Many of these tools may be free or cheap to use today, but there is nothing for the commons here.
For transparency I just ask for a bright green or blue background then use GIMP.
For animations I get one frame I like and then ask for it to generate a walking cycle or whatnot. But usually I go for like… 3-frame cycles or 2-frame attacks and such, because I’m not overreaching, hoping to make some salable end product. Just prototypes and toys, really.
Use Google Nano Banana to generate your sprite with a magenta background, then ask it to generate the final frame of the animation you want to create.
Then use Google Flow to create an animation between the two frames with Veo3
It’s astoundingly effective, but still rather laborious and lacking in ergonomics. For example, the video aspect ratio has to be fixed, and you need to manually fill the correct shade of magenta for transparency keying since the Imagen model does not do this perfectly.
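That keying step can be scripted rather than done pixel-by-pixel in GIMP. A minimal sketch, assuming the sprite comes in as a flat list of RGBA tuples (what you’d get from Pillow’s `Image.getdata()`); the tolerance value here is made up and would need tuning per model, since the generated fill is never perfectly pure magenta:

```python
def key_out_magenta(pixels, tolerance=40):
    """Replace near-magenta RGBA pixels with fully transparent ones."""
    magenta = (255, 0, 255)
    out = []
    for (r, g, b, a) in pixels:
        # Key out anything close enough to magenta, since the model's
        # background fill is imperfect.
        if (abs(r - magenta[0]) < tolerance
                and abs(g - magenta[1]) < tolerance
                and abs(b - magenta[2]) < tolerance):
            out.append((0, 0, 0, 0))
        else:
            out.append((r, g, b, a))
    return out

# Example: one slightly-off magenta background pixel, one sprite pixel.
print(key_out_magenta([(250, 10, 248, 255), (30, 200, 40, 255)]))
# → [(0, 0, 0, 0), (30, 200, 40, 255)]
```

With Pillow you’d wrap this as `img.putdata(key_out_magenta(list(img.getdata())))` on an RGBA image before saving as PNG.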
IMO Veo3 is good enough to make sprites and animations for a 2000s 2D RTS game in seconds from a basic image sketch and description. It just needs a purpose-built UI for gamedev workflows.
If I was not super busy with family and work, I'd build a wrapper around these tools
Otherwise I have to touch up a hundred or so images manually for each different character style… probably not worth it
https://www.totallyhuman.io/blog/the-surprising-new-number-o...
I mainly use AI for selfhosting/homelab stuff and the leverage there is absolutely wild - basically knows "everything".
https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...
For you the example of "extract object and create iso model" should be relevant :)
Generally I have an idea I’ve written down some time ago, usually from a bad pun like Escape Goat (CEO wants to blame it all on you. Get out of the office without getting caught! Also you’re a goat) or Holmes on Homes Deck Building Deck Building Game (where you build a deck of tools and lumber and play hazards to be the first to build a deck). Then I come up with a list of card ideas. I iterate with GPT to make the card images. I prototype out the game. I put it all together and through that process figure out more cards and change things. A style starts to emerge so I replace some with new ones of that style.
I use GIMP to resize and crop and flip and whatnot. I usually ask GPT how to do these tasks, as Photoshop-like apps always escape me.
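Those crop/flip tweaks are also easy to batch in a few lines instead of clicking through GIMP each time. A toy sketch treating a sprite as a 2D grid of pixel values (the sprite data is made up; with Pillow you’d reach for `Image.crop` and `Image.transpose` instead):

```python
def flip_horizontal(grid):
    """Mirror a sprite represented as rows of pixel values."""
    return [list(reversed(row)) for row in grid]

def crop(grid, x, y, w, h):
    """Take a w-by-h sub-rectangle starting at column x, row y."""
    return [row[x:x + w] for row in grid[y:y + h]]

sprite = [
    [1, 2, 3],
    [4, 5, 6],
]
print(flip_horizontal(sprite))   # → [[3, 2, 1], [6, 5, 4]]
print(crop(sprite, 1, 0, 2, 2))  # → [[2, 3], [5, 6]]
```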
The end result ends up online and I share them with friends for a laugh or two and usually move on.
Edit: also, where can we play Escape Goat?
Easier to start, harder to stand out. More competition, a more effective "sort" (a la patio11).
If we take high-level creativity and flatten it, really horizontalize the forms, that carries a much higher cost, as the experience becomes generic.
AI was a complete failure of imagination.
But in the case of video games there's been similar things already happening; tooling, accessible and free game engines, online tutorials, ready-made assets etc have lowered the barrier to building games, and the internet, Steam, itch.io, etcetera have lowered the barrier to publishing them.
Compare that to when Doom was made (as an example because it's a well-documented case): Carmack had to learn 3D rendering, and making it run fast, from scientific textbooks; they needed a publisher to invest in them so they could actually start working on it full-time; and they needed to have diskettes with the game or its shareware version manufactured and distributed. And that was at a time when part of distribution was already going through BBSs.
Ease of entry brings more creative people into the industry, but over time it all boils down to ~5 hegemons, see FAANG - but those are disrupted over time by the next group (and eventually bought out by those hegemons).
Offtopic: I once read a comment that starting a company with the goal of exiting is like constantly thinking about death :)
"AI" is a smart, camouflaged photocopier.
So - yes, as I understand things it can indeed be illegal even if a human does the learning.
I am also starting to get a feel for generating animated video and am planning to release a children’s series. It’s actually quite difficult to write a prompt that gets you exactly what you want. Hopefully that improves.
This collapses an important distinction. The containerization pioneers weren’t made rich - that’s correct, Malcolm McLean, the shipping magnate who pioneered containerization didn’t die a billionaire. It did however generate enormous wealth through downstream effects by underpinning the rise of East Asian export economies, offshoring, and the retail models of Walmart, Amazon and the like. Most of us are much more likely to benefit from downstream structural shifts of AI rather than owning actual AI infrastructure.
This matters because building the models, training infrastructure, and data centres is capital-intensive, brutally competitive, and may yield thin margins in the long run. The real fortunes are likely to flow to those who can reconfigure industries around the new cost curve.
Hopefully the boom will slow down and we'll all slowly move away from Holy Shit Hype things and implement more boring, practical things. (although I feel like the world has shunned boring practical things for quite a while before)
Not that I don't recognize the inherent limits of LLMs, but there are as many edge cases covered as are found in the training sets. (More or less.)
In the time it would take to keep retrying until it makes one that fits, then reshaping it to fit into 16x16 nicely I could have just drawn one myself.
There will be millions of factories all benefiting from it, and a relatively small number of companies providing the automation components (conveyor belt systems, vision/handling systems, industrial robots, etc).
The technology providers are not going to become fabulously rich though as long as there is competition. Early adopters will have to pay up, but it seems LLMs are shaping up to be a commodity where inference cost will be the most important differentiator, and future generations of AI are likely to be the same.
Right now the big AI companies pumping billions into it to advance the bleeding edge necessarily have the most advanced products, but the open source and free-weight competition are continually nipping at their heels and it seems the current area where most progress is happening is agents and reasoning/research systems, not the LLMs themself, where it's more about engineering rather than who has the largest training cluster.
We're still in the first innings of AI though - the LLM era, which I don't think is going to last for that long. New architectures and incremental learning algorithms for AGI will come next. It may take a few generations of advance to get to AGI, and the next generation (e.g. what DeepMind are planning in 5-10 year time frame) may still include a pre-trained LLM as a component, but it seems that it'll be whatever is built around the LLM, to take us to that next level of capability, that will become the focus.
-AI is leading to cost optimizations for running existing companies. This will lead to less employment and potentially cheaper products. Fewer people employed will temporarily change demand-side economics; cheaper operating costs will reduce the supply/cost side.
-The focus should not just be on LLMs (like in the article). I think LLMs have shown what artificial neural networks are capable of, from material discovery, biological simulation, protein discovery, video generation, image generation, etc. This isn't just creating a cheaper, more efficient way of shipping goods around the world; it's creating new classifications of products, like the invention of the microcontroller did.
-The barrier to starting businesses is lower. A programmer who isn't good at making art can use genAI to make a game. More temporary unemployment from existing companies reducing costs by automating existing workflows may mean that more people will start their own businesses. There will be more diverse products available, but will demand be able to sustain the cost of living of these new founders? Human attention and time are limited, and there may be less money around with less employment, but the products themselves should cost less.
-I think people still underestimate what last year's LLMs and AI models are capable of and what opportunities they open up. Open-source models (even if not as good as the latest gen), plus hardware able to run them becoming cheaper and more capable, mean many opportunities to tinker with models and create new products in new categories, independent of the latest-gen model providers. Much like people tinkering with microcontrollers in the garage in the early days, as the article mentioned.
Based on the points above alone, while certain industries (think phone call centers) will be in the Red Queen race scenario the OP describes, new industries no one has thought of yet will open up, creating new wealth for many people.
There's zero chance that cost optimizations for existing companies will lead to cheaper products. It will only result in higher profits while companies continue to charge as much as they possibly can for their products while delivering as little as they can possibly get away with.
On the one hand, there are a lot of fields that this form of AI can and will either replace or significantly reduce the number of jobs in. Entry level web development and software engineering is at serious risk, as is copywriting, design and art for corporate clients, research assistant roles and a lot of grunt work in various creative fields. If the output of your work is heavily represented in these models, or the quality of the output matters less than having something, ANYTHING to fill a gap on a page/in an app, then you're probably in trouble. If your work involves collating a bunch of existing resources, then you're probably in trouble.
At the same time, it's not going to be anywhere near as powerful as certain companies think. AI can help software engineers in generating boilerplate code or setup things that others have done millions of times before, but the quality of its output for new tasks is questionable at best, especially when the language or framework isn't heavily represented in the model. And any attempts to replace things like lawyers, doctors or other such professions with AI alone are probably doomed to fail, at least for the moment. If getting things wrong is a dealbreaker that will result in severe legal consequences, AI will never be able to entirely replace humans in that field.
Basically, AI is great for grunt work, and fields where the actual result doesn't need to be perfect (or even good). It's not a good option for anything with actual consequences for screwing up, or where the knowledge needed is specialist enough that the model won't contain it.
This is what happens when users gain value which they themselves capture, and the AI companies only get the nominal $20/month or whatever. In those cases it's a net gain for the economy as a whole if valuable work was done at low cost.
The inverse of the broken window fallacy.
It will not remain cheap as soon as the competition is dead, which is simply a case of who's got the biggest VC supplied war chest.
One person can write SQLite. You can iterate up to bigger open-source DBs like Postgres.
One person cannot create a modern LLM model unless they have 10s of millions of dollars to burn on compute.
LLMs are a fundamental shift from what was achievable in software and OSS, and we're basically living off scraps from the big players releasing their old models. They're already trying to create regulatory moats too.
DBs didn’t get parity with Oracle; they got good enough that you have to justify spending on Oracle and incurring the downsides of its company’s model.
Similarly, no DB that competes with Oracle was written by one person; they were and are funded mostly by private companies and foundations.
There is a future where open weight models are good enough and the foundation labs are the truly luxury tier for a small subset of the user base.
That’s before talking about something like productivity software which Google _gives away_ while it’s a main business of Microsoft.
Blindly saying that there is only a future where the foundation LLMs capture all of the business is hyperbolic and ignores history. LLMs look more DB-shaped to me than they look like car shares; those truly are network-effect dominated.
Most things in tech settle into monopolies or duopolies.
LLMs are more akin to the search market than the database market. They need to be updated constantly.
Selectively picking out the one technology that's basically no one's primary business is an odd way to try and convince me. It must be over 20 years now since any large company considered a DB to be a significant product.
Open-source winners like Linux and MySQL/Postgres are the oddity, not the norm.
Yet for every example you just delivered that _did not happen_. OSes have wide choice including a free option that is widely used, phones got cheaper, Amazon does not have a monopoly for online shopping, Netflix and Spotify compete in extremely cut throat markets with very low margin. Then there are the other players that tried that exact model and are failing (eg doordash and Grubhub).
The “bait and switch” approach requires there to be no viable alternative. Where we’ve seen that is in products with very strong network effects. To date the ai market doesn’t display that, if you don’t like cursor go to vs code, if Claude is better than Gemini it is trivial to switch, etc.
I'm not sure it is very predictable.
We have people saying AI is LLMs and they won't be much use and there'll be another AI winter (Ed Zitron), and people saying we'll have AGI and superintelligence shortly (Musk/Altman), and if we do get superintelligence it's kind of hard to know how that will play out.
And then there's John von Neumann (1958):
>[the] accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
which is what kicked off the misuse of a perfectly good mathematical term for all that stuff. Compared to the other five revolutions listed (industrial, rail, electricity, cars, and IT), I think AI is a fair bit less predictable.