Posted by s-gonzales 5 days ago
Well, that's a niche one.
It seems the mainstream view is grounded in anxiety and dislike for tech. This isn’t shared by my friends who enjoy tech at all.
Have others here been experiencing this?
It crowds out the people who genuinely devote thought, skill and time to their creations.
Trying to pad a resume with open source "contributions" and trying to pad a Patreon with "fanart" are behaviors that come from the same headspace, I think.
See the drama around the baby peacock search. The results were not only AI-filled but objectively, entirely wrong. I'd link you to said drama, but I literally can't find it because search is just picking up an echo chamber of AI. Which is, in a way, pretty meta commentary in itself.
I appreciate the ability to generate images of whatever I want. I don't like the way it's messing with perception of reality in a world which we view largely through screens.
>dislike for tech. This isn’t shared by my friends who enjoy tech at all.
Are they in tech or in the creative space? People I know in creative spaces are anxious, while those in the tech space tend to be mixed (doomer vs. AI-utopia split).
Online commenters are frequently not very different from an army of LLMs talking to each other. They get stuck on a few points and keep repeating them.
You can pay attention to them if you like, but take a quick stop on /r/relationshipadvice or /r/amitheasshole and you'll find out what the audience actually is. And then you can ask yourself: do these people's opinions matter to me? Do I want to be like them?
https://x.com/dmarcos/status/1848835003079397646
Even with AI, one needs talent and skill to create something noteworthy. Just the standard of what's compelling will change. I think people capable of adapting to the new tech will do fine creatively and financially.
Not necessarily. You need to define "art" and most likely "creativity". Since you have said AI is a tool, I assume you're not anthropomorphising the model itself as an artistic being.
So it's down to creativity/skill. I would say someone saying "create me an image of a person on a lake" has no creativity/skill at all. It may be an image, or a drawing, but it's not art. In the same way I can draw a stick person - it's a drawing without creativity/skill, so it's just a drawing, not art.
Transformers are also a very real existential threat to artists, as the vast majority can only ever have a decent career "working for the man", which ultimately means working for business people who couldn't care less whether material is AI-generated if it dramatically reduces cost while being effectively identical.
I'm actually curious what the people around you find cool about AI art. Nothing has "awed" me in the past 16 months, I would say.
Below is my favorite video series that heavily uses AI (images, video, lip sync, sound). I enjoy it and I think it has artistic merit:
https://x.com/dmarcos/status/1848835003079397646
It’s done by a single person. Individuals will be able to make stuff that needed teams and large budgets before. It will allow for much more creative risks.
Even in traditional art. The Black Square was revolutionary in its specific way, but the copycats aren’t talked about.
I think the AI Seinfeld will be remembered for a while, but random spin offs will be forgotten. Just like how Twitch Plays Pokemon was huge, but follow ups got forgotten. AI art will need to find a uniqueness angle at each step, because the complexity aspect will never be there.
There will be winner-takes-all productions, and those will come mostly from the companies with the highest marketing budgets.
The biggest uphill battle that AI content creation companies will have to fight is kind of societal. They'll need to change the public's perception that AI = "low effort".
No, you're barking up the wrong tree here. I love tech, I work in the industry and intend to continue to do so. I hate AI art because it's just bad. It's awful. AI produces images that don't make physical sense, have a weird glossy "digital art" sheen and most of all have zero artistic merit. I only want to see art (and read text) produced by human beings.
Der Kuss is an oil-on-canvas painting with added gold leaf, silver and platinum. It is widely praised and held in high regard.
To me, it looks like the man has a broken neck, and the woman's been decapitated, her head rotated 90° and reattached to her torso by the ear.
This isn't to say that AI art is flawless on the first go — just that quite a lot of famous "good" art has exactly that specific flaw too.
Even though GenAI is getting better, and even though competent users know how to work around this with the right kinds of effort, normal people don't study the thing they want an image of before asking an AI to make it. They will therefore generally fail to notice when the image they've just made is still very wrong at at least one entire layer of abstraction: how the things it depicts function or relate to each other.
Have a look at the dreck in the linked article. The software was not asked to produce a picture of an impossible waterfall that couldn't exist in our physical world. Everything about the prompt indicates the user was looking for a physically real scene. The picture of the girl in a meadow even used hashtags to indicate to the algorithm that it was expected to produce an instagram-like photo, but there's a seed pod floating in mid air, her dress is missing a critical strap, the lacework of the dress doesn't make sense and the direction of lighting is totally inconsistent across different parts of her figure.
For example:
> The software was not asked to produce a picture of an impossible waterfall that couldn't exist in our physical world.
And, given my lack of knowledge of waterfall geology, I have no idea why that's impossible.
> her dress is missing a critical strap
Her left shoulder? I've seen real dresses where one of those slips off sideways for whatever reason. It's like the dress version of self-opening flies.
If you meant something else, I missed it.
I do see the seed pod; but (perhaps due to so many photographers using fill lighting or photoshop, I don't know) I see nothing wrong with the lighting.
I see nothing noteworthy about the lacework either, but I wouldn't expect to as a guy whose total knowledge of things like that is "they are often intricate and frilly".
There are often things I can indeed spot in GenAI images. When I make them, I find it best to generate four at a time, pick the one that came closest to what I wanted, then highlight the errors and run an img2img pass, repeating that cycle a dozen times to get what I want.
But most people don't look that closely.
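The select-and-refine loop described above can be sketched in plain Python. Everything here is a stand-in: `generate_candidates` and `refine` are hypothetical stubs scoring "images" as numbers between 0 and 1, purely to show the shape of the best-of-N plus iterative img2img workflow, not any real diffusion API.

```python
import random

def generate_candidates(prompt, n=4, seed=None):
    """Stand-in for a txt2img call: each 'image' is just a
    quality score in [0, 1), purely for illustration."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def refine(image, strength=0.5, seed=None):
    """Stand-in for an img2img pass: nudge the candidate some
    fraction of the way toward the target quality of 1.0."""
    rng = random.Random(seed)
    return image + (1.0 - image) * strength * rng.random()

def iterate(prompt, rounds=12, seed=0):
    candidates = generate_candidates(prompt, seed=seed)
    best = max(candidates)             # pick the closest of the four
    for i in range(rounds):            # repeated img2img cycles
        best = max(best, refine(best, seed=seed + i + 1))
    return best

result = iterate("girl in a meadow", rounds=12, seed=42)
```

With a real pipeline the scoring step is a human eyeball, not `max()`, but the control flow is the same: keep the best candidate, refine, repeat.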
It's like this: https://xkcd.com/1015/
https://x.com/neuralviz/status/1848176393282326595
I enjoy it and I think it has tons of artistic merit.
As an example, I haven't had to use Google for a knowledge search in months now. Perplexity is much better, both for a summary of what I'm looking for and for finding non-garbage links.
So AI can summarize the news for you. But how does it do it? It steals the news from actual people and organizations. We're all selfish beings who want to save time and/or effort, but what do you think would happen to Perplexity if it had no access to news organizations? What's the end outcome here?
Did you pay for all the content you used to create your comment? If not, how dare you write it and publish it?
You've stolen a fraction of their income, because after reading your comment I don't need to go read your source if I'm only interested in the information you stole and shared.
What if you cause the demise of those sources with your (and your copycats') shameless theft?
This is silly. Technology evolves all the time. Things die, and they should die. Others arise in their place to fill the gap.
Morals aside, AI will run into serious problems in 10-20 years when the world has rearranged itself around AI content. With less non-AI content available and no reliable way to differentiate AI vs non-AI content, there will no longer be a dataset to train against.
Individual humans summarizing the news can reduce revenue for news organizations slightly, but AI summarizing every news article is a problem on a whole different scale. Basically the same as the difference between getting a mosquito bite and being stabbed in your carotid artery - both are just blood loss, but one is a minor annoyance and the other is fatal.
They haven't since the 1980s.
Everything that you say as some sort of doom and gloom prediction about the future _is the world we've lived in for the last 40 years_.
Here's a random newspaper that you're saying doesn't have reporters: https://www.tampabay.com
It will evolve towards consuming more raw data and more information that people self-publish in order to produce news. Newslike narration constructed on actual factual information is so bland and repeatable that there is no need for more training material. News is so uncreative and predictable that I can pick up a newspaper in a language I don't know and still guess with high probability what most articles are about from the photos, common names, the few words of that language I do know, and the general tone.
> Individual humans summarizing the news can reduce revenue for news organizations slightly,
There are so many humans doing that that the effect is not negligible. I skip reading all paywalled articles and read just their comments instead.
Then the AI will go to the primary sources that human journalists currently go to.
Will it be flawed? Yes.
Are humans already? Also yes.
Is there a huge risk that "the algorithm" will be politically biased? Totally.
Can you name one press organisation, larger than a local one-city-only paper, that hasn't been accused of that?
Can you expand on this? Because I'm not following the flow of your logic AT ALL.
AI isn't magic, it learns from experience.
If the inputs used to train an AI are "stealing", why aren't the inputs (experiences, what you read, what you listen to) to your brain?
And I don't mean in the reductive sense of "you are a human and the AI is not" I mean the act and the process and the result are the same, what differs is the substrate and the architecture — proton exchange across lipid layers vs. electron flow across doped semiconductors for the former, and transformers vs. the evolved chemical mess of the human brain for the latter.
Technology certainly evolves but this is a shitty direction. You yourself are calling the service you use shameless theft. Perplexity themselves have been caught doing shady things. I'm not here trying to defend media companies but I'm saying the current end game here is not pretty and more people should consider that.
That clickfarms will die is no great loss.
Same for photocopiers, printers, general-purpose computers. If, at any point, some of that tech had "never been invented or at the very least not released into the public", we wouldn't have e.g. photolithography, and thus no modern microchips.
Copyright is the legal equivalent of a dirty hack to preserve some legacy behavior that became a permanent fixture over time. As with dirty hacks in code, you're going to get different responses from different people, depending on the situation.
also, even in the absence of copyright, the unprecedented scale of art theft that these technologies necessitate would be morally questionable
> In the teaching of the Catholic Church, an indulgence (Latin: indulgentia, from indulgeo, 'permit') is "a way to reduce the amount of punishment one has to undergo for (forgiven) sins".[1] The Catechism of the Catholic Church describes an indulgence as "a remission before God of the temporal punishment due to sins whose guilt has already been forgiven, which the faithful Christian who is duly disposed gains under certain prescribed conditions…"[3]
https://en.wikipedia.org/wiki/Indulgence
This directly led to the Reformation and two centuries of religious wars in Europe, which proportionally killed more people than both world wars combined.
The printing press has more blood on it than ink. By contrast, the worst you can say about art models is that they're "derivative", whatever that means.
one technology could be used to do bad things, the other technology is guaranteed to only be used to do bad things
That would require all images to be bad.
Why can't I have a daguerreotype of a Victorian-era werewolf Prime Minister standing in front of 10 Downing Street? What harm does that cause anyone?
They are amazing replacements for the clone tool in Photoshop.
>one technology could be used to do bad things, the other technology is guaranteed to only be used to do bad things
One technology is used to make pictures, the other to commit genocide. My dude, I don't often say it, but go out and touch grass.
Both. They did both.
Cameras replaced most portrait painters, but they also gave us "Paint Drying", a 10-hour, 7-minute film of exactly what it sounds like: deliberately worthless images devoid of meaning, made as a protest.
And when I was a kid, the talking heads in the press were upset about cable TV introducing the UK to the cartoon channel, with a similar argument to yours.
> There is no theft
But even if by "theft" you mean "copyright infringement", there is a strong argument for fair use. We humans consume large amounts of copyrighted content and are influenced by it in everything we do; I don't find that unethical or wrong in general, and I don't see how it should be any different for a machine, unless you are against job automation. But in this case you seem to be worried about "theft".
Edit: HN thinks that I'm posting too fast so my reply to the comment below here ahah: "Well, then we agree that it wouldn't be a problem if the model didn't output an image from the training dataset, which is extremely rare or essentially impossible with today's dedup steps used during dataset creation."
i'd say this is not the equivalent of a human glancing at an image and then having the memory of its details exert some small influence on their way of thinking, imagining and creating; i'd say it's theft
also, it obviously depends on the case. copyright law is often very stupid and broken: there's no reason why a book written 100 years ago isn't in the public domain, but scraping millions of images from artists' websites kinda is theft
It is just a matter of doing it right the first time, such as having licenses/agreements in place, or a company building it on its own images.
like chemical weapons were invented, bioweapons were invented, meth manufacturing was invented, but we prohibit people from manufacturing them because they're harmful
same logic could apply to generative machine learning (obv not as harmful as above examples, but same idea)
literally any website containing images is in most cases overwhelmed with an unlimited supply of ai generated garbage
ai image generators are barely two years old, and have already caused a lot of damage in basically every sphere they interact with, i'd say you don't need a lot of foresight to be able to make a judgement here
Chemical weapons are still being developed (Russia has used them in its war against Ukraine), bioweapons are still being developed, and meth is still being manufactured all over the US when it isn't smuggled in.
There are quite a few different versions (SD1.5, XL, XL Turbo, SD3) of Stable Diffusion still in use because the newer ones didn't definitively supersede previous versions.
Result: an explorer not standing on the edge of a waterfall, but instead a physically impossible waterfall in the distance. A glowing temple is very very very visible, totally unhidden by any trees.