
Posted by davidbarker 14 hours ago

Nano Banana 2: Google's latest AI image generation model(blog.google)
526 points | 496 comments
dyauspitr 12 hours ago|
I really wish they opened a version of this up for adult content. They would make immense amounts of money and it could be fenced off behind some sort of paywall where they could verify the age of the person.
nightski 14 hours ago||
[flagged]
estearum 14 hours ago||
Quite telling that you think a technology that merely prevents you from passing off an AI-generated image as not-AI-generated makes the model "worthless."

Good!

That's the point! Whatever amazing use case you had in mind is bad and I'm glad SynthID (apparently) makes it impossible.
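(For anyone unfamiliar with what's being argued about: SynthID's actual scheme is proprietary and far more robust — it's a learned, imperceptible signal designed to survive crops, resizing, and compression. Purely for intuition about what an "invisible mark that a detector can check" means, here's a toy least-significant-bit watermark sketch; every name here is hypothetical and this is not how SynthID works.)

```python
# Toy illustration ONLY: hide a known bit pattern in the least
# significant bits of pixel values, so the image looks unchanged
# but a detector that knows the pattern can verify its presence.
# SynthID's real watermark is a learned signal, not an LSB trick.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit signature

def embed(pixels):
    """Overwrite the LSB of the first len(WATERMARK) pixel values."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # clear LSB, then set it to the bit
    return out

def detect(pixels):
    """Check whether the signature is present in the LSBs."""
    return [p & 1 for p in pixels[: len(WATERMARK)]] == WATERMARK

image = [200, 201, 199, 180, 181, 179, 150, 151, 90, 91]
marked = embed(image)
assert detect(marked)       # watermarked copy is flagged
assert not detect(image)    # original is not
```

An LSB mark like this is trivially destroyed by re-encoding the image, which is exactly why production watermarks embed a redundant signal across the whole image instead.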

nightski 13 hours ago||
Actually, no, it just makes me use a different model. My uses are not nefarious at all, although it's fine for you to assume so. There are real, legitimate reasons why SynthID is actively harmful that do not involve deceiving or manipulating people at all. SynthID is just a stain on legitimate AI users. People who want to deceive or manipulate are not using Google models anyway. They are going to use a model without safety rails (which is not what I am advocating for per se, just that SynthID is an awful solution).

It actually reeks of Google, since it's a technical solution to a people problem. Google doesn't seem to understand people.

procinct 12 hours ago|||
Can you explain your use case? I’d be interested to understand.
nightski 11 hours ago||
Every legitimate use case for AI. It is a way to mark legitimate work done using AI tools as inferior.

This might be acceptable if it prevented or limited nefarious use cases. But it does no such thing; it doesn't help at all on that front, and this is not a problem that can be solved by technology alone.

I view SynthID as more of a method of control. It's a way for Google to label work produced by an individual using their tools as their own.

I much prefer open models that let me be creative, write code, etc., without trying to control/track/mark me.

estearum 13 hours ago|||
> There are real, legitimate reasons why SynthID is actively harmful that do not involve deceiving or manipulating people at all

I am legitimately curious: can you name some?

> Actually no it just makes me use a different model

Yes, this is a very good thing when "a different model" means "a worse model."

> People who want to deceive or manipulate are not using Google models anyways. They are going to use a model without safety rails

That's totally invalid logic. There are plenty of deception and manipulation use cases that don't run afoul of model safety rails at all. Trivially: creating fake dating profiles to scam people. Fake product images. Fake insurance claims. Fake blackmail material (e.g. a fabricated image of a person with another man/woman at a bar).

nightski 11 hours ago||
It doesn't mean a worse model. It may mean that at certain points in time, but those windows tend to be very short-lived; model advancement will hit diminishing returns, and at that point models will become commoditized. Even now, the best model is not always one of Google's SynthID models.

In fact, the only thing allowing differentiation now is how compute-heavy current architectures are. It's very possible this will turn out to not be necessary.

Also, my logic was not "nefarious uses require no safety rails". That was logic you injected into the conversation. I was merely saying that nefarious users are more likely to use models with the safety rails off.

estearum 11 hours ago||
Can you provide a few examples (or even one) of a legitimate use case that SynthID destroys?
DalasNoin 14 hours ago|||
Why does SynthID make it worthless? It just helps other platforms detect the output as AI-generated, doesn't it?
zardo 14 hours ago||
If the value is in deception.
csjh 14 hours ago||
What’s the downside of SynthID?
RivieraKid 13 hours ago||
It's extremely slow; it takes several minutes to generate an image.
ge96 14 hours ago|
My naive question: can image generation make something novel, e.g. "show me a DNA structure that cures cancer"? Can it do that, or does it have to have seen something similar before in order to generate it?

I'm just thinking: we conceptually know what a brushless motor design looks like, and it's just pixels. I guess even if it did produce the image, we wouldn't know what it means.

minimaxir 13 hours ago||
All image models can generate images that were not in their training datasets, but they can't handle extreme cases like your example.
ge96 13 hours ago||
What about it is extreme? It's a concept, like "generate an X-ray image". Eventually, hopefully, the cure for cancer could be represented as a simple molecule or whatever; I'm not saying I know.
minimaxir 13 hours ago||
There is currently no knowledge of, nor progress toward, what a cure for cancer would look like, and thus nothing an LLM can draw upon.

You could generate "pregnant Elon Musk with four arms and three eyes doing yoga poses" because the image models have enough visual concepts of each of those individual things, but that specific image is (likely) not in any training dataset.

ge96 13 hours ago||
What I'm saying is: if this thing can generate random things (noise), couldn't it make that, or new tech like negative mass? Anyway, I get it: if we don't know, then something we made wouldn't know either.
claysmithr 12 hours ago||
You are overestimating its intelligence, but I bet it would hallucinate some result. Why not try it yourself?
ge96 12 hours ago||
I don't know what the cure for cancer would look like, ha (not an organic chemist or biologist or genomicist... not even sure what field that would be).

But yeah, I am slowly trying to incorporate AI into my life (the delegation, work-in-my-sleep part). The funny thing is that I develop it (RAG agents), but yeah. Sometimes I get sold on it, like "wait a minute, maybe it can do that", but no. You can probably tell I don't get deep into the technical part; I'm an API consumer. That's the thing I realize too: you can only know so much about a topic if you're spread thin / a generalist.