Posted by davidbarker 14 hours ago
Good!
That's the point! Whatever amazing use case you had in mind is bad and I'm glad SynthID (apparently) makes it impossible.
It actually reeks of Google, since it's a technical solution to a people problem. Google doesn't seem to understand people.
This might be acceptable if it prevented or limited nefarious use cases. But it does no such thing. It doesn't help on that front at all; this isn't a problem that can be solved by technology alone.
I view SynthID more as a method of control: a way for Google to mark work an individual produces with its tools as Google's own.
I much prefer open models that let me be creative, write code, and so on, without trying to control, track, or mark me.
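For what it's worth, the "mark" being argued about here is a watermark that is supposed to be invisible to people but detectable by the key holder. SynthID's actual design is not public, so the following is only a toy spread-spectrum sketch of the general idea (a keyed pseudorandom pattern plus blind correlation detection), not Google's method; every name and number here is made up for illustration.

```python
import numpy as np

def keyed_pattern(key, shape):
    # Anyone holding the key can regenerate the exact same +/-1 pattern.
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(img, key, strength=6.0):
    # Add a faint keyed pattern; the change is tiny relative to 0-255 pixels.
    marked = img + strength * keyed_pattern(key, img.shape)
    return np.clip(marked, 0, 255)

def detect(img, key, threshold=3.0):
    # Blind detection: correlate pixels with the keyed pattern.
    # Image content is uncorrelated with the pattern, so the score sits
    # near 0 for unmarked images and near `strength` for marked ones.
    score = float(np.mean(img * keyed_pattern(key, img.shape)))
    return score > threshold

rng = np.random.default_rng(0)
original = rng.uniform(0, 255, size=(256, 256))  # stand-in for a real image
marked = embed(original, key=42)

print(detect(original, key=42))  # stays below threshold
print(detect(marked, key=42))    # correlation reveals the mark
```

Note the asymmetry this toy makes visible: without the key, the pattern looks like imperceptible noise, which is exactly why a watermark works as tracking rather than as a visible label.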
I am legitimately curious: can you name some?
> Actually no it just makes me use a different model
Yes, this is a very good thing when "a different model" means "a worse model."
> People who want to deceive or manipulate are not using Google models anyways. They are going to use a model without safety rails
That's totally invalid logic. There are plenty of deception and manipulation use cases that don't run afoul of model safety rails at all. Trivially: fake dating profiles to scam people, fake product images, fake insurance claims, fake blackmail material (e.g., a fabricated image of someone with another man or woman at a bar).
In fact, the only thing allowing differentiation now is how compute-heavy current architectures are. It's very possible this will turn out not to be necessary.
Also, my logic was not "nefarious uses require no safety rails". That was logic you injected into the conversation. I was merely saying that nefarious users are more likely to use models with the safety rails off.
Just think: we conceptually know what a brushless motor design looks like, but to the model it's just pixels. I guess even if it did produce the image, we wouldn't know what it means.
You could generate "pregnant Elon Musk with four arms and three eyes doing yoga poses" because the image models have enough visual concepts of each of those individual things, but that specific image is (likely) not in any training dataset.
But yeah, I am slowly trying to incorporate AI into my life (the delegation, work-in-my-sleep part). The funny thing is I develop it myself (RAG agents), but yeah. Sometimes I get sold on it, like "wait a minute, maybe it can do that," but no. You can probably tell I don't get deep into the technical part; I'm an API consumer. That's the thing I realize too: you can only know so much about a topic if you're spread thin as a generalist.