
Posted by kozika 20 hours ago

TikTok is being flooded with racist AI videos generated by Google's Veo 3 (arstechnica.com)
110 points | 74 comments
bongodongobob 19 hours ago|
Racists? On the internet you say!?
AstroJetson 17 hours ago||
Just watched Mountainhead about this very topic. The AI videos were good enough to start wars, topple banking systems and countries.

It is very scary because the "tech-bros" in the movie pretty much mimic the actions of the real life ones.

Apocryphon 18 hours ago||
The Tayification of everything
turbofreak 14 hours ago|
Nice callback. That was a golden era.
ivape 18 hours ago||
I think it's fine to fingerprint AI generated images/videos. It's a massive privacy violation but I just can't see any other way. Too many people have always been and will always be unethical.
WillPostForFood 17 hours ago||
To what end? You want to fingerprint all AI images and video to catch people who make racist videos in order to do what? It isn't illegal. If TikTok doesn't like the content, they can delete the video and the account. If Google or OpenAI doesn't want the content being created, they can figure out a way to block it, and delete the users' accounts in the meantime.

If I told you many 14 year olds were making very similar offensive jokes at lunch in high school, would you support adding microphones throughout schools to track and catch them?

ivape 14 hours ago||
If I told you many 14 year olds were making very similar offensive jokes at lunch in high school

A picture is worth a thousand words. Me saying your mom is so fat that _______ in the lunchroom is different from me saying your mom is so fat in a cinematic video format that can go locally viral (your whole school). For the first time in my life I'm going to say this is not a "history is echoing" situation. This is a "we have entirely gone to the next level, forget what you think you know" situation.

SchemaLoad 18 hours ago|||
I've been wondering if ChatGPT makes such excessive use of EM dash just so people can easily identify AI generated content.

Google wouldn't even need a fingerprint, they could just look up from their logs who generated the video.

oceanplexian 17 hours ago|||
Google already admitted they are fingerprinting generative video and have a safety obsession so I guarantee they do it to their LLMs. Another reason is to pollute the output that folks like Deepseek are using to train derivative models.
IAmGraydon 16 hours ago|||
The em-dash is one marker, but I’ve read that most LLMs create small but statistically detectable biases in their output to help them avoid reingesting their own content.
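A toy illustration of the kind of crude stylometric marker being discussed. This is purely hypothetical: the threshold is invented for the example, and real provenance schemes (such as Google's SynthID watermarking mentioned above) work on token-level statistics, not on a single punctuation count.

```python
def em_dash_rate(text: str) -> float:
    """Em-dashes per 1,000 characters: one crude stylometric
    signal, not a reliable detector on its own."""
    if not text:
        return 0.0
    return text.count("\u2014") / len(text) * 1000


# Hypothetical threshold, chosen only for illustration.
SUSPICION_THRESHOLD = 2.0


def looks_machine_styled(text: str) -> bool:
    # Flag text whose em-dash density exceeds the toy threshold.
    return em_dash_rate(text) > SUSPICION_THRESHOLD
```

In practice a single marker like this is trivially defeated (strip the dashes), which is why the comments above point toward embedded watermarks or server-side generation logs instead.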
partiallypro 17 hours ago||
Eventually as models become cheaper, the big companies that would do this won't have control over newer generated content, so it's fairly pointless.
jrflowers 16 hours ago||
The interesting thing about this is that it is the use case for these video generators. If the point of these tools is to churn out stuff to drive engagement, and the best way to do that is through content that is inflammatory, offensive, or misinformation, then that’s the ideal use case for them. That’s what the tool is for.
aaron695 18 hours ago||
[dead]
ynab10 17 hours ago||
[flagged]
brunker2 17 hours ago||
[flagged]
nvch 18 hours ago|
The question is, who is acting in a racist manner here: the LLM that does what it can, or the humans sharing those videos?
unsnap_biceps 17 hours ago|
Until we get an LLM that actually "thinks", it's just a tool like Photoshop. Photoshop isn't racist if someone uses it to create racist material, so an LLM wouldn't be racist either.
ghushn3 15 hours ago|||
I saw (on HN, actually) an academic definition for prejudice, discrimination, and racism that stuck with me. I might be butchering this a bit, but prejudice is basically thinking another group is less than purely because of their race. Discrimination is acting on that belief. Racism is discrimination based on race, particularly when the person discriminated against is a minority/less powerful person.

LLMs don't think, and also have no race. So I have a hard time saying they can be racist, per se. But they can absolutely produce racist and discriminatory material, especially if their training corpus contains racist and discriminatory material (which it absolutely does).

I do think it's important to distinguish between photoshop, which is largely built from feature implementation ("The paint bucket behaves like this", etc.), and LLMs which are predictive engines that try to predict the right set of words to say based on their understanding of human media. The input is not some thoughtful set of PMs and engineers, it's "read all this, figure out the patterns". If "all this" contains racist material, the LLM will sometimes repeat it.

stuaxo 10 hours ago||||
An LLM is a reflection of the biases in the data it's trained on, so it's not as simple as that.
redundantly 17 hours ago|||
LLMs can and do have biases. One wouldn't be far off calling an LLM racist.