Posted by Oravys 1 day ago
Awesome, if you're a victim of an AI company having your voice, you can help yourself by sending another AI company your voice!
> Audio is never used to train commercial models without explicit consent
I'm sure Mercor has explicit consent as well, legal teams are reasonably good at legally covering their asses with license terms.
Now 40k people have learned that biometrics aren't passwords. You can't rotate your voice.
I like to tell them this story that I read somewhere a decade or so ago. It might not be a true story (I never checked) but it's a helpful way of thinking about it.
Bob landed a great job and decided to celebrate by buying a new luxury car (a BMW in my recollection, but could be wrong) that had thumbprint authentication for unlocking and for starting it, so you never have to carry external keys. One day a thief decided to steal Bob's car. They broke into his house and tied him up. When they demanded the keys and he said there weren't any, they decided to cut off his thumb and use it as the key. Now Bob has no thumb and his car still got stolen.
I did find your story from 2005 about a man having his finger chopped off once the thieves realized they would need his appendage every time in order to start the car [2].
https://news.sky.com/story/french-police-investigating-serie...
[2] https://www.newscientist.com/article/mg18624943-600-finger-c...
"My voice is my passport. Verify me."
I have to renew my passport every 10 years or so. How do I do that with my voice? I guess it's time to take some vocal lessons.
The fediverse take on that was "customers are advised to rotate their faces and birthdays."
Here is a clip of him when someone called his studio thinking they were the local Pizza Hut. Phil does all the other voices, including the phone system.
The ability to switch mid-sentence is mostly just something I discovered I can do, and it's fun. But the ability to pass as my real gender is something that helps me feel safe. And being able to occasionally pass as my prior gender when needed (e.g., when calling my bank, until I can change my name/gender legally) is also quite useful.
Well met, fellow Uplinker!!
I'm pretty sure this person worked at Playtronics.
Voice fingerprinting is essentially useless because it is easily recorded and reproduced.
https://www.science.org/content/blog-post/things-i-won-t-wor...
Do you need to calibrate it to be able to repeat it, and does that calibration change if you are at a different altitude and in different conditions, such as humidity?
Does merely changing altitude (or ambient pressure) change voice enough to be considered different by a recognition or synthesizing system?
Although it does seem to affect some people more than others for sure, I guess it depends how and what you're smoking.
I guess you don't listen to Sinatra.
In reality, some phlegm aside, their voice is still the same in any way that matters.
If you knew people who didn't smoke and started (not uncommon in the 80s and 90s, quite a few people I know started smoking in university, or after the stress of a first job, some even later), and also the inverse, you can trivially hear it for yourself.
The problem is that even if you know that, you still get bombarded by banking apps promising "biometrics are more secure than passwords, switch now!"
Also, this took me way too long to realize it had nothing to do with Warhammer.
I feel like we're right on the threshold where we give up and start interacting with slop like it's human written.
Voices aren't strong.
There just aren't that many unique characteristic parameters behind a voice - it's largely dictated by an evolutionarily shared larynx and vocal tract. They aren't fingerprints.
The fact that human voice impersonation is not only widely possible but popular should give you an indication of this. Prosody, intonation, range, etc. - it's all flexible and can be learned and duplicated.
The signals are simple too, because we have to encode and decode them quickly. You may or may not be able to picture and rotate an apple tree in your head, but you can easily read this sentence in the voice of David Attenborough.
Moreover, you can easily fine tune a voice model to fit any other speaker. You can store the unique speaker embeddings in a very thin layer. Zero and few shot unseen sampling can even come close to full reproduction. You can measure this all quantitatively.
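The embedding comparison that paragraph describes can be sketched in a few lines. This is a toy illustration only: the random NumPy vectors below stand in for real speaker embeddings (real systems extract them with models like x-vectors or ECAPA-TDNN), and the dimension and noise scale are made up.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Toy stand-ins for embeddings extracted from utterances.
speaker_a_utt1 = rng.normal(size=192)
# Second utterance by the same speaker: same vector plus small variation.
speaker_a_utt2 = speaker_a_utt1 + rng.normal(scale=0.1, size=192)
# Unrelated speaker: an independent random vector.
speaker_b = rng.normal(size=192)

same_speaker = cosine_similarity(speaker_a_utt1, speaker_a_utt2)
different_speaker = cosine_similarity(speaker_a_utt1, speaker_b)
# Same-speaker pairs score near 1.0; unrelated high-dimensional
# vectors score near 0. A verifier thresholds this score.
```

The point of the sketch is the asymmetry: once an embedding like this leaks, anyone holding it can score arbitrary audio against your voice forever.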
Voices are not, and never have been, fingerprints. They're just not that unique.
In the idealized world, the legal system is meant to provide an accessible alternative to violence for reconciling disputes, but it's increasingly wielded as an impossibly kafkaesque system meant to maintain corporate power over individuals.
I think "CYA" is an overly-flowery term for the reality that they're blocking every avenue for legal recourse, while a variety of other avenues still exist for which adding friction requires the maintenance of expensive and ongoing costs (owning multiple residences, hiring security, etc.)
(To be clear, I am advocating for a more accessible and level legal system, not for UHC-style violence.)
Ah, I see. So, when discussing ways to ensure customers cannot utilize our warranty process, I'll make sure to do so in ways that are not traceable and won't show up in discovery.
The bigger the company, the more speculation there is about stuff people don't actually understand.
Back when the relevant laws were written, most communication was oral and in person; writing was reserved for the "important stuff". We now apply laws that were designed for memos to messages on Slack, which are a lot more like conversations than permanent documents.
In other cases I have heard people who ought to know better speculating about “what if” they didn’t have to follow the letter of some corporate policy that was rooted in risk avoidance. Again, it looks bad but it doesn’t mean anything concrete (except that the person might have iffy judgment).
Hey, fuck you too buddy.
I said this based on my years of working at companies on projects specifically to do things like delete all data as soon as it was legally permissible so it could never come up in court again.
And most of my “let’s take this offline” chats have led to discussions around doing illegal shit.
Hell, I had one manager give me handwritten code on paper and instructions to commit it under my name. The code in question would cause sales to go through without the discounts presented to customers because the discount service was buggy and his metrics were based on successfully completed sales. Even threatened to fire me when I said no, and only backed down when I put the paper in my pocket and asked if he would like for anyone else outside the room to see it or if he would not use me as a fall guy.
If your employees "don't know what they're talking about", then either they are not representative of the company's views and have no power to enact illegal policies for the company, or they do and you don't have controls. Trying to hide that shit by default means you don't get the benefit of the doubt, like the one you are giving them.
The situations you describe are not what I have experienced, which I guess makes me lucky.
My point was that in discovery, the idle chatter of know-nothings looks bad. But if there are companies that really have something to hide, well I guess that's what discovery is for. And as for your manager pal, if someone did that I'd be looking for work that very afternoon.
apology accepted and I rescind my insult.
> My point was that in discovery, the idle chatter of know-nothings looks bad. But if there are companies that really have something to hide, well I guess that's what discovery is for. And as for your manager pal, if someone did that I'd be looking for work that very afternoon.
I did, and switched jobs a few weeks after that. I did keep the paper, though, and let him know I still had it during those weeks, just to fuck with him back.
> The situations you describe are not what I have experienced, which I guess makes me lucky.
It may be the opposite and I was just unlucky, but I have run into multiple situations with companies making 100s of millions to billions a year where that sort of behavior occurred, so if people are being trained to hide unfortunate conversations then I am going to assume the worst barring large amounts of contrary evidence.
Heard that first from a US mil commander who once ran for a minor political office like state rep.
This is an overly flowery way of saying: violence.
The worst of the consequences are the same. People end up dead, destitute, and/or with long-term health consequences, unable to enjoy the fruits of their labor in the worst cases. In the milder cases I think I'd prefer a bruise for a week to a huge financial loss.
A lot of people were basically wiretapping themselves AND their businesses!
While a lot of Mercor "contractors" claim Mercor over-reached with data gathering via Insightful, it's kind of smart because people are too afraid to complain too much knowing they'll not only lose their primary job, but also open themselves up to uncapped liability for willful misconduct.
[0] https://www.wsj.com/tech/ai/mercor-ai-startup-personal-data-...
Selling the solution to the problem you caused ought to be illegal.
Most tech solutions are built on the problems they created. This includes phones, cars, computers, every software upgrade, and almost every electronic gadget. You are forced to use them because the world around you is no longer compatible with the way of life that existed before these technologies were introduced.
Similarly, phones are required now for some activities, like online banking. First it was an option, then it became the norm.
See https://en.wikipedia.org/wiki/General_Motors_streetcar_consp...
Court records are public in the US. If creditors want to know if you’ve been in financial trouble, they should check for bankruptcies and lawsuits, not the extrajudicial version of those that the credit reporting companies run based on hearsay.
It’s not better in all ways, of course, but the alternative is not “everyone gets cheap credit extended to them” but rather “people who rich people know and trust get cheap credit extended to them, some others get more expensive credit, and some get no credit extended”. It’s not obvious to me that that’s better.
The good thing about the grift economy is it grifts itself, like the turtles!
Happy to discuss the forensic detection side: AudioSeal watermarks, AASIST anti-spoofing, and how the detection landscape changes once voice biometrics start leaking at scale.

This is suggestion #1 on your list of remediation steps for victims, but you didn't provide any information on how anyone would actually do that. How exactly would I search the internet for copies of my voice?
Please don't tell me the solution is giving an embedding of my voice to another third party.
Mercor hasn't released many public statements on the incident. Social media posts aren't necessarily public, but I did find this breach notification sample filed with CA: https://oag.ca.gov/ecrime/databreach/reports/sb24-621099 . I guess we'll see if our legislators finally take data privacy seriously.
Mercor has definitely released statements with boilerplate "investigations are underway."
I don’t even use biometrics on apple devices, I use a 6 digit pin.
It was always a stupid idea.
The thing about being willing to trade convenience for security is that you get called paranoid, and then when the other shoe does drop and you're still doing it, you get called paranoid again for the current thing you're not doing that "everyone does".
Assuming Apple is truthful on this matter (so far it seems so), Apple devices store a mathematical representation of the data, not the data itself (i.e. not a picture of your finger) and keep it only on device on a special hardware section designed for extra security. When apps ask for authentication, they can never inspect the data, they can only ask “does this match?”.
Even if you were somehow able to exfiltrate the data and find some way to transform it for something nefarious, you’d still need to first attack and bypass a specific hardware feature of the target’s device.
So sure, not having any representation of the data anywhere is technically more secure (maybe, as typing your code could be intercepted by a shoulder surfer or a camera), but biometrics on Apple devices are fundamentally not the same as having your raw data available on a random server somewhere.
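A toy sketch of that match-only pattern, in Python. To be clear, this is not Apple's actual Secure Enclave API; the class, the hashing, and the exact-equality check are all invented purely to illustrate "the app only ever gets a yes/no, never the template" (real biometric matchers do fuzzy matching on feature vectors, not byte comparison).

```python
import hashlib
import hmac

class SecureTemplateStore:
    """Toy model of match-only biometric storage: the enrolled
    template never leaves the object; callers get only a boolean."""

    def __init__(self, enrolled_template: bytes) -> None:
        self._template = enrolled_template  # private; never exposed

    def matches(self, candidate: bytes) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(self._template, candidate)

# Hypothetical "feature" bytes standing in for real sensor output.
store = SecureTemplateStore(hashlib.sha256(b"alice-fingerprint-features").digest())
ok = store.matches(hashlib.sha256(b"alice-fingerprint-features").digest())
bad = store.matches(hashlib.sha256(b"mallory-fingerprint-features").digest())
```

The design choice being illustrated is the query interface: even a fully compromised app layer can only ask "does this match?", which is a much weaker capability than reading the template out.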
In the use case of a mobile phone, apple's face id absolutely improves security several-fold.
Because right now the incentives to do what's right are so low. Taking risks with other people's lives is becoming the norm for companies.
Germans (because of course) have a word for this: "Datensparsamkeit". Being frugal with your data.
I don't know if it's for the reason you imply. In the 70s, there were big debates in Germany about privacy and data storage. They spoke of one's data shadow (Datenschatten). I suspect this word comes from that tradition. The reason the word exists would then be the reckoning with WW2 (Vergangenheitsbewältigung).
E.g. you don't think of firefighter as fire-fighter in ordinary usage.
But yes. We Americans know Germans more for their silly big words. But statements like that can be misinterpreted, since the German perspective on themselves doesn't quite match the American stereotypes.
- we learned the hard way that data will be used to kill people, during the Nazi regime
- we learned it again in the GDR with the Stasi being a little less obvious but still ruining people's livelihoods
- and German comes up with compound words for such things
In the US of course the government buys this sort of information legally from corporations.
There is also the rather famous example of how earlier census data was used in the 40’s.
Once the government has your data, they have it. The next generation of representatives may not follow all the same rules and norms.
Who doesn’t want that old post going extinct forever when they were shit faced outside of a bar in Nashville but now they are in their mid-life and are “respectable” members of society.
So yeah, of course they've developed that kind of distrust. Americans should have too, after the paranoia of the 50s-60s: the Red Scare, surveillance of Black people, etc. Instead they just spent a few decades building an anti-social state.
Nowadays you just throw all the data into a black box and believe whatever it says blindly.
Or did you mean the "big data" crowd which thought 500GB was noteworthy? I don't think anyone took those seriously, in the 2010s or now. That was always "small" data.
500GB is in the "fits" category.
We do?
https://www.tomshardware.com/pc-components/ssds/kioxia-unvei...
Or 250 of these ~$400 4tb flash drives and an insane number of dongles to connect them all:
https://www.slashgear.com/1847725/largest-usb-thumb-drive-hi...
Relatedly, "monetizing user data" seems to just mean ads. Ads on everything, forever, until the userbase gets fed up and moves to a new service that definitely won't do that, and the cycle repeats about every 3 years.
I see this whenever an LLM’s impact is assessed. We know. The issue is scale and the ability for smaller and smaller groups (down to individuals) to execute at scale.
Fake news always existed. Now one dude in India can flood multiple sock puppet media accounts with right wing content/images (actual example) at a scale previously unimaginable.
My concern is that I can open up chatGPT and even with a free, “anonymous” account run an assembly line generating tens of thousands of words a day to pump to Twitter that are good enough to prop up multiple fake accounts and cause mayhem.
Now make it thousands of people like me doing it. Now add funding and political orgs. Add company leadership that turns a blind eye so long as it drives engagement. This scale and pipeline wasn’t possible 5 years ago, even if we clearly see the throughline.
I’m not even getting into fake images either. That used to require some know how. There are basically no hurdles and even if most people learn it’s fake, millions likely won’t. If you’re a little lucky, less scrupulous “news” outlets will amplify it for you as well for free.
I have the faintest possible hope that such things are going to be the death knell of social media. Yeah, a lot of credulous idiots are happily giving AI thirst traps their money for stroking their confirmation bias, but that's just who's left at this point.

It feels like every social media app I use is gradually bleeding users who aren't hopelessly addicted to the dopamine treadmill, because what's left is just plain unappealing to them. That selects for the people who are most vulnerable to AI shit, which is far from ideal, but it also means those platforms are comprised ever more of that vulnerable population and nobody else.

And the problem with all these businesses going through that is that without a diverse, growing audience, you just become InfoWars, slinging the same slop to the same people every day. Every ounce of said slop is great for what's left of your audience, but absolute garbage for getting anyone new in. And it just goes on that way until you sputter out and die (or harass the wrong group of parents, I guess).
I wish all social media sites a very haha die in a fire.
No media uploading, memes are few and far between (usually punished), etc.
[0] https://www.opensourceshakespeare.org/views/plays/play_view....
Introducing… The Hooli Box!
Except no company is learning this lesson.
The enterprise threat model includes "our own users", and the modus operandi is to maintain as much information on that threat as possible.
Obviously, you don't have to face any legal consequences, so why worry?
Sorry for the rant... but I just find this lack of liability frustrating.
0 - https://techcrunch.com/2026/03/22/delve-accused-of-misleadin...
I jest but the majority of the "normal" people I know are happy to hand over biometrics because _it's easier_. We need to start branding biometrics as "forever passwords" or something to help people understand just what they're handing over when they validate access to their checking account or enter Disney World or whatever else.
Fingerprints, DNA, iris scans, gait patterns, etc. are all something you can't change (much like a permanent account ID) and are constantly being presented to the world (much like an email address). In addition under US law, police can compel presentation of fingerprints, but passwords are protected under the 5th amendment.
in a certain light, it's kind of admirable. they live like the world is the way it should be
So I could easily see a lot of people viewing this as a positive.
Them being forever passwords is the value prop. The risk scene has changed, but that was essentially always the pitch.
Why is voice and biometric stuff still server-side at all in 2026? Whisper.cpp runs on a phone. WebGPU works. Half these "we keep your voice secure" pipelines could run in the browser today.
The real reason isn't capability. It's cost. Centralised compute is cheaper to run, but that math only holds if you don't price in the periodic breach. Which nobody does until it's their own employees on the leak list.
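The pricing argument above in back-of-the-envelope form. Every number here is hypothetical (the breach probability, compute costs, and breach cost are made up); only the structure of the comparison matters: add expected breach losses to each side before declaring one "cheaper".

```python
def expected_annual_cost(compute_cost: float,
                         breach_probability: float,
                         breach_cost: float) -> float:
    """Expected yearly cost once breach risk is priced in."""
    return compute_cost + breach_probability * breach_cost

# Hypothetical figures for illustration only.
central = expected_annual_cost(compute_cost=1_000_000,
                               breach_probability=0.05,   # data pooled server-side
                               breach_cost=50_000_000)
on_device = expected_annual_cost(compute_cost=3_000_000,
                                 breach_probability=0.001,  # nothing central to steal
                                 breach_cost=50_000_000)
# central = 1M + 0.05 * 50M = 3.5M
# on_device = 3M + 0.001 * 50M = 3.05M
# The "3x more expensive" on-device option wins once risk is included.
```

With these invented numbers the ranking flips, which is the comment's point: the centralized option only looks cheaper when the breach term is silently set to zero.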
I mean, just look at what happened to Crowdstrike....
I could think of quite a few things. I know that my bank and brokerage use voice ID.
Kind of nuts all the ways audio data can be used now.