Posted by cdrnsf 7 hours ago
Only if an anonymous person or their property is caught in a criminal act may their identity be investigated. This should be sufficient to ensure justice. Moreover, the evidence from that criminal act must be subject to post-hoc judicial review to establish whether the investigation itself was justified.
Unfortunately for us, the day we stopped updating the Constitution is the day it all started going downhill.
The problem is when the government changes the definition of 'bad actor'.
That said, to wave vaguely in the direction of recent US government actions: they have demonstrated the weakness of legal restrictions on the government. It's good to have something you can point to when they violate it, but it's too easily ignored. There's no substitute for good governance.
That is a myth spread by control freaks and power seekers. Yes, bad actors prefer anonymity, but the quoted statement is intended to mislead and deceive because good actors can also prefer strong anonymity. These good actors probably even outnumber bad ones by 10:1. To turn it around, deanonymization is where the bad actors play.
Also, anonymity can be nuanced. For example, vehicles can still have license plates, but the government would be banned from tracking them in any way until a crime involving that vehicle has been committed.
Both good and bad actors benefit from anonymity in the current system. If bad actors had their identities revealed, they'd have a much harder time being bad actors. Good actors need anonymity because of those bad actors.
If my memory serves me, we had PCA- and LDA-based systems in the 90s, and then in the 2000s a lot of hand-tuned AdaBoost cascades and (non-AI) SIFT features. This is where 3D sensors proved useful, and it's the basis for all sci-fi portrayals of facial recognition (a surface depth map drawn on the face).
In the 2010s, when deep learning became feasible, facial recognition, like the rest of AI, moved to end-to-end neural networks. That's what is used to this day, and it's the first iteration that works pretty much flawlessly regardless of lighting, angle, and whatnot. [1]
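To make the 90s-era PCA approach concrete, here's a minimal sketch in Python with NumPy: project face images onto the top principal components, then match a probe image to its nearest neighbor in that reduced space. Random data stands in for a real face dataset, and the image size and component count are arbitrary choices for illustration.

```python
import numpy as np

# Synthetic stand-in for a face dataset: 100 "images", 32x32 pixels, flattened.
rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 32 * 32))

# Center the data and compute principal components via SVD.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:20]  # keep the top 20 components ("eigenfaces")

# Project the gallery and a probe image into the reduced space,
# then match the probe to its nearest gallery neighbor.
gallery = centered @ components.T        # (100, 20) embeddings
probe = centered[42] @ components.T      # pretend image 42 is the probe
match = np.argmin(np.linalg.norm(gallery - probe, axis=1))
print(match)  # the probe matches itself: 42
```

The deep-learning successors keep the same project-then-nearest-neighbor structure; they just replace the linear projection with a learned embedding network.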
Note about the terms AI, ML, Signal processing:
In any given era:
- whatever data-fitting/function approximation method is the latest one is typically called AI.
- the previous generation one is called ML
- the really old now boring ones are called signal processing
Sometimes the calling-it-ML stage is skipped.
[1] All data-fitting methods are only as good as the data. Most of these were initially trained on Caucasian faces, so many of them were not as good for other people. These days the models deployed by Google Photos and the like of course work for other races as well, but many models don't.
Frankly, I never imagined when I read that decades ago, that it could be underselling the horror.
Most Americans don’t pay for news and don’t think they need to - https://news.ycombinator.com/item?id=46982633 - February 2026
(ProPublica, 404media, APM Marketplace, Associated Press, Vox, Block Club Chicago, Climate Town, Tampa Bay Times, etc get my journalism dollars as well)
I subbed to Wired last year during a sale and uh... I was never given a premium account linked to my email and support would never answer me. I signed up for the print edition as well and never received any of those. I was getting their newsletter though and that was new. Then I emailed to cancel when I got a billing notification to my email and they were able to cancel it just fine so apparently I did have an account? And then like two weeks ago I received the latest print edition.
Truly have no idea what that was about, but anyway glad to see someone else out here supporting almost all the same news orgs as me (404media is amazing!)
They've sold out for years already, maybe decades. Why fund them now when the corruption is out in the open?
AP is really one of the few places I'd even consider donating to at this point.
There's a vast gulf between "Clearview AI was founded by white supremacists" and "Smartcheckr, which later merged with Clearview AI, employed for 3 weeks someone who posted white supremacist content under a pseudonym, unbeknownst to the Clearview AI founders".
In fact, neither the Buzzfeed article nor the NYTimes piece accuse anyone of white supremacy.
Other notable white supremacists with material ties in the article:
Chuck Johnson [1] collaborated with Ton-That and was "in contact about scraping social media platforms for the facial recognition business." He ran a white supremacist site (GotNews) and white supremacist crowdfunding sites.
Douglass Mackey [2], a white supremacist who consulted for the company.
Tyler Bass [3], an employee and member of multiple white supremacist groups and a Unite the Right attendee.
Marko Jukic [4], employee and syndicated author in a publication by white supremacist Richard Spencer.
The article also goes into the much larger ecosystem of AI and facial recognition tech and its ties to white supremacists and the far-right. So there are not just direct ties to Clearview AI itself, but a network of surveillance companies who are ideologically and financially tied to the founders and associates.
[0] https://en.wikipedia.org/wiki/Clearview_AI
[1] https://en.wikipedia.org/wiki/Charles_C._Johnson
[2] https://en.wikipedia.org/wiki/Douglass_Mackey
[3] https://gizmodo.com/creepy-face-recognition-firm-clearview-a...
[4] https://www.motherjones.com/politics/2025/04/clearview-ai-im...
But you wrote that "Clearview AI was founded by white supremacists". Even after your new set of links, this remains unsubstantiated. None of your links allege that the Clearview founders are white supremacists, they make an attempt at guilt by association.
[1] https://img.huffingtonpost.com/asset/5e8cc7922300005600169bd...
For example:
- every technology has false positives. False positives here will mean 4th amendment violations and will add an undue burden on people who share physical characteristics with those in the training data. (This is the updated "fits the description.")
- this technology will predictably be used to enable dragnets in particular areas. Those areas will not necessarily be chosen on any rational basis.
- this is all predictable because we have watched the War on Drugs for 3 generations. We have all seen how it was treated as a tactical, militaristic problem in cities but became a health-concern/addiction problem when enforced in rural areas. There is approximately zero chance this technology becomes the first law enforcement tool that applies laws evenly.
I think that is pretty unlikely
Crimes aren't solved, despite having a literal panopticon. This view is just false.
Cops are choosing to not do their job. Giving them free access to all private information hasn't fixed that.
The thing you're missing is our system is working exactly like it's supposed to for rich people.
For example, Deepseek won't give you critical information about the communist party and Grok won't criticise Elon Musk
The main problem with the law not being applied evenly is structural - how do you get the people tasked with enforcing the law to enforce the law against their own ingroup? "AI" and the surveillance society will not solve this, rather they are making it ten times worse.
>people inevitably respond to one part of your broken framing, and then they're off to the races arguing about nonsense.
I agree that this is unproductive. When people have two very different viewpoints, it is hard for that gap to be bridged. I don't want to lay out my entire worldview and argue from first principles because it would take too much time and I doubt anyone would read it. Call it low effort if you want, but at least discussions don't turn into a recitation of a single belief.
>how do you get the people tasked with enforcing the law to enforce the law against their own ingroup?
Ultimately law enforcement is answerable to the people, so if the people don't want change, it will be hard to achieve. Regarding ingroup preference, it would be worth coming up with ways of auditing cases that are not being looked into and having AI try to find patterns in what is causing that. Summaries of these patterns could be made public, allowing voters and other officials to react to the information and apply needed changes to the system.
You answered your own question - it's straight up bait.
Go lick boots elsewhere.