I gave in and verified. Persona was the vendor then too. Their web app required me to look straight ahead into my camera, then turn my head to the left and right. To me it felt like a blatant data collection scheme rather than something that provides security. I couldn't find anyone talking about this online at the time.
I ended up finding a job through my LinkedIn network that I don't think I could have found any other way. I don't know if it was worth getting "verified".
---
Related: something else I find weird. After the LinkedIn verification incident, my family went to Europe. When we returned to the US, the immigration agent had my wife and me look into a webcam, then greeted us both by name without handling our passports. He had to ask for the passport of our 7-month-old son. They clearly have some kind of facial recognition software. Where did they get the data for that? I am not enrolled in Global Entry or TSA PreCheck. I doubt my passport photo alone is enough data for facial recognition.
It's not. The developer bubble we're in here on HN is vanishingly small compared to real life. And normies are not only perfectly happy uploading all their PII to Persona - they won't even understand what's wrong with it.
There has also been a backlash against verification in other communities like Reddit (also a bubble), mainly stemming from Discord's recent announcement.
The discourse is good, and while I wish every user and potential user understood all the pros, cons, and ramifications, I'm also happy we are finally talking about it in our bubbles.
The need / demand for some verification system might be growing, though, as I've heard fraudulent job applications (people applying for jobs using fake identities... for whatever reason) are a growing trend.
The OP is right. For that reason we started migrating all of our cloud-based services out of the USA into EU data centers run by EU companies. We are basically 80% there. The remaining 20% are not the difficult ones - they are just not important enough to care much about at this point, but the long-term intention is a 100% disconnect.
On IDV (identity verification) security:
When you send your document to an IDV company (be that in the USA or elsewhere), they do not have the automatic right to train on your data without explicit consent. There have been a few pretty big class-action lawsuits in the past around this, but I also believe the legal frameworks are simply not strong enough to deter abuse or negligence.
That being said, everyone reading this must realise that with large datasets it is very likely in practice that data gets mislabeled, and it is hard to prove this is not happening at scale. At the end of the day it will be a query running against a database, and at huge volumes it might catch more than it should. Once the data is selected for training and trained on, it is impossible to undo the damage. You can of course delete the training artefact after the fact, but the model's weights have already been re-balanced by that data unless you retrain from scratch, which nobody does.
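To make the mislabeling point concrete, here is a minimal sketch in Python (entirely hypothetical schema and names - Document, consent_to_train, etc. - not any vendor's actual pipeline) of how a consent filter silently over-selects when a single flag is wrong:

    from dataclasses import dataclass

    @dataclass
    class Document:
        user_id: str
        consent_to_train: bool  # set by an upstream labeling pipeline
        payload: bytes

    # Upstream mislabeling: "bob" never consented, but the flag says he did.
    records = [
        Document("alice", True, b"..."),
        Document("bob", True, b"..."),  # mislabeled: should be False
        Document("carol", False, b"..."),
    ]

    # The selection query only sees the flag, not the ground truth.
    training_set = [d for d in records if d.consent_to_train]

    print([d.user_id for d in training_set])  # ['alice', 'bob']

Once "bob" is in the training set and a model has been trained on it, deleting his record afterwards does not remove his influence from the weights.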
I think everyone should assume that their data - be that source code, biometrics, or whatever - is already being used for training without consent, and that we don't have the legal frameworks to protect against such actions - in fact, we have the opposite. The only control you have is not to participate.