Posted by b_mc2 9/12/2025
But it's the height of stupidity to employ ICE "thugs" to hunt down and round up poor laborers doing jobs that most Americans don't want to do, while letting big companies hire lots of foreigners on H1Bs for SWE jobs, while at the same time you have Americans graduating from college and unable to find jobs.
The US should get rid of ICE and drop the H1B program altogether (maybe with some narrow exceptions, and I'm not even sure about that). For exceptionally talented people wanting to work in the US there are the EB1 and EB2 programs. That would both largely solve the "illegals are taking our jobs!" problem and stop us acting like some 3rd world police state with masked police behaving like the Stasi.
I know I'm out here in my own space capsule, but it seems like a non sequitur. Again, perhaps this is my own bias speaking, but wouldn't you prefer to solve your own business problems as an entrepreneur, rather than battle to be employed by someone who intends to screw you, just for the privilege of solving business problems for them? In both cases you have problems, but only one gives you autonomy.
Alternatively, you might look towards employers who want you and do not desire to screw you.
That's why I keep saying it: the tech industry, and engineering in particular, should be further regulated and restricted just like other professions. Otherwise you're letting anyone scam and game the system, especially during a potential bubble like the current one.
I don't think requiring a US degree would impact even half the candidates.
https://www.eeoc.gov/how-file-charge-employment-discriminati...
Where did you hear this?
> use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently
This is terrible advice. It not only risks making those messages inadmissible, it casts doubt on everything else you say.
Using an LLM to take the emotion out of your breadcrumbs is fine. Having it draft generic stuff, or worse, hallucinate, may actually flip liability onto you, particularly if you weren't authorised to disclose the contents of those messages to an outside LLM in the first place.
Most employees don’t know what data matters or how to collect it. ChatGPT Pro (GPT-5 Pro) can walk someone through exactly what to track and how to frame it: drafting precise, non-threatening documentation, escalating via well-written emails, and organizing evidence. I first saw this when a seed-stage startup I know lost a wage claim after an employee used ChatGPT to craft highly effective legal emails.
This is the shift: people won’t hire a lawyer to explore “maybe” claims on a $100K tech job—but they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull. On its own, ChatGPT isn’t a lawyer. In the hands of a thoughtful user, though, it’s close to lawyer-level support for spotting issues, building a record, and pushing for a fair outcome. The legal system will feel that impact.
This is correct usage. Letting it draft notes and letters is not. (Procedural emails, why not.) Essentially, ChatGPT Pro lets one do e-discovery and preliminary drafting to a degree that’s good enough for anything less than a few million dollars.
I’ve worked with startups in San Francisco, where lawyers readily take these cases on contingency because they’re so easy to win. The only times I’ve urged companies to fight back have been recently, because the emails and notes the employee sent were clearly LLM-generated, and in one instance materially false. In the one case the company insisted on pursuing, that cast doubt on the entire corpus of claims, which was dismissed. Again, in San Francisco, a notoriously employee-friendly jurisdiction.
I’ve invested in legal AI efforts. I’d be thrilled if their current crop of AIs were my adversary in any case. (I’d also take the bet on ignoring an LLM-drafted complaint over a human-written one, lawyer or not.)
Totally agree again. LLMs are great at collating and helping you decide if you have a case and, if so, convincing either a lawyer to take it or your adversary to settle.
Where they backfire is when people use them to send chats or demand letters. That’s the part you suggested, and it's where I’m pointing out that I’m personally familiar with multiple cases where doing so took a case the person could have won on contingency and turned it into one they couldn’t win regardless of which lawyers they retained.
It is in effect not a legal system, but a system to keep lawyers and judges in business with intentionally vaguely worded laws and variable interpretations.
It’s good at initiating them. I’ve started to see folks use LLM output directly in legal complaints, and it’s frankly a godsend to the other side, since blatantly making shit up is usually enough to swing a regulator, judge or arbitrator toward dismissal with prejudice.