
Posted by b_mc2 1 day ago

Corporations are trying to hide job openings from US citizens (thehill.com)
591 points | 447 comments
2OEH8eoCRo0 1 day ago|
Didn't Apple use to post job openings in small local newspapers in the Midwest?
cramcgrab 17 hours ago||
We effectively replaced 43 H1Bs with AI. Looking to do more soon.
bdangubic 16 hours ago||
The only requirement for H1B should be that you must get your degree in the United States. H1Bs should not be given out otherwise. It would solve almost all of the current shortcomings of the program.
itake 16 hours ago|
Many H1Bs get master's degrees in the USA because they are lower cost than undergrad, can be done online/remotely, and offer a higher chance of winning the lottery.

I don't think requiring a US degree would impact even half the candidates.

bdangubic 16 hours ago||
Good point. Definitely no online/remote, and you must reside in the US for a minimum of 2 years to qualify.
veunes 11 hours ago||
What's wild is how blatantly some of these tactics skirt the spirit of the law while technically staying within its letter
like_any_other 9 hours ago||
HN beat The Hill to this story by 5 days: https://news.ycombinator.com/item?id=45151086
the_real_cher 10 hours ago||
Has anyone started to think that the tech industry in the USA is going the way of the manufacturing industry?

And by that I mean mostly gone/offshored?

bubblethink 9 hours ago|
That is what everyone is asking for, though. As you can see from the complaints in this thread, people are angry about jobs going to immigrants in the US. The alternative is for jobs to go to immigrants abroad, i.e. offshoring. It appears that people are generally happier with the latter.
mikert89 1 day ago||
There's another thing happening which people haven't really heard much about: ChatGPT Pro is really good at making legal arguments. People who previously would never have filed something like a discrimination lawsuit can now use ChatGPT to understand how to respond to managers' emails and to proactively send emails that point out discrimination in a non-threatening manner, in ways that create legal entrapment. I think people are drastically underestimating what's going to happen over the next 10 years, and how bad the discrimination is in a lot of workplaces.
JumpCrisscross 1 day ago||
> ChatGPT Pro is really good at making legal arguments

It’s good at initiating them. I’ve started to see folks using LLM output directly in legal complaints and it’s frankly a godsend to the other side since blatantly making shit up is usually enough to swing a regulator, judge or arbitrator to dismiss with prejudice.

mikert89 1 day ago|||
Posted my response below; you have no idea how impactful this is going to be.
cindyllm 23 hours ago|||
[dead]
OutOfHere 1 day ago||
That's all well and good, but anyone who does this will likely just be terminated ASAP without cause, possibly as part of a multi-person layoff that makes it appear innocuous.
toomuchtodo 1 day ago|||
First call should be to an employment attorney and the EEOC, no matter what, before you sign anything.

https://www.eeoc.gov/how-file-charge-employment-discriminati...

mikert89 1 day ago|||
That’s not quite right. To win a discrimination case, you typically need to document a pattern of behavior over time—often a year. Most people can’t afford a lawyer to manage that. But if you’re a regular employee, you can use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently. With diligent, organized evidence, you absolutely can build a case; the hard part is proving it, and ChatGPT is great at helping you gather and frame the proof.
JumpCrisscross 1 day ago||
> To win a discrimination case, you typically need to document a pattern of behavior over time—often a year

Where did you hear this?

> use ChatGPT to draft calm, non-threatening Slack messages that note discriminatory incidents and keep doing that consistently

This is terrible advice. It not only makes those messages inadmissible, it casts reasonable doubt on everything else you say.

Using an LLM to take the emotion out of your breadcrumbs is fine. Having it draft generic stuff, or worse, potentially hallucinate, may actually flip liability onto you, particularly if you weren't authorised to disclose the contents of those messages to an outside LLM.

mikert89 1 day ago||
With respect, it seems you haven’t kept up with how people actually use ChatGPT. In discrimination cases—especially disparate treatment—the key is comparing your performance, opportunities, and outcomes against peers: projects assigned, promotions, credit for work, meeting invites, inclusion, and so on. For engineers, that often means concrete signals like PR assignments, review comments, approval times, who gets merges fast, and who’s blocked.

Most employees don’t know what data matters or how to collect it. ChatGPT Pro (GPT-5 Pro) can walk someone through exactly what to track and how to frame it: drafting precise, non-threatening documentation, escalating via well-written emails, and organizing evidence. I first saw this when a seed-stage startup I know lost a wage claim after an employee used ChatGPT to craft highly effective legal emails.

This is the shift: people won’t hire a lawyer to explore “maybe” claims on a $100K tech job—but they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull. On its own, ChatGPT isn’t a lawyer. In the hands of a thoughtful user, though, it’s close to lawyer-level support for spotting issues, building a record, and pushing for a fair outcome. The legal system will feel that impact.

JumpCrisscross 1 day ago||
> they will ask an AI to outline relevant doctrines, show how their facts map to prior cases, and suggest the right records to pull

This is correct usage. Letting it draft notes and letters is not. (Procedural emails, why not.) Essentially, ChatGPT Pro lets one do e-discovery and preliminary drafting to a degree that’s good enough for anything less than a few million dollars.

I’ve worked with startups in San Francisco, where lawyers readily take these cases on contingency because they’re so easy to win. The only times I’ve urged companies to fight back have been recently, because the emails and notes the employee sent were clearly LLM-generated and, in one instance, materially false. In the one case they insisted on pursuing, that let the entire corpus of claims be put under doubt and dismissed. Again, in San Francisco, a notoriously employee-friendly jurisdiction.

I’ve invested in legal AI efforts. I’d be thrilled if their current crop of AIs were my adversary in any case. (I’d also take the bet on ignoring an LLM-drafted complaint more than a written one, lawyer or not.)

mikert89 23 hours ago||
No, I think the big unlock is that a bunch of people who would never file lawsuits can at least approach it. You obviously can’t copy-paste its email output, but you can definitely verify legal terms and how to position certain phrases.
JumpCrisscross 22 hours ago|||
> the big unlock is a bunch of people that would never file lawsuits can at least approach it

Totally agree again. LLMs are great at collating and helping you decide if you have a case and, if so, convincing either a lawyer to take it or your adversary to settle.

Where they backfire is when people use them to send chats or demand letters. You suggested this, and this is the part where I’m pointing out that I am personally familiar with multiple cases where this took a case the person could have won, on contingency, and turned it into one where they couldn’t irrespective of which lawyers they retained.

OutOfHere 23 hours ago|||
The legal system is extremely biased in favor of those who can afford an attorney. Moreover, the more expensive the attorney, the more biased it is in their favor.

It is in effect not a legal system, but a system to keep lawyers and judges in business with intentionally vaguely worded laws and variable interpretations.

mikert89 22 hours ago||
Exactly. And it’s comical that the person I was debating with doesn’t understand this. A proclaimed investor in legal tech misses the biggest use case of AI in legal: providing access to people who can’t afford it or otherwise wouldn’t know to work with a lawyer.
bigyabai 12 hours ago||
Do you have any preliminary statistics that suggest you're right? AI is mature enough that we should be able to track this.
daft_pink 1 day ago||
Essentially, they want to hire a specific person, while the law requires that they post the job and prefer American citizens. So they don’t want American citizens to apply; it’s not that they prefer foreign workers in general, they just have a specific candidate in mind.

I think Trump’s position of forcing companies to pay a substantial fee in exchange for a fast-tracked green card is really the most sensible position, instead of H1B. It should be less than $5 million, but I think if a company had to pay $300k and have limited or no protection against that person quickly finding another job in the United States, then companies would generally prefer American workers in a way that makes economic sense. Talented workers can be acquired for a price, but they should not be kept for peanuts, earning less than an American worker, just because they are stuck with the employer for 20 years if they come from a quota country.

kjkjadksj 1 day ago|
If they had someone specific in mind the usual method is to have their resume next to you when you write up the job app. Make the requirements perfectly match their skills. Now you can say when you picked them that they were the best candidate all along.
daft_pink 19 hours ago|||
I think it’s a lot trickier for the large companies that tend to hire H1B visa holders to do this, because a manager would need to convince the HR department to violate the law, and the company might be concerned that the risks involved are not worth it if enough candidates apply.

Plus, there seems to be some indicator that the job you are applying to is an H1B position, and they are posting them on sites for Americans to apply to as well. So it’s not hard to imagine a bunch of highly qualified ideologues applying to jobs they never wanted in the first place and reporting them to the government when they get rejected.

It doesn’t seem like a good idea to try and manipulate the system with the current government’s willingness to go after companies.

If they’ll go after a US ally like Hyundai for using ESTA under the VWP illegally, when Hyundai could probably have easily applied for and been granted B-1 visas, can you imagine what they would do to a company illegally sponsoring H1B visas?

prasadjoglekar 22 hours ago|||
That's one of several tactics. But if someone did apply and was close enough, you'd still have to do the interview-and-reject song and dance. Better to deter applications in the first place.
tamimio 19 hours ago||
That's the real reason for the job market crisis; it's not AI, it's corporate greed wanting borderline slaves to lower wages and workers willing to work extra hours for peanuts. AI is just the scapegoat: it's easy to blame something that's still new while also milking investors' money by promising how it will reduce costs and increase profits. If the job market crisis were really from AI, not only should it have happened within a few years of adopting such new tech, but we should also see its impact first on other industries like lawyers, medical doctors, and administrators, and only lastly on tech workers, not the other way around.

That's why I keep saying and repeating: the tech industry, and especially the engineering side of it, should be further regulated and restricted just like other professions out there; otherwise, you are only allowing anyone to scam and game the system in any bubble currently happening.

temptemptemp111 15 hours ago|
[dead]