Posted by yusufaytas 11/1/2025

AI Broke Interviews(yusufaytas.com)
88 points | 126 comments
neilv 11/1/2025|
> Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with the LeetCode style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers.

Before Google, AFAIK, it was ad hoc, among good programmers. I only ever saw people talking with people about what they'd worked on, and about the company.

(And I heard that Microsoft sometimes did massive-ego interviews early on, but fortunately most smart people didn't mimic that.)

Keep in mind, though, that this was before programming was a big-money career. So you had people who were really enthusiastic, and people for whom it was just a decent office job. People who wanted to make lots of money went into medicine, law, or finance.

As soon as big money was on the table for software, and word got out about how Google (founded by people with no prior industry experience) interviewed... we got undergrads prepping for interviews. Which was a new thing, and my impression was that the only people who would need to prep for interviews either weren't good, or were some kind of scammer. But then eventually those students, who had no awareness of anything else, thought that this was normal, and now so many companies just blindly do it.

If we could just make some other profession the easier path to big money, maybe only people who are genuinely enthusiastic would be interviewing. And we could interview like adults, instead of like teenagers pledging a frat.

danpalmer 11/1/2025||
Prepping for interviews has been a big deal forever in most other industries though. It's considered normal to read extensively about a company, understand their business, sales strategies, finances, things like that, for any sort of business role.

I think tech is and was an exception here.

makeitdouble 11/1/2025|||
What you're describing sounds to me like just caring about the place where we'll be spending half a decade or more, and which will have the most impact on our health, finances, and social life.

I'd advise anyone to read the available financial reports on any company they're intending to join, except if it's an internship. You'll spend hours interviewing and years dealing with these people; you might as well take an hour or two to figure out whether the company is sinking, or a scam, in the first place.

nunez 11/2/2025|||
Why should anyone do that in a world where fundamentals don't make sense? Yes, knowing how the company makes money is important (though that often is incomplete or unclear from what's publicly available), but knowing their 10-K or earnings reports? Too much.
throwaway98797 11/1/2025|||
kinda silly given the inability of most people to infer anything substantial from finances and marketing copy

really, company reviews are all that matters, and even that has limited value since your life is determined by your manager

best you can do is suss out how your interviewers are faring

are they happy? are they stressed? everything else has so much noise it's worse than worthless

GrumpyYoungMan 11/2/2025|||
"Is the company consistently profitable or not?" and "Are revenue and profits growing over time, stable, or declining?" are very important questions to answer, particularly if stock grants are part of the compensation package.

For developers who work on products, getting a sense of whether the product of the team you'd be joining is a core part of the business versus speculative (i.e. stable vs likely to have layoffs) and how successful the product is in the marketplace (teams for products that are failing also are likely to be victims of layoffs) are also very important to understand.

nunez 11/2/2025||
So many ways to juice those numbers though.

And if your team is far from the money, what often matters much much more is how much political capital your skip level manager has and to what extent it can be deployed when the company needs to re-org or cut. Shoot, this can matter even if you're close to the money (if you're joining a team that's in the critical path of the profit center vs a capex moonshot project funded by said profit center).

This is one thing I really like about sales engineering. Sales orgs are (relatively) very low-BS politically.

nradov 11/2/2025|||
It matters a lot whether the organization is growing. If you get assigned to a toxic manager in a static organization then you're likely to be stuck there indefinitely. In a growing organization there will be opportunities to move up and out to other internal teams.
nunez 11/2/2025||||
I remember being told this during my interview prep classes in college in 2008. Interviewing was so much more formal even then (in the NYC area): business casual attire, (p)leather-bound resume folios, expectations of knowing the company, etc. I definitely don't miss any of that nonsense.
neilv 11/2/2025||
I recently showed up to a new startup interview, with a similar folio (with printed copies of my resume, and a tablet of graph paper), and it paid off.

I was in a small conference room with the two co-founders, and one of them hadn't seen my resume, and was trying to read it on his phone while we were talking.

Bam. I whipped out printed copies for both of them, from my interview folio.

neilv 11/2/2025||||
It was good standard advice even for programmers to know at least a little about the company going in. And you should avoid typos and spellos on your resume.

But no "prep" like months of LeetCode grinding, memorizing the "design interview" recital, practicing all the tips for disingenuous "behavioral" passing, etc.

MikeNotThePope 11/2/2025|||
I’m glad civil engineers can’t vibe build a dam.
stavros 11/2/2025||
I'm not glad people here can't avoid injecting casual AI-hate soundbites everywhere. Is there any particular objection you have, or just "AI bad"?
ThrowawayR2 11/2/2025|||
IIRC Google had an even higher bar in their early days: candidates had to submit a transcript showing a very high GPA and they usually hired people only from universities with elite CS programs. No way to prep for that.

They only gave it up years later when it became clear even to them it wasn't benefiting them.

neilv 11/2/2025||
> IIRC Google had an even higher bar in their early days: candidates had to submit a transcript showing a very high GPA and they usually hired people only from universities with elite CS programs.

Which sounds like a classic misconception of people with no experience outside of a fancy university echo chamber (many students and professors).

Much like Google's "how much do you remember from first-year CS 101 classes" interviews that coincidentally looked like maybe (among my theories) they were trying to make a metric that matches... (surprise!) a student with a high GPA at a fancy university.

Which is not very objective, nor very relevant. Even before the entire field shifted its basic education to help job-seekers game this company's metric.

makeitdouble 11/1/2025|||
> many companies just blindly do it.

Yes. A while ago a company contacted me to interview, and after the first "casual" round they told me their standard process was to go full leetcode in the second round, and I was advised to prepare for that if I was interested in going further.

While that's the only company that was so upfront about it, most accept that leetcode is dumb (it needs prep even from a working engineer) and still base the core of their technical interview on it.

dwohnitmok 11/2/2025|||
> And we could interview like adults, instead of like teenagers pledging a frat.

I think you're viewing the "good old days" of interviewing through the lens of nostalgia. Old school interviewing from decades ago or even more recently was significantly more similar to pledging to a frat than modern interviews.

> people who are genuinely enthusiastic

This seems absurdly difficult to measure well and gameable in its own way.

The flip side of "ad hoc" interviewing as you put it was an enormous amount of capriciousness. Being personable could count for a lot (being personable in front of programmers is definitely a different flavor of personable in front of frat bros, but it's just a different flavor is all). Pressure interviews were fairly common, where you would intentionally put the candidate in a stressful situation. Interview rubrics could be nonexistent. For all the cognitive biases present in today's interview process, older interviews were rife with much more.

If you try to systematize the interview process and make it more rigorous you inevitably make a system that is amenable to pre-interview preparation. If you forgo that you end up with a wildly capricious interview system.

Of course, you rarely have absolutes. Even the most rigorous modern interview systems often still have capriciousness in them, and there was still some measure of rigor in old interview styles.

But let's not forget all the pain and problems of the old style of interviews.

jjav 11/2/2025|||
> I think you're viewing the "good old days" of interviewing through the lens of nostalgia. Old school interviewing from decades ago or even more recently was significantly more similar to pledging to a frat than modern interviews.

Yeah, no, not at all. Interviewing in the 90s was just a cool chat between hackers. What interesting stuff have you built, let's talk about it. None of the confrontational leetcode nonsense of later years.

I still refuse to participate in that nonsense, so I'll never make people go through such interviews. I've only hired two awesome people this year, so less than a drop in the bucket, but I'll continue to do what I can to keep some sanity in the interviewing in this industry.

neilv 11/2/2025||||
> > people who are genuinely enthusiastic

> This seems absurdly difficult to measure well and gameable in its own way.

True, and it is gamed currently (some prep books tell you to feign enthusiasm).

But let's whimsically say that the hypothetical of software development no longer being the go-to easy lots-of-money career meant that the gaming people would go to some other field instead, leaving you with only the people who really want to do this job.

Esophagus4 11/2/2025||||
Absolutely.

The number of times I've seen a "do you want to have a beer with them?" test in lieu of a simple programming exam is horrifying. (And it showed in the level of talent they hired.)

Fortunately, most of those have been left by the wayside, roadkill of history.

Because that is really the alternative if we don’t have rigorous, systematic technical interviews: cognitive bias and gut-feel decisions. Both of which are antithetical to high performing environments.

neilv 11/2/2025||
False choice.

The reality of these "rigorous, systematic technical interviews" is that we have a ton of companies doing nonsense theatre that isn't actually about "fundamentals", is also easily biased (as even some purported Google interviewers have admitted on HN), and has almost nothing to do with how effective a software engineer will be (as even Google's own stats show).

Esophagus4 11/2/2025||
There is no alternative.

So you may not think they’re predictive of success, but you should see how much less predictive everything else is.

Hiring is always a risk. It will never be a perfect science.

That’s why it’s important to have a quick off-ramp for those who aren’t working out.

Edit: BTW, where did you see Google saying their interview process doesn't work? Other than a few anonymous devs venting on HN, the company still uses coding interviews as critical to their process. You will always find a few complainers, but the fact that one of the world's top software shops still uses it says what needs to be said.

neilv 11/2/2025||
I've been doing this awhile, and I've seen hiring work really well without the LeetCode grilling.

And I've almost always seen the LeetCode grilling be administered by someone who doesn't know what they're doing (and often also tainted with ego, despite the strange claim of some that a LeetCode grilling is objective).

That said, if you're sourcing random people, good luck, it's a flood of LeetCode gamers to wade through, and too much of your staff interviewing them might also be LeetCode gamers with no experience doing non-LeetCode interviews.

> Edit: BTW, where did you see Google saying their interview process doesn’t work?

I have a bunch of notes on Google hiring I'd have to dig through, but the first link at hand is this retrospective by a hiring committee person who left (I have a note that 8m50s is a funny story of the hiring committee realizing that they would've rejected their own packets): https://www.youtube.com/watch?v=r8RxkpUvxK0 (IIRC, I don't agree with all the beliefs he still holds, but he calls out a lot of problems they found.)

Esophagus4 11/4/2025||
I watched the video, and he makes some fair points... but like, I don't think he's proven what he aims to prove.

He said the interview scores weren't predictive of performance at Google, but his data is highly biased: they didn't hire people with cumulatively low scores. So I could easily imagine a world where, let's say, some of the people with good scores were successful and some weren't, but ALL of the people with bad scores would not have been successful. We just can't see whether the low scorers would have performed well, because they weren't hired.

I'll note that my process doesn't give numeric scores, I give only written feedback with a hire / pass decision, which is discussed by a committee.

Re: not being able to pass our own hiring process - I've felt the same at my company, but isn't that a good thing? That means I'm doing my part to hire people even better than me, which is good for my team and the company. I'm raising the bar. A Players hire A Players.

nradov 11/2/2025|||
Being personable does count for a lot in any role that involves teamwork. Certain teams can maybe accommodate one member whose technical skills make up for bad interpersonal skills as a special exception, but one is the limit.
nunez 11/2/2025||
Casual interviews definitely still exist, though the companies those jobs are attached to are typically not tech and pay less.

Consulting positions also don't have much leetcode BS. These have always focused much more on practical experience. They also pay less than Staff+ roles at FAANGs.

Esophagus4 11/2/2025||
Most of the consulting positions I’m aware of also hire a far lower talent caliber than FAANG, so they need bodies, not top talent.

I have worked at a company that had a casual, experience-based, conversational interview. The engineering there was atrocious, and I left as soon as I could.

If you can talk your way into a position, that says a lot about the level of talent there. Top talent wants to work at a place that has a transparent, merit-based performance bar, where you can't smooth-talk your way into a job.

czep 11/3/2025||
If you’re referring to body shop consulting agencies this may be true, but IME as an IC consultant in DS/ML, my rates are well above Staff+ at FAANG, and nobody has ever tried to leetcode interview me. Yes, I have to do a lot of smooth talking, but performance is extremely transparent: if I don’t deliver, I don’t get paid. Honestly I doubt I could pass a leetcode style interview, and I’m glad I don’t have to do that anymore.
nunez 11/6/2025||
That and there are high-quality boutique consulting firms that are happy to pay big bucks for super skilled people
captainkrtek 11/1/2025||
I’ve conducted about 60 interviews this year, and have spotted a lot of AI usage.

At first I was quite concerned, then I realized that in nearly all cases I’d spotted usage, a pattern stood out.

Of the folks I spotted, all spoke far too clearly and linearly when it came to problem solving. No self-doubt, no suggestion of different approaches, no appearance of thought, just a clear A->B solution. Then, because they often didn't ask any requirements questions beyond what I initially asked, the solution would be inadequate.

The opinion I came to is that even in the best pre-AI-era interviews I conducted, most engineers contemplated ideas, changed their minds, asked clarifying questions. Folks mindlessly using AI don't do this and instead just treat me as the prompt input and repeat it back. Whether they were using AI or not, I'll ultimately never know, but they still fail to meet my bar.

Sure, some more clever folks will mix or limit their LLM usage and get past me, but oh well.

amrocha 11/2/2025||
The real problem will come in 5 years, when current university students having their brains melted by AI somehow luck into entry-level positions but can't ever get to senior level, because they're too reliant on AI and literally don't know how to think for themselves. There will never again be as many senior engineers as there are today. There won't be any good engineers left to hire.

Look around you. 15 years ago we didn't have smartphones, and now kids are so addicted to them they're giving themselves anxiety and depression. Not just kids, but kids have it the worst. You know it's gonna be even worse with AI.

floundy 11/2/2025||
Most departments at companies run on zero to two good engineers anyway. The rest are personality and nepotism hires limping along some half-baked project or sustainment effort.

Most people in my engineering program didn’t deserve their engineering degrees. Where do you think all these people go? Most of them get engineering jobs.

amrocha 11/2/2025||
I’m gonna assume you’re being facetious here. I’ve been in tech for 15 years and I’ve never met a “nepotism hire”. Most of my coworkers have been incredible people.

But in case you’re serious, there’s an old saying that says if everywhere you go smells like shit maybe it’s time to check your shoes.

floundy 11/3/2025||
I don’t work in tech.
amrocha 11/3/2025||
Why are you commenting on a post about technical interviews for engineers then
floundy 11/3/2025||
Because I'm an engineer, who both does and sits for technical interviews.
amrocha 11/3/2025||
So you do work in tech, gotcha
floundy 11/4/2025||
Sheesh, are you a Redditor?
amrocha 11/5/2025||
If you work with computers as an engineer you work in tech, I don’t make the rules sorry
Freedom2 11/1/2025|||
> most engineers contemplate ideas, change their mind, ask clarifying questions

I don't disagree at all. I find it slightly funny that in my experience interviewing for FAANG and YC startups, the signs you mentioned would be seen as "red flags". And that's not just my assumption; when I asked for feedback on the interview, I multiple times received feedback along the lines of "candidate showed hesitation and indecision with their choice of solution".

happyopossum 11/2/2025|||
I work for a FAANG, have done interview training and numerous interviews. We are explicitly trained that candidates should be asking questions, second guess themselves etc.
saulpw 11/2/2025||||
Hotshot FAANG and YC startups don't want humans, they want zipheads[0].

[0] https://www.urbandictionary.com/define.php?term=Ziphead

captainkrtek 11/2/2025|||
Yeah, that is definitely subject to the interviewer's opinion and maybe the company culture. To me, question asking is a great thing, though the candidate eventually needs to start solving.
DenisM 11/1/2025|||
I interviewed a guy in person and he paused for 5 seconds, then wrote a perfect solution. I tried making the problem more and more complicated and he nailed it anyway, also after a brief pause. We were done in half the time.

Maybe he just memorized the solution, I don’t know.

Would you fail that guy?

captainkrtek 11/2/2025|||
It depends, I had some interviews like this that I suspected. For context, most of the interviews I conduct are technical design related where we have a discussion, less coding. So in those it is quite open ended where we will go, and there are many reasonable solutions.

In those cases where I’ve seen that level of performance, there have been (one or more of):

- Audio/video glitches.

- candidate pausing frequently after each question, no words, then sudden clarity and fluency on the problem.

- candidate often suggests multiple specific ideas/points to each question I ask.

- I can often see their eyes reading back and forth (note: if you use AI in an interview, maybe don't use a 4K webcam).

- way too much specificity when I didn't ask for it. For example, the topic of profiling a Go application came up, and the candidate suggested we use go tool pprof with a few specific arguments that weren't relevant; later I found the same exact example commands verbatim in the documentation.

In all, the impression I come away with in those types of interviews is that they performed “too well” in an uncanny way.

I worked for AWS for a long time and did a couple hundred interviews there, the best candidates I interviewed were distinctly different in how they solved problems, how they communicated, in ways that reading from an llm response can’t resemble.

DenisM 11/2/2025||
The point is that I interviewed the guy in person and he nailed it 200%. If you interviewed him online, you would likely come to the conclusion he's a fake, per the criteria you specified, wouldn't you?
captainkrtek 11/2/2025||
It’s not a rubric I’m checking off for interviews. And in person it’s more straightforward to assess a candidate than questioning whether they are using any aids over video... what’s your point?
anon_e-moose 11/2/2025||
He made the point clearly, stop dodging the question...
captainkrtek 11/2/2025||
Wasn’t trying to dodge, I misunderstood the premise.

If this was in person, then no, I likely wouldn’t fail them. However, in all the in-person interviews I’ve conducted, I’ve never seen that even from the best candidates; that’s why I also find it odd over video.

onionisafruit 11/2/2025|||
I might hire him, but I would insist he clock out for his 5-second pauses. We can’t have him wasting company time like that.
UltraSane 11/2/2025||
you pay devs hourly?
jjav 11/2/2025||
Apparently by the second. Don't blink too often.
onionisafruit 11/2/2025||
I’m running a high precision outfit over here ya know
Yossarrian22 11/2/2025||
Only Type A run through walls folk
ekropotin 11/1/2025|||
Jumping straight to the optimal solution may also indicate that the candidate has seen the problem before.
captainkrtek 11/2/2025||
The funny thing is, they haven’t. They often jump to a solution that is lacking in many ways, because it barely addresses the few inputs I gave (since they asked no follow-ups, even when I suggested they ask for more requirements).
tavavex 11/1/2025||
Can I ask - out of the 60 interviews, roughly how many times did you suspect AI usage?
captainkrtek 11/2/2025||
Probably about 10 or so.
artyom 11/1/2025||
The article somewhat implies that, before AI, the leetcode/brainteaser/behavioral interview process had acceptable results.

The reality is that AI just blew up something that was a pile of garbage, and the result is exactly what you'd expect.

We all treat interviews in this industry as a human resources problem, when in reality it is an engineering problem.

The people with the skills to assess technical competency are even scarcer than actual engineers (because they would have to be engineers who also have the people skills for interviewing), and those people are usually far too busy to be bothered with what is (again) perceived as a human resources problem.

Then the rest is just random HR personnel pretending that they know what they're talking about. AI just exposed (even more) how incompetent they are.

bluGill 11/2/2025|
The results did filter out a few people who could not think.

I recently interviewed someone who was a senior engineer on the space shuttle, but managed a call center after that. Whether this person could still write code was a question we couldn't figure out, and so we had to pass. (We can't prove it, but we think we ended up instead with someone who outsourced the work elsewhere - though at least that person could code if needed, as proven by the interview.)

cudgy 11/2/2025||
Ageism
CableNinja 11/2/2025|||
I'd hardly call this ageism. The person went from being part of engineering on a major spacefaring project to managing a call center. That's like going back to zero on the career ladder, as far as engineering is concerned. I would also have questioned whether their skills had collected dust and were still relevant, and most specifically why they went from engineering in aerospace to managing a call center, and why they want back into engineering again (probably hates the call center).
bluGill 11/2/2025||||
We have interviewed and hired plenty of people even older (age is not something ever known/discussed, and it's illegal to factor in - but it isn't hard to make a good guess anyway).

A "senior engineer" could be a project manager who never wrote code.

I remember this one because it is one of the few 'no's I have given where it wasn't proved the person would be bad at the job. Normally the no-hire signal is because the person would obviously be bad.

itronitron 11/2/2025||
Why, after interviewing them, were you unable to figure out if this person can still code?
bluGill 11/2/2025||
Because we didn't ask the right questions. We've since changed the process to require certain questions. That isn't perfect either, but we don't get months to interview someone.
stavros 11/2/2025|||
The person was 23.
habosa 11/1/2025||
For our coding interviews we encourage people to use whatever tools they want. Cursor, Claude, none, doesn’t matter.

What I’m looking for is strong thinking and problem solving. Sometimes someone uses AI to sort of parallelize their brain, and I’m impressed. Others show me their aptitude without any advanced tools at all.

What I can’t stand is the lazy AI candidates. People who I know can code, asking Claude to write a function that does something completely trivial and then saying literally nothing in the 30 seconds that it “thinks”. They’re just not trying. They’re not leveraging anything, they’re outsourcing. It’s just so sad to see how quick people are to be lazy; to me it’s like ordering food delivery from the place under your building.

sega_sai 11/1/2025||
I am teaching a coding class, and we had to switch to in person interview/viva assessment about the code written by students, to deal with AI written code. It works, but it requires a lot of extra effort on our side. I don't know if it is sustainable...
xandrius 11/1/2025|
Why wouldn't something like this work?

1. Get students to work on a more complex than usual project (in relation to their previous peers). Let them use whatever they want and let them know that AI is fine.

2. Make them come in for a physical exam where they answer questions about the why of the decisions they made during the project.

And that's it? I believe that if you can a) produce a fully working project meeting all functional requirements, and b) argue about its design with expertise, you pass. Do it with AI or not.

Are we interested in supporting people who can design something and create it, or just in having students who must follow the whims of professors who are unhappy that their studies looked different?

sega_sai 11/1/2025|||
A project doesn't quite work for my course, as we teach different techniques and would like to see knowledge of each of them.

But yes, we currently allow students to use AI provided their solution works and they can explain it. We just discourage using AI to generate the full solution to each problem.

hahajk 11/1/2025|||
If I read your suggestion correctly, you're saying the exam is basically a board where students explain the decision making behind their code. That sounds great in theory, but in practice it would be very hard to grade. Or at least, how could someone fail? If you let them use AI, you can't really fault them for not understanding the code, can you? Unless you teach the course to 1. use AI and then 2. verify. And step 2 requires an understanding of coding and the experience to recognize bad architecture. Which requires you to think through a problem without the AI telling you the answer.
Atotalnoob 11/2/2025|||
If you grade on pass/fail it’s easy to grade. Not every course uses letter grades…

If you let people use AI they are still accountable for the code written under their name. If they can’t look at the code and explain what it’s doing, that’s not demonstrating understanding.

xandrius 11/2/2025|||
Yep, you can fault them for not understanding it.

Exactly the same as in professional environments: you can use LLMs for your code but you've got to stand behind whatever you submit. You can of course use something like cursor and let it go free, not understanding a thing of the result, or you can step-by-step do changes with AI and try to understand the why.

I believe if teachers relaxed their emotions a bit and adapted their grading system (while also increasing the expected learning outcomes), we would see students who are trained to understand the pitfalls of LLMs and how to maximise getting the most out of them.

alyxya 11/2/2025||
Interviews are fundamentally really difficult to get right. On one side, you could try to create the best, fairest standardized interview process based on certain metrics, but people will eventually optimize for how well they can do on the standardized interview, making it less effective. On the other side, you could create a customized ad hoc interview to try to learn as much about the candidate as possible, and have them do a work trial for a few days to ensure they're the right candidate, but this takes a ton of time and effort from both the company and the candidate.

I personally think the best interview format is the candidate doing a take-home project and giving a presentation on it. It feels like the most comprehensive yet minimal way to assess a candidate on a variety of metrics: it tests coding ability in the project, real system design rather than hypothetical, communication skills, and depth of understanding of the project when the interviewer asks follow-up questions. It would be difficult to cheat this with AI, since you would need a solid understanding of the whole project for the presentation.

nunez 11/2/2025||
If companies are going back to physical onsites but are using remote interviewers, then maybe it makes more sense to have interview centers. They'd be like testing centers --- device lockers, multiple cameras, nearby proctor, shitty desktops from the 2010s with even worse keyboards --- but just for interviews.
cudgy 11/2/2025|
Ironically, these centers will use AI to interview.
qiu3344 11/2/2025||
Companies being forced to overhaul their interview processes is certainly an unexpected side effect of the rise of LLMs.

On the other hand, encouraging employees to adopt "AI" in their workflows, while at the same time banning "AI" on interviews, seems a bit hypocritical - at least from my perspective. One might argue that this is about dishonesty, and yes, I agree. However, AI-centric companies apparently include AI usage in employee KPIs, so I'm not sure how much they value the raw/non-augmented skill-set of their individual workers.

Of course, in all other cases, not disclosing AI usage is quite a dick move.

ponector 11/2/2025|
>> seems a bit hypocritical

Companies always are.

It's okay for companies to use AI in the recruitment process, but not for the candidates.

It's okay to lay off people to cut costs, but not okay to say you are looking for a new job to get a higher salary.

dagmx 11/2/2025||
I’ve mentioned it before, but it’s not just that people “cheat” during interviews with an LLM…it’s that they have atrophied a lot of their basic skills because they’ve become dependent on it.

Honestly, the only ways around it for me are

1. Have in person interviews on a whiteboard. Pseudocode is okay.

2. Find questions that trip up LLMs. I’m lucky because my specific domain is one that LLMs are really bad at, because we deal with hierarchical and temporal data. They’re easy for a human, but the multidimensional complexity trips up every LLM I’ve tried.

3. Prepare edge cases that require the candidate to reconsider their initial approach. LLMs are pretty obvious when they throw out things wholesale

staticautomatic 11/2/2025|
Rather than trying to trip up the LLM I find it’s much easier to ask about something esoteric that the LLM would know but a normal person wouldn’t.
dagmx 11/2/2025||
That basically amounts to the same thing. LLMs are pretty good at faking responses to conversational questions.
mtneglZ 11/1/2025|
I still think how many golf balls fit in a 747 is a good interview question. No one needs to give me a number, but someone could really wow me by outlining a real plan to estimate this: tell me how you would subcontract estimating the size of the golf ball and the plane. It's not about a right or wrong answer but about explaining to me how you think. I do software and hardware interviews and always did them in person so we can focus on how a candidate thinks. You can answer every question wrong in my interview but still be above the bar because of how you show me you can think.
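As a back-of-envelope sketch of the kind of reasoning that question is after (every figure below is a rough assumption I'm plugging in for illustration, not a spec):

```python
import math

# All numbers here are rough assumptions for a Fermi estimate.
ball_diameter_m = 0.043              # a golf ball is about 4.3 cm across
ball_volume_m3 = (4 / 3) * math.pi * (ball_diameter_m / 2) ** 3

plane_volume_m3 = 1000.0             # usable 747 interior: order of 1e3 m^3
packing_fraction = 0.64              # random close packing of equal spheres

balls = plane_volume_m3 * packing_fraction / ball_volume_m3
print(f"roughly {balls:.1e} golf balls")  # order of ten million
```

The point isn't the final number; it's that the candidate decomposes the problem (ball volume, usable cabin volume, packing efficiency) and knows each input only needs to be right to within an order of magnitude.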
pdpi 11/2/2025||
Some of the best hires I’ve ever made would’ve tanked that sort of interview question. Being able to efficiently work through those puzzles is probably decent positive signal, but failure tells me next to nothing, and a question that can fail to give me signal is a question that wastes valuable time — both mine and theirs.

A format I was fond of when I was interviewing more was asking candidates to pick a topic — any topic, from their favourite data structure to their favourite game or recipe — and explain it to me. I gave the absolute best programmer I ever interviewed a “don’t hire” recommendation, because I could barely keep up with her explanation of something I was actually familiar with, even though I repeatedly asked her to approach it as if explaining it to a layperson.

cudgy 11/2/2025||
So you gave up on the best programmer you ever interviewed, because they weren’t able to perform a single secondary task satisfactorily?
pdpi 11/2/2025|||
First, the average quality of candidates we were getting was pretty good. She stood out, and definitely gave a memorable performance on the technical level, but it wasn't some colossal blow to our org that we didn't make the hire.

Second, she wasn't interviewing for a "money goes in, code comes out" code monkey-type role. Whoever took that role was expected to communicate with a bunch of people.

Third, the ask was "explain this to a layperson", her performance was "a senior technical person can barely keep up". It wasn't a matter of not performing satisfactorily, it was a matter of completely failing. I really liked her as a candidate, I wanted to make the hire happen, and I'm cautious about interview nerves messing with people, so I really tried to steer the conversation in a direction she could succeed, but she just wouldn't follow.

CableNinja 11/2/2025|||
Being able to explain a thing to someone non-technical is an important social requirement. If you have to explain a problem or project to a C-level and you go off the rails with technical stuff, or get deep in the weeds of some part of it without being asked, you're going to get deer stares and no one in the room is going to understand you. Similarly, if you, as an engineer, go too technical when explaining things to an admin or a junior, then you are also going to get deer stares and no one is going to understand you, or they will get frustrated.

You can be a """"rockstar"""" engineer and still not be a good fit because you can't sanely explain something to someone not at your technical level.

tavavex 11/2/2025|||
I feel like the stereotype about this question is different from your approach, though: supposedly, it started with quirky, new tech-minded businesses using it rationally to see people who could solve open-ended problems, and evolved to everyone using it because it was the popular thing. If someone still uses it today, I would totally expect the interviewer to have a number up on their screen, and answers that are too far off would lead to a rejection.

Besides, it's too vague of a question. If I were asked it, I would ask so many clarifying questions that I would not ever be considered for the position. Does "fill" mean just the human/passenger spaces, or all voids in the plane? (Cargo holds, equipment bays, fuel and other tanks, etc). Do I have access to any external documentation about the plane, or can I only derive the answer using experimentation? Can my proposed method give a number that's close to the real answer (if someone were to go and physically fill the plane), or does it have to be exactly spot on with no compromises?

bluGill 11/2/2025|||
Problem is, many people want to grade the answer for correctness instead of thinking. It is easy to figure out the correct answer and tell HR the candidate was off by some amount, so 'no'. It is much harder to tell HR that even though they were within some margin of correct, you shouldn't hire them because they can't think (despite getting a correct answer).
lazyant 11/2/2025|||
I agree that estimation questions (as opposed to "brain teasers" that hinge on coming up with one clever trick) are good. Developers should be able to think in orders of magnitude.
More comments...