
Posted by yusufaytas 1 day ago

AI Broke Interviews (yusufaytas.com)
84 points | 119 comments | page 2
dagmx 1 day ago|
I’ve mentioned it before, but it’s not just that people “cheat” during interviews with an LLM…it’s that they have atrophied a lot of their basic skills because they’ve become dependent on it.

Honestly, the only ways around it for me are

1. Have in person interviews on a whiteboard. Pseudocode is okay.

2. Find questions that trip up LLMs. I’m lucky because my specific domain is one LLMs are really bad at, since we deal with hierarchical and temporal data. They’re easy for a human, but the multi-dimensional complexity trips up every LLM I’ve tried.

3. Prepare edge cases that require the candidate to reconsider their initial approach. LLMs are pretty obvious when they throw out things wholesale.

staticautomatic 1 day ago|
Rather than trying to trip up the LLM I find it’s much easier to ask about something esoteric that the LLM would know but a normal person wouldn’t.
dagmx 1 day ago||
That basically amounts to the same thing. LLMs are pretty good at faking responses to conversational questions.
mtneglZ 1 day ago||
I still think "how many golf balls fit in a 747" is a good interview question. No one needs to give me a number, but someone could really wow me by outlining a real plan to estimate this: tell me how you would subcontract estimating the size of the golf ball and the plane. It's not about a right or wrong answer but about explaining to me how you think. I do software and hardware interviews and always did them in person so we can focus on how a candidate thinks. You can answer every question wrong in my interview and still be above the bar because of how you show me you can think.
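The kind of back-of-the-envelope plan described above can be sketched in a few lines; every number below is an assumed round figure chosen for illustration, not measured data:

```python
import math

# Rough Fermi estimate of "how many golf balls fit in a 747 cabin".
cabin_length_m = 60.0    # assumed usable cabin length
cabin_radius_m = 3.0     # assumed effective cabin radius
ball_diameter_m = 0.043  # a golf ball is roughly 4.3 cm across
packing_fraction = 0.64  # random close packing of equal spheres

# Treat the cabin as a cylinder and the ball as a sphere.
cabin_volume = math.pi * cabin_radius_m ** 2 * cabin_length_m
ball_volume = (4 / 3) * math.pi * (ball_diameter_m / 2) ** 3
estimate = cabin_volume * packing_fraction / ball_volume

print(f"on the order of {estimate:.0e} golf balls")
```

The point of the exercise is the decomposition (volume of the container, volume of a ball, a packing correction), not the final figure, which lands somewhere around ten million with these assumptions.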
pdpi 1 day ago||
Some of the best hires I’ve ever made would’ve tanked that sort of interview question. Being able to efficiently work through those puzzles is probably decent positive signal, but failure tells me next to nothing, and a question that can fail to give me signal is a question that wastes valuable time — both mine and theirs.

A format I was fond of when I was interviewing more was asking candidates to pick a topic — any topic, from their favourite data structure to their favourite game or recipe — and explain it to me. I gave the absolute best programmer I ever interviewed a “don’t hire” recommendation, because I could barely keep up with her explanation of something I was actually familiar with, even though I repeatedly asked her to approach it as if explaining it to a layperson.

cudgy 1 day ago||
So you gave up on the best programmer you ever interviewed, because they weren’t able to perform a single secondary task satisfactorily?
pdpi 21 hours ago|||
First, the average quality of candidates we were getting was pretty good. She stood out, and definitely gave a memorable performance on the technical level, but it wasn't some colossal blow to our org that we didn't make the hire.

Second, she wasn't interviewing for a "money goes in, code comes out" code monkey-type role. Whoever took that role was expected to communicate with a bunch of people.

Third, the ask was "explain this to a layperson", and her performance was "a senior technical person can barely keep up". It wasn't a matter of not performing satisfactorily, it was a matter of completely failing. I really liked her as a candidate, I wanted to make the hire happen, and I'm cautious about interview nerves messing with people, so I really tried to steer the conversation in a direction where she could succeed, but she just wouldn't follow.

CableNinja 23 hours ago|||
Being able to explain a thing to someone non-technical is an important social requirement. If you have to explain a problem or project to a C-level and you go off the rails with technical stuff, or get deep in the weeds of some part of it without being asked, you're going to get deer-in-headlights stares and no one in the room is going to understand you. Similarly, if you, as an engineer, go too technical when explaining things to an admin or a junior, you are also going to get those stares and no one is going to understand you, or they will get frustrated.

You can be a """"rockstar"""" engineer and still not be a good fit because you can't sanely explain something to someone not at your technical level.

tavavex 1 day ago|||
I feel like the stereotype about this question is different from your approach, though: supposedly, it started with quirky, new tech-minded businesses using it rationally to see people who could solve open-ended problems, and evolved to everyone using it because it was the popular thing. If someone still uses it today, I would totally expect the interviewer to have a number up on their screen, and answers that are too far off would lead to a rejection.

Besides, it's too vague of a question. If I were asked it, I would ask so many clarifying questions that I would not ever be considered for the position. Does "fill" mean just the human/passenger spaces, or all voids in the plane? (Cargo holds, equipment bays, fuel and other tanks, etc). Do I have access to any external documentation about the plane, or can I only derive the answer using experimentation? Can my proposed method give a number that's close to the real answer (if someone were to go and physically fill the plane), or does it have to be exactly spot on with no compromises?

bluGill 1 day ago|||
Problem is, many people want to grade the answer for correctness instead of for thinking. It is easy to figure out a correct answer, and you can tell HR they were off by some amount, so 'no'. It is much harder to tell HR that even though they were within some amount of correct, you shouldn't hire them because they can't think (despite getting a correct answer).
lazyant 23 hours ago|||
I agree that estimation questions (as opposed to "brain teasers" that hinge on coming up with the one clever solution) are good. Developers should be able to think in orders of magnitude.
avidphantasm 1 day ago||
AI is breaking more than interviews. I recently overheard someone who is studying to be a psychiatric nurse practitioner (they are already a RN) via an online program say “ChatGPT is my new best friend.” We are doomed.
Esophagus4 1 day ago||
I agree with the article. Sadly, I have seen candidates cheating, and, in hindsight, I have hired some I suspect were cheating.

It is a horrific drag on the team to have the wrong engineer in a seat.

If we can’t suss out who is cheating and who is legitimate, then the only answer is that we as a field have to move towards “hire fast, fire fast.”

Right now, we generally fire slow. But we can’t have the wrong engineer in a seat for six months while they go through a PIP and performance cycle waiting for a yearly layoff. Management and HR need to get comfortable with firing people in 3 weeks as opposed to 6 months. You need more frequent one-off decisions based on an individual’s contributions and potential.

If you can’t fix the interview process, you need more PIP culture.

harshalizee 22 hours ago||
Unless your firm is offering a solid paycheck and a 6 month severance package a la Netflix, no rational candidate is going to bet on a place that'll boot you in 3 weeks because they felt "the vibes are off". You'll be self selecting for only the most desperate candidates in the market trying to get a job.
Esophagus4 21 hours ago||
Not once did I say that would happen because the vibes are off. You’re seeing what you want to see in my comment.

If you’re really off the pace and we made a bad hire, moving slowly hurts everyone.

And moving quickly lets us hire the candidates who really deserve the position, not those who game the process.

Balgair 1 day ago||
3 weeks!? Man, I'm still figuring out where a decent sandwich joint is by my work at 3 weeks. There is no way that I could be up to speed on a code base in that short of a time.

Look, I know what you're getting at, and I know that you can feel that a hire isn't good in less than a month. But buddy, you got to give them at least a few months here.

Esophagus4 1 day ago||
A few months with the wrong hire is detrimental to the team.

Your stars will get annoyed that you have someone not pulling their weight, your team will have to clean up the mess of bugs and incomplete stories, and the mentors will spend more of their time supporting an engineer that will not come up to productivity no matter how hard they try.

If it’s really a bad hire, you can’t be afraid to move quickly. If it’s not going to work out, I’d rather it not work out in a month than not work out in six months.

A mentor of mine once said, “You will never regret firing somebody, but you will regret not firing somebody.”

Part of being a manager means having a bias for action and being able to back your decisions once you’ve made them.

Admittedly, you’re not just shooting from the hip and firing at random. But by the time you get to the point where you have to think about getting rid of an engineer, it’s probably past the point of no return and you need to move.

kyleee 18 hours ago|||
“You will never regret firing somebody, but you will regret not firing somebody.”

Dumb over generalized piece of advice.

Esophagus4 1 hour ago||
Man, I wish I could downvote this knee-jerk, reactionary comment.

I’m guessing from your comment that you’re neither a successful founder nor an executive?

If your comment is really good faith, why don’t we try this exercise: you tell me why it wouldn’t be “dumb overgeneralized advice” to make sure you understand it fully.

kace91 1 day ago||
I don’t understand how offline interviewing is needed to catch ai use, not counting take homes.

Surely just asking the candidate to lean back a bit on the web interview and then having a regular talk, without them reaching for the keyboard, is enough? I guess they could have some in-between layer listening to the conversation and posting tips, but even then it would be obvious someone’s reading from a sheet.

nradov 1 day ago||
That type of cat-and-mouse game is ultimately pointless. It's fairly easy to build an ambient AI assistant that will listen in to the conversation and automatically display answers to interview questions without the candidate touching a keyboard. If the interviewer wants to get any reliable signal then they'll have to ask questions that an AI can't answer effectively.
Gigachad 1 day ago||
There are interview cheating tools which listen in on the call and show a layer over the screen with answers which doesn’t show on screen shares.

So you’d only be going off how they speak which could be filtering out people who are just a bit awkward.

storus 1 day ago||
Universities and education overall also had their foundation detonated by AI. Some Stanford classes now do 15 minute tricky exams to reduce the chance of cheating with AI (it takes some time to type it so the point is to make the exam so short that one can't physically cheat well). I am not sure what the solution for this mess is going to be.
nradov 1 day ago|
Several possible solutions:

1. Strict honor code that is actually enforced with zero tolerance.

2. Exams done in person with screening for electronic devices.

3. Recognize that generative AI is going to be ambient and ubiquitous, and rework course content from scratch to focus on the aspects that only humans still do well.

storus 1 day ago||
Only 3) could scale but then those exam takers not using AI would fail unless they are geniuses in many areas. 1) and 2) can't be done when you have 50-70% of your course consisting of online students (Stanford mixes on-campus with CGOE external students who take the exams off-campus), who are important for your revenue. Proctoring won't work either as one could have two computers, one for the exam, one for the cheating (done for interviews all the time now).
nradov 1 day ago||
Well realistically exam takers not using AI will fail in any sort of real world technical / professional / managerial occupation anyway. They might as well get used to it. Not being able to use LLMs effectively today is like the equivalent of not knowing how to use Windows 20 years ago.
CableNinja 23 hours ago|||
Call me when AI can manage to write a regex that I would write to parse a complex string, rather than some ridiculous mishmash of non-ASCII chars that you need to talk to an ancient shaman to decrypt; or when AI can actually recognize contextual hints well enough to know what the fuck I'm talking about, and not produce a writeup of things no longer relevant; or when it stops hallucinating and giving made-up answers just to give an answer (which is far worse than saying "I don't know", from a human or an AI).

AI has some uses, but the list of things it can't do is longer than the list of things it can.
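For what it's worth, the readability gap being complained about here can at least be narrowed on the human side with verbose regexes. A minimal sketch, assuming a hypothetical `key=value;` log format (the format and field names are made up for illustration):

```python
import re

# The same pattern written two ways: an opaque one-liner versus a
# commented re.VERBOSE regex that a reviewer can actually read.
opaque = re.compile(r"(\w+)=([^;]*);?")

readable = re.compile(r"""
    (?P<key>\w+)        # field name
    =                   # key/value separator
    (?P<value>[^;]*)    # value: anything up to the next semicolon
    ;?                  # optional trailing semicolon
""", re.VERBOSE)

line = "host=web01;status=ok;latency=12ms"
fields = {m["key"]: m["value"] for m in readable.finditer(line)}
print(fields)  # {'host': 'web01', 'status': 'ok', 'latency': '12ms'}
```

Both patterns match identically; the verbose form just carries its own documentation, which is exactly what the shaman-decryption complaint is about.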

cudgy 1 day ago|||
Has AI advanced that far? How do managers use AI in their daily work? To generate emails? Most managers spend all day in meetings; how do they utilize AI for that? Inaccurately compile the minutes of the meeting and summarize them?
highfrequency 1 day ago||
If AI can solve all of your interview questions trivially, maybe you should figure out how to use AI to do the job itself.
Gigachad 1 day ago||
The questions were just a proxy for the knowledge you needed. If you could answer the questions you must have learned enough to be able to do the work. We invented a way to answer the test questions without being able to do the work.
onionisafruit 1 day ago||
To continue the point. If the knowledge you need is easily obtained from an LLM then knowledge isn’t really necessary for the job. Stop selecting for what the candidate knows and start selecting for something more relevant to the job.
Gigachad 1 day ago||
An accurate test would just be handing them a real piece of work to complete. Which would take ages and people would absolutely hate it. The interview questions are significantly faster, but easy to cheat on in online interviews.

The better option is to just ask the questions in person to prevent cheating.

This isn’t a new problem either. There is a reason certifications and universities don’t allow cheating in tests either. Because being able to copy paste an answer doesn’t demonstrate that you learned anything.

vasilzhigilei 1 day ago||
Interviews should be in-person.
TrackerFF 1 day ago||
In every practical sense, online interviews are part of the early screening process. The sheer number of applicants means that you need to do some filtering before inviting people to on-site interviews.
Gigachad 1 day ago|||
If you make the first interview in person, most people filter themselves out because they aren’t in the country or can’t be bothered.

Do a first phone screening to agree on the details of the job and the salary, but the actual knowledge testing should be in person.

floundy 1 day ago|||
So the process is now:

1. Embellish your resume with AI (or have it outright lie and create fictional work history) to get past the AI screening bots.

2. Have a voice-to-text AI running to cheat your way past the HR screen and first round interview.

3. Show up for in-person interview with all the other liars and unscrupulous cheats.

No matter who gets hired, chances are the company loses and honest people lose. Lame system.

NumberCruncher 1 day ago||
Modern problems sometimes require old-fashioned solutions.
ferrouswheel 1 day ago||
While I agree LLMs have forever changed the interviewing game, I also strongly disagree with deeming slop code “perfect” and “optimal”.

There's a lot of shitty code made by LLMs, even today. So maybe we should lean in and get people to critique generated code with the interviewer. Besides, being able to talk through, review, and discuss code is more important than the initial creation.

tavavex 1 day ago||
Interview questions are a genre of their own though. They are:

1. Very commonly repeated across the internet

2. Studied to the point of having perfect solutions written for almost any permutation of them

3. Very short and self-contained, not having to interact with greater systems and usually being solvable in a few dozen lines of code

4. Of limited difficulty (since the candidate is put on the spot and can't really think about it much, you can only make it so hard)

All of that lends them to being practically the perfect LLM use case. I would expect a modern LLM to vastly outperform me in almost any interview question. Maybe that changes for non-juniors who advance far enough to have niche specialist knowledge, but if we're talking about the generic Leetcode-style stuff, I have no doubts that an LLM would do perfectly fine compared to me.

harpiaharpyja 1 day ago||
It's an indictment of how bad coding interviews are/were.
antiquark 1 day ago|
Welp, back to nepotism, I guess.