Honestly, the only ways around it for me are
1. Have in-person interviews on a whiteboard. Pseudocode is okay.
2. Find questions that trip up LLMs. I’m lucky because my specific domain is one LLMs are really bad at, since we deal with hierarchical and temporal data. The problems are easy for a human, but the multi-dimensional complexity trips up every LLM I’ve tried.
3. Prepare edge cases that require the candidate to reconsider their initial approach. LLMs are pretty obvious when they throw things out wholesale.
A format I was fond of when I was interviewing more was asking candidates to pick a topic — any topic, from their favourite data structure to their favourite game or recipe — and explain it to me. I gave the absolute best programmer I ever interviewed a “don’t hire” recommendation, because I could barely keep up with her explanation of something I was actually familiar with, even though I repeatedly asked her to approach it as if explaining it to a layperson.
Second, she wasn't interviewing for a "money goes in, code comes out" code monkey-type role. Whoever took that role was expected to communicate with a bunch of people.
Third, the ask was "explain this to a layperson"; her performance was "a senior technical person can barely keep up". It wasn't a matter of not performing satisfactorily, it was a matter of completely failing. I really liked her as a candidate, I wanted to make the hire happen, and I'm cautious about interview nerves messing with people, so I really tried to steer the conversation in a direction where she could succeed, but she just wouldn't follow.
You can be a """"rockstar"""" engineer and still not be a good fit because you can't sanely explain something to someone not at your technical level.
Besides, it's too vague of a question. If I were asked it, I would ask so many clarifying questions that I would not ever be considered for the position. Does "fill" mean just the human/passenger spaces, or all voids in the plane? (Cargo holds, equipment bays, fuel and other tanks, etc). Do I have access to any external documentation about the plane, or can I only derive the answer using experimentation? Can my proposed method give a number that's close to the real answer (if someone were to go and physically fill the plane), or does it have to be exactly spot on with no compromises?
It is a horrific drag on the team to have the wrong engineer in a seat.
If we can’t suss out who is cheating and who is legitimate, then the only answer is that we as a field have to move towards “hire fast, fire fast.”
Right now, we generally fire slow. But we can’t have the wrong engineer in a seat for six months while you go through a PIP and performance cycle waiting for a yearly layoff. Management and HR need to get comfortable with firing people in 3 weeks as opposed to 6 months. You need more frequent one-off decisions based on an individual’s contributions and potential.
If you can’t fix the interview process, you need more PIP culture.
If you’re really off the pace and we made a bad hire, moving slowly hurts everyone.
And moving quickly lets us hire the candidates who really deserve the position, not those who game the process.
Look, I know what you're getting at, and I know that you can feel that a hire isn't good in less than a month. But buddy, you've got to give them at least a few months here.
Your stars will get annoyed that you have someone not pulling their weight, your team will have to clean up the mess of bugs and incomplete stories, and the mentors will spend more of their time supporting an engineer that will not come up to productivity no matter how hard they try.
If it’s really a bad hire, you can’t be afraid to move quickly. If it’s not going to work out, I’d rather it not work out in a month than not work out in six months.
A mentor of mine once said, “You will never regret firing somebody, but you will regret not firing somebody.”
Part of being a manager means having a bias for action and being able to back your decisions once you’ve made them.
Admittedly, you’re not just shooting from the hip and firing at random. But by the time you get to the point where you have to think about getting rid of an engineer, it’s probably past the point of no return and you need to move.
Dumb, overgeneralized piece of advice.
I’m guessing you’re neither a successful founder nor executive from your comment?
If your comment is really in good faith, why don’t we try this exercise: to make sure you understand it fully, you tell me why it wouldn’t be “dumb, overgeneralized advice”.
Surely just asking the candidate to lean back a bit during the web interview and then having a regular talk, without them reaching for the keyboard, is enough? I guess they could have some in-between layer listening to the conversation and feeding them tips, but even then it would be obvious that someone’s reading from a sheet.
So you’d only be going off how they speak, which could filter out people who are just a bit awkward.
1. Strict honor code that is actually enforced with zero tolerance.
2. Exams done in person with screening for electronic devices.
3. Recognize that generative AI is going to be ambient and ubiquitous, and rework course content from scratch to focus on the aspects that only humans still do well.
AI has some uses, but the list of things it can’t do is longer than the list of things it can.
The better option is to just ask the questions in person to prevent cheating.
This isn’t a new problem, either. There’s a reason certification bodies and universities don’t allow cheating on tests: being able to copy-paste an answer doesn’t demonstrate that you learned anything.
Do a first phone screening to agree on the details of the job and the salary, but the actual knowledge testing should be in person.
1. Embellish your resume with AI (or have it outright lie and create fictional work history) to get past the AI screening bots.
2. Have a voice-to-text AI running to cheat your way past the HR screen and first round interview.
3. Show up for in-person interview with all the other liars and unscrupulous cheats.
No matter who gets hired, chances are the company loses and honest people lose. Lame system.
There's a lot of shitty code made by LLMs, even today. So maybe we should lean in and get people to critique generated code with the interviewer. Besides, being able to talk through, review, and discuss code is more important than the initial creation.
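For example (a made-up sketch, not from any real interview or codebase), you could hand the candidate something plausible-looking like this and ask what's wrong with it:

```python
# Hypothetical snippet for a code-critique exercise (invented for illustration).
# It works on the happy path but hides a subtle, discussion-worthy bug.

def dedupe_events(events, seen=set()):
    """Return events with duplicate IDs removed, keeping the first occurrence."""
    # Bug: the default `seen` set is created once at function definition time,
    # so IDs remembered from earlier calls silently drop events in later calls.
    result = []
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            result.append(event)
    return result
```

Whether the candidate spots that the default-argument set persists across calls, explains why that matters, and proposes a fix tells you more than watching them type out a memorized solution.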
1. Very commonly repeated across the internet
2. Studied to the point of having perfect solutions written for almost any permutation of them
3. Very short and self-contained, not having to interact with greater systems and usually being solvable in a few dozen lines of code
4. Of limited difficulty (since the candidate is put on the spot and can't really think about it much, you can only make it so hard)
All of that lends them to being practically the perfect LLM use case. I would expect a modern LLM to vastly outperform me in almost any interview question. Maybe that changes for non-juniors who advance far enough to have niche specialist knowledge, but if we're talking about the generic Leetcode-style stuff, I have no doubts that an LLM would do perfectly fine compared to me.
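For a concrete sense of scale, take Two Sum (my pick for the archetypal example): the standard one-pass hash-map solution fits in a dozen lines, has been published thousands of times, and any current LLM will reproduce something like it instantly.

```python
# Two Sum: return indices of the two numbers in `nums` that add up to `target`.
# The textbook one-pass hash-map solution, i.e. exactly the kind of short,
# self-contained answer an LLM can regurgitate on demand.

def two_sum(nums, target):
    seen = {}                           # value -> index where we first saw it
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

print(two_sum([2, 7, 11, 15], 9))       # [0, 1]
```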