Before Google, AFAIK, it was ad hoc, among good programmers. I only ever saw people talking with people about what they'd worked on, and about the company.
(And I heard that Microsoft sometimes did massive-ego interviews early on, but fortunately most smart people didn't mimic that.)
Keep in mind, though, that was before programming was a big-money career. So you had people who were really enthusiastic, and people for whom it was just a decent office job. People who wanted to make lots of money went into medicine, law, or finance.
As soon as software became a big-money career, and word got out about how Google (founded by people with no prior industry experience) interviewed... we got undergrads prepping for interviews. Which was a new thing, and my impression at the time was that the only people who would need to prep for interviews either weren't good, or were some kind of scammer. But then eventually those students, who had no awareness of anything else, thought that this was normal, and now so many companies just blindly do it.
If we could just make some other profession be easier big money, maybe only people who are genuinely enthusiastic would be interviewing. And we could interview like adults, instead of like teenagers pledging a frat.
I think tech is and was an exception here.
I'd advise anyone to read the available financial reports on any company they're intending to join, except if it's an internship. You'll spend hours interviewing and years dealing with these people; you might as well take an hour or two to understand whether the company is sinking or a scam in the first place.
really, company reviews are all that matters, and even that has limited value since your life is determined by your manager
best you can do is suss out how your interviewers are faring
are they happy? are they stressed? everything else has so much noise it's worse than worthless
For developers who work on products, it's also very important to get a sense of whether the team you'd be joining builds a core part of the business or something speculative (i.e. stable vs. likely to have layoffs), and how successful the product is in the marketplace (teams behind failing products are also likely victims of layoffs).
And if your team is far from the money, what often matters much much more is how much political capital your skip level manager has and to what extent it can be deployed when the company needs to re-org or cut. Shoot, this can matter even if you're close to the money (if you're joining a team that's in the critical path of the profit center vs a capex moonshot project funded by said profit center).
This is one thing I really like about sales engineering. Sales orgs are (relatively) low on political BS.
I was in a small conference room with the two co-founders, and one of them hadn't seen my resume, and was trying to read it on his phone while we were talking.
Bam. I whipped out printed copies for both of them, from my interview folio.
But no "prep" like months of LeetCode grinding, memorizing the "design interview" recital, practicing all the tips for disingenuous "behavioral" passing, etc.
They only gave it up years later when it became clear even to them it wasn't benefiting them.
Which sounds like a classic misconception of people with no experience outside of a fancy university echo chamber (many students and professors).
Much like Google's "how much do you remember from first-year CS 101 classes" interviews, which coincidentally looked like (among my theories) an attempt to build a metric that matches... (surprise!) a student with a high GPA at a fancy university.
Which is not very objective, nor very relevant. Even before the entire field shifted its basic education to help job-seekers game this company's metric.
Yes. A while ago a company contacted me to interview, and after the first "casual" round they told me their standard process was to go full leetcode in the second round, and that I was advised to prepare for it if I was interested in going further.
While that's the only company that was so upfront about it, most accept that leetcode problems are dumb (they need prep even for a working engineer) and still base the core of their technical interview on them.
I think you're viewing the "good old days" of interviewing through the lens of nostalgia. Old school interviewing from decades ago or even more recently was significantly more similar to pledging to a frat than modern interviews.
> people who are genuinely enthusiastic
This seems absurdly difficult to measure well and gameable in its own way.
The flip side of "ad hoc" interviewing as you put it was an enormous amount of capriciousness. Being personable could count for a lot (being personable in front of programmers is definitely a different flavor of personable than in front of frat bros, but it's just a different flavor is all). Pressure interviews were fairly common, where you would intentionally put the candidate in a stressful situation. Interview rubrics could be nonexistent. For all the cognitive biases present in today's interview process, older interviews were rife with many more.
If you try to systematize the interview process and make it more rigorous you inevitably make a system that is amenable to pre-interview preparation. If you forgo that you end up with a wildly capricious interview system.
Of course you rarely have absolutes. Even the most rigorous modern interview systems often still have capriciousness in them, and there was still some measure of rigor to old interview styles.
But let's not forget all the pain and problems of the old style of interviews.
> This seems absurdly difficult to measure well and gameable in its own way.
True, and it is gamed currently (some prep books tell you to feign enthusiasm).
But let's whimsically say that the hypothetical of software development no longer being the go-to easy lots-of-money career meant that the gaming people would go to some other field instead, leaving you with only the people who really want to do this job.
Yeah, no, not at all. Interviewing in the 90s was just a cool chat between hackers. What interesting stuff have you built, let's talk about it. None of the confrontational leetcode nonsense of later years.
I still refuse to participate in that nonsense, so I'll never make people go through such interviews. I've only hired two awesome people this year, so less than a drop in the bucket, but I'll continue to do what I can to keep some sanity in the interviewing in this industry.
The number of times I've seen a "do you want to have a beer with them?" test in lieu of a simple programming exam is horrifying. (And it showed in the level of talent they hired.)
Fortunately, most of those have been left by the wayside, roadkill of history.
Because that is really the alternative if we don’t have rigorous, systematic technical interviews: cognitive bias and gut-feel decisions. Both of which are antithetical to high performing environments.
The reality of these "rigorous, systematic technical interviews" is that we have a ton of companies doing nonsense theatre that isn't actually about "fundamentals", is also easily biased (as even some purported Google interviewers have admitted on HN), and have almost nothing to do with how effective a software engineer will be (as even Google's own stats show).
So you may not think they’re predictive of success, but you should see how much less predictive everything else is.
Hiring is always a risk. It will never be a perfect science.
That’s why it’s important to have a quick off-ramp for those who aren’t working out.
Edit: BTW, where did you see Google saying their interview process doesn't work? Other than a few anonymous devs venting on HN, the company still treats coding interviews as critical to their process. You will always find a few complainers, but the fact that one of the world's top software shops still uses it says what needs to be said.
And I've almost always seen the LeetCode grilling be administered by someone who doesn't know what they're doing (and often also tainted with ego, despite the strange claim of some that a LeetCode grilling is objective).
That said, if you're sourcing random people, good luck, it's a flood of LeetCode gamers to wade through, and too much of your staff interviewing them might also be LeetCode gamers with no experience doing non-LeetCode interviews.
> Edit: BTW, where did you see Google saying their interview process doesn’t work?
I have a bunch of notes on Google hiring I'd have to dig through, but the first link I have quick at hand is this retrospective by a hiring committee person who left (I have a note that 8m50s is a funny story of the hiring committee realizing they would've rejected their own packets): https://www.youtube.com/watch?v=r8RxkpUvxK0 (IIRC, I don't agree with all the beliefs he still holds, but he calls out a lot of problems they found.)
Consulting positions also don't have much leetcode BS. These have always focused much more on practical experience. They also pay less than Staff+ roles at FAANGs.
I have worked at a company that had a casual, experience-based, conversational interview. The engineering there was atrocious, and I left as soon as I could.
If you can talk your way into a position, that says a lot about the level of talent there. Top talent wants to work at a place that has a transparent and merit based performance bar, and you can’t smooth-talk your way into a job.
The reality is that AI just blew up something that was a pile of garbage, and the result is exactly what you'd expect.
We all treat interviews in this industry as a human resources problem, when in reality it's an engineering problem.
The people with the skills to assess technical competency are even scarcer than actual engineers (because they would have to be engineers with the people skills for interviewing), and those people are usually far too busy to be bothered with what is (again) perceived as a human resources problem.
Then the rest is just random HR personnel pretending that they know what they're talking about. AI just exposed (even more) how incompetent they are.
i recently interviewed someone who was a senior engineer on the space shuttle, but managed a call center after that. Whether this person could still write code was a question we couldn't figure out, and so we had to pass. (We can't prove it, but we think we ended up with someone who outsourced the work elsewhere - but at least that person could code if needed, as proved by the interview.)
senior engineer could be a project manager who never wrote code.
i remember this because it is one of the few 'no's I have given where it wasn't clear the person would be bad at the job. Normally the no-hire signal is because the person would obviously be bad.
At first I was quite concerned, then I realized that in nearly all cases I’d spotted usage, a pattern stood out.
Of the folks I spotted, all spoke far too clearly and linearly when it came to problem solving. No self-doubt, no suggestion of different approaches, no appearance of thought, just a clear A->B solution. Then, because they often didn't ask any requirements questions beyond what I initially asked, the solution would be inadequate.
The opinion I came to is that even in the best pre-AI-era interviews I conducted, most engineers would contemplate ideas, change their minds, ask clarifying questions. Folks mindlessly using AI don't do this and instead just treat me as the prompt input and repeat it back. Whether they were actually using AI or not (ultimately I won't know), they still fail to meet my bar.
Sure, some more clever folks will mix or limit their LLM usage and get past me, but oh well.
Maybe he just memorized the solution, I don’t know.
Would you fail that guy?
In those cases where I’ve seen that level of performance, there have been (one or more of):
- Audio/video glitches.
- candidate pausing frequently after each question, no words, then sudden clarity and fluency on the problem.
- candidate often suggests multiple specific ideas/points to each question I ask.
- I can often see their eyes reading back and forth (note: if you use AI in an interview, maybe don't use a 4K webcam).
- way too much specificity when I didn't ask for it. For example, the topic of profiling a Go application came up, and the candidate suggested we use go tool pprof along with a few specific arguments that weren't relevant; later I found the exact same example commands verbatim in the documentation (see the sketch right after this list for context).
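(For anyone who hasn't used it: the generic way pprof gets wired into a Go service looks roughly like the sketch below. This is just the standard pattern from the net/http/pprof docs, not a reproduction of the specific commands the candidate quoted.)

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
    )

    func main() {
        // Expose the profiling endpoints on a local port.
        go func() {
            log.Println(http.ListenAndServe("localhost:6060", nil))
        }()

        // Then a CPU profile can be grabbed with something like:
        //   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
        select {} // stand-in for the real application doing its work
    }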
In all, the impression I come away with in those types of interviews is that they performed “too well” in an uncanny way.
I worked for AWS for a long time and did a couple hundred interviews there; the best candidates I interviewed were distinctly different in how they solved problems and how they communicated, in ways that reading from an LLM response can't resemble.
If this were in person, then no, I likely wouldn't fail them. However, in all the in-person interviews I've conducted, I've never seen that even from the best candidates; that's why I also find it odd over video.
Look around you. 15 years ago we didn't have smartphones, and now kids are so addicted to them they're giving themselves anxiety and depression. Not just kids, but kids have it the worst. You know it's gonna be even worse with AI.
Most people in my engineering program didn’t deserve their engineering degrees. Where do you think all these people go? Most of them get engineering jobs.
But in case you're serious, there's an old saying: if everywhere you go smells like shit, maybe it's time to check your shoes.
I don't disagree at all. I find it slightly funny that in my experience interviewing for FAANG and YC startups, the signs you mentioned would be seen as "red flags". And that's not just my assumption, when I asked for feedback on the interview, I have multiple times received feedback along the lines of "candidate showed hesitation and indecision with their choice of solution".
What I’m looking for is strong thinking and problem solving. Sometimes someone uses AI to sort of parallelize their brain, and I’m impressed. Others show me their aptitude without any advanced tools at all.
What I can't stand is the lazy AI candidates. People who I know can code, asking Claude to write a function that does something completely trivial and then saying literally nothing in the 30 seconds that it "thinks". They're just not trying. They're not leveraging anything, they're outsourcing. It's just so sad to see how quick people are to be lazy; to me it's like ordering food delivery from the place under your building.
1. Get students to work on a more complex project than usual (relative to what previous cohorts did). Let them use whatever they want and let them know that AI is fine.
2. Make them come in for an in-person exam where they answer questions about the why of the decisions they made during the project.
And that's it? I believe that if you can a) produce a fully working project meeting all functional requirements, and b) argue about its design with expertise, you pass. Do it with AI or not.
Are we interested in supporting people who can design something and create it or just have students who must follow the whims of professors who are unhappy that their studies looked different?
But yes, we currently allow students to use AI provided their solution works and they can explain it. We just discourage using AI to generate the full solution to each problem.
Exactly the same as in professional environments: you can use LLMs for your code, but you've got to stand behind whatever you submit. You can of course use something like Cursor and let it run free, not understanding a thing about the result, or you can make changes step by step with AI and try to understand the why.
I believe that if teachers relaxed their emotions a bit and adapted their grading systems (while also raising the expected learning outcomes), we would see students who are trained to understand the pitfalls of LLMs and how to get the most out of them.
If you let people use AI they are still accountable for the code written under their name. If they can’t look at the code and explain what it’s doing, that’s not demonstrating understanding.
On the other hand, encouraging employees to adopt "AI" in their workflows, while at the same time banning "AI" on interviews, seems a bit hypocritical - at least from my perspective. One might argue that this is about dishonesty, and yes, I agree. However, AI-centric companies apparently include AI usage in employee KPIs, so I'm not sure how much they value the raw/non-augmented skill-set of their individual workers.
Of course, in all other cases, not disclosing AI usage is quite a dick move.
Companies always are.
It's okay for companies to use AI in recruitment process but not for the candidates.
It's okay to lay off people to cut costs but not okay to say you are looking for a new job to get higher salary.
I personally think the best interview format is the candidate doing a take-home project and giving a presentation on it. It feels like the most comprehensive yet minimal way to assess a candidate on a variety of metrics: it tests coding ability in the project, real system design rather than hypothetical, communication skills, and depth of understanding when the interviewer asks follow-up questions. It would be difficult to cheat this with AI, since you would need a solid understanding of the whole project for the presentation.