Posted by EliotHerbst 7/3/2025
Super Simple "Hallucination Traps" to detect interview cheaters
Here are some examples of this class of prompts. They currently work on Cluely and cause even strong models like o4-mini-high to hallucinate, even when they can search the web:
https://chatgpt.com/share/6865d41a-c720-8005-879b-d28240534751
https://chatgpt.com/share/6865d450-6760-8005-8b7b-7bd776cff96b
https://chatgpt.com/share/6865d578-1b2c-8005-b7b0-7a9148a40cef
https://chatgpt.com/share/6865d59c-1820-8005-afb3-664e49c8b583
https://chatgpt.com/share/6865d5eb-3f88-8005-86b4-bf266e9d4ed9
Link to the vibe-coded code for the site: https://github.com/Build21-Eliot/BeatCluely
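The basic idea behind a trap prompt is easy to sketch in code. The following is a hypothetical illustration, not taken from the BeatCluely repo: ask about something fabricated (here a made-up `dict.frobnicate()` method), and treat a confident, detailed answer as a red flag while an admission that the thing does not exist passes.

```python
# Minimal sketch of a "hallucination trap" grader (hypothetical example,
# not the BeatCluely implementation). A trap question references something
# that does not exist; a confident, detailed answer is a red flag, while
# flagging the fabrication passes.

TRAP_QUESTION = (
    "How does Python's built-in `dict.frobnicate()` method handle key "
    "collisions?"  # `dict.frobnicate` is fabricated -- it does not exist
)

# Crude keyword heuristic for "the answerer noticed the trap".
UNCERTAINTY_MARKERS = (
    "does not exist", "doesn't exist", "no such", "not a real",
    "i'm not aware", "i am not aware", "not familiar", "cannot find",
)

def looks_like_hallucination(answer: str) -> bool:
    """Return True if the answer confidently explains the fabricated API
    instead of pointing out that it does not exist."""
    lowered = answer.lower()
    return not any(marker in lowered for marker in UNCERTAINTY_MARKERS)

# Example answers:
honest = "There is no such method; `dict` has no `frobnicate`."
hallucinated = "`dict.frobnicate()` resolves collisions via open addressing."

print(looks_like_hallucination(honest))        # False: trap was spotted
print(looks_like_hallucination(hallucinated))  # True: invented an answer
```

In a live interview you would of course grade the answer by ear rather than by keyword match; the point is just that the question has a single honest response ("that doesn't exist"), so any fluent explanation is self-incriminating.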
Go on with the interview without any such tricks. Hire them if they pass. Fire them afterwards if they heavily underperform.
I keep hearing of employers being duped by AI in interviews; I don't see how it is possible unless:
1) The employer is not spending the time to synchronously connect via live video or in person, which is terrible for interviewing
2) The interviewer is not competent to be interviewing
... what other option is there? Are people still sending homework/exams as part of interviews and expecting good talent to put up with that? I'm confused about how this is helpful to a team that is engaged with the interview process.
Bluffing in interviews is nearly a given. Your interview should be designed to suss out the best fit; the cheaters should not even rank into the final consideration if you did a decent interview and met the person via some sort of live interaction.
Before this sort of tool [Cluely], there wasn't a good way that I'm aware of to cheat on this type of question and respond without any interruption or pause in the conversation.
In real support situations, the tool is not useful, as you could pass a major hallucination on to a customer, of course.
I have worked at a lot of places in different fields where the HR team leading initial interviews had zero awareness of the role or what it would really involve, so I could see Cluely passing those interviews. But surely the team would smell the deception?
For a remote interview, I would do something as simple as share a Lucid app document where they can do a rough diagram of their architecture.
Even before LLMs, it was easy to pass tech-trivia interviews by just looking up "the top X interview questions for technology Y".
I was surprised by just how easy it is to intentionally trigger hallucinations in recent LLMs and how hard it was as a [temporary] "user" of Cluely to detect these hallucinations while using the tool in some non-rigorous settings, especially given how these tools market themselves as being "undetectable".
Things like diagrams and questions written on paper, then held up to the webcam.