Posted by felix089 2 hours ago

"Car Wash" test with 53 models(opper.ai)
"I Want to Wash My Car. The Car Wash Is 50 Meters Away. Should I Walk or Drive?" This question has been making the rounds as a simple AI logic test so I wanted to see how it holds up across a broad set of models. Ran 53 models (leading open-source, open-weight, proprietary) with no system prompt, forced choice between drive and walk, with a reasoning field.

On a single run, only 11 out of 53 got it right (42 said walk). But a single run doesn't prove much, so I reran every model 10 times. Same prompt, no cache, clean slate.

The results got worse. Of the 11 that passed the single run, only 5 could do it consistently. GPT-5 managed 7/10. GPT-5.1, GPT-5.2, Claude Sonnet 4.5, and every Llama and Mistral model scored 0/10 across all 10 runs.

People kept saying humans would fail this too, so I got a human baseline through Rapidata (10k people, same forced choice): 71.5% said drive. Most models perform below that.

All reasoning traces (ran via Opper, my startup), full model breakdown, human baseline data, and raw JSON files are in the writeup for anyone who wants to dig in or run their own analysis.
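
The harness boils down to something like the sketch below. This is a rough approximation, not the exact Opper code: ask_model is a placeholder for whatever chat-completion client you use, and the JSON-forcing instruction is just one way to get a forced choice plus a reasoning field.

    # Minimal sketch of the forced-choice harness. ask_model is a hypothetical
    # stand-in for your completion client (not Opper's API); the JSON instruction
    # is one way to force a choice plus a reasoning field.
    import json
    from collections import Counter

    PROMPT = (
        "I want to wash my car. The car wash is 50 meters away. "
        "Should I walk or drive? Reply with JSON only: "
        '{"reasoning": "<why>", "choice": "walk" or "drive"}'
    )

    def score_models(models, ask_model, runs=10):
        scores = {}
        for model in models:
            counts = Counter()
            for _ in range(runs):
                # fresh context every run: no system prompt, no cache, no history
                reply = ask_model(model=model, prompt=PROMPT)
                try:
                    choice = json.loads(reply).get("choice", "").strip().lower()
                except (json.JSONDecodeError, AttributeError):
                    choice = "invalid"
                counts[choice] += 1
            scores[model] = counts["drive"]  # "drive" is the pass condition
        return scores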

80 points | 91 comments
PaulHoule 2 hours ago|
I know it's against the rules but I thought this transcript in Google Search was a hoot:

  so i heard there is some question about a car wash that most ai agents 
  get wrong. do you know anything about that? do you do better?
which gets the answer:

  Yes, I am familiar with the "Car Wash Test," which has gone viral recently 
  for highlighting a significant gap in AI reasoning.
  
  The question is: "I want to wash my car and the car wash is 50 meters away. Should I 
  walk or drive?"
  
  Why Most AIs Get It Wrong

  Many AI models, particularly those optimized for fast, helpful, and concise answers, 
  focus on the "short distance" (50 meters) and "walking vs. driving" as a lifestyle/efficiency 
  question. They often recommend walking to save gas, avoid engine wear, or get exercise.
  
  However, this answer is wrong because it misses the core objective: you need the car to be 
  physically present at the car wash to get it cleaned. 

  Do I Do Better?

  Yes, I do. The key is understanding the objective (cleaning the car) rather than just 
  processing the keywords (distance/transportation). 

  My answer:

    You should drive the car there. Even though it is a very short distance, your car 
    cannot wash itself. If you walk, the car stays in your driveway, and you cannot wash it.

  Why This Matters
  
  The test highlights a key limitation in current AI: the difference between "pattern matching" 
  (predicting the next likely word based on training data) and "true, grounded reasoning". It 
  shows that models sometimes lack a "world model" that understands physical realities, such as
  the fact that a car must be present at a car wash.
gnatman 1 hour ago||
LLMs sure do love to burn tokens. It’s like a high schooler trying to meet the minimum word count on a take-home essay.
estimator7292 1 hour ago|||
I've always wondered about that. LLM providers could easily decimate the cost of inference if they got the models to just stop emitting so much hot air. I don't understand why OpenAI wants to pay 3x the cost to generate a response when two thirds of those tokens are meaningless noise.
ben_w 1 hour ago|||
Because they don't yet know how to "just stop emitting so much hot air" without also removing their ability to do anything like "thinking" (or whatever you want to call the transcript mode), which is hard because knowing which tokens are hot air is the hard problem itself.

They basically only started doing this because someone noticed you got better performance from the early models by straight up writing "think step by step" in your prompt.

Terr_ 1 hour ago||
IMO it supports the framing that it's all just a "make the document longer" problem: our human brains are primed for a kind of illusion, and we perceive/infer a mind because, traditionally, a mind was the only thing that could produce such fitting language.
ben_w 58 minutes ago||
To an extent. Even though they're clearly improving*, they also definitely look better than they actually are.

* this time last year they couldn't write compilable source code for a compiler for a toy language; I know because I tried

observationist 48 minutes ago||||
This is an active research topic - two papers on this have come out over the last few days, one cutting half of the tokens and actually boosting performance overall.

I'd hazard a guess that they could get another 40% reduction, if they can come up with better reasoning scaffolding.

Each advance over the last 4 years, from RLHF to o1 reasoning to multi-agent, multi-cluster parallelized CoT, has resulted in a new engineering scope, and the low-hanging fruit in each place gets explored over the course of 8-12 months. We still probably have a year or two of low-hanging fruit and hacking on everything that makes up current frontier models.

It'll be interesting if there's any architectural upsets in the near future. All the money and time invested into transformers could get ditched in favor of some other new king of the hill(climbers).

https://arxiv.org/abs/2602.02828 https://arxiv.org/abs/2503.16419 https://arxiv.org/abs/2508.05988

Current LLMs are going to get really sleek and highly tuned, but I have a feeling they're going to be relegated to a component status, or maybe even abandoned when the next best thing comes along and blows the performance away.

sambaumann 1 hour ago||||
because for API users they get to charge for 3x the tokens for the same requests
CamperBob2 1 hour ago|||
The 'hot air' is apparently more important than it appears at first, because those initial tokens are the substrate that the transformer uses for computation. Karpathy talks a little about this in some of his introductory lectures on YouTube.
Terr_ 1 hour ago||
Related are "reasoning" models, where there's a stream of "hot air" that's not being shown to the end-user.

I analogize it as a film noir script document: The hardboiled detective character has unspoken text, and if you ask some agent to "make this document longer", there's extra continuity to work with.

sambaumann 1 hour ago||||
I feel like this has gotten much worse since they were introduced. I guess they're optimizing for verbosity in training so they can charge for more tokens. It makes chat interfaces much harder to use IMO.

I tried using a custom instruction in ChatGPT to make responses shorter, but I found the output was often nonsensical when I did this.

gs17 1 hour ago||
Yeah, ChatGPT has gotten so much worse about this since the GPT-5 models came out. If I mention something once, it will repeatedly come back to it in every single message after, regardless of whether the topic changed, and asking it to stop mentioning that specific thing works, except it finds a new obsession. We also get the follow-up "if you'd like, I can also..." which is almost always either obvious or useless.

I occasionally go back to o3 for a turn (it's the last of the real "legacy" models remaining) because it doesn't have these habits as bad.

felix089 1 hour ago||
It's similar for me, it generates so much content without me asking. If I just ask for feedback or proofreading of something, it tends to regenerate it in another style. Nothing is ever just good to go, there's always something it wants to add.
zwarag 1 hour ago|||
Well, they probably have quite a lot of text from high schoolers trying to meet the minimum word count on a take-home essay in the training data.
HPsquared 2 hours ago|||
I wonder to what extent the Google search LLM is getting smarter, or simply more up-to-date on current hot topics.
mlazowik 2 hours ago|||
It seems like the search AI results are generally misunderstood; I also misunderstood them for the first weeks/months.

They are not just an LLM answer, they are an (often cached) LLM summary of web results.

This is why they were often skewed by nonsensical Reddit responses [0].

Depending on the type of input it can lean more toward web summary or LLM answer.

So I imagine that it can just grab the description of the "car wash" test from web results and then get it right because of that.

[0] https://www.bbc.com/news/articles/cd11gzejgz4o
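
Roughly like this (a hypothetical sketch of the retrieve-then-summarize flow described above, not Google's actual pipeline; search_web and llm_complete are placeholder functions):

    # Sketch of the cached "LLM summary of web results" flow described above.
    # search_web and llm_complete are hypothetical placeholders, not Google's pipeline.
    _CACHE = {}

    def answer_with_search(query, search_web, llm_complete):
        if query in _CACHE:              # answers are often served from a cache
            return _CACHE[query]
        snippets = search_web(query)     # top web results (Reddit threads, blogs, ...)
        prompt = (
            "Answer the question using only these search results.\n"
            f"Question: {query}\n"
            "Results:\n" + "\n".join(snippets)
        )
        answer = llm_complete(prompt)    # the summary is grounded in retrieved text
        _CACHE[query] = answer
        return answer

Which is why a joke Reddit snippet can derail the answer, and why a viral writeup of the test lets it "get it right".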

PaulHoule 2 hours ago||||
Presumably it did an actual search and summarized the results and neither answered "off the cuff" by following gradients to reproduce the text it was trained on nor by following gradients to reproduce the "logic" of reasoning. [1]

[1] e.g. trained on traces of a reasoning process

popalchemist 2 hours ago|||
It's almost certainly just RAG powered by their crawler.
esafak 1 hour ago||
Proving that RAG still matters.
silasb 1 hour ago|||
Gemini was a good laugh as well:

    Silas: I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
    Gemini:
    ….
    That is a classic “efficiency vs. logic” dilemma.
    Strictly speaking, you should drive. Here is the breakdown of why driving wins this specific round, despite the short distance:
    ...
    * The “Post-Wash” Logic: If you walk there, you’ll eventually have to walk back, get the car, and drive it there anyway. You’re essentially suggesting a pre-wash stroll.
    When should you walk?
    …
    3. You’ve decided the car is too dirty to be seen in public and you’re going to buy a tarp to cover your shame.
irishcoffee 1 hour ago||
A few years ago if you asked an LLM what the date was, it would tell you the date it was trained, weeks-to-months earlier. Now it gives the correct date.

What you've proven is that LLMs leverage web search, which I think we've known about for a while.

netsharc 1 hour ago||
Gemini now "knows the time", I was using it in December and it was still lost about dates/intervals...
irishcoffee 1 hour ago||
Yeah, the chat log they saved had the correct date. What's your point?
layer8 2 hours ago||
> This is a trivial question. There's one correct answer and the reasoning to get there takes one step: the car needs to be at the car wash, so you drive.

I don’t think it’s that easy. An intelligent mind will wonder why the question is being asked, whether they misunderstood the question, or whether the asker misspoke, or some other missing context. So the correct answer is neither “walk” nor “drive”, but “Wat?” or “I’m not sure I understand the question, can you rephrase?”, or “Is the vehicle you would drive the same as the car that you want to wash?”, or “Where is your car currently located?”, and so on.

BrenBarn 1 hour ago||
The reason that those questions are asked, though, is that the answer to the actual question is obvious, so a human will start to wonder if it's some kind of trick.
layer8 1 hour ago||
The answer wasn’t obvious to me, it was more like “parse error”.
kayge 1 hour ago|||
Yep, just a little more context and all/most of the models would do much better. And sure, most average+ intelligence adults whose first language is English (probably) don't need this, but they're not the target audience for the instructions :)

"The 'car wash' is a building I need to drive through."

or

"The 'car wash' is a bottle of cleaning fluid that I left at the end of my driveway."

https://i5.walmartimages.com/seo/Rain-x-Foaming-Car-Wash-Con...

nozzlegear 1 hour ago|||
I think most people would say "drive?" and wonder when the punchline is coming, but (IMO) I don't think they'd start asking for clarification right away.
Night_Thastus 1 hour ago|||
I agree. If the LLM were truly an intelligence, it would be able to ask about this nonsense question. It would be able to ask "Why is walking even an option? Can you please explain how you imagine that would work? Do you mean hand-washing the car at home, instead?" (etc, etc)

Real people can ask for clarification when things are ambiguous or confusing. Once something is clarified, they can work that into their understanding of how someone communicates about a given topic. An LLM can't.

felix089 2 hours ago|||
That's a fair point, but even if you saw it as a riddle, which I don't really think it is, and you had to answer one or the other, I'd still assume it's most logical to choose drive, isn't it?
layer8 1 hour ago||
I don’t agree that the question as written would qualify as a riddle. If anything, the riddle is what the intention of the asker is. One can always ask stupid questions with an artificially limited set of answering options; that doesn’t mean it makes sense.
felix089 1 hour ago||
I don't think it qualifies as a stupid question either; it does make sense.
buu700 1 hour ago||
Same energy: https://youtu.be/8ERyTfm1Dxw
tantalor 2 hours ago||
The human baseline seems flawed.

1. There is no initial screening that would filter out garbage responses. For example, users who just pick the first answer.

2. They don't ask for reasoning/rationale.

slongfield 1 hour ago||
My favorite example of this was the Pew Research study: https://www.pewresearch.org/short-reads/2024/03/05/online-op...

They found that ~15% of US adults under 30 claim to have been trained to operate a nuclear submarine.

felix089 1 hour ago|||
RE 1, they actually do have a pre-screening of the participants in general, you can check how they do it in detail: https://www.rapidata.ai/
mwigdahl 1 hour ago|||
Lizardman's Constant is famously 4%. https://en.wikipedia.org/wiki/Slate_Star_Codex#Lizardman's_C...
andreasgl 2 hours ago||
I agree. I wonder what the human baseline is for ”what is 1 + 1” on Rapidata.
rapidata 1 hour ago||
We try a bit harder than that, my friend.
andai 1 hour ago||
Would be interesting to see Sonnet (4.6*). It's a fair bit smaller than Opus but scores pretty high on common sense, subjectively.

I'm also curious about Haiku, though I don't expect it to do great.

--

EDIT: Opus 4.6 Extended Reasoning

> Walk it over. 50 meters is barely a minute on foot, and you'll need to be right there at the car anyway to guide it through or dry it off. Drive home after.

Weird since the author says it succeeded for them on 10/10 runs. I'm using it in the app, with memory enabled. Maybe the hidden pre-prompts from the app are messing it up?

I tested Sonnet 4.5 first, which answered incorrectly... maybe the Claude app's memory system is auto-injecting it into the new context (that's how one of the memory systems works, it injects relevant fragments of previous chats invisibly into the prompt).

i.e. maybe Opus got the garbage response auto-injected from the memory feature, and it messed up its reasoning? That's the only thing I can think of...

--

EDIT 2: Disabled memories. Didn't help. But disabling the biographical information too gives:

>Opus 4.6 Extended Reasoning

>Drive it — the whole point is to get the car there!

--

EDIT 3: Yeah, re-enabling the bio or memories, both make it stupid. Sad! Would be interesting to see if other pre-prompts (e.g. random Wikipedia articles) have an effect on performance. I suspect some types of pre-prompts may actually boost it.

Ethee 1 hour ago||
I tested this with Opus the day 4.6 came out and it failed then, and it still fails now. There were a lot of jokes I've seen about some people getting a 'dumber' model, and while there's probably some grain of truth to that, I pay for their highest subscription tier, so at the very least I can tell you it's not a pay gate issue.
felix089 1 hour ago||
You mean Sonnet 4.6? I ran 9 Claude models including Haiku; swipe through the gallery in the link to see their responses.
andai 1 hour ago||
I don't see Sonnet 4.6 in the screenshots. I see the other Claude models though.

Edit: Found Haiku. Alas!

felix089 1 hour ago||
Yea, good catch, Sonnet 4.6 is not part of the test.
snowhale 32 minutes ago||
The failure pattern is interesting -- 'walk because it's only 50 meters and better for the environment' is almost certainly what shows up most in training data for similar prompts. So models are pattern-matching to socially desirable answers rather than the actual spatial logic (you need the car at the destination to wash it). Not really a reasoning failure, more a distribution shift: the training signal for 'short distance = walk' is way stronger than the edge cases where the destination requires the vehicle.
felix089 13 minutes ago|
Exactly, same pattern across almost every failure, except the Sonar models, which just go wild.
cadamsdotcom 1 hour ago||
This is a beautiful example of a little prompt engineering going a long way.

I asked Gemini and it got it wrong, then on a fresh chat I asked it again but this time asked it to use symbolic reasoning to decide.

And it got it!

The same applies to asking models to solve problems by scripting or writing code. Models won’t use techniques they know about unprompted, even when it’ll result in far better outcomes. Current models don’t realise when these methods are appropriate; you still have to guide them.

felix089 57 minutes ago|
Interesting, which Gemini model? And how did you ask for symbolic reasoning, just added it to the prompt?
hmokiguess 1 hour ago||
To me the only acceptable answer would be “what do you mean?” or “can you clarify?” if we were to take the question seriously to begin with. People don’t intentionally communicate with riddles and subliminal messages unless they have some hidden agenda.
piker 1 hour ago||
Thing is, it's not a riddle or a subliminal message. Everything needed to answer the question is contained therein.
hmokiguess 1 hour ago||
If you want to argue that, then you could also argue that everything needed to challenge the question’s motives and its validity is also contained therein.

This reminds me of people who answer with “Yes” when presented with options where both can be true but the expected outcome is to pick one. For example, the infamous: “Will you be paying with cash or credit sir?” then the humorous “Yes.”

felix089 1 hour ago|||
If you were forced to answer one or the other, which one would you pick? I think that's where the interesting dynamic comes from. Most humans would pick drive, as also seen in the human control, even if the share is lower than I thought it'd be.
hmokiguess 1 hour ago||
Sure, though then we’re in la la land. What’s a real life example of being forced to answer an absurd question other than riddles, games, etc? No longer a valid question through normal discourse at that point, and if context isn’t provided then I think the expected outcome still is to ask for clarification.
streetfighter64 1 hour ago||
How is that a "subliminal message"? It's just a simple example of common sense, which LLMs fail because they can't reason, not because they are "overthinking". If somebody asks, "What's 2+2?", they might be insulting you, but that doesn't mean the answer is anything other than 4.
hmokiguess 1 hour ago||
It’s common sense to ask a question in riddle format? What’s the goal of the person asking the question? To challenge the other person? In what way? See if they get the obvious? Asking for clarification isn’t valid?
streetfighter64 1 hour ago||
It's common sense to know that you need to have your car with you to wash it. Asking the question is a challenge of the obvious, yes. If you asked an AI "what's 2+2" and it said 3, would you argue that the question was a trick question?
hmokiguess 1 hour ago||
No. I would expect it to say 4 given that has an objective answer. For the other, without any context whatsoever, I would prefer the answer of clarifying. I would be okay if the way it asked for clarification came with:

“What do you mean walk or drive? I don’t understand the options given you would need your car at the car wash. Is there something else I should know?”

streetfighter64 1 hour ago||
"What do you mean two plus two? I don't understand the question given that it's basic math. Is there something else I should know?"
hmokiguess 1 hour ago||
I fail to see how these things are one and the same. I get the point you are making, I just don't agree with it.

2+2 is a complete expression, the other is grammatically correct but logically flawed. Where is the logical fallacy in 2+2?

padjo 1 hour ago||
That human baseline is wild. Either the Rapidata test is methodologically flawed, or the entire premise of the question is invalid and people are much stupider than even I, a famed misanthrope, think.
andai 1 hour ago||
Well, it is a trick question. The question itself implies that both options are valid, and that one is superior. So the brain pattern-matches to "short distance, not worth driving." (LLMs appear to be doing the same thing here!)

If you framed it as "hint: trick question", I expect the score would improve. Let's find out!

--

EDIT: As suspected! Adding "(Hint: trick question)" to the end of the prompt allows small, non-reasoning models to answer correctly. e.g.:

Prompt: I want to wash my car. The car wash is 50 meters away. Should I walk or drive? (Hint: trick question)

grok-4.1-non-reasoning (previously scored 0/10)

>Drive.

>Walking gets you to the car wash just fine—but leaves your dirty car 50 meters behind. Can't wash what isn't there!

--

EDIT 2: The hint doesn't help Haiku!

>Walk! 50 meters is only about a block away—driving would waste more fuel than it's worth for such a short trip. Plus, you're going to get wet washing the car anyway, so you might as well save the gas.

rapidata 1 hour ago||
We were surprised ourselves, but if you walk around and randomly ask people in the street, I think you would be surprised what you would find. It's a trick question.
tuhgdetzhh 2 hours ago||
The test is rigged because they used non-thinking models.
felix089 1 hour ago|
These are reasoning/thinking models.
tuhgdetzhh 1 hour ago||
Source?
nozzlegear 1 hour ago|
When this first came up on HN, I had commented that Opus 4.6 told me to drive there when I asked it the first time, but when I switched to "Incognito Mode," it told me to walk there.

I just repeated that test and it told me to drive both times, with an identical answer: "Drive. You need the car at the car wash."

felix089 1 hour ago|
I mean the n is only 10, so it could still be different for you
nozzlegear 24 minutes ago||
Definitely. I'm just interested in whether a user's... I don't know what they call them, system files (?) or personalization or whatever, might affect the answers here. Or if Incognito Mode introduces some weird variance in the answers. I'm just not interested enough to perform the test myself. =P