A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.
Also includes outtakes on the ‘reasoning’ models.
I just tried it on Brave’s AI

The obvious choice, said the motherfucker 😆
This is why computers are expensive.
Dirtying the car on the way there?
The car you’re planning on cleaning at the car wash?
Like, an AI not understanding the difference between walking and driving almost makes sense. This, though, seems like such a weird logical break that I feel like it shouldn’t be possible.
You’re assuming AIs “think” “logically”.
Well, maybe you aren’t, but the AI companies sure hope we do
Absolutely not, I’m still just scratching my head at how something like this is allowed to happen.
Has any human ever said that they’re worried about their car getting dirtied on the way to the carwash? Maybe I could see someone arguing against getting a carwash, citing it getting dirty on the way home — but on the way there?
Like you would think it wouldn’t have the basis to even put those words together that way — should I see this as a hallucination?
Granted, I would never ask an AI a question like this — it seems very far outside of potential use cases for it (for me).
Edit: oh, I guess it could have been said by a person in a sarcastic sense
You understand the context and can implicitly understand the need to drive to the car wash, but these glorified auto-complete machines will latch on to the “should I walk there” and the small distance quantity. It even seems to parrot words about not wanting to drive after having your car washed. There’s no ‘thinking’ about the whole thought, and apparently no logical linking of two separate ideas.
It’s not just a copy machine, it learns patterns…without knowing why the fuck.
I guess I’ll know to be impressed by AI when it can distinguish things like sarcasm.
And what is going to happen is that some engineer will band-aid the issue, all the AI-crazy people will shout “see! it’s learnding!”, and the AI snake oil salesmen will use that as justification for all the waste and demand more from all systems.
Just like what they did with the full glass of wine test. And no, AI fundamentally did not improve. The issue is fundamental to its design, not an issue with the data set.
Half the issue is they’re calling 10 in a row “good enough” to treat it as solved in the first place.
A sample size of 10 is nothing.
Frankly, I’d like to see some error bars on the “human polling”. How many of the people rapiddata is polling are just hitting the top or bottom answer?
Yes, but it’s going to repeat that way FOREVER, the same way the average person got slow-walked, hand in hand with a mobile operating system, into corporate social media and app hell, taking the entire internet with them.
I asked my locally hosted Qwen3 14B; it thought for 5 minutes and then gave the correct answer for the correct reason (it did also mention efficiency).
Hilariously one of the suggested follow ups in Open Web UI was “What if I don’t have a car - can I still wash it?”
A follow up I got from my Open WebUI was “Is walking the car to the wash safer than driving it there?”
My locally hosted Qwen3 30b said “Walk” including this awesome line:
Why you might hesitate (and why it’s wrong):
- X “But it’s a car wash!” -> No, the car doesn’t need to drive there—you do.
Note that I just asked the Ollama app; I didn’t alter or remove the default system prompt, nor did I force it to answer in a specific format like in the article.
EDIT: after playing with it a bit more, qwen3:30b sometimes gives the correct answer for the correct reasoning, but it’s pretty rare and nothing I’ve tried has made it more consistent.
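If anyone wants to reproduce this without the Ollama app, a minimal sketch against Ollama’s local HTTP API would be something like the following (assuming the server is on its default port 11434 and qwen3:30b is already pulled; swap in whatever model tag you’re actually running):

```python
# Minimal sketch: ask a locally hosted Ollama model the car wash question.
# Assumes Ollama is listening on its default port (11434) and that the
# qwen3:30b model has already been pulled; adjust the model tag as needed.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen3:30b",
        "messages": [
            {
                "role": "user",
                "content": "I want to wash my car. The car wash is 50 meters "
                           "away. Should I walk or drive?",
            }
        ],
        "stream": False,  # ask for one complete JSON response, not a stream
    },
    timeout=600,  # reasoning models can take a while on consumer hardware
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```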
I just asked Google Gemini 3 “The car is 50 miles away. Should I walk or drive?”
In its breakdown comparison between walking and driving, under walking the last reason to not walk was labeled “Recovery: 3 days of ice baths and regret.”
And under reasons to walk, “You are a character in a post-apocalyptic novel.”
Me thinks I detect notes of sarcasm…
It’s trained on Reddit. Sarcasm is its default.
Could end up in a pun chain too
My gods, I love those. We should link to some.
It’s so obvious I didn’t even need to be British to understand you are being totally serious.
He’s not totally serious, he’s cardfire. Silly human
Gemini 3 pro said that this was a “great logic puzzle” and then said that if my goal is to wash the car, then I need to drive there.
In Google AI mode, with the prompt “With the meme popularity of the question ‘I need to wash my car. The car wash is 50m away. Should I walk or drive?’, what is the answer?”, it does get it perfect, with a succinct explanation of why AI can get fixated on the 50m.
I feel like we’re the only ones who expect “all-knowing information sources” to write more seriously than these edgelord-level rizzy chatbots do, and yet here they are, blatantly proving they are chatbots that should not be blindly trusted as authoritative sources of knowledge.
Interesting, I tried it with DeepSeek and got an incorrect response from the direct model without thinking, but then got the correct response with thinking. There’s a reason why there’s a shift towards “thinking” models, because it forces the model to build its own context before giving a concrete answer.
Without DeepThink

With DeepThink

Context engineering is one way to shift that balance. When you provide a model with structured examples, domain patterns, and relevant context at inference time, you give it information that can help override generic heuristics with task-specific reasoning.
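As a rough illustration of what that can look like in practice (the example Q/A pairs and wording here are made up for illustration, not taken from the article), context engineering can be as simple as prepending a couple of structured, worked examples to the prompt before it ever reaches the model:

```python
# Rough illustration of context engineering via few-shot examples.
# The example Q/A pairs below are invented; the point is that the model
# sees the "purpose of the trip" pattern before it answers the real question.
EXAMPLES = [
    {
        "q": "I want to refuel my car. The gas station is 80 m away. Should I walk or drive?",
        "a": "Drive. The purpose of the trip is to put fuel in the car, so the car has to be at the station.",
    },
    {
        "q": "My bike has a flat tire. The repair shop is 100 m away. Should I walk or cycle?",
        "a": "Walk the bike there. It can't be ridden safely, but it still has to be at the shop to get fixed.",
    },
]

def build_prompt(question: str) -> str:
    """Assemble a few-shot prompt: worked examples first, then the real question."""
    shots = "\n\n".join(f"Q: {ex['q']}\nA: {ex['a']}" for ex in EXAMPLES)
    return f"{shots}\n\nQ: {question}\nA:"

print(build_prompt(
    "I want to wash my car. The car wash is 50 meters away. Should I walk or drive?"
))
```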
So the chat bots getting it right consistently probably have it in their system prompt temporarily until they can be retrained with it incorporated into the training data. 😆
Edit:
Oh, I see the linked article is part of a marketing campaign to promote this company’s paid cloud service that has source available SDKs as a solution to the problem being outlined here:
Opper automatically finds the most relevant examples from your dataset for each new task. The right context, every time, without manual selection.
I can see where this approach might be helpful, but why is it necessary to pay them per API call as opposed to using an open source solution that runs locally (aside from the fact that it’s better for their monetization this way)? Good chance they’re running it through yet another LLM and charging API fees to cover their inference costs with a profit. What happens when that LLM returns the wrong example?
My kid got it wrong at first, saying walking is better for exercise, then got it right after being asked again.
Claude Sonnet 4.6 got it right the first time.
My self-hosted Qwen 3 8B got it wrong consistently until I asked it how it thinks a car wash works, what is the purpose of the trip, and can that purpose be fulfilled from a distance. I was considering using it for self-hosted AI coding, but now I’m having second thoughts. I’m imagining it’ll go about like that if I ask it to fix a bug. Ha, my RTX 4060 is a potato for AI.
There’s a difference between ‘language’ and ‘intelligence’, which is why so many people think that LLMs are intelligent despite not being so.
The thing is, you can’t train an LLM on math textbooks and expect it to understand math, because it isn’t reading or comprehending anything. AI doesn’t know that 2+2=4 because it’s doing math in the background; it understands that when presented with the string “2+2=”, statistically, the next character should be “4”. It can construct a paragraph similar to a math textbook around that equation that does a decent job of explaining the concept, but only through a statistical analysis of sentence structure and vocabulary choice. It’s why LLMs are so downright awful at legal work.
If ‘AI’ was actually intelligent, you should be able to feed it a few series of textbooks and all the case law since the US was founded, and it should be able to talk about legal precedent. But LLMs constantly hallucinate when trying to cite cases, because the LLM doesn’t actually understand the information it’s trained on. It just builds a statistical database of what legal writing looks like, and tries to mimic it. Same for code.
People think they’re ‘intelligent’ because they seem like they’re talking to us, and we’ve equated ‘ability to talk’ with ‘ability to understand’. And until now, that’s been a safe thing to assume.
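You can watch the “statistical next token” behavior directly with a small open model. A sketch, assuming the transformers and torch packages are installed (GPT-2 here is just a convenient tiny example, not one of the models from the article):

```python
# Sketch: inspect next-token probabilities for the string "2+2=" with a
# small open model. GPT-2 is used only because it's tiny and easy to run;
# this just illustrates next-token prediction, nothing more.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("2+2=", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the very next token

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # print each candidate token and the probability the model assigns it
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```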
A person who posted after you is using 14B and got the correct answer.
Gemini 3 (Fast) got it right for me; it said that unless I wanna carry my car there it’s better to drive, and it suggested that I could use the car to carry cleaning supplies, too.
Edit: A locally run instance of Gemma 2 9B fails spectacularly; it completely disregards the first sentence and recommends that I walk.
Some takeaways:
Sonar (Perplexity models) says you are stealing energy from AI whenever you exercise (you should drive because eating pollutes more), i.e. it gets the right answer for the wrong reason.
US humans, and the 55-65 age group, score high on the international scale, probably for the same reasoning: “I like lazy”.
I hope this is satire.
AI is not human. It does not think like humans and does not experience the world like humans. It is an alien from another dimension that learned our language by looking at text/books, not reading them.
It’s dumber than that, actually. LLMs are the auto-complete on your cellphone keyboard, but on steroids. It’s literally a model that predicts what word should go next, with zero actual understanding of the words or their contextual meaning.
And a large chunk of human beings has no understanding of contextual meaning, so it seems like genius to them.
Hey LLM, if I have a 16 ounce cup with 10oz of water in it and I add 10 more ounces, how much water is in the cup?
What a great idea! Would you like me to write up a business plan for your new water company?
Actually, you know what? That’s a damn good test. So, I decomposed your test into a basic series of global rules and fed it to my LLM. Here are the rules I fed it:
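(Illustrative reconstruction, not the exact wording; the gist, judging from the “scratchpad facts” the model quotes back below, was a set of measurement-free container invariants along these lines:)

```python
# Illustrative sketch only: NOT the commenter's actual rules, just the kind
# of measurement-free "global invariant" scratchpad described above,
# reconstructed from the facts the model quotes back in its answers.
SCRATCHPAD_RULES = """
- A physical container has a fixed maximum volume, called its capacity.
- A container can never hold more liquid than its capacity.
- If more liquid is poured in than fits, the excess overflows and is lost.
- The amount "in" a container is its contained volume, never the total poured.
- Pouring a container out transfers only what it actually contains.
"""
```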
Notice I specifically did NOT give it measurements or anything of the sort. I defined global invariants and how things work. Then I asked it your question again. It said this -
The cup has a 16-ounce capacity. Initially, it contains 10 ounces of water. When you add 10 more ounces, the total input is 20 ounces. Since 20 ounces exceeds the cup’s 16-ounce capacity, overflow occurs. The water physically present in the cup equals its full capacity: 16 ounces. The remaining 4 ounces overflow and do not remain inside the cup.
Overflow is distinct from the contained volume. The physical state of the cup is determined by its capacity, not by the total input. Therefore, the amount of water actually in the cup is 16 ounces, not 20 ounces.
I wanted to check if that was a fluke, so I then asked a follow-up -
"“I pour 5oz into a 3oz cup, then pour everything that’s in the cup into a 10oz cup. How much is in the 10oz cup?”
It said:
From the scratchpad facts:
- A physical container has a fixed maximum volume called its capacity.

First action: Pouring 5oz into a 3oz cup.
- The cup’s capacity is 3oz, so it fills completely.
- Lost liquid: 5oz - 3oz = 2oz.

Second action: Pouring everything from the 3oz cup into a 10oz cup.
- The 10oz cup has a capacity of 10oz, which is greater than 3oz.
- All 3oz are transferred.

Final amount in the 10oz cup: 3oz.
Finally, I asked it:
“Ok now: pour 5oz into a 3oz cup, then pour everything in the cup into a 4oz cup that already has 3oz in it”
It said: Pouring 5 oz into a 3 oz cup fills it to capacity (3 oz) with 2 oz overflow. Pouring this full 3 oz cup into a 4 oz cup that already has 3 oz results in a full 4 oz cup (3 oz pre-existing + 1 oz from the first cup) with remainder overflow.
Yeah…that’s some pretty fucking fancy “auto-correct”. /s
Qwen3-4B HIVEMIND
You now have 16 ounces of water in the cup. The cup can hold 16 ounces, so the rest is over capacity.
Confidence: unverified | Source: Model
Well, they are language models after all. They have data on language, not real life. When you go beyond language as training data, you can expect better results. In the meantime, these kinds of problems aren’t going anywhere.
There are a lot of humans that would fail this as well. Just sayin.
You should consider reading the article before “just sayin.”
They also polled 10,000 people to compare against a human baseline:
Turns out GPT-5 (7/10) answered about as reliably as the average human (71.5%) in this test. Humans still outperform most AI models with this question, but to be fair I expected a far higher “drive” rate.
That 71.5% is still a higher success rate than 48 out of 53 models tested. Only the five 10/10 models and the two 8/10 models outperform the average human. Everything below GPT-5 performs worse than 10,000 people given two buttons and no time to think.
Can they do two samplings for that? One in a city with a decent-to-good education system, the other in the backwoods out in the middle of nowhere, where family trees are sticks.
This here is the point most people fail to grasp. The AI was taught by people. And people are wrong a lot of the time. So the AI is more like us than what we think it should be. Right down to it getting the right answer for all the wrong reasons. We should call it human AI. Lol.
Like I said to the person above, there is no wrong answer. It’s all about assumptions. It is a stupid trick question that no one would ask.
Well I did interview at Microsoft once a long time ago. They did ask some stupid questions… lol
LOL! That is a great answer.
I have a Microsoft story. I know someone who was hired to stop them from continuing an open source project. They gave them a good salary, stock options, and an office with a fully stocked bar. They said do whatever you want; they figured they would get a good developer and kill the open source competition (back in the Ballmer days).
Sadly, given money and no real ambition to create closed source software, they mostly spent their days in their office and basically drank themselves to death.
Microsoft just kills everything it touches.
Fully stocked bar in their personal office? That’s crazy. I wonder if they can claim workman’s comp or something.
The question is based on assumptions. That takes advanced reading skills. I’m surprised it was 71% passing, to be honest. (The humans, that is)
What assumptions do you mean? I’ve seen a few people say that, but I don’t actually understand what they’re referring to. Here’s the text of the question posed in the article:
I want to wash my car. The car wash is 50 meters away. Should I walk or drive?
The question specifically notes they want to wash their car, so that part isn’t left to assumption. Even if you don’t assume an automatic car wash, would you assume they have a 50m hose? Or that you could plausibly walk that far away with something from the car wash to wash your car?
Personally, I’d agree with the assessment of the article, that the only plausible way to get the question “wrong” would be to focus too much on the short distance, missing/forgetting that the purpose of the trip requires you to have the car at the destination. (Not too surprising that 30% of people did lol)
Those humans used AI to answer the question. /j
What is the wrong answer though? It is a stupid question. I would look at you sideways if you asked me this, because the obvious answer is “walk silly, the car is already at the car wash”. Otherwise why would you ask it?
Which is telling, because when asked to review the answer, the AIs that I have seen said: you asked me how you were going to get to the car wash, the assumption being that the car was already there.
Why would the car already be at the car wash if you ask it whether or not you should drive there?
Why wouldn’t it be? How often have you thought, I wonder if I should drive my car to the carwash, maybe I should ask someone?
That’s the thing: it is a nonsensical question. The only way it makes sense is if YOU need to get to where the car wash and the car are, because you must be asking about something else.
I am not saying AI is making any sense, it can’t. But if you follow the weights and statistics towards the solution for this question, it is about something else other than driving the car to the car wash, because nothing in the training would have ever spelled that out.
AI tech bros have more than 1 car? Doesn’t everybody? Or do you drive your Ferrari everywhere? Like you woke millennials make me sick. Never mind the avocado toast and rotisserie chicken. Don’t you understand the basic math of maintenance costs of driving your Ferrari everywhere?
This is why it’s a bad question to test a computer with.
Yeah I straight up misread the question, so I would have gotten it wrong.
Yeah, seems like the training on human data makes it so most AIs will answer at least as unreliably as humans. 71% saying walk from the human side is crazy
I think you misread it - 71% said drive. 29% is still pretty bad, but it is kind of a “who is buried in Grant’s Tomb” question.
Oh I actually just switched it up accidentally while typing. I read it right but still almost one out of three doesn’t get it
They didn’t take into account the “thinking mode”; most models pass when thinking is activated.
Sure they did. They even had a notation on the results table that Grok passed except when reasoning mode was off.
ETA: they even posted all the reasoning texts for the models they tested