
Posted by tosh 6 hours ago

Gemini 3 Deep Think (blog.google)
https://x.com/GoogleDeepMind/status/2021981510400709092

https://x.com/fchollet/status/2021983310541729894

522 points | 313 comments
lukebechtel 5 hours ago|
Arc-AGI-2: 84.6% (vs 68.8% for Opus 4.6)

Wow.

https://blog.google/innovation-and-ai/models-and-research/ge...

raincole 3 hours ago||
Even before this, Gemini 3 has always felt unbelievably 'general' to me. It can beat Balatro (ante 8) with a text description of the game alone[0]. Yeah, it's not an extremely difficult goal for humans, but considering:

1. It's an LLM, not something trained to play Balatro specifically

2. Most (probably >99.9%) players can't do that at the first attempt

3. I don't think there are many people who posted their Balatro playthroughs in text form online

I think it's a much stronger signal of its 'generalness' than ARC-AGI. By the way, Deepseek can't play Balatro at all.

[0]: https://balatrobench.com/

tl 31 minutes ago|||
Per BalatroBench, gemini-3-pro-preview makes it to round (not ante) 19.3 ± 6.8 on the lowest difficulty on the deck aimed at new players. Round 24 is ante 8's final round. Per BalatroBench, this includes giving the LLM a strategy guide, which first-time players do not have. Gemini isn't even emitting legal moves 100% of the time.
ankit219 30 minutes ago||||
Agreed. Gemini 3 Pro has always felt to me like it has a pretraining alpha, if you will, and many data points continue to support that. Even Flash, which was post-trained with different techniques than Pro, is good or equivalent at tasks that require post-training, occasionally even beating Pro (e.g. in APEX bench from Mercor, which is basically a tool-calling test; simplifying, Flash beats Pro). The score on ARC-AGI-2 is another data point in the same direction. Deep Think is sort of parallel test-time compute with some level of distilling and refinement from certain trajectories (guessing, based on my usage and understanding), same as GPT-5.2 Pro, and can extract more because of the pretraining datasets.

(I am sort of basing this on papers like the limits-of-RLVR work, and the pass@k vs pass@1 differences in RL post-training of models; this score just shows how "skilled" the base model was, or how strong its priors were. I apologize if this is not super clear; happy to expand on what I am thinking.)

silver_sun 2 hours ago||||
Google has a library of millions of scanned books from their Google Books project that started in 2004. I think we have reason to believe that there are more than a few books about effectively playing different traditional card games in there, and that an LLM trained with that dataset could generalize to understand how to play Balatro from a text description.

Nonetheless I still think it's impressive that we have LLMs that can just do this now.

mjamesaustin 1 hour ago|||
Winning in Balatro has very little to do with understanding how to play traditional poker. Yes, you do need a basic knowledge of different types of poker hands, but the strategy for succeeding in the game is almost entirely unrelated to poker strategy.
gilrain 1 hour ago|||
If it tried to play Balatro using knowledge of, e.g., poker, it would lose badly rather than win. Have you played?
gcr 1 hour ago||
I think I weakly disagree. Poker players have an intuitive sense of the statistics of various hand types showing up, for instance, and that can be a useful clue as to which build types are promising.
barnas2 1 hour ago||
>Poker players have an intuitive sense of the statistics of various hand types showing up, for instance, and that can be a useful clue as to which build types are promising.

Maybe in the early rounds, but deck fixing (e.g. Hanged Man, Immolate, Trading Card, DNA, etc) quickly changes that. Especially when pushing for "secret" hands like the 5 of a kind, flush 5, or flush house.

ebiester 2 hours ago||||
It's trained on YouTube data. It's going to get Roffle and DrSpectred at the very least.
winstonp 3 hours ago||||
DeepSeek hasn't been SotA in at least 12 calendar months, which might as well be a decade in LLM years
cachius 3 hours ago||
What about Kimi and GLM?
zozbot234 1 hour ago||
These are well behind the general state of the art (1yr or so), though they're arguably the best openly-available models.
tgrowazay 52 minutes ago||
According to the Artificial Analysis ranking, GLM-5 is at #4, after Claude Opus 4.5, GPT-5.2-xhigh and Claude Opus 4.6.
dudisubekti 2 hours ago||||
But... there's Deepseek v3.2 in your link (rank 7)
tehsauce 41 minutes ago||||
How does it do on gold stake?
littlestymaar 2 hours ago||||
> . I don't think there are many people who posted their Balatro playthroughs in text form online

There is *tons* of Balatro content on YouTube though, and there is absolutely zero doubt that Google is using YouTube content to train their models.

sdwr 2 hours ago||
Yeah, or just the steam text guides would be a huge advantage.

I really doubt it's playing completely blind

acid__ 2 hours ago||||
> Most (probably >99.9%) players can't do that at the first attempt

Eh, both myself and my partner did this. To be fair, we weren’t going in completely blind, and my partner hit a Legendary joker, but I think you might be slightly overstating the difficulty. I’m still impressed that Gemini did it.

nubg 5 hours ago|||
Weren't we barely scraping 1-10% on this with state-of-the-art models a year ago, and wasn't it considered the final boss, i.e. solve this and it's almost AGI-like?

I ask because I cannot distinguish all the benchmarks by heart.

modeless 3 hours ago|||
François Chollet, creator of ARC-AGI, has consistently said that solving the benchmark does not mean we have AGI. It has always been meant as a stepping stone to encourage progress in the correct direction rather than as an indicator of reaching the destination. That's why he is working on ARC-AGI-3 (to be released in a few weeks) and ARC-AGI-4.

His definition of reaching AGI, as I understand it, is when it becomes impossible to construct the next version of ARC-AGI because we can no longer find tasks that are feasible for normal humans but unsolved by AI.

beklein 2 hours ago|||
https://x.com/fchollet/status/2022036543582638517
joelthelion 2 hours ago||
Do Opus 4.6 or Gemini Deep Think really use test-time adaptation? How does it work in practice?
mapontosevenths 2 hours ago||||
> His definition of reaching AGI, as I understand it, is when it becomes impossible to construct the next version of ARC-AGI because we can no longer find tasks that are feasible for normal humans but unsolved by AI.

That is the best definition I've read yet. If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

That said, I'm reminded of the impossible voting tests they used to give Black people to prevent them from voting. We don't ask nearly so much proof from a human; we take their word for it. On the few occasions we did ask for proof, it inevitably led to horrific abuse.

Edit: The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.

estearum 2 hours ago|||
> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

This is not a good test.

A dog won't claim to be conscious but clearly is, despite you not being able to prove one way or the other.

GPT-3 will claim to be conscious and (probably) isn't, despite you not being able to prove one way or the other.

dullcrisp 1 hour ago||
An LLM will claim whatever you tell it to claim. (In fact this Hacker News comment is also conscious.) A dog won’t even claim to be a good boy.
WarmWash 2 hours ago||||
>because we can no longer find tasks that are feasible for normal humans but unsolved by AI.

"Answer "I don't know" if you don't know an answer to one of the questions"

mrandish 31 minutes ago||
I've been surprised how difficult it is for LLMs to simply answer "I don't know."

It also seems oddly difficult for them to 'right-size' the length and depth of their answers based on prior context. I either have to give it a fixed length limit or put up with exhaustive answers.

sva_ 2 hours ago||||
> Edit: The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.

I think being better at this particular benchmark does not imply they're 'smarter'.

woah 2 hours ago||||
> If something claims to be conscious and we can't prove it's not, we have no choice but to believe it.

Can you "prove" that GPT2 isn't concious?

mapontosevenths 1 hour ago||
If we equate self awareness with consciousness then yes. Several papers have now shown that SOTA models have self awareness of at least a limited sort. [0][1]

As far as I'm aware no one has ever proven that for GPT 2, but the methodology for testing it is available if you're interested.

[0]https://arxiv.org/pdf/2501.11120

[1]https://transformer-circuits.pub/2025/introspection/index.ht...

criddell 1 hour ago|||
> The average human tested scores 60%. So the machines are already smarter on an individual basis than the average human.

Maybe it's testing the wrong things then. Even those of us who are merely average can do lots of things that machines don't seem to be very good at.

I think ability to learn should be a core part of any AGI. Take a toddler who has never seen anybody doing laundry before and you can teach them in a few minutes how to fold a t-shirt. Where are the dumb machines that can be taught?

mapontosevenths 6 minutes ago||
Would you argue that people with long term memory issues are no longer conscious then?
hmmmmmmmmmmmmmm 3 hours ago|||
I don't think the creator believes ARC-3 can't be solved, but rather that it can't be solved "efficiently", and >$13 per task for ARC-2 is certainly not efficient.

But at this rate, the people who talk about the goalposts shifting even once we achieve AGI may end up correct, though I don't think this benchmark is particularly great either.

fishpham 4 hours ago||||
Yes, but benchmarks like this are often flawed because leading model labs frequently participate in 'benchmarkmaxxing', i.e. improvements on ARC-AGI-2 don't necessarily indicate similar improvements in other areas (though it does seem like this is a step-function increase in intelligence for the Gemini line of models)
layer8 4 hours ago|||
Isn’t the point of ARC that you can’t train against it? Or doesn’t it achieve that goal anymore somehow?
egeozcan 3 hours ago|||
How can you make sure of that? AFAIK, these SOTA models run exclusively on their developers' hardware, so any test, any benchmark, anything you do, leaks by definition. Considering the nature of us humans and the typical prisoner's dilemma, I don't see how they wouldn't focus on improving benchmarks even when it gets a bit... shady?

I say this as a person who really enjoys AI, by the way.

mrandish 37 minutes ago|||
> does leak per definition.

As a measure focused solely on fluid intelligence, learning novel tasks and test-time adaptability, ARC-AGI was specifically designed to be resistant to pre-training - for example, unlike many mathematical and programming test questions, ARC-AGI problems don't have first order patterns which can be learned to solve a different ARC-AGI problem.

The ARC non-profit foundation has private versions of their tests which are never released and only the ARC can administer. There are also public versions and semi-public sets for labs to do their own pre-tests. But a lab self-testing on ARC-AGI can be susceptible to leaks or benchmaxing, which is why only "ARC-AGI Certified" results using a secret problem set really matter. The 84.6% is certified and that's a pretty big deal.

IMHO, ARC-AGI is a unique test that's different than any other AI benchmark in a significant way. It's worth spending a few minutes learning about why: https://arcprize.org/arc-agi.

WarmWash 2 hours ago||||
Because the gains from spending time improving the model overall outweigh the gains from spending time individually training on benchmarks.

The pelican benchmark is a good example, because it's been representative of models' ability to generate SVGs, not just pelicans on bikes.

theywillnvrknw 4 hours ago||||
* that you weren't supposed to be able to
jstummbillig 4 hours ago||||
Could it also be that the models are just a lot better than a year ago?
bigbadfeline 2 hours ago||
> Could it also be that the models are just a lot better than a year ago?

No, the proof is in the pudding.

Since AI, we've had higher prices, higher deficits and a lower standard of living. Electricity, computers and everything else cost more. "Doing better" can only be justified by that real benchmark.

If Gemini 3 DT were better, we would have falling prices for electricity and everything else, at least until they got back to pre-2019 levels.

ctoth 2 hours ago|||
> If Gemini 3 DT was better we would have falling prices of electricity and everything else at least

Man, I've seen some maintenance folks down on the field before working on them goalposts but I'm pretty sure this is the first time I saw aliens from another Universe literally teleport in, grab the goalposts, and teleport out.

WarmWash 1 hour ago|||
You might call me crazy, but at least in 2024, consumers spent ~1% less of their income on expenses than in 2019[2], which suggests that 2024 was more affordable than 2019.

This is from the BLS consumer survey report released in December[1]

[1]https://www.bls.gov/news.release/cesan.nr0.htm

[2]https://www.bls.gov/opub/reports/consumer-expenditures/2019/

Prices are never going back to 2019 numbers though

gowld 1 hour ago||
That's an improper analysis.

First off, it's dollar-averaging every category, so it's not "% of income", which varies based on unit income.

Second, I could commit to spending my entire life with constant spending (optionally inflation-adjusted, optionally as a % of income) by adjusting the quality of goods and services I purchase. So the total spending % is not a measure of affordability.

WarmWash 42 minutes ago||
Almost everyone's lifestyle ratchets, so the handful who actually downgrade their living rather than increase spending would be tiny.

This is part of a wider trend too, where economic stats don't align with what people are saying. Which is most likely explained by the economic anomaly of the pandemic skewing people's perceptions.

XenophileJKO 4 hours ago||||
https://chatgpt.com/s/m_698e2077cfcc81919ffbbc3d7cccd7b3
aleph_minus_one 3 hours ago||
I don't understand what you want to tell us with this image.
fragmede 3 hours ago||
they're accusing GGP of moving the goalposts.
olalonde 4 hours ago|||
Would be cool to have a benchmark with actually unsolved math and science questions, although I suspect models are still quite a long way from that level.
gowld 1 hour ago|||
Does folding a protein count? How about increasing performance at Go?
verdverm 4 hours ago|||
Here's a good thread over 1+ month, as each model comes out

https://bsky.app/profile/pekka.bsky.social/post/3meokmizvt22...

tl;dr - Pekka says Arc-AGI-2 is now toast as a benchmark

Aperocky 4 hours ago||
If you look at the problem space it is easy to see why it's toast, maybe there's intelligence in there, but hardly general.
verdverm 4 hours ago|||
The best way I've seen this described is "spikey" intelligence: really good at some points, and those make the spikes.

Humans are the same way; we all have a unique spike pattern, interests and talents.

AIs are effectively the same spikes across instances, if simplified. I could argue self-driving vs chatbots vs world models vs game-playing might constitute enough variation. I would not say the same of Gemini vs Claude vs ... (instances); that's where I see "spikey clones".

Aperocky 4 hours ago||
You can get more spiky with AIs, whereas the human brain is more hard-wired.

So maybe we are forced to be more balanced and general, whereas AI doesn't have to be.

verdverm 4 hours ago||
I suspect the non-spikey part is the more interesting comparison

Why is it so easy for me to open the car door, get in, close the door, and buckle up? You can do this in the dark and without looking.

There are an infinite number of little things like this that you think zero about, that take near-zero energy, yet which are extremely hard for AI.

gowld 1 hour ago||
You are asking a robotics question, not an AI question. Robotics is more and less than AI. Boston Dynamics robots are getting quite near your benchmark.
tasuki 1 hour ago|||
> maybe there's intelligence in there, but hardly general.

Of course. Just as our human intelligence isn't general.

mNovak 3 hours ago|||
I'm excited for the big jump in ARC-AGI scores from recent models, but no one should think for a second this is some leap in "general intelligence".

I joke to myself that the G in ARC-AGI is "graphical". I think what's held back models on ARC-AGI is their terrible spatial reasoning, and I'm guessing that's what the recent models have cracked.

Looking forward to ARC-AGI 3, which focuses on trial and error and exploring a set of constraints via games.

causal 3 hours ago|||
Agreed. I love the elegance of ARC, but it always felt like a gotcha to give spatial reasoning challenges to token generators, and the fact that the token generators are somehow beating it anyway really says something.
throw310822 3 hours ago||||
The average ARC AGI 2 score for a single human is around 60%.

"100% of tasks have been solved by at least 2 humans (many by more) in under 2 attempts. The average test-taker score was 60%."

https://arcprize.org/arc-agi/2/

modeless 2 hours ago|||
Worth keeping in mind that in this case the test takers were random members of the general public. The score of e.g. people with bachelor's degrees in science and engineering would be significantly higher.
throw310822 2 hours ago||
Random members of the public = average human beings. I thought those were already classified as General Intelligences.
imiric 1 hour ago|||
What is the point of comparing performance of these tools to humans? Machines have been able to accomplish specific tasks better than humans since the industrial revolution. Yet we don't ascribe intelligence to a calculator.

None of these benchmarks prove these tools are intelligent, let alone generally intelligent. The hubris and grift are exhausting.

throw310822 1 hour ago|||
> Machines have been able to accomplish specific tasks...

Indeed, and the specific task machines are accomplishing now is intelligence. Not yet "better than human" (and certainly not better than every human) but getting closer.

imiric 51 minutes ago||
> Indeed, and the specific task machines are accomplishing now is intelligence.

How so? This sentence, like most of this field, is making baseless claims that are more aspirational than true.

Maybe it would help if we could first agree on a definition of "intelligence", yet we don't have a reliable way of measuring that in living beings either.

If the people building and hyping this technology had any sense of modesty, they would present it as what it actually is: a large pattern matching and generation machine. This doesn't mean that this can't be very useful, perhaps generally so, but it's a huge stretch and an insult to living beings to call this intelligence.

But there's a great deal of money to be made on this idea we've been chasing for decades now, so here we are.

warkdarrior 35 minutes ago||
> Maybe it would help if we could first agree on a definition of "intelligence", yet we don't have a reliable way of measuring that in living beings either.

How about this specific definition of intelligence?

   Solve any task provided as text or images.
AGI would be to achieve that faster than an average human.
guelo 1 hour ago|||
What's the point of denying or downplaying that we are seeing amazing and accelerating advancements in areas that many of us thought were impossible?
colordrops 3 hours ago|||
Wouldn't you deal with spatial reasoning by giving it access to a tool that structures the space in a way it can understand or just is a sub-model that can do spatial reasoning? These "general" models would serve as the frontal cortex while other models do specialized work. What is missing?
amelius 28 minutes ago|||
They should train more on sports commentary, perhaps that could give spatial reasoning a boost.
causal 3 hours ago|||
That's a bit like saying just give blind people cameras so they can see.
mnicky 5 hours ago|||
Well, a fair comparison would be with GPT-5.x Pro, which is the same class of model as Gemini Deep Think.
aeyes 4 hours ago|||
https://arcprize.org/leaderboard

$13.62 per task - so we need another 5-10 years for the price of running this to become reasonable?

But the real question is whether they just fit the model to the benchmark.

onlyrealcuzzo 3 hours ago|||
Why 5-10 years?

At current rates, the price per equivalent output is dropping by 99.9% every 5 years.

That's basically $0.01 in 5 years.

Does it really need to be that cheap to be worth it?

Keep in mind, $0.01 in 5 years is worth less than $0.01 today.

willis936 2 hours ago||
Wow that's incredible! Could you show your work?
onlyrealcuzzo 2 hours ago||
https://epoch.ai/data-insights/llm-inference-price-trends
golem14 1 hour ago||||
A grad student hour is probably more expensive…
elromulous 13 minutes ago||
In my experience, a grad student hour is treated as free :(
re-thc 3 hours ago||||
What’s reasonable? It’s less than minimum hourly wage in some countries.
willis936 2 hours ago||
Burned in seconds.
gowld 1 hour ago||
Getting the work done faster for the same money doesn't make the work more expensive.

You could slow down the inference to make the task take longer, if $/sec matters.

igravious 4 hours ago|||
That's not a long time in the grand scheme of things.
throwup238 4 hours ago||
Speak for yourself. Five years is a long time to wait for my plans of world domination.
tasuki 1 hour ago|||
This concerns me actually. With enough people (n>=2) wanting to achieve world domination, we have a problem.
throwup238 15 minutes ago|||
It’s not that I want to achieve world domination (imagine how much work that would be!), it’s just that it’s the inevitable path for AI, and I’d rather it be me than the next schmuck with a Claude Max subscription.
gowld 1 hour ago|||
n = 2 is Pinky and the Brain.
amelius 3 hours ago|||
Yes, you better hurry.
culi 2 hours ago|||
Yes but with a significant (logarithmic) increase in cost per task. The ARC-AGI site is less misleading and shows how GPT and Claude are not actually far behind

https://arcprize.org/leaderboard

saberience 4 hours ago|||
Arc-AGI (and Arc-AGI-2) is the most overhyped benchmark around though.

It's completely misnamed. It should be called useless visual puzzle benchmark 2.

Firstly, it's a visual puzzle, making it way easier for humans than for models trained on text. Secondly, it's not really that obvious or easy for humans to solve themselves!

So the idea that if an AI can solve "Arc-AGI" or "Arc-AGI-2" it's super smart, or even "AGI", is frankly ridiculous. It's a puzzle that means basically nothing, other than that the models can now solve "Arc-AGI".

CuriouslyC 4 hours ago||
The puzzles are calibrated for human solve rates, but otherwise I agree.
saberience 4 hours ago||
My two elderly parents cannot solve Arc-AGI puzzles, but can manage to navigate the physical world, their house, garden, make meals, clean the house, use the TV, etc.

I would say they do have "general intelligence", so whatever Arc-AGI is "solving" it's definitely not "AGI"

hmmmmmmmmmmmmmm 3 hours ago||
You are confusing fluid intelligence with crystallised intelligence.
casey2 3 hours ago||
I think you are the one making that confusion. Any robotic system in his parents' place would fail within a few hours.

There are more novel tasks in a day than ARC provides.

hmmmmmmmmmmmmmm 3 hours ago||
Children have great levels of fluid intelligence; that's how they're able to quickly learn to navigate a world that is still very new to them. Seniors with decreasing capacity increasingly rely on crystallised intelligence; that's why they can still perform tasks like driving a car but can fail at completely novel tasks, sometimes even using a smartphone if they have not used one before.
zeroonetwothree 2 hours ago||
It really depends on motivation. My 90 year old grandmother can use a smartphone just fine since she needs it to see pictures of her (great) grandkids.
karmasimida 5 hours ago||
It is over
baal80spam 5 hours ago||
I for one welcome our new AI overlords.
logicprog 3 hours ago||
Is it me or is the rate of model release is accelerating to an absurd degree? Today we have Gemini 3 Deep Think and GPT 5.3 Codex Spark. Yesterday we had GLM5 and MiniMax M2.5. Five days before that we had Opus 4.6 and GPT 5.3. Then maybe two weeks I think before that we had Kimi K2.5.
i5heu 3 hours ago||
I think it is because of Chinese New Year. The Chinese labs like to publish their models around Chinese New Year, and the US labs do not want to let a DeepSeek R1 (20 January 2025) impact event happen again, so I guess they publish models that are more capable than what they imagine the Chinese labs are yet capable of producing.
woah 1 hour ago|||
Singularity or just Chinese New Year?
r2vcap 54 minutes ago|||
Please use the term “Lunar New Year” instead of “Chinese New Year,” as the lunar calendar is a respected tradition in many Asian countries. For example, both California and New York use the term “Lunar New Year” in their legislation.
rfoo 30 minutes ago|||
For another example, Singapore, one of the "many Asian countries" you mentioned, lists "Chinese New Year" as the official name on government websites. [0] Also note that neither California nor New York is located in Asia.

And don't get me started on "Lunar New Year? What Lunar New Year? Islamic Lunar New Year? Jewish Lunar New Year? CHINESE Lunar New Year?".

[0] https://www.mom.gov.sg/employment-practices/public-holidays

jfengel 8 minutes ago||||
"Lunar New Year" is perhaps over-general, since there are non-Asian lunar calendars, such as the Hebrew and Islamic calendars.

That said, "Lunar New Year" is probably as good a compromise as any, since we have other names for the Hebrew and Islamic New Years.

zzrush 29 minutes ago||||
I didn't expect language policing to have reached such a level. This is specifically related to China and DeepSeek, which celebrates Chinese New Year. Do you demand all Chinese people say happy Lunar New Year to each other?
phainopepla2 39 minutes ago||||
"Happy Holidays" comes to the diaspora
FartyMcFarter 25 minutes ago||
Happy Lunar Holidays to you!
0x3f 17 minutes ago||||
But they're Chinese companies specifically, in this case
saubeidl 16 minutes ago|||
Where do all of those Asian countries have that tradition from?

Have you ever had a Polish Sausage? Did it make you Polish?

aliston 3 hours ago|||
I'm having trouble just keeping track of all these different types of models.

Is "Gemini 3 Deep Think" even technically a model? From what I've gathered, it is built on top of Gemini 3 Pro and appears to add specific thinking capabilities, more akin to adding subagents than a truly new foundational model like Opus 4.6.

Also, I don't understand the comments about Google being behind in agentic workflows. I know that the typical use of, say, Claude Code feels agentic, but also a lot of folks are using separate agent harnesses like OpenClaw anyway. You could just as easily plug Gemini 3 Pro into OpenClaw as you can Opus, right?

Can someone help me understand these distinctions? Very confused, especially regarding the agent terminology. Much appreciated!

logicprog 2 hours ago|||
> Also, I don't understand the comments about Google being behind in agentic workflows.

It has to do with how the model is RL'd. It's not that Gemini can't be used with various agentic harnesses, like OpenCode or OpenClaw or theoretically even Claude Code. It's just that the model is trained less effectively to work with those harnesses, so it produces worse results.

re-thc 3 hours ago|||
There are hints this is a preview to Gemini 3.1.
baw-bag 14 minutes ago|||
Genuinely, I am not a troll. Don't they just do that and it can be backed up (what legislation?) with a test they did?

Almost straight away, if OpenAI says "Elite", Google will release "Extraordinary" and Musk will post "Almost AGI, probably about this time next year".

That was about 18-24 months ago when I was trying to make sense of the offerings.

Is it really new?

rogerkirkness 3 hours ago|||
Fast takeoff.
redox99 3 hours ago|||
There's more compute now than before.
bpodgursky 3 hours ago|||
Anthropic took the day off to do a $30B raise at a $380B valuation.
IhateAI 3 hours ago||
Most ridiculous valuation in the history of markets. Can't wait to watch these companies crash and burn when people give up on the slot machine.
andxor 1 hour ago|||
As usual don't take financial advice from HN folks!
kgwgk 3 hours ago||||
WeWork almost IPO'd at $50bn. It was also a nice crash and burn.
jascha_eng 2 hours ago|||
Why? They had a $10+ billion ARR run rate in 2025, tripled from 2024. I mean, 30x is a lot, but also not insane at that growth rate, right?
gokhan 1 hour ago||
It's a 13-day-old account with the handle "IhateAI".
brokencode 3 hours ago||
They are using the current models to help develop even smarter models. Each generation of model can help even more for the next generation.

I don’t think it’s hyperbolic to say that we may be only a single digit number of years away from the singularity.

lm28469 3 hours ago|||
I must be holding these things wrong, because I'm not seeing any of these godlike superpowers everyone seems to enjoy.
brokencode 2 hours ago||
Who said they’re godlike today?

And yes, you are probably using them wrong if you don’t find them useful or don’t see the rapid improvement.

lm28469 2 hours ago||
Let's come back in 12 months and discuss your singularity then. Meanwhile, I spent like $30 on a few models as a test yesterday, and none of them could tell me why my goroutine system was failing, even though it was painfully obvious (I purposefully added one too many wg.Done calls). Gemini, Codex, MiniMax 2.5: they all shat the bed on a very obvious problem, but I am to believe they're 98% conscious and better at logic and math than 99% of the population.

Every new model release, neckbeards come out of their basements to tell us the singularity will be here in two more weeks

BeetleB 1 hour ago|||
On the flip side, twice I put about 800K tokens of code into Gemini and asked it to find why my code was misbehaving, and it found it.

The logic related to the bug wasn't all contained in one file, but across several files.

This was Gemini 2.5 Pro. A whole generation old.

brokencode 2 hours ago||||
You are fighting straw men here. Any further discussion would be pointless.
lm28469 1 hour ago||
Of course: n-1 wasn't good enough, but n+1 will be the singularity, just two more weeks my dudes, two more weeks... rinse and repeat ad infinitum
brokencode 1 hour ago||
Like I said, pointless strawmanning.

You’ve once again made up a claim of “two more weeks” to argue against even though it’s not something anybody here has claimed.

If you feel the need to make an argument against claims that exist only in your head, maybe you can also keep the argument only in your head too?

woah 1 hour ago||||
Post the file here
logicprog 2 hours ago||||
Meanwhile I've been using Kimi K2T and K2.5 to work in Go with a fair amount of concurrency, and they've been able to write concurrent Go code and debug goroutine issues equal to, and much more complex than, your issue, involving race conditions and more, just fine.

Projects:

https://github.com/alexispurslane/oxen

https://github.com/alexispurslane/org-lsp

(Note that org-lsp has a much improved version of the same indexer as oxen; the first was purely my design, the second I decided to listen to K2.5 more and it found a bunch of potential race conditions and fixed them)

shrug

Izikiel43 1 hour ago|||
Out of curiosity, did you give them a test to validate the code?

I had a test failing because I introduced a silly comparison bug (> instead of <), and Claude 4.6 Opus figured out that the problem wasn't the test but the code, and fixed the bug (which I had missed).

lm28469 1 hour ago||
There was a test, and a very useful Go error that literally explained what was wrong. The models tried implementing a solution, failed, and when I pointed out the error, most of them just rolled back the "solution".
Izikiel43 47 minutes ago||
Ok, thanks for the info
sekai 1 hour ago|||
> I don’t think it’s hyperbolic to say that we may be only a single digit number of years away from the singularity.

We're back to singularity hype, but let's be real: benchmark gains are meaningless in the real world when the primary focus has shifted to gaming the metrics

brokencode 1 hour ago||
Ok, here I am living in the real world finding these models have advanced incredibly over the past year for coding.

Benchmaxxing exists, but that’s not the only data point. It’s pretty clear that models are improving quickly in many domains in real world usage.

xnx 5 hours ago||
Google is absolutely running away with it. The greatest trick they ever pulled was letting people think they were behind.
wiseowise 3 hours ago||
Their models might be impressive, but their products absolutely suck donkey balls. I gave Gemini web/CLI two months and ran back to ChatGPT. Seriously, it would just COMPLETELY forget context mid-dialog. When asked about improving air quality, it just gave me a list of (mediocre) air purifiers without asking for any context whatsoever, and I can list thousands of conversations like that. Shopping or comparing options is just nonexistent. It uses Russian propaganda sources for answers and switches to Chinese mid-sentence (!) while explaining some generic Python functionality. It's an embarrassment, and I don't know how they justify the 20-euro price tag on it.
mavamaarten 2 hours ago|||
I agree. On top of that, in true Google style, basic things just don't work.

Any time I upload an attachment, it just fails with something vague like "couldn't process file", whether that's a simple .md or .txt with fewer than 100 lines, or a PDF. I tried making a Gem today; it just wouldn't let me save it, with some vague error too.

I also tried having it read and write stuff to "my stuff" and Google drive. But it would consistently write but not be able to read from it again. Or would read one file from Google drive and ignore everything else.

Their models are seriously impressive. But as usual Google sucks at making them work well in real products.

davoneus 1 hour ago||
I don't find that at all. At work, we've no access to the API, so we have to force-feed a dozen (or more) documents, code, and instruction prompts through the web upload interface. The only failures I've ever had in well over 300 sessions were due to connectivity issues, not interface failures.

Context window blowouts? All the time, but never document upload failures.

sequin 1 hour ago||||
How can the models be impressive if they switch to Chinese mid-sentence? I've observed those bizarre bugs too. Even GPT-3 didn't have those. Maybe GPT-2 did. It's actually impressive that they managed to botch it so badly.

Google is great at some things, but this isn't it.

chermanowicz 2 hours ago||||
It's so capable at some things, and garbage at others. I uploaded a photo of some words for a spelling bee and asked it to quiz my kid on the words. The first word it asked wasn't on the list. After multiple attempts, I got it to ask only the words in the uploaded pic, but then it would get the spellings wrong in the Q&A. I gave up.
gokhan 1 hour ago||||
Agreed on the product. I can't make Gemini read my emails in Gmail. One day it says it doesn't have access, the next day it says "Query unsuccessful". Claude Desktop has no problem reaching Gmail, on the other hand :)
kilroy123 2 hours ago||||
Sadly true.

It is also one of the worst models to have a sort of ongoing conversation with.

HardCodedBias 3 hours ago|||
Their models are absolutely not impressive.

Not a single person is using it for coding (outside of Google itself).

Maybe some people on a very generous free plan.

Their model is a fine mid 2025 model, backed by enormous compute resources and an army of GDM engineers to help the “researchers” keep the model on task as it traverses the “tree of thoughts”.

But that isn’t “the model” that’s an old model backed by massive money.

Ozzie_osman 3 hours ago|||
Peacetime Google is not like wartime Google.

Peacetime Google is slow, bumbling, bureaucratic. Wartime Google gets shit done.

nutjob2 3 hours ago|||
OpenAI is the best thing that happened to Google apparently.
koolala 19 minutes ago|||
Next they compete on ads...
RationPhantoms 2 hours ago||||
Competition always is. I think there was a real fear that their core product was going to be replaced. They're already cannibalizing it internally so it was THE wake up call.
taurath 39 minutes ago|||
Just not search. The search product has become pretty much useless over the past 3 years, and the AI answers often only get back to the level of 5 years ago. This creates a sense that things are better, but really it's just become impossible to get reliable information from an avenue that used to work very well.

I don't think this is intentional, but I think they stopped fighting SEO entirely to focus on AI. Recipes are the best example: completely gutted, with almost all recipe sites (and therefore the entire search page) run by the same company. I didn't realize how utterly consolidated huge portions of information on the internet were until every recipe site simultaneously implemented the same anti-adblock about 3 months ago.

lern_too_spel 2 hours ago|||
Wartime Google gave us Google+. Wartime Google is still bumbling, and despite OpenAI's numerous missteps, I don't think it has to worry about Google hurting its business yet.
kenjackson 3 hours ago|||
But wait two hours for what OpenAI has! I love the competition, and how someone just a few days ago was telling me that ARC-AGI-2 was proof that LLMs can't reason. The goalposts will shift again. I feel like most of human endeavor will soon be about trying to continuously show that AIs don't have AGI.
kilpikaarna 2 hours ago|||
> I feel like most of human endeavor will soon be just about trying to continuously show that AI's don't have AGI.

I think you overestimate how much your average person-on-the-street cares about LLM benchmarks. They already treat ChatGPT or whichever as generally intelligent (including to their own detriment), are frustrated about their social media feeds filling up with slop and, maybe, if they're white-collar, worry about their jobs disappearing due to AI. Apart from a tiny minority in some specific field, people already know themselves to be less intelligent along any measurable axis than someone somewhere.

7777332215 3 hours ago||||
Soon they can drop the bioweapon to welcome our replacement.
nutjob2 3 hours ago|||
"AGI" doesn't mean anything concrete, so it's all a bunch of non-sequiturs. Your goalposts don't exist.

Anyone with any sense is interested in how well these tools work and how they can be harnessed, not some imaginary milestone that is not defined and cannot be measured.

kenjackson 3 hours ago||
I agree. I think the emergence of LLMs has shown that AGI really has no teeth. For decades the Turing test was viewed as the gold standard, but it's clear that there doesn't appear to be any good metric.
amunozo 5 hours ago|||
Those black Nazis in the first image model were a cause of insider trading.
naasking 3 hours ago|||
Google is still behind the largest models I'd say, in real world utility. Gemini 3 Pro still has many issues.
Razengan 3 hours ago|||
Gemini's UX (and of course privacy cred as with anything Google) is the worst of all the AI apps. In the eyes of the Common Man, it's UI that will win out, and ChatGPT's is still the best.
xnx 3 hours ago|||
Google privacy cred is ... excellent? The worst data breach I know of them having was a flaw that allowed access to names and emails of 500k users.
laurex 2 hours ago|||
If you consider "privacy" to be 'a giant corporation tracks every bit of possible information about you and everyone else'?
xnx 1 hour ago||
OpenAI is running ads. Do you think they'll track less?
bitpush 2 hours ago||||
Link? Are you conflating with "500k Gmail accounts leaked [by a third party]" with Gmail having a breach?

Afaik, Google has had no breaches ever.

xnx 1 hour ago|||
https://en.wikipedia.org/wiki/2018_Google_data_breach
Razengan 2 hours ago|||
Google is the breach.
Razengan 3 hours ago|||
They don't even let you have multiple chats if you disable their "App Activity" or whatever (wtf is with that ass naming? they don't even have a "Privacy" section in their settings the last time I checked)

And when I swap back into the Gemini app on my iPhone after a minute or so, the chat disappears. And other weird, passive-aggressive, take-my-toys-away behavior if you don't bare your body and soul to Googlezebub.

ChatGPT and Grok work so much better without accounts or with high privacy settings.

alexpotato 3 hours ago||||
> Gemini's UX ... is the worst of all the AI apps

Been using Gemini + OpenCode for the past couple weeks.

Suddenly, I get a "you need a Gemini Access Code license" error but when you go to the project page there is no mention of this or how to get the license.

You really feel the "We're the phone company and we don't care. Why? Because we don't have to." [0] when you use these Google products.

PS for those that don't get the reference: US phone companies in the 1970s had a monopoly on local and long distance phone service. Similar to Google for search/ads (really a "near" monopoly but close enough).

0 - https://vimeo.com/355556831

ainch 41 minutes ago||||
I find Gemini's web page much snappier to use than ChatGPT - I've largely swapped to it for most things except more agentic tasks.
jonathanstrange 3 hours ago||||
You mean AI Studio or something like that, right? Because I can't see a problem with Google's standard chat interface. All other AI offerings are confusing both regarding their intended use and their UX, though, I have to concur with that.
ergonaught 3 hours ago|||
The lack of "projects" alone makes their chat interface really unpleasant compared to ChatGPT and Claude.
xnx 3 hours ago||||
AI Studio is also significantly improved as of yesterday.
wiseowise 3 hours ago|||
No projects, it completely forgets context mid-dialog, mediocre responses even with thinking, research got kneecapped somehow and is completely useless now, it uses Russian propaganda videos as search material (what's wrong with you, Google?), it's janky on mobile, and it consumes GIGABYTES of RAM on web (seriously, what the fuck?). I left a couple of tabs open overnight and my Mac was almost completely frozen because 10 tabs consumed 8 GB of RAM doing nothing. It's a complete joke.
uxhoiuewfhhiu 1 hour ago|||
Gemini is completely unusable in VS Code. It's rated 2/5 stars, pathetic: https://marketplace.visualstudio.com/items?itemName=Google.g...

Requests regularly time out, the whole window freezes, it gets stuck in schizophrenic loops, edits cannot be reverted and more.

It doesn't even come close to Claude or ChatGPT.

dfdsf2 5 hours ago||
Trick? Lol, not a chance. Alphabet is a pure-play tech firm that has to produce products to make the tech accessible. They really lack in the latter, and it's visible when you see the interactions of their VPs. Luckily for them, if you build enough of a lead with the tech, you get many chances to sort out the product stuff.
dakolli 4 hours ago||
You sound like Russ Hanneman from SV
s-kymon 4 hours ago||
It's not about how much you earn. It's about what you're worth.
rob-wagner 50 minutes ago||
I've been using Gemini 3 Pro on a historical document archiving project for an old club. One of the guys had been scanning old handwritten minute books in German (1885 through 1974) that were challenging to read. Anyway, I was getting decent results on a first pass with 50-page chunks but ended up doing 1 page at a time (accuracy probably 95%). For each page, I submit it for a transcription pass followed by a translation of the returned transcription. About 2,370 pages, and I'm sitting at about $50 in Gemini API billing. The output will need manual review, but the time savings are impressive.
sega_sai 10 minutes ago||
I do like Google models (and I pay for them), but the lack of a competitive agent is a major flaw in Google's offering. It is simply not good enough compared to Claude Code. I wish they'd put some effort there, as I don't want to pay for subscriptions to both Google and Anthropic.
sigmar 5 hours ago||
Here are the methodologies for all the benchmarks: https://storage.googleapis.com/deepmind-media/gemini/gemini_...

The arc-agi-2 score (84.6%) is from the semi-private eval set. If gemini-3-deepthink gets above 85% on the private eval set, it will be considered "solved"

>Submit a solution which scores 85% on the ARC-AGI-2 private evaluation set and win $700K. https://arcprize.org/guide#overview

gs17 5 hours ago||
Interestingly, the title of that PDF calls it "Gemini 3.1 Pro". Guess that's dropping soon.
sigmar 5 hours ago|||
I looked at the file name but not the document title (specifically because I was wondering if this is 3.1). Good spot.

edit: they just removed the reference to "3.1" from the pdf

josalhor 4 hours ago||
I think this is 3.1 (3.0 Pro with the RL improvements from 3.0 Flash). But they probably decided to market it as Deep Think, because why not charge more for it.
WarmWash 3 hours ago||
The Deep Think moniker is for parallel compute models though, not long CoT like pro models.

It's possible though that deep think 3 is running 3.1 models under the hood.

staticman2 4 hours ago||||
That's odd considering 3.0 is still labeled a "preview" release.
ainch 39 minutes ago|||
I think it'll be 3.1 by the time it's labelled GA - they said after 3.0 launch that they figured out new RL methods for Flash that the Pro model hasn't benefitted from.
WarmWash 4 hours ago|||
The rumor was that 3.1 was today's drop
losvedir 4 hours ago||
Where are these rumors floating around?
beauzero 3 hours ago||
One of many https://x.com/synthwavedd/status/2021983382314660075
riku_iki 4 hours ago||
> If gemini-3-deepthink gets above 85% on the private eval set, it will be considered "solved"

They never will on the private set, because that would mean it's been leaked to Google.

Scene_Cast2 3 hours ago||
It's a shame that it's not on OpenRouter. I hate platform lock-in, but the top-tier "deep think" models have been increasingly requiring the use of their own platform.
raybb 3 hours ago|
OpenRouter is pretty great, but I think litellm does a very good job, and it's not a platform middleman, just a Python library. That being said, I haven't tried it with the deep think models.

https://docs.litellm.ai/docs/

imiric 1 hour ago||
Part of OpenRouter's appeal to me is precisely that it is a middle man. I don't want to create accounts on every provider, and juggle all the API keys myself. I suppose this increases my exposure, but I trust all these providers and proxies the same (i.e. not at all), so I'm careful about the data I give them to begin with.
octoberfranklin 23 minutes ago||
Unfortunately that's ending with mandatory-BYOK from the model vendors. They're starting to require that you BYOK to force you through their arbitrary+capricious onboarding process.
simianwords 5 hours ago||
OT but my intuition says that there’s a spectrum

- non-thinking models

- thinking models

- best-of-N models like Deep Think and GPT Pro

Each one is of a certain computational complexity. Simplifying a bit, I think they map to linear, quadratic, and n^3, respectively.

I think there's a certain class of problems that can't be solved without thinking, because it necessarily involves writing to a scratchpad. And the same goes for best-of-N, which involves exploring.

Two open questions

1) What's the next level here; is there a 4th option?

2) Can a sufficiently large non-thinking model perform the same as a smaller thinking one?

futureshock 2 hours ago||
I think step 4 is the agent swarm. Manager model gets the prompt and spins up a swarm of looping subagents, maybe assigns them different approaches or subtasks, then reviews results, refines the context files and redeploys the swarm on a loop till the problem is solved or your credit card is declined.
simianwords 2 hours ago||
i think this is the right answer

edit: i don't know how this is meaningfully different from 3

NitpickLawyer 5 hours ago|||
> best-of-N models like Deep Think and GPT Pro

Yeah, these are made possible largely by better use of high context lengths. You also need a step that gathers all N outputs and selects the best ideas/parts to compile the final output. Goog has been SotA at useful long context for a while now (since 2.5, I'd say). Many others have shipped "1M context", but their usefulness past 100k-200k is iffy.

What's even more interesting than maj@n or best-of-n is pass@n. For a lot of applications you can frame the question and search space such that pass@n is your success rate. Think security exploit finding, or optimisation problems with quick checks (better algos, kernels, infra routing, etc). It doesn't matter how good your pass@1 or avg@n is; all you care about is that you find more as you spend more time. Literally throwing money at the problem.

mnicky 5 hours ago||
> can a sufficiently large non thinking model perform the same as a smaller thinking?

Models from Anthropic have always been excellent at this. See e.g. https://imgur.com/a/EwW9H6q (top-left Opus 4.6 is without thinking).

simianwords 5 hours ago||
It's interesting that Opus 4.6 added a parameter to make it think extra hard.
Decabytes 2 hours ago||
Gemini has always felt like someone who is book smart to me. It knows a lot of things, but if you ask it to do anything off-script, it completely falls apart.
dwringer 1 hour ago||
I strongly suspect there's a major component of this type of experience being that people develop a way of talking to a particular LLM that's very efficient and works well for them with it, but is in many respects non-transferable to rival models. For instance, in my experience, OpenAI models are remarkably worse than Google models in basically any criterion I could imagine; however, I've spent most of my time using the Google ones and it's only during this time that the differences became apparent and, over time, much more pronounced. I would not be surprised at all to learn that people who chose to primarily use Anthropic or OpenAI models during that time had an exactly analogous experience that convinced them their model was the best.
esafak 1 hour ago||
I'd rather say it has a mind of its own; it does things its way. But I have not tested this model, so they might have improved its instruction following.
vkazanov 1 hour ago||
Well, one thing i know for sure: it reliably misplaces parentheses in lisps.
esafak 1 hour ago||
Clearly, the AI is trying to steer you towards the ML family of languages for its better type system, performance, and concurrency ;)
jetter 3 hours ago|
It is interesting that the video demo generates an .stl model. I run a lot of tests of LLMs generating OpenSCAD code (I recently launched https://modelrift.com, a text-to-CAD AI editor), and the Gemini 3 family LLMs actually give the best price-to-performance ratio right now. But they are very, VERY far from being able to spit out a complex OpenSCAD model in one shot. So I had to implement a full-fledged "screenshot vibe-coding" workflow where you draw arrows on a 3D model snapshot to explain to the LLM what is wrong with the geometry. Without a human in the loop, all top-tier LLMs hallucinate at debugging 3D geometry in agentic mode, and fail spectacularly.
mchusma 2 hours ago||
Hey, my 9-year-old son uses ModelRift for creating things for his 3D printer; it's great! Product feedback:

1. You should probably ask me to pay now; I feel like I've used it enough.

2. You need a main dashboard page with a history of sessions. He thought he lost a file, and I had to dig in the billing history to get a UUID I thought was it and generate the URL. Naming sessions is important, and could be done with a small LLM after the user's initial prompt.

3. I don't think I like the default 3D model still being there once I have done something; blank would be better.

We download the stl and import to bambu. Works pretty well. A direct push would be nice, but not necessary.

gundmc 2 hours ago|||
Yes, I've been waiting for a real breakthrough with regard to 3D parametric models, and I don't think this is it. The proprietary nature of the major players (Creo, Solidworks, NX, etc.) is a major drag. Sure, there's STP, but there's too much design-intent and feature loss there. I don't think OpenSCAD has the critical mass of mindshare or training data at this point, but maybe it's the best chance to force a change.
lern_too_spel 2 hours ago||
If you want that to get better, you need to produce a 3D-model benchmark and popularize it. You can start with a pelican riding a bicycle, with a working bicycle.
More comments...