
Posted by MallocVoidstar 1 day ago

Gemini 3.1 Pro(blog.google)
Preview: https://console.cloud.google.com/vertex-ai/publishers/google...

Card: https://deepmind.google/models/model-cards/gemini-3-1-pro/

856 points | 853 comments
ChrisArchitect 22 hours ago|
More discussion: https://news.ycombinator.com/item?id=47075318
BMFXX 21 hours ago||
Just wish I could get the 2.5 daily limit above 1000 requests easily. Driving me insane...
jeffbee 23 hours ago||
Relatedly, Gemini chat seems to be if not down then extremely slow.

ETA: They apparently wiped out everyone's chats (including mine). "Our engineering team has identified a background process that was causing the missing user conversation metadata and has successfully stopped the process to prevent further impact." El Mao.

sergiotapia 23 hours ago||
To use in OpenCode, you can update the models it has:

    opencode models --refresh
Then /models and choose Gemini 3.1 Pro
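If you want to pin it instead of picking it interactively each time, something like this should work. Note the config path, key name, and model id below are assumptions for illustration, not verified against opencode's docs:

```shell
# Refresh the model list, then pin the model in opencode's JSON config.
# Path (~/.config/opencode/opencode.json), "model" key, and the
# "opencode/gemini-3.1-pro" id are all guesses; check your own setup.
mkdir -p "$HOME/.config/opencode"
cat > "$HOME/.config/opencode/opencode.json" <<'EOF'
{
  "model": "opencode/gemini-3.1-pro"
}
EOF
```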

You can use the model through OpenCode Zen right away and avoid that Google UI craziness.

---

It is quite pricey! Good speed and nailed all my tasks so far. For example:

    @app-api/app/controllers/api/availability_controller.rb 
    @.claude/skills/healthie/SKILL.md 

    Find Alex's id, and add him to the block list, leave a comment 
    that he has churned and left the company. we can't disable him 
    properly on the Healthie EMR for now so 
    this dumb block will be added as a quick fix.
Result was:

    29,392 tokens
    $0.27 spent
So a relatively small task, hitting an API, using one of my skills, and it cost a quarter. Pricey!
gbalduzzi 23 hours ago|
I don't see it even after refresh. Are you using the opencode-gemini-auth plugin as well?
sergiotapia 23 hours ago||
No, just vanilla OpenCode. I do have OpenCode Zen credits, and I did opencode login (or whatever their command is) to auth against opencode itself. Maybe that's why I see these premium models.
cmrdporcupine 23 hours ago||
Doesn't show as available in gemini CLI for me. I have one of those "AI Pro" packages, but don't see it. Typical for Google, completely unclear how to actually use their stuff.
saberience 1 day ago||
I always try Gemini models when they get updated with their flashy new benchmark scores, but always end up using Claude and Codex again...

I get the impression that Google is focusing on benchmarks but without assessing whether the models are actually improving in practical use-cases.

I.e. they are benchmaxing

Gemini is "in theory" smart, but in practice is much, much worse than Claude and Codex.

rocho 20 hours ago||
I find Gemini is outstanding at reasoning (all topics) and architecture (software/system design). On the other hand, Gemini CLI sucks and so I end up using Claude Code and Codex CLI for agentic work.

However, I heavily use Gemini in my daily work and I think it has its own place. Ultimately, I don't see the point of choosing the one "best" model for everything, but I'd rather use what's best for any given task.

konart 23 hours ago|||
> but without assessing whether the models are actually improving in practical use-cases

Which cases? Not trying to sound harsh, but you didn't even give examples of the cases you use Claude/Codex/Gemini for.

skerit 23 hours ago|||
I'm glad someone else is finally saying this. I've been mentioning it left and right, and sometimes I feel like I'm going crazy that more people aren't noticing it.

Gemini can go off the rails SUPER easily. It just devolves into a gigantic mess at the smallest sign of trouble.

For the past few weeks, I've also been using XML-like tags in my prompts more often. Sometimes preferring to share previous conversations with `<user>` and `<assistant>` tags. Opus/Sonnet handles this just fine, but Gemini has a mental breakdown. It'll just start talking to itself.

Even in totally out-of-the-ordinary sessions, it goes crazy. After a while, it'll start saying it's going to do something, and then it pretends like it's done that thing, all in the same turn. A turn that never ends. Eventually it just starts spouting repetitive nonsense.

And you would think this is just because models tend to get worse as the context grows. But no! This can happen well below even the 200,000-token mark.
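The transcript-in-prompt pattern described above looks roughly like this (the <user>/<assistant> tag convention is the one mentioned; the helper and sample turns are made up for illustration):

```shell
# Wrap prior conversation turns in XML-like role tags before pasting
# them into a new prompt. Tag names follow the convention above;
# everything else here is illustrative.
render_turn() { printf '<%s>\n%s\n</%s>\n' "$1" "$2" "$1"; }

prompt="$(render_turn user 'Summarize the last deploy failure.')
$(render_turn assistant 'It failed because of a missing env var.')
$(render_turn user 'Draft a fix.')"

printf '%s\n' "$prompt"
```

Per the comment above, Opus/Sonnet handles prompts shaped like this fine, while Gemini can start role-playing both sides of the tagged transcript.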

reilly3000 22 hours ago||
Flash is (was?) better than Pro on these fronts.
user34283 23 hours ago|||
I exclusively use Gemini for Chat nowadays, and it's been great mostly. It's fast, it's good, and the app works reliably now. On top of that I got it for free with my Pixel phone.

For development I tend to use Antigravity with Sonnet 4.5, or Gemini Flash if it's about a GUI change in React. The layout and design of Gemini has been superior to Claude models in my opinion, at least at the time. Flash also works significantly faster.

And all of it is essentially free for now. I can even select Opus 4.6 in Antigravity, but I did not yet give it a try.

cmrdporcupine 23 hours ago||
Honestly doesn't feel like Google is targeting the agentic coding crowd so much as they are the knowledge worker / researcher / search-engine-replacement market?

Agree Gemini as a model is fairly incompetent inside their own CLI tool as well as in opencode. But I find it useful as a research and document analysis tool.

verdverm 21 hours ago||
For my custom agentic coding setup, I use Claude Code derived prompts with Gemini models, primarily flash. It's night and day compared to Google's own agentic products, which are all really bad.

The models are all close enough on the benchmarks, and I think people attribute too much of the difference in the agentic space to the model itself. I strongly believe the difference is in all the other stuff, which is why Anthropic is far ahead of the competition. They have done great work with Claude Code, Cowork, and their knowledge sharing through docs & blog; bar none on that last point, imo.

hn_throw2025 18 hours ago||
Yeah great, now can I have my pinned chats back please?

https://www.google.com/appsstatus/dashboard/incidents/nK23Zs...

makeavish 1 day ago||
I hope to have a great next two weeks before it gets nerfed.
unsupp0rted 23 hours ago|
I've found Google (at least in AI Studio) is the only provider NOT to nerf their models after a few weeks
makeavish 23 hours ago|||
I don't use AI Studio for my work. I used Antigravity/Gemini CLI, and 3 Pro was great for a few weeks; now it's worse than 3 Flash or any smaller competitor model that's rated lower on benchmarks
scrlk 23 hours ago|||
IME, they definitely nerf models. gemini-2.5-pro-exp-03-25 through AI Studio was amazing at release and steadily degraded. The quality started tanking around the time they hid CoT.
mrcwinn 18 hours ago||
It's fascinating to watch this community react so positively to Google model releases and so negatively toward OpenAI's. You all do understand that an ad revenue model is exactly where Google will go, right?
sidrag22 17 hours ago||
It's all so astroturfed that it's hard to tell. I got the opposite impression, though: the OpenAI thread seemed to have more fake positivity near the top when I skimmed it, while this one has way less, and a lot of complaints.

I'm biased; I don't trust either of them, so perhaps I'm just looking hard for the hate and attributing all the positive stuff to advertising.

webtcp 17 hours ago|||
An enemy is better than a traitor
mrcwinn 15 hours ago||
Quite a low bar. And in any case, isn’t Google already a traitor to its original mission statement?
jeffbee 18 hours ago||
Gemini already drives ad revenue. If the conversation goes in that direction it will use product search results with the links attributable to Google.
himata4113 22 hours ago|
The visual capabilities of this model are frankly kind of ridiculous, what the hell.