Posted by vinhnx 4 hours ago

Kimi K2.5 Technical Report [pdf](github.com)
106 points | 46 comments
zeroxfe 1 hour ago|
I've been using this model (as a coding agent) for the past few days, and it's the first time I've felt that an open source model really competes with the big labs. So far it's been able to handle most things I've thrown at it. I'm almost hesitant to say that this is as good as Opus.
armcat 1 hour ago||
Out of curiosity, what kind of specs do you have (GPU / RAM)? I saw the requirements and it's beyond my budget, so I'm "stuck" with smaller Qwen coders.
zeroxfe 1 hour ago|||
I'm not running it locally (it's gigantic!). I'm using the API at https://platform.moonshot.ai
rc1 2 minutes ago|||
How long until this can run on consumer-grade hardware, or on a domestic electricity supply, I wonder.

Anyone have a projection?

BeetleB 1 hour ago|||
Just curious - how does it compare to GLM 4.7? Ever since they gave the $28/year deal, I've been using it for personal projects and am very happy with it (via opencode).

https://z.ai/subscribe

InsideOutSanta 41 minutes ago|||
There's no comparison. GLM 4.7 is fine and reasonably competent at writing code, but K2.5 is right up there with something like Sonnet 4.5. It's the first time I can use an open-source model and not immediately tell the difference between it and top-end models from Anthropic and OpenAI.
zeroxfe 53 minutes ago||||
It's waaay better than GLM 4.7 (which was the open model I was using earlier)! Kimi was able to quickly and smoothly finish some very complex tasks that GLM completely choked on.
akudha 25 minutes ago||||
Is the Lite plan enough for your projects?
cmrdporcupine 26 minutes ago|||
From what people say, it's better than GLM 4.7 (and I guess DeepSeek 3.2)

But it's also like... 10x the price per output token on any of the providers I've looked at.

I don't feel it's 10x the value. It's still much cheaper than paying by the token for Sonnet or Opus, but if you have a subscribed plan from the Big 3 (OpenAI, Anthropic, Google) it's much better value for $$.

Comes down to ethical or openness reasons to use it I guess.

esafak 13 minutes ago||
Exactly. For the price it has to beat Claude and GPT, unless you have budget for both. I just let GLM solve whatever it can and reserve my Claude budget for the rest.
Carrok 1 hour ago||||
Not OP, but OpenCode and DeepInfra seem like an easy way.
tgrowazay 1 hour ago|||
Just pick up any >240GB VRAM GPU off your local BestBuy to run a quantized version.

> The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs.

thesurlydev 1 hour ago||
Can you share how you're running it?
eknkc 1 hour ago|||
I've been using it with opencode. You can either use your kimi code subscription (flat fee), moonshot.ai api key (per token) or openrouter to access it. OpenCode works beautifully with the model.

Edit: as a side note, I only installed opencode to try this model and I gotta say it's pretty good. Did not think it'd be as good as Claude Code, but it's just fine. Been using it with Codex too.

Imustaskforhelp 48 minutes ago||
I tried to use opencode for Kimi K2.5 too, but recently they changed their pricing from 200 tool requests per 5 hours to token-based pricing.

I can only speak to the tool-request-based pricing, but anecdotally, opencode took like 10 requests in 3-4 minutes where Kimi CLI took 2-3.

So I personally like/stick with the Kimi CLI for Kimi coding. I haven't tested it out again with the new token-based pricing, but I do think that opencode might use more tokens.

Kimi CLI's pretty good too, imo. You should check it out!

https://github.com/MoonshotAI/kimi-cli

zeroxfe 1 hour ago||||
Running it via https://platform.moonshot.ai -- using OpenCode. They have super cheap monthly plans at kimi.com too, but I'm not using it because I already have codex and claude monthly plans.
esafak 10 minutes ago|||
Where? https://www.kimi.com/code starts at $19/month, which is the same as the big boys.
UncleOxidant 42 minutes ago|||
So there's a free plan at moonshot.ai that gives you some number of tokens without paying?
explorigin 1 hour ago||||
https://unsloth.ai/docs/models/kimi-k2.5

Requirements are listed.

KolmogorovComp 58 minutes ago||
To save everyone a click

> The 1.8-bit (UD-TQ1_0) quant will run on a single 24GB GPU if you offload all MoE layers to system RAM (or a fast SSD). With ~256GB RAM, expect ~10 tokens/s. The full Kimi K2.5 model is 630GB and typically requires at least 4× H200 GPUs. If the model fits, you will get >40 tokens/s when using a B200. To run the model in near full precision, you can use the 4-bit or 5-bit quants; you can use anything higher just to be safe. For strong performance, aim for >240GB of unified memory (or combined RAM+VRAM) to reach 10+ tokens/s. If you're below that, it'll work but speed will drop (llama.cpp can still run via mmap/disk offload) and may fall from ~10 tokens/s to <2 tokens/s. We recommend UD-Q2_K_XL (375GB) as a good size/quality balance. Best rule of thumb: RAM+VRAM ≈ the quant size; otherwise it'll still work, just slower due to offloading.
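
The quoted rule of thumb (RAM+VRAM ≈ quant size for ~10+ tokens/s, disk offload below that) can be turned into a quick feasibility check. A minimal sketch, using only the numbers from the quote above; the function name and thresholds are illustrative, not from any official tooling:

```python
def throughput_estimate(quant_gb: float, ram_gb: float, vram_gb: float) -> str:
    """Rough feasibility check based on the quoted rule of thumb:
    if RAM + VRAM covers the quant size, expect ~10+ tokens/s;
    otherwise llama.cpp still runs via mmap/disk offload, but
    throughput may fall from ~10 tokens/s to under 2 tokens/s."""
    if ram_gb + vram_gb >= quant_gb:
        return "fits in memory: expect ~10+ tokens/s"
    return "disk offload: may drop below 2 tokens/s"

# UD-Q2_K_XL (375 GB) on a 24 GB GPU with 256 GB of system RAM:
print(throughput_estimate(375, ram_gb=256, vram_gb=24))  # disk offload
# The same quant with 384 GB of RAM clears the threshold:
print(throughput_estimate(375, ram_gb=384, vram_gb=24))  # fits in memory
```

Per the quote, falling short doesn't mean it won't run, just that throughput degrades sharply once layers spill to disk.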

Gracana 47 minutes ago||
I'm running the Q4_K_M quant on a xeon with 7x A4000s and I'm getting about 8 tok/s with small context (16k). I need to do more tuning, I think I can get more out of it, but it's never gonna be fast on this suboptimal machine.
esafak 7 minutes ago||
The pitiful state of GPUs. $10K for a sloth with no memory.
gigatexal 1 hour ago|||
Yeah, I'm curious too. Claude Code is so good, and the ecosystem so "it just works," that I'm willing to pay them.
Imustaskforhelp 46 minutes ago|||
I tried Kimi K2.5 and at first I didn't really like it; I was critical of it, but then I started liking it. The model has kind of replaced how I use ChatGPT too, and I really love Kimi K2.5 the most right now (although the Gemini models come close).

To be honest, I do feel like Kimi K2.5 is the best open-source model. It's not the best model overall right now, but it's really price-performant, and for many use cases it might be a nice fit.

It might not be completely SOTA like people say, but it comes pretty close, and it's open source. I trust the open-source part because other providers can also run it, among other things (also, IIRC, ChatGPT recently slashed some old models).

I really appreciate Kimi for still open-sourcing their complete SOTA and then releasing research papers on top, unlike Qwen, which has closed-sourced its complete SOTA.

Thank you Kimi!

epolanski 1 hour ago|||
You can plug another model in place of Anthropic ones in Claude Code.
zeroxfe 59 minutes ago||
That tends to work quite poorly, because Claude Code does not use the standard completions APIs. I tried it with Kimi, using litellm[proxy], and it failed in too many places.
samtheprogram 1 minute ago|||
[delayed]
AnonymousPlanet 9 minutes ago|||
It worked very well for me using Qwen3 Coder behind a litellm proxy. Most other models just fail in weird ways, though.
epolanski 1 hour ago||
It's interesting to note that OpenAI is valued almost 400 times more than Moonshot AI, despite their models being surprisingly close.
moffkalast 33 minutes ago|
Well to be the devil's advocate: One is a household name that holds most of the world's silicon wafers for ransom, and the other sounds like a crypto scam. Also estimating valuation of Chinese companies is sort of nonsense when they're all effectively state owned.
llmslave 54 minutes ago||
The benchmarks on all these models are meaningless
alchemist1e9 32 minutes ago|
Why and what would a good benchmark look like?
moffkalast 14 minutes ago||
30 people trying out all models on the list for their use case for a week and then checking what they're still using a month after.
miroljub 47 minutes ago||
I've been quite satisfied lately with MiniMax M-2.1 in opencode.

How does Kimi 2.5 compare to it in real world scenarios?

viraptor 43 minutes ago|
A lot better in my experience. M2.1 to me feels between haiku and sonnet. K2.5 feels close to opus. That's based on my testing of removing some code and getting it to reimplement based on tests. Also the design/spec writing feels great. You can still test k2.5 for free in OpenCode today.
miroljub 30 minutes ago||
Well, MiniMax was the equivalent of Sonnet in my testing. If Kimi approaches Opus, that would be great.
derac 1 hour ago||
I really like the agent swarm thing, is it possible to use that functionality with OpenCode or is that a Kimi CLI specific thing? Does the agent need to be aware of the capability?
esafak 4 minutes ago||
Has anyone tried it and decided it's worth the cost? I've heard it's even more profligate with tokens.
zeroxfe 56 minutes ago||
It seems to work with OpenCode, but I can't tell exactly what's going on -- I was super impressed when OpenCode presented me with a UI to switch the view between different sub-agents. I don't know if OpenCode is aware of the capability, or the model is really good at telling the harness how to spawn sub-agents or execute parallel tool calls.
margorczynski 1 hour ago||
I wonder how K2.5 + OpenCode compares to Opus with CC. If it's close, I'd let go of my subscription, as would a lot of people, probably.
eknkc 51 minutes ago||
It is not Opus. It is good, works really fast, and is surprisingly thorough about its decisions. However, I've seen it hallucinate things.

Just today I asked for a code review and it flagged a method that can be `static`. The problem is it was already static. That kind of stuff never happens with Opus 4.5 as far as I can tell.

Also, in opencode's Plan mode (read-only), it generated a plan and, instead of presenting it and stopping, decided to implement it. It couldn't use the edit and write tools because the harness was in read-only mode, but it had bash and started using bash to edit stuff. It wouldn't just fucking stop, even though the error messages it received from opencode stated why. Its plan and the resulting code were OK, so I let it go crazy though...

esafak 2 minutes ago|||
Some models have a mind of their own. I keep them on a leash with `permission` blocks in OC -- especially for rm/mv/git.
naragon 57 minutes ago|||
I've been using K2.5 with OpenCode to do code assessments/fixes and Opus 4.5 with CC to check the work, and so far so good. Very impressed with it so far, but I don't feel comfortable canceling my Claude subscription just yet. Haven't tried it on large feature implementations.
ithkuil 59 minutes ago||
I also wonder if CC can be used with k2.5 with the appropriate API adapter
behnamoh 1 hour ago|
It's a decent model but works best with the Kimi CLI, not CC or others.
alansaber 1 hour ago|
Why do you think that is?
chillacy 53 minutes ago||
I heard it's because the labs fine-tune their models for their own harness. Same reason Claude does better in Claude Code than in Cursor.