Posted by kisamoto 10 hours ago
Having access to dozens of models through a single API key, tracking cost of each request, being able to run the same request on different models and comparing their results next to each other, separating usages through different API keys, adding your own presets, setting your routing rules...
And once you start using an account with multiple users, it's even more useful to have all those features!
Not relying on a subscription and having the right to do exactly what you want with your API key (using it with any tool/harness...) is also a big plus to me.
For general use, I personally don’t see much justification for paying a per-token fee just to avoid creating a few accounts with my trusted providers and adding them to a self-hosted instance for my users. It is transparent to users beyond them having a single internal API key (or multiple, if you want to track specific app usage) for all the models they have access to, with limits and logging. They wouldn’t even need to know which provider is hosting the model, and the underlying provider could be swapped without users noticing.
It is certainly easier to pay a fee per token on a small scale and not have to run an instance, so less technical users could definitely find advantage in just sticking with OpenRouter.
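For reference, the self-hosted setup described above can be approximated with a LiteLLM-style proxy config. This is a sketch only: the `chat-default` alias and the environment variable names are placeholders, not anything from the thread.

```yaml
# Sketch of a self-hosted gateway config in LiteLLM's format.
# Users only ever see the alias "chat-default" and their own virtual key;
# the backing provider/model can be swapped here without them noticing.
model_list:
  - model_name: chat-default             # the name users request
    litellm_params:
      model: openai/gpt-4o               # swap the underlying provider here
      api_key: os.environ/OPENAI_API_KEY
general_settings:
  master_key: os.environ/PROXY_MASTER_KEY  # used to mint per-user/app virtual keys
```

Per-user virtual keys issued by the proxy give you the limits and logging mentioned above without exposing any upstream credentials.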
1. The LLM provider doesn't know it's you (unless you have personally identifiable information in your queries). If N people are accessing GPT-5.x using OpenRouter, OpenAI can't distinguish the people. It doesn't know if 1 person made all those requests, or N.
2. The ability to ensure your traffic is routed only to providers that claim not to log your inputs (not even for security purposes): https://openrouter.ai/docs/guides/routing/provider-selection...
It's been forever since I played with LiteLLM. Can I get these with it?
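Point 2 above maps to OpenRouter's documented provider preferences: a `provider` object in the request body with `data_collection` set to `"deny"`. A minimal sketch (the model slug and prompt are placeholders):

```python
import json

# Request body restricting routing to providers that claim not to
# retain inputs/outputs, per OpenRouter's provider-routing docs.
payload = {
    "model": "moonshotai/kimi-k2",  # placeholder model slug
    "messages": [{"role": "user", "content": "hello"}],
    "provider": {
        "data_collection": "deny",  # skip providers that may store data
    },
}

# POST this to https://openrouter.ai/api/v1/chat/completions with an
# "Authorization: Bearer <OPENROUTER_API_KEY>" header (any HTTP client).
print(json.dumps(payload, indent=2))
```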
FWIW this is highly unlikely to be true.
It's true that the upstream provider won't know it's _you_ per se, but most LLM providers strongly encourage proxies like OpenRouter to distinguish between downstream clients for security and performance reasons.
For example:
- https://developers.openai.com/api/docs/guides/safety-best-pr...
- https://developers.openai.com/api/docs/guides/prompt-caching...
For prompt caching, they already say they permit it and don't consider it "logging" (i.e. even with zero data retention turned on, requests will still go to providers who do prompt caching).
Not true at any non-startup where there is an actual finance department
But if OpenRouter does better (even though it's the same sort of API layer) maybe it's worth it?
If you're only using flagship model providers then openrouter's value add is a lot more limited
The minus is that context caching works only moderately well at best, rendering the savings nearly useless.
well worth the 5% they take
if that wasn't the reason, hey that's actually a great way to launder money (not financial advice).
Or what are you really saying here? I don't understand how that's related to "you don't have the right to do what you want with the API Key", which is the FUD part.
Quote from their own ToS: "access the Site or Service for purposes of reselling API access to AI Models or otherwise developing a competing service;"
When you say "you don't have the right to do what you want with the API Key" it makes it sound like specific use cases are disallowed, or something similar. "You don't have the right to go against the ToS, for some reason they block you then!" would have been very different, and of course it's like that.
Bit like complaining that Stripe is preventing you from accepting credit card payments for narcotics. Yes, just because you have an API key doesn't mean somehow you can do whatever you want.
Are we allowed, yes or no, to make a service that charges end-users per token, like giving end-users access to Kimi K2.5 through OpenRouter on a pay-per-token basis?
Eg: Ctrl+P "Open Fol.." in Zed does not surface "Opening a Folder". Zed doesn't call them folders. You have to know that's called "Workspace". And even then, if you type "Open Work..." it doesn't surface! You have to purposefully start with "work..."
Just the floating and ephemeral "Search in files" modal in JetBrains IDEs would convince me to switch from any other IDE.
But their tab complete situation is abysmal, and Supermaven got macrophaged by Cursor
I don’t have any extensions installed and I’m basically leaving it open, idle, as a note scratch space. I do have projects open with many files but not many actual files are open
Anyway idk
I opened just one of the TypeScript projects inside VS Code and I see something like 1 GB (combining the helper processes' usage). I'm not using it actively, so no extra plugins and so on.
That's on macOS, so I guess it may vary on other systems.
Spent a couple of hours trying to make the Svelte extension ignore a particular type of false positive CSS error, failed, and returned to VS Code
Will definitely give it another chance when the extension system is more mature though!
I'd like to give the new GLM models a try for personal stuff.
And people keep claiming the token providers are running inference at a profit.
Not everyone gets $1K of usage, and you don't know how fat the per-token margins are. It's like saying the local buffet place is losing money because you eat $100 worth of takeout for $30.
Well, we're going to find out sooner rather than later. Right now you don't know how thin (or negative) the margins are, either, after all.
All we know for certain is how much VC cash they got. Revenue, spend, profit, etc calculated according to GAAP are still a secret.
If you're trying to minimize cost then having one of the inexpensive models do exploratory work and simple tasks while going back to Opus for the serious thinking and review is a good hybrid model. Having the $20/month Claude plan available is a good idea even if you're primarily using OpenRouter available models.
I think trying to use anything other than the best available SOTA model for important work is not a good tradeoff, though.
Extrapolating that out, the subscription pricing is HEAVILY subsidized. For similar work in Claude Code, I use a Pro plan for $20/month, and rarely bang up against the limits.
It's obviously capital-subsidized and so I have zero expectation of that lasting, but it's pretty anti-competitive to Cursor and others that rely on API keys.
It's nice that it works for the author, though, and OpenRouter is pretty nice for trying out models or interacting with multiple ones through a unified platform!
Any insights / suggestions / best practices?
> Run <other harness> in tmux and interrogate it how feature X works, then build me the equivalent as a pi extension.
Maybe in a few years there will be obvious patterns with harnesses having built really optimal flows, but right now it works so much better to experiment and try new approaches and prompts and flows, and pi is the easiest one to tweak and make it your own.
That’s what really appeals to me. I’ve been fighting Claude Code’s attempts to put everything in memory lately (which is fine for personal preferences), when I prefer the repo to contain all the actual knowledge and learnings. Made me realise how these micro-improvements could ultimately, some day, lead to lock-in.
> Run <other harness> in tmux and interrogate it how feature X works, then build me the equivalent as a pi extension.
I’ll give it a try!
It's designed to be a small simple core with a rich API which you can use for extensions (providing skills, tools, or just modifying/extending the agent's behaviour).
It's likely that you'll eventually need to find extensions for some extended functionality, but for each feature you can pick the one that fits your need exactly (or just use Pi to hack a new extension).
No need for database MCP, I use postgres and tell it to use psql.
Occasionally I use prettier to remove indentation - the LLM makes a lot fewer edit errors that way. Just add the indentation back before you commit. Or tell pi to do it.
With the Anthropic billing change (not being able to use Max credits for pi) I think I have to cancel, as I'm burning through credits now.
Going to move to the $250/mo OpenAI codex plan for now.
Doesn't OpenAI Codex also charge by usage instead of by subscription when used with pi?
at first i thought i was going to build lots of extra plugins and commands but what ended up working for me is:
- i have a simple command that pulls context from a Linear issue
- simple review command
- project specific skills for common tasks
He went on an "OSS vacation", which is perfectly reasonable and said he'd be back on a certain date. I had a PR open for a trivial fix, someone asked when it would land. I shared he was still away. After his return I politely asked, "@badlogic hey, what can we do to progress this? Thanks x"
I then got what I would consider an abusive reply, because he confused me with someone else. In the meantime he extended his vacation. He didn't even think his shitty attitude warranted an apology, even though it was HE who confused me with someone else.
https://github.com/badlogic/pi-mono/discussions/1475#discuss...
And another thing I fixed with no attribution; he just landed it himself separately. https://github.com/badlogic/pi-mono/discussions/1080
and
https://github.com/badlogic/pi-mono/issues/1079#event-223896...
Now he's seemingly marked anything with my name on it as a "clanker", despite all my changes being made by hand.
I've been around open source long enough to have a thick skin, but when I'm doing something "for fun" and someone treats me like that, I'd rather avoid it as much as possible. I certainly could not in good faith use this project for anything work-related.
As someone else pointed out cooler heads and less passive aggressive responses would've resolved this issue easily.
Honestly, it seems like both of you were feeling a bit "grumpy" at the moment, but sending passive aggressiveness towards the maintainer you are trying to get to merge your code (or not your code, someone else's code?) seems like a very bold strategy regardless.
But that doesn't negate the maintainer talking to people like that (and taking contributions without attribution).. and the net result is I don't want to use the software, and frankly they probably won't miss me.. so the end result is neutral.. I just find it sad.
Quite sure most (perhaps >99%) adult people would consider this passive aggressive.
But yeah, I agree with you on the rest. Why did Mario assume that bot was you...?
OpenCode picked up my CLAUDE.md files and skills straight away, and I got similar performance to Opus 4.6.
I'm pretty conservative when it comes to clearing the context, and I also tend to provide the right files to work on (or at least the right starting point).
I had seen prior to using the model that it starts producing much worse results when the context used is larger, so my usage style probably helps getting better results. I work like this with Claude Code anyway, so it wasn't a big change.
Many of us got the annual Lite plan when they had the $28 discount. But even at $120 I think it's a good deal.
I have been wanting to subscribe but based on how awful the experience is for most people, I just can’t pull the trigger
FWIW, I've never dealt with outages since I signed up over 3 months ago (Lite plan). It is slow - always has been. I can live with that.
At the same time, I'm not using it for work. It's for the occasional project once in a while. So maybe I just haven't hit any limits? I did use it for OpenClaw for 2-3 weeks. Never had connection issues.
Looking at https://docs.z.ai/devpack/faq
you can see the details of their limits. It seems GLM 5.1 has low thresholds, which will get lower starting in May. On Reddit I see some people switching to GLM 5 and claiming they haven't hit limits - the site doesn't list the limits for that model.
They also say that those who subscribed before February have different limits - unsure if it's lower or higher!
GLM-4.7 is still a fairly capable model. Not as good as Opus, but for most personal projects it's been adequate. On Reddit I see plenty of people who plan with GLM-5.1 and use 4.7 for implementation.
OpenRouter credit rollover is the real insight — credits that don't expire vs time windows that reset whether you used them or not. I'm surprised Anthropic hasn't offered a token pack option alongside the subscription.
I switched to OpenCode Zen + GitHub Copilot. For some reason, Claude Code burns through my quota really quickly.
Honesty as a marketing strategy is really undervalued in cases like this
Due to the quota changes, I actually find myself using Claude less and less
If I just let opencode zen run claude opus to plan and execute, I'd spend $20 in like 5 minutes lol
Personally, I've had a lot of good results in my little personal projects with Kimi K2.5, GLM 5 and 5.1, and MiniMax M2.5.