Posted by kisamoto 12 hours ago
My backup has been Opencode + Kimi K2. It's definitely not as strong as even Sonnet but it's pretty fast and is serviceable for basic web app work like the above.
I am in a situation where every sub-folder has its own language server settings, lint settings, etc. VSCode (and forks) can handle this by creating a workspace, adding each folder to the workspace, and having a separate .vscode folder per sub-folder. I haven't figured out how to do the same with Zed.
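For reference, the multi-root setup described above is just a `.code-workspace` file listing the folders; the file name and folder names below are made-up examples (`.code-workspace` files are JSONC, so comments are allowed):

```json
// myproject.code-workspace -- folder names are illustrative
{
  "folders": [
    { "path": "backend" },   // gets its own backend/.vscode/settings.json
    { "path": "frontend" }   // gets its own frontend/.vscode/settings.json
  ],
  "settings": {
    // workspace-wide defaults; each folder's .vscode/settings.json
    // overrides these for files under that folder
  }
}
```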
I would love to stop using VSCode forks
Because GH is accessing the API behind the scenes, you should face less degradation when using Sonnet/Opus models compared to a Claude subscription.
Keep a ChatGPT $20 subscription alongside for back-and-forth conversations and you'll get great bang for buck.
- context is aggressively trimmed compared to CC, obviously for cost-saving reasons, so the performance is worse
- the request pricing model forces me to adjust how I work
Just these alone are not worth saving the $60/month for me. I like the VSCode integration, and the MCP/LSP usage sometimes surprised me compared to the dumb grep from CC. Ironically, VSCode is becoming my terminal emulator of choice for all the CLI agents: the SSH/container access, the automatic port mapping, etc. make it more convenient than tmux sessions for me. So Copilot would be ideal for me, but it's tuned to be a budget, broad-scope tool rather than a tool for professionals who would pay to get work done.
It turns it into a very good value for money, as far as I'm concerned.
GHCP at least is transparent about the pricing: hitting enter on a prompt = one request. CC/Codex use an opaque quota scheme where you never really know if a request will be 1, 2, or 10% of your hourly max, let alone your weekly max.
I've never seen much difference from context ostensibly being shorter in GHCP; all of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that. CC/Codex do it frequently.
Then we've had wildly different results. Running CC and GH Copilot with Opus 4.6 on the same task, the results out of CC were just better, and likewise for Codex and GPT 5.4. I have to assume it's the aggressive context compaction/limited context loading, because tracking what Copilot does, it seems to read far less context and then misses things other agents pick up automatically.
https://www.techradar.com/pro/bad-news-skeptics-github-says-...
It’s not just Zed; Copilot also reduces the capabilities and options available compared to using the models directly.
No thanks, definitely agree with the OpenRouter approach or a native harness to keep full functionality.
I might be paranoid, but I feel that access to models will become more constrained in the future as the industry gets more regulated.
We are not the only ones. I found other people online experiencing the same issue. It is hard to tell how widespread this is, but it is strange to say the least.
So yes, obviously you can do what you want as long as you abide by the terms of service, but the terms of service do NOT allow you to resell the API.
> TOS says: access the Site or Service for purposes of reselling API access to AI Models or otherwise developing a competing service;
I think what you meant is "you aren't allowed to expose the access to the API to end users", which is a fair condition IMHO.
You're still allowed to expose the functionality (i.e. build a SaaS or AI assistant powered by the OpenRouter API), just don't build a proxy.
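As a minimal sketch of that distinction, assuming OpenRouter's OpenAI-compatible chat completions endpoint (the `summarize_text` feature and the model slug are illustrative, not anything from the thread): the app fixes the model and prompt server-side and exposes a feature, rather than passing raw API access through to end users.

```python
# Sketch: wrap OpenRouter behind a fixed feature (allowed) rather than a
# raw passthrough proxy (disallowed per the ToS quoted above).
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_payload(text: str) -> dict:
    """The model, system prompt, and parameters are chosen by our app,
    not by the end user -- we sell summaries, not API access."""
    return {
        "model": "anthropic/claude-3.5-sonnet",  # illustrative slug
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
    }


def summarize_text(text: str) -> str:
    """Server-side call; the OpenRouter key never leaves our backend."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(text)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The key point is that the user never supplies a model name, prompt template, or API key of their own; that is what separates "functionality powered by OpenRouter" from a proxy.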
It does talk about a competing service, though. If I build a service that offers all the image-gen models on OpenRouter and charges the user per token, am I allowed?
OpenRouter recently started enforcing account-level regional restrictions for providers that require it (OpenAI, Anthropic, Google), i.e. blocking accounts that look like they are being used by users in China. The regional restriction used to be based on the Cloudflare edge worker IP's geolocation and enforced upstream, so a proxy/server running inside a supported region would get around the geoblocks, but now OpenRouter is using (unspecified) signals like your billing address to geoblock. People say "banned" because the error message says "Author <provider> is banned", which really should be read as "unable to use models from this provider due to an upstream ban".