Top
Best
New

Posted by yashvg 2 days ago

Cognition Releases SWE-1.5: Near-SOTA Coding Performance at 950 tok/s(cognition.ai)
11 points | 8 comments
swyx 2 days ago
(coauthor) xpost here: https://x.com/cognition/status/1983662836896448756

happy to answer any questions. i think my higher level insight to paraphrase McLuhan, "first the model shapes the harness, then the harness shapes the model". this is the first model that combines cognition's new gb200 cluster, cerebras' cs3 inference, and data from our evals work with {partners} as referenced in https://www.theinformation.com/articles/anthropic-openai-usi...

CuriouslyC 2 days ago||
In the interest of transparency you should update your post with the model you fine tuned, it matters.
swyx 1 day ago||
this is not a question and an assertion that your values are more important than mine, with none of my context nor none of your reasoning. you see how this tone is an issue?

regardless of what i'm allowed to say, i will personally defend that actually its increasingly less important the qualities of the base model you choose as long as its "good enough", bc then the RL/posttrain qualities and data takes over from there and is the entire point of differentiation

CuriouslyC 1 day ago||
If you had enough tokens to completely wash out the parent latent distribution you would have just trained a new model instead of fine tuning. That means by definition your model is still inheriting properties of the parent, and for your business customers who want predictable, understandable systems, knowing the inherited properties of your model is going to be useful.

I think the real reason is that it's a Chinese model (I mean, come on) and your parent company doesn't want any political blowback.

luisml77 1 day ago||
> you would have just trained a new model instead of fine tuning

As if it doesn't cost tens of millions to pre-train a model. Not to mention the time it takes. Do you want them to stall progress for no good reason?

CuriouslyC 1 day ago||
Originally I just wanted to know what their base model was out of curiosity. Since they fired off such a friendly reply, now I want to know if they're trying to pass off a fine tuned Chinese model to government customers who have directives to avoid Chinese models with hand waiving about how it's safe now because they did some RL on it.
luisml77 18 hours ago||
I mean I was going to say that was ridiculous but now that I think about it more, its possible that the models can be trained to say spy on government data by calling a tool to send the information to China. And some RL might not wipe off that behavior.

I doubt current models from China are trained to do smart spying / injecting sneaky tool calls. But based on my Deep Learning experience with the models both training and inference, it's definitely possible to train a model to do this in a very subtle and hard to detect way...

So your point is valid and I think they should specify the base model for security concerns, or conduct safety evaluations on it before passing it to sensitive customers

pandada8 1 day ago||
very curious, which model can only run up to 950 tok/s even with cerebras.