
Posted by tosh 2 days ago

Caveman: Why use many token when few token do trick (github.com)
875 points | 359 comments
contingencies 2 days ago|
Better: use Classical Chinese.
anshumankmr 2 days ago||
Though I do use Claude Code, is it possible to get this for GitHub Copilot too?
phainopepla2 2 days ago|
Yes, Copilot supports skills, which are basically just stored prompts in markdown files. You can use the same skill from that GitHub repo.
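For the curious, a skill is just a directory containing a SKILL.md with YAML frontmatter and a prompt body. A minimal sketch (the name and wording here are illustrative, not the actual caveman prompt from the repo) looks something like:

    ---
    name: caveman
    description: Answer in terse caveman speak to cut output tokens
    ---
    Drop articles, filler words, and pleasantries. Use short words and
    short sentences. Never shorten code, identifiers, or commands.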
rsynnott 2 days ago||
I mean, I assume you run into the same problem as Kevin in The Office: that sort of faux-simple speech is actually very ambiguous.

(Though I wonder: has anyone tried Newspeak?)

adam_patarino 2 days ago||
Or you could use a local model, where you're not constrained by tokens. Like rig.ai.
dostick 2 days ago|
How is your offering different from local Ollama?
adam_patarino 2 days ago||
It's batteries included. No config.

We also fine-tuned and did RL on our model, developed a custom context engine, trained an embedding model, and modified MLX to improve inference.

Everything is built to work together. So it's more like an Apple product than Linux: less config, but better optimized for the task.
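For context, off-the-shelf MLX inference via mlx_lm on Apple silicon looks roughly like the sketch below; the model name and prompt are just examples, and this is the plain baseline, not rig.ai's modified stack:

    # Baseline mlx_lm inference on Apple silicon; a sketch, not rig.ai's stack.
    from mlx_lm import load, generate

    # Any MLX-converted model works; this repo name is an example.
    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
    reply = generate(model, tokenizer,
                     prompt="Explain mutex. Caveman style.",
                     max_tokens=128)
    print(reply)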

dostick 6 hours ago||
I only understood half of the tech jargon in your answer. If I understood it all, I'd probably run it myself. If your customer is someone who knows less than I do, you need to explain it in simpler terms!
adam_patarino 1 hour ago||
Fair enough! The simple answer is: we did a lot of work to make the model better at coding without requiring complicated installation or configuration. One command to install and run.

All the benefits of Claude Code, without any of the limitations or rug pulls.

kristopolous 1 day ago||
This is a well-known compaction technique. Where are the evals?
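A first-pass token-savings eval is at least easy to sketch: tokenize paired verbose and caveman answers and compare counts. In the sketch below, tiktoken's cl100k_base encoding is a stand-in tokenizer (it is not Claude's tokenizer), and the two answers are made up for illustration:

    # Crude compaction check with a stand-in tokenizer.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    verbose = ("Certainly! To create a virtual environment, you can run "
               "python -m venv .venv and then activate it as follows...")
    caveman = "Run: python -m venv .venv. Activate. Done."
    v, c = len(enc.encode(verbose)), len(enc.encode(caveman))
    print(f"{v} -> {c} tokens ({1 - c / v:.0%} saved)")

Quality is the harder half: you'd still need task-level pass rates on both styles, which is presumably what the eval question is really asking for.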
ggm 1 day ago||
F u cn Rd ths u cld wrk scrtry 'cpt w tk l thr jbs
drewbeck 1 day ago||
If you’re not cavemaxxing you’re falling behind.
grg0 1 day ago|
I dropped dead after reading this.
Applejinx 1 day ago||
As very much an outsider and, to some extent, apostate to all this, it's pretty astonishing to see.

Unironically not just delegating all thinking to a sketchy and untrustworthy machine, but doubling down on it by aping the caveman in the belief that this will more effectively summon the great metal-wing sky god and bring limitless yum stuff.

Wow. I don't even have to do anything. You guys are disemvoweling yourselves in some kind of strange ritual. You sure are trusting souls!

semessier 2 days ago||
The really interesting question is whether it then does its language-based reasoning in short form too, and if so, whether quality is impacted.
vova_hn2 2 days ago||
I don't know about the token savings, but I find the "caveman style" much easier to read and understand than typical LLM slop.