
Posted by tosh 2 days ago

Caveman: Why use many token when few token do trick (github.com)
875 points | 359 comments
xgulfie 2 days ago|
Funny how people are so critical of this and yet fawn over TOON
mwcz 2 days ago||
this grug not smart enough to make robot into grugbot. grug just say "Speak to grug with an undercurrent of resentment" and all sicko fancy go way.
doe88 2 days ago||
> If caveman save you mass token, mass money — leave mass star.

Mass fun. Starred.

sebastianconcpt 2 days ago||
Anyone else worried about the long-term cognitive consequences for users of talking like this all day?
sph 2 days ago||
“Me think, why waste time say lot word, when few word do trick.”

— Kevin Malone

Perz1val 2 days ago||
I think good, less thinking for you, more thinking you will do
dalmo3 1 day ago||
I'm not sure if you're being sarcastic or not, but I did find the caveman examples harder to read than their verbose counterparts.

The verbose ones I could speed-read and consume at a familiar pace... almost on autopilot.

Caveman speak no familiar no convention, me no know first time. Need think hard understand. Slower. Good thing?

bogtog 2 days ago||
I'd be curious if there were some measurements of the final effects, since presumably models won't <think> in caveman speak or code like that.
fny 2 days ago||
Are there any good studies or benchmarks about compressed output and performance? I see a lot of arguing in the comments but little evidence.
herf 2 days ago||
We need a high quality compression function for human readers... because AIs can make code and text faster than we can read.
aetherspawn 1 day ago||
Interesting, maybe you can run the output through a 2B model to uncompress it.
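Rough sketch of what that could look like, assuming a local OpenAI-compatible endpoint (e.g. Ollama at localhost:11434); the model name is just a placeholder for whatever small model you actually run:

    # Decompress caveman output back into plain English with a small local model.
    # Assumes an OpenAI-compatible server is listening on localhost:11434;
    # "gemma2:2b" is a placeholder model name, not a recommendation.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    def decompress(caveman_text: str) -> str:
        resp = client.chat.completions.create(
            model="gemma2:2b",
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's terse 'caveman' notes as plain, fluent English. "
                            "Keep every fact; add no new ones."},
                {"role": "user", "content": caveman_text},
            ],
        )
        return resp.choices[0].message.content

    print(decompress("Grug fix bug. Cache stale. Add TTL. Tests pass."))

The big model writes cheap terse tokens; the tiny model only pays the expansion cost locally.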
owenthejumper 2 days ago|
What is that binary caveman.skill file that I can't read easily, and is it going to hack my computer?