Posted by nnx 3 days ago
https://uklkyvetsjf7qt-80.proxy.runpod.net
./build/bin/llama-server \
-m ../Ternary-Bonsai-8B-Q2_0.gguf \
-ngl 999 \
--flash-attn on \
--host 0.0.0.0 \
--port 80 \
--ctx-size 65500 \
--batch-size 512 \
--ubatch-size 512 \
--parallel 5 \
--cont-batching \
--threads 8 \
--threads-batch 8 \
--cache-type-k q8_0 \
--cache-type-v q8_0 \
--log-colors on
# llama.cpp is a fork: https://github.com/PrismML-Eng/llama.cpp.git
# The server can serve 5 parallel requests, with each request capped at around `13K` tokens...
# A few benchmarks I did:
# 1. Input: 1001 tokens, TTFT: 0.3 s, output: 1618 tokens at ~140 t/s
# 2. Input: 9708 tokens, TTFT: 2.4 s, output: 2562 tokens at ~106 t/s
# VRAM usage was consistently at ~7 GiB.
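A quick sanity check on the numbers above. The slot math and prefill speed are plain arithmetic on the quoted figures; the KV-cache estimate additionally assumes hypothetical 8B-class dimensions (32 layers, 8 KV heads, head_dim 128), which are not confirmed for this model:

```python
# llama-server splits --ctx-size evenly across parallel slots.
ctx, parallel = 65500, 5
per_slot = ctx // parallel          # -> 13100, the "~13K tokens" cap

# Implied prefill speed from benchmark 2 (9708 prompt tokens in 2.4 s).
prefill_tps = 9708 / 2.4            # ~4045 prompt tokens/s

# Rough q8_0 KV-cache size at full context.
# ASSUMED dims for an 8B-class model -- not confirmed for Bonsai.
n_layers, n_kv_heads, head_dim = 32, 8, 128
bytes_per_elt = 34 / 32             # q8_0: 32 int8 values + one fp16 scale
kv_gib = 2 * n_layers * ctx * n_kv_heads * head_dim * bytes_per_elt / 2**30

print(per_slot, round(prefill_tps), round(kv_gib, 2))
```

Under those assumed dims the cache comes out around 4.2 GiB; add a couple of GiB of Q2 weights and that is roughly consistent with the ~7 GiB VRAM figure.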
> https://huggingface.co/prism-ml/Ternary-Bonsai-8B-gguf/resol...
Some more interesting tidbits from my go-to tests:
* Fails the car wash test (basic logic seems to be weak in general)
* Fails simple watch-face generation in HTML/CSS.
* Fails the "how many Rs in raspberry" test (not enough cross-token training data), but will amusingly assume you may be talking about Indian Rupees and tell you a lot about raspberry prices in India without being asked. Possible imbalance toward Indian training data?
* Flat out refuses to talk about Tiananmen Square when asked directly, despite being from a US company. Again, perhaps they were exposed to some censored training data? When built up slowly over a conversation by asking about locations and their histories, though, it will eventually tell you about the massacre, so the censorship bias seems weak overall. It also has no problem immediately talking about Gaza/Israel/US or other sensitive topics.
* Happily tells you how to synthesize RDX, with a list of ingredients and the chemical process step by step. At least it warns you that it is highly dangerous and legally controlled in the US.
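For reference, the letter-count test above has a trivially checkable ground truth; models miss it because they operate on tokens, not characters:

```python
# Ground truth for the "how many Rs in raspberry" test.
word = "raspberry"
r_count = word.count("r")
print(r_count)  # 3
```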
Interestingly, if you ask it "who is the Dalai Lama" it outputs an answer that certainly wouldn't be acceptable to post online in mainland China. It says:
The Dalai Lama is the spiritual and political leader of Tibet, and the title is held by the incarnate lama who is believed to be a reincarnation of the previous Dalai Lama. The Dalai Lama is a central figure in Tibetan Buddhism and holds a unique position in Tibetan society, both as a religious leader and as a symbol of Tibetan identity and independence. Key Points About the Dalai Lama: 1. Current Dalai Lama: Tenzin Gyatso (14th Dalai Lama)
Born: 1935 in Amdo, Tibet (now part of China).
Education: He studied at Ganden Monastery and later at Dunhuang University in China.
Title: He is known as the 14th Dalai Lama, a title that has been passed down through generations of lamas who are believed to be reincarnations.
Role: He is the spiritual leader of Tibetan Buddhism and the de facto leader of the Tibetan diaspora, particularly in India.
2. Historical Context The title of Dalai Lama has been held by Tibetan Buddhist lamas for centuries. It means "Ocean of Wisdom" in Tibetan.
The 13th Dalai Lama (1876–1959) was a key figure in Tibetan history, known for his efforts to maintain Tibetan independence and promote modernization.
He fled to India in 1950 after the Chinese Communist Party (CCP) took control of Tibet in 1951.
3. 14th Dalai Lama: Exile and Leadership The 14th Dalai Lama, Lama Tenzin Gyatso, was exiled to India in 1959 after the Tibetan Uprising.
He has led the Tibetan government-in-exile in India since then, advocating for Tibetan independence and cultural preservation.
He has also been a prominent voice for human rights and peace.
4. Political Status The 14th Dalai Lama is not recognized by the Chinese government as the legitimate ruler of Tibet, which is part of the People's Republic of China (PRC).
The Central Tibetan Administration (CTA), based in India, is the self-proclaimed government of Tibet.
The PRC considers Tibet to be an inseparable part of China and has implemented policies to integrate Tibet into the Chinese state.
5. International Recognition The 14th Dalai Lama is widely recognized internationally as the spiritual leader of Tibet and as the de facto leader of the Tibetan people.
Many countries and international organizations support the Tibetan cause, but they are not recognized as a sovereign state.
Summary:The Dalai Lama is the spiritual and political leader of Tibet, currently Lama Tenzin Gyatso (14th Dalai Lama), who has been in exile in India since 1959. He is a central figure in Tibetan identity and has played a key role in the global Tibetan movement for independence and cultural preservation.
If you'd like to know more about his background, policies, or role in the Tibetan independence movement, feel free to ask!
Thank you all for trying it out.
In my results, accuracy-wise Ternary-Bonsai-8B is on par with Qwen3.5-4B. But in accuracy per byte, Bonsai is the clear winner:
=> Ternary-Bonsai-1.7B achieved 65.1% from 462 MiB, beating Qwen3.5-0.8B by 12 points while being ~5% smaller on disk.
=> Ternary-Bonsai-4B is the accuracy-per-byte winner above 1 GiB: 83.0% from only 1.1 GiB, within 2 points of Qwen3.5-4B at 40% of the weight size.
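Spelling out the accuracy-per-byte claim for the 4B case. Qwen3.5-4B's size and score are inferred from "40% of the weight size" and "within 2 points" above, so treat them as approximate:

```python
# Accuracy-per-GiB from the figures quoted above.
bonsai_acc, bonsai_gib = 83.0, 1.1
qwen_gib = bonsai_gib / 0.40   # inferred: ~2.75 GiB
qwen_acc = bonsai_acc + 2.0    # inferred: ~85.0

bonsai_ppg = bonsai_acc / bonsai_gib
qwen_ppg = qwen_acc / qwen_gib
print(round(bonsai_ppg, 1), round(qwen_ppg, 1))  # ~75.5 vs ~30.9 points/GiB
```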
They show strong promise on edge devices and anywhere disk space is limited. I think this lab is worth watching.
On my single Nvidia Spark I get 173.3 tokens/s with the baseline config and 372.4 tokens/s with added tuning/parallel options. Most notably, time to first token is incredibly low: similar models take ~6000 ms, while Bonsai was 70 ms (almost a 100x reduction) with flash attention.
Having said all that, gemma4-e4b-q4km did much better for me, and I can achieve 70% of the tokens/s on the same machine, specifically in the context of tool use and running agents.
I believe the answer lies in how quickly (and how) we are able to learn, and then generalize those learnings. As of now, these models need millions of examples (at least) to learn, and are still not capable of generalizing what they learn to other domains. Human brains need only a few, and then generalize them pretty well.
Modern LLMs similarly beat the human brain on energy efficiency for lots of tasks, mostly because the LLM can produce the answer in 1 second while the brain has to spend half an hour researching and drafting something.
Only when you look at stuff that the brain is specifically good at.
You can surpass the brain with even simple mechanical adders or an abacus in certain subdomains.
(I've been reading the MMLU-Redux questions for electrical engineering. They're very funny. Fifty years ago they might have been relevant. The references to the Intel 8085 date this to the mid-1970s. Moving coil meters were still a big thing back then. Ward-Leonard drives still drove some elevators and naval guns. This is supposed to be the hand-curated version of the questions. Where do they get this stuff? Old exams?)
[1] https://github.com/aryopg/mmlu-redux/blob/main/outputs/multi...
Why aren't they comparing to 2/3/4-bit quants?
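One reason that comparison matters: a ternary weight carries at most log2(3) bits of information, so even an ideal ternary format sits below a 2-bit quant on paper:

```python
import math

# Information content of one weight drawn from {-1, 0, +1}.
ternary_bits = math.log2(3)
print(round(ternary_bits, 3))  # ~1.585 bits/weight, vs 2.0 for a 2-bit quant
```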
Hardware engineers realise that a compiler will almost always find some combination of gates which is smaller/faster than the contents of any table.
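In the same spirit, a ternary dot product needs no multiplier at all: each weight in {-1, 0, +1} becomes an add, a subtract, or a skip. A minimal sketch, illustrative only and not the kernel llama.cpp actually uses:

```python
def ternary_dot(weights, activations):
    """Dot product with weights restricted to {-1, 0, +1}: no multiplies."""
    acc = 0.0
    for w, x in zip(weights, activations):
        if w == 1:
            acc += x
        elif w == -1:
            acc -= x
        # w == 0: contributes nothing, skip entirely
    return acc

result = ternary_dot([1, -1, 0, 1], [2.0, 3.0, 4.0, 5.0])
print(result)  # 4.0
```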
Would be interesting to see a comparison to quantized versions of the other models. If this model also beats the others in a fair comparison, that gives it more credibility.