Posted by meander_water 1 day ago

Tongyi DeepResearch – open-source 30B MoE Model that rivals OpenAI DeepResearch(tongyi-agent.github.io)
352 points | 145 comments
zurfer 1 day ago|
It makes me wonder whether we'll see an explosion of purpose-trained LLMs because we've hit diminishing returns on investment in pre-training, or whether it takes a couple of months to fold these advantages back into the frontier models.

Given the size of frontier models I would assume that they can incorporate many specializations and the most lasting thing here is the training environment.

But there is probably already some tradeoff, as GPT 3.5 was awesome at chess and current models don't seem to be trained extensively on chess anymore.

criemen 1 day ago||
> or if it takes a couple of months to fold these advantages back into the frontier models.

Right now, I believe we're seeing that the big general-purpose models outperform approximately everything else. Special-purpose models (essentially: fine-tunes) of smaller models make sense when you want to solve a specific task at lower cost/lower latency, and you transfer some/most of the abilities in that domain from a bigger model to a smaller one. Usually, people don't do that, because it's quite a costly process, and the frontier models develop so rapidly that you're perpetually behind them (so in fact, you're not providing the best possible abilities).

If/when frontier model development speed slows down, training smaller models will make more sense.

nextos 1 day ago|||
The advantage of small purpose-specific models is that they might be much more robust, i.e., unlikely to generate wrong sequences for your particular domain. That is at least my experience working on this topic during 2025. And, obviously, smaller models mean you can deploy them on cheaper hardware, latency is reduced, energy consumption is lower, etc. In some domains like robotics, these two advantages might be very compelling, but it's obviously too early to draw any long-term conclusions.
larodi 20 hours ago||
I second this. Smaller models indeed may be much better positioned for fine-tuning for the very reason you point out - less noise to begin with.
barrell 22 hours ago||||
> If/when frontier model development speed slows down

You do not believe that this has already started? It seems to me that we’re well into a massive slowdown

enraged_camel 16 hours ago||
Not the OP but I use AI all day every day and have noticed substantial improvements in the models over the past ~6 months. GPT-5 was a huge leap (contrary to reporting) and so was Sonnet 4.5.
barrell 15 hours ago|||
GPT-5 was by no means a huge leap. I'd be willing to believe that you prefer it, or that you found it an improvement, despite both of those being wildly contrary to my experience (and most of the rhetoric online). But objectively speaking it was a small improvement, even going by OpenAI's marketing claims.

In practice, I upgraded everything to GPT-5 and the performance was so terrible I had to rollback the update.

embedding-shape 14 hours ago|||
> GPT-5 was a huge leap (contrary to reporting) and

Depends on what you compare it to. For those of us who were using o3/o1 Pro Mode before GPT-5, the new model isn't that huge of a leap, compared to the jump from whatever existed before Pro Mode.

fragmede 1 day ago|||
Right, the Costco problem. A small boutique, e.g. a wine store, might be able to do better at picking a very specific wine for a specific occasion, but Costco is just so much bigger that they can make it up in volume and buy cases and cases of everything at a lower markup, so it ends up being cheaper to shop at Costco, no matter how much you want to support the local wine boutique.
semi-extrinsic 18 hours ago||
In Norway there is a state-owned monopoly on selling wine and liquor (anything above 4.75% ABV). They have 350+ physical shops, a large online shop and around $2bn annual revenue. This makes them one of the largest purchasers of wine and spirits in Europe, and they can get some very good deals.

So even though you have high taxes and a restrictive alcohol policy, the end result is shops that have high customer satisfaction because they have very competent staff, excellent selection and a surprisingly good price for quality products.

The downsides are the limited opening hours and the absence of cheap low-quality wine - the tax disproportionately impacts the low-quality stuff; almost nobody will buy shitty wine at $7 per bottle when the decent stuff costs $10, so the shitty wine just doesn't get imported. But for most of the population these are minor drawbacks.

Imustaskforhelp 1 day ago|||
> But there is probably already some tradeoff, as GPT 3.5 was awesome at chess and current models don't seem trained extensively on chess anymore.

Wow, I am so curious - can you provide a source?

I am so interested in a chess LLM benchmark, as someone who occasionally plays chess. I have thought about creating things like this, but it would be very interesting to find the best model at chess which isn't Stockfish/Leela but a general-purpose large language model.

I also agree that there might be an explosion of purpose-trained LLMs. I had this idea a year or so ago, when there was llama / before deepseek: what if I want to write sveltekit, and there are models like deepseek which know about sveltekit, but they are so damn big and bloated when I only want a sveltekit/svelte model? Yes, there are arguments for why we might need the whole network to get better quality, but I genuinely feel like right now the "better quality" is debatable, thanks to all this benchmarkmaxxing. I would happily take a model trained on sveltekit at preferably 4B-8B parameters, but even if an extremely good SOTA-ish model for sveltekit were around 30-40B I would be happy, since I could buy a GPU for my PC to run it, or run it on my Mac.

I think my brother, who (unlike me) actually knows what he's talking about in the AI space, said the same thing to me a few months back as well.

In fact, it's funny: I had asked him to please create a website comparing benchmarks of AI playing chess, with an option to make two LLMs play against each other and watch, or to play against an LLM on an actual chess board on the web, and more. I gave him this idea a few months ago, right after that talk about small LLMs, and he said it was good but that he was busy at the time. I think he later forgot about it, and I had forgotten about it too until now.

radarsat1 21 hours ago|||
Just search for "chess LLM leaderboard" there are already several. Also check https://www.reddit.com/r/llmchess/ although admittedly it doesn't get a lot of traffic.
zurfer 18 hours ago||||
This was the article I had in mind when writing this: https://dynomight.substack.com/p/chess
Imustaskforhelp 5 hours ago||
Ohhh I think this was the same article that I also had in mind

Key memory unlocked. I had an Aha moment with this article, thanks a lot for sharing it, appreciate it.

deepanwadhwa 1 day ago|||
> GPT 3.5 was awesome at chess

I don't agree with this. I did try to play chess with GPT-3.5 and it was horrible - full of hallucinations.
zurfer 18 hours ago|||
Yeah I was not precise; it was `gpt-3.5-turbo-instruct`, other variants weren't trained on it apparently. https://dynomight.substack.com/p/chess
miki123211 1 day ago|||
It was GPT-3 I think.

As far as I remember, it's post-training that kills chess ability for some reason (GPT-3 wasn't post-trained).

Imustaskforhelp 1 day ago||
This is so interesting. I am curious as to why - can you (or anyone) please provide any resources or insightful comments about it? They would really help a ton here, thanks!
pixelmelt 20 hours ago||
GPT-3 was trained on completion data, so it likely saw lots of raw chess games laid out in whatever standard format moves are listed in, while 3.5 was post-trained on instruct data (talking back and forth), which would have needed to explicitly include those chess games as conversational training data for it to retain as much as it would otherwise.
onlyrealcuzzo 1 day ago|||
Isn't the whole point of the MoE architecture exactly this?

That you can individually train and improve smaller segments as necessary?

ainch 1 day ago|||
Generally you train all experts simultaneously. The benefit of MoEs is that you get cheap inference because you only use the active expert parameters, which constitute a small fraction of the total parameter count. For example, DeepSeek R1 (which is especially sparse) only uses about 1/18th of the total parameters per-query (roughly 37B of 671B).
pama 1 day ago||
> only uses 1/18th of the total parameters per-query.

only uses 1/18th of the total parameters per token. It may use a large fraction of them in a single query.

ainch 7 hours ago||
That's a good correction, thanks.
idiotsecant 1 day ago|||
I think it's the exact opposite - you don't specifically train each 'expert' to be an SME at something. Each of the experts is a generalist but becomes better at portions of tasks in a distributed way. There is no 'best baker'; instead things evolve toward 'best applier of flour', 'best kneader', etc. I think explicitly domain-trained experts are pretty uncommon in modern schemes.
viraptor 1 day ago||
That's not entirely correct. Most MoEs right now are fully balanced, but there is an idea of a domain-expert MoE where the training benefits from fewer switches. https://arxiv.org/abs/2410.07490
idiotsecant 14 hours ago||
Yes, explicitly trained experts were a thing for a little while, but not anymore. Yet another application of the Bitter Lesson.
almaight 19 hours ago|||
https://seed-tars.com/game-tars
almaight 19 hours ago||
Video games have long served as a crucial proving ground for artificial intelligence. Like the real world, they offer rich, dynamic environments with responsive, real-time settings and complex challenges that push the boundaries of AI capabilities. The history of AI in gaming is marked by landmark achievements, from mastering classic board games to achieving superhuman performance in complex strategy titles. However, the next frontier lies beyond mastering individual, known environments.

To meet this challenge, we introduce Game-TARS: a next-generation generalist game agent designed to master complex video games and interactive digital environments using human-like perception, reasoning, and action. Unlike traditional game bots or modular AI frameworks, Game-TARS integrates all core faculties—visual perception, strategic reasoning, action grounding, and long-term memory—within a single, powerful vision-language model (VLM). This unified approach enables true end-to-end autonomous gameplay, allowing the agent to learn and succeed in any game without game-specific code, scripted behaviors, or manual rules.

With Game-TARS, this work is not about achieving the highest possible score in a single game. Instead, our focus is on building a robust foundation model for both generalist game-playing and broader computer use. We aim to create an agent that can learn to operate in any interactive digital environment it encounters, following instructions just like a human.

alephnerd 1 day ago|||
> if we'll see an explosion of purpose trained LLMs...

Domain-specific models have been on the roadmap for most companies for years now, from both a competitive perspective (why give up your moat to OpenAI or Anthropic) and a financial one (why finance OpenAI's margins).

AmbroseBierce 1 day ago||
It reminds me of a story I read somewhere: some guy, high on drugs, climbed to the top of some elevated campus lights, shouting about being a moth and loving light. The security guys tried telling him to come down but he paid no attention, and time went on until a janitor came and shut off the lights, then turned on one of those high-powered handheld ones and pointed it at him, and the guy quickly climbed down.

So yeah, I think there are different levels of thinking; maybe future models will have some sort of internal models once they recognize patterns of some level of thinking. I'm not that knowledgeable about the internal workings of LLMs, so maybe this is all nonsense.

tbruckner 1 day ago||
Has anyone found these deep research tools useful? In my experience, they generate really bland reports that don't go much further than summarizing what a search engine would return.
remus 18 hours ago||
I run a small website and am based in the UK and have used it a couple of times to summarise what I need to do to comply with different bits of legislation e.g. Online Safety Act. What's really useful for me is that I can feed in a load of context about what the site does and get a response that's very tailored to what's relevant for me, and generate template paperwork that I can then fill out to improve my position with regard to the legislation.

For sure it's probably missing stuff that a well-paid lawyer would catch, but for a project with zero budget it's a massive step up from spending hours reading through search results and trying to cobble something together myself.

roryirvine 12 hours ago||
The hidden cost there is that the risk of complying with the legislation remained entirely with you. Even the best specialist research LLM still might easily have hallucinated or made some other sort of error which resulted in it giving you confusing or incorrect advice - and you would have been the one held liable for following it.

Whereas with real legal advice, your lawyer will carry Professional Indemnity Insurance which will cover any costs incurred if they make a mistake when advising you.

As you say, it's a reasonable trade-off for you to have made when the alternative was sifting through the legislation in your own spare time. But it's not actually worth very much, and you might just as well have used a general model to carry out the same task and the outcome would likely have been much the same.

So it's not particularly clear that the benefits of these niche-specific models or specialised fine-tunes are worth the additional costs.

(Caveat: things might change in the future, especially if advancements in the general models really are beginning to plateau.)

andy99 1 day ago|||
My experience is the same as yours. It feels to me (as with most LLM writing) like they write for someone who's not going to read it or use it, but who is going to glance at it, judge the quality that way, and assume it's good.

Not too different from a lot of consulting reports, in fact, and pretty much of no value if you're actually trying to learn something.

Edit to add: even the name "deep research" feels to me like something designed to appeal to people who have never actually done or consumed research, sort of like the whole "PhD level" thing.

tbruckner 1 day ago||
"they write for someone who’s not going to read it" is a great way to phrase it.
ainch 1 day ago|||
The reports are definitely bland, but I find them very helpful for discovering sources. For example, if I'm trying to ask an academic question like "has X been done before," sending something to scour the internet and find me examples to dig into is really helpful - especially since LLMs have some base knowledge which can help with finding the right search terms. It's not doing all the thinking, but those kind of broad overviews are quite helpful, especially since they can just run in the background.
kmarc 1 day ago|||
I've caught myself realizing that most of my LLM usage is like this:

ask a loaded "filter question" I more or less know the answer to, then mostly skip the prose and go straight to the links to its sources.

ukuina 15 hours ago||
The "loaded question" approach works for getting MUCH better pro/con lists, too, in general, across all LLMs.
vogu66 19 hours ago|||
I do that too. I wonder how much of it is the LLM being helpful and how much of it is the RAG algorithm somehow providing better references to the LLM than a Google search can.
blaesus 1 day ago|||
"Summarization of what a search engine would return" is good enough for many of my purposes though. Good for breaking into new grounds, finding unknown unknowns, brainstorming etc.
andai 15 hours ago||
I have a script that searches DDG (free), scrapes top 5 results, shoves them into an LLM, and answers your question.

I wrote it back when AI web search was a paid feature and I wanted access to it.

At the time Auto-GPT was popular and using the LLM itself to slowly and unreliably do the research.

So I realized a Python program would be way faster and it would actually be deterministic in terms of doing what you expect.

This experience sort of shaped my attitude about agentic stuff, where it looks like we are still relying too heavily on the LLM and neglecting to mechanize things that could just work perfectly every time.
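
The whole pipeline is roughly this shape (a minimal sketch, not my actual script; the duckduckgo_search package, BeautifulSoup, and the OpenAI client are just stand-ins for whatever search backend and LLM API you use, and the model name is a placeholder):

  import requests
  from bs4 import BeautifulSoup
  from duckduckgo_search import DDGS   # assumed search backend
  from openai import OpenAI            # assumed LLM client

  def answer(question, n_results=5):
      # 1. run the web search
      hits = DDGS().text(question, max_results=n_results)
      # 2. scrape each result and crudely strip it down to text
      pages = []
      for hit in hits:
          try:
              html = requests.get(hit["href"], timeout=10).text
              text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
              pages.append(text[:4000])
          except requests.RequestException:
              continue
      # 3. shove the sources into the LLM and ask it to answer
      prompt = question + "\n\nSources:\n" + "\n---\n".join(pages)
      resp = OpenAI().chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content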

zo1 11 hours ago||
If you think these things are just using a "dumb" search query, and using the top 5 hits, you're in for a lot of surprises very soon.
andai 11 hours ago||
Well, considering TFA, it would be pretty strange if I did!

My point was it's silly to rely on a slow, expensive, unreliable system to do things you can do quickly and reliably with ten lines of Python.

I saw this in the Auto-GPT days. They tried to make GPT-4 (the non-agentic one with the 8k context window) use tool calls to do a bunch of tasks. And it kept getting confused and forgetting to do stuff.

Whereas if you just had

for page in pages: summarize(page)

it works 100% of the time, can be parallelized etc.
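
(For instance, a trivially parallel version with a thread pool; summarize() here is just a placeholder for whatever per-page LLM call you'd make:)

  from concurrent.futures import ThreadPoolExecutor

  pages = ["page one text...", "page two text..."]  # whatever you scraped

  def summarize(page):
      # placeholder: in the real thing this would be one LLM call per page
      return page[:100]

  with ThreadPoolExecutor(max_workers=8) as pool:
      summaries = list(pool.map(summarize, pages))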

And of course the best part is that the LLM itself can write that code, i.e. it already has the power to make up for its own weaknesses, and make (parts of itself) run deterministically.

---

On that note, do you know more about the environment they ran this thing in? I got API access (it's free on OpenRouter), but I'm not sure what to plug this into. OpenRouter provides a search tool, but the paper mentions intelligent context compression and all sorts of things.

TACIXAT 22 hours ago|||
I have used Gemini's 2.5 Pro deep research probably about 10 times. I love it. Most recently was reviewing PhD programs in my area then deep diving into faculty research areas.
criemen 1 day ago|||
I tend to use them when I'm looking to buy something of category X, and want to get a market overview. I can then still dig in and decide whether I consider the sources used trustworthy or not, and before committing money, I'll read some reviews myself, too. Still, it's a speedup for me.
edot 1 day ago|||
Yes, this is one of my primary use cases for deep research right now. It will become garbage in a few short years once OpenAI starts selling influence / ads. I think they've started doing this a bit, but the recommendations still seem mostly "correct". My prior way of doing this was Googling with site:Reddit.com for real reviews rather than SEO-spam reviewers.
infecto 14 hours ago|||
Same case for me. I find it pretty good at it too. Far from perfect, but it's usually a pretty darn good start.
threecheese 10 hours ago|||
Perplexity’s Research tool has basically replaced Google for me, for any search where I don’t already know the answer or know that it’s available somewhere (like documentation).

I use it dozens of times per day, and typically follow up or ask refining questions within the thread if it’s not giving me what I need.

It typically takes between 10sec and 5 minutes, and mostly replicates my manual process - search, review results, another 1..N search passes, review, etc. Initially it rephrases/refines my query, then builds a plan, and this looks a lot like what I might do manually.

infecto 14 hours ago|||
I use ChatGPT's quite often. I can send it a loaded question and it helps tease out sources, and usually at the very least scrapes away some of the nuance. I have used it a lot for finding lists of a type of product too. Taking the top n search results is already pretty useful for me, but I find it typically goes a little more in depth than that, going down a few rabbit holes of search depending on the topic. It does not eliminate doing your own research, but it helps consolidate some of the initial information.

Then I can further interrogate the information returned with a vanilla LLM.

andai 15 hours ago|||
You can copy-paste it into your favorite LLM and ask questions about it. That solves several problems simultaneously.
alasr 1 day ago||
I hadn't used any LLM deep research tools before; today, after reading this HN post, I gave Tongyi DeepResearch a try to see how it performs on a simple "research" task (in an area I have working experience in: healthcare and EHR), and I'm satisfied with its response (for the given task; I obviously can't say anything about how it'll perform on other "research" tasks I ask it in the future). I think I'll keep using this model for tasks for which I was using other local LLM models before.

Besides I might give other large deep research models a try when needed.

sumo43 1 day ago||
I made a 4B Qwen3 distill of this model (and a synthetic dataset created with it) a while back. Both can be found here: https://huggingface.co/flashresearch
Imustaskforhelp 1 day ago||
Can you please create a huggingface space or something similar? I am not sure about the state of huggingface, but I would love to be able to try it out in a browser if possible, as I am really curious. I just love qwen3 4b - it was one of the models which worked even on my intel integrated gpu at a really impressive rate, and it was really nice the last time I tried it, but this looks even cooler/more practical.

I once had an idea of using something like qwen 4b or some pre-trained AI model just to make a (to censor or not to censor) decision, after the mecha-hitler incidents. I thought that if there was some extremely cheap model which could detect harmful content that Grok's own models couldn't recognize, it would've been able to prevent the absolute advertising disaster that happened.

What are your thoughts on it? I would love to see a Qwen 4B or something similar if you or anyone is up to the challenge, or any small LLMs in general. I just want to know if this idea fundamentally makes sense or not.

Another idea was to use it for routing purposes, similar to what chatgpt does, but I am not sure about that now. I still think it may be worth it, but I had this routing idea before chatgpt implemented it, so now that it's implemented we are going to get more data/insights about whether it's good or worth it, so that's nice.

greggh 13 hours ago|||
I use emotions-analyzer-bert for this, classifying content in a similar way. It's very small and very fast - under a gig of VRAM in use.
bigyabai 21 hours ago|||
> What are your thoughts on it?

You don't really need an entire LLM to do this - lightweight encoder models like BERT are great at sentiment analysis. You feed it an arbitrary string of text, and it just returns a confidence value from 0.0 to 1.0 that it matches the characteristics you're looking for.
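
A minimal sketch with the Hugging Face transformers pipeline (the default distilbert sentiment checkpoint is just an example; you'd swap in whatever moderation- or toxicity-tuned classifier fits the filtering you have in mind):

  from transformers import pipeline

  # small encoder-based classifier; loads in well under a GB
  clf = pipeline("text-classification",
                 model="distilbert-base-uncased-finetuned-sst-2-english")

  result = clf("This looks perfectly harmless to me.")[0]
  print(result["label"], round(result["score"], 3))  # e.g. POSITIVE 0.999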

Nymbo 1 day ago||
Just tried this out with my web search mcp, extremely impressed with it. Never seen deep research this good from a model so small.
rokob 1 day ago||
This whole series of work is quite cool. The use of `word-break: break-word;` makes this really hard to read though.
soared 1 day ago|
I actually can’t read it for some reason? My brain just can’t connect the words
don-bright 1 day ago|||
So it appears the entire text has been translated with the non-breaking space unicode U+00A0 instead of normal spaces U+0020, so the web layout treats all the paragraph text as one super-long single word ('the\u00a0quick\u00a0brown\u00a0fox' instead of 'the quick brown fox'). The non-breaking space character renders identically to a breaking space, but it breaks the concept of "wrap at end of word" because there is no end - U+00A0 literally means "non-breaking". Per Copilot spending half an hour explaining this to me, apparently it can be fixed by opening the web browser developer view and pasting this code into the console:

  function replaceInTextNodes(node) {
    if (node.nodeType === Node.TEXT_NODE) {
      node.nodeValue = node.nodeValue.replace(/\u00A0/g, ' ');
    } else {
      node.childNodes.forEach(replaceInTextNodes);
    }
  }

  replaceInTextNodes(document.body);

nl 1 day ago||
This is completely fascinating although puzzling how that happens.

The script is great!

dlisboa 1 day ago|||
That’s why typography matters. You can’t read it because a very basic convention has been broken here and that throws everything off.
aliljet 1 day ago||
Sunday morning, and I find myself wondering how the engineering tinkerer is supposed to best self-host these models? I'd love to load this up on the old 2080ti with 128gb of vram and play, even slowly. I'm curious what the current recommendation on that path looks like.

Constraints are the fun part here. I know this isn't the 8x Blackwell Lamborghini, that's the point. :)

giobox 1 day ago||
If you just want to get something running locally as fast as possible to play with (the 2080ti typically had 11gb of VRAM which will be one of the main limiting factors), the ollama app will run most of these models locally with minimum user effort:

https://ollama.com/

If you really do have a 2080ti with 128gb of VRAM, we'd love to hear more about how you did it!

jlokier 1 day ago|||
I use a Macbook Pro with 128GB RAM "unified memory" that's available to both CPU and GPU.

It's slower than a rented Nvidia GPU, but usable for all the models I've tried (even gpt-oss-120b), and works well in a coffee shop on battery and with no internet connection.

I use Ollama to run the models, so can't run the latest until they are ported to the Ollama library. But I don't have much time for tinkering anyway, so I don't mind the publishing delay.

anon373839 1 day ago|||
I’d strongly advise ditching Ollama for LM Studio, and using MLX versions of the models. They run quite a bit faster on Apple Silicon. Also, LM Studio is much more polished and feature rich than Ollama.
terhechte 1 day ago||
Fully agree with this. LM Studio is much nicer to use, and with MLX it's faster on Apple Silicon.
MaxMatti 1 day ago|||
How's the battery holding up during vibe coding sessions or occasional LLM usage? I've been thinking about getting a MacBook or a laptop with a similar Ryzen chip specifically for that reason.
btbuildem 1 day ago|||
I've recently put together a setup that seemed reasonable for my limited budget. Mind you, most of the components were second-hand, open box deals, or deep discount of the moment.

This comfortably fits FP8 quantized 30B models that seem to be "top of the line for hobbyists" grade across the board.

- Ryzen 9 9950X

- MSI MPG X670E Carbon

- 96GB RAM

- 2x RTX 3090 (24GB VRAM each)

- 1600W PSU

nine_k 1 day ago|||
Does it offer more performance than a Macbook Pro that could be had for a comparable sum? Your build can be had for under $3k; a used MBP M3 with 64 GB RAM can be had for approximately $3.5k.
btbuildem 1 day ago|||
I'm not sure, I did not run any benchmarks. As a ballpark figure -- with both cards throttled down to 250W, running a Qwen-30B FP8 model (variant depending on task), I get upwards of 60 tok/sec. It feels on par with the premium models, tbh.

Of course this is in a single-user environment, with vLLM keeping the model warm.

bee_rider 1 day ago|||
MacBooks have some clever chips, but 2x 3090 is a lot of brawn to overcome.
PeterStuer 1 day ago||||
Unfortunately the RTX 3090 has no native FP8 support.
pstuart 1 day ago|||
That's basically what I imagined would be my rig if I were to pull the trigger. Do you have an NVLink adapter as well?
btbuildem 1 day ago||
No NVLink; it took me a long time to compose the exact hardware specs, because I wanted to optimize performance. Both cards are on x8 PCIe direct CPU channels, close to their max throughput anyway. It runs hot with the CPU engaged, but it runs fast.
jwr 1 day ago|||
I just use my laptop. A modern MacBook Pro will run ~30B models very well. I normally stick to "Max" CPUs (initially for more performance cores, recently also for the GPU power) with 64GB of RAM. My next update will probably be to 128GB of RAM, because 64GB doesn't quite cut it if you want to run large Docker containers and LLMs.
Lapel2742 19 hours ago|||
> I'd love to load this up on the old 2080ti with 128gb of vram and play, even slowly.

I think you mean RAM and not VRAM. AFAIK this is a 30B MoE model with 3B active parameters, comparable to the Qwen3 MoE model. If you do not expect 60 tps, such models should run sufficiently fast.

I run the Qwen3 MoE model (https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/blob/main/...) in 4-bit quantization on an 11-year-old i5-6600 (32GB) and a Radeon 6600 with 8GB. According to a quick search your card is faster than that, and I get ~12 tps with 16k context on llama.cpp, which is OK for playing around.

My Radeon (ROCm) specific batch file to start this:

llama-server --ctx-size 16384 --flash-attn on --cache-type-k q8_0 --cache-type-v q8_0 --device ROCm0 -ngl -1 --model /usr/local/share/gguf/Qwen3-30B-A3B-Q4_0.gguf --cache-ram 16384 --cpu-moe --numa distribute --override-tensor "\.ffn_.*_exps\.weight=CPU" --jinja --temp 0.7 --port 8080

greggh 13 hours ago|||
If you really need a lot of VRAM cheap, ROCm still supports the AMD MI50, and you can get 32GB versions of the MI50 on alibaba/aliexpress for around $150-$250 each. A few people on r/localllama have shown setups with multiple MI50s running with 128GB of VRAM and doing a decent job with large models. Obviously it won't run as fast as brand new GPUs because of memory bandwidth and a few other things, but it's more than fast enough to be usable.

This can end up getting you 128GB of VRAM for under $1000.

homarp 1 day ago|||
llama.cpp gives you the most control to tune it for your machine.
CuriousSkeptic 1 day ago|||
I'm sure this guy has some helpful hints on that: https://youtube.com/@azisk
sumo43 1 day ago|||
Try running this using their harness https://huggingface.co/flashresearch/FlashResearch-4B-Thinki...
aliljet 1 day ago|||
oh my god. 128 gb of RAM! way too late to repair this thread, but most people caught this.
sigmarule 22 hours ago|||
The Framework Desktop runs this perfectly well, and for just about $2k.
3abiton 1 day ago|||
As many pointed out, Macs are decent enough to run them (with maxed-out RAM). You also have alternatives like the DGX Spark (if you appreciate the ease of CUDA, albeit with a tad slower token generation) or the Strix Halo (good luck with ROCm though, AMD is still peddling hype). There is no straightforward "cheap" answer: you either go big (GPU server) or compromise. Either way, use vLLM, SGLang, or llama.cpp; Ollama is just inferior in every way to llama.cpp.
exe34 1 day ago||
llama.cpp + quantized: https://huggingface.co/bartowski/Alibaba-NLP_Tongyi-DeepRese...

get the biggest one that will fit in your vram.

trebligdivad 1 day ago|||
How do people deal with all the different quantizations? Generally if I see an Unsloth one I'm happy to try it locally; with random other people's... how do I know what I'm getting?

(If nothing else Tongyi are currently winning AI with cutest logo)

exe34 1 day ago||
personally I've only used them for toying around - but in all cases you have to test them for your use case anyway.
davidsainez 1 day ago|||
This is the way. I managed to run (super) tiny models on CPU only with this approach.
theflyestpilot 1 day ago||
I hope the translation of this is actually "Agree" DeepResearch - just a dig at the "You are absolutely right!" sycophancy.
numpad0 1 day ago|
TIL the "full" name of Alibaba Qwen is 通義千問(romanized as "Tongyi Qianwen", something along "knows all thousand questions"), of which the first half without the Chinese accent flags is romanized identically to "同意", meaning "same intents" or "agreed".

The Chinese version of the link says "通义 DeepResearch" in the title, so the "agree" reading doesn't seem to be the case. Completely agree that it would be hilarious.

1: https://www.alibabacloud.com/en/solutions/generative-ai/qwen...

rahimnathwani 1 day ago||
For people who don't read Chinese: the two 'yi' characters numpad0 mentioned (义 and 義) are the same, but written in different variants of Chinese script (Simplified/Traditional).
whatpeoplewant 11 hours ago||
Great to see an open 30B MoE aimed at “deep research.” These shine when used in a multi-agent setup: run parallel agentic AI workers (light models for browsing/extraction) and reserve the 30B agentic LLM for planning, tool routing, and verification—keeping latency/cost in check while boosting reliability. MoE specialization fits distributed agentic AI well, but you’ll want orchestration for retries/consensus and task-specific evals on multi-hop web research to guard against brittle routing and hallucinations.
jychang 1 day ago||
This is over a month old, they released the weights a long time ago.
jwr 1 day ago||
That's OK — not all of us follow all the progress on a daily basis, and a model that is a month old doesn't become useless just by being a month old!
earthnail 1 day ago||
And for those not so tightly in the loop: how does it compare?
embedding-shape 1 day ago||
Isn't OpenAI's "Deep research" (not "DeepResearch") a methodology/tooling thing, where you get different responses depending on which model you use with it? As far as the UI allows, you could use Deep research with GPT-5, GPT-4o, o3 and so on, and that'll have an impact on the responses. Skimming the paper and searching for some simple terms, it seems they never expand on which exact models they've used, just that they've used a specific feature of ChatGPT?
simonw 1 day ago||
At this point "deep research" is more of a pattern - OpenAI and Perplexity and Google Gemini all offer products with that name which work essentially the same way, and Anthropic and Grok have similar products with a slightly different name attached.

The pattern is effectively long-running research tasks that drive a search tool. You give them a prompt, they churn away for 5-10 minutes running searches and they output a report (with "citations") at the end.

This Tongyi model has been fine-tuned to be really good at using its search tool in a loop to produce a report.
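
A caricature of that loop in a few lines of Python (search() and llm() are placeholders for whatever search tool and model get wired in; the stubs below just make it runnable):

  def search(query):
      # placeholder: call your web-search tool, return text snippets
      return [f"snippet for: {query}"]

  def llm(prompt):
      # placeholder: call whatever model you're evaluating
      return "DONE"

  def deep_research(prompt, max_rounds=10):
      notes, queries = [], [prompt]
      for _ in range(max_rounds):
          # run the proposed searches and collect snippets
          for q in queries:
              notes.extend(search(q))
          # ask the model whether it needs more searches or is ready to write
          plan = llm("Given these notes, list further queries or say DONE:\n" + "\n".join(notes))
          if plan.strip() == "DONE":
              break
          queries = plan.splitlines()
      # final long-form report with "citations" drawn from the collected notes
      return llm("Write a cited report answering: " + prompt + "\n\nNotes:\n" + "\n".join(notes))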

embedding-shape 1 day ago||
Yes, but I think my previous point still matters, namely that which exact model is being used greatly affects the results.

So without specifying which model is being used, it's really hard to know whether one thing is better than another, because we don't know what the underlying model is, or whether it's better because of the model itself or the tooling, which feels like an important distinction.

andai 15 hours ago|
Tongyi provides this model on OpenRouter, including a free version.

https://openrouter.ai/alibaba/tongyi-deepresearch-30b-a3b

https://openrouter.ai/alibaba/tongyi-deepresearch-30b-a3b:fr...
