
Posted by pretext 12/10/2025

Qwen3-Omni-Flash-2025-12-01: a next-generation native multimodal large model (qwen.ai)
316 points | 106 comments
gardnr 12/10/2025|
This is a 30B parameter MoE with 3B active parameters and is the successor to their previous 7B omni model. [1]

You can expect this model to have similar performance to the non-omni version. [2]

There aren't many open-weights omni models so I consider this a big deal. I would use this model to replace the keyboard and monitor in an application while doing the heavy lifting with other tech behind the scenes. There is also a reasoning version, which might be a bit amusing in an interactive voice chat if it pronounces the thinking tokens while working through to a final answer.

1. https://huggingface.co/Qwen/Qwen2.5-Omni-7B

2. https://artificialanalysis.ai/models/qwen3-30b-a3b-instruct

red2awn 12/10/2025||
This is a stack of models:

- 650M Audio Encoder

- 540M Vision Encoder

- 30B-A3B LLM

- 3B-A0.3B Audio LLM

- 80M Transformer / 200M ConvNet (audio tokens to waveform)

This is a closed-source weight update to their Qwen3-Omni model. They had a previous open-weight release, Qwen/Qwen3-Omni-30B-A3B-Instruct, and a closed version, Qwen3-Omni-Flash.

You basically can't use this model right now since none of the open source inference frameworks have it fully implemented. It works on transformers, but it's extremely slow.

olafura 12/10/2025|||
Looks like it's not open source: https://www.alibabacloud.com/help/en/model-studio/qwen-omni#...
coder543 12/10/2025||
No... that website is not helpful. If you take it at face value, it is claiming that the previous Qwen3-Omni-Flash wasn't open either, but that seems wrong? It is very common for these blog posts to get published before the model weights are uploaded.
red2awn 12/10/2025||
The previous -Flash weights are closed source. They do have weights for the original model, which is slightly behind in performance: https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct
coder543 12/10/2025||
Based on things I had read over the past several months, Qwen3-Flash seemed to just be a weird marketing term for the Qwen3-Omni-30B-A3B series, not a different model. If they are not the same, then that is interesting/confusing.
red2awn 12/10/2025||
It is an in-house closed weight model for their own chat platform, mentioned in Section 5 of the original paper: https://arxiv.org/pdf/2509.17765

I've seen it in their online materials too but can't seem to find it now.

gardnr 12/10/2025|||
I can't find the weights for this new version anywhere. I checked modelscope and huggingface. It looks like they may have extended the context window to 200K+ tokens but I can't find the actual weights.
pythux 12/10/2025||
They link to https://huggingface.co/collections/Qwen/qwen3-omni-68d100a86... from the blog post, but it seems like this redirects to their main space on HF, so maybe they haven't made the model public yet?
tensegrist 12/10/2025|||
> There is also a reasoning version, which might be a bit amusing in an interactive voice chat if it pronounces the thinking tokens while working through to a final answer.

Last I checked (months ago), Claude used to do this.

plipt 12/10/2025|||
I don't think the Flash model discussed in the article is 30B.

Their benchmark table shows it beating Qwen3-235B-A22B

Does "Flash" in the name of a Qwen model indicate a model-as-a-service and not open weights?

red2awn 12/10/2025||
Flash is a closed-weight version of https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct (it is 30B, but with additional training on top of the open-weight release). They deploy the Flash version on Qwen's own chat.
plipt 12/10/2025||
Thanks

Was it obvious to you from the article that it's closed weight? Trying to understand why I was confused. I had not seen the "Flash" designation before.

Also, 30B models can beat a semi-recent 235B with just some additional training?

red2awn 12/10/2025||
They had a Flash variant released alongside the original open weight release. It is also mentioned in Section 5 of the paper: https://arxiv.org/pdf/2509.17765

For the evals, it's probably just trained on a lot of benchmark-adjacent datasets compared to the 235B model. A similar thing happened with another model today: https://x.com/NousResearch/status/1998536543565127968 (a 30B model trained specifically to do well at maths gets near-SOTA scores).

andy_ppp 12/11/2025|||
Haha, you could hear how its mind thinks, maybe by putting a lot of reverb on the thinking tokens or some other effect…
andy_xor_andrew 12/10/2025||
> This is a 30B parameter MoE with 3B active parameters

Where are you finding that info? Not saying you're wrong; just saying that I didn't see that specified anywhere in the linked page, or on their HF.

plipt 12/10/2025|||
The link[1] at the top of their article to HuggingFace goes to some models named Qwen3-Omni-30B-A3B that were last updated in September. None of them have "Flash" in the name.

The benchmark table shows this Flash model beating their Qwen3-235B-A22B. I don't see how that is possible if it is a 30B-A3B model.

I don't see a mention of a parameter count anywhere in the article. Do you? This may not be an open weights model.

This article feels a bit deceptive

1: https://huggingface.co/collections/Qwen/qwen3-omni

gardnr 12/13/2025|||
I was wrong. I confused this with their open model. Looking at it more closely, it is likely an omni version of Qwen3-235B-A22B. I wonder why they benchmarked it against Qwen2.5-Omni-7B instead of Qwen3-Omni-30B-A3B.

I wish I could delete the comment.

sosodev 12/10/2025||
Does Qwen3-Omni support real-time conversation like GPT-4o? Looking at their documentation it doesn't seem like it does.

Are there any open weight models that do? Not talking about speech to text -> LLM -> text to speech, btw; I mean a real voice <-> language model.

edit:

It does support real-time conversation! Has anybody here gotten that to work on local hardware? I'm particularly curious if anybody has run it with a non-nvidia setup.

potatoman22 12/10/2025||
From what I can tell, their official chat site doesn't have a native audio -> audio model yet. I like to test this through homophones (e.g. record and record) and asking it to change its pitch or produce sounds.
dragonwriter 12/11/2025|||
“Record and record”, if you mean the verb for persisting something and the noun for the thing persisted, are heteronyms (homographs which are not homophones). Incidentally, heteronyms are also what you would probably want to test for here: distinguishing homophones would test use of context to understand meaning, but wouldn't test whether the logic works directly on audio or only on text processed from audio, whereas failing to distinguish heteronyms is suggestive of processing occurring on text, not audio directly.
bakeman 12/11/2025|||
There are homophones of “record”, such as:

“He’s on record saying he broke the record for spinning a record.”

dragonwriter 12/11/2025||
True.

OTOH, my point still stands: the thing being suggested isn't testable by seeing whether the system can distinguish homophones, but it might be testable by seeing whether it distinguishes heteronyms. (The speculation that the intended record/record distinction was actually a pair of heteronyms, and that the error was merely the use of the word “homophone” in place of “heteronym” rather than in the basic logic of the comment, is somewhat tangential to the main point.)

potatoman22 12/11/2025|||
Ah I meant heteronyms. Thanks!
sosodev 12/10/2025||||
Huh, you're right. I tried your test and it clearly can't understand the difference between homophones. That seems to imply they're using some sort of TTS mechanism, which is really weird because Qwen3-Omni claims to support direct audio input into the model. Maybe it's a cost-saving measure?
sosodev 12/11/2025|||
Weirdly, I just tried it again and it seems to understand the difference between record and record just fine. Perhaps if there's heavy demand for voice chat, like after a new release, they load-shed by falling back to TTS on a smaller model.

However, it still doesn't seem capable of producing any of the sounds, like laughter, that I would expect from a native voice model.

potatoman22 12/11/2025|||
To be fair, discerning heteronyms might just be a gap in its training.
djtango 12/11/2025|||
Is record a homophone? At least in the UK we use different pronunciations for the meanings. Re-cord for the verb, rec-ord for the noun.
potatoman22 12/11/2025||
I was mistaken about what homophone means!
red2awn 12/10/2025|||
None of the inference frameworks (vLLM/SGLang) support the full model, let alone on non-Nvidia hardware.
AndreSlavescu 12/10/2025|||
We actually deployed working speech-to-speech inference that builds on vLLM as the backbone. The main thing was supporting the "Talker" module, which is currently not supported on the qwen3-omni branch of vLLM.

Check it out here: https://models.hathora.dev/model/qwen3-omni
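For anyone wondering what the Talker is: per the Qwen Omni paper, the Thinker (the text LLM) produces hidden states, and the Talker autoregressively turns those into discrete audio codec tokens that a small decoder renders to a waveform. A toy PyTorch sketch of that flow, with every size and module choice made up purely for illustration (nothing here reflects the real Qwen3-Omni implementation):

    import torch
    import torch.nn as nn

    class ToyThinker(nn.Module):
        # Toy stand-in for the text LLM: emits text logits plus hidden states.
        def __init__(self, vocab=1000, dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.backbone = nn.GRU(dim, dim, batch_first=True)
            self.lm_head = nn.Linear(dim, vocab)

        def forward(self, token_ids):
            h, _ = self.backbone(self.embed(token_ids))
            return self.lm_head(h), h  # text logits + hidden states handed to the talker

    class ToyTalker(nn.Module):
        # Toy stand-in for the Talker: maps thinker hidden states to audio codec tokens.
        def __init__(self, dim=64, codebook=256):
            super().__init__()
            self.decoder = nn.GRU(dim, dim, batch_first=True)
            self.audio_head = nn.Linear(dim, codebook)

        def forward(self, thinker_hidden):
            h, _ = self.decoder(thinker_hidden)
            return self.audio_head(h).argmax(-1)  # codec tokens -> codec decoder -> waveform

    thinker, talker = ToyThinker(), ToyTalker()
    _, hidden = thinker(torch.randint(0, 1000, (1, 8)))
    print(talker(hidden).shape)  # torch.Size([1, 8])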

sosodev 12/10/2025|||
Is your work open source?
AndreSlavescu 12/15/2025||
At the moment, no, unfortunately. However, as far as open source alternatives go, the vLLM team has now published a separate repository for omni models:

https://github.com/vllm-project/vllm-omni

I have not yet tested whether this does full speech-to-speech, but it seems like a promising workspace for omni-modal models.

red2awn 12/10/2025|||
Nice work. Are you working on streaming input/output?
AndreSlavescu 12/10/2025||
Yeah, that's something we currently support. Feel free to try the platform out! No cost to you for now, you just need a valid email to sign up on the platform.
valleyer 12/11/2025||
I tried this out, and it's not passing the record (n.) vs. record (v.) test mentioned elsewhere in this thread. (I can ask it to repeat one, and it often repeats the other.) Am I not enabling the speech-to-speech-ness somehow?
AndreSlavescu 12/15/2025||
From my understanding of the above problem, this would be something to do with the model weights. Have you tested this with the transformers inference baseline that is shown on huggingface?

In our deployment, we do not actually tune the model in any way; this is all just using the base instruct model provided on Hugging Face:

https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Instruct

As for the potential concern around conversation turns: our platform is designed for one-off record -> response flows, but via the API you can build your own conversation agent to use the model.

sosodev 12/10/2025||||
That's unfortunate but not too surprising. This type of model is very new to the local hosting space.
whimsicalism 12/11/2025|||
Makes sense, I think streaming audio->audio inference is a relatively big lift.
red2awn 12/11/2025||
Correct, it breaks the single-prompt, single-completion assumption baked into the frameworks. Conceptually it's still prompt/completion, but for low-latency responses you have to do streaming KV cache prefill with a websocket server.
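Rough sketch of the shape of it: prefill incrementally as audio/text chunks arrive over a websocket, so decoding can start the moment the user stops. DummyModel is a made-up stand-in for a real engine; neither vLLM nor SGLang exposes an API like this today.

    import asyncio
    import websockets  # pip install websockets

    class DummyModel:
        def prefill(self, chunk, past=None):
            # A real engine would extend the KV cache here; we just collect chunks.
            return (past or []) + [chunk]

        async def decode_stream(self, past):
            for tok in f"heard {len(past)} chunks".split():
                yield tok  # a real engine would stream generated tokens/audio here

    model = DummyModel()

    async def handler(ws):
        past = None
        async for msg in ws:
            if msg == "<end_of_turn>":
                async for tok in model.decode_stream(past):  # cache is already warm
                    await ws.send(tok)
                past = None
            else:
                past = model.prefill(msg, past)  # runs while the user is still talking

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run forever

    if __name__ == "__main__":
        asyncio.run(main())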
whimsicalism 12/11/2025||
I imagine you have to start decoding many speculative completions in parallel to have true low latency.
ivape 12/10/2025|||
That's exciting. I doubt there are any polished voice chat local apps yet that you can easily plug this into (I doubt the user experience is "there" yet). Even stuff like Silly Tavern is near unusable, lots of work to be done on the local front. Local voice models are what's going to enable that whole Minority Report workflow soon enough (especially if commands and intent are determined at the local level, and the meat of the prompt is handled by a larger remote model).

This is part of programming that I think is the new field. There will be tons of work for those that can build the new workflows which will need to be primarily natural language driven.

sosodev 12/10/2025||
I did find this app: https://github.com/gabber-dev/gabber

The creator posted a little demo of it working with Qwen3 Omni that is quite impressive: https://www.youtube.com/watch?v=5DBFVe3cLto

He didn't include any details regarding how the model was running though

dsrtslnd23 12/10/2025||
It seems to be able to do native speech-to-speech.
sosodev 12/10/2025||
It does for sure. I did some more digging and it does real-time too. That's fascinating.
terhechte 12/10/2025||
Is there a way to run these Omni models on a MacBook, quantized via GGUF or MLX? I know I can run them in LM Studio or llama.cpp, but those don't have streaming microphone or webcam support.

Qwen usually provides example code in Python that requires CUDA and a non-quantized model. I wonder if there is by now a good open source project to support this use case?

tgtweak 12/10/2025||
You can probably follow the vLLM instructions for omni here, then use the included voice demo html to interface with it:

https://github.com/QwenLM/Qwen3-Omni#vllm-usage

https://github.com/QwenLM/Qwen3-Omni?tab=readme-ov-file#laun...

mobilio 12/10/2025||
Yes - there is a way: https://github.com/ggml-org/whisper.cpp
novaray 12/10/2025||
Whisper and Qwen Omni models have completely different architectures as far as I know
stevenhuang 12/10/2025||
Wayback, for those who can't reach it: https://web.archive.org/web/20251210164048/https://qwen.ai/b...
sim04ful 12/10/2025||
The main issue I'm facing with realtime responses (speech output) is how to separate non-diegetic outputs (e.g. thinking, structured outputs) from outputs meant to be heard by the end user.

I'm curious how anyone has solved this.

artur44 12/10/2025|
A simple way is to split the model’s output stream before TTS. Reasoning/structured tokens go into one bucket, actual user-facing text into another. Only the second bucket is synthesized. Most "thinking out loud" issues come from feeding the whole stream directly into audio.
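Sketch of the split, assuming the model marks reasoning with <think>...</think> (the exact markers vary by model, and synthesize() is just a placeholder for whatever TTS or audio-token step you use):

    def split_stream(tokens):
        in_think = False
        for tok in tokens:
            if tok == "<think>":
                in_think = True
            elif tok == "</think>":
                in_think = False
            elif in_think:
                yield "reasoning", tok   # keep for logs/UI, never voiced
            else:
                yield "speech", tok      # the only bucket that reaches the audio path

    def speak(tokens, synthesize):
        for channel, tok in split_stream(tokens):
            if channel == "speech":
                synthesize(tok)

    # toy demo: only "Here's" and "one:" get voiced
    speak(["<think>", "user", "wants", "a", "joke", "</think>", "Here's", "one:"],
          synthesize=print)

In practice you'd also buffer the speech tokens into whole clauses before synthesizing, otherwise the prosody suffers.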
pugio 12/10/2025||
There is no TTS here. It's a native audio output model which outputs audio tokens directly. (At least, that's how the other real-time models work. Maybe I've misunderstood the Qwen-Omni architecture.)
artur44 12/10/2025||
True, but even with native audio-token models you still need to split the model’s output channels. Reasoning/internal tokens shouldn't go into the audio stream; only user-facing content should be emitted as audio. The principle is the same whether the last step is TTS or audio token generation.
regularfry 12/11/2025||
There's an assumption there that the audio stream contains an equivalent of the <think>/</think> tokens. Every reason to think it should, but without seeing the tokeniser config it's a bit of a guess.
devinprater 12/10/2025||
Wow, just 32B? This could almost run on a good device with 64 GB RAM. Once it gets to Ollama I'll have to see just what I can get out of this.
plipt 12/10/2025||
I see that their HuggingFace link goes to some Qwen3-Omni-30B-A3B models that show a last updated date of September

The benchmark table in their article shows Qwen3-Omni-Flash-2025-12-01 (and the previous Flash) beating Qwen3-235B-A22B. How is that possible if this is only a 30B-A3B model? It's also confusing how that comparison column starts out with one model but switches models as you go down the table.

I don't see any Flash variant listed on their Hugging Face. Am I just missing it, or do these names specify a model only used for their API service, with no open weights to download?

apexalpha 12/10/2025||
I run these on a 48GB Mac because of the unified RAM.
aschobel 12/10/2025||
Looks to be API only. Bummer.
readyplayeremma 12/10/2025||
The models are right here, one of the first links in the post: https://huggingface.co/collections/Qwen/qwen3-omni

edit: Nevermind, in spite of them linking it at the top, they are the old models. Also, the HF demo is calling their API and not using HF for compute.

aschobel 12/10/2025||
It is super confusing. I also thought this initially was open weights.
Alifatisk 12/14/2025|||
It seems to be available on Qwen chat? https://chat.qwen.ai/settings/model?id=qwen3-omni-flash-2025...
binsquare 12/10/2025||
Does anyone else find a hard-to-pin-down lifelessness in the speech of these voice models?

Especially in the fruit pricing portion of the video for this model. It sounds completely normal, but I can immediately tell it is AI. Maybe it's the intonation or the overly stable rate of speech?

Lapel2742 12/10/2025||
IMHO it's not lifeless. It's just not overly emotional. I definitely prefer it that way. I do not want the AI to be excited. It feels so contrived.

On the video itself: interesting, but "ideal" was pronounced wrong in German. For a promotional video, they should have checked that with native speakers. On the other hand, it's at least honest.

nunodonato 12/10/2025||
I hate with a passion the over-Americanized "accent" of ChatGPT voices. Give me a bland one any day of the week.
wkat4242 12/11/2025||
Yeah that overly fake-excited voice type. Doesn't work for Europe at all. But indeed common in American customer service scenarios.
sosodev 12/10/2025|||
I think it's because they've crammed vision, audio, multiple voices, prosody control, multiple languages, etc into just 30 billion parameters.

I think ChatGPT has the most lifelike speech with their voice models. They seem to have invested heavily in that area while other labs focused elsewhere.

vessenes 12/10/2025|||
I'm not convinced it's end-to-end multimodal; if it isn't, you'll have a separate speech synthesis stage, and this will be part of the result. You could test by having it sing or do some accents, or have it talk back to you in an accent you give it.
esafak 12/10/2025|||
> Sounds completely normal but I can immediately tell it is ai.

Maybe that's a good thing?

colechristensen 12/10/2025||
I'm perfectly ok with and would prefer an AI "accent".
banjoe 12/10/2025||
Wow, crushing 2.5 Flash on every benchmark is huge. Time to move all of my LLM workloads to a local GPU rig.
embedding-shape 12/10/2025||
Just remember to benchmark it yourself first with your private task collection, so you can actually measure them against each other. Pretty much any public benchmark is unreliable at this point, and making model choices based on others' benchmarks is bound to leave you disappointed.
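Even something this small goes a long way. A minimal sketch, assuming OpenAI-compatible endpoints (which local servers like vLLM and most cloud services expose); the URLs, keys, model names, and tasks below are placeholders:

    from openai import OpenAI

    tasks = [
        {"prompt": "Summarize this support ticket: ...",
         "check": lambda out: "refund" in out.lower()},
        # ... your real, private tasks go here
    ]

    endpoints = {
        "local-qwen": OpenAI(base_url="http://localhost:8000/v1", api_key="none"),
        "cloud-flash": OpenAI(base_url="https://example.com/v1", api_key="YOUR_KEY"),
    }
    models = {"local-qwen": "Qwen3-Omni-30B-A3B-Instruct",
              "cloud-flash": "qwen3-omni-flash"}

    for name, client in endpoints.items():
        score = 0
        for task in tasks:
            resp = client.chat.completions.create(
                model=models[name],
                messages=[{"role": "user", "content": task["prompt"]}],
            )
            score += task["check"](resp.choices[0].message.content)
        print(f"{name}: {score}/{len(tasks)}")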
MaxikCZ 12/10/2025||
This. The latest benchmarks of DSv3.2spe hinted at it beating basically everything, yet in my testing even Sonnet is miles ahead in terms of both speed and accuracy.
red2awn 12/10/2025|||
Why would you use an Omni model for a text-only workload... There is Qwen3-30B-A3B.
skrunch 12/11/2025||
Except the image benchmarks are compared against 2.0; it seems suspicious that they would casually drop to an older model for those.
mohsen1 12/11/2025|
I'm having lots of success with Gemini Flash Live 2.5. I'm hoping 3.0 comes out soon. The benchmarks here claim better results than Gemini Live, but I have to test it. In the past I've always been disappointed with Qwen Omni models in my English-first use case...