Posted by meetpateltech 9 hours ago

Voxtral Transcribe 2 (mistral.ai)
606 points | 151 comments
mdrzn 8 hours ago|
There's no comparison to Whisper Large v3 or other Whisper models.

Is it better? Worse? Why do they only compare to gpt4o mini transcribe?

tekacs 8 hours ago||
WER is slightly misleading, but Whisper Large v3 WER is classically around 10%, I think, and 12% with Turbo.

The thing that makes it particularly misleading is that models that do transcription to lowercase and then use inverse text normalization to restore structure and grammar end up making a very different class of mistakes than Whisper, which goes directly to final form text including punctuation and quotes and tone.

But nonetheless, they're claiming such a lower error rate than Whisper that it's almost not in the same bucket.

tekacs 8 hours ago||
On the topic of things being misleading, the GPT-4o transcriber is a very _different_ transcriber from Whisper. I would say not better or worse, despite characterizations as such. So it is a little difficult to compare on just the numbers.

There's a reason that quite a lot of good transcribers still use V2, not V3.

satvikpendem 7 hours ago||
Different how?
GaggiX 8 hours ago||
Gpt4o mini transcribe is better and actually realtime. Whisper is trained to encode the entire audio (or at least 30s chunks) and then decode it.
mdrzn 8 hours ago|||
So "gpt4o mini transcribe" is not just whisper v3 under the hood? Btw it's $0.006 / minute

For the Whisper API online (with v3 large) I've found "$0.00125 per compute second", which is the absolute cheapest I've ever found.

breisa 5 hours ago|||
Deepinfra offers Whisper V3 at $0.00045 / minute of transcribed audio.
GaggiX 8 hours ago|||
>So it's not just whisper v3 under the hood?

Why should it be Whisper v3? They even released an open model: https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-26...

emmettm 8 hours ago|||
The linked article claims the average word error rate for Voxtral mini v2 is lower than that of GPT-4o mini transcribe.
GaggiX 8 hours ago||
Gpt4o mini transcribe is better than Whisper; the context is the parent comment.
fph 5 hours ago||
Is there an open source Android keyboard that would support it? Everything I find is based on Whisper, which is from 2022. Ages ago given how fast AI is evolving.
antirez 2 hours ago|
I wish I had a Google Keyboard that could easily run Whisper Medium. This is already great, but unfortunately the inference cost would be too high, incredibly slow. The problem with Whisper is not the inference quality: Medium and Large are incredible. It's that the Base model is not good enough, and it's the only one with fast inference on mobile devices.
satvikpendem 7 hours ago||
Looks like this model doesn't do realtime diarization; what model should I use if I want that? So far I've only seen paid models do diarization well. I've heard about Nvidia NeMo but haven't tried it, or even found where to try it out.
breisa 5 hours ago|
Not sure if it's "realtime", but the recently released VibeVoice-ASR from Microsoft does do diarization. https://huggingface.co/microsoft/VibeVoice-ASR
XCSme 6 hours ago||
Is it just me, or is an error rate of 3% really high?

If you transcribe a minute of conversation, you'll have like 5 words transcribed wrongly. In an hour podcast, that is 300 wrongly transcribed words.
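A quick back-of-the-envelope check (assuming roughly 165 spoken words per minute, which is just an assumed conversational pace):

```python
# Expected word errors at a 3% WER.
# 165 words/minute is an assumed conversational speaking rate.
wer = 0.03
words_per_minute = 165  # assumption, not from the article

errors_per_minute = wer * words_per_minute  # ~5 words
errors_per_hour = errors_per_minute * 60    # ~297 words

print(f"{errors_per_minute:.1f} errors/min, {errors_per_hour:.0f} errors/hour")
```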

cootsnuck 6 hours ago|
The error rate for human transcription can be as high as 5%.
qingcharles 41 minutes ago|||
I did transcription for a while in 2021. It is absurdly hard. Especially as these days humans only get the difficult jobs that AI has already taken a stab at.

The hardest one I did was for a sports network: a motocross event where most of what you could hear was the roar of the bikes. There were two commentators I had to transcribe over the top of that mess, and they were using the slang insider nicknames for all the riders, not their published names, so I had to sit and Google forums to find the names of the riders while I was listening. I'm not even sure how these local models would be able to handle that insanity at all, because they almost certainly lack enough domain knowledge.

XCSme 6 hours ago|||
Oh wow, I thought humans were at like a 0.1% error rate, if they're native speakers and aware of the subject being discussed.
zipy124 5 hours ago|||
I was skeptical upon hearing the figure, but various sources do indeed back it up, and [0] is a pretty interesting paper (old, but still relevant: human transcribers haven't changed in accuracy).

[0] https://www.microsoft.com/en-us/research/wp-content/uploads/...

XCSme 4 hours ago||
I think it's actually hard to verify how correct a transcription is, at scale. Curious where those error rate numbers come from, because they should test it on people actually doing their job.
rhdunn 3 hours ago||||
It can depend a lot on different factors like:

- familiarity with the accent and/or speaker;

- speed and style/cadence of the speech;

- any other audio that is happening that can muffle or distort the audio;

- etc.

It can also take multiple passes to get a decent transcription.

qingcharles 39 minutes ago||
You missed a giant factor: domain knowledge. Transcribing something outside of your knowledge realm is very hard. I posted above about transcribing the commentary of a motorbike race where the commentators only used the slang names of the riders.
Nimitz14 3 hours ago|||
Most of these errors will not be meaningful. Real speech is full of ambiguities. 3% is low.
serf 8 hours ago||
things I hate:

"Click me to try now!" banners that lead to a warning screen that says "Oh, only paying members, whoops!"

So, you don't mean 'try this out', you mean 'buy this product'.

Let's not act like it's a free sampler.

I can't comment on the model: I'm not giving them money.

ReadEvalPost 8 hours ago|
You can try it on HF: https://huggingface.co/spaces/mistralai/Voxtral-Mini-Realtim...
boobsbr 7 hours ago||
I'm impressed.
aavci 7 hours ago||
What are the cheapest device specs that this could realistically run on?
kamranjon 7 hours ago|
I haven't quite figured out if the open weights they released on huggingface amount to being able to run the (realtime) model locally. I hope so, though! For the larger model with diarization, I don't think they open-sourced anything.
IanCal 3 hours ago||
The HF page suggests yes, with vLLM.

> We've worked hand-in-hand with the vLLM team to have production-grade support for Voxtral Mini 4B Realtime 2602 with vLLM. Special thanks goes out to Joshua Deng, Yu Luo, Chen Zhang, Nick Hill, Nicolò Lucchesi, Roger Wang, and Cyrus Leung for the amazing work and help on building a production-ready audio streaming and realtime system in vLLM.

https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-26...

https://docs.vllm.ai/en/latest/serving/openai_compatible_ser...
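If that pans out, querying it might look roughly like this (an untested sketch: the model id is inferred from the HF page, and it assumes vLLM's OpenAI-compatible server exposes the /v1/audio/transcriptions endpoint for this model):

```python
# Hypothetical sketch: talking to a local vLLM server started with
# something like `vllm serve mistralai/Voxtral-Mini-4B-Realtime-2602`
# (model id inferred from the HF page, so treat it as an assumption).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("sample.m4a", "rb") as audio:
    result = client.audio.transcriptions.create(
        model="mistralai/Voxtral-Mini-4B-Realtime-2602",
        file=audio,
    )
print(result.text)
```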

sgt 2 hours ago||
What's the best way to train this further on a specific dialect or accent or even terminology?
antirez 8 hours ago||
Italian represents, I believe, the most phonetically advanced human language. It has the right compromise among information density, understandability, and the ability to speak much faster to compensate for the redundancy. It's as if it had error correction built in. Note that it's not just that it has the lowest error rate; it's also underrepresented in most datasets.
nindalf 6 hours ago||
I love seeing people from other countries share their own folk tales about what makes their countries special and unique. I've seen it up close in my country, and I always cringed when I heard my fellow countrymen come up with these stories. In my adulthood I'm reassured that it happens everywhere, and I find it endearing.

On the information density of languages: it is true that some languages have a more information-dense textual representation. But all spoken languages convey about the same information in the same time. Which is not all that surprising; it just means that human brains have an optimal range at which they process information.

Further reading: Coupé, Christophe, et al. "Different Languages, Similar Encoding Efficiency: Comparable Information Rates across the Human Communicative Niche." Science Advances. https://doi.org/10.1126/sciadv.aaw2594

antirez 6 hours ago||
Different representations at the same bitrate may have features that make one a lot more resilient to errors. This thing about Italian you will find in any benchmark of vastly different AI transcription models. You can find similar results in the way LLMs trained mostly on English usually generalize very well to Italian. All this despite Italian accounting for a marginal percentage of the training set. How do you explain that? I always cringe when people refute evidence.
hollowturtle 1 hour ago|||
> All this despite Italian accounting for marginal percentage of the training set.

Evidence?

testdelacc1 5 hours ago|||
Where is this evidence you’ve cited for your claims?
Archelaos 7 hours ago|||
This is largely due to the fact that modern Italian is a systematised language that emerged from a literary movement (whose most prominent representative is Alessandro Manzoni) to establish a uniform language for the Italian people. At the time of Italian unification in 1861, only about 2.5% of the population could speak this language.
gbalduzzi 7 hours ago||
The language itself was not invented for the purpose: it was the language spoken in Florence, then adopted by the literary movement and then selected as the national language.

It seems like the best tradeoff between information density and understandability actually comes from the deep Latin roots of the language.

mr_tox 3 hours ago|||
In the end, (our) Italian language wasn't optimized by engineers; it was refactored by poets.
ithkuil 55 minutes ago||
and disseminated to the entire peninsula by broadcast television featuring Mike Buongiorno
gbalduzzi 7 hours ago|||
I was honestly surprised to find it in first place, because I assumed English would be first, given the simpler grammar and the huge dataset available.

I agree with your belief; other languages have either lower density (e.g. German) or lower understandability (e.g. English).

riffraff 7 hours ago||
English has a ton of homophones, way more sounds that differ slightly (long/short vowels), and major pronunciation differences across major "official" languages (think Australia/US/Canada/UK).

Italian has one official standard (two, if you count IT_ch, but the difference is minor), doesn't pay much attention to stress and vowel length, and only has a few "confusable" sounds (gl/l, gn/n, double consonants, stuff you get wrong in primary school). Italian dialects would be a disaster tho :)

NewsaHackO 7 hours ago|||
The only knowledge I have about how difficult Italian is comes from Inglourious Basterds.
hackyhacky 6 hours ago|||
> the most phonetically advanced human language

That's interesting. As a linguist, I have to say that Haskell is the most computationally advanced programming language, having the best balance of clear syntax and expressiveness. I am qualified to say this because I once used Haskell to make a web site, and I also tried C++ but I kept on getting errors.

/s obviously.

Tldr: computer scientists feel unjustifiably entitled to make scientific-sounding but meaningless pronouncements on topics outside their field of expertise.

mmooss 6 hours ago||
At least some relatively well-known research finds that all languages have similar information density in terms of bits/second (~39 bits/second based on a quick search). Languages do it with different amounts of phonetic sound / syllables / words per bit and per second, but the bps comes out the same.

I don't know how widely accepted that conclusion is, what exceptions there may be, etc.
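To make the claimed tradeoff concrete, here's a toy illustration (the syllable rates and per-syllable information values below are invented for illustration, not taken from the paper):

```python
# Toy illustration: information rate = syllables/second x bits/syllable.
# Numbers are invented to show the tradeoff, not data from Coupé et al.
fast_low_density = 8.0 * 5.0   # fast speech, low info per syllable -> 40 bits/s
slow_high_density = 5.0 * 8.0  # slow speech, high info per syllable -> 40 bits/s
print(fast_low_density, slow_high_density)  # same bitrate either way
```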

gwerbret 6 hours ago||
I really wish those offering speech-to-text models provided transcription benchmarks specific to particular fields of endeavor. I imagine performance would vary wildly when using jargon peculiar to software development, medicine, physics, and law, as compared to everyday speech. Considering that "enterprise" use is often specialized or sub-specialized, it seems like they're leaving money on Dragon's table by not catering to any of those needs.
ccleve 4 hours ago|
This looks great, but it's not clear to me how to use it for a practical task. I need to transcribe about 10 years worth of monthly meetings. These are government hearings with a variety of speakers. All the videos are on YouTube. What's the most practical and cost-effective way to get reasonably accurate transcripts?
IanCal 3 hours ago||
If you use something like yt-dlp you can download the audio from the meetings, and you could try things out in Mistral's AI studio.

You could use their api (they have this snippet):

```
curl -X POST "https://api.mistral.ai/v1/audio/transcriptions" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -F model="voxtral-mini-latest" \
  -F file=@"your-file.m4a" \
  -F diarize=true \
  -F timestamp_granularities="segment"
```

In the api it took 18s to do a 20m audio file I had lying around where someone is reviewing a product.

There will, I'm sure, be ways of running this locally up and available soon (if they aren't on huggingface right now), but the API is $0.003/min. If it's something like 120 meetings (10 years of monthly ones), then it's roughly $20 if the meetings are 1hr each. Depending on whether they're 1 or 10 hours (or if they're weekly rather than monthly, or 10 parallel sessions or something), this might be a price you're willing to pay if you get the results back in an afternoon.
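If you'd rather script the whole batch, here's a rough Python equivalent of that curl call (untested sketch; the endpoint and form fields are taken from the snippet above, and the file names are placeholders):

```python
# Sketch: batch-transcribe downloaded audio via Mistral's transcription API.
# Endpoint and form fields mirror the curl snippet above.
import os
import requests

API_KEY = os.environ["MISTRAL_API_KEY"]
URL = "https://api.mistral.ai/v1/audio/transcriptions"

for path in ["meeting-2016-01.m4a", "meeting-2016-02.m4a"]:  # your yt-dlp output
    with open(path, "rb") as f:
        resp = requests.post(
            URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
            data={
                "model": "voxtral-mini-latest",
                "diarize": "true",
                "timestamp_granularities": "segment",
            },
        )
    resp.raise_for_status()
    with open(path + ".transcript.json", "w") as out:
        out.write(resp.text)
```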

edit: their realtime model can be run with vLLM; the batch model is not open.

isoprophlex 3 hours ago|||
- get an API key for this service

- make sure you have a list of all these YouTube meeting URLs somewhere

- ask your preferred coding assistant to write you a script that downloads the audio for these videos with yt-dlp & calls Mistral's API

- ????

- profit

jimmy76615 4 hours ago||
If they are on YouTube, try Gemini 3 Flash first. Use AI Studio; it lets you insert YouTube videos into context.