
Posted by freediver 10/27/2024

Moonshine, the new state of the art for speech to text (petewarden.com)
172 points | 36 comments
v7n 10/27/2024|
I gave this a shot using speech-to-speech¹ modified so that it skips the LLM/AI assistant part and just repeats back what it thinks I said and displays the text.

For longer sentences my perception is that Moonshine performs at 80-90% of what Whisper² could do, while using considerably less resources. When trying shorter, two-word utterances it nosedived for some reason.

These numbers don't mean much, but when paired with MeloTTS, Moonshine and Whisper² ate up 1.2 and 2.5 GB of my GPU's memory, respectively (rough measurement sketch below).

¹ https://github.com/huggingface/speech-to-speech
² distil-whisper/distil-large-v3
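
Re the memory numbers: a rough sketch of how one could grab peak usage with torch (load_pipeline is a stand-in for whichever model you wire in, not a real function):

    import torch

    torch.cuda.reset_peak_memory_stats()

    pipeline = load_pipeline()   # stand-in: load Moonshine or Whisper (+ MeloTTS) here
    pipeline.run("sample.wav")   # stand-in: run one full transcription pass

    peak_gb = torch.cuda.max_memory_allocated() / 1e9
    print(f"Peak GPU memory: {peak_gb:.1f} GB")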

thunderbong 10/27/2024||
GitHub

https://github.com/usefulsensors/moonshine

magicalhippo 10/27/2024||
Having played with the GB-sized Whisper models, I'm amazed to learn the 80 MB version is actually useful for anything.

I was aiming for an agent-like experience, and found the accuracy drop below what I'd consider useful levels even above the 1GB mark.

Perhaps for shorter, few word sentences like "lights on"?

rjwilmsi 10/27/2024||
I've played quite a lot with all the whisper models up to "medium" size, mostly via faster-whisper, as the original OpenAI whisper only seems to optimize performance for GPU.

I would agree that the "tiny" model has a clear drop-off in accuracy, not good enough for anything real (even when transcribing your own speech, the error rate means too much editing is needed). In my experience, accuracy can be more of a problem on shorter sentences because there is less context to help it.

I think for serious use (on GPU) it would be the "medium" or "large" models only. There is now a "large-turbo" model which is apparently faster than "medium" (on GPU) while being more accurate - haven't tried it yet.

On CPU for personal use (via faster-whisper) I have found "base" is usable and "small" is good. On a laptop CPU, though, "small" is slow for real time. "Medium" is more accurate, though mostly just on punctuation, but far too slow for CPU. Of course all models will get some uncommon surnames and place names wrong.

Since OpenAI have re-released the "large" models twice and now done a "large-turbo" I hope that they will re-release the smaller models too so that the smallest models become more useful.

These Moonshine models are compared to the original OpenAI whisper, but really I'd say they need to be compared to faster-whisper: multiple projects are faster than the original OpenAI whisper.
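
For anyone who wants to try it, a minimal faster-whisper call looks roughly like this; model size, device and compute_type are the main knobs:

    from faster_whisper import WhisperModel

    # "small" with int8 quantization is a reasonable CPU starting point;
    # swap in "medium"/"large-v3" and device="cuda" for GPU use
    model = WhisperModel("small", device="cpu", compute_type="int8")

    segments, info = model.transcribe("audio.wav", beam_size=5)
    for segment in segments:
        print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")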

heigh 10/27/2024|||
There are libraries that can help with this, such as SpeechRecognition for Python. If all you're looking for is short terms with minimal background noise, this should do it for you.
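
Something along these lines, roughly (recognize_sphinx runs offline via pocketsphinx; recognize_google uses an online API instead):

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("command.wav") as source:  # short clip, minimal background noise
        audio = r.record(source)

    # offline recognition via pocketsphinx; r.recognize_google(audio) is the online alternative
    print(r.recognize_sphinx(audio))
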
Randor 10/27/2024||
Looks like Moonshine is competing against the Whisper-tiny model. There isn't any information in the paper on how it compares to the larger whisper-large-v3.
magicalhippo 10/27/2024||
Yeah I was just mildly surprised such a small variant would be useful. Will certainly try when I get back home.
heigh 10/27/2024||
This looks awesome! Actually something I’m looking at playing with this evening!
heigh 10/27/2024|
I don't mean to give negative feedback, as I don't consider myself a full-blown expert with Python/ML. However, for someone with passing experience, it fails out of the box for me, with and without the typically required 16 kHz sample-rate audio files (of various codecs/formats).

Was really hoping it would be a quick, brilliant solution to something I'm working on now. Perhaps I'll dig in and invest in it, but I'm not sure I have the luxury right now to do the exploratory work... Hope someone else has better luck than I did!
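
For anyone else hitting format trouble: a quick librosa/soundfile snippet like this converts anything to the 16 kHz mono WAV these models usually expect, which at least rules format issues in or out (filenames here are just placeholders):

    import librosa
    import soundfile as sf

    # resample any input (mp3, flac, ...) to 16 kHz mono WAV, the usual ASR input format
    audio, rate = librosa.load("input.mp3", sr=16000, mono=True)
    sf.write("input_16k.wav", audio, rate)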

krisoft 10/27/2024||
> I don't mean to give negative feedback

I would recommend being more specific, then. Did you have trouble installing it? Did it give you an error? Was there no output? Was the output wrong? Is it not working on your files but working on the example files? Is it solving a different problem than the one you have?

heigh 10/27/2024||
Installing was okay, but it was not running on any of the sample files I had. This is the output I got:

    UserWarning: You are using a softmax over axis 3 of a tensor of shape (1, 8, 1, 1). This axis has size 1. The softmax operation will always return the value 1, which is likely not what you intended. Did you mean to use a sigmoid instead?
      warnings.warn(

I know this isn't the right place for this (the right place is raising an issue on GitHub), but since you asked, I posted...

keveman 10/27/2024||
Moonshine author here: the warning is from the Keras library and is benign. If you didn't get any other output, it was probably because the model thought there was no speech (not saying there really was none). We uploaded an ONNX version that is considerably faster than the Torch/JAX/TF versions and usable with less package bloat. I hope you'll give it another shot.
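
Basic usage is along these lines (see the repo README for the exact current API and model names; "sample.wav" is a placeholder for a 16 kHz WAV file):

    import moonshine

    # transcribe with the small "tiny" model; "moonshine/base" is the larger option
    print(moonshine.transcribe("sample.wav", "moonshine/tiny"))
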
bbor 10/27/2024||
Very, very cool. Will have to try it out! It’s all fun and games until a universal translator comes out in glasses or earpiece form…
rsiqueira 10/27/2024||
* Which languages is it available in?

* Does the system automatically detect the language?

* What are the hardware requirements for it to work?

perihelion_zero 10/27/2024||
Nice. Looks like a way to get live text transcripts on tiny devices without using APIs.
pabs3 10/27/2024||
Wonder where the training data for this is.
heigh 10/27/2024|
They supply their paper in the Git repo, here: https://github.com/usefulsensors/moonshine/blob/main/moonshi...

The section "3.2. Training data collection & preprocessing" covers what you're inquiring about: "We train Moonshine on a combination of 90K hours from open ASR datasets and over 100K hours from our own internally-prepared dataset, totalling around 200K hours. From open datasets, we use Common Voice 16.1 (Ardila et al., 2020), the AMI corpus (Carletta et al., 2005), GigaSpeech (Chen et al., 2021), LibriSpeech (Panayotov et al., 2015), the English subset of multilingual LibriSpeech (Pratap et al., 2020), and People's Speech (Galvez et al., 2021). We then augment this training corpus with data that we collect from openly-available sources on the web. We discuss preparation methods for our self-collected data in the following."

It does continue...
