Posted by kwindla 1/7/2026

Building voice agents with Nvidia open models (www.daily.co)
126 points | 20 comments
amelius 1/7/2026|
I've been using festival under Linux.

https://manpages.ubuntu.com/manpages/trusty/man1/festival.1....

But it is quite old now and pre-dates the DL/AI era.

Does anybody know of a good modern replacement that I can "apt install"?

sigmonsays 1/7/2026|
I used piper with a model I found online. It's _A LOT_ better than festival afaik. I'm not sure you can apt install it though.

echo "hello" | piper --model ~/.local/share/piper/en_US-lessac-medium.onnx --output_file - | aplay

gunalx 1/7/2026||
You can in fact apt install piper.
amelius 1/7/2026||
That's a different piper.

    piper - GTK application to configure gaming devices
gunalx 1/20/2026||
^piper-tts exists.
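For anyone following amelius's original question, a minimal sketch, assuming the package really is named piper-tts as the comment above suggests (it is also published on PyPI under that name), and reusing the voice model path from the earlier command:

    # Assumption: the TTS engine is packaged as piper-tts; otherwise fall back to `pip install piper-tts`
    sudo apt install piper-tts
    # Speak a line using a downloaded .onnx voice (path borrowed from the comment above)
    echo "hello" | piper --model ~/.local/share/piper/en_US-lessac-medium.onnx --output_file - | aplay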
smusamashah 1/8/2026||
Do any of the top models let you pause and think while speaking? I have to speak non-stop to the Gemini assistant and ChatGPT, which makes voice mode feel useless and unnatural. Especially for non-English speakers, probably; I sometimes have to think more to translate my thoughts into English.
fragmede 1/8/2026|
Have you tried talking to ChatGPT in your native tongue? I was blown away by my mother speaking her native tongue to ChatGPT and having it respond in that language. (It's ever so slightly not a mainstream one.)
smusamashah 1/9/2026||
Even in my own language I can't talk without any pauses.
jjcm 1/7/2026||
These have gotten good enough to really make command-by-voice interactions pleasant. I'd love to try this with Cursor - just use it fully with voice.
rickydroll 1/8/2026||
<pedantic>Voice recognition identifies who you are; speech recognition identifies what you say.</pedantic>

Example:

Voice recognition: arrrrrrgh! (Oh, I know that guy. He always gets irritated when someone uses the terms speech and voice recognition wrong.)

Speech Recognition: "Why can't you guys keep it straight? It is as simple as knowing the difference between hypothesis and theory."

nowittyusername 1/7/2026||
This is perfect for me. I just started working on the voice-related stuff for my agent framework, and this will be of real use. Thanks.
jauntywundrkind 1/7/2026||
There's also the excellent and likewise open-source unmute.sh, which alas is also Nvidia-only at this point: https://unmute.sh/
vikboyechko 1/8/2026|
The game show is pretty good. Have a feeling this project will consume all my attention this week, thanks for the tip.
atonse 1/8/2026||
Can't wait for this to land in MacWhisper. I like the idea of the streaming dictation especially when dictating long prompts to Claude Code.
deckar01 1/8/2026|
It supports Turing T4, but not Ampere…
nsbk 1/8/2026|
Any ideas on how to add Ampere support? I have a use case in mind that I would love to try on my 3090 rig.
deckar01 1/8/2026||
Magpie-TTS needs a kernel compiled targeting Ampere, but it appears to be closed source. The kernel was compiled for the 2018 T4 and for 2025 consumer cards, but not for the 2020-2024 consumer cards.
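A quick way to check where a given card falls, as a sketch: query the GPU's compute capability (Turing T4 is 7.5, an Ampere 3090 is 8.6) and, if you can locate the shipped kernel binary, list which SM architectures it embeds. The library filename below is hypothetical, and the compute_cap query needs a reasonably recent driver:

    # Print the GPU name and compute capability (requires a fairly recent nvidia-smi)
    nvidia-smi --query-gpu=name,compute_cap --format=csv
    # List the SM architectures embedded in a prebuilt CUDA binary
    # (libmagpie_kernels.so is a hypothetical filename, not the repo's actual artifact)
    cuobjdump --list-elf libmagpie_kernels.so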
nsbk 1/11/2026||
I actually forked the repo and modified the Dockerfile and build/run scripts to target Ampere, and the whole setup is running seamlessly on my 3090: Magpie is running fine using under 3 GB of memory, with ~2 GB for Nemotron STT and ~18 GB for Nemotron Nano 30b. Latencies are great and the turn detection works really well! (A rough sketch of the arch-targeting change is below.)

I'm going to use this setup as the base for a language learning App for my gf :)
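The fork itself isn't shown here, so this is only a rough sketch of how a CUDA/PyTorch extension build is usually pointed at Ampere, assuming the relevant kernels are built from source somewhere in the Dockerfile; 8.6 is the compute capability of the 3090, and the .cu filename below is hypothetical:

    # Sketch: make the extension build emit Ampere (sm_86) kernels.
    # TORCH_CUDA_ARCH_LIST is the standard knob for PyTorch C++/CUDA extension builds;
    # the exact variable the repo's build scripts honor may differ.
    export TORCH_CUDA_ARCH_LIST="8.6"
    # Or, when calling nvcc directly (magpie_kernel.cu is a hypothetical source file):
    nvcc -gencode arch=compute_86,code=sm_86 -c magpie_kernel.cu -o magpie_kernel.o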

deckar01 1/25/2026||
I got your fork working (also on a 3090). I was not impressed with the latency or the recommended LLM’s quality.
nsbk 1/25/2026||
Make sure you’re using the nemotron-speech ASR model. I added support for Spanish via Canary models, but those have about 10x the latency: 160 ms with nemotron-speech vs 1.5 s with Canary.

For the LLM I’m currently using Mistral-Small-3.2-24B-Instruct instead of Nemotron 3, and it works well for my use case.