Posted by georgemandis 6/25/2025
The cheaper 2.5 Flash made noticeably more mistakes; for example, it didn't output numbers correctly, while the Pro model did.
As for OpenAI, their gpt-4o-transcribe model did worse than 2.5 Flash, completely mangling names of places and people. It also doesn't label the conversation in turns; it just outputs a single continuous piece of text.
Did I miss that the task was time-sensitive?
If so: could you split the audio file, pitch-shift the latter half (say, by an octave), and overlay the two halves to get a shorter file, then transcribe it and join the result back into linear form once the tagging is removed? (You could insert a short prerecorded cue so you know where the second half starts.) If a pitch change alone isn't enough to keep the voices apart, maybe manipulate the formants as well.
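Something like this, as a rough sketch, assuming librosa and soundfile are installed and the input is mono; the file names and the 12-semitone shift are just placeholders:

    import librosa
    import numpy as np
    import soundfile as sf

    y, sr = librosa.load("talk.mp3", sr=16000, mono=True)
    half = len(y) // 2
    first, second = y[:half], y[half:half * 2]

    # Shift the second half up an octave without changing its length.
    second_up = librosa.effects.pitch_shift(second, sr=sr, n_steps=12)

    # Overlay the two halves so the combined clip is half the original length.
    combined = first + second_up
    combined /= max(1.0, np.max(np.abs(combined)))  # avoid clipping

    sf.write("combined.wav", combined, sr)

Whether the transcription model can actually keep the two overlaid voices separate is the open question, of course.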
I'm confused because I read in various places that the YouTube API doesn't provide access to transcripts ... so how do all these YouTube transcript extractor services do it?
I want to build my own YouTube summarizer app. Any advice and info on this topic greatly appreciated!
https://github.com/jdepoix/youtube-transcript-api
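For reference, a minimal sketch of what using that library looks like (the exact API has changed between versions, so treat this as approximate; the video ID is a placeholder):

    from youtube_transcript_api import YouTubeTranscriptApi

    # Fetch the English transcript for a video ID and flatten it to plain text.
    segments = YouTubeTranscriptApi.get_transcript("VIDEO_ID", languages=["en"])
    text = " ".join(seg["text"] for seg in segments)
    print(text[:500])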
For our internal tool that transcribes local city council meetings on YouTube (often 1-3 hours long), we found that those automatic captions were never available, though.
(Our tool usually 'processes' the videos within ~5-30 mins of being uploaded, so that's probably also why none are available 'officially' yet.)
So we use yt-dlp to download the highest-quality audio and then process it with Whisper via Groq, which is way cheaper (~$0.02-0.04/hr with Groq compared to ~$0.36/hr via OpenAI's API). Sometimes Groq errors out, so there's built-in support for Replicate and Deepgram as well.
We run yt-dlp on our remote Linode server, and I wrote a Python script that automatically logs in to YouTube with a "clean" account and extracts the proper cookies.txt file. We also generate a 'po token' using another tool:
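In case it's useful, here's roughly what the Groq step looks like with their Python SDK; the file name and model name are placeholders, so check Groq's docs for the current Whisper model IDs:

    from groq import Groq

    client = Groq()  # reads GROQ_API_KEY from the environment

    # Send the downloaded audio to Groq's Whisper transcription endpoint.
    with open("council_meeting.m4a", "rb") as f:
        result = client.audio.transcriptions.create(
            file=("council_meeting.m4a", f.read()),
            model="whisper-large-v3",         # assumed model name
            response_format="verbose_json",   # includes segment timestamps
        )

    print(result.text)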
https://github.com/iv-org/youtube-trusted-session-generator
Both cookies.txt and the "po token" get passed to yt-dlp when running on the Linode server, and I haven't had to regenerate anything in over a month. It runs smoothly every day.
(Note that I don't use cookies/po_token when running locally at home; it usually works fine there.)
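For anyone wanting to replicate the download step, this is roughly what the invocation looks like from Python. The file names are placeholders, and the exact po_token extractor-arg format depends on your yt-dlp version, so double-check the docs:

    import subprocess

    def download_audio(url: str, po_token: str) -> None:
        # Shell out to yt-dlp with the cookies and PO token described above.
        subprocess.run(
            [
                "yt-dlp",
                "-f", "bestaudio",              # highest-quality audio stream
                "-x", "--audio-format", "m4a",  # extract audio only
                "--cookies", "cookies.txt",     # cookies from the "clean" account
                "--extractor-args", f"youtube:po_token=web+{po_token}",
                "-o", "%(id)s.%(ext)s",
                url,
            ],
            check=True,
        )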
It's frustrating to have to jump through all these hoops just to extract transcripts when the YouTube Data API already offers reasonable quotas for free API calls ... it would be nice if they allowed transcripts too.
Do you think the various YouTube transcript extractor services all follow a method similar to yours?
./yt-dlp --skip-download --write-sub --write-auto-sub --sub-lang en --sub-format json3 <youtube video URL>
You can also feed the same command a playlist or channel URL and it'll run through and grab the transcripts for every video in the playlist or channel. But that was a few months ago, so for all I know they've tightened down more hatches since then.
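If you'd rather parse the output yourself, the json3 files that command writes are easy to flatten into plain text for an LLM prompt. A quick sketch (the events/segs/utf8 field names come from YouTube's json3 timedtext format; the file name is a placeholder):

    import json

    def json3_to_text(path: str) -> str:
        # Flatten a YouTube json3 subtitle file into a single block of text.
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
        lines = []
        for event in data.get("events", []):
            text = "".join(seg.get("utf8", "") for seg in event.get("segs", []))
            if text.strip():
                lines.append(text.strip())
        return " ".join(lines)

    print(json3_to_text("video.en.json3")[:500])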
I use this free tool to extract those and dump the transcripts into an LLM with basic prompts: https://contentflow.megalabs.co