Posted by bertman 11/12/2025

Yt-dlp: External JavaScript runtime now required for full YouTube support (github.com)
1106 points | 627 comments | page 2
yupyupyups 11/12/2025|
Even when the so-called "ad-pocalypse" happened, this wasn't as big of an issue as it is today.

Google being extra stingy seems to correlate well with the AI boom (curse). I suspect there are companies running ruthless bots that scrape TBs of videos from YouTube. Not just new popular videos that sit on fast storage, but old obscure ones that probably require more resources to fetch. This is unnatural, and runs contrary to the behaviour pattern of normal users that YT is optimized for.

I think AI companies abusing the internet is why things are getting more constrained in general. If I'm right, they deserve the bulk of the blame, imo.

narrator 11/12/2025||
This will happen in the real world when robot mass production gets going. We'll climb the exponential until we run into the resource limits of the planet at meteoric speed.

Yes, the regulators will try to manage it, but eventually every decision about who can use the robot/AI genie for what will go through them, because of the genie's enormous strain on natural resources, and you'll be back to a planned economy where the central planners are the environmental regulators.

There are hard decisions to make as well. Who gets to open a rare earth processing plant and have a tailings pond that completely ecologically destroys that area? Someone has to do it to enable the modern economy. It's kind of like we won't have a good AI video generator and will always be behind China if some YouTube creators refuse to license their content for AI training. Same goes for the rare earth processing tailings pond. Nobody can agree on where it's going to go, so China wins.

Alex2037 11/12/2025|||
>I suspect there are companies running ruthless bots scraping TBs of videos from YouTube.

certainly, but for Google, that bandwidth and compute is a drop in the bucket. at the scale Google operates, even if there were a hundred such bots (there aren't - few companies can afford to store exabytes of data), those wouldn't even register on the radar. of course, like the other social media oligarchs, Google wants to be the only entity with unrestricted access to their catalog of other people's content, but even that isn't their motivation here - "login to prove you're not a bot :^)" was ALWAYS going to happen, even without the AI bubble.

enshittification is unstoppable and irreversible, and Google is its prophet - what they don't kill, they turn to shit.

>I think AI-companies abusing the internet is why things are getting more constrained in general.

even before the AI bubble, every other fuckass blog with 0.5 daily visitors was behind Cloudflare, for the same reason those fuckass blogs are built with FOTM javascript frameworks - there's nowt so queer as webshits.

yupyupyups 11/12/2025||
>every other fuckass blog with 0.5 daily visitors was behind Cloudflare

Lol, that's so true.

>Google wants to be the only entity with unrestricted access to their catalog of other people's content,

Yeah, data is money. Reddit is doing the same thing, but even more aggressively. You want API access? Pay an astronomical amount of money for what is other people's content. And Reddit hosts a much smaller amount of media relative to YT.

For YT, I'm not so sure the increase in traffic is a drop in the bucket for them. It depends a lot on which videos are being fetched. Cheap storage is cheap for storing large amounts of data, not for serving an unusual amount of (random) access.

Who knows.

HumblyTossed 11/12/2025||
Hopefully most of what the bots are ruthlessly scraping is all the AI slop that is filling YT. Hopefully garbage in, garbage out will kill off all the AI nonsense.

yes, "AI" can be useful, but nonsense and slop are not.

rob 11/12/2025||
We use this for AI transcriptions internally on our Linode VPS.

It's been working great by itself for the most part since the beginning of the year, with only a couple of hiccups along the way.

We do use a custom cookies.txt file generated on the server as well as generate a `po_token` every time, which seems to help.

(I originally thought everything would just get blocked from a popular VPS provider, but surprisingly not?)

Most recently, though, we were getting tons of errors (e.g. HTTP 429) until we switched to the `tv_embedded` client, which seems to have resolved things for the most part.
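
Roughly, a setup like that might look like the following with yt-dlp's embedded Python API. This is a minimal sketch: the cookie path, token, and URL are placeholders, and the extractor-arg spellings (`player_client`, `po_token`, the "client.context+token" format) follow yt-dlp's current docs but are a moving target.

    from yt_dlp import YoutubeDL

    # Placeholders: export your own cookies file and generate a fresh
    # PO token; the token string follows yt-dlp's documented
    # "client.context+token" format and must match your setup.
    opts = {
        "cookiefile": "/srv/app/cookies.txt",
        "extractor_args": {
            "youtube": {
                "player_client": ["tv_embedded"],
                "po_token": ["CLIENT.CONTEXT+YOUR_PO_TOKEN"],
            }
        },
        "outtmpl": "%(id)s.%(ext)s",
    }

    with YoutubeDL(opts) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])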

1vuio0pswjnm7 11/12/2025||
"Support for YouTube without a JavaScript runtime is now considered "deprecated." It does still work somewhat; however, format availability will be limited, and severely so in some cases (e.g. for logged-in users). "

The devil is in the details

There are some formats, perhaps the one(s) the user wants, that do not require a JS runtime

Interesting that "signing up" for a website publishing public information and "logging in" may not always work in the user's favor. For example, here they claim it limits format availability

"Format availability without a JS runtime is expected to worsen as time goes on, and this will not be considered a "bug" but rather an inevitability for which there is no solution. It's also expected that, eventually, support for YouTube will not be possible at all without a JS runtime."

So they expect this format availability to worsen, not improve, in the future
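
One way to check what is still available is to list formats without downloading; a minimal sketch using yt-dlp's Python API (the URL is a placeholder), run with and without a JS runtime installed to compare:

    from yt_dlp import YoutubeDL

    # Fetch metadata only, then print the available formats.
    with YoutubeDL({"quiet": True}) as ydl:
        info = ydl.extract_info(
            "https://www.youtube.com/watch?v=VIDEO_ID", download=False)
        for f in info["formats"]:
            print(f["format_id"], f.get("ext"),
                  f.get("resolution"), f.get("format_note"))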

jdewerd 11/12/2025||
How long until it comes with a DRM AI and then my anti-DRM AI will have to fight it in a virtual arena (with neon lights and killer soundtrack, of course)?
mellosouls 11/12/2025||
Discussed here a few weeks ago:

https://news.ycombinator.com/item?id=45358980

Yt-dlp: Upcoming new requirements for YouTube downloads - 1244 points, 620 comments

goku12 11/12/2025||
Just one question. I see all these 3rd-party clients solving the problem separately. Isn't it easier for everyone to build a unified decoder backend that exposes a stable and consistent interface for all the frontends? That way, it will get more attention, and each modification will have to be done only once.

Since JS is the big issue here, the backend itself could be written in JS, TS, or something else that compiles to WASM. That way, the decoder doesn't have to be split between two separate codebases. Deno also allows the bundle to be optionally compiled into a native executable that can run without having to install Deno separately.

tracker1 11/12/2025||
Alternatively, I wonder if this might be an impetus to move the bulk of the codebase itself to TS/JS and just use Deno/Node/Bun, or otherwise to move to Rust with rusty_v8 or deno_core directly.
pwdisswordfishy 11/13/2025|||
It will also save Google time trying to fingerprint it!
goku12 11/13/2025||
How so?
zahlman 11/12/2025||
You mean, like the yt-dlp-ejs package?
goku12 11/13/2025||
Hmm! Close enough, I guess. I didn't notice that. But I hope that it has a flexible interface and that all projects adopt it.
zahlman 11/13/2025||
From what I can tell, it basically includes the JavaScript to run, and then the main program tells it which engine to feed it to. From some issue tracker discussion, it does the latter in the common way of "look up an executable on $PATH or use a user-provided path, and shell out to it". I didn't check where the logic is located for building the command line needed for each supported engine.
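
In sketch form, that pattern might look something like this; a hypothetical illustration of "find an engine, shell out", not yt-dlp-ejs's actual code:

    import shutil
    import subprocess
    import tempfile

    # Hypothetical sketch of the pattern described above: find an engine
    # on $PATH, write the bundled JS to a temp file, and shell out to it.
    ENGINES = {  # engine name -> how to build its command line
        "deno": lambda exe, path: [exe, "run", path],
        "node": lambda exe, path: [exe, path],
        "bun":  lambda exe, path: [exe, "run", path],
    }

    def run_js(source: str) -> str:
        for name, make_argv in ENGINES.items():
            exe = shutil.which(name)  # a user-provided path could override this
            if exe is None:
                continue
            with tempfile.NamedTemporaryFile(
                    "w", suffix=".js", delete=False) as f:
                f.write(source)
            result = subprocess.run(make_argv(exe, f.name),
                                    capture_output=True, text=True, check=True)
            return result.stdout
        raise RuntimeError("no JavaScript engine found on $PATH")

    print(run_js("console.log(6 * 7)"), end="")  # prints 42
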
globular-toast 11/12/2025||
It's quite worrying. A sizeable chunk of the cultural and educational material produced in the last decade is under the control of greedy bastards who will never have enough. Unfortunately, downloading the video data is only part of it. Even if we shared it all on BitTorrent, it's nowhere near as useful without the index and metadata.
anal_reactor 11/12/2025||
From the preservation point of view yes. But realistically, it's been the norm throughout human history that irrelevant culture simply gets removed.
eviks 11/13/2025||
So is much of relevant culture; it's not like there is a magic preservation wand that sorts by relevance before removal.
anal_reactor 11/13/2025||
There is. If something is relevant, it'll organically be kept somewhere.
crazygringo 11/12/2025||
What are you talking about? It's under the control of the creators. YT doesn't get exclusive copyright on users' content. Those creators can upload wherever they want.

And YT isn't "greedy bastards". They provide a valuable service, for free, that is extremely expensive to run. Do you think YT ought to be government-funded or a charity or something?

OGWhales 11/12/2025||
> Do you think YT ought to be government-funded

Benn Jordan made a pretty compelling video on this topic, arguing that the existing copyright system and its artifacts are actually not that great, and that a government system might actually be better: https://www.youtube.com/watch?v=PJSTFzhs1O4

I will say that is something I would not have considered reasonable prior to watching his video.

worldsavior 11/12/2025||
yt-dlp feels like a whole army fighting Google. Users report, and the army performs.
dewey 11/12/2025|
If by army you mean underfunded open source volunteers then yes.
worldsavior 11/12/2025|||
That's the point: they don't have the funding, but they still give off the sense of the fight and power of an army.
Almondsetat 11/12/2025|||
If it's free, can you even talk about underfunding?
dewey 11/12/2025||
You can donate to a free project, including yt-dlp (https://github.com/yt-dlp/yt-dlp/blob/master/Maintainers.md#...)
Almondsetat 11/12/2025||
The developers can arbitrarily decide their own goals and always claim underfunding; that's why the term is meaningless in this context.
dewey 11/12/2025||
I'm not sure what you are arguing for or against, but it is a known fact that big corporations are built on top of the work of volunteers (curl, ffmpeg) who mostly have to beg for funding.

Example from yesterday: https://thenewstack.io/ffmpeg-to-google-fund-us-or-stop-send...

Almondsetat 11/12/2025||
Nobody "has to" beg for funding, because nobody is forcing them to work! You are underfunded if you are forced to accomplish a task and are given too little money. In community driven projects this is not the case. The only thing keeping the developers from taking a nice vacation is their abstract own sense of duty, which is their prerogative, but completely optional
dekhn 11/12/2025||
Frankly, I think this is inevitable - it's practically one of the laws of computing: any sufficiently complex system will ultimately require a Turing-complete language, regardless of its actual necessity.

See also: """Zawinski's Law states: "Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can."""" and """Greenspun's tenth rule of programming is an aphorism in computer programming and especially programming language circles that states:[1][2]

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."""

(from the above I conclude that if you want to take over the computer world, you should implement a mail reader with an embedded Lisp).

tensegrist 11/12/2025||
previously: https://news.ycombinator.com/item?id=45358980
api 11/12/2025|
Someday it will have to launch a VM with a copy of Chrome installed and use an AI model to operate the VM to make it look like a human, then use frame capture inside the VM to record the video and transcode it.