Posted by lightandlight 3 days ago
It's not a "Show HN" because there's no way for others to try it (yet).
Some things I'm curious about:
* Is anyone else here doing something similar?
* Would anyone like to use this service?
* Are there other YouTube-meets-news-feed features that you'd love to see?
The problem was that while I migrated my subscriptions over in a big chunk using some janky JavaScript, that process didn't keep anything in sync, so the list gradually got out of date; eventually the initial problem was fixed and I removed them all. I haven't yet seen an open-source, self-hosted solution for getting the subscription list and providing an Atom feed, but it's definitely something I want.
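The pieces for a self-hosted version do seem to exist: the YouTube Data API v3 can list your subscriptions, and every public channel already exposes an Atom feed. A minimal TypeScript sketch, assuming you've already obtained an OAuth access token with the youtube.readonly scope (an API key alone can't read your subscriptions):

```typescript
// Sketch: list the authenticated user's subscriptions via the YouTube
// Data API v3 and print each channel's public Atom feed URL.
const API = "https://www.googleapis.com/youtube/v3/subscriptions";

async function listSubscriptionFeeds(accessToken: string): Promise<string[]> {
  const feeds: string[] = [];
  let pageToken = "";
  do {
    const url = `${API}?part=snippet&maxResults=50&mine=true&pageToken=${pageToken}`;
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    if (!res.ok) throw new Error(`YouTube API error: ${res.status}`);
    const data = await res.json();
    for (const item of data.items ?? []) {
      const channelId = item.snippet.resourceId.channelId;
      // Every public channel has an Atom feed at this URL.
      feeds.push(`https://www.youtube.com/feeds/videos.xml?channel_id=${channelId}`);
    }
    pageToken = data.nextPageToken ?? "";
  } while (pageToken);
  return feeds;
}
```

Run that on a schedule and diff the output against your feed reader's subscription list, and you'd have the sync that a one-off migration script lacks.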
This is very good to know!
> If you have a feed reader system there is no need to subscribe [via YouTube] in the first place. You’ve obviated that system.
This approach works, and it's a great way to subscribe to public channels without a YouTube account. The main reason I'm not doing it is that I want to subscribe via YouTube.
> It would be great if it was more discoverable
Oh, hopefully there's a browser extension that detects feeds on a page, lights up, and provides a menu. Shame that the YouTube mobile app isn't similarly extensible.
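Such extensions generally rely on standard feed autodiscovery: pages advertise their feeds through `<link rel="alternate">` tags in the document head, which a content script can query. A rough sketch of the detection half (hypothetical extension code, not any particular extension's):

```typescript
// Feed autodiscovery: collect <link rel="alternate"> tags that advertise
// RSS or Atom feeds on the current page.
function discoverFeeds(doc: Document): { title: string; href: string }[] {
  const selector =
    'link[rel="alternate"][type="application/atom+xml"], ' +
    'link[rel="alternate"][type="application/rss+xml"]';
  return Array.from(doc.querySelectorAll<HTMLLinkElement>(selector)).map((link) => ({
    title: link.title || doc.title,
    href: link.href, // resolves relative URLs against the page for us
  }));
}

// In an extension this would run as a content script, lighting up the
// toolbar icon whenever discoverFeeds(document) returns anything.
```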
I added all my subscriptions once, but it quickly became overwhelming, so I deleted them all. I’m not sure whether bundling them all into a single feed would be better for me; I could bookmark my subscriptions page for the same effect. I find I’m in a very different headspace when I’m looking to watch YouTube vs. when I’m reading my RSS feeds.
I personally don't care; big tech CEOs already said at the dawn of AI that they don't care about robots.txt.
Additionally, I have a project that can read RSS links and serve the contents as a JSON response.
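The general shape of such a service is pretty small. A sketch of the idea (not the commenter's actual project) using the fast-xml-parser npm package:

```typescript
import { XMLParser } from "fast-xml-parser";

// Sketch: fetch an RSS 2.0 feed and re-serve its items as JSON.
async function rssToJson(feedUrl: string) {
  const xml = await (await fetch(feedUrl)).text();
  const doc = new XMLParser().parse(xml);
  // RSS 2.0 layout is rss.channel.item[]; a feed with a single item
  // parses as a bare object rather than an array, so normalize it.
  const raw = doc?.rss?.channel?.item ?? [];
  const items = Array.isArray(raw) ? raw : [raw];
  return items.map((item: any) => ({
    title: item.title,
    link: item.link,
    published: item.pubDate,
  }));
}
```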
robots.txt is used to HELP bots. It tells them which pages to visit and which pages are not intended for consumption. If a bot goes ahead and scrapes everything anyway, that's entirely its own prerogative. Particularly for less sophisticated bots without a lot of storage, a good robots.txt can keep them from getting stuck on dynamically generated or "useless for indexing" content.
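As an illustration (the paths and domain are made up), a robots.txt that steers crawlers away from crawl traps while pointing them at the pages worth indexing:

```
User-agent: *
# Infinite, dynamically generated pages: a classic crawl trap
Disallow: /calendar/
# Search result permutations aren't useful to index
Disallow: /search

# Point well-behaved crawlers at the pages worth indexing
Sitemap: https://example.com/sitemap.xml
```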