Posted by memet_rush 5 days ago
Ask HN: Why is there no P2P streaming protocol like BitTorrent?
I was thinking most people nowadays have at least 30 Mbps upload, and a 1080p stream only needs ~10 Mbps while 720p needs ~5. Also, I think it wouldn't have to be live; people would definitely not mind some amount of lag. I was thinking the big-O for packets propagating out through the network should be log(N), since a master sharing the content can be connected to 10 slaves, each of those to 10 more, and so on.
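A quick sanity check on that log(N) intuition (a Python sketch assuming an idealized 10-ary tree with no churn, no slow peers, and no duplicate delivery):

    # Depth of a distribution tree where every peer re-uploads to `fanout`
    # others. Idealized: no churn, no stragglers, no duplicate delivery.
    def tree_depth(viewers: int, fanout: int = 10) -> int:
        depth, reached = 0, 1
        while reached < viewers:
            reached *= fanout
            depth += 1
        return depth

    print(tree_depth(100_000, 10))  # 5 hops from the source to 100k viewers

So in the ideal case a segment reaches 100,000 viewers in about 5 relay hops; real swarms need more to absorb churn and peers that can't sustain the fanout.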
The other limitation I could think of is prioritizing who gets the packets first, since peers range from 1 Gbps connections down to under 10 Mbps, and deprioritizing leechers to keep them from degrading the stream.
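That's roughly what BitTorrent's choke/unchoke logic does. A minimal sketch of the idea (hypothetical names, not any real client's code):

    from dataclasses import dataclass

    @dataclass
    class Peer:
        addr: str
        bytes_contributed: int  # upload this peer has sent us recently

    def pick_unchoked(peers: list[Peer], slots: int = 4) -> list[Peer]:
        # Give upload slots to the peers contributing the most back,
        # which naturally deprioritizes pure leechers.
        ranked = sorted(peers, key=lambda p: p.bytes_contributed, reverse=True)
        return ranked[:slots]

Real clients also keep one "optimistic unchoke" slot that rotates to a random peer, so newcomers get a chance to prove themselves.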
Does anyone have knowledge of why it isn't a thing yet, though? It's super easy to find streams on websites, but they're all 360p or barely load. I saw the original creator of BitTorrent was building something like this over 10 years ago, and it seems to be a dead project. Also, this is ignoring the huge time commitment it would take to program something like this. I want to know whether it's technically possible to have streams of, let's say, 100,000 people, and why or why not.
Just some thoughts, thanks in advance!
I work on low latency and live broadcast. The appropriate latency of any video stream is the entire duration of it. Nobody else seems to share this opinion though.
What does that mean?
The steps to go live are pretty simple on the server side (assuming HLS):
1. Stream to your encoder, ideally at a bitrate higher than the transcoded bitrate.
2. Encode and transcode your video, ideally to 540/720/1080p at 30fps. Each resolution will have its own bitrate, so maybe 2/3.5/5.5 Mbps respectively. Assume 2-second segments and a manifest duration of 10 seconds, so you have 5 segments out there at any given time (though there are usually a few more hanging around).
3. Put the 3 newest segments (one per rendition) to storage, and rewrite the three media playlists with the new segment URLs. (The top-level manifest doesn't list segments, so it doesn't need rewriting; only the media playlists do.) There's a sketch of this rewrite after the list.
4. Delete the older segment(s) (optional)
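Here's a minimal Python sketch of the rewrite in step 3 for a single rendition (hypothetical filenames; a real packager such as ffmpeg does this for you):

    SEGMENT_SECONDS = 2
    WINDOW = 5  # 5 x 2s segments = the 10s manifest duration above

    def write_media_playlist(path: str, newest_seq: int) -> None:
        first = max(0, newest_seq - WINDOW + 1)
        lines = [
            "#EXTM3U",
            "#EXT-X-VERSION:3",
            f"#EXT-X-TARGETDURATION:{SEGMENT_SECONDS}",
            f"#EXT-X-MEDIA-SEQUENCE:{first}",
        ]
        for seq in range(first, newest_seq + 1):
            lines.append(f"#EXTINF:{SEGMENT_SECONDS:.3f},")
            lines.append(f"seg_{seq}.ts")
        # No #EXT-X-ENDLIST tag: its absence is what marks a playlist live.
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    write_media_playlist("720p.m3u8", newest_seq=123)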
So when the client requests the top-level manifest (the m3u8), it'll fetch the three media playlists (the per-resolution sub-manifests) and choose the appropriate resolution. It'll also start loading the segments. Ideally it would look at the playlist and fetch the latest segment, so it starts nearer to "now."
Then the client will periodically re-fetch the media playlist to pick up new segments (the playlist is marked as live; VoD manifests don't require reloading). The refresh interval must be no longer than the segment duration, which is in the manifest (the EXT-X-TARGETDURATION tag).
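A sketch of that client loop (hypothetical URL, using the Python requests library; real players like hls.js do far more, e.g. adaptive bitrate switching):

    import time
    import requests

    PLAYLIST_URL = "https://example.com/live/720p.m3u8"  # hypothetical
    BASE = PLAYLIST_URL.rsplit("/", 1)[0]

    seen: set[str] = set()
    while True:
        playlist = requests.get(PLAYLIST_URL).text
        # Segment entries are the non-tag lines of a media playlist.
        for seg in (l for l in playlist.splitlines() if l and not l.startswith("#")):
            if seg not in seen:
                seen.add(seg)
                data = requests.get(f"{BASE}/{seg}").content
                print(f"fetched {seg}: {len(data)} bytes")  # hand to the decoder
        time.sleep(2)  # re-poll at least once per segment duration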
All that takes time. It takes time for the server to encode, time for the encoder to put the file(s), time for the client to fetch the manifests, and time for the client to fetch a video segment.
Looking at the above sequence, a client can generally be 0-10 seconds behind everyone else, depending on how the client behaves. And that's on top of being a few seconds behind "live," because receiving, encoding, and putting files takes time.
So can you do p2p live? As long as you relax the constraints on what you mean by "live," yes. As you can imagine, the chain of latency keeps growing the more peers a segment passes through. And that segment is only really good for 2 seconds (or up to 10 seconds, if the client is sloppy). If live means "within 20 seconds of now," then yes, you can definitely do it. The tighter that time window gets, the less likely you'll be able to do it. You might be able to do it with a lower-bandwidth stream, but even TLS negotiation takes time. Does your client not use TLS? That will save you time.
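Rough arithmetic on how relay hops eat that window (a sketch; all numbers are assumptions, not measurements):

    SEGMENT_SECONDS = 2.0
    BITRATE_MBPS = 5.5        # the 1080p rendition from upthread
    UPLOAD_MBPS = 30.0        # assumed residential upload
    FANOUT = 5                # children each peer feeds; 5 x 5.5 = 27.5 Mbps
    PER_HOP_RTT_S = 0.05      # assumed 50 ms of network latency per hop

    segment_mbits = BITRATE_MBPS * SEGMENT_SECONDS
    # A peer feeding FANOUT children splits its upload between them.
    transfer_s = segment_mbits / (UPLOAD_MBPS / FANOUT)
    per_hop_s = transfer_s + PER_HOP_RTT_S

    for hops in (1, 4, 8):
        print(f"{hops} hops: ~{hops * per_hop_s:.1f}s behind the origin")

With these numbers each hop adds roughly 1.9 seconds, and fanout 5 needs 8 hops to cover 100,000 viewers (5^8 is about 390k), putting the last tier ~15 seconds behind the origin: fine if "live" means within 20 seconds, hopeless at 5. Note the fanout also caps itself: feeding 5 children at 5.5 Mbps already needs 27.5 Mbps of upload.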
I imagine that cutting out the cost of live-streaming services ($$$) and SaaS plays a large role here.
If the goal is to cut costs — like vendors trying to avoid AWS/CDN bills — that’s a very different problem than building for censorship resistance or resilience.
Without a clear “why,” the tradeoffs (latency, peer churn, unpredictable bandwidth) are hard to justify. Centralized infra is boring but reliable — and maybe that's good enough for 99% of use cases.
The interesting question is: what’s the niche where the pain is big enough to make P2P worth it?
Even “modern” cities like NYC are limited to a MAXIMUM of 30 Mbps upstream due to ISP monopolies and red tape.
It’s getting better, but Spectrum is still literally the only ISP available for many city residents, and their offerings are so lopsided that their highest-end package is a whopping 980/30.
That’s right: if you saturate most of that 980 Mbps downstream, the TCP ACK traffic alone will gladly eat into that 30 Mbps upstream, leaving you with just about zero headroom for anything else.
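The arithmetic behind that (a sketch; assumes full 1500-byte frames and delayed ACKs, one ~64-byte ACK per two packets):

    DOWN_MBPS = 980
    MTU_BYTES = 1500       # assumed full-size frames
    ACK_BYTES = 64         # TCP ACK plus headers, roughly
    PKTS_PER_ACK = 2       # delayed ACKs: one ACK per two segments

    pkts_per_s = DOWN_MBPS * 1e6 / (MTU_BYTES * 8)
    ack_mbps = pkts_per_s / PKTS_PER_ACK * ACK_BYTES * 8 / 1e6
    print(f"~{ack_mbps:.0f} Mbps upstream spent on ACKs alone")  # ~21 Mbps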
But the reality is that for 99% of people YouTube and Twitch work just fine.
Plus most residential ISPs have really poor upload speed, and very restrictive data caps.
* Asymmetric network links: upload is slow, especially on cellular
* Data caps, where both download and upload count against the quota
* Some ISPs are very hostile to p2p, sometimes as government policy (China banned "residential CDNs")
* NAT (see the hole-punching sketch below)
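On the NAT point: p2p stacks work around it with UDP hole punching. A minimal sketch (assumes a rendezvous server has already told each peer the other's public address, which is the hard part; it doesn't work through symmetric NATs, which is why real stacks fall back to TURN-style relays):

    import socket

    def punch(local_port: int, peer: tuple[str, int]) -> socket.socket:
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", local_port))
        # Outbound packets open a mapping in our NAT; the peer does the
        # same, so after a few exchanges both NATs pass traffic through.
        for _ in range(5):
            sock.sendto(b"punch", peer)
        return sock

    # Hypothetical endpoint; each peer runs this with the other's address.
    sock = punch(5000, ("203.0.113.7", 5000))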