Hacker News | Sean-Der's comments

An SFU is nice because it gives everyone privacy. Neither your peers nor your ISP can see who you are communicating with.

That said, if everyone is trusted, I love that we can have conversations over the internet without depending on anyone else :)


There's nothing bad about an SFU, particularly the version you wrote, which forms the basis of LiveKit. It would be my first choice for supporting larger groups in Briefing anyway. If the traffic is E2EE, it doesn't matter that an SFU is involved. The critical part, in my opinion, is the signalling: this is where the initial communication is established. In the current version of my app, whose source code is yet to be published, this can happen via an untrusted server.

What’s painful about running LiveKit? What would make running a WebRTC server easier?

The ecosystem around RTMP is bonkers.

RTMP has SO many users for video (Twitch, YouTube, etc.), yet you have librtmp, which has so many forks. OBS has its own version in-tree...

And at the same time, people are still trying to add extensions to RTMP!

RTMP has held back the use cases I have cared about since ~2015 so I am excited to see people embrace other options.


That's exciting! When you were evaluating it, did everything about the protocol/APIs fit your needs?

Is it just that features/software need to be implemented?


I wouldn't say I'm done evaluating it, and as a spare-time project, my NVR's needs are pretty simple at present.

But WebCodecs is just really straightforward. It's hard to find anything to complain about.

If you have an IP camera sitting around, you can run a quick WebSocket+WebCodecs example I threw together: <https://github.com/scottlamb/retina> (try `cargo run --package client webcodecs ...`). For one of my cameras, it gives me <160ms glass-to-glass latency, [1] with most of that being the IP camera's encoder. Because WebCodecs doesn't supply a particular jitter buffer implementation, you can just not have one at all if you want to prioritize liveness, and that's what my example does. A welcome change from using MSE.

Skipping the jitter buffer also made me realize that with one of my cameras, I had a weird pattern where up to six frames would pile up in the decode queue until a key frame and then start over, which without a jitter buffer is hard to miss at 10 fps. It turns out that even though this camera's H.264 encoder never reorders frames, they hadn't bothered to say so in their VUI bitstream restrictions, so the decoder had to introduce additional latency just in case. I added some logic to "fix" the VUI and now its live stream is more responsive too. So the problem I had wasn't exactly MSE's fault, but MSE made it hard to understand because all the buffering was a black box.

[1] https://pasteboard.co/Jfda3nqOQtyV.png


What was the WebRTC bug? I would love to help! I saw at work that Firefox doesn't properly implement [0], which I wanted to go fix after FFmpeg + WHEP.

If you are still struggling with WebRTC problems, I would love to help. Pion has a Discord, and https://webrtcforthecurious.com helps a bit with understanding the underlying stuff, which makes it easier to debug.

[0] https://datatracker.ietf.org/doc/html/rfc8445#section-7.2.5....


I really thought that OBS + WebRTC would be a few months' project. It ended up being way stickier than I expected.

Most days I’m more excited about this space than about Pion. It’s just more fun to make software for users than for businesses mostly doing AI, maybe?


If anyone is using/testing WebRTC I would love to hear how it is working for them :) I am hoping Simulcast makes an impact with smaller streamers/site operators.

* Cheaper servers. More competition, and I want to see people running their own servers.

* Better video quality. Encoding from the source is going to be better than transcoding.

* No more bad servers. With E2E encryption via WebRTC, you send video to your audience and the server isn't able to modify or surveil it.

* Better latency. No more time lost to transcoding. I love low-latency streaming where people are connected to a community, not just blasting one-way video.


I built an open source tool to help people debug/understand their live stream quality better. I would really appreciate people's feedback! https://streamsniff.com/

I see a constant stream of people saying 'my video is blurry' or 'why do I have so much latency' and I wanted to build something that helped people understand/diagnose. Also wanted to make something that runs in the browser so you can share/send a link to a friend (or on discord) to make debugging easier.


https://github.com/sean-der/stream-sniff is the code.

I wanted to help people debug/understand their live stream quality better. I would really appreciate people's feedback!

I am hoping to move it to https://github.com/glimesh org soon so it can sit next to broadcast-box.

