Moonfire NVR looks like an amazing project that I can learn a lot from for an entirely different project with streams and stuff that I’m interested in and have zero experience with!
Also great that it’s written in Rust.
Is there any way to test it without rpi/camera?
If I understand the readme correctly, you store frames individually (as JPEGs?) on disk and construct flexible mp4 streams on the fly. Naturally I would have assumed that this would be inefficient, so I’m wondering if I got this right — I'm not very familiar with stream/video/codec tech.
Without a Raspberry Pi, yes. It should run on any Unix-like OS. I've tested Linux/arm32, Linux/x86-64, and macOS/x86-64. (For the last, install ffmpeg via homebrew first.)
Without an IP camera...hmm, there are probably some public RTSP live streams somewhere. Not sure offhand.
> If I understand the readme correctly, you store frames individually (as JPEGs?) on disk and construct flexible mp4 streams on the fly. Naturally I would have assumed that this would be inefficient, so I’m wondering if I got this right — I'm not very familiar with stream/video/codec tech.
No, I store the video stream in the compressed form the camera gave it to me. Currently that's H.264; it wouldn't be hard to add H.265 support as well. I break it apart into roughly one-minute segments at convenient locations. The schema design doc talks about that here: https://github.com/scottlamb/moonfire-nvr/blob/master/design...
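To illustrate the idea of cutting at "convenient locations": H.264 can only be cut at a keyframe (IDR frame), so a new segment begins at the first keyframe once the running segment reaches roughly a minute. Here's a minimal sketch of that boundary decision; `Frame`, `segment_starts`, and `SEGMENT_SECS` are illustrative names for this example, not Moonfire NVR's actual code.

```rust
// Sketch: choose segment boundaries in a compressed stream. A segment
// may only start at a keyframe, and we cut at the first keyframe once
// ~60 seconds have accumulated, so segments are "roughly" one minute.

#[derive(Debug)]
struct Frame {
    pts_secs: f64, // presentation timestamp, in seconds
    is_key: bool,  // true for an H.264 IDR frame (a safe cut point)
}

const SEGMENT_SECS: f64 = 60.0;

/// Returns the frame indices at which each segment begins.
fn segment_starts(frames: &[Frame]) -> Vec<usize> {
    let mut starts = Vec::new();
    let mut seg_begin = 0.0;
    for (i, f) in frames.iter().enumerate() {
        if i == 0 || (f.is_key && f.pts_secs - seg_begin >= SEGMENT_SECS) {
            starts.push(i);
            seg_begin = f.pts_secs;
        }
    }
    starts
}

fn main() {
    // Two minutes of video at one frame per second, keyframe every 10 s.
    let frames: Vec<Frame> = (0..120)
        .map(|i| Frame { pts_secs: i as f64, is_key: i % 10 == 0 })
        .collect();
    println!("{:?}", segment_starts(&frames)); // segments begin at frames 0 and 60
}
```

Because the cut waits for a keyframe, real segments won't be exactly 60 seconds — the actual length depends on the camera's keyframe interval.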
.mp4 serving will aggregate those together (maybe clipping the start and end segment) to give you a .mp4 segment for any time range of interest. It comes up with an mp4::File struct which knows what video segments to serve and maps byte locations to parts of the .mp4 container format. I don't have a good doc about how this works other than the source code right now. https://github.com/scottlamb/moonfire-nvr/blob/master/src/mp... [edit: and you probably won't be successful in understanding it without having a pdf of the ISO/IEC 14496-12 specification open next to it.] Here's some debug output for generating a five-minute video segment: https://pastebin.com/Wzfz7BF7
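The byte-mapping idea can be sketched like this: the served .mp4 is a virtual file made of slices — some generated on the fly (box headers, the index), some read straight from on-disk sample data — and answering a range request means finding which slice a given absolute byte offset falls in. `Slice` and `locate` are hypothetical names for this sketch, not the actual mp4::File internals.

```rust
// Sketch: a virtual .mp4 as a sequence of slices, with a lookup that
// maps an absolute byte offset to (slice index, offset within slice).
// A real implementation would serve generated bytes for container
// boxes and read sample data from disk for the mdat slices.

struct Slice {
    len: u64,           // byte length of this piece of the virtual file
    desc: &'static str, // what it is: a generated box, raw sample data, etc.
}

/// Walk the slices, subtracting each length until `pos` lands inside one.
fn locate(slices: &[Slice], mut pos: u64) -> Option<(usize, u64)> {
    for (i, s) in slices.iter().enumerate() {
        if pos < s.len {
            return Some((i, pos));
        }
        pos -= s.len;
    }
    None // past the end of the virtual file
}

fn main() {
    let slices = [
        Slice { len: 32, desc: "ftyp box (generated)" },
        Slice { len: 1200, desc: "moov box / index (generated)" },
        Slice { len: 9000, desc: "mdat: segment sample data (on disk)" },
    ];
    if let Some((i, off)) = locate(&slices, 40) {
        println!("byte 40 -> slice {} ({}), offset {}", i, slices[i].desc, off);
    }
}
```

With the slice lengths fixed up front, no part of the file has to be materialized until a request actually touches it, which is what makes serving an arbitrary time range cheap.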
Storing individual frames as JPEGs would indeed be inefficient in all sorts of ways: recording CPU (you have to decode the H.264 and re-encode it as JPEGs), disk space, disk seeks, playback bandwidth, browser CPU, etc. My understanding is this is how Zoneminder currently works. I imagine it worked better with the cameras Zoneminder was originally designed for: low-resolution, low-fps webcams that didn't do their own H.264 encoding.