
I could be wrong, but my understanding of some sound cards is that they have essentially a single memory buffer they read from when instructed to play a sound. Most sound cards let the OS split that buffer into two halves and raise an interrupt when one half finishes playing.

Interrupt prioritization doesn't help much, because the sound data is likely being generated in user mode, while the playback-complete interrupt is handled in kernel mode, so servicing it requires a transition back to user mode for further processing. Receiving data from a network card, by contrast, requires no transfer back to user mode, so it has implicit priority over sound generation. Therefore, network processing is likely to starve out sound generation, barring choices like those described.

(This is somewhat hearsay, corrections welcome :))



Needing to wake up usermode is not necessarily an insurmountable obstacle; under Linux you can use a PREEMPT config. A network card storming the system with interrupts can also be mitigated: under Linux, network device drivers use a hybrid interrupt/polling approach via NAPI. Combining both should, in theory, allow a system to generate sound smoothly under the described conditions.


IMO you shouldn't really need to wake user mode that often. Just let user code buffer the output samples well ahead of where the hardware needs them, and have the kernel do the copies when needed.


Of course, if you can do that, it works well. Sometimes, though, you need low latency.



