I'm using Icecast to stream live audio from internal microphones, and I want listeners to experience as little latency as possible.
A naive solution would be to simply access http://myhostname:8000/my_mountpoint to get the stream, but the `<audio>` tag does internal buffering before playing, which results in pretty high latency.
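In code, the naive approach is just this (using the same mountpoint URL as above):

```js
// Hand the mountpoint straight to an audio element and let the
// browser buffer as it sees fit. Simple, but high latency.
const player = new Audio('http://myhostname:8000/my_mountpoint');
player.play();
```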
Current solution: I used the ReadableStream API to read chunks of the response, decode them (using `decodeAudioData` of the Web Audio API), and play them by routing the decoded data to an AudioContext destination (the internal speakers). This works and brings the latency down significantly.
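Roughly, what I'm doing looks like this (a simplified sketch of the approach above; the chunking and back-to-back scheduling details are illustrative, not my exact code):

```js
const ctx = new AudioContext();
let playhead = 0; // time at which the next decoded buffer should start

async function streamMountpoint(url) {
  const response = await fetch(url);
  const reader = response.body.getReader(); // ReadableStream reader

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    try {
      // This is the fragile step: Chrome tolerates partial MP3 data here,
      // while Firefox and Safari apparently do not (see below).
      // (Note: older Safari only supports the callback form of decodeAudioData.)
      const buffer = await ctx.decodeAudioData(value.buffer);
      const src = ctx.createBufferSource();
      src.buffer = buffer;
      src.connect(ctx.destination); // internal speakers
      playhead = Math.max(playhead, ctx.currentTime);
      src.start(playhead);
      playhead += buffer.duration;
    } catch (e) {
      console.warn('decodeAudioData failed on this chunk', e);
    }
  }
}

streamMountpoint('http://myhostname:8000/my_mountpoint');
```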
Problem: The Streams API, while experimental, should technically work on the latest Chrome, Safari, Opera, and Firefox (after setting a particular flag). However, I'm having problems with `decodeAudioData` in all browsers except Chrome and Opera. My belief is that Firefox and Safari cannot decode partial MP3 data, because I usually hear a short activation of the speakers when I start streaming. On Safari, the success callback of `decodeAudioData` is never called, and Firefox simply reports `EncodingError: The given encoding is not supported`.
Are there any workarounds to at least get this working on Safari and Firefox? Is the `decodeAudioData` implementation actually different between Chrome and Safari, such that one works on partial MP3 data and the other doesn't?
Don't use Icecast if you need sub-second latency!
Don't use Icecast if you need sub-10-second latency and don't have full control over the whole chain of software and network!
Yes, this is an answer, not a comment. Icecast is not designed for such use cases. It was designed for 1-to-n bulk broadcast of data over HTTP in a non-synchronized fashion.
What you are describing sounds like you should really consider something that was designed to be low latency, like WebRTC.
If you think you really should use Icecast, please explain why, because your question suggests otherwise. I'm all for more Icecast use (I'm its maintainer, after all), but its application should make sense.