I work at a telehealth company, and we use connected medical devices to provide the doctor with real-time information from this equipment; the equipment is operated by a trained health professional.
The devices produce video and audio. Right now we stream them with PeerJS (a peer-to-peer connection), but we are trying to move away from that and use a Raspberry Pi whose only job is to stream the data (audio and video).
Because the equipment is supposed to be used under a doctor's instructions, the doctor needs to receive the data in real time.
But we also need the trained health professional to see what they are doing, so we need a local feed from the equipment.
How we capture audio and video
We are using FFmpeg with a Go client that manages the FFmpeg processes and streams them to an SRS server. This works, but we get a 2-3 second delay when streaming the data (RTMP out of FFmpeg, HTTP-FLV on the front end).
FFmpeg settings:
("ffmpeg", "-f", "v4l2", "-i", "*/video0", "-f", "flv", "-vcodec", "libx264", "-x264opts", "keyint=15", "-preset", "ultrafast", "-tune", "zerolatency", "-fflags", "nobuffer", "-b:a", "160k", "-threads", "0", "-g", "0", "rtmp://srs-url")
My questions
- Is there a way for this setup to achieve low latency (<1 s), both for the nurse and for the doctor?
- Is the way I want to achieve this sound? Is there a better way?
Flow schema
Data exchange and use case flow: [diagram not shown]

Note: The nurse and doctor use HTTP-FLV to play the live stream, for low latency.
In your scenario, the latency is introduced by two parts: FFmpeg in the RPI, and the player.

FFmpeg in RPI
I noticed that you have already set some args; you can see the full help with `ffmpeg --help full` to check these params.

- `keyint` equals `-g`, so please remove `keyint` and set the fps (`-r`). Set `-r 15 -g 15`, which sets the GOP to 1s at 15fps.
- The x264 options `preset` and `tune` are useful for low latency, but you also need to set `profile` to turn off B-frames. Set `-profile:v baseline -preset ultrafast -tune zerolatency` for lower latency.
- You set a wrong flag: `-fflags nobuffer` is for the decoder (player); for the encoder you should use `-fflags flush_packets` instead.

The CLI for FFmpeg, converted to your params:
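A sketch with these changes applied, assuming `/dev/video0` is your capture device and `rtmp://srs-url/live/livestream` is your publish URL (keep your audio input options as they are):

```bash
ffmpeg -f v4l2 -i /dev/video0 \
    -vcodec libx264 -r 15 -g 15 \
    -profile:v baseline -preset ultrafast -tune zerolatency \
    -b:a 160k -threads 0 \
    -f flv -fflags flush_packets \
    rtmp://srs-url/live/livestream
```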
However, I think these settings only help once you also change your player settings, because the bottleneck is in the player now (latency 1~3s).
Player
For HTTP-FLV, please use `conf/realtime.conf` for the SRS server, and use `ffplay` to test the latency. I think the latency should be <1s, better than an H5 player, which uses MSE; you could compare the latency of the two.
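For example (a sketch; running SRS from its source directory and the stream name `livestream` are assumptions):

```bash
# Start SRS with its low-latency config (from the SRS source directory).
./objs/srs -c conf/realtime.conf

# Measure end-to-end latency with ffplay; -fflags nobuffer disables the
# player-side buffering discussed above.
ffplay -fflags nobuffer rtmp://your-srs-ip/live/livestream
```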
However, you can't have your users use ffplay; it's only for testing during development. So we must use a low-latency H5 player, and that means WebRTC.
Please configure SRS with `conf/rtmp2rtc.conf`, which allows you to publish with FFmpeg over RTMP in low latency and play the stream with WebRTC. When SRS is started, there is a WebRTC player, for example http://localhost:8080/players/rtc_player.html; please read more about WebRTC in the SRS documentation.
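A sketch of the full path under the same assumptions (SRS serves the player on port 8080 by default; WebRTC needs UDP port 8000 reachable and a CANDIDATE address the browser can connect to):

```bash
# Start SRS in RTMP-in, WebRTC-out mode; CANDIDATE is the IP the
# browser can reach over UDP.
CANDIDATE="your-srs-ip" ./objs/srs -c conf/rtmp2rtc.conf

# Publish from the RPI over RTMP exactly as before:
#   ffmpeg ... -f flv rtmp://your-srs-ip/live/livestream

# Play in the browser with the bundled WebRTC player:
#   http://your-srs-ip:8080/players/rtc_player.html
#   (stream: webrtc://your-srs-ip/live/livestream)
```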
The URLs are very similar:

- rtmp://ip/live/livestream
- http://ip/live/livestream.flv
- http://ip/live/livestream.m3u8
- webrtc://ip/live/livestream

If you use the WebRTC player, the latency should be ~500ms and very stable.