I wonder if anyone is able to help with this conundrum...? On an RPi 4, running the AWS Labs WebRTC SDK's sample app (https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c/blob/master/samples/kvsWebRTCClientMasterGstreamerSample.c), I have edited the GStreamer pipeline to send three video streams from USB webcams/HDMI capture, plus an audio stream from one of the cameras' mics. It's working really nicely... except:
When testing only the video streams (via https://matwerber1.github.io/aws-kinesisvideo-webrtc-react/), latency is very low. But once I add the audio, everything starts off in sync and then the video gradually falls roughly 2 seconds behind. One alternative pipeline setup with a single camera had the opposite effect: the audio gradually slipped out of sync until it was about 2 seconds behind.
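To quantify the drift, I've been planning to attach a pad probe to each appsink and log buffer PTS values, then diff the two logs. This is only a rough sketch of what I had in mind, assuming `pipeline` is the GstElement returned by gst_parse_launch() in the sample (the sink names match my pipeline below):

#include <gst/gst.h>

/* Log the PTS of every buffer arriving at an appsink's sink pad.
 * user_data carries the sink name so the log shows which stream it is. */
static GstPadProbeReturn
log_pts_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  g_print ("%s PTS: %" GST_TIME_FORMAT "\n",
           (const gchar *) user_data, GST_TIME_ARGS (GST_BUFFER_PTS (buf)));
  return GST_PAD_PROBE_OK;
}

/* Attach the probe to the named appsink inside the pipeline. */
static void
add_pts_probe (GstElement *pipeline, const gchar *sink_name)
{
  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), sink_name);
  GstPad *pad = gst_element_get_static_pad (sink, "sink");

  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
                     log_pts_cb, (gpointer) sink_name, NULL);
  gst_object_unref (pad);
  gst_object_unref (sink);
}

/* After gst_parse_launch() in the sample:
 *   add_pts_probe (pipeline, "appsink-video");
 *   add_pts_probe (pipeline, "appsink-audio");
 */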
This is my pipeline as added to the sample app:
"v4l2src do-timestamp=TRUE device=/dev/video0 ! "
"video/x-raw,width=720,height=480 ! "
"videomixer name=mix sink_1::ypos=10 sink_1::xpos=10 sink_2::ypos=10 sink_2::xpos=180 ! "
"queue ! videoconvert ! "
"x264enc bframes=0 speed-preset=veryfast bitrate=1024 byte-stream=TRUE tune=zerolatency ! "
"video/x-h264,stream-format=byte-stream,alignment=au,profile=high,framerate=30/1 ! "
"appsink sync=TRUE emit-signals=TRUE name=appsink-video "
"v4l2src device=/dev/video2 ! "
"queue ! videoconvert ! video/x-raw,width=160,height=120 ! mix.sink_1 "
"v4l2src device=/dev/video4 ! "
"queue ! videoconvert ! video/x-raw,width=160,height=120 ! mix.sink_2 "
"alsasrc device=hw:2,0 !"
"queue ! audioconvert ! audioresample ! opusenc ! "
"audio/x-opus,rate=48000,channels=1 ! appsink sync=TRUE emit-signals=TRUE name=appsink-audio"
I have tried adjusting nearly all the parameters with no improvement. Do I have the queue elements in the right places? Do I need to employ some buffering, and if so, where? I have tried adding framerates to the caps, but that stops all the streams working completely.
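In case it helps to know what I mean by buffering: the sort of thing I had in mind was capping each queue by time and letting it leak old buffers rather than grow without bound, e.g. on the audio branch (the 200 ms figure is a guess on my part, not a tested value):

"queue max-size-buffers=0 max-size-bytes=0 max-size-time=200000000 leaky=downstream ! "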
Any recommendations or suggestions appreciated.
Thanks