I would like to use the redirection operator to pipe the stream from ffmpeg into cv2 so that I can detect and mark the faces on the stream, and then redirect that stream again so that it is published as a second stream: one `withoutfacedetect` and one `withfacedetect`.
```
raspivid -w 1920 -h 1080 -fps 30 -o - -t 0 -vf -hf -b 6000000 | ffmpeg -f h264 -i - -vcodec copy -g 50 -strict experimental -f tee -map 0:v "[f=flv]rtmp://xx.xx.xx.xx/live/withoutfacedetect |[f=h264]pipe:1" > test.mp4
```
I then read up on CV2 and came across the article.
I then ran the script on my own picture and was amazed to see a square drawn around my face.
But now back to business. What is the best way to do this?
Thanks to @Mark Setchell; I forgot to mention that I'm using a Raspberry Pi 4.
I'm still not 100% certain what you are really trying to do, and have more thoughts than I can express in a comment. I have not tried all of what I think you are trying to do, and I may be over-thinking it, but if I put down my thought-train, maybe others will add in some helpful thoughts/corrections...
Ok, the video stream comes from the camera into the Raspberry Pi initially as RGB or YUV. It seems silly to use `ffmpeg` to encode that to h264 and pass it to OpenCV on its `stdin` when, AFAIK, OpenCV cannot easily decode it back into BGR or anything it naturally likes to do face detection with.

So, I think I would alter the parameters to `raspivid` so that it generates RGB data-frames, and remove all the h264 bitrate stuff. Now we have RGB coming into `ffmpeg`, so you need to use `tee` and `map` similar to what you have already, sending RGB to OpenCV on its `stdin` and h264-encoding the second stream to `rtmp` as you already have.

Then in OpenCV, you just need to `read()` 1920x1080x3 bytes from `stdin` to get each frame. The frame will be in RGB, but you can reverse the order of the last axis (e.g. `frame[:, :, ::-1]` with NumPy) to re-order the channels to BGR as OpenCV requires.
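If it helps, the camera/ffmpeg side might look something like this. This is an untested sketch: `raspividyuv --rgb` emits raw RGB24 frames instead of h264, and since the two outputs need different codecs I have used two plain ffmpeg outputs rather than the `tee` muxer; `face_detect.py` is a hypothetical name for the OpenCV script.

```shell
# Untested sketch: raw RGB out of the camera, h264 only for the rtmp branch.
raspividyuv -w 1920 -h 1080 -fps 30 --rgb -t 0 -vf -hf -o - | \
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1920x1080 -framerate 30 -i - \
    -c:v libx264 -g 50 -b:v 6000000 -f flv rtmp://xx.xx.xx.xx/live/withoutfacedetect \
    -c:v rawvideo -pix_fmt rgb24 -f rawvideo pipe:1 | python3 face_detect.py
```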
When you read the data from `stdin` you need to read from the underlying binary buffer, i.e. `sys.stdin.buffer.read()` rather than `sys.stdin.read()`, because reading in text mode mangles binary data such as images.
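Putting the last two points together, here is a minimal sketch of the OpenCV side. The function name `read_bgr_frame` and the hard-coded frame size are my own assumptions, not from the answer above.

```python
import io
import sys

import numpy as np

WIDTH, HEIGHT = 1920, 1080  # must match the raspivid/ffmpeg frame size


def read_bgr_frame(stream, width=WIDTH, height=HEIGHT):
    """Read one raw RGB24 frame from a *binary* stream and return it as BGR."""
    n = width * height * 3  # rgb24: 3 bytes per pixel
    raw = stream.read(n)
    if len(raw) < n:
        return None  # end of stream / short read
    frame = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))
    return frame[:, :, ::-1]  # RGB -> BGR, as OpenCV expects


if __name__ == "__main__":
    # In the real pipeline you would pass sys.stdin.buffer (the binary
    # buffer), never sys.stdin itself, which would mangle the bytes:
    #   frame = read_bgr_frame(sys.stdin.buffer)

    # Tiny self-test with a fake 2x1 "frame": one red pixel, one green pixel.
    fake = io.BytesIO(bytes([255, 0, 0, 0, 255, 0]))
    bgr = read_bgr_frame(fake, width=2, height=1)
    print(bgr[0, 0])  # the red RGB pixel becomes [0, 0, 255] in BGR
```

Each `None` return signals the end of the pipe, so a loop of `while (frame := read_bgr_frame(sys.stdin.buffer)) is not None:` can feed frames into the face-detection step.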