Using an FFmpeg command to read frames and show them with OpenCV's imshow function


I am trying to grab frames with the ffmpeg command and display them with the OpenCV function cv2.imshow(). The snippet below gives a black-and-white image from the RTSP stream link; the output is shown in the linked screenshot [output of FFmpeg link]. I have also tried the ffplay command, but it displays the image directly, so I am not able to access the frames or apply any image processing.

[Output of FFmpeg — linked screenshot]

import cv2
import numpy
import subprocess as sp

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'image2pipe',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo', '-']

pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
while True:
    raw_image = pipe.stdout.read(420*360*3)
    # transform the bytes read into a numpy array
    image = numpy.fromstring(raw_image, dtype='uint8')
    image = image.reshape((360, 420, 3))
    cv2.imshow('hello', image)
    cv2.waitKey(1)
    # throw away the data in the pipe's buffer
    pipe.stdout.flush()

There is 1 answer below.


You're using the wrong output format; it should be -f rawvideo. That should fix your primary problem. The current -f image2pipe wraps the RGB data in an image container (not sure which one, possibly BMP, since the rawvideo codec is being used), so the frames are not displayed correctly.
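For reference, a minimal sketch of the corrected pipeline, assuming the same 420x360 RTSP stream and ffmpeg path as in the question; bgr24 is used instead of rgb24 so the byte order matches what cv2.imshow expects:

import cv2
import numpy as np
import subprocess as sp

width, height = 420, 360            # assumed frame size, taken from the question
frame_bytes = width * height * 3    # 3 bytes per pixel for bgr24/rgb24

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'rawvideo',        # raw frames on stdout, no image container
           '-pix_fmt', 'bgr24',     # BGR byte order is what cv2.imshow expects
           '-']

pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=frame_bytes)

while True:
    raw_image = pipe.stdout.read(frame_bytes)
    if len(raw_image) != frame_bytes:   # stream ended or short read
        break
    image = np.frombuffer(raw_image, dtype=np.uint8).reshape((height, width, 3))
    cv2.imshow('frame', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

With -f rawvideo, ffmpeg writes bare pixel data to stdout, so each read() of frame_bytes corresponds to exactly one frame.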

Other tips:

  • If your data is grayscale, use -pix_fmt gray and read 420*360 bytes at a time.
  • I don't know the difference in speed, but I use np.frombuffer instead of np.fromstring (which is deprecated in newer NumPy versions).
  • pipe.stdout.flush() is a dangerous move IMO, as the buffer may hold a partial frame. Consider setting bufsize to an exact integer multiple of the frame size in bytes.
  • If you expect processing to take much longer than the input frame rate allows, you may want to reduce the output frame rate with -r to match the processing rate (to avoid transferring extraneous data from ffmpeg to Python); a combined sketch of these tips follows below.
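Putting the tips together, a rough sketch under the same assumptions (420x360 stream, grayscale output, and a hypothetical 5 fps processing rate):

import cv2
import numpy as np
import subprocess as sp

width, height = 420, 360        # assumed stream resolution from the question
fps = 5                         # hypothetical rate, matched to the processing speed
frame_bytes = width * height    # 1 byte per pixel with '-pix_fmt gray'

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-r', str(fps),              # limit the output frame rate
           '-f', 'rawvideo',
           '-pix_fmt', 'gray',          # grayscale: width*height bytes per frame
           '-']

# bufsize as an exact multiple of the frame size avoids buffering partial frames
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=frame_bytes * 10)

while True:
    raw_image = pipe.stdout.read(frame_bytes)
    if len(raw_image) != frame_bytes:
        break
    image = np.frombuffer(raw_image, dtype=np.uint8).reshape((height, width))
    cv2.imshow('gray frame', image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break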