I am trying to grab frames using the ffmpeg command and display them with OpenCV's cv2.imshow(). The snippet below gives a distorted black-and-white image from the RTSP stream link. The output is shown at the link below [ output of FFmpeg link]. I have tried the ffplay command, but it only displays the video directly; I am not able to access the frames or apply any image processing.
```python
import cv2
import numpy
import subprocess as sp

command = ['C:/ffmpeg/ffmpeg.exe',
           '-i', 'rtsp://192.168.1.12/media/video2',
           '-f', 'image2pipe',
           '-pix_fmt', 'rgb24',
           '-vcodec', 'rawvideo', '-']

pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
while True:
    raw_image = pipe.stdout.read(420*360*3)
    # transform the bytes read into a numpy array
    image = numpy.fromstring(raw_image, dtype='uint8')
    image = image.reshape((360, 420, 3))
    cv2.imshow('hello', image)
    cv2.waitKey(1)
    # throw away the data in the pipe's buffer
    pipe.stdout.flush()
```
You're using the wrong output format; it should be `-f rawvideo`. This should fix your primary problem. The current `-f image2pipe` wraps the RGB data in an image container format (dunno which, maybe BMP since the `rawvideo` codec is being used?), so it is not shown correctly.

Other tips:

- If you expect a grayscale image, use `-pix_fmt gray` and read `420*360` bytes at a time.
- Use `np.frombuffer` instead of `np.fromstring`.
- `pipe.stdout.flush()` is a dangerous move IMO, as the buffer may hold a partial frame. Consider setting `bufsize` to an exact integer multiple of the frame size in bytes.
- Use `-r` to match the processing rate (to avoid extraneous data transfer from ffmpeg to Python).
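Putting the tips above together, here is a minimal sketch of the corrected pipeline. The ffmpeg path, RTSP URL, and 420x360 frame size are assumptions carried over from the question, not verified values; the byte-to-array conversion is split into its own helper so it can be checked without a live stream.

```python
# Sketch of the corrected pipeline: -f rawvideo instead of image2pipe,
# np.frombuffer instead of np.fromstring, bufsize a multiple of the frame size.
import subprocess as sp

import numpy as np

WIDTH, HEIGHT = 420, 360          # frame size assumed from the question
FRAME_SIZE = WIDTH * HEIGHT * 3   # bytes per rgb24 frame


def bytes_to_frame(raw, width=WIDTH, height=HEIGHT):
    """Convert one rawvideo rgb24 frame worth of bytes to an (H, W, 3) array."""
    return np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))


def stream_frames(url='rtsp://192.168.1.12/media/video2',
                  ffmpeg='C:/ffmpeg/ffmpeg.exe'):
    """Yield decoded frames from ffmpeg reading an RTSP stream."""
    command = [
        ffmpeg,
        '-i', url,
        '-f', 'rawvideo',         # raw pixel data, no image container
        '-pix_fmt', 'rgb24',
        '-an',                    # drop audio
        '-',
    ]
    # bufsize is an exact multiple of the frame size, so no flush() is needed
    # and the buffer never holds a partial frame.
    pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=FRAME_SIZE * 10)
    while True:
        raw = pipe.stdout.read(FRAME_SIZE)
        if len(raw) != FRAME_SIZE:  # stream ended or short read
            break
        yield bytes_to_frame(raw)
```

With this generator, the display loop becomes `for frame in stream_frames(): cv2.imshow('frame', frame); cv2.waitKey(1)`, and each `frame` is a plain NumPy array you can run image processing on.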