Unable to create GStreamer pipeline for NVIDIA DeepStream inference


I have been trying to run inference from Python using DeepStream and GStreamer. The problem I have is getting my simple video stream through the pipeline and running inference on it. The video is made from JPEGs using ffmpeg:

ffmpeg -framerate 30 -pattern_type glob -i './images/*.jpg' -c:v libx264 -pix_fmt yuv420p output_video.mp4

The problem is that the pipeline errors out as soon as it starts:

NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Error: Internal data stream error.

The mediainfo report of the resulting file is below for reference.

General
Complete name : output_video.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/avc1/mp41)
File size : 68.1 MiB
Duration : 6 min 44 s
Overall bit rate : 1 411 kb/s
Writing application : Lavf58.29.100

Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L3.1
Format settings : CABAC / 4 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference frames : 4 frames
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 6 min 44 s
Bit rate : 1 409 kb/s
Width : 1 280 pixels
Height : 720 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 30.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.051
Stream size : 68.0 MiB (100%)
Writing library : x264 core 155 r2917 0a84d98
Encoding settings : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=12 / lookahead_threads=2 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=1 / weightb=1 / open_gop=0 / weightp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=23.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00
Codec configuration box : avcC

The Python code I use for inference is the DeepStream sample test 1 (deepstream-test1.py), with the pipeline modified to fit my needs (or so I thought):

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst

# on_message and osd_sink_pad_buffer_probe are the unmodified handlers from
# the deepstream-test1.py sample.

def main():
    # Initialize GStreamer and create the main loop
    Gst.init(None)
    loop = GObject.MainLoop()

    # Create a GStreamer pipeline
    pipeline = Gst.Pipeline.new("avi-mjpeg-player")

    # Create pipeline elements
    source = Gst.ElementFactory.make("filesrc", "source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    
    decoder = Gst.ElementFactory.make("decodebin", "decode")
    if not decoder:
        sys.stderr.write(" Unable to create decode \n")
    
    convert = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not convert:
        sys.stderr.write(" Unable to create nvvideoconvert \n")
    
    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")           
        
    infer = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not infer:
        sys.stderr.write(" Unable to create inference \n")

    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")    
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
        
    #convert = Gst.ElementFactory.make("videoconvert", "convert")
    #if not convert:
    #    sys.stderr.write(" Unable to create convert \n")
        
    sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")        
    if not sink:
        sys.stderr.write(" Unable to create sink \n")
    

    # Set the input MP4 file path
    source.set_property("location", "/home/aiadmin/output_video.mp4")
    
    streammux.set_property('width', 1280)
    streammux.set_property('height', 720)
    streammux.set_property('batched-push-timeout', 4000000)    
    streammux.set_property('batch-size', 1)
    
    # Set the infer configuration file
    infer.set_property("config-file-path", "dstest1_pgie_config.txt")
    
    # Build the pipeline
    pipeline.add(source)
    pipeline.add(decoder)
    pipeline.add(convert)
    pipeline.add(streammux)
    pipeline.add(infer)
    pipeline.add(nvosd)
    pipeline.add(sink)

    source.link(decoder)
    decoder.link(convert)
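    # decodebin's src pad is created dynamically after typefinding, so this
    # static link may be the part that fails (see the sketch after this code)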
    # Link nvvideoconvert's src pad to the streammux's request sink pad
    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = convert.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of convert \n")
    
    srcpad.link(sinkpad)
    streammux.link(infer)
    infer.link(nvosd)
    nvosd.link(sink)

    # Set up the bus to watch for messages
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", on_message, loop)
    
    # Add a probe to be informed of the generated metadata; we attach it to
    # the sink pad of the osd element, since by that time the buffer will
    # have picked up all of the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
    
    # Set the pipeline to playing state
    pipeline.set_state(Gst.State.PLAYING)

    
    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    finally:
        # Clean up
        pipeline.set_state(Gst.State.NULL)

Any ideas on how to modify the pipeline so the video file runs through inference would be nice, as would a way to run three identical pipelines to benchmark it with three streams.
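
For the benchmark, my understanding is that a single nvstreammux can batch several inputs through its sink_%u request pads, so instead of three separate pipelines one could feed three copies of the file into one muxer. A rough, untested sketch (reusing the hypothetical decoder_pad_added from above):

streammux.set_property('batch-size', 3)
for i in range(3):
    src = Gst.ElementFactory.make("filesrc", f"source-{i}")
    src.set_property("location", "/home/aiadmin/output_video.mp4")
    dec = Gst.ElementFactory.make("decodebin", f"decode-{i}")
    conv = Gst.ElementFactory.make("nvvideoconvert", f"convert-{i}")
    for element in (src, dec, conv):
        pipeline.add(element)
    src.link(dec)
    dec.connect("pad-added", decoder_pad_added, conv)
    conv.get_static_pad("src").link(streammux.get_request_pad(f"sink_{i}"))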

Thanks for any help with building the gst pipeline...
