How to perform batch inference using a YOLOv8 model


I have a question about batch inference in YOLOv8. I am using a pre-trained YOLOv8 model (the extra-large yolov8x variant). Inference on a single image on an RTX 3060 Ti GPU takes about 18 ms, but batch prediction on 64 images takes about 1152 ms, which gives me no time advantage. Previously, with YOLOv4, batch inference on 64 images took around 600 ms, a clear speed-up over running the images individually.

I have attached the code for reference.

What is wrong, and how do I do batch inference in YOLOv8?

from ultralytics import YOLO
import cv2
import time

# Simulate 64 cameras by opening the same video file 64 times
camera_list = []
for camera in range(64):
    camera_list.append(cv2.VideoCapture(r"E:\Python_Project\demo_video.mp4"))

model = YOLO(r".\Models\yolov8x.pt")

running = True
while running:
    camera_frames = []
    for camera in camera_list:
        ret, frame = camera.read()
        if not ret:  # stop once any stream runs out of frames
            running = False
            break
        camera_frames.append(frame)
    if not running:
        break

    start_time = time.time()
    results = model.predict(source=camera_frames, show=True)
    print(time.time() - start_time)

for camera in camera_list:
    camera.release()
cv2.waitKey(0)