I have a 3D scene rendered with Metal on an iOS device. The goal is to play multiple videos and map them onto surfaces in the scene. I am using 'AVPlayerItemVideoOutput' to extract video frames, and everything works as expected while a single video is playing.
The problem is that as soon as a second video starts playing simultaneously with the first, using the exact same method (i.e. extracting frames with 'AVPlayerItemVideoOutput'), the first 'AVPlayerItemVideoOutput' object starts returning false from its 'hasNewPixelBufferForItemTime' method. I am creating completely separate 'AVPlayer', 'AVPlayerItem', 'AVPlayerItemVideoOutput', etc. instances for each video played.
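For reference, here is roughly how each video is set up and polled (a minimal Swift sketch; the 'VideoLayer' class name and the 32BGRA pixel format are illustrative, not verbatim from my project):

    import AVFoundation
    import CoreVideo

    // One of these is created per video; nothing is shared between instances.
    final class VideoLayer {
        let player: AVPlayer
        let output: AVPlayerItemVideoOutput

        init(url: URL) {
            // Each video gets its own item and its own video output.
            let item = AVPlayerItem(url: url)
            output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
                kCVPixelBufferPixelFormatTypeKey as String:
                    kCVPixelFormatType_32BGRA
            ])
            item.add(output)
            player = AVPlayer(playerItem: item)
            player.play()
        }

        // Called once per rendered frame from the Metal draw loop.
        func copyPixelBuffer(forHostTime hostTime: CFTimeInterval) -> CVPixelBuffer? {
            let itemTime = output.itemTime(forHostTime: hostTime)
            // With two VideoLayer instances running, this check starts
            // returning false for the first instance as soon as the
            // second one begins playback.
            guard output.hasNewPixelBuffer(forItemTime: itemTime) else {
                return nil
            }
            return output.copyPixelBuffer(forItemTime: itemTime,
                                          itemTimeForDisplay: nil)
        }
    }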
Is this a limitation, or is there something wrong with my setup? Is there an alternative way to achieve this?