I've read a lot of posts describing how people use AVAssetReader or AVPlayerItemVideoOutput to get video frames as raw pixel data from a video file, which they then upload to an OpenGL texture. However, this seems to add the needless step of decoding the video frames with the CPU (as opposed to the graphics card), as well as making unnecessary copies of the pixel data.
Is there a way to let AVFoundation own all aspects of the video playback process, but somehow also provide access to an OpenGL texture ID it created, which can just be drawn into an OpenGL context as necessary? Has anyone come across anything like this?
In other words, something like this pseudocode:
initialization:
- open movie file, providing an opengl context;
- get opengl texture id;
every opengl loop:
- draw texture id;
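For contrast, the copy-based path described at the top, the one I'd like to avoid, looks roughly like this (a minimal Objective-C sketch, assuming an AVPlayerItemVideoOutput already attached to a playing AVPlayerItem and configured for BGRA output; the function name and omitted error handling are illustrative only):

#import <AVFoundation/AVFoundation.h>
#import <OpenGL/gl.h>

// Pulls the current frame out of the video output and re-uploads its pixels
// into an existing GL texture -- the CPU-side copy I'd like to eliminate.
static void UploadCurrentFrame(AVPlayerItemVideoOutput *videoOutput,
                               CMTime itemTime, GLuint texture)
{
    if (![videoOutput hasNewPixelBufferForItemTime:itemTime])
        return;

    CVPixelBufferRef pixelBuffer =
        [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    if (pixelBuffer == NULL)
        return;

    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    glBindTexture(GL_TEXTURE_2D, texture);
    // Account for any row padding in the pixel buffer.
    glPixelStorei(GL_UNPACK_ROW_LENGTH,
                  (GLint)(CVPixelBufferGetBytesPerRow(pixelBuffer) / 4));
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                 CVPixelBufferGetBaseAddress(pixelBuffer));
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferRelease(pixelBuffer);
}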
If you use the Video Decode Acceleration Framework on OS X, it will give you a CVImageBufferRef when you "display" decoded frames, which you can call CVOpenGLTextureGetName(...) on to use as a native texture handle in OpenGL. This is of course lower level than what your question asks for, but it is definitely possible for certain video formats. It is also the only technique I have personal experience with.
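Very roughly, the draw side looks something like the sketch below. I'm going from memory here: it routes the decoded buffer through a CVOpenGLTextureCache (created elsewhere with CVOpenGLTextureCacheCreate against the same CGL context you render with) to get the CVOpenGLTextureRef that CVOpenGLTextureGetName operates on, so treat it as an outline rather than drop-in code.

#import <CoreVideo/CoreVideo.h>
#import <OpenGL/gl.h>

// Wraps a decoded CVImageBufferRef as an OpenGL texture and binds it for
// drawing, without the pixels being copied back through the CPU.
// Assumes 'textureCache' was created earlier against your rendering context.
static void DrawDecodedFrame(CVOpenGLTextureCacheRef textureCache,
                             CVImageBufferRef imageBuffer)
{
    CVOpenGLTextureRef texture = NULL;
    CVReturn err = CVOpenGLTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache, imageBuffer, NULL, &texture);
    if (err != kCVReturnSuccess || texture == NULL)
        return;

    // Core Video hands back the texture it created; these are the calls
    // mentioned above.
    GLenum target = CVOpenGLTextureGetTarget(texture); // usually GL_TEXTURE_RECTANGLE_ARB
    GLuint name   = CVOpenGLTextureGetName(texture);

    glEnable(target);
    glBindTexture(target, name);
    // ... draw your textured quad here ...
    glBindTexture(target, 0);
    glDisable(target);

    CVOpenGLTextureRelease(texture);
    CVOpenGLTextureCacheFlush(textureCache, 0);
}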
However, I believe QTMovie also has similar functionality at a much higher level, and would likely provide the full range of features you are looking for. I wish I could comment on AVFoundation, but I have not done any development work on OS X since 10.6. I imagine the process ought to be similar, though; it should be layered on top of Core Video.