I'm writing an application which uses FFmpeg, and I need a way to rotate AVFrames by 90, 180 and 270 degrees, since a camera could be positioned differently. I heard there are filters for doing something like that. I read the filtering_video.c file in the examples directory, but it's really hard to understand what's going on in that example. I saw the "transpose=cclock" part of the code, which may imply that this is what I need, but there's really not much explanation of how it works.
What are const AVFilter *buffersrc = avfilter_get_by_name("buffer"); and const AVFilter *buffersink = avfilter_get_by_name("buffersink");, and why do I even need those if I'm only trying to do a rotation? Why do I need enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_GRAY8, AV_PIX_FMT_NONE }; when I don't want to change pixel formats?
Can anyone please explain how to make FFmpeg rotate a decoded AVFrame?
| What is const AVFilter *buffersrc = avfilter_get_by_name("buffer");
The FFmpeg filter API is designed for more complex uses than a single rotation. Filters can be chained together to form a "graph"; some filters have more than one input, and some have more than one output. An API that can support any filter type, and can link any filter to any other filter in a graph, necessarily ends up being complex.
A "buffer" is the way into the graph: it is where the user feeds outside data (decoded frames) in. A "buffersink" is the way out: it is where the user reads filtered frames back. Without those two there is no way to get frames into or out of the filter graph.
Some filters can only operate on specific pixel formats, and some filters change pixel formats, so the format going in may not equal the format coming out. Setting the pixel format list on the sink tells the graph which formats your code can consume, so it can insert a conversion when necessary.