I'm trying to run some deep learning experiments on Android on video samples, and I've got stuck on remuxing videos. I have a couple of questions to help arrange the information in my head :) I have read some pages: https://vec.io/posts/android-hardware-decoding-with-mediacodec and https://bigflake.com/mediacodec/#ExtractMpegFramesTest, but I'm still confused.
My questions:
- Can I read video with `MediaExtractor` and then pass the data to `MediaMuxer` to save the video into another file, without using `MediaCodec`? (I sketched what I mean right after this list.)
- If I want to modify frames before saving, can I do that without using a `Surface`, just by modifying the `ByteBuffer`? I assume that I need to decode the data from `MediaExtractor`, then modify the content, then encode it to `MediaMuxer`.
- Is a `sample` the same as a `frame` in the context of the method `MediaExtractor::readSampleData`?
- Do I need to decode samples?
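For the first question, this Kotlin sketch is roughly what I have in mind: pass the encoded samples straight from the extractor to the muxer. The `remux` name, the 1 MiB sample buffer, and the MP4 output format are placeholders I made up, and I know `MediaMuxer` may reject track formats its MP4 writer doesn't support:

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Copy every track from input to output without touching MediaCodec.
fun remux(inputPath: String, outputPath: String) {
    val extractor = MediaExtractor()
    extractor.setDataSource(inputPath)
    val muxer = MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)

    // Map extractor track indices to muxer track indices.
    val trackMap = HashMap<Int, Int>()
    for (i in 0 until extractor.trackCount) {
        extractor.selectTrack(i)
        trackMap[i] = muxer.addTrack(extractor.getTrackFormat(i))
    }
    muxer.start()

    val buffer = ByteBuffer.allocateDirect(1 shl 20) // assumed big enough for one sample
    val info = MediaCodec.BufferInfo()
    while (true) {
        info.size = extractor.readSampleData(buffer, 0)
        if (info.size < 0) break // no more samples
        info.offset = 0
        info.presentationTimeUs = extractor.sampleTime
        // MediaExtractor's SAMPLE_FLAG_SYNC happens to have the same value as
        // MediaCodec.BUFFER_FLAG_KEY_FRAME, so passing the flags through works.
        info.flags = extractor.sampleFlags
        muxer.writeSampleData(trackMap.getValue(extractor.sampleTrackIndex), buffer, info)
        extractor.advance()
    }

    muxer.stop()
    muxer.release()
    extractor.release()
}
```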
This is a brief description of what each class does:

- `MediaExtractor` demuxes a container file: it hands you the still-encoded samples of each track, along with their format, timestamps, and flags.
- `MediaCodec` wraps the device codecs: as a decoder it turns encoded samples into raw frames, and as an encoder it turns raw frames back into encoded samples.
- `MediaMuxer` is the opposite of the extractor: it takes already-encoded samples and writes them into a container file (MP4 or WebM).
This is how your pipeline should generally look:
`MediaExtractor -> MediaCodec (as decoder) -> your editing -> MediaCodec (as encoder) -> MediaMuxer`
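Here is a minimal sketch of that pipeline in synchronous `MediaCodec` mode, Kotlin again. I'm assuming a single video track already selected on the extractor, a decoder and encoder that were configured with compatible formats and `start()`ed, and a hypothetical `editFrame()` standing in for your per-frame processing; the timeouts, the blocking `dequeueInputBuffer(-1)`, and the missing error handling are simplifications, not production code:

```kotlin
import android.media.MediaCodec
import android.media.MediaExtractor
import android.media.MediaMuxer
import java.nio.ByteBuffer

// Hypothetical editor: mutates a raw frame in the decoder's output color
// format (device dependent, usually some YUV 4:2:0 variant).
fun editFrame(frame: ByteBuffer) { /* your ByteBuffer modification */ }

fun runPipeline(extractor: MediaExtractor, decoder: MediaCodec,
                encoder: MediaCodec, muxer: MediaMuxer) {
    val timeoutUs = 10_000L
    val info = MediaCodec.BufferInfo()
    var muxerTrack = -1
    var extractorDone = false
    var decoderDone = false

    while (true) {
        // 1. Extractor -> decoder: feed encoded samples.
        if (!extractorDone) {
            val i = decoder.dequeueInputBuffer(timeoutUs)
            if (i >= 0) {
                val buf = decoder.getInputBuffer(i)!!
                val size = extractor.readSampleData(buf, 0)
                if (size < 0) {
                    decoder.queueInputBuffer(i, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                    extractorDone = true
                } else {
                    decoder.queueInputBuffer(i, 0, size, extractor.sampleTime, 0)
                    extractor.advance()
                }
            }
        }

        // 2. Decoder -> editing -> encoder: raw frames as ByteBuffers.
        if (!decoderDone) {
            val o = decoder.dequeueOutputBuffer(info, timeoutUs)
            if (o >= 0) {
                val eos = (info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0
                if (info.size > 0) {
                    val raw = decoder.getOutputBuffer(o)!!
                    editFrame(raw)
                    val ei = encoder.dequeueInputBuffer(-1) // block until a buffer is free
                    encoder.getInputBuffer(ei)!!.put(raw)
                    encoder.queueInputBuffer(ei, 0, info.size,
                        info.presentationTimeUs,
                        if (eos) MediaCodec.BUFFER_FLAG_END_OF_STREAM else 0)
                } else if (eos) {
                    val ei = encoder.dequeueInputBuffer(-1)
                    encoder.queueInputBuffer(ei, 0, 0, 0,
                        MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                }
                decoder.releaseOutputBuffer(o, false)
                if (eos) decoderDone = true
            }
        }

        // 3. Encoder -> muxer: write the re-encoded samples.
        val e = encoder.dequeueOutputBuffer(info, timeoutUs)
        if (e == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            // The muxer track must come from the encoder's *output* format.
            muxerTrack = muxer.addTrack(encoder.outputFormat)
            muxer.start()
        } else if (e >= 0) {
            val codecConfig = (info.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0
            if (info.size > 0 && !codecConfig) {
                muxer.writeSampleData(muxerTrack, encoder.getOutputBuffer(e)!!, info)
            }
            encoder.releaseOutputBuffer(e, false)
            if ((info.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break
        }
    }
}
```

Note that the muxer track is added only when the encoder reports `INFO_OUTPUT_FORMAT_CHANGED`: the encoder's output format carries the codec-specific data the container needs, which is also why codec-config buffers are skipped when writing samples.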
To answer your questions: