I am building a video editor whose process looks like this:
Demuxing -> Decoding -> Editing -> Encoding -> Muxing.
Demuxing and muxing are currently done with mp4box.js, which I would like to replace with ffmpeg.wasm. Unfortunately, I'm struggling to get the workflow right.
What should FFmpeg.wasm do in the demuxing process?
- load a .mp4 file
- extract the encodedVideoChunks and store them as EncodedVideoChunk objects in an array
- extract the encodedAudioChunks and store them as EncodedAudioChunk objects in an array
- get some metadata like: duration, timescale, fps, track_width, track_height, codec, audio_channel_count, sample_rate ....
public async loadFile(file: File) {
  const data = await fetchFile(file);
  // setProgress registers a callback synchronously, so no await is needed
  this.ffmpeg.setProgress(({ ratio }) =>
    console.log(`Extracting frames: ${Math.round(ratio * 100)}%`)
  );
  // Write the input into ffmpeg.wasm's in-memory virtual filesystem
  this.ffmpeg.FS('writeFile', 'videoTest.mp4', data);
  // Here is where I am struggling.
  // Should look like this:
  // const command = '-i videoTest.mp4 -c:v copy .... '
  // await this.ffmpeg.run(command);
  // ....
}
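One way I imagine this could work (a sketch, not a confirmed solution): use `-c:v copy` / `-c:a copy` to extract both bitstreams without re-encoding, writing each to a raw elementary-stream file. The output names and formats (`video.h264`, `audio.aac`) are my assumptions for an H.264/AAC input; other codecs would need other raw muxers. Note that `ffmpeg.run` in the 0.11.x API takes the arguments as separate strings, not one command string:

```typescript
// Hypothetical helper: build the argument list for ffmpeg.wasm's run().
// Assumes an H.264 video track and an AAC audio track in the input MP4.
function demuxArgs(input: string): string[] {
  return [
    '-i', input,
    '-c:v', 'copy', '-an', '-f', 'h264', 'video.h264', // raw Annex B video
    '-c:a', 'copy', '-vn', '-f', 'adts', 'audio.aac',  // raw ADTS audio
  ];
}

// Usage with the 0.11.x API from the snippet above:
// await this.ffmpeg.run(...demuxArgs('videoTest.mp4'));
// const videoBytes = this.ffmpeg.FS('readFile', 'video.h264');
// const audioBytes = this.ffmpeg.FS('readFile', 'audio.aac');
```

The big caveat with this route: raw elementary streams carry no container timestamps, so the timing information mp4box.js gives you per sample would have to be recovered some other way.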
Let's get deeper into my problem:
Because FFmpeg.wasm is still a CLI tool, I have no idea what the best way is to save the encoded chunks into a file (and what file type I should use). Furthermore, I would like to know how to read that file properly, so that I can save the file's contents into separate EncodedVideoChunk and EncodedAudioChunk objects.
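If the extracted file were a raw H.264 Annex B stream, reading it back would mean splitting on start codes. Below is a minimal sketch of that split (my own assumption about the approach, not a known-good pipeline): it only separates NAL units, while grouping NALs into access units and assigning timestamps, as `EncodedVideoChunk` requires, is left out and is the hard part.

```typescript
// Minimal sketch: split an H.264 Annex B elementary stream into NAL units.
// Handles both 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes.
function splitAnnexB(data: Uint8Array): Uint8Array[] {
  const nals: Uint8Array[] = [];
  let start = -1; // index where the current NAL body begins
  for (let i = 0; i + 2 < data.length; i++) {
    if (data[i] === 0 && data[i + 1] === 0 && data[i + 2] === 1) {
      // A 4-byte start code has an extra leading zero before this position.
      const fourByte = i > 0 && data[i - 1] === 0;
      if (start >= 0) {
        // Close the previous NAL, trimming the extra zero if present.
        nals.push(data.subarray(start, fourByte ? i - 1 : i));
      }
      start = i + 3; // body begins after the 00 00 01 sequence
      i += 2;        // skip past the start code
    }
  }
  if (start >= 0) nals.push(data.subarray(start));
  return nals;
}
```

Each resulting access unit could then, in principle, be wrapped as `new EncodedVideoChunk({ type, timestamp, data })` in the browser, but the `type` (key vs. delta) and `timestamp` fields would have to be derived separately, since the raw stream does not contain them.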