Porting a ScriptProcessor-based application to AudioWorklet


Since the old Web Audio ScriptProcessorNode has been deprecated since 2014 and AudioWorklets arrived in Chrome 64, I decided to give them a try. However, I'm having difficulties porting my application. I'll give two examples from a nice article to illustrate my point.

First the scriptprocessor way:

var node = context.createScriptProcessor(1024, 1, 1); // 1024-frame buffer, 1 input / 1 output channel
node.onaudioprocess = function (e) {
  var output = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < output.length; i++) {
    output[i] = Math.random(); // fill the output with noise on every callback
  }
};
node.connect(context.destination);

Another one that fills a buffer and then plays it:

var node = context.createBufferSource(),
    buffer = context.createBuffer(1, 4096, context.sampleRate),
    data = buffer.getChannelData(0);

for (var i = 0; i < 4096; i++) {
  data[i] = Math.random();
}

node.buffer = buffer;
node.loop = true;
node.connect(context.destination);
node.start(0);

The big difference between the two is that the first one fills the buffer with new data during playback, while the second one generates all of its data beforehand.

Since I generate a lot of data, I can't do it beforehand. There are plenty of examples for AudioWorklet, but they all use other nodes on which one can just call .start(), connect them, and they start generating audio. I can't wrap my head around how to do this when I don't have such a method.

So my question basically is how to do the above example with AudioWorklet, when the data is generated continuously in the main thread into some array and the playback of that data happens in the Web Audio thread.

I've been reading about MessagePort, but I'm not sure that's the way to go either; the examples don't point me in that direction, I'd say. What I might need is the proper way to feed the process() function of my AudioWorkletProcessor-derived class with my own data.

My current ScriptProcessor-based code is on GitHub, specifically in vgmplay-js-glue.js.

I've been adding some code to the constructor of the VGMPlay_WebAudio class, working from the examples toward the actual result, but as I said, I don't know in which direction to move now.

constructor() {
    super();

    this.audioWorkletSupport = false;

    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    this.context = new AudioContext();
    this.destination = this.destination || this.context.destination;
    this.sampleRate = this.context.sampleRate;

    if (this.context.audioWorklet && typeof this.context.audioWorklet.addModule === 'function') {
        this.audioWorkletSupport = true;
        console.log("Audioworklet support detected, don't use the old scriptprocessor...");
        this.context.audioWorklet.addModule('bypass-processor.js').then(() => {
            this.oscillator = new OscillatorNode(this.context);
            this.bypasser = new AudioWorkletNode(this.context, 'bypass-processor');
            this.oscillator.connect(this.bypasser).connect(this.context.destination);
            this.oscillator.start();
        });
    } else {
        this.node = this.context.createScriptProcessor(16384, 2, 2);
    }
}

Best answer:

So my question basically is how to do the above example with AudioWorklet,

For your first example, there is already an AudioWorklet version for it: https://github.com/GoogleChromeLabs/web-audio-samples/blob/gh-pages/audio-worklet/basic/js/noise-generator.js
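
For reference, the core of that sample is an AudioWorkletProcessor whose process() callback fills its output with random values, much like your onaudioprocess example. Here is a simplified sketch (the real sample also exposes an amplitude AudioParam, which is omitted here):

// noise-generator.js -- simplified sketch of the linked sample
class NoiseGenerator extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    for (let channel = 0; channel < output.length; ++channel) {
      const outputChannel = output[channel];
      for (let i = 0; i < outputChannel.length; ++i) {
        outputChannel[i] = 2 * (Math.random() - 0.5); // white noise in [-1, 1)
      }
    }
    return true; // keep the processor running
  }
}

registerProcessor('noise-generator', NoiseGenerator);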

I do not recommend the second approach (aka buffer stitching), because it creates lots of source nodes and buffers, which can trigger garbage collection that interferes with other tasks on the main thread. A discontinuity can also occur at the boundary between two consecutive buffers if the scheduled start time does not fall exactly on a sample. That said, you won't be able to hear the glitch in this specific example because the source material is noise.

when the data is generated continuously in the main thread into some array and the playback of that data happens in the Web Audio thread.

The first thing you should do is separate the audio generator from the main thread; the audio generator must run in the AudioWorkletGlobalScope. That's the whole purpose of the AudioWorklet system: lower latency and better audio rendering performance.

In your code, VGMPlay_WebAudio.generateBuffer() should be called in the AudioWorkletProcessor.process() callback to fill the processor's output buffer. That roughly matches what your onaudioprocess callback does.
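
Roughly, something along these lines (a sketch only; the name VGMPlayProcessor and the way generateBuffer() is invoked are assumptions about your code, not the actual implementation):

// vgmplay-processor.js -- hypothetical sketch
class VGMPlayProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // The generator has to be created here, inside the AudioWorkletGlobalScope,
    // e.g. this.player = new VGMPlay(sampleRate); (sampleRate is a global in this scope)
  }

  process(inputs, outputs, parameters) {
    const output = outputs[0]; // e.g. 2 channels of 128 samples each
    // Fill the channels directly, much like onaudioprocess filled e.outputBuffer:
    // this.player.generateBuffer(output[0], output[1], output[0].length);
    return true; // keep rendering
  }
}

registerProcessor('vgmplay-processor', VGMPlayProcessor);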

I've been reading about MessagePort, but I'm not sure that's the way to go either; the examples don't point me in that direction, I'd say. What I might need is the proper way to feed the process() function of my AudioWorkletProcessor-derived class with my own data.

I don't think your use case requires MessagePort. I've seen other methods in the code, but they really don't do much other than start and stop the node. That can be done by connecting/disconnecting the AudioWorkletNode on the main thread; no cross-thread messaging is necessary.
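
A minimal sketch of that, assuming the hypothetical 'vgmplay-processor' from above has been registered:

// Main thread: start/stop by (dis)connecting the node; no MessagePort needed.
const node = new AudioWorkletNode(context, 'vgmplay-processor', {
  outputChannelCount: [2]
});

function startPlayback() {
  node.connect(context.destination); // rendering starts
}

function stopPlayback() {
  node.disconnect(context.destination); // rendering stops
}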

The code example at the end of your question can serve as the setup for the AudioWorklet path. I am well aware that separating the setup from the actual audio generation can be tricky, but it will be worth it.
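
Concretely, the addModule() branch of your constructor could load the VGM processor instead of the bypass example; a sketch, reusing the hypothetical vgmplay-processor.js from above:

if (this.context.audioWorklet && typeof this.context.audioWorklet.addModule === 'function') {
  this.audioWorkletSupport = true;
  this.context.audioWorklet.addModule('vgmplay-processor.js').then(() => {
    this.node = new AudioWorkletNode(this.context, 'vgmplay-processor', {
      outputChannelCount: [2]
    });
    // Connect this.node to this.destination when playback should start,
    // and disconnect it to stop.
  });
}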

A few questions for you:

  1. How does the game graphics engine send messages to the VGM generator?
  2. Can the VGMPlay class live on the worker thread without any interaction with the main thread? I don't see any interaction in the code except for starting and stopping.
  3. Is XMLHttpRequest essential to the VGMPlay class? Or can that be done somewhere else?