Mac Core Audio output buffer played after AudioDeviceIOProc returns?

My application (macOS only) uses low-level Core Audio features (the AudioDevice level, not AudioUnit).

My question is: is the output buffer played immediately after my AudioDeviceIOProc returns, or is an additional cycle needed?

Let me explain where the question comes from with an example. Consider an input-monitoring app that does some processing on the input and plays it back right away; for simplicity, assume the same device is used for both input and output.

I set the buffer size to 480 frames (10 ms @ 48 kHz) through AudioObjectSetPropertyData, addressing the property kAudioDevicePropertyBufferFrameSize (a sketch of this call follows below). When my AudioDeviceIOProc is called, I have 10 ms of input data, which I process and then write to the output buffer before my AudioDeviceIOProc returns.
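
For reference, here is a minimal sketch of the call I mean (error handling omitted; on SDKs older than macOS 12, kAudioObjectPropertyElementMaster replaces kAudioObjectPropertyElementMain):

```c
#include <CoreAudio/CoreAudio.h>

// Sketch: set the device's I/O buffer size to 480 frames (10 ms @ 48 kHz).
static OSStatus setBufferFrameSize(AudioObjectID device, UInt32 frames)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    return AudioObjectSetPropertyData(device, &addr, 0, NULL,
                                      sizeof(frames), &frames);
}

// Usage: setBufferFrameSize(myDevice, 480);
```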

At this point I need to understand (once and for all) which of the following two cases is correct:

  • A) 10 more milliseconds need to pass before the output buffer that I just set can be played out
  • B) the buffer is played immediately after the callback returns. This doesn't seem possible, because it would require the callback to take exactly the same amount of time on every cycle. For example, if on the second call the processing takes 20 microseconds longer than on the previous cycle, the output would be late by almost one sample (0.02 ms × 48 samples/ms = 0.96 samples).

I have always assumed A) to be the correct answer, which fits the rule of thumb of estimating the monitoring roundtrip latency as 2 × the I/O buffer size (with 480-frame buffers at 48 kHz, that gives 2 × 10 ms = 20 ms), e.g. as explained here: https://support.apple.com/en-om/HT201530. Lately, though, I've been reading conflicting information about it. Can anyone clear this up for me? Thank you.

1 Answer

BEST ANSWER

When coreaudiod calls your IOProc, it is filling an internal buffer for output to the audio device. The output won't occur immediately, since Core Audio needs time to process your data and potentially mix it with streams from other applications. There is even a property that lets you control how much of the IO cycle you get to prepare your samples in: kAudioDevicePropertyIOCycleUsage.
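
As a rough, hedged sketch (semantics per the comments in AudioHardware.h; the exact scheduling effect is up to the HAL), setting that property could look like this:

```c
#include <CoreAudio/CoreAudio.h>

// Sketch: tell the HAL how much of the IO cycle this client intends to use.
// kAudioDevicePropertyIOCycleUsage takes a Float32 in [0, 1]; treat this as
// illustrative rather than a tuning recommendation.
static OSStatus setIOCycleUsage(AudioObjectID device, Float32 usage)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyIOCycleUsage,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    return AudioObjectSetPropertyData(device, &addr, 0, NULL,
                                      sizeof(usage), &usage);
}
```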

Once Core Audio has your data, it may not immediately send it to the device for playback. Both AudioDevice and AudioStream objects have configurable latency. For AudioStream, see kAudioStreamPropertyLatency.
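
A hedged sketch of how those components could be read and summed (property scopes and the exact composition of total latency vary by device; error handling omitted):

```c
#include <CoreAudio/CoreAudio.h>

// Sketch: read one UInt32 property from an AudioObject in a given scope.
static UInt32 readUInt32(AudioObjectID obj, AudioObjectPropertySelector sel,
                         AudioObjectPropertyScope scope)
{
    AudioObjectPropertyAddress addr = {
        sel, scope, kAudioObjectPropertyElementMain
    };
    UInt32 value = 0;
    UInt32 size  = sizeof(value);
    AudioObjectGetPropertyData(obj, &addr, 0, NULL, &size, &value);
    return value;
}

// Commonly cited estimate of output-side latency in frames:
// buffer size + device latency + safety offset + stream latency.
// The input side would be estimated analogously with the input scope.
static UInt32 estimatedOutputLatencyFrames(AudioObjectID device,
                                           AudioObjectID stream)
{
    return readUInt32(device, kAudioDevicePropertyBufferFrameSize,
                      kAudioObjectPropertyScopeGlobal)
         + readUInt32(device, kAudioDevicePropertyLatency,
                      kAudioObjectPropertyScopeOutput)
         + readUInt32(device, kAudioDevicePropertySafetyOffset,
                      kAudioObjectPropertyScopeOutput)
         + readUInt32(stream, kAudioStreamPropertyLatency,
                      kAudioObjectPropertyScopeGlobal);
}
```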

Given this complexity, the AudioDeviceIOProc gives you a parameter for determining when the samples will be written to the device: look at the second-to-last parameter, const AudioTimeStamp* inOutputTime.
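
For illustration, a minimal IOProc sketch (names are my own) that uses inOutputTime to see how far ahead of "now" the output is scheduled, checking the validity flags before trusting the sample times:

```c
#include <CoreAudio/CoreAudio.h>

// Sketch of an IOProc that measures when the samples written in this
// callback will actually reach the device, relative to "now" and to when
// the input samples were captured.
static OSStatus MyIOProc(AudioObjectID          inDevice,
                         const AudioTimeStamp*  inNow,
                         const AudioBufferList* inInputData,
                         const AudioTimeStamp*  inInputTime,
                         AudioBufferList*       outOutputData,
                         const AudioTimeStamp*  inOutputTime,
                         void*                  inClientData)
{
    if ((inNow->mFlags & kAudioTimeStampSampleTimeValid) &&
        (inInputTime->mFlags & kAudioTimeStampSampleTimeValid) &&
        (inOutputTime->mFlags & kAudioTimeStampSampleTimeValid)) {
        // How far ahead of "now" the HAL schedules this output buffer:
        Float64 outputLead = inOutputTime->mSampleTime - inNow->mSampleTime;
        // Capture-to-playback distance for a pass-through monitor:
        Float64 roundtrip  = inOutputTime->mSampleTime - inInputTime->mSampleTime;
        (void)outputLead; (void)roundtrip; // log or store these as needed
    }
    // ... fill outOutputData (e.g. from inInputData) here ...
    return noErr;
}
```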