CoreAudio AudioQueue | How to stream from network to specific Audio Server plug-in device on macOS?


I am relatively new to macOS programming but not programming generally. I'd prefer to work in (Objective)C/C++ rather than Swift. I need to open a specific audio device for output and stream a live stream of audio data from the network to the output device. The device has a custom Audio Server plug-in driver that we have source for. I'm feeling really stupid trying to figure out from the Apple documentation what I need to call to do these things. Can anyone help answer the following:

1) What are some of the appropriate APIs to use to do this? I'm thinking I need CoreAudio and AudioQueue, but I'm too ignorant here to be sure. Any references to similar example applications would be appreciated. Book recommendations would be appreciated, too.

2) How do I open my specific, custom driver for output? Does it have something to do with the UUID I see in the driver code, or is the driver identified some other way? I need my program to find the custom driver without any human assistance like picking from a selection list.

3) A dumb question because I haven't seen a clear example in application examples I've looked at: I downloaded the CAPlayThrough application (https://developer.apple.com/library/archive/samplecode/CAPlayThrough/Introduction/Intro.html) and kind of understand it, but I don't understand something in particular. How do I write my "pushed" in-memory data from the network to the output device? Do I need to use some kind of callback that reads from a ring buffer that the network live stream is written to?

ADDENDUM:

3/24/2020: Based on further research, I've answered my main questions but still have one issue that I think is out of scope. I'll give my answer below and write up a new question.


There are 2 answers below.

Answer from freshtop:

1) What are some of the appropriate APIs to use to do this?

The Core Audio API would be fine. See AudioDeviceCreateIOProcID and AudioDeviceStart in CoreAudio/AudioHardware.h. For some reason, the Apple documentation site doesn't have the docs for them, so you have to find them in the header file.

Or you could use AVAudioEngine. (But not AVAudioPlayer.)

Depending on your other requirements, it might be easier to use an existing program like GStreamer or VLC.

Any references to similar example applications would be appreciated.

CAPlayThrough could be an OK place to start, but it uses AUGraph, which is deprecated. I'm not sure whether you'll need the varispeed audio unit it uses to adjust for differences in the sample rates, but you can at least get started without it.

This looks like it does something similar without using AUGraph: https://github.com/pje/WavTap/blob/master/App/AudioTee.cpp

How do I open my specific, custom driver for output?

The driver would publish an audio output device with a property called kAudioDevicePropertyModelUID (see AudioHardwareBase.h), which should have a constant value. You can check it by double-clicking the device in HALLab.

Your program could use AudioObjectGetPropertyData to get the kAudioHardwarePropertyDevices property of the "audio system object" (kAudioObjectSystemObject) and then get the kAudioDevicePropertyModelUID property of each device.

If you're using C++, you might want to use the Public Utility classes to help with that.
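Putting those two steps together, a minimal sketch in plain C might look like the following. The "com.example.model" UID is a placeholder for whatever constant your driver publishes, and error handling is trimmed for brevity:

```c
#include <CoreAudio/CoreAudio.h>
#include <stdlib.h>

// Sketch: enumerate all devices on the system audio object and return
// the first one whose model UID matches. Returns kAudioObjectUnknown
// if no device matches.
static AudioObjectID FindDeviceByModelUID(CFStringRef wantedModelUID) {
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    // Ask how big the device list is, then fetch it.
    UInt32 size = 0;
    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size);
    UInt32 count = size / sizeof(AudioObjectID);
    AudioObjectID *devices = malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices);

    AudioObjectID found = kAudioObjectUnknown;
    for (UInt32 i = 0; i < count && found == kAudioObjectUnknown; i++) {
        CFStringRef modelUID = NULL;
        UInt32 uidSize = sizeof(modelUID);
        addr.mSelector = kAudioDevicePropertyModelUID;
        if (AudioObjectGetPropertyData(devices[i], &addr, 0, NULL,
                                       &uidSize, &modelUID) == noErr && modelUID) {
            if (CFStringCompare(modelUID, wantedModelUID, 0) == kCFCompareEqualTo)
                found = devices[i];
            CFRelease(modelUID);
        }
    }
    free(devices);
    return found;
}
```

A caller would pass something like CFSTR("com.example.model") and then hand the returned AudioObjectID to AudioDeviceCreateIOProcID.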

Does it have something to do with the UUID I see in the driver code, or is the driver identified some other way?

The driver is identified by its bundle ID. You can use the kAudioHardwarePropertyPlugInForBundleID property to get the AudioObjectID of the audio object that represents the driver (i.e. the plug-in). You can also find the device through the plug-in object.
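As a sketch, that lookup uses an AudioValueTranslation structure: the bundle ID string goes in, the plug-in's AudioObjectID comes out. "com.example.MyDriver" is a placeholder for your driver's actual bundle identifier:

```c
#include <CoreAudio/CoreAudio.h>

// Sketch: translate a driver bundle ID into the AudioObjectID of the
// plug-in object that represents it. Returns kAudioObjectUnknown on failure.
static AudioObjectID FindPlugInByBundleID(CFStringRef bundleID) {
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyPlugInForBundleID,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectID plugIn = kAudioObjectUnknown;
    AudioValueTranslation translation = {
        &bundleID, sizeof(bundleID),   // input: the bundle ID string
        &plugIn,   sizeof(plugIn)      // output: the plug-in's object ID
    };
    UInt32 size = sizeof(translation);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL,
                               &size, &translation);
    return plugIn;
}
```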

How do I write my "pushed" in-memory data from the network to the output device? Do I need to use some kind of callback that reads from a ring buffer that the network live stream is written to?

The function you pass to AudioDeviceCreateIOProcID will be called every IO cycle and given a buffer to fill with the samples for that cycle.
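A sketch of what that callback might look like, assuming a hypothetical ring buffer (gRing and RingBufferRead are stand-ins for whatever your app provides; they are not CoreAudio APIs). The proc runs on a real-time thread, so it must not block, allocate, or take locks:

```c
#include <CoreAudio/CoreAudio.h>
#include <string.h>

// Hypothetical ring-buffer reader supplied elsewhere in the app;
// returns the number of bytes actually copied into dst.
extern UInt32 RingBufferRead(void *ring, void *dst, UInt32 maxBytes);
extern void *gRing;

static OSStatus MyIOProc(AudioObjectID inDevice,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData) {
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; i++) {
        AudioBuffer *buf = &outOutputData->mBuffers[i];
        // Copy whatever the network thread has queued; pad with
        // silence if the ring buffer runs dry this cycle.
        UInt32 got = RingBufferRead(gRing, buf->mData, buf->mDataByteSize);
        if (got < buf->mDataByteSize)
            memset((char *)buf->mData + got, 0, buf->mDataByteSize - got);
    }
    return noErr;
}

// Registration, once the device ID is known:
//   AudioDeviceIOProcID procID;
//   AudioDeviceCreateIOProcID(deviceID, MyIOProc, NULL, &procID);
//   AudioDeviceStart(deviceID, procID);
```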

Answer from Karen:

1) What are some of the appropriate APIs to use to do this?

I've determined that Audio Queue Services fit my need best.

2) How do I open my specific, custom driver for output?

I found I can select the desired audio output device by calling AudioQueueSetProperty with the kAudioQueueProperty_CurrentDevice property, as in the following fragment.

  CFStringRef deviceUID = CFSTR("BlackHole_UID"); // For example
  status = AudioQueueSetProperty(queue, kAudioQueueProperty_CurrentDevice, &deviceUID, sizeof(deviceUID));

3) How do I write my "pushed" in-memory data from the network to the output device?

Considering how AudioQueues work, it looks like I should do the following: send the decoded, streamed audio to a ring buffer that the output AudioQueue's callback pulls from.
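A minimal sketch of such a ring buffer in plain C (single producer, single consumer; the 64 KB capacity and the function names are my own choices, not from any Apple API — and for real-time use the head/tail indices should be atomics):

```c
#include <stdint.h>

typedef struct {
    uint8_t  data[65536];  // capacity must be a power of two
    uint32_t head;         // advanced by the producer (network thread)
    uint32_t tail;         // advanced by the consumer (audio callback)
} RingBuffer;

// Copy up to len bytes in; returns how many actually fit.
static uint32_t RingWrite(RingBuffer *rb, const uint8_t *src, uint32_t len) {
    uint32_t space = (uint32_t)sizeof(rb->data) - (rb->head - rb->tail);
    if (len > space) len = space;
    for (uint32_t i = 0; i < len; i++)
        rb->data[(rb->head + i) & (sizeof(rb->data) - 1)] = src[i];
    rb->head += len;
    return len;
}

// Copy up to len bytes out; returns how many were available.
static uint32_t RingRead(RingBuffer *rb, uint8_t *dst, uint32_t len) {
    uint32_t avail = rb->head - rb->tail;
    if (len > avail) len = avail;
    for (uint32_t i = 0; i < len; i++)
        dst[i] = rb->data[(rb->tail + i) & (sizeof(rb->data) - 1)];
    rb->tail += len;
    return len;
}
```

In the AudioQueue output callback, you would RingRead into inBuffer->mAudioData, set inBuffer->mAudioDataByteSize to the number of bytes obtained (padding with silence on underrun), and re-enqueue the buffer with AudioQueueEnqueueBuffer(queue, inBuffer, 0, NULL).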