Objective-C - Passing Streamed Data to Audio Queue


I am currently developing an iOS app that receives IMA-ADPCM audio data over a TCP socket, converts it to PCM, and plays the stream. At this stage, I have completed the class that pulls in (or rather, reacts to pushes of) the data from the stream and decodes it to PCM. I have also set up the Audio Queue class and have it playing a test tone. Where I need assistance is the best way to pass the data into the Audio Queue.

The audio data comes out of the ADPCM decoder as 8 kHz 16-bit LPCM in 640-byte chunks (it originates as 160 bytes of ADPCM data but decompresses to 640). It comes into the function as a uint8_t array and passes out an NSData object. The stream is a 'push' stream, so every time audio is sent it will create/flush the object.
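The 160-to-640 ratio follows from the format: each ADPCM byte holds two 4-bit nibbles, and each nibble decodes to one 16-bit sample, so 160 × 2 nibbles × 2 bytes = 640 bytes of PCM. For context, here is a minimal sketch of the per-nibble IMA-ADPCM decode step in plain C (it treats the input as a raw nibble stream and ignores the block preamble that some IMA containers carry, so it is an illustration of the arithmetic, not a drop-in decoder):

```c
#include <stdint.h>
#include <stddef.h>

/* Standard IMA-ADPCM step-size and index-adjust tables. */
static const int step_table[89] = {
    7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 19, 21, 23, 25, 28, 31,
    34, 37, 41, 45, 50, 55, 60, 66, 73, 80, 88, 97, 107, 118, 130,
    143, 157, 173, 190, 209, 230, 253, 279, 307, 337, 371, 408, 449,
    494, 544, 598, 658, 724, 796, 876, 963, 1060, 1166, 1282, 1411,
    1552, 1707, 1878, 2066, 2272, 2499, 2749, 3024, 3327, 3660, 4026,
    4428, 4871, 5358, 5894, 6484, 7132, 7845, 8630, 9493, 10442,
    11487, 12635, 13899, 15289, 16818, 18500, 20350, 22385, 24623,
    27086, 29794, 32767
};
static const int index_table[16] = {
    -1, -1, -1, -1, 2, 4, 6, 8, -1, -1, -1, -1, 2, 4, 6, 8
};

/* Decode nbytes of raw IMA nibbles (low nibble first) into 2*nbytes
 * PCM samples. predictor and index persist across calls. */
void ima_decode(const uint8_t *adpcm, size_t nbytes,
                int16_t *pcm, int *predictor, int *index)
{
    for (size_t i = 0; i < nbytes * 2; i++) {
        int nibble = (i & 1) ? (adpcm[i / 2] >> 4) : (adpcm[i / 2] & 0x0F);
        int step = step_table[*index];

        /* Reconstruct the difference from the three magnitude bits. */
        int diff = step >> 3;
        if (nibble & 4) diff += step;
        if (nibble & 2) diff += step >> 1;
        if (nibble & 1) diff += step >> 2;
        if (nibble & 8) diff = -diff;           /* sign bit */

        int sample = *predictor + diff;
        if (sample > 32767)  sample = 32767;    /* clamp to 16-bit range */
        if (sample < -32768) sample = -32768;
        *predictor = sample;

        *index += index_table[nibble];
        if (*index < 0)  *index = 0;
        if (*index > 88) *index = 88;

        pcm[i] = (int16_t)sample;
    }
}
```

Feeding it a 160-byte packet yields exactly 320 samples, i.e. 640 bytes of LPCM.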

-(NSData*)convertADPCM:(uint8_t[]) adpcmdata {

The Audio Queue callback, of course, is a pull function that goes looking for data on each pass of the run loop; on each pass it runs:

-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {

I've been working on this for a few days, and the PCM conversion was quite taxing. I'm having trouble assembling in my head the best way to bridge the data between the two. If I were creating the data myself, I could simply incorporate data creation into the fillBuffer routine; instead, the data is being pushed at me.

I did set up a circular buffer of 0.5 seconds in a uint16_t[], but I think I have worn my brain out and couldn't work out a neat way to push and pull from the buffer, so I ended up with snap, crackle, pop.
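One way to make push and pull meet cleanly is to keep two monotonically increasing counters, total samples written and total samples read, and index the ring modulo its size; the pull side then knows exactly how many samples are available and can pad with silence on underrun instead of replaying stale data. A minimal single-producer/single-consumer sketch in plain C (the `Ring` type and names are illustrative, not from the original code; in a real app the two counters need atomic or locked access, since the network thread and the audio callback race on them):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define RING_SAMPLES 8000   /* 1 second at 8 kHz, 16-bit mono */

typedef struct {
    int16_t  data[RING_SAMPLES];
    uint64_t written;       /* total samples ever pushed */
    uint64_t read;          /* total samples ever pulled */
} Ring;

/* Push decoded PCM from the network thread. Drops samples on overflow. */
static void ring_push(Ring *r, const int16_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (r->written - r->read >= RING_SAMPLES)
            break;                              /* full: drop the rest */
        r->data[r->written % RING_SAMPLES] = src[i];
        r->written++;
    }
}

/* Pull from the audio callback. Pads with silence on underrun. */
static size_t ring_pull(Ring *r, int16_t *dst, size_t n)
{
    size_t got = 0;
    for (; got < n && r->read < r->written; got++) {
        dst[got] = r->data[r->read % RING_SAMPLES];
        r->read++;
    }
    memset(dst + got, 0, (n - got) * sizeof(int16_t)); /* underrun = silence */
    return got;
}
```

The key property is that `written - read` is always the exact number of queued samples, so neither side can ever read past the other, which is what produces the crackle when plain wrap-around indices get out of step.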

I have mostly completed the project on Android, but found AudioTrack a very different beast to Core Audio queues.

At this stage I will also say that I picked up a copy of Learning Core Audio by Adamson and Avila, and found it an excellent resource for anyone looking to demystify Core Audio.

UPDATE: Here is the buffer management code:

-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {

    int frame = 0;
    int frameCount = bufferSize / self.streamFormat.mBytesPerFrame;
    // frameCount = bufferSize / 2 = 8000 / 2 = 4000

    // incoming buffer uint16_t[] convAudio holds 64400 bytes
    // (big, I know - 100 x 644 bytes)
    // playHead is set by the function to say where in the buffer the
    // next starting point should be
    if (playHead > 99) {
        playHead = 0;
    }

    // playStep factors playHead to get the starting position
    int playStep = playHead * 644;

    // pointer to the queue buffer's audio data
    SInt16 *data = (SInt16 *)buffer->mAudioData;

    // fill the buffer
    for (frame = 0; frame < frameCount; ++frame) {
        // load data from the uint16_t[] convAudio array into the frame
        data[frame] = convAudio[frame + playStep];
    }

    // set the buffer size
    buffer->mAudioDataByteSize = bufferSize;

    // return noErr; an OSStatus error would be returned here otherwise
    return noErr;
}
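One possible source of the pops in the code above is mixed units: `convAudio` is a `uint16_t` array, but `playStep = playHead * 644` advances it by the chunk's *byte* length, even though a 644-byte chunk only holds 322 samples; and each callback copies `frameCount` (4000) samples while `playHead` advances by whole chunks, so the read position and the decoder's write position drift apart. A sketch of consistent, sample-based bookkeeping, using plain C in place of the Audio Queue types (hypothetical names, illustrative only):

```c
#include <stdint.h>
#include <stddef.h>

#define CHUNK_SAMPLES 322                       /* 644 bytes per decoded chunk */
#define RING_CHUNKS   100
#define RING_SAMPLES  (CHUNK_SAMPLES * RING_CHUNKS)  /* 32200 samples = 64400 bytes */

static uint16_t convAudio[RING_SAMPLES];
static size_t   playHead;                       /* read position, in SAMPLES, not chunks */

/* Fill one output buffer of frameCount samples, wrapping at the ring end. */
static void fill_samples(int16_t *out, size_t frameCount)
{
    for (size_t frame = 0; frame < frameCount; frame++) {
        out[frame] = (int16_t)convAudio[playHead];
        playHead = (playHead + 1) % RING_SAMPLES;    /* advance per sample */
    }
}
```

With one counter in one unit, the read position always ends up exactly where the next callback must resume, instead of jumping in 644-"sample" steps through a 322-sample chunk.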

As I said, my brain was fuzzy when I wrote this, and there's probably something glaringly obvious I am missing.

The above code is called by the callback:

static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer) 
{
    soundHandler *sHandler = (__bridge soundHandler*)inUserData;

    CheckError([sHandler fillBuffer: inCompleteAQBuffer],
               "can't refill buffer",
               "buffer refilled");
    CheckError(AudioQueueEnqueueBuffer(inAQ,
                                       inCompleteAQBuffer,
                                       0,
                                       NULL),
               "Couldn't enqueue buffer (refill)",
               "buffer enqueued (refill)");

}

On the convAudio array side of things, I have dumped it to the log and it is getting filled and refilled in a circular fashion, so I know at least that bit is working.

1 Answer:
The hard part is managing rates, and deciding what to do when they don't match. At first, try using a huge circular buffer (many, many seconds) and mostly fill it before starting the Audio Queue to pull from it. Then monitor the buffer level to see how big a rate-matching problem you have.
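For the monitoring step, a rough sketch of the idea: record the buffer fill level once per callback and watch the trend over a window. A steadily climbing level means the sender's clock is faster than the playback clock (occasionally drop a sample, or resample); a steadily falling level means it is slower (occasionally repeat one). The helper below is hypothetical, just to show the shape of the check:

```c
/* Classify the ring-buffer trend over a window of level readings.
 * Returns +1 if the producer is outrunning the consumer, -1 if it is
 * falling behind, and 0 if the rates roughly match. `levels` holds
 * fill levels (in samples) sampled once per audio callback. */
static int rate_trend(const int *levels, int n, int tolerance)
{
    int drift = levels[n - 1] - levels[0];  /* net change over the window */
    if (drift >  tolerance) return  1;      /* filling up: sender too fast */
    if (drift < -tolerance) return -1;      /* draining: sender too slow   */
    return 0;
}
```

With 8 kHz audio, even a clock mismatch of 0.1% drifts by 8 samples per second, which is why a short buffer eventually clicks no matter how cleanly it is indexed.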