I'm attempting to create a real-time audio analyzer on iOS 7. What I'm looking to get is volume and pitch from the built-in microphone on an iPod Touch 5th generation, written to a CSV file along with a timestamp. I'd like to break the signal up into 7 channels and sample at 8 Hz. I've looked at a lot of documentation and code samples, but can't get anything to work.
I'm now trying to start with something simple from scratch, but I can't find anything that outlines how to achieve what I described above.
Most recently I've tried AVAudioSessionCategoryAudioProcessing, hoping to use it for signal processing, but the Audio Session documentation suggests that category only supports automated signal processing, and only in the voice or video chat modes.
- (void)analyzeAudio
{
    // Audio unit pointer allocated here but not used yet.
    audioUnit = (AudioUnit *)malloc(sizeof(AudioUnit));

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *error = nil;

    // Set the category before activating the session.
    if (![audioSession setCategory:AVAudioSessionCategoryAudioProcessing error:&error])
    {
        NSLog(@"Could not set category: %@", error);
    }
    if (![audioSession setActive:YES error:&error])
    {
        NSLog(@"AudioSession could not be activated: %@", error);
    }
}
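From what I can tell, AVAudioSessionCategoryAudioProcessing isn't meant for capturing input at all, so I'm guessing a record-capable category is needed. Here is a minimal sketch of what I think that setup would look like (using AVAudioSessionCategoryRecord; presumably AVAudioSessionCategoryPlayAndRecord if playback were also needed):

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// Record-only category enables microphone input (no playback).
if (![session setCategory:AVAudioSessionCategoryRecord error:&error])
{
    NSLog(@"Could not set category: %@", error);
}
if (![session setActive:YES error:&error])
{
    NSLog(@"Could not activate session: %@", error);
}

// iOS 7 also prompts the user for microphone permission.
[session requestRecordPermission:^(BOOL granted) {
    NSLog(@"Microphone permission %@", granted ? @"granted" : @"denied");
}];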
Is there a simple way with Audio Session to get what I'm looking for?
I've since found that I can call the AVAudioRecorder method updateMeters on a timer and read the peakPowerForChannel: value at each tick to sample the level at a regular interval.
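Here's a rough sketch of how I think that could be wired up for my case: an AVAudioRecorder with metering enabled, an NSTimer firing every 0.125 s (8 Hz), and one line of CSV per tick. The property names, the /dev/null recording URL, and the recorder settings are my own assumptions, not anything specific from the docs:

#import <AVFoundation/AVFoundation.h>

// Assumed properties on the class (illustrative names):
//   @property (strong, nonatomic) AVAudioRecorder *recorder;
//   @property (strong, nonatomic) NSTimer *meterTimer;
//   @property (strong, nonatomic) NSFileHandle *csvHandle;

- (void)startMetering
{
    // Assumes the session has already been set to a record-capable category and activated.
    NSError *error = nil;

    // Record to /dev/null: metering still works, but no audio file is kept on disk.
    NSURL *url = [NSURL fileURLWithPath:@"/dev/null"];
    NSDictionary *settings = @{ AVFormatIDKey         : @(kAudioFormatAppleLossless),
                                AVSampleRateKey       : @44100.0,
                                AVNumberOfChannelsKey : @1 };
    self.recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
    self.recorder.meteringEnabled = YES;
    [self.recorder record];

    // Create the CSV file in Documents and keep a handle open for writing.
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
    NSString *csvPath = [docs stringByAppendingPathComponent:@"levels.csv"];
    [[NSFileManager defaultManager] createFileAtPath:csvPath contents:nil attributes:nil];
    self.csvHandle = [NSFileHandle fileHandleForWritingAtPath:csvPath];

    // 8 Hz sampling -> fire every 0.125 seconds.
    self.meterTimer = [NSTimer scheduledTimerWithTimeInterval:0.125
                                                       target:self
                                                     selector:@selector(sampleMeter:)
                                                     userInfo:nil
                                                      repeats:YES];
}

- (void)sampleMeter:(NSTimer *)timer
{
    [self.recorder updateMeters];

    // The built-in mic records a single channel, so channel 0 is the only one available.
    float peak    = [self.recorder peakPowerForChannel:0];    // dBFS, 0 = full scale
    float average = [self.recorder averagePowerForChannel:0];

    NSString *line = [NSString stringWithFormat:@"%.3f,%.2f,%.2f\n",
                      [[NSDate date] timeIntervalSince1970], peak, average];
    [self.csvHandle writeData:[line dataUsingEncoding:NSUTF8StringEncoding]];
}

Note that this only gives a level in dB from the one built-in-mic channel; pitch and the 7-channel split would presumably still require access to the raw samples (the Audio Unit route) rather than just meter values.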