Audio Library Programming Guide


AFSoundcard is the interface to both built-in audio devices and external USB or FireWire sound cards. AFSoundfile is the interface to sound files.


Sound cards

AFSoundcard is initialized by passing it either an input AFList or an output AFList. However, it is probably easier to let the AFManager create an AFSoundcard object for you, since you will not need to bother with explicitly creating the AFList objects.


#import <AudioLibrary/AFLibrary.h>

@interface MyAudioApp : NSObject <AFSoundcardDelegate> {
    AFManager *manager ;
    AFSoundcard *inputSoundcard, *outputSoundcard ;
}
@end


@implementation MyAudioApp
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    manager = [ [ AFManager alloc ] init ] ;
    inputSoundcard = [ manager newInputSoundcard ] ;
    outputSoundcard = [ manager newOutputSoundcard ] ;
}

- (void)dealloc
{
    [ inputSoundcard release ] ;
    [ outputSoundcard release ] ;
    [ manager release ] ;
    [ super dealloc ] ;
}
@end



The ownership of the AFSoundcard objects that the AFManager returns is passed to the caller, and you are responsible for releasing them when they are no longer needed.

You can ask for multiple AFSoundcard instances from AFManager. Each instance can manage its own controls.
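For example, an application that needs separate main and monitor outputs could ask for two output instances (a sketch; the variable names are just for illustration):


AFSoundcard *mainOutput = [ manager newOutputSoundcard ] ;
AFSoundcard *monitorOutput = [ manager newOutputSoundcard ] ;
// device selection, levels, etc. can now be set independently
// through each instance's own managed controls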


Obtaining sound samples from an input AFSoundcard

To start accepting data from the input soundcard, send the start message to it.


- (void)startInput
{
    [ inputSoundcard setDelegate:self ] ;
    [ inputSoundcard start ] ;
}

- (void)inputReceivedFromSoundcard:(AFSoundcard*)soundcard buffers:(float**)buffers numberOfBuffers:(int)n samples:(int)m
{
    // consume audio samples from buffers
}


When data samples arrive at the soundcard, they are forwarded to the -inputReceivedFromSoundcard:buffers:numberOfBuffers:samples: method of its delegate. -inputReceivedFromSoundcard is defined in the AFSoundcardDelegate protocol.

The above is all the code that you will need to receive sound samples from the default input sound card that is selected in the System Preferences Sound pane. [Yes, really.]

buffers is an array of pointers to floating point buffers, one for each channel of the sound card. Each buffer contains the number of samples given by the samples: argument, taken at the current sampling rate (which you can find and change by using /Applications/Utilities/Audio MIDI Setup).
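As a concrete illustration, here is a delegate that measures the peak amplitude of each batch of samples, using only the arguments that the callback supplies (floating point samples are nominally in the -1.0 to +1.0 range):


- (void)inputReceivedFromSoundcard:(AFSoundcard*)soundcard buffers:(float**)buffers numberOfBuffers:(int)n samples:(int)m
{
    float peak = 0.0f ;
    int channel, i ;

    for ( channel = 0; channel < n; channel++ ) {
        for ( i = 0; i < m; i++ ) {
            float v = fabsf( buffers[channel][i] ) ;
            if ( v > peak ) peak = v ;
        }
    }
    NSLog( @"peak amplitude: %.3f", peak ) ;
}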


Sending sound samples to an output AFSoundcard

To start sending data to the output soundcard, send the start message to it.


- (void)startOutput
{
    [ outputSoundcard setDelegate:self ] ;
    [ outputSoundcard start ] ;
}

- (Boolean)outputNeededBySoundcard:(AFSoundcard*)soundcard buffers:(float**)buffers numberOfBuffers:(int)n samples:(int)m
{
    // copy audio samples into buffers
    return true ;    // return false if no data is available
}


The output soundcard calls the -outputNeededBySoundcard:buffers:numberOfBuffers:samples: method of its delegate when data is needed by the sound device. -outputNeededBySoundcard is defined in the AFSoundcardDelegate protocol.

As in the input case, buffers is an array of pointers to floating point buffers, one for each channel of the sound card. When called, the delegate should fill the buffers with audio samples at the current sampling rate and return true. If no data is available, the delegate should return false, and the AFSoundcard will fill the buffers with zeros.
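For illustration, the following delegate fills every channel with a 440 Hz sine tone. The 44100.0 samples/sec figure is an assumption for this sketch; a real delegate should use the soundcard's current sampling rate:


- (Boolean)outputNeededBySoundcard:(AFSoundcard*)soundcard buffers:(float**)buffers numberOfBuffers:(int)n samples:(int)m
{
    static double phase = 0.0 ;
    double increment = 2.0*M_PI*440.0/44100.0 ;    // 440 Hz tone at an assumed 44100.0 samples/sec
    int channel, i ;

    for ( i = 0; i < m; i++ ) {
        float v = (float)( 0.5*sin( phase ) ) ;
        for ( channel = 0; channel < n; channel++ ) buffers[channel][i] = v ;
        phase += increment ;
    }
    return true ;
}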


Playthrough from an input AFSoundcard to an output AFSoundcard

Audio Library provides an easy way for you to push data buffers that are clocked at the sampling rate of an input AFSoundcard to an output AFSoundcard; see -pushBuffer.
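A playthrough delegate might therefore look like the sketch below. The -pushBuffer selector is the one named above, but the full argument list shown here is an assumption; check AFSoundcard.h for the actual signature:


- (void)inputReceivedFromSoundcard:(AFSoundcard*)soundcard buffers:(float**)buffers numberOfBuffers:(int)n samples:(int)m
{
    // forward the input buffers to the output soundcard
    // (hypothetical argument list -- see AFSoundcard.h)
    [ outputSoundcard pushBuffer:buffers numberOfBuffers:n samples:m ] ;
}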


Selecting sound devices, sampling rates, etc.

The Managed Controls section of the programming guide explains how you can manipulate volume levels, and how you can select different physical sound cards, sound sources, and sampling rates. You can select a preferred buffer size for the data callbacks, and you can use a "push" model to transfer data to the output sound card instead of the "pull" model described above.

You can interface with AFSoundcard at a different sampling rate than the sampling rate of the physical device by setting the resamplingRate.
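For example, assuming the setter follows the usual Cocoa naming convention for the resamplingRate property:


[ inputSoundcard setResamplingRate:8000.0 ] ;    // callbacks now deliver data resampled to 8000 samples/sec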


Sound files

An AFSoundfile object is initialized as an input sound file with -initAsInput or as an output sound file with -initAsOutput.


#import <AudioLibrary/AFLibrary.h>

@interface MyAudioApp : NSObject <AFSoundcardDelegate> {
    AFSoundfile *inputSoundFile, *outputSoundFile ;
}
@end

@implementation MyAudioApp
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    inputSoundFile = [ [ AFSoundfile alloc ] initAsInput ] ;
    outputSoundFile = [ [ AFSoundfile alloc ] initAsOutput ] ;
}

- (void)dealloc
{
    [ inputSoundFile release ] ;
    [ outputSoundFile release ] ;
    [ super dealloc ] ;
}
@end




Input sound files

Use -open to connect an input AFSoundfile to a physical file if you would like AFSoundfile to present you with an NSOpenPanel interface.


OSStatus status ;
status = [ inputSoundFile open ] ;


The returned status is either one of the Mac OS X OSStatus values or one of the AFSoundfile status values that are defined in AFSoundfile.h:


#define kNotInputSoundfile      -400
#define kNotOutputSoundfile     -401
#define kOpenCancelledByUser    -402
#define kCannotOpenFile         -403
#define kFormatNotImplemented   -404


After opening a sound file, you can obtain its URL, sampling rate and number of channels.


NSURL *url = [ inputSoundFile url ] ;
float samplingRate = [ inputSoundFile samplingRate ] ;
int channels = [ inputSoundFile channels ] ;


If you are using your own Cocoa interface to obtain a URL, or if you know the path to the file, use the "faceless" -openURL: call:


status = [ inputSoundFile openURL:[ NSURL fileURLWithPath:@"existingSoundfile.wav" isDirectory:NO ] ] ;


Please note that you cannot set the number of channels or the sampling rate of an input sound file(!). However, you can set the resampling rate at which your input callback is called, and you can also set a "channel mask" which determines which channels of a multichannel sound file are sent in the callback.


[ inputSoundFile setResamplingRate:11025.0 ] ;    // deliver samples resampled to 11025 samples/sec
[ inputSoundFile setChannelMask:0x06 ] ;          // mask bits select which channels are delivered


You can get the duration of the file (in seconds), and also the position of the file pointer (also in seconds), by asking the AFSoundfile object for -elapsed and -duration. You can position where the next sample is read by using the -cueTo: call (the argument is also in seconds).


float elapsed = [ inputSoundFile elapsed ] ;
float fileDuration = [ inputSoundFile duration ] ;
[ inputSoundFile cueTo:fileDuration*0.5 ] ;    // position to 50% of the file duration


Use -getBuffers:numberOfBuffers:samples: to read floating point sound samples from the file,


float *bufferPointers[2], leftChannel[512], rightChannel[512] ;

bufferPointers[0] = &leftChannel[0] ;
bufferPointers[1] = &rightChannel[0] ;
status = [ inputSoundFile getBuffers:bufferPointers numberOfBuffers:2 samples:512 ] ;


If the resampling rate of the AFSoundfile has not been set, the samples that are returned correspond to the samples in the file itself. If the resampling rate has been set, the data in the file will be resampled to that rate before being returned to the client.


Output sound files

To create an output sound file for the most common cases, you can use one of the following calls,


status = [ outputSoundFile createAIFFWithSamplingRate:11025.0 ] ;
status = [ outputSoundFile createWAVWithSamplingRate:48000.0 ] ;

status = [ outputSoundFile createAACWithSamplingRate:44100.0 ] ;
status = [ outputSoundFile createCAFWithSamplingRate:16000.0 ] ;


The above calls will open a two-channel file for the AIFF, WAV, AAC, and Core Audio formats. AFSoundfile will present an NSSavePanel for you to select a file name and location to create the file.

AAC is a compressed audio format that is contained in an MPEG-4 file container, and such a file usually has the m4a file extension.

If you need an AIFF, WAV, AAC, or Core Audio Format file with a number of channels other than two, use the following call (the AIFF file type is shown in the example):


status = [ outputSoundFile createWithFileType:kAudioFileAIFFType samplingRate:11025.0 channels:4 ] ;


The audio file type must be one of the following:


kAudioFileAIFFType
kAudioFileWAVEType
kAudioFileM4AType
kAudioFileCAFType


If you need an audio file format that is different from the above, use the following call (MP3 file type shown in the example):


AudioStreamBasicDescription asbd ;

// fill in the AudioStreamBasicDescription structure here
status = [ outputSoundFile createWithFileType:kAudioFileMP3Type descriptor:&asbd ] ;


For this case, you will need to fill in the members of the AudioStreamBasicDescription structure, and the file type has to be one of the AudioFileTypeID values in the Core Audio AudioFile.h header file.
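As a sketch, the descriptor for a stereo MP3 file could be filled in as follows; fields left at zero are determined by the encoder for a compressed format (whether a given format can actually be encoded depends on the codecs installed on the system):


AudioStreamBasicDescription asbd = { 0 } ;

asbd.mSampleRate = 44100.0 ;                   // 44.1 kHz
asbd.mFormatID = kAudioFormatMPEGLayer3 ;      // MP3, from CoreAudioTypes.h
asbd.mChannelsPerFrame = 2 ;                   // stereo
status = [ outputSoundFile createWithFileType:kAudioFileMP3Type descriptor:&asbd ] ;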

Like the input AFSoundfile case above, you can submit your own "faceless" URL instead of letting AFSoundfile create one for you with its NSSavePanel interface.


status = [ outputSoundFile createURL:url type:audioFileTypeID samplingRate:44100.0 channels:2 overWrite:YES ] ;


The overWrite flag indicates whether AFSoundfile is allowed to overwrite an existing file.

Like the input AFSoundfile case, you must use one of the four file types listed above (AIFF, WAV, AAC and CAF). For other file formats, use


status = [ outputSoundFile createURL:url type:audioFileTypeID descriptor:&asbd overWrite:YES ] ;


As with the -createWithFileType:descriptor: method, you will also need to fill in an AudioStreamBasicDescription structure for the descriptor argument.

Use -putBuffers:numberOfBuffers:samples: to write floating point sound samples to the file,


float *bufferPointers[2], leftChannel[512], rightChannel[512] ;

bufferPointers[0] = &leftChannel[0] ;
bufferPointers[1] = &rightChannel[0] ;
status = [ outputSoundFile putBuffers:bufferPointers numberOfBuffers:2 samples:512 ] ;


As with -getBuffers for the input AFSoundfile: if the resampling rate of the AFSoundfile has not been set, the samples are assumed to be at the sampling rate that the file was created with. If the resampling rate has been set, the data will first be resampled before being written to the file.

An output AFSoundfile ignores the -cueTo message. The file pointer always stays at the end of the file, and as such, the output AFSoundfile returns the same value for -duration and -elapsed.

Note that with a pair of input and output AFSoundfile objects, you can very easily build an application that makes a copy of a sound file with a different sampling rate, or even a different sound file format.
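As a sketch, a copy loop built only from the calls described above might look like this, using -elapsed and -duration to detect the end of the input file and treating a zero (noErr) status as success:


float *bufferPointers[2], leftChannel[512], rightChannel[512] ;
OSStatus status ;

bufferPointers[0] = &leftChannel[0] ;
bufferPointers[1] = &rightChannel[0] ;

while ( [ inputSoundFile elapsed ] < [ inputSoundFile duration ] ) {
    status = [ inputSoundFile getBuffers:bufferPointers numberOfBuffers:2 samples:512 ] ;
    if ( status != 0 ) break ;
    [ outputSoundFile putBuffers:bufferPointers numberOfBuffers:2 samples:512 ] ;
}
[ outputSoundFile close ] ;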

Be sure to send a -close message to the output AFSoundfile when you are finished writing to it, or the file may not be readable as an audio file. A final release of the AFSoundfile object will also implicitly close an underlying file that was not closed.