I have an Android project that reads a short[] array of PCM data from the microphone buffer for live analysis. I need to port this functionality to iOS in Swift. In Android it is very simple and looks like this:
import android.media.AudioFormat;
import android.media.AudioRecord;
...
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT,
        someSampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        AudioRecord.getMinBufferSize(...));
recorder.startRecording();
Later I read the buffer with:
recorder.read(data, offset, length); // data is short[]
(That's the call I'm looking for an equivalent of.)
Documentation: https://developer.android.com/reference/android/media/AudioRecord.html
I'm very new to Swift and iOS. I've read a lot of documentation about AudioToolbox, Core Audio and so on, but all I found were C++/Objective-C and bridging-header solutions. That's much too advanced and outdated for me.
For now I can record PCM data to a CAF file with AVFoundation:
settings = [
    AVLinearPCMBitDepthKey: 16 as NSNumber,
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVLinearPCMIsBigEndianKey: 0 as NSNumber,
    AVLinearPCMIsFloatKey: 0 as NSNumber,
    AVSampleRateKey: 12000.0,
    AVNumberOfChannelsKey: 1 as NSNumber,
]
...
recorder = try AVAudioRecorder(url: someURL, settings: settings)
recorder.delegate = self
recorder.record()
But that's not what I'm looking for (or is it?). Is there an elegant way to achieve the Android read functionality described above? I need to get a sample array from the microphone buffer, or do I need to do the reading on the recorded CAF file?
Thanks a lot! Please help me with simple explanations or code examples. I'm not fluent in iOS terminology yet ;-)
If you don't mind floating point samples and 48kHz, you can quickly get audio data from the microphone like so:
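A minimal sketch of that approach, using AVAudioEngine and a tap on the input node (the 1024-frame buffer size and the startListening wrapper are just placeholders; microphone permission and audio-session setup are assumed to be handled elsewhere):

import AVFoundation

let engine = AVAudioEngine()

func startListening() throws {
    // Configure the session for recording and request microphone access first,
    // e.g. AVAudioSession.sharedInstance().setCategory(.record).
    let input = engine.inputNode
    // The input node's native format is typically 48 kHz, Float32, non-interleaved.
    let format = input.outputFormat(forBus: 0)

    // The tap delivers AVAudioPCMBuffers of roughly bufferSize frames as audio arrives.
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let channelData = buffer.floatChannelData else { return }
        let frameCount = Int(buffer.frameLength)
        // Channel 0 as a plain array of Float samples -- the counterpart of the
        // short[] you read from AudioRecord on Android.
        let samples = Array(UnsafeBufferPointer(start: channelData[0], count: frameCount))
        // ...analyse samples here (keep the work light, this closure runs on an audio thread)...
        _ = samples
    }

    try engine.start()
}

If you really need 16-bit integer samples like the Android short[], you can either scale the floats yourself (for example Int16(max(-1, min(1, sample)) * 32767)) or feed the buffers through an AVAudioConverter set up for a 16-bit integer format.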