
Mac OS X Simple Voice Recorder

Posted 2020-02-11 09:14

Question:

Does anyone have some sample code for a SIMPLE voice recorder for Mac OS X? I would just like to record my voice coming from the internal microphone on my MacBook Pro and save it to a file. That is all.

I have been searching for hours and yes, there are some examples that will record voice and save it to a file such as http://developer.apple.com/library/mac/#samplecode/MYRecorder/Introduction/Intro.html . The sample code for Mac OS X seems to be about 10 times more complicated than similar sample code for the iPhone.

For iOS the commands are as simple as:

soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"mysound.cap"]];
soundSetting = [NSDictionary dictionaryWithObjectsAndKeys: // dictionary setting code left out goes here
soundRecorder = [[AVAudioRecorder alloc] initWithURL:soundFile settings:soundSetting error:nil];
[soundRecorder record];
[soundRecorder stop];  

I think there must be code to do this for Mac OS X that is as simple as the iPhone version. Thank you for your help.

Here is the code (currently the player does not work):

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>

@interface MyAVFoundationClass : NSObject <AVAudioPlayerDelegate>
{
    AVAudioRecorder *soundRecorder;

}

@property (retain) AVAudioRecorder *soundRecorder;

-(IBAction)stopAudio:(id)sender;
-(IBAction)recordAudio:(id)sender;
-(IBAction)playAudio:(id)sender;

@end


#import "MyAVFoundationClass.h"

@implementation MyAVFoundationClass

@synthesize soundRecorder;

-(void)awakeFromNib
{
    NSLog(@"awakeFromNib visited");
    NSString *tempDir;
    NSURL *soundFile;
    NSDictionary *soundSetting;

    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath: [tempDir stringByAppendingString:@"test1.caf"]];    
    NSLog(@"soundFile: %@",soundFile);

    soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithFloat: 44100.0],AVSampleRateKey,
                    [NSNumber numberWithInt: kAudioFormatMPEG4AAC],AVFormatIDKey,
                    [NSNumber numberWithInt: 2],AVNumberOfChannelsKey,
                    [NSNumber numberWithInt: AVAudioQualityHigh],AVEncoderAudioQualityKey, nil];

    soundRecorder = [[AVAudioRecorder alloc] initWithURL: soundFile settings: soundSetting error: nil];
}

-(IBAction)stopAudio:(id)sender
{
    NSLog(@"stopAudioVisited");
    [soundRecorder stop];
}

-(IBAction)recordAudio:(id)sender
{
    NSLog(@"recordAudio Visited");
    [soundRecorder record];

}

-(IBAction)playAudio:(id)sender
{
    NSLog(@"playAudio Visited");
    NSURL *soundFile;
    NSString *tempDir;
    AVAudioPlayer *audioPlayer;

    tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    soundFile = [NSURL fileURLWithPath: [tempDir stringByAppendingString:@"test1.caf"]];  
    NSLog(@"soundFile: %@", soundFile);

    audioPlayer =  [[AVAudioPlayer alloc] initWithContentsOfURL:soundFile error:nil];

    [audioPlayer setDelegate:self];
    [audioPlayer play];

}

@end

Answer 1:

The AVFoundation framework is new in Lion and is very similar to the iOS version. That includes AVAudioRecorder. You can use the code from iOS with little or no modification.

Docs are here.



Answer 2:

The reason your code does not play the audio is that the audioPlayer variable is released as soon as execution reaches the end of the method block.

So move the following declaration outside the method block (for example, make it an instance variable), and the audio will play correctly:

 AVAudioPlayer *audioPlayer; 

By the way, your code snippet was very helpful for me! :D
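Applied to the question's MyAVFoundationClass, the fix could look like this sketch (the player is held in an instance variable so it outlives playAudio:):

```objectivec
// In the @interface: keep a reference to the player for the duration of playback
@interface MyAVFoundationClass : NSObject <AVAudioPlayerDelegate>
{
    AVAudioRecorder *soundRecorder;
    AVAudioPlayer *audioPlayer;   // ivar instead of a local variable
}
// ...
@end

// In the implementation:
-(IBAction)playAudio:(id)sender
{
    NSString *tempDir = @"/Users/broncotrojan/Documents/testvoices/";
    NSURL *soundFile = [NSURL fileURLWithPath:[tempDir stringByAppendingString:@"test1.caf"]];

    audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundFile error:nil];
    [audioPlayer setDelegate:self];
    [audioPlayer play];   // the ivar keeps the player alive until playback finishes
}
```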



Answer 3:

Here is the snippet for Mac:

    NSDictionary *soundSetting = [NSDictionary dictionaryWithObjectsAndKeys:
                                  [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
                                  [NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
                                  [NSNumber numberWithInt: 2], AVNumberOfChannelsKey,
                                  [NSNumber numberWithInt: AVAudioQualityHigh], AVEncoderAudioQualityKey,
                                  nil];

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    // AVAudioRecorder infers the file type from the extension; AAC belongs in an .m4a container
    NSURL *audioFileURL = [NSURL fileURLWithPath:[documentsDirectory stringByAppendingPathComponent:@"test.m4a"]];

    NSError *error = nil;
    AVAudioRecorder *soundRecorder = [[AVAudioRecorder alloc] initWithURL: audioFileURL
                                                                 settings: soundSetting
                                                                    error: &error];
    if (soundRecorder == nil)
    {
        NSLog(@"Error! soundRecorder initialization failed: %@", error);
    }

    // start recording
    [soundRecorder record];


Answer 4:

Here is the code that is working for me on macOS 10.14 with Xcode 10.2.1, Swift 5.0.1.

First of all, you have to set up NSMicrophoneUsageDescription, aka Privacy - Microphone Usage Description, in your Info.plist file as described in the Apple docs: Requesting Authorization for Media Capture on macOS.
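For reference, the Info.plist entry looks like this (the description string here is just an example; use wording appropriate for your app):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app records audio from the built-in microphone.</string>
```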

Then you have to request a permission from a user to use a microphone:

switch AVCaptureDevice.authorizationStatus(for: .audio) {
case .authorized: // The user has previously granted access to the microphone.
  // proceed with recording 

case .notDetermined: // The user has not yet been asked for microphone access.
  AVCaptureDevice.requestAccess(for: .audio) { granted in

    if granted {
      // proceed with recording
    }
  }

case .denied: // The user has previously denied access.
  ()

case .restricted: // The user can't grant access due to restrictions.
  ()

@unknown default:
  fatalError()
}

Then you can use the following methods to start and stop audio recording:

import AVFoundation

open class SpeechRecorder: NSObject {
  private var destinationUrl: URL!

  var recorder: AVAudioRecorder?
  let player = AVQueuePlayer()

  open func start() {
    destinationUrl = createUniqueOutputURL()

    do {
      let format = AVAudioFormat(settings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVEncoderAudioQualityKey: AVAudioQuality.high,
        AVSampleRateKey: 44100.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        ])!
      let recorder = try AVAudioRecorder(url: destinationUrl, format: format)

      // workaround against Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL issue 
      // https://stackoverflow.com/a/57670740/598057
      let firstSuccess = recorder.record()
      if firstSuccess == false || recorder.isRecording == false {
        recorder.record()
      }
      assert(recorder.isRecording)

      self.recorder = recorder
    } catch let error {
      let code = (error as NSError).code
      NSLog("SpeechRecorder: \(error)")
      NSLog("SpeechRecorder: \(code)")

      let osCode = OSStatus(code)

      NSLog("SpeechRecorder: \(String(describing: osCode.detailedErrorMessage()))")
    }
  }

  open func stop() {
    NSLog("SpeechRecorder: stop()")

    if let recorder = recorder {
      recorder.stop()
      NSLog("SpeechRecorder: final file \(destinationUrl.absoluteString)")

      player.removeAllItems()
      player.insert(AVPlayerItem(url: destinationUrl), after: nil)
      player.play()
    }
  }

  func createUniqueOutputURL() -> URL {
    // Write to the temporary directory; a millisecond timestamp keeps file names unique.
    let outputDirectory = URL(fileURLWithPath: NSTemporaryDirectory())

    let currentTime = Int(Date().timeIntervalSince1970 * 1000)

    let outputURL = URL(fileURLWithPath: "SpeechRecorder-\(currentTime).m4a",
                        relativeTo: outputDirectory)

    return outputURL
  }
}

extension OSStatus {
  //**************************
  func asString() -> String? {
    let n = UInt32(bitPattern: self.littleEndian)
    guard let n1 = UnicodeScalar((n >> 24) & 255), n1.isASCII else { return nil }
    guard let n2 = UnicodeScalar((n >> 16) & 255), n2.isASCII else { return nil }
    guard let n3 = UnicodeScalar((n >>  8) & 255), n3.isASCII else { return nil }
    guard let n4 = UnicodeScalar( n        & 255), n4.isASCII else { return nil }
    return String(n1) + String(n2) + String(n3) + String(n4)
  } // asString

  //**************************
  func detailedErrorMessage() -> String {
    switch(self) {
    case 0:
      return "Success"

    // AVAudioRecorder errors
    case kAudioFileUnspecifiedError:
      return "kAudioFileUnspecifiedError"

    case kAudioFileUnsupportedFileTypeError:
      return "kAudioFileUnsupportedFileTypeError"

    case kAudioFileUnsupportedDataFormatError:
      return "kAudioFileUnsupportedDataFormatError"

    case kAudioFileUnsupportedPropertyError:
      return "kAudioFileUnsupportedPropertyError"

    case kAudioFileBadPropertySizeError:
      return "kAudioFileBadPropertySizeError"

    case kAudioFilePermissionsError:
      return "kAudioFilePermissionsError"

    case kAudioFileNotOptimizedError:
      return "kAudioFileNotOptimizedError"

    case kAudioFileInvalidChunkError:
      return "kAudioFileInvalidChunkError"

    case kAudioFileDoesNotAllow64BitDataSizeError:
      return "kAudioFileDoesNotAllow64BitDataSizeError"

    case kAudioFileInvalidPacketOffsetError:
      return "kAudioFileInvalidPacketOffsetError"

    case kAudioFileInvalidFileError:
      return "kAudioFileInvalidFileError"

    case kAudioFileOperationNotSupportedError:
      return "kAudioFileOperationNotSupportedError"

    case kAudioFileNotOpenError:
      return "kAudioFileNotOpenError"

    case kAudioFileEndOfFileError:
      return "kAudioFileEndOfFileError"

    case kAudioFilePositionError:
      return "kAudioFilePositionError"

    case kAudioFileFileNotFoundError:
      return "kAudioFileFileNotFoundError"

    //***** AUGraph errors
    case kAUGraphErr_NodeNotFound:             return "AUGraph Node Not Found"
    case kAUGraphErr_InvalidConnection:        return "AUGraph Invalid Connection"
    case kAUGraphErr_OutputNodeErr:            return "AUGraph Output Node Error"
    case kAUGraphErr_CannotDoInCurrentContext: return "AUGraph Cannot Do In Current Context"
    case kAUGraphErr_InvalidAudioUnit:         return "AUGraph Invalid Audio Unit"

    //***** MIDI errors
    case kMIDIInvalidClient:     return "MIDI Invalid Client"
    case kMIDIInvalidPort:       return "MIDI Invalid Port"
    case kMIDIWrongEndpointType: return "MIDI Wrong Endpoint Type"
    case kMIDINoConnection:      return "MIDI No Connection"
    case kMIDIUnknownEndpoint:   return "MIDI Unknown Endpoint"
    case kMIDIUnknownProperty:   return "MIDI Unknown Property"
    case kMIDIWrongPropertyType: return "MIDI Wrong Property Type"
    case kMIDINoCurrentSetup:    return "MIDI No Current Setup"
    case kMIDIMessageSendErr:    return "MIDI Message Send Error"
    case kMIDIServerStartErr:    return "MIDI Server Start Error"
    case kMIDISetupFormatErr:    return "MIDI Setup Format Error"
    case kMIDIWrongThread:       return "MIDI Wrong Thread"
    case kMIDIObjectNotFound:    return "MIDI Object Not Found"
    case kMIDIIDNotUnique:       return "MIDI ID Not Unique"
    case kMIDINotPermitted:      return "MIDI Not Permitted"

    //***** AudioToolbox errors
    case kAudioToolboxErr_CannotDoInCurrentContext: return "AudioToolbox Cannot Do In Current Context"
    case kAudioToolboxErr_EndOfTrack:               return "AudioToolbox End Of Track"
    case kAudioToolboxErr_IllegalTrackDestination:  return "AudioToolbox Illegal Track Destination"
    case kAudioToolboxErr_InvalidEventType:         return "AudioToolbox Invalid Event Type"
    case kAudioToolboxErr_InvalidPlayerState:       return "AudioToolbox Invalid Player State"
    case kAudioToolboxErr_InvalidSequenceType:      return "AudioToolbox Invalid Sequence Type"
    case kAudioToolboxErr_NoSequence:               return "AudioToolbox No Sequence"
    case kAudioToolboxErr_StartOfTrack:             return "AudioToolbox Start Of Track"
    case kAudioToolboxErr_TrackIndexError:          return "AudioToolbox Track Index Error"
    case kAudioToolboxErr_TrackNotFound:            return "AudioToolbox Track Not Found"
    case kAudioToolboxError_NoTrackDestination:     return "AudioToolbox No Track Destination"

    //***** AudioUnit errors
    case kAudioUnitErr_CannotDoInCurrentContext: return "AudioUnit Cannot Do In Current Context"
    case kAudioUnitErr_FailedInitialization:     return "AudioUnit Failed Initialization"
    case kAudioUnitErr_FileNotSpecified:         return "AudioUnit File Not Specified"
    case kAudioUnitErr_FormatNotSupported:       return "AudioUnit Format Not Supported"
    case kAudioUnitErr_IllegalInstrument:        return "AudioUnit Illegal Instrument"
    case kAudioUnitErr_Initialized:              return "AudioUnit Initialized"
    case kAudioUnitErr_InvalidElement:           return "AudioUnit Invalid Element"
    case kAudioUnitErr_InvalidFile:              return "AudioUnit Invalid File"
    case kAudioUnitErr_InvalidOfflineRender:     return "AudioUnit Invalid Offline Render"
    case kAudioUnitErr_InvalidParameter:         return "AudioUnit Invalid Parameter"
    case kAudioUnitErr_InvalidProperty:          return "AudioUnit Invalid Property"
    case kAudioUnitErr_InvalidPropertyValue:     return "AudioUnit Invalid Property Value"
    case kAudioUnitErr_InvalidScope:             return "AudioUnit InvalidScope"
    case kAudioUnitErr_InstrumentTypeNotFound:   return "AudioUnit Instrument Type Not Found"
    case kAudioUnitErr_NoConnection:             return "AudioUnit No Connection"
    case kAudioUnitErr_PropertyNotInUse:         return "AudioUnit Property Not In Use"
    case kAudioUnitErr_PropertyNotWritable:      return "AudioUnit Property Not Writable"
    case kAudioUnitErr_TooManyFramesToProcess:   return "AudioUnit Too Many Frames To Process"
    case kAudioUnitErr_Unauthorized:             return "AudioUnit Unauthorized"
    case kAudioUnitErr_Uninitialized:            return "AudioUnit Uninitialized"
    case kAudioUnitErr_UnknownFileType:          return "AudioUnit Unknown File Type"
    case kAudioUnitErr_RenderTimeout:            return "AudioUnit Render Timeout"

    //***** Audio errors
    case kAudio_BadFilePathError:      return "Audio Bad File Path Error"
    case kAudio_FileNotFoundError:     return "Audio File Not Found Error"
    case kAudio_FilePermissionError:   return "Audio File Permission Error"
    case kAudio_MemFullError:          return "Audio Mem Full Error"
    case kAudio_ParamError:            return "Audio Param Error"
    case kAudio_TooManyFilesOpenError: return "Audio Too Many Files Open Error"
    case kAudio_UnimplementedError:    return "Audio Unimplemented Error"

    default: return "Unknown error (no description)"
    }
  }
}

The workaround for the inPropertyData == NULL issue is adapted from Swift, AVAudioRecorder: Error 317: ca_debug_string: inPropertyData == NULL.

The code that provides string messages for the OSStatus codes is adapted from here: How do you convert an iPhone OSStatus code to something useful?.
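A minimal usage sketch (assuming the microphone permission check above has already succeeded):

```swift
let recorder = SpeechRecorder()
recorder.start()           // begins writing SpeechRecorder-<timestamp>.m4a

// Stop after five seconds; stop() also plays the recording back
DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
    recorder.stop()
}
```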