
Play a paused AVAudioRecorder file

Posted 2019-06-25 11:05

In my program I want the user to be able to:

  • record his voice,
  • pause the recording process,
  • listen to what he recorded,
  • and then continue recording.

I have managed to get to the point where I can record with AVAudioRecorder and play the recording back with AVAudioPlayer. But whenever I try to record, pause the recording, and then play, the playback part fails with no error.

I can guess that the reason it's not playing is because the audio file hasn't been saved yet and is still in memory or something.

Is there a way I can play back paused recordings? If there is, please tell me how.

I'm using Xcode 4.3.2.

Answer 1:

RecordAudioViewController.h

 #import <UIKit/UIKit.h>
 #import <AVFoundation/AVFoundation.h>
 #import <CoreAudio/CoreAudioTypes.h>

   @interface record_audio_testViewController : UIViewController <AVAudioRecorderDelegate> {

IBOutlet UIButton * btnStart;
IBOutlet UIButton * btnPlay;
IBOutlet UIActivityIndicatorView * actSpinner;
BOOL toggle;

//Variables setup for access in the class:
NSURL * recordedTmpFile;
AVAudioRecorder * recorder;
NSError * error;

 }

 @property (nonatomic,retain)IBOutlet UIActivityIndicatorView * actSpinner;
 @property (nonatomic,retain)IBOutlet UIButton * btnStart;
 @property (nonatomic,retain)IBOutlet UIButton * btnPlay;

 - (IBAction) start_button_pressed;
 - (IBAction) play_button_pressed;
 @end

RecordAudioViewController.m

  @implementation record_audio_testViewController

  @synthesize actSpinner, btnStart, btnPlay;
   - (void)viewDidLoad {
    [super viewDidLoad];

//Start the toggle in true mode.
toggle = YES;
btnPlay.hidden = YES;

//Instantiate an instance of the AVAudioSession object.
AVAudioSession * audioSession = [AVAudioSession sharedInstance];
//Set up the audioSession for playback and record.
//We could just use record and then switch it to playback later, but
//since we are going to do both, let's set it up once.
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error: &error];
//Activate the session
[audioSession setActive:YES error: &error];

  }


 - (IBAction)  start_button_pressed{

if(toggle)
{
    toggle = NO;
    [actSpinner startAnimating];
    [btnStart setTitle:@"Stop Recording" forState: UIControlStateNormal ];  
    btnPlay.enabled = toggle;
    btnPlay.hidden = !toggle;

    //Begin the recording session.
    //Error handling removed.  Please add to your own code.

    //Set up the dictionary object with all the recording settings that this
    //recording session will use.
    //It's not clear to me which of these are required and which are the bare minimum.
    //This is a good resource: http://www.totodotnet.net/tag/avaudiorecorder/
    NSMutableDictionary* recordSetting = [[NSMutableDictionary alloc] init];
    [recordSetting setValue :[NSNumber numberWithInt:kAudioFormatAppleIMA4] forKey:AVFormatIDKey];
    [recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey]; 
    [recordSetting setValue:[NSNumber numberWithInt: 2] forKey:AVNumberOfChannelsKey];

    //Now that we have our settings, we are going to instantiate our recorder.
    //Generate a temp file for use by the recording.
    //This sample was one I found online and seems to be a good choice for making a tmp file that
    //will not overwrite an existing one.
    //I know this is a mess of collapsed things into 1 call.  I can break it out if need be.
    recordedTmpFile = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent: [NSString stringWithFormat: @"%.0f.%@", [NSDate timeIntervalSinceReferenceDate] * 1000.0, @"caf"]]];
    NSLog(@"Using File called: %@",recordedTmpFile);
    //Setup the recorder to use this file and record to it.
    recorder = [[ AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error];
    //Use the recorder to start the recording.
    //I'm not sure why we set the delegate to self yet.
    //Found this in another example, but I'm fuzzy on this still.
    [recorder setDelegate:self];
    //We call this to start the recording process and initialize
    //the subsystems so that when we actually say "record" it starts right away.
    [recorder prepareToRecord];
    //Start the actual Recording
    [recorder record];
    //There is an optional method for doing the recording for a limited time see 
    //[recorder recordForDuration:(NSTimeInterval) 10]

}
else
{
    toggle = YES;
    [actSpinner stopAnimating];
    [btnStart setTitle:@"Start Recording" forState:UIControlStateNormal ];
    btnPlay.enabled = toggle;
    btnPlay.hidden = !toggle;

    NSLog(@"Using File called: %@",recordedTmpFile);
    //Stop the recorder.
    [recorder stop];
}
  }

  - (void)didReceiveMemoryWarning {
// Releases the view if it doesn't have a superview.
[super didReceiveMemoryWarning];

// Release any cached data, images, etc that aren't in use.
  }

  -(IBAction) play_button_pressed{

//The play button was pressed...
//Set up the AVAudioPlayer to play the file that we just recorded.
//Note: in production code, keep a strong reference to the player (e.g. an ivar)
//and release it in audioPlayerDidFinishPlaying:; a local that is never
//released will leak under manual reference counting.
AVAudioPlayer * avPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordedTmpFile error:&error];
[avPlayer prepareToPlay];
[avPlayer play];

  }

   - (void)viewDidUnload {
// Release any retained subviews of the main view.
// e.g. self.myOutlet = nil;
//Clean up the temp file.
NSFileManager * fm = [NSFileManager defaultManager];
[fm removeItemAtPath:[recordedTmpFile path] error:&error];
//Release the remaining objects. (Never call -dealloc directly; use -release.)
[recorder release];
recorder = nil;
recordedTmpFile = nil;
  }


  - (void)dealloc {
[super dealloc];
  }

 @end

RecordAudioViewController.xib

Add two buttons in the XIB: one to start the recording, and another to play the recording back.



Answer 2:

If you want to play back the recording, then yes, you have to stop recording before you can load the file into an AVAudioPlayer instance.
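As a minimal sketch of that stop-then-play sequence (in Swift; the `recorder` and `fileURL` names are hypothetical, standing in for an active AVAudioRecorder and the URL it was initialized with):

```swift
import AVFoundation

// Hypothetical names: `recorder` is an active AVAudioRecorder,
// `fileURL` is the URL it was initialized with.
func stopAndPlay(recorder: AVAudioRecorder, fileURL: URL) throws -> AVAudioPlayer {
    // Stopping (not pausing) finalizes the file so it can be read back.
    recorder.stop()

    // Only now can the file be handed to an AVAudioPlayer.
    let player = try AVAudioPlayer(contentsOf: fileURL)
    player.prepareToPlay()
    player.play()
    return player // keep a strong reference for as long as playback runs
}
```

The caller must hold on to the returned player; if it is deallocated, playback stops.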

If you want to be able to play back part of the recording, listen to it, and afterwards add more to the recording, then you're in for some trouble.

You have to create a new audio file and then combine the two together.

This was my solution:

// Generate a composition of the two audio assets that will be combined into
// a single track
AVMutableComposition* composition = [AVMutableComposition composition];
AVMutableCompositionTrack* audioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                 preferredTrackID:kCMPersistentTrackID_Invalid];

// grab the two audio assets as AVURLAssets according to the file paths
AVURLAsset* masterAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.masterFile] options:nil];
AVURLAsset* activeAsset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:self.newRecording] options:nil];

NSError* error = nil;

// grab the portion of interest from the master asset
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, masterAsset.duration)
                    ofTrack:[[masterAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                     atTime:kCMTimeZero
                      error:&error];
if (error)
{
    // report the error
    return;
}

// append the entirety of the active recording
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, activeAsset.duration)
                    ofTrack:[[activeAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                     atTime:masterAsset.duration
                      error:&error];

if (error)
{
    // report the error
    return;
}

// now export the two files
// create the export session
// no need for a retain here, the session will be retained by the
// completion handler since it is referenced there

AVAssetExportSession* exportSession = [AVAssetExportSession
                                       exportSessionWithAsset:composition
                                       presetName:AVAssetExportPresetAppleM4A];
if (nil == exportSession)
{
    // report the error
    return;
}


NSString* combined = @"combined file path";// create a new file for the combined file

// configure export session  output with all our parameters
exportSession.outputURL = [NSURL fileURLWithPath:combined]; // output path
exportSession.outputFileType = AVFileTypeAppleM4A; // output file type

[exportSession exportAsynchronouslyWithCompletionHandler:^{

    // export status changed, check to see if it's done, errored, waiting, etc
    switch (exportSession.status)
    {
        case AVAssetExportSessionStatusFailed:
            break;
        case AVAssetExportSessionStatusCompleted:
            break;
        case AVAssetExportSessionStatusWaiting:
            break;
        default:
            break;
    }
    NSError* error = nil;

    // your code for dealing with the now combined file
}];

I can't take full credit for this work; it was pieced together from the inputs of several others:

AVAudioRecorder / AVAudioPlayer - append recording to file

(I can't find the other link at the moment)



Answer 3:

We had the same requirements for our app as the OP described, and ran into the same issue (namely, the recording has to be stopped, not paused, if the user wants to listen to what she has recorded up to that point). Our app (the project's GitHub repo) uses AVQueuePlayer for playback and a method similar to kermitology's answer to concatenate the partial recordings, with some notable differences:

  • implemented in Swift
  • concatenates multiple recordings into one
  • no messing with tracks

The rationale behind the last item is that simple recordings made with AVAudioRecorder will have one track, and the main reason for this whole workaround is to concatenate those single tracks in the assets (see Addendum 3). So why not use AVMutableComposition's insertTimeRange method instead, which takes an AVAsset rather than an AVAssetTrack?

Relevant parts (full code):

import UIKit
import AVFoundation

class RecordViewController: UIViewController {

    /* App allows volunteers to record newspaper articles for the
       blind and print-impaired, hence the name.
    */
    var articleChunks = [AVURLAsset]()

    func concatChunks() {
        let composition = AVMutableComposition()

        /* `CMTimeRange` to store total duration and know when to
           insert subsequent assets.
        */
        var insertAt = CMTimeRange(start: kCMTimeZero, end: kCMTimeZero)

        repeat {
            let asset = self.articleChunks.removeFirst()

            let assetTimeRange = 
                CMTimeRange(start: kCMTimeZero, end: asset.duration)

            do {
                try composition.insertTimeRange(assetTimeRange, 
                                                of: asset, 
                                                at: insertAt.end)
            } catch {
                NSLog("Unable to compose asset track.")
            }

            let nextDuration = insertAt.duration + assetTimeRange.duration
            insertAt = CMTimeRange(start: kCMTimeZero, duration: nextDuration)
        } while self.articleChunks.count != 0

        let exportSession =
            AVAssetExportSession(
                asset:      composition,
                presetName: AVAssetExportPresetAppleM4A)

        exportSession?.outputFileType = AVFileType.m4a
        exportSession?.outputURL = /* create URL for output */
        // exportSession?.metadata = ...

        exportSession?.exportAsynchronously {

            switch exportSession?.status {
            case .unknown?: break
            case .waiting?: break
            case .exporting?: break
            case .completed?: break
            case .failed?: break
            case .cancelled?: break
            case .none: break
            }
        }

        /* Clean up (delete partial recordings, etc.) */
    }
}

This diagram helped me wrap my head around what to expect from what, and what inherits from where. (NSObject is the implied superclass wherever there is no inheritance arrow.)


Addendum 1: I had my reservations about the switch part (instead of using KVO on AVAssetExportSessionStatus), but the docs are clear that exportAsynchronously's callback block "is invoked when writing is complete or in the event of writing failure".

Addendum 2: Just in case anyone has issues with AVQueuePlayer: "An AVPlayerItem cannot be associated with more than one instance of AVPlayer".
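A minimal sketch of the AVQueuePlayer playback mentioned above (assuming `chunkURLs` holds the file URLs of the partial recordings, in order; each chunk gets its own AVPlayerItem, per the quote):

```swift
import AVFoundation

// Assumption: `chunkURLs` are the partial recordings, in playback order.
func makeQueuePlayer(chunkURLs: [URL]) -> AVQueuePlayer {
    // One AVPlayerItem per chunk; an item must never be shared
    // between two players.
    let items = chunkURLs.map { AVPlayerItem(url: $0) }
    return AVQueuePlayer(items: items)
}
```

This only chains playback; it does not produce a single combined file, which is why the export-session approach above is still needed.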

Addendum 3: Unless you are recording in stereo, but mobile devices have only one input as far as I know. Also, using fancy audio mixing would require the use of AVCompositionTrack. A good SO thread on this: Proper AVAudioRecorder Settings for Recording Voice?
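For reference, a plausible single-channel voice-recording setup along the lines of that thread might look like this (the format, sample rate, and quality values are illustrative choices, not prescriptive):

```swift
import AVFoundation

// Illustrative settings for mono voice recording; tune to taste.
let voiceSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100.0,
    AVNumberOfChannelsKey: 1,  // mobile devices have a single input
    AVEncoderAudioQualityKey: AVAudioQuality.medium.rawValue
]
// let recorder = try AVAudioRecorder(url: outputURL, settings: voiceSettings)
```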



Source: Play a paused AVAudioRecorder file