Scheduling an audio file for playback in the future

Posted 2019-05-21 03:34

Question:

I'm trying to figure out how to correctly schedule an audio file in the near future. My actual goal is to play multiple tracks synchronized.

So how do I configure 'aTime' correctly so that playback starts, for instance, about 0.3 seconds from now? I think I may need the hostTime as well, but I don't know how to use it correctly.

func createStartTime() -> AVAudioTime? {
    var time: AVAudioTime?
    if let lastPlayer = self.trackPlayerDictionary[lastPlayerKey] {
        if let sampleRate = lastPlayer.file?.processingFormat.sampleRate {
            let sampleTime = AVAudioFramePosition(shortStartDelay * sampleRate)
            time = AVAudioTime(sampleTime: sampleTime, atRate: sampleRate)
        }
    }
    return time
}

Here is the function I use to start playback:

func playAtTime(aTime: AVAudioTime?) {
    // Start partway through the file, based on the current playback position
    self.startingFrame = AVAudioFramePosition(self.currentTime * self.file!.processingFormat.sampleRate)
    let frameCount = AVAudioFrameCount(self.file!.length - self.startingFrame!)

    self.player.scheduleSegment(self.file!, startingFrame: self.startingFrame!, frameCount: frameCount, atTime: aTime, completionHandler: { () -> Void in
        NSLog("done playing") // actually done scheduling
    })
    self.player.play()
}

Answer 1:

I figured it out!

For the hostTime parameter I passed in mach_absolute_time() - this is the computer/iPad's 'now' time. AVAudioTime(hostTime:sampleTime:atRate:) adds the sampleTime to the hostTime and gives back a time in the near future that can be used to schedule multiple audio segments at the same start time:

func createStartTime() -> AVAudioTime? {
    var time: AVAudioTime?
    if let lastPlayer = self.trackPlayerDictionary[lastPlayerKey] {
        if let sampleRate = lastPlayer.file?.processingFormat.sampleRate {
            // shortStartDelay seconds, converted to a frame count at the file's sample rate
            let sampleTime = AVAudioFramePosition(shortStartDelay * sampleRate)
            // Anchoring to mach_absolute_time() makes the delay relative to 'now'
            time = AVAudioTime(hostTime: mach_absolute_time(), sampleTime: sampleTime, atRate: sampleRate)
        }
    }
    return time
}
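
To put it together: pass the same AVAudioTime to every track and they all start at the same instant. A minimal sketch in the same Swift dialect as the question (trackPlayerDictionary and playAtTime(_:) are the asker's own names, so treat this as illustrative, not as a tested implementation):

func startAllTracksInSync() {
    // One shared start time: 'now' (mach_absolute_time()) plus shortStartDelay
    guard let startTime = createStartTime() else { return }
    // Scheduling every player against the same AVAudioTime keeps them in sync
    for (_, trackPlayer) in trackPlayerDictionary {
        trackPlayer.playAtTime(startTime)
    }
}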


Answer 2:

Well - it is ObjC - but you'll get the point...

No need for mach_absolute_time() - if your engine is running, you already have a lastRenderTime @property in AVAudioNode - your player's superclass...

AVAudioFormat *outputFormat = [playerA outputFormatForBus:0];

const float kStartDelayTime = 0.0; // seconds - in case you wanna delay the start

AVAudioFramePosition startSampleTime = playerA.lastRenderTime.sampleTime;

AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:(startSampleTime + (kStartDelayTime * outputFormat.sampleRate)) atRate:outputFormat.sampleRate];

[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];

[player...
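
For reference, the same lastRenderTime idea in Swift - a sketch under the same assumptions (playerA and playerB are AVAudioPlayerNodes attached to a running engine, so lastRenderTime carries a valid sample time):

let outputFormat = playerA.outputFormatForBus(0)
let startDelayTime = 0.0 // seconds - in case you wanna delay the start

if let renderTime = playerA.lastRenderTime {
    // Current engine time in sample frames, plus the optional delay
    let startSampleTime = renderTime.sampleTime + AVAudioFramePosition(startDelayTime * outputFormat.sampleRate)
    let startTime = AVAudioTime(sampleTime: startSampleTime, atRate: outputFormat.sampleRate)
    playerA.playAtTime(startTime)
    playerB.playAtTime(startTime)
}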

By the way - you can achieve the same 100% sample-frame accurate result with the AVAudioPlayer class...

NSTimeInterval startDelayTime = 0.0; // seconds - in case you wanna delay the start
NSTimeInterval now = playerA.deviceCurrentTime;

NSTimeInterval startTime = now + startDelayTime;

[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];

[player...

With no startDelayTime, the first 100-200 ms of all players will get clipped off, because the start command actually takes its time to reach the run loop, although the players have already started (well, been scheduled) 100% in sync at now. But with a startDelayTime = 0.25 you are good to go. And never forget to prepareToPlay your players in advance, so that at start time no additional buffering or setup has to be done - just starting them guys ;-)
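
In Swift, that AVAudioPlayer variant might look like this (a sketch; playerA through playerD are assumed to be AVAudioPlayer instances that are already loaded):

let startDelayTime = 0.25 // seconds - gives the run loop time to issue all the start commands
let players = [playerA, playerB, playerC, playerD]

// Preload buffers up front so nothing has to be set up at start time
for player in players {
    player.prepareToPlay()
}

// All players share one device clock, so one deviceCurrentTime works for all of them
let startTime = playerA.deviceCurrentTime + startDelayTime
for player in players {
    player.playAtTime(startTime)
}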


For an even more in-depth explanation have a look at my answer in

AVAudioEngine multiple AVAudioInputNodes do not play in perfect sync