I'm trying to merge two videos recorded with the camera via a UIImagePickerController. I've succeeded in combining the videos into one, but I have some problems with the orientation of the videos.
As I understand it, UIImagePickerController captures all video in landscape, which means videos recorded in portrait carry a 90° rotation.
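The rotation can be seen by inspecting the video track's preferredTransform (a sketch; `videoURL` stands in for one of the recorded file URLs):

```swift
// Sketch: check a recording's orientation via its preferredTransform.
// `videoURL` stands in for one of the recorded file URLs.
let asset = AVURLAsset(URL: videoURL, options: nil)
let tracks = asset.tracksWithMediaType(AVMediaTypeVideo)
if tracks.count > 0 {
    let t = (tracks[0] as AVAssetTrack).preferredTransform
    // A portrait recording typically has a 90° rotation baked in:
    // t.a == 0, t.b == 1, t.c == -1, t.d == 0. Landscape is the identity.
    let isPortrait = (t.a == 0 && t.d == 0)
}
```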
After each recording I add the new video to an array:
func imagePickerController(picker: UIImagePickerController!, didFinishPickingMediaWithInfo info: NSDictionary) {
    let tempImage = info[UIImagePickerControllerMediaURL] as NSURL
    videos.append(tempImage)
    let pathString = tempImage.relativePath
    self.dismissViewControllerAnimated(true, completion: nil)
}
Then, when I want to merge, I go through each video, create an instruction, and add it to another array:
var composition = AVMutableComposition()
let trackVideo: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
let trackAudio: AVMutableCompositionTrack = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
var insertTime = kCMTimeZero

for i in 0...(videos.count - 1) {
    let moviePathUrl = videos[i]
    let sourceAsset = AVURLAsset(URL: moviePathUrl, options: nil)
    let tracks = sourceAsset.tracksWithMediaType(AVMediaTypeVideo)
    let audios = sourceAsset.tracksWithMediaType(AVMediaTypeAudio)

    if tracks.count > 0 {
        let videoDuration = CMTimeRangeMake(kCMTimeZero, sourceAsset.duration)
        let assetTrack: AVAssetTrack = tracks[0] as AVAssetTrack
        let assetTrackAudio: AVAssetTrack = audios[0] as AVAssetTrack
        trackVideo.insertTimeRange(videoDuration, ofTrack: assetTrack, atTime: insertTime, error: nil)
        trackAudio.insertTimeRange(videoDuration, ofTrack: assetTrackAudio, atTime: insertTime, error: nil)

        // Rotate
        let rotater = AVMutableVideoCompositionLayerInstruction(assetTrack: assetTrack)
        rotater.setTransform(assetTrack.preferredTransform, atTime: insertTime)
        rotater.setOpacity(0.0, atTime: CMTimeAdd(insertTime, sourceAsset.duration))
        instructions.append(rotater)

        // Resize
        let resizer = AVMutableVideoCompositionLayerInstruction(assetTrack: assetTrack)
        resizer.setCropRectangle(CGRectMake(0, 0, 300, 300), atTime: insertTime)
        instructions.append(resizer)

        insertTime = CMTimeAdd(insertTime, sourceAsset.duration)
    }
}
When I've created all the instructions, I add them to the main instruction and create the export session:
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, insertTime)
instruction.layerInstructions = instructions

let mainCompositionInst = AVMutableVideoComposition()
mainCompositionInst.instructions = [instruction]
mainCompositionInst.frameDuration = CMTimeMake(1, 60)
mainCompositionInst.renderSize = CGSizeMake(300, 300)

let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
exporter.videoComposition = mainCompositionInst
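For completeness, the export itself is then started roughly like this (`exportPath` is a placeholder for my output path):

```swift
// Sketch of running the export; `exportPath` is a placeholder.
exporter.outputURL = NSURL(fileURLWithPath: exportPath)
exporter.outputFileType = AVFileTypeQuickTimeMovie
exporter.exportAsynchronouslyWithCompletionHandler {
    if exporter.status == AVAssetExportSessionStatus.Completed {
        println("merged video written")
    }
}
```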
What am I missing?
Try using trackVideo in the initializer for your layer instruction, so that it uses the AVMutableCompositionTrack's trackID rather than the source asset's trackID.
Update:
You only need one AVMutableVideoCompositionLayerInstruction, so declare it before the loop with the AVMutableCompositionTrack as the parameter. Then, on each iteration of the loop, set the necessary properties of the layer instruction (transform, crop rect) for the current video asset you're working with. You're controlling how the video content in the composition track should be displayed at each insert time. At the end, put the single layer instruction in the instructions array and use that in the video composition.
You can just use this simple code; it works for me:
You have two layers. You need to apply the rotation instruction to both layers in the composition. What you are doing here is applying the rotation instruction to only one of them. Get a reference to both elements in the video composition and apply separate instructions to the two layers.
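If your composition really does use two video tracks, a minimal sketch of separate instructions per track (`firstTrack`, `secondTrack`, and the transforms are placeholders, not names from the question):

```swift
// Sketch: a separate layer instruction per composition track, each with
// its own transform. firstTrack/secondTrack and the transforms are placeholders.
let firstLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
firstLayer.setTransform(firstTransform, atTime: kCMTimeZero)

let secondLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
secondLayer.setTransform(secondTransform, atTime: kCMTimeZero)

instruction.layerInstructions = [firstLayer, secondLayer]
```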