QuickBlox video chat saving

Posted 2019-08-20 02:19

Question:

I am using the QuickBlox iOS SDK for video chat in my app. It works fine. Now I want to record the chat video and save it to the camera roll. How can I do that? I have gone through their documentation and implemented this:

-(IBAction)record:(id)sender {

    // Create video chat
    videoChat = [[QBChat instance] createAndRegisterVideoChatInstance];
    [videoChat setIsUseCustomVideoChatCaptureSession:YES];

    // Create capture session
    captureSession = [[AVCaptureSession alloc] init];

    // ... setup capture session here

    /* We create a serial queue to handle the processing of our frames */
    dispatch_queue_t callbackQueue = dispatch_queue_create("cameraQueue", NULL);
    [videoCaptureOutput setSampleBufferDelegate:self queue:callbackQueue];

    /* We start the capture */
    [captureSession startRunning];
}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Do something with samples
    // ...

    // Forward video samples to the SDK
    [videoChat processVideoChatCaptureVideoSample:sampleBuffer];
}

But I am not sure what to do from here. How should I get the video data?

Answer 1:

From the QuickBlox docs:

To set up a custom video capture session, you simply follow these steps:

1. create an instance of AVCaptureSession
2. set up the input and output
3. implement the frames callback and forward all frames to the QuickBlox iOS SDK
4. tell the QuickBlox SDK that you will use your own capture session

First, set up the capture session's input and output:

-(void)setupVideoCapture {
    self.captureSession = [[AVCaptureSession alloc] init];

    __block NSError *error = nil;

    // Set preset
    [self.captureSession setSessionPreset:AVCaptureSessionPresetLow];

    // Setup the video input
    AVCaptureDevice *videoDevice = [self frontFacingCamera];
    AVCaptureDeviceInput *captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    if (error) {
        QBDLogEx(@"deviceInputWithDevice Video error: %@", error);
    } else {
        if ([self.captureSession canAddInput:captureVideoInput]) {
            [self.captureSession addInput:captureVideoInput];
        } else {
            QBDLogEx(@"cantAddInput Video");
        }
    }

    // Setup the video output
    AVCaptureVideoDataOutput *videoCaptureOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoCaptureOutput.alwaysDiscardsLateVideoFrames = YES;

    // Set the video output to store frames in BGRA (it is supposed to be faster)
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [videoCaptureOutput setVideoSettings:videoSettings];

    // Add the output to the capture session
    if ([self.captureSession canAddOutput:videoCaptureOutput]) {
        [self.captureSession addOutput:videoCaptureOutput];
    } else {
        QBDLogEx(@"cantAddOutput");
    }
    [videoCaptureOutput release]; // pre-ARC code; remove this line under ARC

    // Set FPS
    int framesPerSecond = 3;
    AVCaptureConnection *conn = [videoCaptureOutput connectionWithMediaType:AVMediaTypeVideo];
    if (conn.isVideoMinFrameDurationSupported) {
        conn.videoMinFrameDuration = CMTimeMake(1, framesPerSecond);
    }
    if (conn.isVideoMaxFrameDurationSupported) {
        conn.videoMaxFrameDuration = CMTimeMake(1, framesPerSecond);
    }

    /* We create a serial queue to handle the processing of our frames */
    dispatch_queue_t callbackQueue = dispatch_queue_create("cameraQueue", NULL);
    [videoCaptureOutput setSampleBufferDelegate:self queue:callbackQueue];
    dispatch_release(callbackQueue); // pre-ARC code; remove this line under ARC

    // Add preview layer
    AVCaptureVideoPreviewLayer *prewLayer = [[[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession] autorelease]; // drop autorelease under ARC
    [prewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    CGRect layerRect = [[myVideoView layer] bounds];
    [prewLayer setBounds:layerRect];
    [prewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    myVideoView.hidden = NO;
    [myVideoView.layer addSublayer:prewLayer];

    /* We start the capture */
    [self.captureSession startRunning];
}

- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}

- (AVCaptureDevice *)backFacingCamera {
    return [self cameraWithPosition:AVCaptureDevicePositionBack];
}

- (AVCaptureDevice *)frontFacingCamera {
    return [self cameraWithPosition:AVCaptureDevicePositionFront];
}
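As an aside, devicesWithMediaType: was deprecated in iOS 10. On newer SDKs the same lookup can be done with AVCaptureDeviceDiscoverySession; a minimal sketch:

- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    // Discover built-in wide-angle cameras matching the requested position
    AVCaptureDeviceDiscoverySession *discovery =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                               mediaType:AVMediaTypeVideo
                                                                position:position];
    return discovery.devices.firstObject;
}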

Implement the frames callback:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Usually we just forward camera frames to the QuickBlox SDK,
    // but we can also do something with them first, for example apply video filters
    [self.videoChat processVideoChatCaptureVideoSample:sampleBuffer];
}
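This callback is also the natural hook for recording, which is what the question actually asks about: feed the same sample buffers to an AVAssetWriter in parallel with forwarding them to the SDK. Below is a minimal sketch, not QuickBlox API; assetWriter and writerInput are hypothetical properties you would declare yourself, and the output dimensions are placeholders:

// Hypothetical recording setup -- call once before capture starts.
// self.assetWriter and self.writerInput are assumed properties, not part of the QuickBlox SDK.
- (void)setupRecorderAtURL:(NSURL *)fileURL {
    NSError *error = nil;
    self.assetWriter = [AVAssetWriter assetWriterWithURL:fileURL
                                                fileType:AVFileTypeQuickTimeMovie
                                                   error:&error];
    // H.264 compression settings; width/height are placeholder values
    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @480,
                                     AVVideoHeightKey : @640 };
    self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:videoSettings];
    // Frames arrive live from the capture session
    self.writerInput.expectsMediaDataInRealTime = YES;
    [self.assetWriter addInput:self.writerInput];
}

// Extended version of the callback above: forward to QuickBlox and record
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Forward the frame to QuickBlox as before
    [self.videoChat processVideoChatCaptureVideoSample:sampleBuffer];

    // Start the writer on the first frame, anchored at that frame's timestamp
    if (self.assetWriter.status == AVAssetWriterStatusUnknown) {
        [self.assetWriter startWriting];
        [self.assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }

    // Append the frame to the movie file while recording
    if (self.assetWriter.status == AVAssetWriterStatusWriting &&
        self.writerInput.isReadyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }
}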

Tell the QuickBlox iOS SDK that we use our own video capture session:

self.videoChat = [[QBChat instance] createAndRegisterVideoChatInstance];
self.videoChat.viewToRenderOpponentVideoStream = opponentVideoView;
// We use our own video capture session
self.videoChat.isUseCustomVideoChatCaptureSession = YES;
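Finally, to stop recording and get the movie into the camera roll, mark the writer input as finished, let the writer close the file, then save it with UIKit's UISaveVideoAtPathToSavedPhotosAlbum. A sketch using the same hypothetical assetWriter/writerInput properties as above:

- (void)stopRecordingAndSaveToCameraRoll {
    [self.writerInput markAsFinished];
    [self.assetWriter finishWritingWithCompletionHandler:^{
        NSString *moviePath = self.assetWriter.outputURL.path;
        // Save the finished movie file to the camera roll
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(moviePath)) {
            UISaveVideoAtPathToSavedPhotosAlbum(moviePath, nil, NULL, NULL);
        }
    }];
}

If you need to know when the save completes (or fails), pass a target and a selector of the form video:didFinishSavingWithError:contextInfo: instead of nil/NULL.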