How can I live stream video from an iPhone to a server, the way Ustream or Qik do? I know there's something called HTTP Live Streaming from Apple, but most resources I've found only talk about streaming video from a server to the iPhone.
Is Apple's HTTP Live Streaming something I should use, or something else? Thanks.
I have found one library that will help you with this.
HaishinKit Streaming Library
This library gives you the option of streaming via RTMP or HLS.
Just follow the library's setup steps and read all the instructions carefully. Don't run the example code bundled with the library directly, as it has some errors; instead, pull the required classes and the pod into your own demo app.
I have just done this myself; with it you can capture and stream the screen, camera, and audio.
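For reference, publishing over RTMP with HaishinKit looks roughly like this. This is a sketch based on the library's older README examples, so class and method names may differ between versions, and the URL and stream key are placeholders:

```swift
import AVFoundation
import HaishinKit

// Rough sketch of RTMP publishing with HaishinKit (older-style API; adjust to your version).
let connection = RTMPConnection()
let stream = RTMPStream(connection: connection)

// Attach the microphone and the back camera to the stream.
stream.attachAudio(AVCaptureDevice.default(for: .audio))
stream.attachCamera(AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back))

// Connect to the RTMP endpoint and start publishing (placeholder URL and stream key).
// In practice you may need to wait for the connection status event before publishing.
connection.connect("rtmp://your-server/live")
stream.publish("streamKey")
```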
I'm not sure you can do that with HTTP Live Streaming. HTTP Live Streaming splits the video into segments of roughly 10 seconds each and creates a playlist of those segments. So if you want the iPhone to be the serving side with HTTP Live Streaming, you will have to figure out a way to segment the video file and create the playlist on the device.
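For reference, the playlist is a plain-text .m3u8 file that points at the segments, roughly like this (segment names are placeholders; a live playlist omits the end tag and is rewritten as new segments are produced):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST
```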
How to do it is beyond my knowledge. Sorry.
There isn't a built-in way to do this, as far as I know. As you say, HTTP Live Streaming is for downloads to the iPhone.
The way I'm doing it is to implement an AVCaptureSession, which has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.
Here's the flow: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
And here's some code:
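Sketched here in Swift; the class name FrameCapture, the queue label, and the pixel format are illustrative, and you would tune the preset and output settings for whatever your server expects:

```swift
import Foundation
import AVFoundation

// Sketch of the capture side: a session whose video data output hands every frame to a delegate.
final class FrameCapture: NSObject {
    let session = AVCaptureSession()
    private let captureQueue = DispatchQueue(label: "capture.queue")  // illustrative label

    func start() {
        session.sessionPreset = .medium

        // Camera input.
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // Video data output: every captured frame is delivered to the delegate callback.
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: captureQueue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }
}
```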
Then the output device's delegate (here, self) has to implement the callback:
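Again a Swift sketch, extending the FrameCapture class above; sendFrameToServer(_:) is a hypothetical placeholder for whatever encoding and network upload your server expects:

```swift
import AVFoundation
import CoreMedia

extension FrameCapture: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Pull the pixel buffer out of the sample buffer for this frame.
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        sendFrameToServer(pixelBuffer)
    }

    // Hypothetical placeholder: compress the frame (JPEG, H.264, ...) and send it
    // over whatever connection your server has been set up to receive.
    private func sendFrameToServer(_ pixelBuffer: CVPixelBuffer) {
    }
}
```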
EDIT/UPDATE
Several people have asked how to do this without sending the frames to the server one by one. The answer is complex...
Basically, in the didOutputSampleBuffer function above, you add the samples into an AVAssetWriter. I actually had three asset writers active at a time -- past, current, and future -- managed on different threads.

The past writer is in the process of closing its movie file and uploading it. The current writer is receiving the sample buffers from the camera. The future writer is in the process of opening a new movie file and preparing it for data. Every 5 seconds, I set past = current; current = future and restart the sequence.

This then uploads video to the server in 5-second chunks. You can stitch the videos together with ffmpeg if you want, or transcode them into MPEG-2 transport streams for HTTP Live Streaming. The video data itself is H.264-encoded by the asset writer, so transcoding merely changes the file's header format.
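For illustration, here is a trimmed Swift sketch of that writer rotation. It is not the original code: the SegmentWriter class and the newSegmentURL/uploadSegment helpers are hypothetical, audio and error handling are omitted, and the output settings are placeholders:

```swift
import AVFoundation

// One movie segment backed by an AVAssetWriter (video only, no error handling).
final class SegmentWriter {
    let writer: AVAssetWriter
    let input: AVAssetWriterInput
    private var sessionStarted = false

    init(url: URL) throws {
        writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ])
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        writer.startWriting()
    }

    // Called from didOutputSampleBuffer with each frame, e.g. current?.append(sampleBuffer).
    func append(_ sampleBuffer: CMSampleBuffer) {
        if !sessionStarted {
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData { input.append(sampleBuffer) }
    }

    func finish(_ completion: @escaping (URL) -> Void) {
        input.markAsFinished()
        let url = writer.outputURL
        writer.finishWriting { completion(url) }
    }
}

// Rotation: every 5 seconds the current writer becomes "past" (finished and uploaded),
// the prepared "future" writer takes over, and a new "future" writer is created.
var current: SegmentWriter?
var future: SegmentWriter?

func rotateWriters() {
    let past = current
    current = future
    future = try? SegmentWriter(url: newSegmentURL())
    past?.finish { url in uploadSegment(at: url) }
}

// Hypothetical helper: a fresh temporary file for the next segment.
func newSegmentURL() -> URL {
    FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp4")
}

// Hypothetical helper: upload the finished 5-second chunk to your server, then delete it.
func uploadSegment(at url: URL) {
}
```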