I am using AVCaptureSession to capture video and get real-time frames from the iPhone camera, but how can I send them to a server, multiplexing the frames with the sound, and how can I use FFmpeg to accomplish this? If anyone has a tutorial or example for FFmpeg, please share it here.
The way I'm doing it is to implement an AVCaptureSession, whose video data output has a delegate with a callback that's run on every frame. That callback sends each frame over the network to the server, which has a custom setup to receive it.
Here's the flow:
http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW2
And here's some code:
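A minimal sketch of that setup (the pixel format and dispatch queue here are assumptions; adjust to your needs):

```objc
#import <AVFoundation/AVFoundation.h>

// Wire the camera into a capture session whose video output
// delivers sample buffers to a delegate.
NSError *deviceError = nil;
AVCaptureDevice *camera =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *cameraInput =
    [AVCaptureDeviceInput deviceInputWithDevice:camera error:&deviceError];

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// BGRA is convenient to work with; other pixel formats are possible.
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
// Frames are delivered to the delegate on this queue.
dispatch_queue_t frameQueue = dispatch_queue_create("frames", NULL);
[videoOutput setSampleBufferDelegate:self queue:frameQueue];

AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session addInput:cameraInput];
[session addOutput:videoOutput];
[session startRunning];
```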
Then the output device's delegate (here, self) has to implement the callback:
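Something like this, as a sketch (sendFrameToServer: is a hypothetical helper standing in for your network code):

```objc
// Called once for every captured frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void  *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t height      = CVPixelBufferGetHeight(imageBuffer);

    // Copy the raw pixels out and hand them to the network layer.
    NSData *frame = [NSData dataWithBytes:baseAddress
                                   length:bytesPerRow * height];
    [self sendFrameToServer:frame]; // hypothetical helper

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
```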
Sending raw frames or individual images will never work well enough for you (because of the amount of data and the number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video and stream it to a server, most likely over a standard streaming protocol (RTSP, RTMP). There is an H.264 encoder chip on the iPhone 3GS and later. The problem is that it is not stream-oriented; that is, it outputs the metadata required to parse the video last. This leaves you with a few options.
1) Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
2) Write your own parser for the H.264/AAC output (very hard).
3) Record and process in chunks (adding latency equal to the chunk length, and dropping around 1/4 second of video between chunks as you stop and restart the sessions); see the sketch after this list.
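A rough sketch of option 3, recording fixed-length chunks with AVCaptureMovieFileOutput and uploading each as it finishes (the 5-second chunk length and uploadChunkAtURL: helper are assumptions):

```objc
// Inside a controller that owns the capture session, conforms to
// AVCaptureFileOutputRecordingDelegate, and has 'movieOutput' and
// 'chunkIndex' ivars.

- (void)startNextChunk
{
    NSString *name = [NSString stringWithFormat:@"chunk-%d.mov", chunkIndex++];
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:name];
    [movieOutput startRecordingToOutputFileURL:[NSURL fileURLWithPath:path]
                             recordingDelegate:self];
    // Stop after a fixed chunk length; the delegate below restarts.
    [self performSelector:@selector(stopChunk) withObject:nil afterDelay:5.0];
}

- (void)stopChunk
{
    [movieOutput stopRecording];
}

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error
{
    [self uploadChunkAtURL:outputFileURL]; // hypothetical upload helper
    [self startNextChunk]; // the stop/start gap is where frames are lost
}
```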
Look here, and here.
Try capturing video using the AVFoundation framework, then upload it to your server with HTTP streaming.
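As a minimal sketch of the upload side (the endpoint URL and recordedFileURL are assumptions; a real setup would stream segments rather than POST one file):

```objc
// POST a recorded clip to the server.
NSData *videoData = [NSData dataWithContentsOfURL:recordedFileURL]; // your clip
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/upload"]]; // assumed endpoint
request.HTTPMethod = @"POST";
[request setValue:@"video/quicktime" forHTTPHeaderField:@"Content-Type"];
request.HTTPBody = videoData;

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response,
                                           NSData *data, NSError *error) {
    // Check 'error' and the HTTP status code here.
}];
```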
Also check out another Stack Overflow post, quoted below.
There is a long and a short story to it.
This is the short one: go look at https://github.com/OpenWatch/H264-RTSP-Server-iOS
This is a starting point.
You can grab it and see how it extracts the frames; it is a small and simple project.
Then you can look at Kickflip, which has a specific callback, "encodedFrame"; it is called back once an encoded frame arrives, and from that point you can do what you want with it, e.g. send it via a websocket. There is also a bunch of very hard code available out there for reading MPEG atoms.
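A hypothetical sketch of that hand-off, assuming SocketRocket's SRWebSocket for the transport (the encoder callback signature just mirrors the description above):

```objc
#import <SocketRocket/SRWebSocket.h>

// Forward each encoded frame over an already-open websocket.
// 'encoder:encodedFrame:' is an illustrative callback name; 'socket'
// is an SRWebSocket property opened earlier with the server URL.
- (void)encoder:(id)encoder encodedFrame:(NSData *)frameData
{
    if (self.socket.readyState == SR_OPEN) {
        [self.socket send:frameData]; // SRWebSocket accepts NSData payloads
    }
}
```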