I am using a multispectral camera to collect data. It produces two kinds of images at the same time: one near-infrared and one color. It is not two cameras; a single camera acquires both image streams simultaneously. There are some API functions I can use, such as J_Image_OpenStream. Two parts of the core code are shown below. The first part opens the two streams (they come from one sample program and I have to use both, although I am not entirely clear about their meanings), sets the saving paths of the two AVI files, and starts the acquisition.
// Open stream
retval0 = J_Image_OpenStream(m_hCam[0], 0, reinterpret_cast<J_IMG_CALLBACK_OBJECT>(this), reinterpret_cast<J_IMG_CALLBACK_FUNCTION>(&COpenCVSample1Dlg::StreamCBFunc0), &m_hThread[0], (ViewSize0.cx*ViewSize0.cy*bpp0)/8);
if (retval0 != J_ST_SUCCESS) {
    AfxMessageBox(CString("Could not open stream0!"), MB_OK | MB_ICONEXCLAMATION);
    return;
}
TRACE("Opening stream0 succeeded\n");

retval1 = J_Image_OpenStream(m_hCam[1], 0, reinterpret_cast<J_IMG_CALLBACK_OBJECT>(this), reinterpret_cast<J_IMG_CALLBACK_FUNCTION>(&COpenCVSample1Dlg::StreamCBFunc1), &m_hThread[1], (ViewSize1.cx*ViewSize1.cy*bpp1)/8);
if (retval1 != J_ST_SUCCESS) {
    AfxMessageBox(CString("Could not open stream1!"), MB_OK | MB_ICONEXCLAMATION);
    return;
}
TRACE("Opening stream1 succeeded\n");

const char *filename0 = "C:\\Users\\shenyang\\Desktop\\test0.avi";
const char *filename1 = "C:\\Users\\shenyang\\Desktop\\test1.avi";
int fps = 10;   // frames per second
int codec = -1; // choose the compression method
writer0 = cvCreateVideoWriter(filename0, codec, fps, cvSize(1296, 966), 1);
writer1 = cvCreateVideoWriter(filename1, codec, fps, cvSize(1296, 964), 1);

// Start acquisition
retval0 = J_Camera_ExecuteCommand(m_hCam[0], NODE_NAME_ACQSTART);
retval1 = J_Camera_ExecuteCommand(m_hCam[1], NODE_NAME_ACQSTART);

// Create two OpenCV named windows used for displaying the "BGR" and "INFRARED" images
cvNamedWindow("BGR");
cvNamedWindow("INFRARED");
The second part consists of the two stream callback functions, which look very similar.
void COpenCVSample1Dlg::StreamCBFunc0(J_tIMAGE_INFO * pAqImageInfo)
{
    if (m_pImg0 == NULL)
    {
        // Create the image:
        // We assume this is an 8-bit monochrome image in this sample
        m_pImg0 = cvCreateImage(cvSize(pAqImageInfo->iSizeX, pAqImageInfo->iSizeY), IPL_DEPTH_8U, 1);
    }

    // Copy the data from the acquisition engine image buffer into the OpenCV image object
    memcpy(m_pImg0->imageData, pAqImageInfo->pImageBuffer, m_pImg0->imageSize);

    // Display in the "INFRARED" window
    cvShowImage("INFRARED", m_pImg0);

    frame0 = m_pImg0;
    cvWriteFrame(writer0, frame0);
}
void COpenCVSample1Dlg::StreamCBFunc1(J_tIMAGE_INFO * pAqImageInfo)
{
    if (m_pImg1 == NULL)
    {
        // Create the image:
        // We assume this is an 8-bit monochrome image in this sample
        m_pImg1 = cvCreateImage(cvSize(pAqImageInfo->iSizeX, pAqImageInfo->iSizeY), IPL_DEPTH_8U, 1);
    }

    // Copy the data from the acquisition engine image buffer into the OpenCV image object
    memcpy(m_pImg1->imageData, pAqImageInfo->pImageBuffer, m_pImg1->imageSize);

    // Display in the "BGR" window
    cvShowImage("BGR", m_pImg1);

    frame1 = m_pImg1;
    cvWriteFrame(writer1, frame1);
}
The question is: if I do not save the AVI files, i.e.

/*writer0 = cvCreateVideoWriter(filename0, codec, fps, cvSize(1296,966), 1);
writer1 = cvCreateVideoWriter(filename1, codec, fps, cvSize(1296,964), 1);*/
//cvWriteFrame(writer0, frame0);
//cvWriteFrame(writer1, frame1);

then the frames shown in the two display windows look similar, which means the streams are synchronized. But if I write the data to the AVI files, then, because of the different sizes of the two kinds of images and the large amount of data, the acquisition speed of the two streams is affected and the captured frames become unsynchronized. I cannot create a buffer huge enough to store all the data in memory, and the I/O device is rather slow. What should I do? Thank you very much.
Some relevant class variables are:
public:
FACTORY_HANDLE m_hFactory; // Factory Handle
CAM_HANDLE m_hCam[MAX_CAMERAS]; // Camera Handles
THRD_HANDLE m_hThread[MAX_CAMERAS]; // Stream handles
char m_sCameraId[MAX_CAMERAS][J_CAMERA_ID_SIZE]; // Camera IDs
IplImage *m_pImg0 = NULL; // OpenCV Images
IplImage *m_pImg1 = NULL; // OpenCV Images
CvVideoWriter* writer0;
IplImage *frame0;
CvVideoWriter* writer1;
IplImage *frame1;
BOOL OpenFactoryAndCamera();
void CloseFactoryAndCamera();
void StreamCBFunc0(J_tIMAGE_INFO * pAqImageInfo);
void StreamCBFunc1(J_tIMAGE_INFO * pAqImageInfo);
void InitializeControls();
void EnableControls(BOOL bIsCameraReady, BOOL bIsImageAcquiring);
The correct approach to recording the video without frame drops is to isolate the two tasks (frame acquisition and frame serialization) so that they don't influence each other (specifically, so that fluctuations in serialization don't eat away time from capturing the frames, which has to happen without delays to prevent frame loss).
This can be achieved by delegating the serialization (encoding of the frames and writing them into a video file) to separate threads, and using some kind of synchronized queue to feed the data to the worker threads.
Following is a simple example showing how this could be done. Since I only have one camera and not the kind you have, I will simply use a webcam and duplicate the frames, but the general principle applies to your scenario as well.
Sample Code
In the beginning we have some includes:
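A minimal set for a sample like this would be OpenCV plus the standard threading and timing headers; treat the exact list as an assumption, since it depends on the final code:

```cpp
#include <opencv2/opencv.hpp>

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
```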
Synchronized Queue
The first step is to define our synchronized queue, which we will use to communicate with the worker threads that write the video.
The primary operations we need are the ability to:

- push a new image into the queue,
- pop an image from the queue, blocking until one is available, and
- cancel all pending operations when we decide to stop.
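A minimal sketch of such a queue, written as a template so the structure stands on its own (in this scenario T would be cv::Mat); the name frame_queue follows the usage later in the answer:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Thrown from pop() once the queue is cancelled and drained, so the
// consumer can terminate cleanly.
struct cancelled {};

// A minimal synchronized producer/consumer queue.
template <typename T>
class frame_queue {
public:
    // Producer side: insert an item and wake one waiting consumer.
    void push(T const& item)
    {
        std::unique_lock<std::mutex> lock(mutex_);
        queue_.push(item);
        cond_.notify_one();
    }

    // Consumer side: block until an item is available or the queue is
    // cancelled; remaining items are drained before cancellation throws.
    T pop()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return cancelled_ || !queue_.empty(); });
        if (queue_.empty()) // implies cancelled_, per the predicate above
            throw cancelled();
        T item(queue_.front());
        queue_.pop();
        return item;
    }

    // Set the cancellation flag and wake all waiting consumers.
    void cancel()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cancelled_ = true;
        cond_.notify_all();
    }

private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
    bool cancelled_ = false;
};
```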
We use std::queue to hold the cv::Mat instances, and std::mutex to provide synchronization. A std::condition_variable is used to notify the consumer when an image has been inserted into the queue (or the cancellation flag has been set), and a simple boolean flag is used to signal cancellation.

Finally, we use the empty struct cancelled as an exception thrown from pop(), so we can cleanly terminate the worker by cancelling the queue.

Storage Worker
The next step is to define a simple storage_worker, which will be responsible for taking the frames from the synchronized queue and encoding them into a video file until the queue has been cancelled.

I've added simple timing, so we have some idea about how much time is spent encoding the frames, as well as simple logging to the console, so we have some idea about what is happening in the program.
Processing
Finally, we can put this all together.
We begin by initializing and configuring our video source. Then we create two frame_queue instances, one for each stream of images. We follow this by creating two instances of storage_worker, one for each queue. To keep things interesting, I've set a different codec for each.

The next step is to create and start the worker threads, which will execute the run() method of each storage_worker. Having our consumers ready, we can start capturing frames from the camera and feed them to the frame_queue instances. As mentioned above, I have only a single source, so I insert copies of the same frame into both queues.

NB: I need to use the clone() method of cv::Mat to do a deep copy, otherwise I would be inserting references to the single buffer OpenCV's VideoCapture uses for performance reasons. That would mean that the worker threads would be getting references to this single image, and there would be no synchronization for access to this shared image buffer. You need to make sure this does not happen in your scenario as well.

Once we have read the appropriate number of frames (you can implement any other kind of stop condition you desire), we cancel the work queues and wait for the worker threads to complete.
Finally we write some statistics about the time required for the different tasks.
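Putting it together might look like the following sketch. Assumptions not fixed by the text above: OpenCV 3+ C++ API, webcam index 0, output file names test0.avi and test1.avi, a fixed 25 fps, MJPG and XVID codecs, and a 100-frame stop condition; the statistics output is reduced to a comment for brevity. The frame_queue is repeated in condensed form, and the worker is a plain function here.

```cpp
#include <opencv2/opencv.hpp>

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Condensed frame_queue from the earlier section.
struct cancelled {};

template <typename T>
class frame_queue {
public:
    void push(T const& item) {
        std::unique_lock<std::mutex> lock(mutex_);
        queue_.push(item);
        cond_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return cancelled_ || !queue_.empty(); });
        if (queue_.empty()) throw cancelled();
        T item(queue_.front());
        queue_.pop();
        return item;
    }
    void cancel() {
        std::unique_lock<std::mutex> lock(mutex_);
        cancelled_ = true;
        cond_.notify_all();
    }
private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
    bool cancelled_ = false;
};

// Pops frames and feeds them to the sink until the queue is cancelled.
void storage_worker(frame_queue<cv::Mat>& queue,
                    std::function<void(cv::Mat const&)> sink)
{
    try {
        for (;;) sink(queue.pop());
    } catch (cancelled const&) {
        // Clean shutdown.
    }
}

int main()
{
    cv::VideoCapture capture(0); // webcam stands in for the two-stream camera
    if (!capture.isOpened()) {
        std::cerr << "Could not open video source.\n";
        return 1;
    }
    cv::Size const frame_size(
        static_cast<int>(capture.get(cv::CAP_PROP_FRAME_WIDTH)),
        static_cast<int>(capture.get(cv::CAP_PROP_FRAME_HEIGHT)));
    double const fps(25.0); // assumed output frame rate

    // One queue and one writer per stream; a different codec for each.
    frame_queue<cv::Mat> queue0, queue1;
    cv::VideoWriter writer0("test0.avi", cv::VideoWriter::fourcc('M','J','P','G'), fps, frame_size);
    cv::VideoWriter writer1("test1.avi", cv::VideoWriter::fourcc('X','V','I','D'), fps, frame_size);

    std::thread worker0([&] { storage_worker(queue0, [&](cv::Mat const& f) { writer0.write(f); }); });
    std::thread worker1([&] { storage_worker(queue1, [&](cv::Mat const& f) { writer1.write(f); }); });

    cv::Mat frame;
    for (int i(0); i < 100 && capture.read(frame); ++i) {
        // clone() makes deep copies -- VideoCapture reuses its internal buffer.
        queue0.push(frame.clone());
        queue1.push(frame.clone());
    }

    // Done capturing: let the workers drain the queues and finish,
    // then report any statistics collected along the way.
    queue0.cancel();
    queue1.cancel();
    worker0.join();
    worker1.join();
    return 0;
}
```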
Console Output
Running this little sample produces log output in the console, as well as the two video files on disk.
NB: Since this was actually encoding a lot faster than capturing, I've added some wait into the storage_worker to show the separation better.
Possible Improvements
Currently there is no protection against the queue getting too full when the serialization simply can't keep up with the rate at which the camera generates new images. Set some upper limit on the queue size, and check it in the producer before you push the frame. You will need to decide how exactly you want to handle this situation.
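One possible sketch of such a bound, dropping the newest frame when the limit is reached (other policies, such as blocking the producer or dropping the oldest frame, are equally valid choices):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

struct cancelled {};

// A frame_queue variant with an upper size limit: try_push() refuses
// (drops) new items when the queue is full, so a slow writer can no
// longer make the queue grow without bound.
template <typename T>
class bounded_frame_queue {
public:
    explicit bounded_frame_queue(std::size_t limit) : limit_(limit) {}

    // Returns false, dropping the frame, when the queue is full.
    bool try_push(T const& item)
    {
        std::unique_lock<std::mutex> lock(mutex_);
        if (queue_.size() >= limit_)
            return false;
        queue_.push(item);
        cond_.notify_one();
        return true;
    }

    T pop()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return cancelled_ || !queue_.empty(); });
        if (queue_.empty())
            throw cancelled();
        T item(queue_.front());
        queue_.pop();
        return item;
    }

    void cancel()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        cancelled_ = true;
        cond_.notify_all();
    }

private:
    std::queue<T> queue_;
    std::size_t limit_;
    std::mutex mutex_;
    std::condition_variable cond_;
    bool cancelled_ = false;
};
```

The producer can count how often try_push() returns false to report dropped frames.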