Is dlib capable of handling large-scale datasets for training an object detector? I have >450K face images to train a face detector. Is it possible to use dlib, or do I need to turn to another alternative?
How much data you can use is a function of how much RAM your computer has, so you may be able to train on that many images, depending on how large each image is and how much RAM you have.
But more importantly, you are probably asking about the HOG+SVM detector in dlib. For training a face detector, 450K faces is far beyond the point of diminishing returns for HOG+SVM. For example, the frontal face detector that comes with dlib, which is very accurate, was trained on only a small 62MB dataset (this one: http://dlib.net/files/data/dlib_face_detector_training_data.tar.gz). Training this kind of detector on more than a few thousand images is not going to get you any additional accuracy.
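If you do want to try it, here is a minimal sketch of training a HOG+SVM detector through dlib's Python API, based on dlib's train_object_detector.py example. The file names and the C value are placeholders you would adjust for your own data:

```python
# Minimal sketch: train dlib's HOG+SVM detector from an imglab-style XML dataset.
# "training.xml" and "detector.svm" are placeholder paths; tune C for your data.
import dlib

options = dlib.simple_object_detector_training_options()
options.add_left_right_image_flips = True  # faces are symmetric, so flips double the data
options.C = 5                              # SVM regularization; tune via cross-validation
options.num_threads = 4
options.be_verbose = True

# Train on the dataset and write the learned detector to disk.
dlib.train_simple_object_detector("training.xml", "detector.svm", options)

# Sanity-check on the training set (reports precision, recall, average precision).
print(dlib.test_simple_object_detector("training.xml", "detector.svm"))
```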
Now, if you have a whole lot of pose variability in your data, then HOG+SVM isn't going to be able to capture it. The best thing to do in that case is to train multiple detectors, one for each pose. You can automatically cluster your dataset into different poses using the --cluster option of dlib's imglab tool, then run the resulting detectors together, as sketched below.
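A rough sketch of running several per-pose detectors over one image, assuming you have already split the dataset with imglab --cluster and trained one detector per cluster (the .svm file names below are placeholders):

```python
# Sketch: run several per-pose HOG+SVM detectors on one image and merge results.
# Assumes one detector was trained per pose cluster from `imglab --cluster N dataset.xml`;
# the .svm and .jpg file names are placeholders.
import dlib

detectors = [
    dlib.fhog_object_detector("frontal.svm"),
    dlib.fhog_object_detector("left_profile.svm"),
    dlib.fhog_object_detector("right_profile.svm"),
]

img = dlib.load_rgb_image("test.jpg")

# run_multiple evaluates all detectors in one pass and returns the merged
# detections, their scores, and the index of the detector that fired.
boxes, confidences, detector_idxs = dlib.fhog_object_detector.run_multiple(
    detectors, img, upsample_num_times=1, adjust_threshold=0.0)

for box, score, idx in zip(boxes, confidences, detector_idxs):
    print("pose detector %d scored %.2f at %s" % (idx, score, box))
```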