Generic web camera calibration

Posted 2019-08-02 06:52

I am building a website that does cool things with computer vision techniques, using videos that users record live with their webcams and upload. For this, I need camera intrinsic and distortion parameters. I am trying to figure out the best way to compute these given the user-uploaded videos. We can make no assumptions about what videos users might upload, but a reasonable assumption is that a human might be present in the video. I am still in the initial stages of this, but I am interested in knowing how others have solved this problem.

To be specific, below are the questions I would appreciate comments on from anyone experienced in this area:

  • What algorithms, libraries, and techniques are available to extract the intrinsic and distortion parameters of any generic webcam on the market? [I say "extract" and not "calibrate" to include cases where the intrinsic parameters are just a method call away, with no calibration necessary.]
  • In general, how much variance have you observed in the intrinsic and distortion parameters of the webcams on the market? Did you approximate them with a single set of intrinsic and distortion parameters, or what approach did you follow?
  • What camera self-calibration methods, if any, could be employed in these scenarios? Are there any open-source or commercial libraries available which might be of some help?
  • If we aim to calibrate the webcams using the videos users record and upload, what assumptions about the parameters [like fx == fy, or no distortion parameters] make sense and sound reasonable to you?
  • Would a single reasonable approximation of the intrinsic and distortion parameters for all cameras make sense? What would be a reasonable approach to validate how good particular intrinsic and distortion parameters are for a specific webcam? [A rough sketch of such an approximation follows this list.]
  • Are there any other issues that need to be considered?
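
As a rough illustration of the approximation idea above, here is a minimal Python sketch (the helper name and the 60-degree field-of-view value are my own assumptions, not from any library) that guesses pinhole intrinsics from the frame size alone, assuming fx == fy, a centred principal point, and zero distortion:

```python
import numpy as np

def approximate_intrinsics(width, height, hfov_deg=60.0):
    """Guess pinhole intrinsics from frame size alone (no calibration).

    Assumptions: fx == fy, principal point at the image centre, a typical
    webcam horizontal field of view of ~60 degrees, zero lens distortion.
    """
    fx = (width / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    K = np.array([[fx,  0.0, width / 2.0],
                  [0.0, fx,  height / 2.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # k1, k2, p1, p2, k3 all assumed to be zero
    return K, dist

K, dist = approximate_intrinsics(1280, 720)  # e.g. a 720p webcam stream
```

Whether such a guess is usable depends entirely on how sensitive the downstream vision step is to focal-length and distortion errors.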

3 Answers
你好瞎i
#2 · 2019-08-02 07:19

Sometimes I am the one who brings the bad news :) and this is one of those times.

For almost all of your points, the clear answer is no, none, not really, and so on. Only for the last point, about the other issues, is the answer not a no but a long list :).

Actually, camera calibration without a chessboard and some specific constraints is almost impossible.

The closest implementation to a no-assumptions calibration is found in the stitching module of OpenCV. However, it is not perfect, and it does not work on arbitrary videos. Give it a try.
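
For context, the "specific constraints" mentioned above usually mean the standard OpenCV chessboard calibration. A minimal sketch, assuming a printed board with 9x6 inner corners and a few hypothetical frame file names:

```python
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the printed chessboard (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for name in ["frame_000.png", "frame_001.png", "frame_002.png"]:  # hypothetical frames
    gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and distortion coefficients for this particular camera
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```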

老娘就宠你
#3 · 2019-08-02 07:26
  1. There is the famous Camera Calibration Toolbox, a good Matlab implementation for extracting intrinsic and extrinsic parameters.

  2. There is variance not only among webcams, but also across:

    • Different modules
    • Different zoom levels (this affects the optics)
  3. I think this is a really hard problem if you restrict yourself to making no assumptions about the video. Both the calibration and the evaluation are hard if you don't use something known, such as the checkerboard in the Camera Calibration Toolbox.
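
To make the evaluation point concrete: when you do have known 3D-2D correspondences (for example from a checkerboard), the usual quality check is the RMS reprojection error. A small sketch using OpenCV in Python (the function name is mine, not from a library):

```python
import cv2
import numpy as np

def rms_reprojection_error(obj_points, img_points, rvecs, tvecs, K, dist):
    """RMS distance (in pixels) between detected points and points
    reprojected with the estimated intrinsics K and distortion dist."""
    total_sq, count = 0.0, 0
    for objp, imgp, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        total_sq += cv2.norm(imgp, proj, cv2.NORM_L2) ** 2
        count += len(proj)
    return np.sqrt(total_sq / count)
```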

欢心
#4 · 2019-08-02 07:28

Many algorithms, including the ones currently used in OpenCV, require that known points can be detected (e.g., corners of a chessboard). You would have to require that your users take pictures of these known patterns, which ruins the concept of random videos. I don't have a solution to this, but you might want to consider requiring users to record videos of structured scenes (no specific patterns or objects) and using the algorithm described in "Camera calibration with lens distortion from low-rank textures": http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5995548&tag=1

Haven't tried it myself though.
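
If you do manage to detect known points in users' videos, one way to keep the estimation tractable is to bake the assumptions from the question (fx == fy, no tangential distortion, fewer radial terms) into OpenCV's calibration flags. A rough sketch, assuming obj_points and img_points have already been collected somehow and the frames are 1280x720:

```python
import cv2
import numpy as np

def calibrate_constrained(obj_points, img_points, image_size=(1280, 720)):
    """Calibrate with deliberately reduced degrees of freedom (sketch only)."""
    # Initial guess with fx == fy; CALIB_FIX_ASPECT_RATIO keeps that ratio.
    K0 = np.array([[1000.0, 0.0, image_size[0] / 2.0],
                   [0.0, 1000.0, image_size[1] / 2.0],
                   [0.0, 0.0, 1.0]])
    flags = (cv2.CALIB_USE_INTRINSIC_GUESS
             | cv2.CALIB_FIX_ASPECT_RATIO   # enforce fx == fy
             | cv2.CALIB_ZERO_TANGENT_DIST  # drop tangential terms p1, p2
             | cv2.CALIB_FIX_K3)            # drop the highest-order radial term
    return cv2.calibrateCamera(obj_points, img_points, image_size,
                               K0, None, flags=flags)
```

Fewer free parameters generally means the calibration degrades more gracefully when the detected points are sparse or noisy.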
