
OpenCV Development Notes (81): Fisheye Calibration of the Camera Intrinsic Matrix with a Checkerboard to Correct the Camera Image


Preface

  For a wide-angle camera, the camera intrinsic matrix can be computed by detecting checkerboard corners in the camera image, and distortion calibration based on it gives good results. For a fisheye camera, however, this approach handles the outer regions of the image poorly, so fisheye calibration differs somewhat from normal camera calibration.
  In this article, checkerboard images are recognized to compute the camera intrinsic matrix, and fisheye correction is then used to correct the image distortion.
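  For reference, the fisheye calibration path in OpenCV lives in the cv::fisheye namespace rather than the normal cv::calibrateCamera / cv::undistort path. Below is a minimal sketch of that pipeline, not the full demo code of this article; the object/image point lists and the helper name fisheyeCalibrateSketch are assumptions for illustration.

#include <opencv2/opencv.hpp>

// Minimal sketch: fisheye calibration and undistortion with the cv::fisheye API.
// vectorObjectPoint / vectorImagePoint are assumed to be filled by checkerboard detection.
void fisheyeCalibrateSketch(const std::vector<std::vector<cv::Point3f>> &vectorObjectPoint,
                            const std::vector<std::vector<cv::Point2f>> &vectorImagePoint,
                            const cv::Size &imageSize,
                            const cv::Mat &distortedMat,
                            cv::Mat &undistortedMat)
{
    cv::Mat K; // 3x3 camera intrinsic matrix
    cv::Mat D; // 4x1 fisheye distortion coefficients (k1, k2, k3, k4)
    std::vector<cv::Mat> rvecs, tvecs;
    // cv::fisheye::calibrate takes the place of cv::calibrateCamera for the fisheye model
    cv::fisheye::calibrate(vectorObjectPoint, vectorImagePoint, imageSize, K, D,
                           rvecs, tvecs,
                           cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW);
    // Build remap tables from the fisheye model and correct the distorted image
    cv::Mat map1, map2;
    cv::fisheye::initUndistortRectifyMap(K, D, cv::Mat::eye(3, 3, CV_64F), K,
                                         imageSize, CV_16SC2, map1, map2);
    cv::remap(distortedMat, undistortedMat, map1, map2, cv::INTER_LINEAR);
}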

 

Demo

  Fisheye mode distortion calibration effect
  Normal distortion calibration effect

 

Calibration Example

  Note: this demo uses only two recognizable and largely similar checkerboard images as input for the fisheye distortion correction. If the two views differ too much, the calibration will abort because the computed error exceeds the tolerance.
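  OpenCV reports a failed calibration of this kind by throwing cv::Exception, so the calibration call can be guarded to keep the demo from aborting. A minimal sketch, assuming a cv::fisheye::calibrate call like the one sketched in the preface and the same (assumed) variable names:

// Minimal sketch: guard the fisheye calibration against error overruns (assumed variable names).
// If the two checkerboard views differ too much, OpenCV throws cv::Exception here.
try
{
    cv::fisheye::calibrate(vectorObjectPoint, vectorImagePoint, imageSize, K, D,
                           rvecs, tvecs,
                           cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW);
}
catch (const cv::Exception &e)
{
    LOG << "fisheye calibrate failed:" << e.what();
}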

Step 1: Initialize the image list

  

// Paths of the checkerboard images used for calibration
QStringList list;
list.append("D:/qtProject/openCVDemo/openCVDemo/modules/openCVManager/images/");
list.append("D:/qtProject/openCVDemo/openCVDemo/modules/openCVManager/images/");
// Inner corner counts of the checkerboard pattern
int chessboardColCornerCount = 8;
int chessboardRowCornerCount = 11;
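  Step 2 also needs the world coordinates of the checkerboard corners for every view, not just the detected pixel corners. A minimal sketch of building that grid for the 8 x 11 inner-corner board follows; the square size of 1.0 is an assumption, any consistent unit works:

// Minimal sketch: world coordinates of the checkerboard corners on the Z = 0 plane.
// The ordering matches cv::findChessboardCorners, which returns corners row by row.
std::vector<cv::Point3f> objectCorners;
float squareSize = 1.0f; // assumed physical size of one checkerboard square
for(int row = 0; row < chessboardRowCornerCount; row++)
{
    for(int col = 0; col < chessboardColCornerCount; col++)
    {
        objectCorners.push_back(cv::Point3f(col * squareSize, row * squareSize, 0.0f));
    }
}
// One copy of objectCorners is pushed into vectorObjectPoint for every view
// in which the corners are successfully detected.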

Step 2: Loop through the images, detect the checkerboard corners, and store their world coordinates and detected corner points in lists.

  
  

std::vector<std::vector<cv::Point3f>> vectorObjectPoint;
std::vector<std::vector<cv::Point2f>> vectorImagePoint;
cv::Mat grayMat;
cv::Mat srcMat;
for(int n = 0; n < list.size(); n++)
{
    QString str = list.at(n);
    std::string srcFilePath = str.toStdString();
    // Step 1: Read the file
    cv::Mat mat = cv::imread(srcFilePath);
    LOG << mat.cols << mat.rows;
#if 1
    // Embed the source image in the center of a canvas twice its size (zero-initialized so the border is black)
    srcMat = cv::Mat::zeros(mat.rows * 2, mat.cols * 2, CV_8UC3);
    cv::Mat matRoi = srcMat(cv::Rect(mat.cols / 2, mat.rows / 2, mat.cols, mat.rows));
    cv::addWeighted(mat, 1.0f, matRoi, 0, 0, matRoi);
#else
    srcMat = mat.clone();
#endif
    // Step 2: Scale the image down if it is too large (this step can be omitted)
    cv::resize(srcMat, srcMat, cv::Size(srcMat.cols / 2, srcMat.rows / 2));
    cv::Mat srcMat2 = srcMat.clone();
    cv::Mat srcMat3 = srcMat.clone();
    // Step 3: Graying
    cv::cvtColor(srcMat, grayMat, cv::COLOR_BGR2GRAY);
    cv::imshow("grayMat", grayMat);
    // Step 4: Detecting Corner Points
    std::vector<cv::Point2f> vectorPoint2fCorners;
    bool patternWasFound = false;
    patternWasFound = cv::findChessboardCorners(grayMat,
                                             cv::Size(chessboardColCornerCount,
                                                    chessboardRowCornerCount),
                                             vectorPoint2fCorners,
                                             cv::CALIB_CB_ADAPTIVE_THRESH |
                                             cv::CALIB_CB_FAST_CHECK |
                                             cv::CALIB_CB_NORMALIZE_IMAGE);
    if(!patternWasFound)
    {
        LOG << "not find ChessboardCorners:" << chessboardColCornerCount << chessboardRowCornerCount;
        continue;
    }
    /* Flags accepted by cv::findChessboardCorners:
    enum { CALIB_CB_ADAPTIVE_THRESH = 1, // use adaptive thresholding to convert the image to a binary image
           CALIB_CB_NORMALIZE_IMAGE = 2, // normalize the image gamma with histogram equalization before thresholding
           CALIB_CB_FILTER_QUADS = 4, // use additional criteria in the contour retrieval stage to filter out false quads
           CALIB_CB_FAST_CHECK = 8 // run a fast check for chessboard corners and bail out early if none are found
         };
    */
    cvui::printf(srcMat, 0, 0, 1.0, 0xFF0000, "found = %s", patternWasFound ? "true" : "false");
    cvui::printf(srcMat, 0, 24