Review History


All reviews of published articles are made public. This includes manuscript files, peer review comments, author rebuttals and revised materials. Note: This was optional for articles submitted before 13 February 2023.

Peer reviewers are encouraged (but not required) to provide their names to the authors when submitting their peer review. If they agree to provide their name, then their personal profile page will reflect a public acknowledgment that they performed a review (even if the article is rejected). If the article is accepted, then reviewers who provided their name will be associated with the article itself.


Summary

  • The initial submission of this article was received on October 6th, 2020 and was peer-reviewed by 3 reviewers and the Academic Editor.
  • The Academic Editor made their initial decision on December 8th, 2020.
  • The first revision was submitted on January 31st, 2021 and was reviewed by 2 reviewers and the Academic Editor.
  • The article was Accepted by the Academic Editor on March 18th, 2021.

Version 0.2 (accepted)

· Mar 18, 2021 · Academic Editor

Accept

The authors have responded to all the comments raised by the reviewers, so the manuscript is now acceptable. However, please go through it carefully to fix any remaining mistakes in language and presentation.

[# PeerJ Staff Note: Although the Academic Editor is happy to accept your article as being scientifically sound, a final check of the manuscript shows that it would benefit from further English editing. Therefore, if you can identify further edits, please work with our production group to address them while in proof stage #]

Reviewer 1 ·

Basic reporting

Approved with the corrections.

Experimental design

no comment

Validity of the findings

no comment

Additional comments

The article is appropriate for publication.

Reviewer 2 ·

Basic reporting

no comment

Experimental design

no comment

Validity of the findings

no comment

Additional comments

The author revised the paper as suggested and gave clear answers.

Version 0.1 (original submission)

· Dec 8, 2020 · Academic Editor

Major Revisions

The authors must revise this manuscript carefully based on all the comments provided by three reviewers. Then, it can be reviewed again for possible publication.

[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate.  It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter.  Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]

Reviewer 1 ·

Basic reporting

The paper is well organized, and all literature, figures, and tables support the article. Only one typo was found, in line 333 ("in fig Figure12...").

Experimental design

The research has been designed along many dimensions, and the experiments cover the hypothesis.

Validity of the findings

no comment

Additional comments

This paper proposes a method to perform optimized calibration of a stereo vision system.

Reviewer 2 ·

Basic reporting

This article is very interesting: it proposes an optimization for a stereo-camera calibration method along with an OpenCV implementation. The explanation is clear throughout the article. However, the literature review notes only that stereo imaging setups are widely used in 3D reconstruction. Are there other applications that require stereo imaging setups? If so, please describe them to better motivate why a new stereo calibration optimization method is needed.

Experimental design

This article poses a well-defined research question: how to optimize the setup of stereo imaging calibration. The experimental design and arrangement are well defined.

My suggestion is about Figure 4. To make this image easier to understand, please include the three steps mentioned in lines 212-214, namely 1) the total set of image pairs, 2) identifying corners, and 3) storing the indices of usable image pairs, together with a computational graph for the image-point preparation process.
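For concreteness, a minimal Python/OpenCV sketch of this preparation process (the file locations, pattern size, and all variable names here are illustrative assumptions, not the author's code):

    import glob
    import cv2

    # Assumed checkerboard geometry; the article's pattern may differ.
    PATTERN_SIZE = (9, 6)  # inner corners per row and column

    left_files = sorted(glob.glob("left/*.png"))
    right_files = sorted(glob.glob("right/*.png"))

    good_indices = []  # step 3: indices of usable image pairs
    image_points = []  # detected corners for each usable pair

    # Step 1: iterate over the total set of image pairs.
    for i, (lf, rf) in enumerate(zip(left_files, right_files)):
        left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
        right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)

        # Step 2: identify corners in both images of the pair.
        ok_l, corners_l = cv2.findChessboardCorners(left, PATTERN_SIZE)
        ok_r, corners_r = cv2.findChessboardCorners(right, PATTERN_SIZE)

        # Step 3: store the index only if corners were found in both views.
        if ok_l and ok_r:
            good_indices.append(i)
            image_points.append((corners_l, corners_r))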

Validity of the findings

- Regarding the "Principal component analysis" section: why does this article apply only one clustering algorithm, K-Means? Could another clustering algorithm be used? (A brief comparison sketch follows these comments.)

- Regarding the experimental results: only the results of the proposed method are shown. Would it be possible to show a comparison between the traditional method and the proposed method of stereo calibration optimization?
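As a brief illustration of the clustering question above, the following sketch (scikit-learn, with randomly generated placeholder data; nothing here is taken from the article) compares K-Means against one alternative, agglomerative clustering, and measures the agreement between the two partitions:

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering, KMeans
    from sklearn.metrics import adjusted_rand_score

    # Placeholder stand-in for the article's per-run metric data.
    X = np.random.RandomState(0).normal(size=(100, 3))

    # K-Means, as used in the article.
    kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # One possible alternative: hierarchical (agglomerative) clustering.
    agglo_labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)

    # Agreement between the two partitions (1.0 = identical clusterings).
    print(adjusted_rand_score(kmeans_labels, agglo_labels))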

Additional comments

Figures 1, 2, 13, and 14 are not mentioned in the text; please verify.

Reviewer 3 ·

Basic reporting

The author presents an approach to select the best subset of images for calibrating a stereo rig, using the well-known image-processing library OpenCV. The author has implemented a routine that tries different sets of calibration images and evaluates them in terms of several metrics. Using these metrics, the author selects the best subset of images and the final calibration.
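A schematic of such a subset-selection routine, as I understand it, might look like the following (a hedged sketch: the helper names, the use of RMS reprojection error as the sole score, and the random sampling are my assumptions, not the author's actual metrics or search strategy):

    import random
    import cv2

    def calibrate_subset(obj_pts, left_pts, right_pts, indices, image_size):
        """Calibrate a stereo rig on a subset of image pairs and return
        the RMS reprojection error reported by OpenCV."""
        obj = [obj_pts[i] for i in indices]
        lp = [left_pts[i] for i in indices]
        rp = [right_pts[i] for i in indices]
        # Per-camera intrinsics first, then stereo extrinsics with them fixed.
        _, K1, d1, _, _ = cv2.calibrateCamera(obj, lp, image_size, None, None)
        _, K2, d2, _, _ = cv2.calibrateCamera(obj, rp, image_size, None, None)
        rms, *_ = cv2.stereoCalibrate(obj, lp, rp, K1, d1, K2, d2,
                                      image_size, flags=cv2.CALIB_FIX_INTRINSIC)
        return rms

    def best_subset(obj_pts, left_pts, right_pts, image_size,
                    subset_size=20, n_trials=100):
        """Try random subsets and keep the one with the lowest RMS error.
        The article scores runs on several metrics, not RMS error alone."""
        n = len(obj_pts)
        trials = [random.sample(range(n), subset_size) for _ in range(n_trials)]
        return min(trials, key=lambda idx: calibrate_subset(
            obj_pts, left_pts, right_pts, idx, image_size))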

The article is well written and structured and the literature review is concise and well linked to the publication.

The presented figures are of exceptional quality and help the reader understand the contents fully.

Experimental design

The author validates the approach with a single dataset of 264 stereo images (which I expect to be time-synchronised) and presents several calibration runs together with metric values for particular runs.

Please note that in line 292, a run is said to be an outlier if Jmdir > 25 mm, but Figure 8 also suggests that any run with Jmdir < 15 mm is an outlier.

Looking at the Table 2 results, given that the standard deviation of the plane is approximately 0.95, there is no statistical difference between any of the runs. I would suggest the author validate the results using more precise methods.
In my opinion, there is a lack of information about the geometry used in the experimental design. What is the expected calibration resolution? What is the intended working distance? At what distance was the calibration pattern captured in the images?

There is also another important detail: how does your method ensure that the whole image area is covered at some point by a calibration point, to avoid skewed or biased distortion models?
Are the "good candidates" points at the centre of the image (less distorted), or are they normally distributed across the image?

Figure 12 mentions a deviation from the nominal. What is the nominal value here? Is the standard deviation also computed relative to the nominal? That is not stated.

I personally do not see the value of the PCA data transformation. Could the author validate that with the reprojection error at some point?
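One hedged way to perform such a validation (placeholder data and names; nothing here is from the article) would be to correlate the principal-component scores with the reprojection error across runs:

    import numpy as np
    from sklearn.decomposition import PCA

    # Placeholder per-run metric matrix and reprojection errors.
    rng = np.random.RandomState(0)
    metrics = rng.normal(size=(100, 5))
    reproj_error = rng.gamma(shape=2.0, size=100)

    # Project the metrics onto their principal components ...
    scores = PCA(n_components=2).fit_transform(metrics)

    # ... and check how each component correlates with reprojection error.
    # Components uncorrelated with the error would add little value.
    for k in range(scores.shape[1]):
        r = np.corrcoef(scores[:, k], reproj_error)[0, 1]
        print(f"PC{k + 1} vs reprojection error: r = {r:+.2f}")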

Validity of the findings

I had a look at the images and they do not look sharp. I would encourage the author to use a stiffer support material for the checkerboard pattern. Cardboard can easily bend and introduce deviations greater than 1 mm.

The presented results are not statistically sound and controlled. I believe further work is needed, from gathering the data to analysing it more thoroughly.

All text and materials provided via this peer-review history page are made available under a Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.