Besides some minor flaws, I have one main concern regarding this study: the raw data was not obtained in a scientifically sound way, and the results are therefore not valid.
The main problem is the measurement error, which the authors themselves mention only in passing in the Discussion section.
The measurement data was collected by human assessment and, since the manuscript gives no further details, I assume it was collected by a single human expert. Errare humanum est: human measurement is prone to error, and to mitigate this problem, good scientific practice requires that such measurements be performed by more than one expert. To be able to assess the magnitude of the measurement error, interrater agreement needs to be considered, usually by calculating standardized and established indicators (Pearson, Fleiss, Krippendorff, ...). Another way to mitigate the problem would be repeated assessments to calculate a retest reliability, but this should only be the last resort if it is not possible to recruit more than one assessor.
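To illustrate the point, here is a minimal sketch of how agreement between two raters could be quantified with one of the indicators mentioned above, Pearson's r. The data and the function name are hypothetical, chosen purely for illustration; the authors would of course use their own measurements and an established statistics package.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two raters' measurement series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical pixel measurements from two independent assessors
rater_1 = [10.25, 12.5, 9.75, 14.0]
rater_2 = [10.5, 12.25, 10.0, 13.75]
print(pearson_r(rater_1, rater_2))  # close to 1.0, i.e. high agreement
```

Note that Pearson's r only captures linear association between the two raters; it is shown here because it is one of the indicators named above.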
Furthermore, the figures in Table 1 suggest a measurement precision of 1/1000 of a pixel (i.e., three decimal places). I doubt that a human assessor can ever reach this precision. A quarter of a pixel (1/4) might be achievable and could serve as a basis for further calculations, but under no circumstances 1/1000 of a pixel, and this calls all subsequent calculations into question.
The key research question presented in this preprint is interesting and probably worth publishing, but the general methodology needs to be substantially reconsidered. All measurements should be performed by at least two experts (better: three or more) in order to collect sound raw data, or, as a fallback, at least more than one assessment round should be performed.
Errare humanum est, sed in errore perseverare diabolicum. (To err is human, but to persist in error is diabolical.)