This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Preprints) and either DOI or URL of the article must be cited.
The ever-growing need to evaluate researchers without actually reading their work has fertilized the soil for the appearance of hundreds of scientometric indicators. The reason why they continue to emerge is simple: no perfect one has been found yet. The major problem is that any indicator which starts to dominate evaluation practices causes those being evaluated to adjust their behavior accordingly, leading to depreciation of the indicator and to various forms of scientific malpractice, such as excessive self-citation, self-plagiarism, salami-slicing of publications, guest authorships, and so on. Hence, an indicator is needed that cannot be manipulated by the evaluated researchers at will, yet still captures scientific excellence. Such an indicator is proposed here.