I looked for this paper in the references but did not see it.
I've always liked it: Handwerker HO. Impact factor blues. Eur J Pain. 2010 Jan;14(1):3-4. doi: 10.1016/j.ejpain.2009.11.003. PubMed PMID: 20123449.
From page 1:
"Over the years several articles have analyzed the weaknesses of this bibliometric measure (Moed and Van Leeuwen, 1996; Seglen, 1997; Frank, 2002) but these analyses have had no impact on its popularity. It is obvious that the IF of a journal does not necessarily reflect the citation frequency of a particular article, not to speak of its scientific quality. Nevertheless, more personalized measures, e.g. the number of citations of the articles of a candidate, or derived figures such as the Hirsch Index (Hirsch, 2005) are by far less popular than the IF which simply reflects an arithmetic mean of citations neglecting the standard deviation – a denotation we would never allow in a scientific publication (Frank, 2002).

The persistence of the IF as a measure of worth within the scientific community must have a deeper reason. In my opinion it reflects a fundamental aspect of the scientific life: scientists have to be highly rational in their work, i.e. in the experiments they conduct and the papers they produce. However, they are very irrational and emotional in their relationships to peers. Their business is highly competitive and for this competition a simple measure like the IF provides a public benchmark. It does not matter that this benchmark is as unreliable as a rubber band for measuring distances. It is only important that it is established and universally recognized. Introducing new benchmarks, e.g. the other indices published by ISI, or the Hirsh Factor for the selection of staff, would inevitably lead to discussions on meaning and value of these indices. This very discussion would devalue all measures for their main purpose, namely to create easily established ranking hierarchies."
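Handwerker's point that the IF is "an arithmetic mean of citations neglecting the standard deviation" is easy to illustrate with made-up numbers: citation counts within a journal are typically highly skewed, so a few heavily cited papers can pull the mean far above what a typical article receives. A minimal sketch (the citation counts here are entirely hypothetical):

```python
# Hypothetical citation counts for ten articles published in one journal.
# One outlier paper (70 citations) dominates the distribution.
citations = [0, 1, 1, 2, 2, 3, 4, 5, 12, 70]

mean = sum(citations) / len(citations)           # what an IF-style average reports
median = sorted(citations)[len(citations) // 2]  # closer to a "typical" article

# Population standard deviation -- the spread the IF ignores.
variance = sum((c - mean) ** 2 for c in citations) / len(citations)
std = variance ** 0.5

print(f"mean={mean:.1f}, median={median}, std={std:.1f}")
# → mean=10.0, median=3, std=20.3
```

The mean (10.0) is more than three times the median (3), and the standard deviation is twice the mean, which is exactly the kind of distributional detail a single-number ranking discards.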