January - March 2007: 
Volume 20, Issue 1



Measurement of individual scientific work
Evaluating the scientific work of a clinical or basic researcher is multifactorial. Such evaluation helps to compare candidates for a post or a grant, and even informs decisions about a scientist's promotion.

The internationally established method for such analyses is to search for the author's publications in the well-known Thomson ISI Web of Science database (www.isiknowledge.com) and, more recently, in Elsevier's Scopus (www.scopus.com). Not only individual researchers but also Medical Schools and Departments can be evaluated in the same way, especially if the number of publications is divided by the number of faculty in that institution. It should be noted that the two databases differ in the number of citations they report, as they index different journals and cover different periods of time (ISI starts from 1996, and Scopus from 1966).

When judging the impact of one’s work, it is common to look not only at the total number of publications, the total number of citations, the publications in journals with high impact factor, and the number of publications in the last three years, but also at the impact those publications have had on the scientific community. Two common measures are the Impact Factor and the H index.1

The Impact Factor was originally designed to judge the quality of a research journal: the number of times the journal's articles were cited (over the previous two years) divided by the number of papers the journal published during the same period. One can perform a similar exercise with one's own publications, dividing the total number of citations by the number of publications. If someone has 160 publications that have accumulated 1,800 citations (according to the ISI Web of Science or Scopus), the resulting Impact Factor is 11.25. (A high number of publications in the last year will decrease this figure, since some of them will be too new to have accumulated citations.)
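The calculation described above can be sketched in a few lines of Python; the function name and the citation totals are illustrative, with the figures taken from the example in the text.

```python
def personal_impact_factor(total_citations, total_publications):
    """Per-author 'impact factor': total citations divided by total publications."""
    return total_citations / total_publications

# The article's example: 160 publications with 1,800 citations.
print(personal_impact_factor(1800, 160))  # 11.25
```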

The H index is a different approach to measuring research productivity. It is calculated easily by ordering the publications found in the Web of Science or Scopus from most cited to least cited.1 The H index is the largest number of publications that each have at least that many citations. For example, an H index of 25 means that the scholar has published at least 25 papers, each of which has at least 25 citations. In contrast to the Impact Factor, the H index recognizes scientists with important publications in less well-known journals. This observation gave Jorge Hirsch, a Professor of Physics at the University of California, San Diego, the idea for the new index.1
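The descending-order procedure just described can be sketched as follows; the citation counts in the example are hypothetical, not drawn from any real author's record.

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # this paper still clears the threshold
            h = rank
        else:
            break           # all later papers have fewer citations
    return h

# Hypothetical citation counts for six papers:
print(h_index([30, 12, 8, 5, 5, 1]))  # 5
```

With the article's own benchmark, a list of 25 papers cited 25 times each yields an H index of exactly 25.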

In the US, for example, the usual H index needed to receive tenure is 12, and 18 for promotion to full professor. Election to the National Academy of Sciences would require an H index of 45 for a physicist.1 In general, an H index of 20 after 20 years of scientific career characterizes a successful scientist, while an H index of 40 after 20 years characterizes an outstanding scientist, usually serving in top universities or research centres.

If someone has published many papers but has a low H index, those publications have had little impact in their field. Conversely, a small number of publications with a high H index means the author does not publish a lot but has wide influence in the field. On the one hand, someone with 500 publications and an H index of 12 sends the message that he publishes a lot, but that the publications do not have much impact on their field. On the other hand, someone who has published only 15 papers, also with an H index of 12, publishes little, but the papers have great impact on their field.2

The H index increases with age (it is age dependent) and is not suitable for evaluating recent work, which usually has not yet accumulated many citations (table 1). It is therefore a good index for evaluating scientists in the second half of their careers, but not before that.

The advantages and disadvantages of each of the above criteria are shown in table 1.

In conclusion, evaluating an individual's scientific productivity is multifactorial and cannot be reflected by a single number. Differences in measured values between fields, determined in part by the average number of references per paper, the number of scientists in the field, and the nature of the field (e.g., basic science vs. surgery), should be taken into account, since these factors affect both the Impact Factor and the H index.3-6 Nevertheless, these indices can give a rough approximation of an individual's productivity, and many other factors should be considered in combination in the evaluation. Furthermore, if most of the research publications have been in a supporting role, especially at a renowned institution, this will also affect the level of impact ultimately achieved.

1. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 2005; 102:16569-16572.
2. Miller CW. Superiority of the H index over the impact factor for Physics.
3. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997; 314:497.
4. Walter G, Bloch S, Hunt G, et al. Counting on citations: a flawed way to measure quality. Med J Aust 2003; 178:280-281.
5. Hirst G. Discipline impact factor: a method for determining core journal lists. J Am Soc Inform Sci 1978; 29:171-172.
6. Ramirez AM, Garcia EO, Rio JAD. Renormalized impact factor. Scientometrics 2000; 47:3-9.