Evaluating Scientists: Citations, Impact Factor, h-Index, Online Page Hits and What Else?

M Jagadesh Kumar
Editor-in-Chief, IETE Technical Review, Department of Electrical
Engineering, IIT, Hauz Khas, New Delhi-110 016, India

How to cite this article:
M. J. Kumar, "Evaluating Scientists: Citations, Impact Factor, h-Index, Online Page Hits and What Else?" IETE Technical Review, vol. 26, pp. 165-168, 2009.

Identifying the key performance parameters for active scientists has always been a problematic issue. Evaluating and comparing researchers working in a given area have become a necessity, since these competing scientists vie for the same limited resources, promotions, awards or fellowships of scientific academies. Whatever method we choose for evaluating the worth of a scientist's individual research contribution, it should be simple, fair and transparent. A tall order indeed!

One common approach, used for a long time, is to count the citations to a scientist's publications and to consider the impact factor of the journals in which these publications have appeared. This approach, universally used as a decision-making tool, does have its limitations.

1. Citation Count

The number of citations for each publication of a scientist is readily available from different sources, e.g., Web of Science, Google Scholar and Scopus. It is generally believed that a researcher's work has a significant impact on a given field if his or her papers are frequently cited by other researchers. Usually, self-citations are not included in such citation counts. However, using the citation count alone to judge the quality of research contributions can be unfair to some researchers. It is quite likely that a researcher will have poor citation metrics (i) if he or she is working in a very narrow area (and therefore attracts fewer citations) or (ii) if he or she publishes mostly in a language other than English, or mainly in books or book chapters (since most citation tools do not capture such citations).

2. Impact Factor

Publishing in a journal with a high impact factor, such as Nature or Science, is considered very prestigious. In our profession, which deals with electronics and communications, it is a dream for many to publish in IEEE journals, because some of the IEEE journals have a high impact factor and their reviewing procedure is very rigorous. The impact factor is a measure of how frequently the papers published in a journal are cited in the scientific literature. Impact factors are released each year in the Journal Citation Reports by the Institute for Scientific Information (ISI) [1]. Since its first publication in 1972, the impact factor has acquired wide acceptability in the absence of any other metric to evaluate the worth of a journal.
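To make this definition concrete, here is the standard two-year formula (the notation is ours, chosen for illustration):

$$\mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}$$

where $C_{y}(k)$ is the number of citations received in year $y$ by items the journal published in year $k$, and $N_{k}$ is the number of citable items the journal published in year $k$. As a hypothetical example, a journal whose 2007 and 2008 papers together received 500 citations in 2009, against 250 citable items published in those two years, would have a 2009 impact factor of 500/250 = 2.0.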

However, there are limitations in using the impact factor as a measure of the quality of a journal, and hence of the quality of research of a scientist who publishes in a high-impact-factor journal. For example, many people may read and use the research findings appearing in a given paper but never cite it, because they do not publish work of their own. In other words, the impact factor measures the usefulness of a journal only to those who read a paper and cite it in their publications, leaving out a large number of other practitioners of the profession who have not published but have nevertheless benefited from the research findings of a paper published in that journal [2].

There are more than 100,000 journals published around the world. However, the ISI database includes only a small percentage of these journals. Therefore, if you publish in a journal that is not part of the ISI database, or if your papers are cited in journals not listed in the ISI database, these citations will not count towards the impact factor calculation. Impact factors can also be manipulated. For example, in some journals, authors are pressured in a subtle way to cite other papers published in the same journal. Blind use of citation and impact factor indicators may therefore not result in a correct evaluation of the scientific merit of a researcher.

3. The h-index

To overcome the problems associated with the citation metric and the impact factor, in 2005, Jorge Hirsch of the University of California at San Diego suggested a simple method to quantify the impact of a scientist's research output in a given area [3], [4]. The measure he suggested is called the h-index. In the last few years, it has quickly become a widely used measure of a researcher's scientific output. Without getting into the mathematical rigor of this approach, the meaning of the h-index can be explained as follows. Suppose a researcher has 15 publications. If 10 of these publications are cited at least 10 times each by other researchers, the h-index of the scientist is 10, which means the remaining 5 publications have fewer than 10 citations each. If one of these 10 publications receives, let us say, 100 citations, the h-index still remains 10. If each of the 15 papers receives exactly 10 citations, the h-index is again only 10. The h-index will reach 15 only if each of the 15 papers receives at least 15 citations. Therefore, to calculate the h-index of a scientist, find the citations of each publication, rank the publications by the number of citations received, and identify the first 'h' publications having at least 'h' citations each. To have a reasonably good h-index, it is not sufficient to have a few publications with hundreds of citations; the h-index rewards researchers with a larger body of papers of sustained impact over a period of time.
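To make the calculation concrete, here is a minimal sketch in Python; the function name and the citation lists are ours, chosen to mirror the hypothetical 15-publication example above:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank   # the paper at this rank still has >= rank citations
        else:
            break      # counts only decrease from here, so we can stop
    return h

# The scenarios from the text:
print(h_index([10] * 10 + [5] * 5))         # 10 papers cited 10 times -> 10
print(h_index([100] + [10] * 9 + [5] * 5))  # one paper cited 100 times -> still 10
print(h_index([10] * 15))                   # all 15 papers cited 10 times -> still 10
print(h_index([15] * 15))                   # all 15 papers cited 15 times -> 15
```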

3.1 Limitations of the h-index

Caution needs to be exercised while calculating the h-index. The value of the h-index you get depends on the database used for counting citations. If you are using the ISI database, the same limitations that we saw for the impact factor also apply here, since the ISI database considers only citations appearing in journals listed in the ISI database. In general, Google Scholar gives a higher h-index for the same scientist than Scopus or Web of Science. The scientific impact of any researcher can also be calculated using Harzing's freely downloadable tool called Publish or Perish [5].

There are several studies in the literature that attempt to make the h-index more universally valid, but there is no consensus on using these corrections. For example, the introduction of the g-index is an effort to give some weightage to highly cited papers [6], [7], [8]; a sketch of it follows this section. In a recent study, Liu has pointed out the case of two Nobel prize winners, each of whose h-index is less than that corresponding to a "successful scientist" [9]; they still got the Nobel prize. Young researchers, whose research time span is short, are bound to have lower h-index values. A further limitation of the h-index is that it does not diminish with time and therefore cannot detect the declining research output of a scientist. Sometimes, the h-index may give misleading information about a scientist's contribution. For example, a researcher with 10,000 citations may have an h-index of 10 because only 10 of his or her papers have received a minimum of 10 citations, while another researcher with 650 citations may have an h-index of 25 because each of his or her 25 publications has received a minimum of 25 citations. In spite of all these limitations, there is now enough evidence to show that the use of the h-index has become popular and acceptable.
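As an illustration of how the g-index gives weightage to highly cited papers, here is a minimal sketch in the same style, following Egghe's definition of g as the largest rank whose top-ranked papers together have at least g² citations [7]; the citation counts are the same hypothetical ones used earlier:

```python
def g_index(citations):
    """Largest g such that the top g papers together have at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(ranked, start=1):
        total += count                 # running sum of the top 'rank' papers
        if total >= rank * rank:
            g = rank
    return g

# One very highly cited paper now lifts the index, unlike with the h-index:
print(g_index([100] + [10] * 9 + [5] * 5))  # h-index is 10, but g-index is 14
```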

3.2 Finding Your h-index

One way of overcoming the limitations of the databases used by Web of Science, Google Scholar and Scopus is to first develop the habit of periodically collecting all the citations to your papers from different sources, including the above three. You can then rank your publications and pick the top 'h' of them with a minimum of 'h' citations each; this gives the h-index of your scientific output. A small sketch of this merging step follows. You do, however, have to maintain a list of all your citations and the complete bibliographic information on each citing source, irrespective of whether it is a book, conference paper, journal paper, PhD thesis, patent or non-English source. This carefully maintained bibliographic data will serve as proof of the reliability and authenticity of your h-index calculation. Just to give you an idea, the peak h-index of many Nobel prize winners in physics during the last two decades is around 35 to 40 [4].
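In the sketch below, the database names are real, but the per-paper counts and the take-the-highest-count merging policy are our own assumptions; in practice you would verify each citation by hand against your bibliographic records, as suggested above:

```python
# Hypothetical per-paper citation counts collected from three sources.
sources = {
    "Web of Science": {"paper_a": 12, "paper_b": 4},
    "Google Scholar": {"paper_a": 18, "paper_b": 7, "paper_c": 3},
    "Scopus":         {"paper_a": 15, "paper_b": 5},
}

# Merge: keep the highest count any source reports for each paper.
papers = set().union(*sources.values())
merged = {p: max(s.get(p, 0) for s in sources.values()) for p in papers}

# Rank the merged counts; the h-index is the number of papers whose
# citation count is at least their rank in this ordering.
ranked = sorted(merged.values(), reverse=True)
h = sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)
print(h)  # -> 3 for this toy data
```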

4. Mentoring Abilities

Recently, Jeang has argued that in addition to the above performance metrics, we should also measure the mentoring abilities of a scientist [10]. If the coauthors of a scientist are his or her own trainees or students, and if they continue to make a scientific impact after leaving their supervisor, this points to the quality of the scientist's mentoring and to the impact made, through that mentoring, in a given area during a given period. This is a very important but totally neglected aspect of the contribution made by a scientist or an academic. However, we do not yet have a well worked out formula to measure such mentoring abilities.

5. Online Page Hits

In recent times, most journals have gone online, often with open access, and it is very easy to keep track of the number of visitors to a journal's website. For example, in IETE Technical Review, you can see how many times an article has been viewed, emailed or printed. A recent study shows that high viewership does lead to high citations, while highly cited articles do not necessarily have high viewership. The online viewership data includes (i) those who simply read a paper and (ii) those who read it and also cite it in their own publications [10]. The citation data includes only the latter group, while the viewership data includes both. Therefore, it may be appropriate to use the number of views of a paper as a measure of its impact and popularity, provided the website avoids counting repeat page hits from the same computer within a given period.
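Here is a minimal sketch of such duplicate-aware view counting; the 24-hour window, the identifiers and the data are assumptions for illustration, not a description of how any particular journal website actually works:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)   # treat repeat hits within 24 h as a single view
last_counted = {}              # (article_id, visitor_ip) -> time of last counted hit
views = {}                     # article_id -> view total

def record_hit(article_id, visitor_ip, when):
    """Count a hit unless the same visitor was already counted within WINDOW."""
    key = (article_id, visitor_ip)
    previous = last_counted.get(key)
    if previous is None or when - previous >= WINDOW:
        last_counted[key] = when
        views[article_id] = views.get(article_id, 0) + 1

# Three hits from the same reader within a day count as one view.
start = datetime(2009, 5, 1, 9, 0)
for minutes in (0, 5, 90):
    record_hit("article-165", "203.0.113.7", start + timedelta(minutes=minutes))
print(views["article-165"])  # -> 1
```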

6. Skewed Performance Metrics

Whatever performance metrics we may use, it appears that authors from developing countries face certain constraints in achieving higher performance indices, and therefore recognition for themselves and their country. It is quite possible that authors from advanced countries tend to cite publications from organizations located in their own countries, putting at a disadvantage authors working in difficult situations with fewer funding opportunities [11]. This is bound to affect the h-index of scientists working in developing countries. Since there is a limited page budget and increased competition in many "high-profile" journals, it is not always possible to publish in these journals. One way to overcome this problem is to encourage and give value to papers published in national journals. There are many scientists from developing countries such as India working in highly developed countries with advanced scientific infrastructure and generous funding. These scientists should seriously consider publishing their work in journals originating from their native countries. This will bring an international flavor to the national journals, attracting more international authors and ultimately making them mainstream international journals. When these journals become more visible and easily accessible through their online versions, there is a better chance that papers published in them will be cited. In this way, the skewed calculation of the h-index and other performance metrics for scientists from developing countries may be minimized.

7. Conclusion

Excessive dependence on single numbers to quantify scientists' contributions and make administrative decisions can affect their career progression, or may force people to somehow enhance their h-index instead of focusing on their more legitimate activity, i.e., doing good science. Considering the complex issues associated with the calculation of scientific performance metrics, it is clear that a comprehensive approach should be used to evaluate the research worth of a scientist. We should not rely excessively on any single metric. Since the h-index is now becoming more popular and is simple to calculate, we should use it judiciously, combining it with the other metrics discussed here.

As always, please do not hesitate to contact me and let me know your views.

REFERENCES

1. Journal Citation Reports, Thomson Reuters. Available from: http://science.thomsonreuters.com/index.html

2. O. Yoshiko, and A. Makoto, "Pitfalls of citation and journal impact factor devises in research evaluation," Journal of Science Policy and Research Management, vol. 20, pp. 239-58, 2005.

3. J. E. Hirsch, "An index to quantify an individual's scientific research output," arXiv:physics/0508025, 2005. Available from: http://arxiv.org/abs/physics/0508025

4. J. E. Hirsch, "An index to quantify an individual's scientific research output," Proceedings of the National Academy of Sciences of the USA, vol. 102, no. 46, pp. 16569-72, 2005.

5. A. W. Harzing, Publish or Perish. Available from: http://www.harzing.com/pop.htm

6. L. Egghe, "How to improve the h-index," The Scientist, vol. 20, no. 3, p. 14, 2006.

7. L. Egghe, "An improvement of the h-index: The g-index," ISSI Newsletter, vol. 2, no. 1, pp. 8-9, 2006.

8. L. Egghe, "Theory and practice of the g-index," Scientometrics, vol. 69, no. 1, pp. 131-52, 2006.

9. S. V. Liu, "Real discrepancy between h-index and Nobel prize-winning," Logical Biology, vol. 5, no. 4, pp. 320-1, 2005.

10. K. T. Jeang, "H-index, mentoring-index, highly-cited and highly-accessed: How to evaluate scientists?" Retrovirology, vol. 5, article 106, Nov. 2008.

11. A. W. A. Kellner, and L. C. M. O. Ponciano, "H-index in the Brazilian Academy of Sciences: comments and concerns," Anais da Academia Brasileira de Ciências (Annals of the Brazilian Academy of Sciences), vol. 80, no. 4, pp. 771-81, Dec. 2008.


17 Responses to Evaluating Scientists: Citations, Impact Factor, h-Index, Online Page Hits and What Else?

  1. Wohh just what I was looking for, thanks for putting up.

    Terri Gulick.

  2. S.Paramasvaran says:

    An eye opener of sorts for the scientific community at large.
    Many thanks
    S.Param

  3. Dear Prof., most of the points are valid. Also, we do not have scientists of the rank of J. C. Bose, C. V. Raman or Ramanujan nowadays. Fundamental research is not taken up by many researchers due to pressure from funding agencies and constraints like the popularity of the research problem and, most importantly, the time spent on the problem! Nobody wants to invest a decade on a single problem.

  4. I think it is also important to consider how long a researcher has been active in research. I am an early-career researcher who has been publishing for 5 years in behavioural ecology, and I have 15 papers either published or in press (mostly in international journals with IFs between 2 and 5, which are pretty high for behavioural ecology). At last count, I had about 70 citations and an h-index of 5. That is not too bad for someone in my field and at my career stage. However, it would be awful if these figures applied to someone who has been a professor for 5-10 years… Thanks for the post…

  5. preeti says:

    Thanks for the much-needed information, presented in an easy-to-comprehend manner, especially for young entrants.

  6. Mariana says:

    I would like to have your opinion on which impact factor to add to your CV, if you choose to add it: the year that you published? The previous year (when you decided to publish in that journal)? Or the current year? I really appreciate your comment. Best

  7. Dear Mariana, irrespective of the year in which you published your paper, it is always a good practice to give the current impact factor of the journal. For good journals, the impact factor typically keeps improving, so giving the current impact factor could be to your advantage.

    Best wishes,
    MJK

  8. Dr. Amjad Rehman says:

    Excellent Prof. Kumar, I am really excited to read all about this.

    Dr. Amjad Rehman

  9. I am happy to note that the position taken by IEEE on the inappropriate use of bibliometrics is in line with what is discussed in this article. The relevant portion from the IEEE website (http://www.ieee.org/publications_standards/publications/pubnews/vol6issue3/vol6_issue3_index.html) is given below.

    IEEE statement on correct use of bibliometrics

    An increasing number of voices in the scientific community have recently expressed concerns on the inappropriate use of journal bibliometric indicators — mainly the well-known Impact Factor (IF). More specifically, they are being used as a proxy:
    to judge the impact of a single paper published in a journal;
    to evaluate the scientific impact of a scientist for hiring, tenure, promotion, salary increase and even project evaluations.

    As is well documented in the bibliometric literature, journal bibliometric indicators are simply not designed for these purposes. Such unintended uses can have a negative impact on the careers and lives of IEEE members and authors. Furthermore, the practical use of a single bibliometric indicator (the IF) for such inappropriate objectives has made it the target and not the measure, promoting unethical behavior among editorial board members of journals in several disciplines with the sole aim of manipulating the indicator.

    As the world’s largest professional technical organization, the IEEE has committed itself to addressing this situation by:
    1. taking actions to educate the community on the proper use of bibliometrics;
    2. promoting the use of more than one journal bibliometric indicator to offer a more comprehensive evaluation of the journal impact: in addition to bibliometric “popularity” measures (such as IF), at least complementary “prestige” measures should also be used (such as the Eigenfactor™ or the Article Influence™);
    3. establishing an IEEE ethical position on the appropriate use of bibliometrics.

    More specifically, the IEEE Board of Directors on 9 September 2013 approved an IEEE Position Statement on “Appropriate Use of Bibliometric Indicators for the Assessment of Journals, Research Proposals, and Individuals”, which, in short, affirms that:
    1. The use of multiple complementary bibliometric indicators is fundamentally important to offer an appropriate, comprehensive and balanced view of each journal in the space of scholarly publications;
    2. Any journal-based metric is not designed to capture qualities of individual papers and must therefore not be used as a proxy for single-article quality or to evaluate individual scientists;
    3. While bibliometrics may be employed as a source of additional information for quality assessment within a specific area of research, the primary manner for assessment of either the scientific quality of a research project or of an individual scientist should be peer review;
    4. The IEEE also recognizes the increasing importance of bibliometric indicators as independent measures of quality or impact of any scientific publication and therefore explicitly and firmly condemns any practice aimed at influencing the number of citations to a specific journal with the sole purpose of artificially influencing the corresponding indicators.

  10. KJSR says:

    Dear Prof, I was looking for the information on impact factors and came across your blog. It’s brilliantly written. But my questions are still unanswered. I wonder if you would answer them:
    1. I have a few publications in high-impact-factor peer-reviewed journals. Recently I joined the editorial boards of 2 journals. In your opinion, what is the benefit of writing an editorial? I do not look for any benefit because I enjoy writing editorials, and I usually write on a topic different from my field of research. I love writing, so it’s like a blog for me. I am an avid reader, so I just gather all my thoughts and put them in the editorial. I am lucky they publish it happily. But if one has to calculate, do editorials give weightage to one’s CV? I mean, does an editorial carry the same impact factor as a research article published in the journal?
    2. In research articles, does the impact factor get distributed among the authors? As far as I know, the first and second authors benefit the most from a research article. Is it harmful to include many names in a research article?
    Please answer!
    Best Regards

  11. Pingback: Get that paper published: hints | The Descrambler's Blog

  12. Excellent. I learnt a lot. Regarding your comment about shining Indian scholars publishing in Indian journals, I have some interesting reverse experience, though in a different field. I used to edit a journal on Educational Technology — good quality, good circulation. When a copy reached an outstanding scholar in the field in the USA, he wrote to me, “I was surprised to see such a good journal coming out from India.” Distinguished scholars from the USA, UK, Canada and other English-speaking countries literally filled the pages, but we lost out on Indian contributions. Unlike in science and engineering, not many senior academics research and publish in India; most of them end their research with guiding doctoral dissertations, and mentoring as a measure of assessment of research contribution is not yet prevalent in India.

  13. Nice article. Thanks for this valuable information.

  14. Sibsankar Jana says:

    Excellent!! But some points are still missing about the various variants of the h-index (g-index, f-index, a-index, hg-index, hc-index, etc.). Also, a new measurement technique called “altmetrics” is emerging to weigh the impact of research output. It considers citations as well as shares, likes, downloads, visits, etc., for measuring the impact of a publication.

  15. Great article! I have recently proposed a new index for evaluation of individual researchers. Please have a look:

    A citation-based, author- and age-normalized, logarithmic index for evaluation of individual researchers independently of publication counts
    http://f1000research.com/articles/4-884

  16. profeza says:

    A publication holds much information related to the work concealed within it. We are seeing the revolution towards open science, but it is time to take it a step forward by making the underlying data openly available. This upholds the authenticity and transparency of the article as well as the accountability of the individual authors. Ultimately it will lead to Research Assessment 2.0, a new lens through which to assess a researcher.

  17. Pingback: Evaluating Scientists: Citations, Impact Factor, h-Index, Online Page Hits and What Else? | Mamidala Jagadesh Kumar – Notas sobre Bibliometría

