Author-level metrics
{{Short description|Metrics of the bibliometric impact of individual authors}}
{{Citation metrics}}
Author-level metrics are citation metrics that measure the bibliometric impact of individual authors, researchers, academics, and scholars. Many such metrics have been developed; they differ in how much of the citation record they take into account, ranging from simply counting total citations to analysing how citations are distributed across papers or journals using statistical or graph-theoretic principles.
These quantitative comparisons between researchers are mostly done to distribute resources (such as money and academic positions). However, there is still debate in the academic world about how effectively author-level metrics accomplish this objective.{{cite journal|last1=Hirsch|first1=J. E.|date=7 November 2005|title=An index to quantify an individual's scientific research output|journal=Proceedings of the National Academy of Sciences|volume=102|issue=46|pages=16569–16572|doi=10.1073/pnas.0507655102|pmc=1283832|pmid=16275915|arxiv=physics/0508025|bibcode=2005PNAS..10216569H|doi-access=free}}
Author-level metrics differ from journal-level metrics, which attempt to measure the bibliometric impact of academic journals rather than individuals, and from article-level metrics, which attempt to measure the impact of individual articles. However, metrics originally developed for academic journals can be reported at researcher level, such as the author-level eigenfactor{{cite journal |title=Author-level Eigenfactor metrics: Evaluating the influence of authors, institutions, and countries within the social science research network community |first1=Jevin D. |last1=West |first2=Michael C. |last2=Jensen |first3=Ralph J. |last3=Dandrea |first4=Gregory J. |last4=Gordon |first5=Carl T. |last5=Bergstrom |journal=Journal of the American Society for Information Science and Technology |year=2013 |doi=10.1002/asi.22790 |volume=64 |issue=4 |pages=787–801}} and the author impact factor.{{cite journal |title=Author Impact Factor: Tracking the dynamics of individual scientific impact |journal=Scientific Reports |doi=10.1038/srep04880 |first1=Raj Kumar |last1=Pan |first2=Santo |last2=Fortunato |year=2014 |volume=4 |page=4880|pmid=24814674 |pmc=4017244 |arxiv=1312.2650 |bibcode=2014NatSR...4E4880P }}
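For instance, the author impact factor adapts the construction of the journal impact factor to an individual author. In outline (the notation below is illustrative rather than taken verbatim from the source), it can be written as

<math display="block">\mathrm{AIF}_A(t) = \frac{C_A(t)}{N_A(t)},</math>

where <math>C_A(t)</math> is the number of citations received in year <math>t</math> by the papers that author <math>A</math> published during a fixed window of preceding years, and <math>N_A(t)</math> is the number of papers <math>A</math> published in that window.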
== List of metrics ==
=== Additional variations of ''h''-index ===
A number of models have been proposed to incorporate the relative contribution of each author to a paper, for instance by accounting for an author's rank in the sequence of authors.{{cite journal | last1 = Tscharntke | first1 = T. | last2 = Hochberg | first2 = M. E. | last3 = Rand | first3 = T. A. | last4 = Resh | first4 = V. H. | last5 = Krauss | first5 = J. | year = 2007 | title = Author Sequence and Credit for Contributions in Multiauthored Publications | journal = PLOS Biology | volume = 5 | issue = 1| page = e18 | doi = 10.1371/journal.pbio.0050018 |pmc=1769438 | pmid=17227141 | doi-access = free }} A generalization of the h-index and some other indices has been proposed that gives additional information about the shape of the author's citation function (heavy-tailed, flat/peaked, etc.).{{cite journal |last=Gągolewski |first=M. |author2=Grzegorzewski, P. |year=2009 |title=A geometric approach to the construction of scientific impact indices |journal=Scientometrics |volume=81 |issue=3 |pages=617–34 |doi=10.1007/s11192-008-2253-y |s2cid=466433 }} Because the h-index was never meant to measure future publication success, researchers have investigated which features are most predictive of an author's future h-index; the resulting predictions can be tried with an online tool.{{cite journal |doi=10.1038/489201a |title=Future impact: Predicting scientific success |year=2012 |last1=Acuna |first1=Daniel E. |last2=Allesina |first2=Stefano |last3=Kording |first3=Konrad P. |author-link3=Konrad Kording|journal=Nature |volume=489 |issue=7415 |pages=201–02 |pmid=22972278 |pmc=3770471|bibcode=2012Natur.489..201A }} However, later work has shown that, because the h-index is a cumulative measure, it contains an intrinsic autocorrelation that leads to a significant overestimation of its predictability; the true predictability of the future h-index is therefore much lower than previously claimed.{{cite journal |doi=10.1038/srep03052 |title=On the Predictability of Future Impact in Science|year=2013 |last1=Penner |first1=Orion |last2=Pan |first2=Raj K. |last3=Petersen | first3=Alexander M. |last4=Kaski | first4=Kimmo | last5= Fortunato | first5=Santo |journal=Scientific Reports |volume=3 |issue=3052|page=3052 |pmid=24165898 |pmc=3810665|bibcode=2013NatSR...3E3052P|arxiv=1306.0114 }} The h-index can also be restricted to different publication and citation time windows to analyse how it evolves over a researcher's career.{{cite journal|last=Schreiber|first=Michael|date=2015|title=Restricting the h-index to a publication and citation time window: A case study of a timed Hirsch index|journal=Journal of Informetrics|volume=9|pages=150–55|doi=10.1016/j.joi.2014.12.005|arxiv=1412.5050|s2cid=12320545}}
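The underlying computation in these variants can be illustrated with a short sketch. The following Python example is illustrative only, not an implementation of any specific published scheme; the (year, citations) data shape is invented here. It computes the plain h-index and a timed h-index restricted to a publication window, in the spirit of the timed Hirsch index mentioned above.

<syntaxhighlight lang="python">
# Illustrative sketch only: not an implementation of any specific published
# variant. The (year, citations) tuple format for papers is invented here.

def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h


def timed_h_index(papers, start_year, end_year):
    """h-index computed only over papers published within the given window."""
    window = [citations for (year, citations) in papers
              if start_year <= year <= end_year]
    return h_index(window)


papers = [(2010, 25), (2012, 14), (2015, 9), (2018, 6), (2021, 2)]
print(h_index([c for _, c in papers]))    # 4: four papers have at least 4 citations
print(timed_h_index(papers, 2015, 2021))  # 2: in the window, two papers have at least 2 citations
</syntaxhighlight>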
== Criticism ==
Some academics, such as the physicist Jorge E. Hirsch, have praised author-level metrics as a "useful yardstick with which to compare, in an unbiased way, different individuals competing for the same resource when an important evaluation criterion is scientific achievement." However, other members of the scientific community, and even Hirsch himself,{{Cite journal|last=Hirsch|first=Jorge E.|date=2020|title=Superconductivity, What the H? The Emperor Has No Clothes|journal=Physics and Society|volume=49|pages=5–9|arxiv=2001.09496|quote=I proposed the H-index hoping it would be an objective measure of scientific achievement. By and large, I think this is believed to be the case. But I have now come to believe that it can also fail spectacularly and have severe unintended negative consequences. I can understand how the sorcerer’s apprentice must have felt. (p.5)}} have criticized them as being particularly susceptible to gaming.{{cite journal |last1=Lawrence |first1=Peter A. |title=The mismeasurement of science |journal=Current Biology |date=2007 |volume=17 |issue=15 |pages=R583–R585 |doi=10.1016/j.cub.2007.06.014 |pmid=17686424 |bibcode=2007CBio...17.R583L |s2cid=30518724 |url=https://www.cell.com/current-biology/pdf/S0960-9822(07)01516-3.pdf}}{{cite journal |last1=Şengör |first1=A. M. Celâl |title=How scientometry is killing science |journal=GSA Today |date=2014 |volume=24 |issue=12 |pages=44–45 |doi=10.1130/GSATG226GW.1 |url=https://www.geosociety.org/gsatoday/archive/24/12/pdf/i1052-5173-24-12-44.pdf}}{{cite journal |last1=Seppelt |first1=Ralf |title=The Art of Scientific Performance |journal=Trends in Ecology & Evolution |date=2018 |volume=33 |issue=11 |pages=805–809 |doi=10.1016/j.tree.2018.08.003 |url=https://doi.org/10.1016/j.tree.2018.08.003 |pmid=30270172 |bibcode=2018TEcoE..33..805S |s2cid=52890068}}
Work in bibliometrics has demonstrated multiple techniques for manipulating popular author-level metrics. The most widely used metric, the h-index, can be manipulated through self-citations,{{cite journal|author=Gálvez RH|date=March 2017|title=Assessing author self-citation as a mechanism of relevant knowledge diffusion|journal=Scientometrics|volume=111|issue=3|pages=1801–1812|doi=10.1007/s11192-017-2330-1|s2cid=6863843}}{{Cite journal|last1=Bartneck|first1=Christoph|last2=Kokkelmans|first2=Servaas|year=2011|title=Detecting h-index manipulation through self-citation analysis|journal=Scientometrics|volume=87|issue=1|pages=85–98|doi=10.1007/s11192-010-0306-5|pmc=3043246|pmid=21472020}}{{Cite journal|last1=Ferrara|first1=Emilio|last2=Romero|first2=Alfonso|year=2013|title=Scientific impact evaluation and the effect of self-citations: Mitigating the bias by discounting the h-index|journal=Journal of the American Society for Information Science and Technology|volume=64|issue=11|pages=2332–39|arxiv=1202.3119|doi=10.1002/asi.22976|s2cid=12693511}} and even computer-generated nonsense documents, for example produced with SCIgen, can be used for that purpose.{{cite report|url=http://rr.liglab.fr/research_report/RR-LIG-008.pdf|title=Ike Antkare one of the great stars in the scientific firmament|author=Labbé, Cyril|date=2010|publisher=Joseph Fourier University|work=Laboratoire d'Informatique de Grenoble RR-LIG-2008 (technical report)}} Metrics can also be manipulated through coercive citation, a practice in which a journal editor forces authors to add spurious citations to articles in that journal before agreeing to publish their submission.{{cite journal|last1=Wilhite|first1=A. W.|last2=Fong|first2=E. A.|year=2012|title=Coercive Citation in Academic Publishing|journal=Science|volume=335|issue=6068|pages=542–3|bibcode=2012Sci...335..542W|doi=10.1126/science.1212540|pmid=22301307|s2cid=30073305}}{{Cite journal|last=Van Noorden|first=Richard|date=February 6, 2020|title=Highly cited researcher banned from journal board for citation abuse|journal=Nature|volume=578|issue=7794|pages=200–201|doi=10.1038/d41586-020-00335-7|pmid=32047304|bibcode=2020Natur.578..200V|doi-access=free}}
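As a hedged illustration of the self-citation mechanism (the numbers below are invented, not drawn from the studies cited above), the following sketch splits each paper's citations into self-citations and external citations and shows how the former can inflate the h-index.

<syntaxhighlight lang="python">
# Hedged illustration with invented numbers: splitting each paper's citations
# into self-citations and citations from others shows how self-citation can
# inflate the h-index.

def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    return max((rank for rank, c in enumerate(counts, 1) if c >= rank), default=0)


# (self_citations, external_citations) for each paper -- hypothetical data
papers = [(3, 4), (3, 3), (4, 2), (2, 2), (3, 1)]

with_self = [s + e for s, e in papers]   # [7, 6, 6, 4, 4]
without_self = [e for _, e in papers]    # [4, 3, 2, 2, 1]

print(h_index(with_self))     # 4
print(h_index(without_self))  # 2
</syntaxhighlight>

Discounting self-citations, as in the approach of Ferrara and Romero cited above, is one proposed way of correcting for this effect.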
Additionally, if the h-index is used as a decision criterion by research funding agencies, the game-theoretic response to this competition is an increase in the average length of coauthor lists.{{cite journal|author1=Rustam Tagiew|author2=Dmitry I. Ignatov|year=2017|title=Behavior mining in h-index ranking game|url=http://ceur-ws.org/Vol-1968/paper6.pdf|journal=CEUR Workshop Proceedings|volume=1968|pages=52–61}} A study analyzing more than 120 million papers found that, in the field of biology specifically, the validity of citation-based measures is being compromised and their usefulness is decreasing.{{cite journal |last1=Fire |first1=Michael |last2=Guestrin |first2=Carlos |title=Over-optimization of academic publishing metrics: observing Goodhart's Law in action |journal=GigaScience |date=1 June 2019 |volume=8 |issue=6 |doi=10.1093/gigascience/giz053|pmid=31144712 |pmc=6541803 |arxiv=1809.07841 }} Consistent with Goodhart's law, publication counts are no longer a good metric, as papers have become shorter and author lists longer.
Leo Szilard, who conceived of the nuclear chain reaction, also criticized the decision-making system for scientific funding in his book ''The Voice of the Dolphins and Other Stories''.{{cite book |title=The Voice of the Dolphins and Other Stories |date=1961 |publisher=Simon and Schuster |location=New York}} Senator J. Lister Hill read excerpts of this criticism during 1961 Senate hearings on appropriations for 1962, addressing the slowing of government-funded cancer research.{{cite book |last1=Committee |first1=United States Congress Senate Appropriations |title=Labor-Health, Education, and Welfare Appropriations for 1962, Hearings Before the Subcommittee of ... , 87-1 on H.R. 7035 |date=1961 |pages=1498 |url=https://books.google.com/books?id=B7tgSd4ZDBsC&pg=PA1498 |language=en}} Szilard's criticism focuses on how such evaluation systems slow scientific progress, rather than on specific methods of gaming them:
"As a matter of fact, I think it would be quite easy. You could set up a foundation, with an annual endowment of thirty million dollars. Research workers in need of funds could apply for grants, if they could mail out a convincing case. Have ten committees, each committee, each composed of twelve scientists, appointed to pass on these applications. Take the most active scientists out of the laboratory and make them members of these committees. And the very best men in the field should be appointed as chairman at salaries of fifty thousand dollars each. Also have about twenty prizes of one hundred thousand dollars each for the best scientific papers of the year. This is just about all you would have to do. Your lawyers could easily prepare a charter for the foundation. As a matter of fact, any of the National Science Foundation bills which were introduced in the Seventy-ninth and Eightieth Congress could perfectly well serve as a model."
"First of all, the best scientists would be removed from their laboratories and kept busy on committees passing on applications for funds. Secondly the scientific workers in need of funds would concentrate on problems which were considered promising and were pretty certain to lead to publishable results. For a few years there might be a great increase in scientific output; but by going after the obvious, pretty soon science would dry out. Science would become something like a parlor game. Somethings would be considered interesting, others not. There would be fashions. Those who followed the fashions would get grants. Those who wouldn’t would not, and pretty soon they would learn to follow the fashion, too."
== References ==
{{Reflist}}
== Further reading ==
* {{cite book |isbn=978-3319000251 |url=https://books.google.com/books?id=8DK6BQAAQBAJ&q=author-level%20metrics&pg=PA181 |title=Opening Science: The Evolving Guide on How the Internet is Changing Research ... |date=2013-12-16 |access-date=2015-08-16|last1 = Bartling|first1 = Sönke|last2 = Friesike|first2 = Sascha| publisher=Springer }}
* {{cite book |isbn=978-1107653603 |url=https://books.google.com/books?id=WnIhAwAAQBAJ&q=author-level%20metrics&pg=PA152 |title=The Handbook of Journal Publishing |author=Sally Morris |author2=Ed Barnas |author3=Douglas LaFrenier |author4=Margaret Reich |date=2013-02-21 | publisher=Cambridge University Press |access-date=2015-08-16}}
* {{cite book |isbn=978-3319097848 |url=https://books.google.com/books?id=3pY9BQAAQBAJ&q=author-level%20metrics&pg=PA267 |title=Incentives and Performance: Governance of Research Organizations |date= 7 November 2014|access-date=2015-08-16|last1 = Welpe|first1 = Isabell M.|last2 = Wollersheim|first2 = Jutta|last3 = Ringelhan|first3 = Stefanie|last4 = Osterloh|first4 = Margit| publisher=Springer }}
* {{cite book |isbn=978-3319103761 |url=https://books.google.com/books?id=oP05BQAAQBAJ&q=author-level%20metrics&pg=PA90 |title=Measuring Scholarly Impact: Methods and Practice |date= 6 November 2014|access-date=2015-08-16|last1 = Ding|first1 = Ying|last2 = Rousseau|first2 = Ronald|last3 = Wolfram|first3 = Dietmar| publisher=Springer }}
{{Academic publishing}}