
Research Visibility: Research Metrics

Research metrics proliferate in academia today. They include traditional bibliometrics (quantitative measures of research publications, such as publication counts and citation counts) as well as more recent alternative metrics or "altmetrics" (such as social media mentions).

This guide provides an overview of commonly used metrics and their limitations, as well as guidance on best practices for using metrics in research evaluation.

Best Practices

Evaluation methods should be appropriate to the question being asked (or the aims of the assessment) and to the practices of the discipline.

Avoid using quantitative metrics in isolation; they should be accompanied by peer review and expert judgment.

Select bibliometric indicators judiciously; use normalized indicators where appropriate.

Use multiple indicators rather than a single one to gain a fuller picture and to reduce susceptibility to outliers, bias, or gaming.

Be mindful of the limitations of bibliometric analyses.

For more best practices, see the section below on Responsible Metrics.

Responsible Metrics

The Leiden Manifesto

The Leiden Manifesto, published in Nature in 2015, proposes ten principles to guide research evaluation.

DORA

The Declaration on Research Assessment (DORA), developed in 2012 during the Annual Meeting of the American Society for Cell Biology in San Francisco, comprises 18 recommendations to improve the ways in which scientific research output is evaluated by funding agencies, academic institutions, and other parties.

Approaches to Assessing Impacts in the Humanities and Social Sciences

The report Approaches to Assessing Impacts in the Humanities and Social Sciences (PDF) was published by the Federation for the Humanities and Social Sciences in recognition of the need to define HSS research broadly and use a variety of measures, particularly qualitative ones, to evaluate impact in the humanities and social sciences.

Sources of Research Metrics

Citation-Tracking Databases

Research metrics are typically derived from a citation index (a database that tracks citations among papers, in addition to bibliographic details). The most commonly used sources are Web of Science, Scopus, and Google Scholar.

Other Sources

Dimensions (free version) - another citation index, which includes research metrics such as citation counts.

Publish or Perish - a free downloadable software program that retrieves and analyzes academic citations from sources such as Google Scholar or Microsoft Academic.

Research metrics depend on database coverage, so metrics should be drawn from the same source whenever possible, and the data source should be named whenever metrics are reported.

Limitations of Research Metrics

Disciplinary Differences

  • Publication and citation practices vary widely by discipline; bibliometric comparisons between disciplines are generally not recommended (unless normalized metrics are used).
  • Books, book chapters, and other non-article formats (e.g., exhibits and performances) are not well represented, if at all, in citation-tracking databases; thus, metrics based on journals and journal articles have limited applicability in some disciplines, particularly the arts and humanities.

Scope (Coverage) and Accuracy of Source Data

  • Research metrics are usually calculated from data contained in commercial citation indices (citation-tracking databases), such as Scopus, Web of Science, and Google Scholar. The measures are therefore tied to the content indexed in each of those databases. The value of an author's h-index, for example, will likely differ between Scopus, Web of Science, and Google Scholar because of differences in coverage of the author's works and of citing documents (see the sketch after this list).
  • Coverage of different languages, geographic regions, and disciplines varies across databases, as does data accuracy.
  • Citation-tracking databases have an English-language bias and provide limited coverage of non-English works. Similarly, their geographic coverage reflects the locations of major publishers; they may have limited coverage of works that are of national or regional importance.
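To make concrete what is being compared, here is a minimal sketch (in Python, with made-up citation counts) of how an h-index is computed: an author has index h if h of their publications have each received at least h citations. Because each database sees a different set of publications and citing documents, the same author can have a different h-index in each database.

    def h_index(citation_counts):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for the same author as reported by two
    # databases with different coverage of the author's works and citers.
    database_a = [25, 18, 12, 9, 8, 6, 2, 1]
    database_b = [31, 24, 15, 11, 9, 8, 7, 3, 2, 2]

    print(h_index(database_a))  # 6
    print(h_index(database_b))  # 7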

Time

  • Time is needed for publications to receive citations, so citation-based metrics are less useful when applied to early career researchers.
  • Citations accrue at different rates in different disciplines and for different publication types.

Size

  • The value of some metrics tends to increase with the size of the entity being measured. For example, a small research group will tend to have fewer publications and citations than a large department; in such cases, a metric such as citations per publication may be more appropriate than a total citation count (see the worked example below).
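As a purely illustrative example with hypothetical numbers: a department with 400 publications and 2,000 citations has 2,000 / 400 = 5 citations per publication, while a small group with 20 publications and 160 citations has 160 / 20 = 8. The total citation count favors the department, but the size-normalized figure favors the group.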

Normalization

Since publication and citation patterns vary across disciplines and over time, bibliometric indicators are often normalized to enable comparisons across different fields, time periods, document types, or other factors.

A field-normalized citation score, for example, compares the total number of citations received by a publication (or author) to the expected number of citations of a publication (or author) in the same field.

A value of 1.00 indicates that the publication has received the average number of citations for publications in that field; a value greater than 1.00 indicates that the publication has been cited more than the world average for that field.
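The calculation itself is simply a ratio of actual to expected citations. Below is a minimal sketch in Python with hypothetical numbers; actual indicators such as Scopus's Field-Weighted Citation Impact (FWCI) or Web of Science's Category Normalized Citation Impact (CNCI) define their baselines (field, year, document type) in product-specific ways.

    def field_normalized_score(actual_citations, expected_citations):
        # Ratio of a publication's citations to the average citations of
        # publications from the same field, year, and document type.
        if expected_citations <= 0:
            raise ValueError("expected citations must be positive")
        return actual_citations / expected_citations

    # Hypothetical example: an article cited 24 times, where the average
    # article of the same field, year, and type has 12 citations.
    print(round(field_normalized_score(24, 12), 2))  # 2.0 -> twice the field average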


Help with Research Metrics

Contact your librarian for advice on selecting appropriate research metrics or on navigating citation-tracking databases.


Metrics Toolkit: Helping You Navigate the Research Metrics Landscape

The Metrics Toolkit is a resource for researchers and evaluators that provides guidance for demonstrating and evaluating claims of research impact. With the Toolkit you can quickly understand what a metric means, how it is calculated, and whether it's a good match for your impact question.