The Rise of Altmetrics: Shaping New Ways of Evaluating Research

Labs Explorer on March 27, 2020

As a researcher, you might be under a lot of pressure to publish (or perish). When you compare yourself to peers with years of experience, you might fear that no one will even consider you for a grant or a position. Their impressive h-indexes can be intimidating, no matter what your research is or how it could impact your field or society.

That fear, though, rests on traditional research metrics, and times are changing when it comes to assessing research. New metrics have emerged, along with new tools for measuring research impact.

In this article, we will give you an overview of traditional and alternative metrics and the tools that measure them, as well as an insight into their limits. We will also show how important it is for researchers to promote their expertise in non-traditional ways.

The complex nature of R&D impact

Defining impact is a challenge in its own right. There is an entire field of study dedicated to assessing (scientific) research impact, called scientometrics, a sub-field of bibliometrics. Scientometricians, researchers and research administrators have been very vocal over the last two decades about the ways research is rated.

“Too often people think too narrowly about what ‘impacts’ can mean,” says Sir Phillip Campbell, editor-in-chief of Springer Nature. In his research paper from 2005 about the Journal Impact Factor, he emphasized that research impact is “a multi-dimensional construct that cannot be adequately measured by any single indicator.”

Because his work at Springer Nature focuses on research from across all disciplines, directly or indirectly related to societal challenges and the themes of the UN’s Sustainable Development Goals, Sir Campbell prefers the term research impact over scientific impact. “The latter language is perceived by some to exclude the vital contributions of the humanities and social sciences, for example,” says Sir Campbell. “There are many paths by which research can impact and influence other research and activities outside research.”

And that is exactly why many governments and funders are trying to include more alternative ways of assessing research. Several questions need answering: What has changed in research metrics over the years? How can quantitative metrics be complemented by qualitative ones? And is that possible at all?

A brief history of research metrics

Ever since Eugene Garfield created the Science Citation Index (SCI) in 1964, research metrics have focused on counting the citations a research article receives in journals. Until the late 1990s, however, validation of a paper by peer review was sufficient to establish its quality. Peer review was a binary method of evaluation: good vs. not good.

The advancement of technology and the introduction of automation made counting citations easier and more frequent. It started with the launch of the Web of Science database in 2002, followed by Elsevier’s Scopus and then Google Scholar, both in 2004. From there, research evaluation shifted from a binary to a more graduated rating. This shift raised many concerns in the academic community, especially when it comes to allocating research funds.

“The wrong use of the metrics, as a decision supporting instrument, leads to the present crisis of reproducibility in science, is enhancing tendencies for scientific misconducts by putting too much emphasis on formal achievements with lacking quality control,” explains Jan Hrušák, Chair of the European Strategy Forum on Research Infrastructures (ESFRI).

Until recently, all eyes were on just a few indices, which became the fundamental ways of assessing research. Although it is now clear that they cannot actually indicate the quality of research, and therefore its impact, they are still relevant in many respects. What are these traditional indices, and why do they still matter?

The impact factor obsession

Over the last decades, numerous studies have tried to figure out which of the traditional bibliometric indicators measures research impact most accurately.

The most popular way to assess a researcher’s impact was to calculate the h-index, proposed in 2005 by Jorge Hirsch, a physicist at the University of California, San Diego, as a single number summarizing a researcher’s output and citations (we return to the exact definition below). In time, the h-index kicked off a whole “impact-factor obsession” era.

Recruiters started asking candidates for their h-index, and pressure was put on PhD students to publish in high-impact journals to gain additional external funding for their research.

Also, “universities have become obsessed with their position in global rankings such as the Shanghai Ranking and Times Higher Education’s list,” state Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke and Ismael Rafols in their comment published in Nature in 2015. That comment would later become known as the “Leiden Manifesto for research metrics” and form the basis for new policies on assessing research, which we will mention later on.

However, the Impact Factor is primarily a journal-level measure: the average number of citations received by articles in a journal within a two-year frame. It is important to stress that citation rates vary between research fields. As stated in the Leiden Manifesto: “Top-ranked journals in mathematics have impact factors of around 3; top-ranked journals in cell biology have impact factors of about 30.”
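For readers who want the two-year frame spelled out, the commonly used formula is:

    \mathrm{JIF}_y = \frac{\text{citations received in year } y \text{ by items the journal published in years } y-1 \text{ and } y-2}{\text{citable items the journal published in years } y-1 \text{ and } y-2}

For example, a journal whose 2018 and 2019 output of 100 citable items attracted 300 citations in 2020 would have a 2020 Impact Factor of 3.0, comparable, going by the Leiden Manifesto quote above, to a top-ranked mathematics journal.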

The logic behind the “impact-factor obsession” goes like this: if I publish my research in a journal with a high Journal Impact Factor, my research will have more impact than if it were published in a low-impact journal. The higher the Journal Impact Factor, the higher the impact of an individual piece of research. But is that really the case?

How influential are you as an individual researcher?

Being a researcher in a culture of measurement is very challenging. Jan Hrušák, Chair of ESFRI, says that “the influence of a researcher is given by his/her position in the corresponding community.” Thus, he stresses that “quality, quantity and impact must be assessed from different perspectives and cannot be projected on a linear scale.”

According to the study Quantifying the Impact and Relevance of Scientific Research, conducted by William J. Sutherland, David Goulson, Simon G. Potts and Lynn V. Dicks in 2011, “there is a weak positive correlation between our impact score and the impact factor of the journal.” But when measuring the impact of an individual researcher, “it is inappropriate to consider only the journal in which they have published,” states Sir Campbell.

And still, numerous indices try to do exactly that. We have already mentioned the h-index, which Jorge Hirsch defined in his 2005 paper An index to quantify an individual’s scientific research output. He introduced the h-index as “the number of papers with citation number ≥h, as a useful index to characterize the scientific output of a researcher.”
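Hirsch’s definition translates directly into a few lines of code. Here is a minimal sketch in Python (the citation counts are made up for illustration):

    # h-index: the largest h such that the researcher has h papers
    # with at least h citations each.
    def h_index(citations: list[int]) -> int:
        ranked = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the paper at this rank still has >= rank citations
            else:
                break
        return h

    # Five papers cited [10, 8, 5, 4, 3] times: the 4th paper has 4 citations,
    # the 5th has only 3, so h = 4.
    print(h_index([10, 8, 5, 4, 3]))  # 4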

The h-index inspired many others in the eternal quest for an accurate research impact assessment. It was followed by Leo Egghe’s g-index, proposed in his 2006 paper Theory and practice of the g-index as “an improvement of the h-index by giving more weight to highly-cited articles.”
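The g-index can be sketched the same way, assuming the standard formulation: the largest g such that the g most-cited papers together hold at least g² citations. With the same made-up counts as above:

    # g-index: the largest g such that the top g papers have,
    # in total, at least g*g citations.
    def g_index(citations: list[int]) -> int:
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    # Cumulative citations [10, 18, 23, 27, 30] vs. rank^2 [1, 4, 9, 16, 25]:
    # g = 5, higher than h = 4, because the highly-cited papers pull the total up.
    print(g_index([10, 8, 5, 4, 3]))  # 5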

Google Scholar, launched in 2004, later introduced its own i10-index, defined as “the number of publications with at least 10 citations.” It is only used within Google Scholar, through the My Citations feature.
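The i10-index is the simplest of the three; a one-line sketch with the same made-up counts:

    # i10-index: how many papers have at least 10 citations.
    def i10_index(citations: list[int]) -> int:
        return sum(1 for cites in citations if cites >= 10)

    print(i10_index([10, 8, 5, 4, 3]))  # 1, only one paper reaches 10 citations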

And yet, none of these indices managed to answer the question: how do we accurately measure global research impact, including beyond the walls of academia?

Altmetrics - a complement to traditional metrics or a shift in research evaluation?

During the last two decades, it became clear that we cannot look only at citation counts. The assessment of research impact should also include posts on social media, policy documents, Wikipedia pages, news mentions, and so on. Still, as with traditional research metrics, one question remains hard to answer: how do we measure impact and influence beyond the academic community?

Alternative metrics have received more attention during the last ten years. First, F1000Prime was established in 2002 as a scientific social platform for feedback and comments. Then Mendeley followed in 2008, emphasizing the importance of research influencers in a given field.

A shift happened - from citation-based metrics to alternatives that also measure impact beyond academia. The term “altmetrics” was proposed by Jason Priem, Dario Taraborelli, Paul Groth and Cameron Neylon in their “Altmetrics: A manifesto” from 2010. Their approach was different, focused on calculating “scholar impact based on diverse online research output, such as social media, online news media, online reference managers and so on.” By doing so, the authors believed, altmetrics “demonstrate both the impact and the detailed composition of the impact.”

How can marketing help your research rank better?

Since then, the term altmetrics has been adopted for metrics that assess impact beyond academia and citation counts. With altmetrics, content, social media and digital marketing sneaked into the scientific community, pushing researchers to put effort into the dissemination of their research results.

One of the best-known companies dedicated to calculating alternative metrics is, logically, called Altmetric. Yes, that’s altmetrics without an “s”. The company was founded in 2011 by Euan Adie with the mission to track and analyze the online activity around scholarly literature, not only for individual researchers but also for institutions, publishers and funders.

“It collates what people are saying about published research outputs in scholarly and non-scholarly forums like the mainstream media, policy documents, patents, social networks, and blogs to provide a more robust picture of the influence and reach of scholarly work,” explains Sarah Condon, marketing director at Altmetric.

Since 2012, they have monitored a range of non-traditional sources, searching for links and references to published research. “Today the Altmetric database contains 124.6 million mentions of over 27.8 million research outputs tracked (including journal articles, books, datasets, images, white papers, reports and more), and is constantly growing,” concludes Sarah Condon.

If we apply marketing principles to scientific dissemination, your research is the product and funders are your customers. Customers need to know about your product if you want to win funding. How do you achieve that?

There are countless options for promoting your research online. Having your own website, or at least active and updated social media profiles, can do wonders. That way, you can easily connect with influencers in your domain, expand your network and make sure your work reaches a broader audience. You can even invest in social media advertising campaigns and target the audience you want to reach within a certain timeframe, for example around an event.

Scientists have started working with experts in communication and marketing to make their research and publications stand out from the crowd. For example, Scientist is a marketing service provider for R&D teams that specializes in scientific dissemination and content marketing. Over the years, they have become experts in supporting scientists and CROs in their communication efforts.

Altmetrics tools

Just as there are many tools for measuring the quantitative impact of research papers and journals, new tools are now emerging that try to measure the qualitative impact as well.

A list of apps that measure alternative metrics can be found alongside the aforementioned “Altmetrics: A manifesto.” Here is our list of the most popular altmetrics tools right now:

  • Altmetric - One of the leading companies providing comprehensive impact measurements, not only for individual researchers but also for institutions, publishers and funders. They include social media traffic triggered by the publication of a given work, looking at everything from “patents and public policy documents to mainstream media, blogs, Wikipedia, and social media platforms.”
  • PlumX - Plum Analytics was founded in early 2012 with the vision of bringing modern ways of measuring research impact to individuals and organizations that use and analyze research. Since 2017, they have been a part of Elsevier. They offer an embeddable widget for live tracking of altmetrics.
  • ImpactStory - an open-source, web-based tool; researchers can add an ImpactStory widget to their own websites to get live altmetrics for papers and other research products. It sorts metrics by engagement type and audience.
  • ResearchGate - besides publishing, sharing and commenting on research, on this platform you can calculate your RG Score, a “scientific reputation based on how work is received by peers.”
  • Academia