Looking beyond the impact factor
Broader and more transparent metrics could help improve how academic quality is assessed. That is the argument made by researchers Paul Wouters (also Dean of the Faculty of Social Sciences), Sarah de Rijcke and Ludo Waltman of the Centre for Science and Technology Studies (CWTS), together with colleagues, in a comment in Nature.
At present, the Journal Impact Factor (JIF) plays a central role in research evaluations. The JIF measures how often, on average, the articles a journal published in the preceding two years are cited in a given year. But a journal's value extends far beyond what this single number reflects, the authors argue. They attended a NIAS-Lorentz workshop in November 2017, organised by CWTS, Clarivate Analytics and EMBO, that examined the future of journal indicators in detail. Wouters says, 'This comment is endorsed by the key stakeholders who deal with the impact factor: the producer of the JIF (Clarivate Analytics), journal editors, publishers, research funders, indicator experts and researchers. It is unique that we have managed to bridge the gap between the different interests of these groups in a proposal on how to free evaluations from the negative effects of the impact factor. This is a great opportunity for the academic community.'
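For readers unfamiliar with the metric, the standard two-year impact factor can be written as follows (this is general background on how the JIF is calculated, not part of the Nature comment itself):

\[
\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_{y}\) is the number of citable items the journal published in year \(y\).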
Misuse
The authors believe that a strong focus on indicators in evaluations invites misuse. Researchers have been known to choose research questions that are likely to generate favourable metrics, for instance. And reference lists often include journal self-citations, artificially boosting a journal's citation counts. The authors even found examples of questionable journals that do not adhere to current quality standards, for example by failing to carry out peer review.
Obstacle
Waltman adds: 'The dominance of the JIF is also a significant obstacle to developments in open-access publishing. We can see this in, for instance, the discussions on Plan S, an initiative by research funders to make open-access publishing compulsory.' The authors set out to find an alternative to the present journal indicators, and began by investigating why journals exist at all. They concluded that journals serve several purposes. Journals provide a form of registration: whoever first publishes an idea in a scientific journal stakes an intellectual claim to it. They also provide an important form of evaluation, through peer review. In addition, they curate, disseminate and archive academic knowledge.
New indicators
To prevent misuse of the system, new indicators should be developed that meet a number of conditions, say the authors: their use in assessing the work of individual researchers or institutions should be restricted, they should be contextualised, knowledge about them should be fostered, and their irresponsible use should be challenged. In the article, the authors call on their peers to work together to create a governing organisation. De Rijcke says, 'It is our joint responsibility to ensure that indicators are well designed and used responsibly. Our suggestion, therefore, is to set up an international governing body that monitors these standards, a body in which all relevant stakeholders are represented.'
Full article
Read the full article on the Nature website.