[Irtalk] Peter Suber on Impact Factors of Journals

Smith, Ina <ismith@sun.ac.za>
Tue May 28 21:38:55 SAST 2013

We can do better than journal impact factors.

I just signed the San Francisco Declaration on Research Assessment (DORA) and urge you to do so as well.

DORA forcefully points out "[1] the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations; [2] the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and [3] the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact)."


Here are some of my own arguments for the DORA recommendations, or against journal impact factors, from "Thinking about prestige, quality, and open access," SPARC Open Access Newsletter, September 2008:

If you've ever had to consider a candidate for hiring, promotion, or tenure, you know that it's much easier to tell whether she has published in high-impact or high-prestige journals than to tell whether her articles are actually good.  Hiring committees can be experts in the field in which they are hiring, but promotion and tenure committees evaluate candidates in many different fields and can't be expert in every one.  Moreover, even bringing in disciplinary experts doesn't fully solve the problem.  We know that work can be good even when some experts in the field have never heard of it or can't abide it.  On top of that, quantitative judgments are easier than qualitative judgments, and the endless queue of candidates needing evaluation forces us to retreat from time- and labor-intensive methods, which might be more accurate, to shortcuts that are good enough.  And perhaps above all, it's easier to assume that quality and prestige never diverge than to notice when they do diverge and act accordingly.

Impact factors (IFs) rose to prominence in part because they fulfilled the need for easy quantitative judgments and allowed non-experts to evaluate experts.  As they rose to prominence, IFs became more tightly associated with journal prestige than journal quality, in part because their rise itself helped to define journal prestige.

IFs measure journal citation impact, not article impact, not author impact, not journal quality, not article quality, and not author quality, but they seemed to provide a reasonable surrogate for a quality measurement in a world desperate for a reasonable surrogate.  Or they did until we realized that they can be distorted by self-citation and reciprocal citation, that some editors pressure authors to cite the journal, that review articles can boost IF without boosting research impact, that articles can be cited for their weaknesses as well as their strengths, that a given article is as likely to bring a journal's IF down as up, that IFs are only computed for a minority of journals, favoring those from North America and Europe, and that they are only computed for journals at least two years old, discriminating against new journals.
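To make concrete why an IF is a journal-level statistic rather than an article-level one, here is a minimal sketch of the standard two-year Impact Factor calculation: citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items it published in those years. The function name and the figures are invented for illustration.

```python
def two_year_impact_factor(citations_to_prior_two_years: int,
                           citable_items_prior_two_years: int) -> float:
    """Journal-level ratio: mean citations per citable item over a
    two-year window. It says nothing about any individual article."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Made-up figures for an imaginary journal: 450 citations in 2013 to
# the 150 citable items it published in 2011-2012.
print(two_year_impact_factor(450, 150))  # 3.0
```

Because it is an average over a skewed citation distribution, a handful of highly cited papers (or review articles) can dominate the ratio while most articles in the journal are cited far less, which is one reason the metric transfers poorly from journals to the articles and authors within them.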

By making IFs central in the evaluation of faculty, universities create incentives to publish in journals with high IFs, and disincentives to publish anywhere else.  This discriminates against journals which are high in quality but low in IF, and journals which are high in quality but for whatever reason (for example, because they are new) excluded from the subset of journals for which Thomson Scientific computes IFs.  By favoring journals with high IFs, universities may succeed at excluding all second-rate journals, but they also exclude many first-rate journals and many first-rate articles.  At the same time, they create perverse incentives for authors and journals to game the IF system.

When we want to assess the quality of articles or people, and not the citation impact of journals, then we need measurements that are more nuanced, more focused on the salient variables, more fair to the variety of scholarly resources, more comprehensive, more timely, and with luck more automated and fully OA....

[My argument] has been misunderstood in the past.  I'm not saying that universities should lower their standards, assume quality from OA, give equal recognition to journals of lower or unknown quality, or treat any impact metric as a quality metric.  I'm saying that universities should do more to evaluate quality, despite the difficulties, and rely less on simplistic quality surrogates.  I'm saying that work of equal quality should have equal weight, regardless of the journals in which it is published.  I'm saying that universities should focus as much as possible on the properties of articles and candidates, not the properties of journals.  I'm saying that in their pursuit of criteria which exclude second-rate work, they should not adopt criteria which exclude identifiable kinds of first-rate work.

I'm never surprised when OA journals report high IFs, often higher than older and better-known journals in their fields.  This reflects the well-documented OA impact advantage.  I'm glad of the evidence that OA journals can play at this game and win.  I'm not saying that journals shouldn't care about their citation impact, or that IFs measure nothing.  I'm only saying that IFs don't measure quality and that universities should care more about quality, especially article quality and candidate quality, than journal citation impact.  I want OA journals to have high impact and prove it with metrics, and I want them to earn prestige in proportion to their quality.  But I want universities to take them seriously because of their quality, not because of their impact metrics or prestige....


