Tuesday, November 17, 2009

Valuing The Written Word

I recently attended an interesting lecture at the Annual Meeting of the Association of American Medical Colleges in Boston. Harold E. Varmus, MD, gave the Robert G. Petersdorf Lecture, entitled "Publication Practices and Academic Values." The talk addressed the increasing pressure to publish in glamour journals, both for funding and for academic advancement. Dr. Varmus pointed out that many of his most significant works appeared in "lesser" journals that served the appropriate audience for the science. Discussions after the formal talk centered on judging academic achievement by that standard.


How did some scientific journals acquire biblical reputations?

Some have been around a long, long time. I have not found anyone who can remember a time when The New England Journal of Medicine was not the premier venue for clinical research.

How do we determine a journal’s worth? At the present time, the index of choice is the Impact Factor (IF). There is an excellent description of IF and its calculation on Wikipedia:

The impact factor, often abbreviated IF, is a measure reflecting the average number of citations to articles published in science and social science journals. It is frequently used as a proxy for the relative importance of a journal within its field, with journals with higher impact factors deemed to be more important than those with lower ones. The impact factor was devised by Eugene Garfield, the founder of the Institute for Scientific Information (ISI), now part of Thomson Reuters. Impact factors are calculated yearly for those journals that are indexed in Thomson Reuters' Journal Citation Reports.
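
To make that calculation concrete, here is a minimal sketch of the standard two-year computation; the citation and article counts below are invented for illustration.

```python
# Minimal sketch of the two-year impact factor calculation.
# All numbers are hypothetical; real values come from Journal Citation Reports.

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """IF for year Y = citations received in Y to items published in Y-1 and Y-2,
    divided by the number of "citable items" published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 320 citations in 2009 to its 2007-2008 articles,
# of which 150 counted as citable items.
print(round(impact_factor(320, 150), 2))  # 2.13
```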

Academics, especially scientists, love an objective numerical data point. During the 20 years or so that I have participated in these endeavors [has it really been that long?], the IF has risen dramatically in importance because it seems so objective. However, like all numbers, the IF can be gamed, and its validity has been questioned:

The impact factor refers to the average number of citations per paper, but this is not a normal distribution. It is rather a Bradford distribution, as predicted by theory. Being an arithmetic mean, the impact factor therefore is not a valid representation of this distribution and unfit for citation evaluation.[6]

Most journals try to improve their IF by publishing more review articles, which receive more citations than original research. Other manipulations may also inflate the IF:

Journals may change the fraction of "citable items" compared to front-matter in the denominator of the IF equation. Which types of articles are considered "citable" is largely a matter of negotiation between journals and Thomson Scientific. As a result of such negotiations, impact factor variations of more than 300% have been observed.[12] For instance, editorials in a journal are not considered to be citable items and therefore do not enter into the denominator of the impact factor. However, citations to such items will still enter into the numerator, thereby inflating the impact factor. In addition, if such items cite other articles (often even from the same journal), those citations will be counted and will increase the citation count for the cited journal. This effect is hard to evaluate, for the distinction between editorial comment and short original articles is not always obvious. "Letters to the editor" might refer to either class.

 

Several methods, not necessarily with nefarious intent, exist for a journal to cite articles in the same journal which will increase the journal's impact factor.[13]

 

In 2007 a specialist journal with an impact factor of 0.66 published an editorial that cited all its articles from 2005 to 2006 in a protest against the absurd use of the impact factor.[14] The large number of citations meant that the impact factor for that journal increased to 1.44. As a result of the increase, the journal was not included in the 2008 Journal Citation Report. [15]
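
The arithmetic behind that protest is easy to reconstruct with made-up numbers (the actual article and citation counts are not given here); the point is that a self-citing editorial adds to the numerator without adding to the denominator.

```python
# Hypothetical reconstruction of how journal self-citation inflates the IF.
# The counts are invented so the jump matches the quoted 0.66 -> 1.44.

external_citations = 99   # citations from other journals (made up)
self_citations = 0        # citations from the journal to its own articles
citable_items = 150       # articles in the denominator (made up)

print(round((external_citations + self_citations) / citable_items, 2))  # 0.66

# One editorial (not a citable item) cites 117 of the journal's own
# recent articles: the numerator grows, the denominator does not.
self_citations += 117
print(round((external_citations + self_citations) / citable_items, 2))  # 1.44

# Ignoring within-journal citations removes this particular lever.
print(round(external_citations / citable_items, 2))  # 0.66
```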

The Wikipedia article lists a number of alternatives to the IF; however, complexity plagues them. PageRank-style algorithms (such as the one used by Google) cannot be expressed in a simple equation. Increasing complexity may improve validity, but it zaps intuitive understanding.
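
For a sense of what those alternatives involve, here is a toy power-iteration sketch in the spirit of PageRank-style journal metrics such as Eigenfactor; the citation matrix and damping factor are made up. Even this stripped-down version is harder to eyeball than a simple ratio.

```python
import numpy as np

# Toy PageRank-style journal ranking. Entry [i, j] is the number of
# citations journal j makes to journal i. All numbers are invented.
citations = np.array([
    [0.0, 5.0, 1.0],
    [3.0, 0.0, 2.0],
    [2.0, 4.0, 0.0],
])

# Normalize each column so every journal "votes" with its outgoing citations.
votes = citations / citations.sum(axis=0)

damping = 0.85
n = votes.shape[0]
rank = np.full(n, 1.0 / n)

# Power iteration: a journal ranks highly if highly ranked journals cite it.
for _ in range(100):
    rank = (1 - damping) / n + damping * votes @ rank

print(rank / rank.sum())
```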

Dr. Varmus pleaded for an end to IF insanity. He also works on open access, and has been a major proponent of freely available online science. In 2006, Chris Surridge of the Public Library of Science (PLoS) blogged on the IF and its irrelevance to PLoS, following an editorial in PLoS Medicine. In the Petersdorf lecture, Varmus bragged about the reported IF for PLoS Medicine (amusing but irrelevant, I guess).

So what is the answer? Clearly, the IF can be too easily manipulated, and alternatives should be considered. Or, perhaps, we need to get over the idea of ranking stuff. Ranking journals and schools (that pesky US News and World Report thing) results in game-playing. We all know which ones are elite. Right now the IF works about as well as the Bowl Championship Series algorithm does for college football.

In this internet-connected, computerized age, any paper in a peer-reviewed, indexed journal can readily be retrieved by interested parties. “Elite” journals increase early exposure, but those who need to see the work can easily retrieve it without personal subscriptions or press releases. Today, more than ever, the content should be the point.

Photo courtesy of PhotoXpress.

5 comments:

  1. One simple change could at least make it harder to game the system: make within-journal citations not count, or give them lower weight in the ranking. This would instantly prevent the most extreme forms of gaming the system with self-referential editorials.

    In addition, I know some journals that are very insular with relatively high impact factors. They publish a lot of articles, but each article in the journal cites many articles that were previously published in that same journal. They are legitimate citations, but they point to a lively intra-field discussion rather than a journal that "impacts" scientific discussion in a larger portion of the community.

    Such a simple change would definitely give a better measure of impact. There's still the review-article gimmick, but if a journal becomes the go-to place for high-quality review articles, that really does affect its impact.

    All this said, the emphasis on publications in top journals over the quality of the work and its influence in the field is generally a bit crazy.

  2. I don't understand -- anyone can use Web of Science and Google Scholar and see how many actual citations their papers have. So why pay attention to what journal a paper is in when trying to evaluate the impact of someone's work?

  3. Good question, indeed. Some schools in the US factor IF into their "publication scores" for P&T; in other countries, a high IF is even more important. Study sections are often "forgiving" of the quantity of papers if they are in the marquee journals (so they must be really great papers).

  4. Morgan, because it takes specific effort. Having a general idea of what is an impressive journal and what is not in the back of your mind makes it far easier to grade a CV.

    I suggested some time ago that what we need is a culture in which it is normal to include current citation counts for the article right on the CV, preferably expressed relative to some expected value, say the journal's overall IF. A z-score or something.

  5. Here it was:
    http://drugmonkey.wordpress.com/2007/09/21/a-modest-proposal-on-impact-factors/
