Wednesday, July 13, 2011

Management Journal Rankings: Looking Beyond the Impact Factor Ranks

Hari Bapuji

In a recent post, Suhaib Riaz reflects on journal rankings and asks whether everything that counts can be counted. Rankings of all types are useful, but they offer just one perspective and are typically based on a single dimension. Bring a different and equally important dimension into the analysis, and the rankings change. I would like to illustrate this with the help of the recently published, well-known Thomson Reuters Journal Citation Reports (JCR). JCR itself does not rank journals, but journals use its impact factor data to assert their “intellectual superiority”. In this post, I will focus on journal self-citations, using JCR data on journals in the Management category.

The issue of journal self-citations has recently gained attention because journal editors can influence impact factors by asking authors to cite articles published in their own journal. For example, during a recent review process, an editor’s letter said: “You need to provide five additional references from previously published articles in Journal XYZ (the journal where the paper was under review) and cite them in the references.” I am sure many of my fellow researchers face similar situations; in fact, a team of researchers has recently begun examining this issue. In short, journal self-citations can be used to influence rankings.

Fortunately, the Thomson Reuters Journal Citation Reports (JCR) gives impact factors both with and without self-citations. In the document here, I have provided both figures and then re-ranked the journals using the impact factor without self-citations. The ranking now looks quite different. Sixteen journals drop by 10 or more spots because self-citations account for a larger share of their citations than they do for other journals. Note that some of the journals in this list are aimed at practicing managers and tend to use few or no references; purely academic journals, which rely heavily on references, would be hurt more by such a drop. While 16 journals lose positions, 17 journals gain by 10 or more spots.
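The re-ranking exercise is easy to reproduce. Below is a minimal sketch in Python using made-up figures (the journal names and impact factor values are hypothetical, for illustration only; the real numbers come from the JCR tables):

```python
# Hypothetical impact factors, with and without journal self-citations.
# Real values would come from the Thomson Reuters JCR tables.
journals = {
    "Journal A": {"impact_factor": 6.2, "if_without_self": 5.9},
    "Journal B": {"impact_factor": 5.8, "if_without_self": 4.1},
    "Journal C": {"impact_factor": 4.9, "if_without_self": 4.7},
}

def rank(metric):
    """Return journal names ordered from highest to lowest on the metric."""
    return sorted(journals, key=lambda j: journals[j][metric], reverse=True)

with_self = rank("impact_factor")
without_self = rank("if_without_self")

# Positive shift = the journal gains spots once self-citations are excluded;
# negative shift = it loses spots.
for name in journals:
    shift = with_self.index(name) - without_self.index(name)
    print(f"{name}: {shift:+d}")
```

In this toy data, Journal B relies heavily on self-citations, so it drops a spot once they are excluded, while Journal C gains one.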

While these gains and losses are disconcerting, the two rankings capture different things. The first, and most commonly used, is based on the simple impact factor, which includes self-citations; it captures the impact a journal has had on the field, including on itself. The second, based on the impact factor without self-citations, captures the impact a journal has had on others in the field, excluding itself. Which rank to use depends on what one wants to value.

Beyond the issue of who gains and who loses, there is another important issue: the prevalence of self-citations. On average, nearly 22% of all citations are to articles published in the journal itself. High self-citation rates might reflect an inward focus and could thus impede learning and knowledge exchange. There could also be a similar impediment at the level of the “category” of journals, such that management researchers draw from each other within the management category but not from other categories, such as economics, psychology, and sociology. Such an inward-looking bias could hamper the impact management researchers make on broader knowledge beyond their field. These are deeper questions that management researchers need to consider as we probe journal rankings in more detail and from various angles.
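The 22% figure above is simply the share of a journal’s citations that come from its own articles, averaged across journals. A small sketch with hypothetical counts (the journal names and citation numbers are invented; the JCR reports the real figures):

```python
# Hypothetical citation counts for illustration; JCR reports the real figures.
# "total" = all citations a journal received; "self" = those coming from
# articles published in the journal itself.
citations = {
    "Journal A": {"total": 1500, "self": 330},
    "Journal B": {"total": 900,  "self": 180},
    "Journal C": {"total": 400,  "self": 96},
}

shares = {j: c["self"] / c["total"] for j, c in citations.items()}
average = sum(shares.values()) / len(shares)
print(f"Average self-citation share: {average:.1%}")
```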
