Wednesday, July 13, 2011

Management Journal Rankings: Looking Beyond the Impact Factor Ranks

Hari Bapuji

In a recent post on journal rankings, Suhaib Riaz reflects on the ranking of journals and asks whether everything that counts can be counted. Rankings of all types are useful, but they provide just one perspective and are typically based on only one of the dimensions we need to consider. If a different and equally important dimension is brought into the analysis, the rankings change. I would like to illustrate this with the recently published and well-known Thomson Reuters Journal Citation Reports (JCR). JCR itself does not rank journals, but its impact factor data is used by journals to assert their “intellectual superiority”. In this post, I will focus on journal self-citations, using JCR data on journals in the Management category.
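
For readers who have never looked under the hood, the JCR figure at issue here is the two-year impact factor, which for a journal J in year y is essentially:

```latex
\mathrm{IF}_{y}(J) \;=\;
\frac{\text{citations received in year } y \text{ by items published in } J \text{ during } y-1 \text{ and } y-2}
     {\text{citable items published in } J \text{ during } y-1 \text{ and } y-2}
```

The “without self-citations” variant reported by JCR simply removes from the numerator the citations that J receives from its own articles.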

The issue of journal self-citations has recently gained attention because journal editors could influence impact factors by asking authors to include citations to articles published in their own journal. For example, during a recent review process, an editor’s letter said: “You need to provide five additional references from previously published articles in Journal XYZ (the journal where the paper was under review) and cite them in the references.” I am sure many of my fellow researchers face similar situations. In fact, a team of researchers recently began examining this issue. In short, journal self-citations could be used to influence rankings.

Fortunately, the Thomson Reuters Journal Citation Reports (JCR) gives impact factors both with and without self-citations. In the document here, I have listed both figures for each journal and then re-ranked the journals using the impact factor without self-citations. The ranking now looks quite different. Sixteen journals sit 10 or more spots lower under the standard ranking than under the self-citation-free one, because they self-cite less than other journals do. Note that some of the journals in this group are aimed at practicing managers and tend to use few or no references; the purely academic journals in the group, which do rely on references, are arguably hurt more by being under-ranked in this way. While these 16 journals lose out, 17 journals gain 10 or more spots from their self-citations.
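
To make the re-ranking exercise concrete, here is a minimal sketch with invented numbers; the journal names, citation counts, and item counts below are placeholders, not JCR figures. It computes the impact factor with and without self-citations and compares the two rank orders.

```python
# Illustrative sketch only: the journal names, citation counts, and item
# counts below are invented, not JCR data. It mimics the exercise of
# ranking journals by impact factor with and without self-citations.

# (journal, citations received in the two-year window,
#  self-citations among them, citable items published in the window)
journals = [
    ("Journal A", 900, 300, 150),
    ("Journal B", 800, 80, 140),
    ("Journal C", 400, 200, 100),
    ("Journal D", 350, 20, 90),
]

def ranks(scores):
    """Return {journal: rank}, with rank 1 for the highest score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: i + 1 for i, name in enumerate(ordered)}

if_with = {name: cites / items for name, cites, self_c, items in journals}
if_without = {name: (cites - self_c) / items for name, cites, self_c, items in journals}

rank_with, rank_without = ranks(if_with), ranks(if_without)

for name, *_ in journals:
    shift = rank_with[name] - rank_without[name]  # positive: moves up once self-citations are excluded
    print(f"{name}: IF {if_with[name]:.2f} -> {if_without[name]:.2f}, "
          f"rank {rank_with[name]} -> {rank_without[name]} ({shift:+d})")
```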

While these gains and losses are disconcerting, the two rankings capture two different things. The first (and most commonly used) ranking is based on the simple impact factor, which includes self-citations; it captures the impact a journal has had on the field, including on itself. The second, based on the impact factor without self-citations, captures the impact a journal has had on others in the field, excluding itself. Which ranking one uses depends on what one wants to value.

Beyond the question of who gains and who loses, there is another important issue: the prevalence of self-citations. On average, nearly 22% of all citations are to articles published in the journal itself. High self-citation might reflect an inward focus and could thus impede learning and knowledge exchange. There could also be a similar impediment at the level of journal “categories”, such that management researchers draw from one another within the Management category but not from other categories, such as economics, psychology, and sociology. Such an inward-looking bias could hamper the impact management researchers make on broader knowledge beyond their field. These are deeper questions that management researchers need to consider as we probe the issue of journal rankings in more detail from various angles.
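
To put that average in perspective, a back-of-the-envelope relation helps. This reads the 22% as the share of a journal's incoming citations that come from its own pages, and assumes that share also holds within the two-year impact-factor window; neither is guaranteed by the JCR definitions, so treat it as a rough approximation:

```latex
\mathrm{IF}^{\text{no self}} \;=\; \frac{C - S}{N}
\;=\; \Bigl(1 - \tfrac{S}{C}\Bigr)\,\mathrm{IF},
\qquad
\tfrac{S}{C} \approx 0.22 \;\Rightarrow\; \mathrm{IF}^{\text{no self}} \approx 0.78\,\mathrm{IF}
```

Here C is the number of citations counted in the window, S the self-citations among them, and N the number of citable items. Under these assumptions, roughly a fifth of the average management journal's headline impact factor is supplied by the journal itself.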

Sunday, July 3, 2011

Ranking Knowledge: Can Everything That Counts Be Counted?

By Dr. Suhaib Riaz.
Can we “rank” knowledge? That is the real question underlying the frequent debates in academia on the merits and demerits of journal rankings. A recent issue of Organization discusses this for the management field. The observation that most journal rankings are hardly scientific, and yet somehow easily accepted by scholars, has to be one of the most confounding and perhaps also most revealing commentaries on scholars themselves.

Joel Baum brings up an Albert Einstein quote to highlight the problem: 
“Not everything that can be counted counts and not everything that counts can be counted.”

Baum suggests that the general problem of most social phenomena being subject to non-Gaussian distributions (as opposed to the frequent assumptions of normal bell curve distributions) applies to the phenomena of scholarly publications in management as well: 
“…for each journal, the distribution of citations per year is highly skewed, with articles receiving 0–1 citations the largest group for each journal…suggesting a Power Law distribution of article citations.”

As just one illustration, he notes that: 
“the most highly-cited article in a journal thus receives 10–20 times more citations than the average article in the same journal.”

The implications of these empirical observations are huge and worth thinking about. For example: 
"One implication of the variability in article citedness is that sparsely-cited articles published in high-IF (impact factor) journals will often attract fewer citations than highly-cited articles published in low-IF journals… for example, articles in the top quartile of citations in Organization Studies are as or more frequently cited than the median articles at Journal of Management Studies, Organization Science and Administrative Science Quarterly as well as bottom quartile articles at Academy of Management Journal."

In summary, Baum’s major concern is that: 
“Attaching the same value to each article published in a given journal masks extreme variability in article citedness, and permits the vast majority of articles—and journals themselves—to free-ride on a small number of highly-cited articles, which are principal in determining journal Impact Factors.”
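
As a rough way to see why this matters, the following toy simulation (not Baum's data; the sample size and power-law exponent are arbitrary choices) draws article citation counts from a heavy-tailed distribution and compares the journal-level mean, which is what impact-factor logic rewards, with what the typical article actually receives.

```python
# Toy simulation of a heavy-tailed citation distribution (not Baum's data).
# It shows how a journal-level average (the impact-factor logic) is pulled
# up by a handful of highly cited articles while the typical article trails far behind.
import random
import statistics

random.seed(42)

# Draw per-article citation counts from a Pareto-like (power-law) tail.
# The exponent and sample size are arbitrary choices for illustration.
citations = [int(random.paretovariate(1.5)) - 1 for _ in range(500)]

mean = statistics.mean(citations)
median = statistics.median(citations)
top = max(citations)

print(f"mean citations per article:   {mean:.1f}")
print(f"median citations per article: {median}")
print(f"most-cited article:           {top}  (~{top / mean:.0f}x the mean)")
print(f"share of articles at 0-1 citations: "
      f"{sum(c <= 1 for c in citations) / len(citations):.0%}")
```

In a heavy-tailed world the mean says more about a few outlier articles than about the article one is likely to read, which is precisely the free-riding Baum describes.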
  
A different angle on the skewness issue is taken up by Stuart Macdonald and Jacqueline Kam. Their main concern is that “the same few authors are published in the same top journals” through mechanisms that don’t quite lead to the best scholarship. They even go as far as comparing this “gaming” of the publication process to other examples with suboptimal outcomes: 
"There is no shortage of examples of the really rotten becoming the accepted standard of quality. There is VHS, a second rate product that nevertheless came to dominate the market (Martindale, 1995), in large part because VHS camcorders could provide the spontaneity required by the pornography industry."

The blame they place on academics stands out, particularly on what they call publishing “cartels” that try to “game” the system and have turned citations into little more than an economic exchange:
“But spare the tears; the key players in this tragedy are not editors or publishers, universities or government. Heading the dramatis personae are academics themselves. They have allowed this situation to develop; the few have entrenched themselves, but the many have been complicit in the hope that they will profit from knowing the rules of the publishing game and from being unscrupulous in playing it.”
  
Hugh Willmott draws out a very interesting comparison with the Arts:
“Like middle-brow arts events that present few challenges to their audiences, arouse few passions, give little offence and so attract corporate sponsorship, middle-of-the-road scholarship is comfortably accommodated in business schools.”

He also makes a rarely mentioned, but I believe crucial, connection between the pursuit of middle-brow scholarship and the lack of relevance of most management research:
"Middle-brow research is untroubling to executives; it ticks the boxes of funding agencies; and it saves benefactors embarrass­ment. Research that focuses upon narrow and trivial topics presented in a technically sophisticated manner is irritatingly impenetrable to practitioners, but also reassuringly inconsequential for them. 
The irrelevance of such scholarship is tolerated because it provides a veneer of academic respecta­bility while leaving the legitimacy of business unchallenged. If it had anything controversial to say that leaked from esoteric journals into media headlines, the accommodating velvet glove would soon be removed to reveal an iron fist of censorship, with threatening letters directed at Deans and Vice-Chancellors demanding sanctions for the transgressors of self-censorship."

A deeper analysis of the problem will undoubtedly have to dig into the historical roots and trajectory of today’s “top ranked” journals:
"They are products of a scholarly tradition fashioned in North America during the Cold War at a time when academic rigor was conflated with respectability gained from prostration before a Method ascribed to the natural sciences, irrespective of the ontology of the phenomena under investigation."
  
This triangular connection between relevance, philosophy and method is rarely brought up in our field. In particular, relevance rarely enters the discussion on the topic of journal rankings. And yet, we do know that some of the most cited and impactful scholars in management who have led in relevance (say, C.K. Prahalad, Henry Mintzberg) clearly chose paths that avoided slavish devotion to academic journal rankings. Should there at least be a separate ranking of journals/other publication outlets in management to account for an impact beyond the in-group of the academic journals themselves? Anyone?