Sunday, July 3, 2011

Ranking Knowledge: Can Everything That Counts Be Counted?

By Dr. Suhaib Riaz.
Can we “rank” knowledge? That is the real question underlying frequent debates in academia on the merits and demerits of journal rankings. A recent issue of Organization takes this up for the management field. The observation that most journal rankings are hardly scientific, and yet are somehow readily accepted by scholars, has to be one of the most confounding and perhaps also most revealing commentaries on scholars themselves.

Joel Baum brings up an Albert Einstein quote to highlight the problem: 
“Not everything that can be counted counts and not everything that counts can be counted.”

Baum suggests that the general problem of most social phenomena being subject to non-Gaussian distributions (as opposed to the frequently assumed normal bell-curve distributions) applies to the phenomenon of scholarly publication in management as well: 
“…for each journal, the distribution of citations per year is highly skewed, with articles receiving 0–1 citations the largest group for each journal…suggesting a Power Law distribution of article citations.”

As just one illustration, he notes that: 
“the most highly-cited article in a journal thus receives 10–20 times more citations than the average article in the same journal.”

The implications of these empirical observations are huge and worth thinking about. For example: 
"One implication of the variability in article citedness is that sparsely-cited articles published in high-IF (impact factor) journals will often attract fewer citations than highly-cited articles published in low-IF journals… for example, articles in the top quartile of citations in Organization Studies are as or more frequently cited than the median articles at Journal of Management Studies, Organization Science and Administrative Science Quarterly as well as bottom quartile articles at Academy of Management Journal."

In summary, Baum’s major concern is that: 
“Attaching the same value to each article published in a given journal masks extreme variability in article citedness, and permits the vast majority of articles—and journals themselves—to free-ride on a small number of highly-cited articles, which are principal in determining journal Impact Factors.”
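
To see concretely why a mean-based Impact Factor can misrepresent the typical article, here is a minimal simulation sketch (my own illustration, not Baum’s analysis), assuming Pareto-like citation counts and purely illustrative parameters for two hypothetical journals:

import numpy as np

rng = np.random.default_rng(42)

# Draw per-article citation counts from a heavy-tailed (Pareto/Lomax) distribution,
# so that a small number of articles collect most of the citations.
def simulate_journal(n_articles, tail_exponent, scale):
    return np.floor((rng.pareto(tail_exponent, n_articles) + 1) * scale).astype(int)

high_if = simulate_journal(500, tail_exponent=1.5, scale=2.0)  # hypothetical "high-IF" journal
low_if = simulate_journal(500, tail_exponent=1.5, scale=1.5)   # hypothetical "low-IF" journal

for name, cites in (("high-IF", high_if), ("low-IF", low_if)):
    print(f"{name}: mean (IF-like) = {cites.mean():.1f}, "
          f"median = {np.median(cites):.0f}, max = {cites.max()}")

# Baum-style comparison: a top-quartile article in the "low-IF" journal
# can be cited as often as (or more than) the median article in the "high-IF" one.
print("low-IF 75th percentile:", np.percentile(low_if, 75))
print("high-IF median:", np.median(high_if))

Because a few extreme values drag the mean upward, the “IF-like” average lands well above the median article, and the handful of blockbuster articles carry the journal-level number; that is the free-riding Baum describes.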
  
A different angle on the skewness issue is taken up by Stuart Macdonald and Jacqueline Kam. Their main concern is that “the same few authors are published in the same top journals” through mechanisms that don’t quite lead to the best scholarship. They even go as far as comparing this “gaming” of the publication process to other examples with suboptimal outcomes: 
“There is no shortage of examples of the really rotten becoming the accepted standard of quality. There is VHS, a second rate product that nevertheless came to dominate the market (Martindale, 1995), in large part because VHS camcorders could provide the spontaneity required by the pornography industry.”

They place the blame squarely on academics, particularly on what they call publishing “cartels” that try to “game” the system and have reduced citations to little more than an economic exchange: 
“But spare the tears; the key players in this tragedy are not editors or publishers, universities or government. Heading the dramatis personae are academics themselves. They have allowed this situation to develop; the few have entrenched themselves, but the many have been complicit in the hope that they will profit from knowing the rules of the publishing game and from being unscrupulous in playing it.”
  
Hugh Willmott draws out a very interesting comparison with the Arts:
“Like middle-brow arts events that present few challenges to their audiences, arouse few passions, give little offence and so attract corporate sponsorship, middle-of-the-road scholarship is comfortably accommodated in business schools.”

He also makes a rarely mentioned, but I believe crucial, connection between the pursuit of middle-brow scholarship and the lack of relevance of most management research:
"Middle-brow research is untroubling to executives; it ticks the boxes of funding agencies; and it saves benefactors embarrass­ment. Research that focuses upon narrow and trivial topics presented in a technically sophisticated manner is irritatingly impenetrable to practitioners, but also reassuringly inconsequential for them. 
The irrelevance of such scholarship is tolerated because it provides a veneer of academic respecta­bility while leaving the legitimacy of business unchallenged. If it had anything controversial to say that leaked from esoteric journals into media headlines, the accommodating velvet glove would soon be removed to reveal an iron fist of censorship, with threatening letters directed at Deans and Vice-Chancellors demanding sanctions for the transgressors of self-censorship."

A deeper analysis of the problem will undoubtedly have to dig into the historical roots and trajectory of today’s “top ranked” journals:
"They are products of a scholarly tradition fashioned in North America during the Cold War at a time when academic rigor was conflated with respectability gained from prostration before a Method ascribed to the natural sciences, irrespective of the ontology of the phenomena under investigation."
  
This triangular connection between relevance, philosophy and method is rarely brought up in our field. In particular, relevance rarely enters the discussion on journal rankings. And yet we know that some of the most cited and influential scholars in management who led on relevance (C.K. Prahalad and Henry Mintzberg, say) clearly chose paths that avoided slavish devotion to academic journal rankings. Should there at least be a separate ranking of journals and other publication outlets in management that accounts for impact beyond the in-group of the academic journals themselves? Anyone?
