Thursday, November 18, 2010

Management Research: Flukes or Replicable Trends?


A recent academic study is in the news for finding precognition effects: the influence of future events on our present responses. The study is certainly innovative and its results are dramatic. But are they believable? It is striking that in a field such as psychology, the discussion of the study shifted immediately to replication: are the statistical results a fluke, or do they represent a trend?

I wonder why a similar conversation is so rare in the management world. Many researchers know from experience, and statisticians accept, that the statistical relationships reported in journals can change with even a single change in a control variable. For research in the quantitative tradition that claims some sort of generalizability, this means replication of statistical results is necessary before any judgments can be made. Yet the top journals reject replications outright. How sensible is this for developing knowledge in a field? A few have raised such questions (among those I have seen: Don Hambrick in his later articles and conference talks), yet the institutionalized system conveniently ignores this obvious truth. One cannot have any "evidence-based management" without giving top priority to replications.
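To see how fragile a single estimate can be, here is a minimal simulation sketch (my own illustration, not drawn from any particular study; the variable names are hypothetical). A relationship that looks strongly positive reverses sign once one omitted control is added:

```python
# Minimal sketch: one added control variable flips the sign of a "finding".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # background driver, e.g. firm size
x = z + 0.5 * rng.normal(size=n)              # explanatory variable, correlated with z
y = -1.0 * x + 2.0 * z + rng.normal(size=n)   # outcome: x actually has a negative effect

# Model 1: omit the control; x appears strongly *positive*
m1 = sm.OLS(y, sm.add_constant(x)).fit()
# Model 2: add the control; x's true *negative* effect appears
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

print(f"without control: beta_x = {m1.params[1]:+.2f}")  # about +0.60
print(f"with control:    beta_x = {m2.params[1]:+.2f}")  # about -1.00
```

A single published model cannot tell us which of these two "findings" we would have seen; only replication under varied specifications can.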

It is partly because of this that the vast body of "top journal" articles in management, and particularly in strategic management, sits there unreplicated, with no clarity on whether the results are one-off flukes or replicable trends. Knowing which they are would add to knowledge. Flukes are useful; so are trends. But results are of little use when this distinction remains unclear.

Some would counter that meta-analyses do this job. Though helpful, they are not the same thing. Their problems are well known: the original studies being pooled were never designed to be meta-analyzed, and were not intended as replications of one another. You cannot solve the lack of replication by aggregating a large number of studies that were fundamentally not meant to be replications in the first place.
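A small sketch (again my own illustration, with made-up numbers) of why pooling non-replications can mislead: four studies that in fact measured different underlying relationships, when combined with standard inverse-variance weighting, yield a precise-looking average that describes none of them.

```python
# Sketch: fixed-effect pooling of studies that were never replications.
import numpy as np

rng = np.random.default_rng(1)
true_effects = np.array([0.5, 0.5, -0.5, -0.5])  # each study taps a different relationship
se = np.full(4, 0.1)                             # within-study standard errors
estimates = rng.normal(true_effects, se)         # observed effect sizes

w = 1.0 / se**2                                  # inverse-variance weights
pooled = np.sum(w * estimates) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

print(f"pooled effect = {pooled:+.2f} (SE = {pooled_se:.2f})")
# -> roughly +0.00 with a tiny standard error: a confident "null"
#    that matches none of the four studies being summarized.
```

Real meta-analyses do test for such heterogeneity, but a test cannot recover what deliberate replications would have established in the first place.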

It is futile to talk of improving or expanding "managerial implications" sections (implying usefulness for practice) when the basic nature of the research is unclear on the counts above. Glossing over the problem by writing longer implications sections will do little to convince practicing managers.

What do other fields do? Certainly not every methodological approach from another field makes sense in ours, but the basic idea of replication, in whatever form is appropriate for a field, seems a necessity if research in the quantitative tradition is to progress in terms of actual impact.

In a different field, say biomedical engineering, imagine developing a polymer and reporting one test of its characteristics in a single top-journal article. Now imagine that all the scientists who try to replicate the result in various contexts, and who counter or support it, go unpublished simply because their studies are replications. Would anyone have any faith in the characteristics of this polymer, let alone set up a firm to manufacture it? Yet that is what we expect management practice to do. And when many practitioners seek out the "witch doctors" of management, who promise wild results without enough evidence, we shake our heads at their credulity. Yet our own research approaches do little to offer an alternative. For all our rigor and obsession with "top journals", we offer little more than what the "witch doctors" offer.

While writing a case on the innovative biotechnology industry, known for its rigorous clinical trials, I was forced into some lateral thinking. Why shouldn't management research be treated the same way as a drug or medical innovation, and go through similar (perhaps voluntary, but documented) "clinical" trials? Management ideas operate on an even larger plane, and wrong ideas can cause great socio-economic damage.

Imagine asking these four basic questions, modeled on the phases of clinical trials in the drug industry, of any management innovation you read about in the journals or elsewhere that aspires to influence management practice:

# Is the innovation safe? (What are the potential "side effects"? What could this management innovation affect negatively in the organization?)

# Does it work at all? (Can it show any effectiveness in practice?)

# Does it work better than the standard treatment? (Should we really throw out old practices, or are they sufficient?)

# Is the treatment safe over time? (Will it cause longer-term harm? Think of issues such as sustainability, the environment, and society.)

None of these questions receives even token attention in the propagation of management ideas. Shouldn't there be such standards, and such clinical trials, for management innovations?
