Opposed to Academic Research?

So does this blog illustrate that I am basically opposed to academic research?  ABSOLUTELY NOT!!  What I am opposed to is the specific body of academic research that relies on interpretations of statistical outcomes adopted from the social sciences in the years since 1959, and expanded since then into a larger mythology.  Collectively these are what I refer to as the Generally Accepted Soft Social Science Publishing Process, or the GASSSPP.  For those not familiar with the history of b-school research, 1959 was the year that both the Ford Foundation (Gordon & Howell) and Carnegie Foundation (Pierson) reports were released, and both were highly critical of the nature of b-school research at that time.  In response, b-schools imported a large number of social scientists, and the methods in which they were trained came with them, of course.  That is where many of our GASSSPP problems began.

But there are other types of academic research that don’t rely on the GASSSPP, and many outcomes from that body of work have been quite valuable.  In the two sections below, I give a number of illustrations of studies and bodies of work that do not fall into the GASSSPP trap and have actually improved our understanding of how things work in organizations and the business world.  Many of these provide both a body of work from which theory can be generated and findings that are directly applicable for practitioners.

Again, the problem that this blog focuses on is the academic research that falls into the GASSSPP category.  Unfortunately, Null Hypothesis Significance Testing (NHST) research is coupled with a nearly universal misinterpretation of results as dependent on the p values we obtain, and that has been further complicated by “creative” but incorrect expansions of the meaning of p.  Together these create what I think should be termed a mythomethodology: the underlying technical calculation of statistics from raw data is almost always correct, but the interpretation of outcomes is based on a mythology that is always incorrect.  The results do not constitute valid science, and there is no way they can.
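To see why the mythology cannot hold, here is a minimal simulation sketch (an illustration added for this page, assuming Python with numpy and scipy available; it is not drawn from any of the studies discussed here).  It holds a practically trivial true effect fixed and simply grows the sample: the p value collapses toward zero while the effect size stays negligible, which is exactly why a small p cannot, by itself, mean that a finding matters.

```python
# Illustrative sketch only: a fixed, practically trivial true effect becomes
# "statistically significant" once the sample is large enough, while the
# effect size (Cohen's d) stays negligible throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.02  # standardized mean difference of 0.02 -- trivial in practice

for n in (50, 500, 5_000, 500_000):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    result = stats.ttest_ind(treated, control)
    # Cohen's d: the effect size stays tiny no matter how small p becomes
    pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
    d = (treated.mean() - control.mean()) / pooled_sd
    print(f"n = {n:>7,}  p = {result.pvalue:.4f}  Cohen's d = {d:.3f}")
```

Nothing about the practical importance of the effect changes as the sample grows; only the asterisk does.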

How We Should Do It

In contrast to the fatally flawed GASSSPP research, there are a number of studies that have been done over the years that illustrate the largely unfulfilled potential of research into management, organizations, and human behavior.  I give several examples of these here, and the full references at the end of this page.

Tversky and Kahneman (1974), Judgment under uncertainty: Heuristics and biases. This article summarizes three heuristics that shape the way humans process information, which are efficient but often lead to systematic error.  It introduced the important phenomenon of “anchoring,” which is now having a well-deserved impact on the study of “behavioral finance” and “behavioral economics,” because it illustrates how poorly humans conform to the assumption of the “rational man” that is so fundamental to these fields.

Tversky and Kahneman (1981), The framing of decisions and the psychology of choice. This paper shows how far much human decision-making deviates from rationality: in contrast to the linear-rational expectation that the subjective value of an outcome tracks its objective measure, the value a decision-maker assigns depends on whether the decision is “framed” as the “prospect of gain” or the “prospect of loss.”

Both Tversky and Kahneman are psychologists, but in their papers you do not see a p-level anywhere, nor is there a single bow to the mighty asterisk.

McKinsey and Co., Management Matters (2005).  This is an excellent study of comparative success in manufacturing, comparing results from 731 companies, none of them McKinsey clients, in four countries (the US, UK, France, and Germany).  The original study was commissioned from Bloom et al. at the London School of Economics (“Management practices across firms and nations,” 2005), who did the usual GASSSPP-style reporting of results, with statistical significance as a major decision criterion.  McKinsey cleaned this up, looked at effect sizes, and created graphical illustrations of the excellent results the academic team had obtained.  The title says it all, and I use this monograph in all of my advanced courses to teach students that management really does matter, and that how well it is done shows up on the bottom line and elsewhere (like employee quality of life).  This is a public document which can be downloaded from McKinsey, and further details on this ongoing project can be found at the LSE Centre for Economic Performance at http://cep.lse.ac.uk/management/.
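For readers who want to see what “looking at effect sizes” amounts to in practice, here is a hedged sketch with made-up numbers (these are not the Bloom et al. or McKinsey data, and the variable names are mine): rather than attaching an asterisk, it reports how large the difference in management-practice scores between two groups of firms is, along with its uncertainty.

```python
# Hypothetical illustration only -- these are NOT the Bloom et al. / McKinsey data.
# The point: report how big the difference is, not whether it clears p < .05.
import numpy as np

rng = np.random.default_rng(1)
# Pretend management-practice scores (1-5 scale) for firms in two countries
country_a = rng.normal(3.3, 0.6, 180)
country_b = rng.normal(3.0, 0.6, 170)

diff = country_a.mean() - country_b.mean()
se = np.sqrt(country_a.var(ddof=1) / len(country_a) +
             country_b.var(ddof=1) / len(country_b))
ci = (diff - 1.96 * se, diff + 1.96 * se)

pooled_sd = np.sqrt((country_a.var(ddof=1) + country_b.var(ddof=1)) / 2)
cohens_d = diff / pooled_sd

print(f"Difference in mean score: {diff:.2f} points (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
print(f"Cohen's d: {cohens_d:.2f} -- the magnitude is the finding, not an asterisk")
```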

Joyce, Nohria, & Roberson, What (really) works (2003).  This is a book which examined different management practices and forms of organization to come up with what the authors refer to as a “4 + 2” formula for business success.  As this term suggests, there are four mandatory things management has to do to succeed over the long term, and two of four others must be added.  If management can keep all six balls in the air over the long run, that capability is strongly related to sustained performance.  The authors claim to have solicited a list of over 200 management practices from management scholars to examine in their study, but the only ones that passed the test of contributing to sustained performance are those in the 4 + 2 group.  (I contacted one of the authors to get the list of 200 practices and never got a reply; this has my “bogusmeters” up off the zero peg a bit, but I’m willing to give the benefit of the doubt until I can’t).  It’s an intriguing piece of work, and not a p level anywhere.  Because it is a book, unfortunately, most b-school bean-counters would not count it as “research,” but it really is, and with the one caveat noted, it’s far better than the overwhelming majority of what appears in our “top” journals.

Keeney, Personal decisions are the leading cause of death (2008).  In this Operations Research article, Ralph Keeney analyzes the leading causes of death in the U.S. population and the role of personal decisions in them, and the title explains exactly what he finds.  This is an excellent illustration of the use of raw data and some insightful analysis, and it draws conclusions which are enormously valuable for theory formation, policy formation, and personal decisions regarding health.  If it can ever overcome being published in an academic journal (which virtually assures that it will be ignored unless it is actively promoted), it may well become one of the more important pieces of academic research of the last decade.  All of this is accomplished without a single regression model (seemingly a requirement to publish anything nowadays) or a p value anywhere.

Social Science Done Well

There are even examples of research following social science methods that did a good job, for various reasons that illustrate the value of not blindly conforming to the GASSSPP mythomethodology.  Here are three of them; readers should note that all three are well-known bodies of work, and one might ask whether the durability of a research program is inversely related to its conformity to the GASSSPP.

Hofstede, the dimensions of culture (many publications).  From his initial studies of the dimensions of culture, conducted while he was an employee of IBM in the late 1960s and early 1970s, to his most recent volumes, Geert Hofstede has developed a model of the pervasive and subtle effects of national and regional cultures on people, and of the impact these influences have on organizations.  His initial work did all the significance testing found in most social science, with p’s and asterisks all over the place.  However, Hofstede paid careful attention to his measures and did numerous follow-up and replication studies.  The GASSSPP does neither.

A similar example is Ed Locke’s research on goal setting (again, many publications).  Like Hofstede, Locke and his colleagues did careful incremental work to investigate their scales and develop valid measures of goal setting and the results that accrue from doing it, something rarely seen in GASSSPP research.  He paid proper attention to statistical significance, but interpreted it correctly and never let it become a substitute for effect.  His work has withstood tests and challenges so many times that whenever academics discuss “evidence-based management” (EBM), it is goal setting that comes up number one on the list of practices that “evidence” supports; numbers two and three are up for grabs, and I’ve never seen a serious list of EBM subjects of any size.  That says a lot about the state of our research 50 years after the Ford and Carnegie Foundation reports.

Pugh, The Aston programme (1998).  Derek Pugh and his colleagues published a series of studies of organization structure that became known as the Aston programme, after the university where he and several others worked at the time.  These are an example of GASSSPP studies that I like, partly because they failed.  The reason they failed is that they did not achieve their objectives, most often stated as creating a comprehensive description of the dimensions of organization structure.  (For a full explanation of the nature of this failure, I strongly recommend Starbuck’s Chapter 25 in Volume III of the collected studies.)  But the studies are a great scientific success, in that they illustrate the need for more careful development and examination of measures, the need for replication studies, and the value of independent re-examination of findings by those who don’t have a vested interest in a particular outcome from a research program.  It is to the enormous credit of Derek Pugh and his colleagues that they truly believe in the spirit of science, have welcomed debate and disagreement, and have really engaged in the kind of work that is “science.”  It has been a long time since I’ve communicated with Derek, and even longer since I’ve seen him, but the Aston group is to be forever commended for its openness in testing its work and publishing the results, supportive or not.  So at the end of the day, I consider this research program to be a huge scientific success even though the outcomes the authors hoped for never really materialized.  If I were designing a doctoral degree program in business, these volumes would be required methodology reading.

There are other studies and lines of research that might be added to this list, but I’ll stop here, with apologies to a number of deserving researchers whose names should appear on this page.  The point has been made, I hope—there is research, including social-science research, which is good and is useful, whether “useful” refers to creation of better models and theories or to the guidance of practice.  I can only encourage this kind of work to proceed apace, because it is the kind of work that can actually make us smarter.  Standard GASSSPP research has the opposite effect.

Good Research is Good

The rapid degradation of business-school research into a body of junk science is unfortunately driven by our collective acceptance of the GASSSPP.  The late Dennis Flanagan, who edited Scientific American for over 30 years, made a very astute observation from his many years of work with all manner of scientific pursuits: “Science is what scientists do.”  While that might sound like it opens the doors to things like the GASSSPP, astrology, alchemy, and other pursuits that share some characteristics with science (for example, astrologers need to know the stars and planets, do calculations based on data about individuals and heavenly bodies, relate these in the form of plots and charts, etc.), we can easily see that there are many things scientists do that are absent from these pursuits.  The GASSSPP simply does not do what scientists do: we do not pay adequate attention to measurement, we do not independently replicate results, we rarely make our raw data available to others, and, in the face of overwhelming evidence that p values are routinely misinterpreted, we do not change editorial practices and requirements so that the size of statistical effects, rather than significance, becomes the criterion of value.

This is even more urgent given that the GASSSPP rot is spreading.  At several international conferences in recent years I have given papers on this issue, and at all of them there was an outpouring of frustration at the extent to which emulation of the U.S. academic journals was becoming the norm for b-school researchers around the globe.  Even researchers who had correct statistical training and could articulate why they should not use the GASSSPP, or why an alternative explanation of results was superior, were required to rewrite their work to conform to its “standards.”  The global spread of American-style b-school education is cause for much pride in our educational system, but the spread of the GASSSPP is cause for much embarrassment.

So in closing, I want to be on record as very pro-research—being anti-GASSSPP is NOT being anti-research.  I’ve done some research in my time, and it’s both challenging and rewarding work to do.  I can readily identify with people who get caught up in it.  The problem we face now is that what we’re caught up in is not science, and can never be.  We need to do it the right way, or not do it.

References for this page

Bloom, Nick; Dorgan, Stephen; Dowdy, John; Van Reenen, John, and Rippin, Tom. Management practices across firms and nations. London: Centre for Economic Performance, London School of Economics; 2005 Jun.

Dowdy, John; Dorgan, Stephen, and Rippin, Tom. Management matters. London: McKinsey & Co; 2005 Jun.

Gordon, Robert A. and Howell, James E. Higher education for business. New York: Columbia University Press; 1959.

Hofstede, Geert. Culture’s consequences: Comparing values, behaviors, institutions, and organizations across nations, second ed. Thousand Oaks, Ca.: Sage; 2001.

—. Culture’s consequences: International differences in work-related values. Beverly Hills, Ca.: Sage; 1980.

Joyce, William; Nohria, Nitin, and Roberson, Bruce. What (really) works: The 4+2 formula for sustained business success. New York: Harper Business; 2003; ISBN: 0-06-051278-4.

Keeney, Ralph L. Personal decisions are the leading cause of death. Operations Research. 2008 Nov-Dec; 56(6):1335-1347.

Latham, Gary P. and Locke, Edwin A. Goal setting—a motivational technique that works. Organizational Dynamics. 1979 Autumn; 8(2):68-80.

Pierson, Frank C. The education of American businessmen: a study of university-college programs in business administration. New York: McGraw-Hill; 1959.

Pugh, Derek. The Aston programme. Aldershot, England: Ashgate; 1998.

Starbuck, William H. The production of knowledge: the challenge of social science research. New York: Oxford; 2006; ISBN: 0-19-928853-3.

Tversky, Amos and Kahneman, Daniel. The framing of decisions and the psychology of choice. Science. 1981 Jan 30; 211(4481):453-458.

Tversky, Amos and Kahneman, Daniel. Judgment under uncertainty: Heuristics and biases. Science. 1974 Sep 27; 185(4157):1124-1131.