Some Recommendations

This is the heart of the Management Junk Science blog.  I want to recommend several things our research profession can do, primarily by leveraging the internet and the idea of social networking, if not an actual social network.  I don’t think a formal organization is necessary at this point, and I have to confess that, given the dismal track record of the Academy of Management and the American Psychological Association in dealing with the problems of the GASSSPP, I have no confidence that either would be a constructive or helpful conduit through which to work.

The recommendations here are all ones that can be enacted among groups of like-minded scholars, and they need no organization beyond perhaps something like LinkedIn.  In very few cases will they require more than a group e-mail list in Outlook or a free weblog like this one.

1.    My first several recommendations are based on a general observation that, as a body of researchers, those who publish in this field (and those who would) make very poor use of the Internet.  This is understandable given the “bean-count” mentality of most university reward systems, where “research” is synonymous with “peer-reviewed publication in the highest-ranked journal I could hit.”  At the same time, we have an excellent and very receptive Web outlet for research in the Social Science Research Network (SSRN).  Thus, my first recommendation is the creation of an Electronic “File Drawer” (EFD) in cooperation with SSRN.  The EFD could be the repository for the products of the next two recommendations.

2.    Faculty working with doctoral (and many Master’s) students, as well as faculty who choose to do so on their own, should perform replication studies of published research and publish their findings in the EFD or on another website.

I know that in a number of fields, marketing being one of them, it is now common to have authors submit a “replication” along with a new study as a condition for publication.  I’m sorry to say that I do not consider these to be replications.  As used in real science, “replication” implies “independent,” and that is my criterion as well; “repetition” is simply not the same thing.  From the stem-cell scandals in South Korea, to the too-good-to-be-true experiments of Jan Hendrik Schön at Bell Labs, to the hype of “cold fusion,” real science illustrates the need for independent replication (and “cold fusion” may not be done yet, because of continuing dogged replication research).  We all know that research observers are biased in unknown ways; that is how a trained, highly motivated astronomer like Percival Lowell “saw” canals on Mars: he had read Schiaparelli’s 1877 reports of “canali,” and he “knew” they were there.

3.    Similarly, have graduate students (and motivated faculty) reanalyze published studies, paying particular attention to effect sizes and to unsupportable author conclusions based on misinterpretations of statistical significance.  (I cannot believe the extent to which top journals and visiting scholars continue to treat p < .05 as proof that they found something, and to label a result at the .001 level a “highly significant” finding; there is no such thing as highly significant, because the p level neither implies nor expresses any linear scale of strength of findings.)  At the same time, we see widespread adoption of multiple regression as the method of choice, which means that effect sizes are automatically reported as R-squared and delta-R-squared; we nevertheless see these nearly completely ignored, and outcomes discussed in terms of the level of significance!
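To make the point concrete, here is a minimal simulation sketch (in Python, using numpy and statsmodels; the sample size and effect size are made-up numbers, not drawn from any published study).  It fits an ordinary regression in which the predictor’s true effect is deliberately trivial, yet the p-value clears the .001 level easily while R-squared shows there is almost nothing there:

```python
# A small simulation illustrating why a tiny p-value is not evidence of a
# strong effect: with a large enough sample, a predictor that explains almost
# none of the variance still comes in far below p < .001.
# Sketch only; n and the 0.03 coefficient are hypothetical choices.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 50_000                           # large survey-scale sample
x = rng.normal(size=n)
y = 0.03 * x + rng.normal(size=n)    # true effect is deliberately trivial

model = sm.OLS(y, sm.add_constant(x)).fit()

print(f"p-value for x: {model.pvalues[1]:.1e}")   # many orders below .001
print(f"R-squared:     {model.rsquared:.4f}")     # roughly 0.001 -- negligible
```

Running this yields a p-value many orders of magnitude below .001 alongside an R-squared of roughly .001, which is exactly why the significance level cannot be read as the strength of a finding.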

Both recommendations (2) and (3), in my view, particularly the latter, are ideal opportunities for training graduate students in the correct way to analyze data and interpret findings, and it is clear to me that the field generally needs this kind of re-examination of published work.

4.    Given that one of my issues with GASSSPP research is measurement, the EFD could be an excellent place for studies of measures, including independent replications, new measures, and re-examinations of published work.  Since publication is a primary determinant of academic rewards, I suspect there are many good studies of measures sitting in the file drawers of academia in various stages of development, and they need a place where they can be made accessible to others who might find them useful.

5.    This is a bit of a fantasy recommendation: Publish raw databases for others to use and re-examine, with attribution.  Much of the work that goes into any study, as we all know, is simply collecting the data.  Since our journals have a dysfunctional fixation on novel research (when did studies become equivalent to dissertations?), there is limited opportunity for an author to get research yield from any one dataset.  So why not put the data out there where someone else with a different idea for using it might have access, so long as the original researcher gets credit for the creation of the data (and automatically gets at least one citation)?
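As a sketch of what such a “data with attribution” deposit might look like in practice, here is a minimal example (Python, standard library only).  The file names, field names, and metadata layout are all hypothetical; the point is simply that the raw data travel with machine-readable credit to the original researcher:

```python
# Minimal sketch of depositing a raw dataset alongside machine-readable
# attribution, so any re-use can credit (and cite) the original researcher.
# All names, fields, and the metadata schema here are hypothetical.
import csv
import json

# Attribution record that travels with the data.
metadata = {
    "title": "Example survey dataset",      # hypothetical study
    "creator": "J. Researcher",             # original data collector
    "year": 2024,
    "license": "CC-BY-4.0",                 # re-use requires attribution
    "suggested_citation": "Researcher, J. (2024). Example survey dataset.",
}

# The raw observations themselves (toy values for illustration).
rows = [
    {"respondent_id": 1, "score": 4.2},
    {"respondent_id": 2, "score": 3.7},
]

with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["respondent_id", "score"])
    writer.writeheader()
    writer.writerows(rows)

with open("dataset.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```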

What else can we do to help stop and reverse the trend toward junk science that comes with the GASSSPP?  For those who are interested, this list has been extensively enlarged in my 2019 article in The American Statistician.  That list is much more ambitious, and it takes into account the recent work of many reform organizations.