I was scanning one of the academic journals the other day (it’s on everyone’s “A” list, but I won’t identify it or the author(s) I’m discussing here), and saw yet another example of how badly we misinterpret our results in many studies. This study used the regression modeling approach we see everywhere nowadays, and several *b* coefficients in the multiple regression model were the basis for testing several hypotheses. There was a nice big sample, so statistical significance of just about everything was a slam-dunk, as it always is with a big *N*. What seems to have been completely forgotten is what a *b* (or beta, if standardized) coefficient really is: a slope. It is the ratio of the rise to the run, the grade, the rate of ascent or descent, or whatever expression you want to use, but it’s a slope.

A slope always tells you the same thing, in numerical terms. In regression, a slope tells you how much change in the “dependent” variable is associated with a one-unit change in the independent variable. A *b* of 1.0 plots as a 45-degree line (when both variables are on the same scale), and means that a one-unit change in the independent variable corresponds to a one-unit change in the dependent variable; a *b* of 3.0 would mean three units of dependent change per unit of independent change; and a *b* of zero is a flat line, meaning a unit change in the independent variable produces no change at all in the dependent variable.
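The arithmetic really is just rise over run. A minimal sketch (the slopes here are hypothetical values for illustration, not taken from any study):

```python
def predicted_change(b: float, dx: float = 1.0) -> float:
    """Change in the dependent variable implied by a change of dx in the independent variable."""
    return b * dx

print(predicted_change(1.0))  # 45-degree line: one unit in, one unit out -> 1.0
print(predicted_change(3.0))  # three units of dependent change per unit of x -> 3.0
print(predicted_change(0.0))  # flat line: no change at all -> 0.0
```

Whatever the slope is, that multiplication is the entire substantive content of the coefficient.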

What motivated this page was seeing, for more times than I can recall, a *b* of 0.03 that was statistically significant. Because it was statistically significant, the author(s) claimed their hypothesis about this variable was “supported.” The claim is, quite literally, that a slope that is effectively zero, a flat line, is evidence of a material relationship because it is statistically significant. This one instance is bad enough, but in the last year I have seen a number of “A”-journal articles, and several presentations, where this kind of ridiculous claim is not only made by the researchers but gets a pass from peer reviewers and editors as well.
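The big-*N* mechanics are easy to reproduce. A minimal simulation sketch (hypothetical standardized data, nothing from the study in question) shows how a true slope of 0.03 sails past p < .05 once the sample is large, while explaining essentially none of the variance:

```python
import math
import numpy as np

# Simulate a large sample where the true slope is a nearly flat 0.03
rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = 0.03 * x + rng.normal(size=n)  # true b = 0.03, plus noise

# Ordinary least-squares slope, intercept, and residuals
b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
a = y.mean() - b * x.mean()
resid = y - b * x - a

# Standard error of the slope, t statistic, and a two-sided p-value
# (normal approximation; with n this large, it matches the t distribution)
se = math.sqrt(resid.var(ddof=2) / (n * x.var()))
t = b / se
p = math.erfc(abs(t) / math.sqrt(2))

# Proportion of variance in y the regression actually explains
r2 = 1.0 - resid.var() / y.var()

print(f"b = {b:.4f}, t = {t:.1f}, p = {p:.2e}, R^2 = {r2:.4f}")
```

The p-value comes out vanishingly small, yet the fitted line explains well under one percent of the variance in y: the “significant” slope is, for practical purposes, flat.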

We have apparently become so institutionally blind to effect sizes that we are willing to go on public record as claiming that a flat line is evidence of a relationship between things, when in fact it is evidence of the complete lack of a relationship. That kind of decision making, of necessity, involves more than one flat line.

(No references for this page—I don’t want to publicly embarrass anyone.)