There’s one aspect of Tyler Cowen’s much-discussed The Great Stagnation that hasn’t been discussed much: the role of statistics. Cowen thinks that part of the difficulty with the economy these days is that the three sectors expanding most quickly (government, health care, and education) are the ones in which progress, or the lack of it, is hardest to count.
Or, to put it more precisely: we can count many of the results, but we can’t count all of them, and we can’t necessarily count the process behind them. With education, it’s easy enough to tell that something is going wrong if high school and college graduation rates are falling and test scores are stagnating. But what, exactly, is going wrong? It could be anything from bad students to bad teachers to bad parents to a bad economy; what’s the causation? The same might be said of health care: are substandard results caused by substandard care itself or by, say, rising obesity rates? It’s a familiar problem across the social sciences, but one that has become more and more pressing.
The old saw is that to a hammer everything looks like a nail; to scientists armed with brute-force computing power, everything looks like a statistical problem. We seem to have the capability to count and crunch everything, so why not try?
Problems crop up if you take that attitude, and underlying them is a problem of philosophy: it’s nice to be able to count, but numbers have no more authority than language (numbers are a language), and we’re not accustomed to thinking that way, so we deceive ourselves into ignoring the crevices and corners hidden in a seemingly straightforward numeric assertion. The bigger problem, for our brute-force scientists, is that we seem to have been so successful with numbers in various fields: the hard sciences, computer science, sports, and finance.
I don’t know enough about the first two to comment further, but the latter two are interesting. Sports resist being completely captured by statistics, though baseball comes closest to being solved: people have been working on it longest, and the structure of the game lends itself to statistical attack. Basketball and football are more difficult. They’re team games in which the mere presence of a teammate affects the play, and games in which what seems good can actually be bad. (Selfishness, for example, shapes whether a basketball statistic is really good or bad.) Still, in basketball at least, if you’re ignoring advanced statistics, you’re missing out on part of your understanding of the game.
However difficult sports are for the statistical mind, the rest of real life is more difficult still, finance very much included, though high finance has (obviously) taken to the practice with great enthusiasm. Sports have rules and a confined time frame; life, and people in general, spill messily outside the lines. Nevertheless, we like to talk ourselves into our tools, and statistics and models of various kinds played at least a supporting role in the financial crisis.
I’d suggest, then, that our statistical understanding is far weaker than we think it is, in every field. That’s a particular problem for medicine, where debates over the use and misuse of statistics in studies have formed a fascinating subtext to the debate over how to achieve the results we want in health care. We know, for example, that some nontrivial portion of drug studies are never published in journals; more generally, we know that studies confirming a result are more likely to be published than studies disconfirming one. These biases should be enough to make us doubt the process in significant ways. Our statistics are good enough to tell us that the results are bad; they aren’t good enough to tell us where the process needs improvement.
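The publication-bias point can be made concrete with a toy simulation (a minimal sketch with made-up parameters, not a model of any real literature): if only statistically significant studies get published, the average published estimate systematically overstates the true effect, even when every individual study is honest.

```python
import math
import random

random.seed(1)

TRUE_EFFECT = 0.2  # hypothetical true effect size, in standard-deviation units
N_OBS = 20         # observations per study (deliberately underpowered)

def run_study():
    """Simulate one honest study: estimate the effect and test significance."""
    xs = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_OBS)]
    est = sum(xs) / N_OBS
    se = 1.0 / math.sqrt(N_OBS)          # standard error, assuming known sigma=1
    significant = abs(est / se) > 1.96   # two-sided "p < .05"
    return est, significant

results = [run_study() for _ in range(5000)]
all_effects = [est for est, _ in results]
published = [est for est, sig in results if sig]  # journals keep only these

mean_all = sum(all_effects) / len(all_effects)
mean_pub = sum(published) / len(published)
print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"mean of all studies:       {mean_all:.2f}")
print(f"mean of published studies: {mean_pub:.2f}")
```

With these made-up numbers, the full set of studies averages out near the true effect, while the published subset typically comes out at more than double it: the filter, not any individual study, is what distorts the record.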
Part of the challenge right now is figuring out what is and isn’t working, and then reconciling the promise of statistics with their reality.