Annotated: “Academic signaling and the post-truth world”

Wherein I annotate things.

Today, responding to (the more fun half of) Noah Smith’s blog post, “Academic signaling and the post-truth world”:

Lots of people are freaking out about the “post-truth world” and the “war on science”. People are blaming Trump, but I think Trump is just a symptom.

For one thing, rising distrust of science long predates the current political climate; conservative rejection of climate science is a decades-old phenomenon. It’s natural for people to want to disbelieve scientific results that would lead to them making less money. And there’s always a tribal element to the arguments over how to use scientific results; conservatives accurately perceive that people who hate capitalism tend to over-emphasize scientific results that imply capitalism is fundamentally destructive.

But I think things are worse now than before. The right’s distrust of science has reached knee-jerk levels. And on the left, more seem willing to embrace things like anti-vax, and to be overly skeptical of scientific results saying GMOs are safe.

I’m choosing to skip over this bit, for many reasons, but mostly because it just wouldn’t be fun for me.

Why is this happening? Well, tribalism has gotten more severe in America, for whatever reason, and tribal reality and cultural cognition are powerful forces. But I also wonder whether a few of science’s wounds might be self-inflicted. The incentives for academic researchers seem like they encourage a large volume of well-publicized spurious results.

The U.S. university system rewards professors who have done prestigious research in the past. That is what gets you tenure. That is what gets you a high salary. That is what gets you the ability to choose what city you want to work in. Exactly why the system rewards this is not quite clear, but it seems likely that some kind of signaling process is involved – profs with prestigious research records bring more prestige to the universities where they work, which helps increase undergrad demand for education there, etc.

So okay, here’s what I want to engage with.  A few things here:

  • Note that the incentives are built to reward things that benefit the university, as more-or-less a business venture, not things that benefit science, its progress, or the social welfare we expect science to produce.
  • Less critical but still pressing: Noah says “in the past,” which raises two sub-issues:
    1. First, we might be concerned that rewarding only past work produces sub-optimal incentives for the future, especially with regard to tenure decisions.
    2. Ideally, though, we want the reward system to look a lot like an ideal Market For Ideas, where we can measure and reward your marginal contribution (like, say, courts sometimes do ex post with licensing on patents).  A one-line formalization of what I mean is just below this list.
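To pin down what rewarding the marginal contribution would even mean, here’s the textbook version in my own notation (this is my gloss, not anything from Noah’s post): if V(S) is the value of the science produced by a set S of researchers, an ideal reward system pays researcher i something proportional to

    \[ \text{reward}_i \;\propto\; V(N) \;-\; V(N \setminus \{i\}) \]

which is roughly what an ex post patent-licensing court gropes toward, and what prestige-counting only loosely approximates.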

Relevantly, we have a nice new paper from Brynjolfsson et al. (I know he’s not first author, but Erik’s is the work I actually know) about using algorithms to help with tenure decisions, that is, to predict who will be a productive researcher post-tenure.
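I haven’t reproduced anything from the actual paper here, and every feature and number below is invented.  This is just a minimal sketch of what “an algorithm for tenure decisions” could look like in practice: fit a model from pre-tenure observables to post-tenure output on historical records, then rank current candidates by the prediction.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Entirely synthetic pre-tenure records: publication count, citation count,
    # and share of work in "top" venues -- stand-ins for whatever a real model uses.
    n = 500
    pubs = rng.poisson(12, n)
    cites = rng.poisson(30, n) * (1 + 0.1 * pubs)
    top_share = rng.uniform(0, 1, n)
    X = np.column_stack([pubs, cites, top_share])

    # Synthetic "post-tenure productivity", with noise, so the example runs end to end.
    y = 0.3 * pubs + 0.02 * cites + 5.0 * top_share + rng.normal(0, 2, n)

    model = LinearRegression().fit(X, y)

    # Rank two hypothetical tenure candidates by predicted post-tenure output.
    candidates = np.array([[15, 400, 0.6],
                           [8, 120, 0.9]])
    print(model.predict(candidates))

Whether the observables that happen to predict well are the things we actually want to reward is, of course, the whole question.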

But for whatever reason, this is the incentive: Do prestigious research. That’s the incentive not just at the top of the distribution, but for every top-200 school throughout the nation. And volume is rewarded too. So what we have is tens of thousands of academics throughout the nation all trying to publish, publish, publish.

Yes, so, a couple more issues!

  • There’s a new Bloom paper on this.
  • Doing prestigious research, and doing it in volume, is also not necessarily what is good for capital-S Science.
  • I really do need to write a very long piece on how Science has done fuck-all with the internet, and why the tenure-journal-conference triad of the 1930s needs to die.

As the U.S. population expands, the number of undergraduates expands. Given roughly constant productivity in teaching, this means that the number of professors must expand. Which means there is an ever-increasing army of people out there trying to find and report interesting results.

This is perhaps the core of the post.  And I actually engaged with this before, with my Bizarro argument for fewer scientists (short version: we might be at a point where the added complexity from another scientist yields negative returns net of their small positive marginal contribution).
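A back-of-the-envelope version of that Bizarro argument, with functional forms that are entirely my own assumptions (concave direct contributions, quadratic coordination costs), not anything from the original piece:

    \[ Y(N) \;=\; a N^{\alpha} \;-\; b N^{2}, \qquad 0 < \alpha < 1 \]

    \[ \frac{dY}{dN} \;=\; a \alpha N^{\alpha - 1} \;-\; 2 b N \;<\; 0
       \quad \text{once} \quad
       N \;>\; \left( \frac{a \alpha}{2 b} \right)^{\frac{1}{2 - \alpha}} \]

That is, past some headcount the marginal scientist’s small positive direct contribution is outweighed by the extra coordination and complexity they impose on everyone else, and total output falls.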

But there’s no guarantee that the supply of interesting results is infinite. In some fields (currently, materials science and neuroscience), there might be plenty to find, but elsewhere (particle physics, monetary theory) the low-hanging fruit might be picked for now. If there are diminishing returns to overall research labor input at any point in time – and history suggests there are – then this means the standards for publishable results must fall, or America will be unable to provide research professors to teach all of its undergrads.

Noah cites Greg Ip’s WSJ article, which I don’t think is good.  I usually like Greg’s work, but this one is really just a recapitulation of the same story the same few guys have been pushing for 5+ years (I helped write a piece of Gordon’s grant; I’ve been seeing this headwinds stuff foreverrrr).  Which points to another problem: the internet still has not figured out a good way to advance intellectual debates (case in point: this very blog).

This might be why we have a replication crisis in psychology (and a quieter replication crisis in medicine, and a replication crisis in empirical economics that no one has even noticed yet). It might be why nutrition science changes its recommendations every few months. It might be a big reason for p-hacking, data mining, and specification search. It might be a reason for the proliferation of untestable theories in high-energy physics, finance, macroeconomics, and elsewhere. And it might be a reason for the flood of banal, jargon-drenched unoriginal work in the humanities.

I wrote at length about this in response to Resnick, Plumer, and Belluz.

Almost every graduate student and assistant professor I talk to complains about the amount of bullshit that gets published and popularized in their field. Part of this is the healthy skepticism of science, and part is youthful idealism coming into conflict with messy reality. But part might just be low standards for publication and popularization.

Now, that’s in addition to the incentive to get research funding. Corporate sponsorship of research can obviously bias results. And competition for increasingly scarce grant money gives scientists every incentive to oversell their results to granting agencies. Popularization of research in the media, including overstatement of results, probably helps a lot with that.

And that was Part II!

I recall John Cochrane once shrugging at bad macro models, saying something like “Well, assistant profs need to publish.” OK, but what’s the impact of that on public trust in science? The public knows that a lot of psych research is B.S. They know not to trust the latest nutrition advice. They know macroeconomics basically doesn’t work at all. They know the effectiveness of many pharmaceuticals has been oversold. These things have little to do with the tribal warfare between liberals and conservatives, but I bet they contribute a bit to the erosion of trust in science.

Of course, the media (including yours truly) plays a part in this. I try to impose some quality filters by checking the methodologies of the papers I report on. I’d say I toss out about 25% of my articles because I think a paper’s methodology was B.S. And even for the ones I report on, I try to mention important caveats and potential methodological weaknesses. But this is an uphill battle. If a thousand unreliable results come my way, I’m going to end up treating a few hundred of them as real.

Skipping this public trust bit.  Well, okay, it matters insofar as the public needs to trust science in order for science to get funded, since science is primarily publicly funded (and should be!  I know the theory).  But unless we think there’s a mismatch between public trust and what public trust should be (that is, as long as the gauge is accurate), we should focus on fixing science for its own sake, and the trust and funding will follow appropriately.

So if America’s professors are really being incentivized to crank out crap, what’s the solution? The obvious move is to decouple research from teaching and limit the number of tenured research professorships nationwide. This is already being done to some extent, as universities rely more on lecturers to teach their classes, but maybe it could be accelerated. Another option is to use MOOCs and other online options to allow one professor to teach many more undergrads.

Well now hold up wait a minute.  I don’t know that that’s the obvious move.  Let’s work through this:

  • “decouple research from teaching”: Now this sounds good to me, but it really depends on the complementarities between the two, right?  The question is whether switching to this system raises productivity as a whole, where the rough model is quality of students + quality of research = university output, and we don’t really know whether decoupling would lower both inputs, raise both, or lower one and raise the other (a toy version of this is sketched just after this list).  I’m sure someone’s looked at this, but I don’t know the literature on it.
  • “limit the number of tenured research professorships nationwide”: Can’t say I follow here.  Is this just an argument like the bizarro one I cited above, where past a certain optimal point the marginal contribution of another prof is actually negative?
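The toy version of the decoupling question from the first bullet, in my own notation (nothing here is from Noah’s post): write university output as a function of teaching quality T and research quality R,

    \[ U \;=\; F(T, R), \qquad \frac{\partial^{2} F}{\partial T \, \partial R} \;\gtrless\; 0 \]

If the cross-partial is positive, teaching and research are complements inside the same person or department, and decoupling sacrifices that spillover; it could lower both inputs.  If it is roughly zero or negative, the specialization gains from letting researchers research and lecturers lecture should dominate.  Which sign actually holds is the empirical question I don’t know the literature on.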

MOOCs are kind of an aside up there, but I think that’s where the real meat is, to be honest.  Once that gets figured out, you’ll probably decouple as a result of market forces anyway.

Many people have bemoaned both of these developments, but by limiting the number of profs, they might help raise standards for what qualifies as a research finding. That won’t fully restore public trust in science – political tribalism is way too powerful a force – but it might help slow its erosion.

Or maybe I’m completely imagining this, and academic papers are no more full of B.S. than they ever were, and it’s all just tribalism and excessive media hype. I’m not sure. But it’s a thought, anyway.

Okay, final thought: how far ex ante can we tell who will be a great researcher?  Annnnd we’re out; that is too big a can of worms for tonight!
