Re: “The 7 biggest problems facing science, according to 270 scientists” Pt 3

Back in July, Julia Belluz, Brad Plumer, and Brian Resnick wrote up this beautiful thing, which you need to go read right now. And I said at the time that I would reply. And then I got lazy. But this is me doing that, now. In 7 parts, structured as their big beautiful article is.

Rough Draft as of Nov 16.  Will be updating with links and more careful thought on an irregular basis.

Part III: The Replication Issue

The problem here is relatively simple: replicating results is critical to science, but there is almost no incentive to do so.

This wouldn’t be an issue if every piece of research were perfect, and perfectly correct, upon publication.  But science is slow, and messy: replications (when they do happen) tend to find smaller effect sizes, or outright contradict the original research.

There are two consequences.  First, science is slower, as measured by the amount of time it takes to fully and firmly establish a fact — we can probably trust that junk science gets weeded out over time, but with low rates of replication, that can take a while.  Second, there can be real harm in having junk science floating around.  While we wait for replications that can take years, or even decades, the world keeps happening.  Policy gets made, individuals and companies make decisions, and to the extent that those choices are influenced by incorrect science, the world is actively harmed.

Underlying Causes

The underlying causes are threefold, and look a lot like the incentives problem from before:

  1. Tenure committees care more about novel research.  There’s just not enough professional reward for publishing replication studies, unless you manage to contradict previous results.
  2. Journals function the same way, roughly.
  3. Replication can be hard!  Not all data is publicly available, not all methods are transparent, and code sharing is still not always standardized or required (although there are good efforts to move in that direction).

I’d add to this list another culprit, as before — funders.  Just like everyone else (tenure committees, journals), funders want to fund new research.  It’s just not sexy to fund replication studies — boards don’t want to do it, program officers don’t want to do it, and public funding agencies would have a hard time defending it from detractors in gov’t and the public.


The authors cover the “replication is hard” problem pretty well, and honestly I’m least worried about that.  It’s obvious to me (I still do work as an RA) that both norms and institutional requirements are shifting in the direction of data/code disclosure.

The more pernicious problem has to do with incentives.  Brian Nosek and the Center for Open Science are doing a good job offering some guidelines, but I don’t think it’s enough.

Let’s think about where unilateral action might work, or might not (hint: look to the journals)…

Tenure Committees: My intuition is that unilateral movement here doesn’t work.  Suppose the tenure committee at UChicago Economics decides to upweight replication work, or even throws in a requirement to do X replication studies before being considered for tenure.  If the whole scientific (economics, in this case) community doesn’t similarly value replication, the move is probably a net negative for the school and its faculty.  Faculty have a harder time moving to other schools, since they have fewer A-level journal articles (having spent some of that time on replication instead), and they garner less respect from their peers for the same reason.  The school also has a harder time attracting funding (since funders care less about replication).  You can even play out the game to where UChicago becomes less attractive to top candidates on the market.

Funders: Funders might fare a little better.  Even if A-level journals aren’t publishing replication studies, you should always get some compositional shift toward replication just because that’s where the money is.  I can’t imagine an RFP that just straight up goes unfulfilled, regardless of how unsexy the topic.  Money is money.  Probably your major problem is that junior faculty will never apply for these grants, because it doesn’t help them get tenure — you have to rely on senior researchers doing the work, and they’d rather be leading the field than doing the “grunt work” of replication.

Journals: This might be the best bet for unilateral action.  Suppose Nature committed to publishing 5 replication studies every month.  Is there any doubt, any doubt, that researchers would clamor for those slots?  And it’s an A-level journal, so tenure committees have to care.  We might worry about the money — that researchers want to fill those replication slots desperately, but there’s just no funding for it.  My guess, though, is if a high-profile journal institutes this policy, funders actually end up freeing up some funds for it.  Just personal experience, but funders are typically not adhering to some higher, untouchable principles of what to fund; they kind of just go with what makes sense at the time, and that’s tied up endogenously with plenty of environmental factors (like, e.g., what an A-level journal cares about).

So if I had to bet on any leverage point, it would be the journals.  Someone go talk to a few A-level editors, and see if they’d be up for reserving monthly space for replications.  Idk, maybe I’ll go do it, and I’ll tell you what they say.

