Proposal: curating robustness replication
We are considering asking evaluators, with compensation, to engage in the process of "robustness replication." This could open up interesting follow-on possibilities as we build our potential collaboration with the Institute for Replication and others in this space.
We might ask evaluators discussion questions like these:
What is the most important, interesting, or relevant substantive claim made by the authors (particularly considering global priorities and potential interventions and responses)?
What statistical test or evidence does this claim depend on, according to the authors?
How confident are you in the substantive claim made?
"Robustness checks": What specific statistical test(s) or piece(s) of evidence would make you substantially more confident in the substantive claim made?
If a robustness replication "passed" these checks, how confident would you then be in the substantive claim? (You can also express this as a continuous function of some statistic rather than as a binary, as in the illustrative sketch below; please explain your approach.)
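To illustrate the "continuous function" option in the last question: this is a hypothetical sketch, not a required format, and the statistic, parameters, and functional form here are assumptions an evaluator would choose and justify for themselves. One possibility is to map a test statistic from the robustness check to a stated credence via a logistic curve.

```latex
% Hypothetical example only: one way an evaluator might express
% confidence in the substantive claim as a continuous function of a
% replication statistic, rather than as a pass/fail judgment.
% t_rep : a test statistic from the robustness check (assumed choice)
% t_0   : the value at which the evaluator's credence is 50%
% k > 0 : how sharply credence rises with stronger evidence
\[
  C(t_{\mathrm{rep}}) = \frac{1}{1 + e^{-k\,(t_{\mathrm{rep}} - t_{0})}}
\]
```

Evaluators could of course pick a different statistic or functional form; the point is simply to state the mapping explicitly rather than reporting only a binary verdict.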
Background:
The Institute for Replication is planning to hire experts to conduct "robustness replications" of work published in top journals in economics and political science. Code and data sharing is now enforced at many or all of these journals and other important outlets. We want to support their efforts and are exploring collaboration possibilities. We are also considering how best to guide potential future robustness replication work.