Project submission, selection and prioritization
Submission/evaluation funnel
As we are paying evaluators and have limited funding, we cannot evaluate every paper and project. Papers enter our database through
submission by authors;
our own searches (e.g., searching syllabi, forums, working paper archives, and white papers); and
suggestions and recommendations from other researchers, practitioners, and members of the public. We have posted more detailed instructions for how to suggest research for evaluation.
Our management team rates the suitability of each paper according to the criteria discussed below and in the post linked above.
Our procedures for identification and prioritization
We have followed a few procedures for finding and prioritizing papers and projects. In all cases, we require more than one member of our research-involved team (field specialists, managers, etc.) to support a paper before prioritizing it.
We are building a grounded, systematic procedure with criteria and benchmarks, while also giving managers and field specialists some autonomy in prioritizing key papers and projects. As noted elsewhere, we are considering targets for particular research areas and sources.
See our basic process (as of Dec. 2023) for prioritizing work: Process: prioritizing research
See also (internal discussion):
Airtable: columns for "crucial_research", "confidence", and "discussion", plus the "considering" view
Airtable: see "sources" (public view link here)
Authors' permission: sometimes required
Through October 2022: for the papers or projects at the top of our list, we contacted the authors and asked whether they wanted to engage, only pursuing evaluation if they agreed.
As of November 2022: we also operate a track in which we inform authors of the evaluation but do not request their permission. For this track, we first focused on particularly relevant NBER working papers.
As of July 2023: we have expanded this process to some other sources, with some discretion.
Communicating: "editors'" process
In deciding which papers or projects to send out to paid evaluators, we have considered the issues below. We aim to communicate these considerations for each paper or project to evaluators before they write their evaluations.
Summary: why is it relevant and worth engaging with?
Consider: importance to global priorities, field relevance, open science, authors’ engagement, and data and reasoning transparency. In gauging this relevance, the team may consider the ITN (importance, tractability, neglectedness) framework, but not too rigidly.
Why does it need (more) review? What are some key issues or claims to vet?
What are (some of) the authors’ main claims that are worth carefully evaluating? What aspects of the evidence, argumentation, methods, interpretation, etc., is the team unsure about? What particular data, code, proof, etc., would they like to see vetted? If it has already been peer-reviewed in some way, why do they think more review is needed?
To what extent is there author engagement?
How well has the author engaged with the process? Do they need particular convincing? Do they need help making their engagement with The Unjournal successful?
See What research to target? for further discussion of prioritization, scope, and strategic and sustainability concerns.