Guidelines for evaluators

This page describes The Unjournal's evaluation guidelines, considering our priorities and criteria, the metrics we ask for, and how these are considered.
Please see For prospective evaluators for an overview of the evaluation process, as well as details on compensation, public recognition, and more.

What we would like you to do

  1. Write an evaluation of the target paper or project, similar to a standard, high-quality referee report.
  2. Give quantitative metrics and predictions as described below.
  3. Answer a short questionnaire about your background and our processes.

Writing the evaluation (aka 'the review')

In writing your evaluation and providing ratings, please consider the following:

The Unjournal's criteria

Broadly, the review should be similar to a report an academic would write for a traditional high-prestige journal (e.g., see some 'conventional guidelines' here). Specifically, we'd like you to focus on our priorities:
  1. Advancing our knowledge and practice
  2. Justification, reasonableness, validity, and robustness of methods
  3. Logic and communication
  4. Open, communicative, replicable science
See our guidelines below for more details on each of these. We are not asking you to structure your review according to these metrics, just to pay extra attention to them.
Specific requests for focus or feedback
Please pay attention to anything our managers and editors specifically asked you to focus on. We may ask you to focus on specific areas of expertise. We may also forward specific feedback requests from authors.
The evaluation will be made public
Unless you were advised otherwise, this evaluation, including the review and quantitative metrics, will be given a DOI and, hopefully, will enter the public research conversation. Authors will be given two weeks to respond to reviews before the evaluations, ratings, and responses are made public. You can choose whether you want to be identified publicly as an author of the evaluation.
If you have questions about the authors’ work, you can ask them anonymously: we will facilitate this.
We want you to evaluate the most recent/relevant version of the paper/project that you can access. If you see a more recent version than the one we shared with you, please let us know.
Publishing and signing reviews: considerations and exceptions
We may give early-career researchers the right to veto the publication of very negative reviews or to embargo the release of these reviews for a defined period. We will inform you in advance if this will be the case for your evaluation.
You can reserve some "sensitive" content in your report to be shared with only The Unjournal management or only the authors, but we hope to keep this limited.

Target audiences

We designed this process to balance three considerations with three target audiences. Please consider each of these:
  1. Crafting evaluations and ratings that help researchers and policymakers judge when and how to rely on this research. For Research Users.
  2. Ensuring these evaluations of the papers are comparable to current journal tier metrics, to enable them to be used to determine career advancement and research funding. For Departments, Research Managers, and Funders.
  3. Providing constructive feedback to Authors.
We discuss this, and how it relates to our impact and "theory of change", here.
"But isn't The Unjournal mainly just about feedback to authors"?
We accept that in the near-term an Unjournal evaluation may not be seen to have substantial career value.
Furthermore, the work we consider may tend to be at an earlier stage: authors may submit work to us, thinking of this as a "pre-journal" step. The papers we select (e.g., from NBER) may also have been posted long before the authors planned to submit them to journals.
This may make the 'feedback for authors' and 'assessment for research users' aspects more important, relative to traditional journals' role. However, in the medium-term, a positive Unjournal evaluation should gain credibility and career value. This should make our evaluations an "endpoint" for a research paper.

Quantitative metrics

We ask for a set of nine quantitative metrics. For each metric, we ask for a score and a 90% credible interval. We describe these in detail below.

Percentile rankings

For some questions, we ask for a percentile ranking from 0-100%. This represents "what proportion of papers in the reference group are worse than this paper, by this criterion". A score of 100% means this is essentially the best paper in the reference group. 0% is the worst paper. A score of 50% means this is the median paper; i.e., half of all papers in the reference group do this better, and half do this worse, and so on.
Here, the population of papers should be all serious research in the same area that you have encountered in the last three years.
"Serious" research? Academic research?
Here, we are mainly considering research done by professional researchers with high levels of training, experience, and familiarity with recent practice, who have time and resources to devote months or years to each such research project or paper. These will typically be written as 'working papers' and presented at academic seminars, before being submitted to standard academic journals. Although no credential is required, this typically includes people with PhD degrees (or upper-level PhD students). Most of this sort of research is done by full-time academics (professors, post-docs, academic staff, etc.) with a substantial research remit, as well as research staff at think tanks and research institutions (but there may be important exceptions).
What counts as the "same area"?
This is a judgment call. Here are some criteria to consider: first, does the work come from the same academic field and research subfield, and does it address questions that might be addressed using similar methods? Secondly, does it deal with the same substantive research question, or a closely related one? If the research you are evaluating is in a very niche topic, the comparison reference group should be expanded to consider work in other areas.
"Research that you have encountered"
We are aiming for comparability across evaluators. If you suspect that you are particularly exposed to higher-quality work in this category, compared to other likely evaluators, you may want to adjust your reference group downwards. (And of course vice-versa, if you suspect you are particularly exposed to lower-quality work.)
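As an illustrative sketch (the function, scores, and reference group below are invented for illustration; they are not part of The Unjournal's tooling), the percentile-ranking definition above amounts to:

```python
# Hypothetical illustration of the percentile-ranking definition:
# "what proportion of papers in the reference group are worse than
# this paper, by this criterion". All numbers are made up.
def percentile_rank(score, reference_scores):
    """Share of reference papers that this paper beats, as 0-100%."""
    worse = sum(1 for s in reference_scores if s < score)
    return 100.0 * worse / len(reference_scores)

# Imagined quality scores for five comparable papers in the same area
reference = [40, 55, 60, 70, 85]
print(percentile_rank(72, reference))  # 80.0: better than 4 of the 5
```

A paper scoring above all reference papers would receive 100%; the median paper receives 50%.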

Midpoint rating and credible intervals

For each metric, we ask you to provide a 90% credible interval as a measure of your uncertainty. Our interface provides slider bars for expressing your chosen intervals.
See below for more guidance on credible intervals.
The table below summarizes the percentile rankings. Each quantitative metric is on a 0-100% scale:
  • Overall assessment
  • Advancing our knowledge and practice
  • Methods: Justification, reasonableness, validity, robustness
  • Logic and communication
  • Open, collaborative, replicable science
  • Real-world relevance
  • Relevance to global priorities

Overall assessment

Percentile ranking (0-100%)
Judge the quality of the research heuristically. Consider all aspects of quality, credibility, importance to knowledge production, and importance to practice.

Advancing our knowledge and practice

Percentile ranking (0-100%)
To what extent does the project contribute to the field or to practice, particularly in ways that are directly or indirectly relevant to global priorities and impactful interventions?
Less weight on "originality and cleverness"
Originality and cleverness should be weighted less heavily than at a typical journal, because The Unjournal focuses on impact. Papers that apply existing techniques and frameworks more rigorously than previous work, or apply them to new areas in ways that provide practical insights for GP (global priorities) and interventions, should be highly valued. More weight should be placed on 'contribution to GP' than on 'contribution to the academic field'.
Do the paper's insights inform our beliefs about important parameters and about the effectiveness of interventions? We do not require surprising results; sound and well-presented null results can be valuable.
Does the project add useful value to other impactful research?

Methods: Justification, reasonableness, validity, robustness

Percentile ranking (0-100%)
Are the methods used well-justified and explained; are they a reasonable approach to answering the question(s) in this context? Are the underlying assumptions reasonable? Are all of the given results justified in the discussion of methods?
Are the results and methods likely to be robust to reasonable changes in the underlying assumptions? Does the author demonstrate this?
Avoiding bias and questionable research practices (QRP): Did the authors take steps to reduce bias from opportunistic reporting and QRP? For example, did they do a strong pre-registration and pre-analysis plan, incorporate multiple hypothesis testing corrections, and report flexible specifications?

Logic and communication

Percentile ranking (0-100%)
Are the goals and questions of the paper clearly expressed? Are concepts clearly defined and referenced?
Is the reasoning "transparent"? (See, e.g., Open Philanthropy's guide on reasoning transparency.) Are assumptions made explicit? Are all logical steps clear and correct? Does the writing make the argument easy to follow?
Are stated conclusions consistent with the evidence (or theoretical proofs) presented?
Are the data and/or analysis presented relevant to the arguments made? Are the tables, graphs, and diagrams easy to understand in the context of the narrative (e.g., no major errors in labeling)?

Open, collaborative, replicable science

Percentile ranking (0-100%)
This covers several considerations:

Replicability, reproducibility, data integrity

Would another researcher be able to perform the same analysis and get the same results? Are the methods explained clearly and in enough detail to enable easy and credible replication? For example, are all analyses and statistical tests explained, and is code provided?
Is the source of the data clear?
Is the necessary data made as widely available as reasonably possible (if applicable)? Ideally, the cleaned data should also be clearly labeled and explained/legible.
Optional: Could other researchers reconstruct the output from the shared code and data?
Note that evaluators are not required to run or evaluate the code; this is at your discretion. However, having a quick look at some of the elements could be helpful. Ideally, the author should give code that allows easy, full replication; for example, a single R script that runs and creates everything, starting from the original data source, and including data cleaning files (even better if 'containerized/dockerized' to be platform-independent). This would make it fairly easy for an evaluator to check. For example, see this taxonomy of "levels of computational reproducibility."
Consistency
Do the numbers in the paper and/or code output make sense? Are they internally consistent throughout the paper?
Useful building blocks
Do the authors provide tools, resources, data, and outputs that might enable or enhance future work and meta-analysis?

Real-world relevance

Percentile ranking (0-100%)
Are the paper's assumptions and setup realistic and relevant to the real world? Does the paper consider the real-world relevance of the arguments and results presented, perhaps engaging with policy and implementation questions?
Do the authors communicate their work in ways policymakers and decision-makers are likely to understand without being misleading and oversimplifying?
Do the authors present practical impact quantifications, such as cost-effectiveness analyses? Do they report results enabling such analyses?

Relevance to global priorities

Percentile ranking (0-100%)
Could the paper's topic and approach potentially help inform global priorities, cause prioritization, and high-impact interventions?

Journal ranking tiers

To help universities and policymakers make sense of our evaluations, we want to benchmark them against how research is currently judged. So, we would like you to assess the paper in terms of journal rankings. We ask for two assessments:
  1. a normative judgment about 'how well the research should publish';
  2. a prediction about where the research will be published.
Journal ranking tiers are on a 0-5 scale, as follows:
  • 0/5: Marginally respectable/Little to no value; not publishable in any journal with scrutiny, nor in a credible working-paper (WP) series; not likely to be cited by credible researchers
  • 1/5: OK/Somewhat valuable journal
  • 2/5: Marginal B-journal/Decent field journal
  • 3/5: Top B-journal/Strong field journal
  • 4/5: Marginal A-Journal/Top field journal
  • 5/5: A-journal/Top journal
We give some example journal rankings here, based on SJR and ABS ratings.
We encourage you to consider a non-integer score, e.g. 4.6 or 2.2.
As before, we ask for a 90% credible interval.
Journal ranking tiers (each on a 0.0-5.0 scale, with a 90% CI: lower, upper):
  • What journal ranking tier should this work be published in?
  • What journal ranking tier will this work be published in?
PubPub note: as of 8 Feb 2024, the PubPub form does not allow non-integer responses. Until this is fixed, please provide these (potentially) non-integer 'continuous' journal-metric predictions/ratings, and the CIs for these, in the comment boxes below the sliders.

What journal ranking tier should this work be published in?

Journal ranking tier (0.0-5.0)
Assess this paper on the journal ranking scale described above, considering only its merit, with particular attention to the category metrics we discussed above.
Equivalently, where would this paper be published if:
  1. the journal process were fair, unbiased, and free of noise, and status, social connections, and lobbying to get the paper published didn't matter;
  2. journals assessed research according to the category metrics we discussed above.

What journal ranking tier will this work be published in?

Journal ranking tier (0.0-5.0)
What if this work has already been peer reviewed and published?
If this work has already been published, and you know where, please report the prediction you would have given absent that knowledge.

The credible intervals: expressing uncertainty

What are we looking for and why?

We want policymakers, researchers, funders, and managers to be able to use The Unjournal's evaluations to update their beliefs and make better decisions. To do this well, they need to weigh multiple evaluations against each other and other sources of information. Evaluators may feel confident in their rating on a particular category, but less confident in another area. How much weight should they give to each? In this context, it is important to quantify the uncertainty.
Why are you asking about "confidence" in these metrics?
We would like you to state your "credible intervals." Loosely speaking, we hope to capture a sense of how sure you are about your ratings. The credible intervals will help readers know how much weight to put on them. They can also be used for meta-science and meta-analysis.
How do I come up with these intervals?
You may understand the concepts of uncertainty and credible intervals, but you might be unfamiliar with applying them in a situation like this one.
You may have a certain best guess for the "Methods..." criterion. Still, even an expert can never be certain. E.g., you may misunderstand some aspect of the paper, there may be a method you are not familiar with, etc.
Your uncertainty over this could be described by some distribution, representing your beliefs about the true value of this criterion. Your "best guess" should be the central mass point of this distribution.
You are also asked to give a 90% credible interval. Consider this as the smallest interval that you believe is 90% likely to contain the true value.
For some questions, the "true value" refers to something objective, e.g. will this work be published in a top-ranked journal? In other cases, like the percentile rankings, the true value means "if you had complete evidence, knowledge, and wisdom, what value would you choose?"
For more information on credible intervals, this Wikipedia entry may be helpful.
If you are "well calibrated", your 90% credible intervals should contain the true value 90% of the time. To improve, first assess your current ability, then practice estimating your confidence in results. This web app will help you get practice at calibrating your judgments.
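As a hypothetical illustration of calibration (the intervals and "true values" below are invented), you can check how often your stated 90% intervals actually contain the truth; a hit rate well below 90% suggests overconfidence:

```python
# Hypothetical sketch of a calibration check for 90% credible intervals.
# All intervals and true values below are invented for illustration.
def coverage(intervals, true_values):
    """Fraction of true values falling inside the stated intervals."""
    hits = sum(lo <= v <= hi for (lo, hi), v in zip(intervals, true_values))
    return hits / len(true_values)

stated = [(0.2, 0.6), (0.1, 0.9), (0.5, 0.8), (0.3, 0.7)]
truth = [0.5, 0.4, 0.9, 0.6]
print(coverage(stated, truth))  # 0.75: below the 0.9 target -> overconfident
```

With many such judgments, a well-calibrated evaluator's coverage should approach 0.9; intervals that are too narrow produce lower coverage, intervals that are too wide produce higher coverage but convey less information.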

Survey questions

Lastly, we ask evaluators about their background, and for feedback about the process.
Survey questions for evaluators: details
For the two questions below, we will publish your responses unless you specifically ask for them to be kept anonymous.
  1. How long have you been in this field?
  2. How many proposals and papers have you evaluated? (For journals, grants, and other peer review.)
Answers to the questions below will not be made public:
  1. How would you rate this template and process?
  2. Do you have any suggestions or questions about this process or The Unjournal? (We will try to respond to your suggestions and incorporate them into our practice.) [Open response]
  3. Would you be willing to consider evaluating a revised version of this project?

Other guidelines and notes

Note on the evaluation platform (13 Feb 2024)
We are moving to a hosted form/interface in PubPub. That form is still somewhat of a work in progress and may require some further guidance; we try to provide this below, but please contact us with any questions. If you prefer, you can also submit your response in a Google Doc and share it back with us. Click here to make a new copy of that directly.
Length/time spent: This is up to you. We welcome detail, elaboration, and technical discussion.
Length and time: possible benchmarks
The Econometric Society recommends a 2–3 page referee report; Berk et al. suggest this is relatively short, but confirm that brevity is desirable. In a recent survey (Charness et al., 2022), economists report spending (median and mean) about one day per report, with substantial shares reporting "half a day" and "two days." We expect that reviewers tend to spend more time on papers for high-status journals, and when reviewing work that is closely tied to their own agenda.
Adjustments: We have made some adjustments to this page and to our guidelines and processes; this is particularly relevant for considering earlier evaluations. See Adjustments to metrics and guidelines/previous presentations.
Our data protection statement is linked here.