(for pilot and beyond)
Our initial focus is quantitative work that informs global priorities (see linked discussion), especially in economics, policy, and social science. We want to see better research leading to better outcomes in the real world (see our 'Theory of Change').
See our earlier discussion in the public call and EA Forum discussion.
To reach these goals, we need to select "the right research" for evaluation. We want to choose papers and projects that are highly relevant, methodologically promising, and that will benefit substantially from our evaluation work. We need to optimize how we select research so that our efforts remain mission-focused and useful. We also want to make our process transparent and fair. To do this, we are building a coherent set of criteria and goals, and a specific approach to guide this process. We explore several dimensions of these criteria below.
Management access only: General discussion of prioritization in Gdoc. Private discussion of specific papers in Airtable and links. We incorporate some of this discussion below.
When considering a piece of research to decide whether to commission it to be evaluated, we can start by looking at its general relevance as well as the value of evaluating and rating it.
Our prioritization of a paper for evaluation should not be seen as an assessment of its quality, nor of its 'vulnerability'. Furthermore, 'the prioritization is not the evaluation': it is less specific and less intensive.
1. Why is it relevant and worth engaging with?
We consider (and prioritize) the importance of the research to global priorities; its relevance to crucial decisions; the attention it is getting and the influence it is having; its direct relevance to the real world; and the potential value of the research for advancing other impactful work. We de-prioritize work that has already been credibly (publicly) evaluated. We also consider the fit of the research with our scope (social science, etc.), and the likelihood that we can commission experts to meaningfully evaluate it. As noted elsewhere, some 'instrumental goals' (e.g., driving change) also play a role in our choices.
Some features we value, and which might raise the probability that we consider a paper or project, include the commitment and contribution to open science, the authors' engagement with our process, and the logic, communication, and transparent reasoning of the work. However, if a prominent research paper is within our scope and seems to have strong potential for impact, we will prioritize it highly, whether or not it has these qualities.
2. Why does it need (more) evaluation, and what are some key issues and claims to vet?
We ask the people who suggest particular research, and experts in the field:
What are (some of) the authors' key/important claims that are worth evaluating?
What aspects of the evidence, argumentation, methods, and interpretation are you unsure about?
What particular data, code, proofs, and arguments would you like to see vetted? If it has already been peer-reviewed in some way, why do you think more review is needed?
Put broadly, we need to consider how this research allows us to achieve our own goals in line with our theory of change, targeting "ultimate outcomes." The research we select and evaluate should meaningfully drive positive change. One way we might see this process: "better research & more informative evaluation" → "better decision-making" → "better outcomes" for humanity and for non-human animals (i.e., the survival and flourishing of life and human civilization and values).
As we weigh research to prioritize for evaluation, we need to balance directly having a positive impact against building our ability to have an impact in the future.
Below, we adapt the importance/neglectedness/tractability (ITN) framework (popular in effective altruism circles) to assess the direct impact of our evaluations.
Importance
What is the direct impact potential of the research?
This is a massive question many have tried to address (see sketches and links below). We respond to uncertainty around this question in several ways, including:
Consulting a range of sources, not only EA-linked sources.
EA and more-or-less-adjacent sources and overviews.
Non-EA sources.
Neglectedness
Where is the current journal system failing GP-relevant work the most... in ways we can address?
Tractability
"Evaluability" of research: Where does the UJ approach yield the most insight or value of information?
Existing expertise: Where do we have field expertise on the UJ team? This will help us commission stronger evaluations.
"Feedback loops": Could this research influence concrete intervention choices? Does it predict near-term outcomes? If so, observing these choices and outcomes and getting feedback on the research and our evaluation can yield strong benefits.
Consideration/discussion: How much should we include research with indirect impact potential (theoretical, methodological, etc.)?
Moreover, we need to consider how the research evaluation might support the sustainability of The Unjournal and the broader project of open evaluation. We may need to strike a balance between work informing the priorities of various audiences, considering:
Relevance to stakeholders and potential supporters
Clear connections to impact; measurability
Support from relevant academic communities
Consideration/discussion: What will drive further interest and funding?
Finally, we consider how our choices will increase the visibility and solidify the credibility of The Unjournal and open evaluations. We consider how our work may help drive positive institutional change. We aim to:
Interest and involve academics, and build the status of the project.
Commission evaluations that will be visibly useful and credible.
"Benchmark" traditional publication outcomes; track our predictiveness and impact.
We are aware of possible pitfalls of some elements of our vision.
We are pursuing a second "high-impact policy and applied research" track for evaluation. This will consider work that is not targeted at academic audiences. This may have direct impact and please SFF funders, but, if not done carefully, this may distract us from changing academic systems, and may cost us status in academia.
A focus on topics perceived as niche (e.g., the economics and game theory of AI governance and AI safety) may bring a similar tradeoff.
On the other hand, perhaps a focus on behavioral and experimental economics would generate lots of academic interest and participants; this could help us benchmark our evaluations, etc.; but this may also be less directly impactful.
We hope we have identified the important considerations (above), but we may be missing key points. We continue to engage in discussion and seek feedback, to hone and improve our processes and approaches.
We present and analyze the specifics surrounding our current evaluation data in our public data presentation.
Below: An earlier template for considering and discussing the relevance of research. This was/is provided both for our own consideration and for sharing (in part?) with evaluators, to give them some guidance. Think of these as bespoke evaluation notes for a
Scoping what other sorts of work are representative inputs to GP-relevant work.
Get a selection of seminal GP publications; look back to see what they are citing and categorize by journal/field/keywords/etc.
Have strong leverage over research "outcomes and rewards."
Increase public visibility and raise public interest.
Bring in supporters and participants.
Achieve substantial output in a reasonable time frame and with reasonable expense.
Maintain goodwill and a justified reputation for being fair and impartial.
Giving managers autonomy and pushing forward quickly may bring the risk of perceived favoritism; a rule-based systematic approach to choosing papers to evaluate might be slower and less interesting for managers. However, it might be seen as fairer (and it might enable better measurement of our impact).
Link to any private hosted comments on the paper/project
As mentioned above, consider factors including importance to global priorities, relevance to the field, the commitment and contribution to open science, the authors' engagement, and the transparency of data and reasoning. You may consider these criteria explicitly, but not too rigidly.
What are (some of) the authors' main important claims that are worth carefully evaluating? What aspects of the evidence, argumentation, methods, interpretation, etc., are you unsure about? What particular data, code, proof, etc., would you like to see vetted? If it has already been peer-reviewed in some way, why do you think more review is needed?
What types of expertise and background would be most appropriate for the evaluation? Who would be interested? Please try to make specific suggestions.
Do they need particular convincing? Do they need help making their engagement with The Unjournal successful?
Research can be "submitted" by authors (here) or "suggested" by others. For a walk-through on suggesting research, see this video example.
There are two main paths for making suggestions: through our survey form or through Airtable.
Anyone can suggest research using our survey form. (Note: if you want to "submit your own research," use the author submission route instead.) Please follow the steps below:
Begin by reviewing our scope and focus areas to get a sense of the research we cover and our priorities. Look for high-quality research that 1) falls within our focus areas and 2) would benefit from (further) evaluation.
When in doubt, we encourage you to suggest the research anyway.
Navigate to The Unjournal's research suggestion form. Most of the fields here are optional. The fields ask for the following information:
Who you are: Let us know who is making the suggestion (you can also choose to stay anonymous).
If you leave your contact information, you will be eligible for financial "bounties" for strong suggestions.
If you are already a member of The Unjournal's team, additional fields will appear for you to link your suggestion to your profile in The Unjournal's database.
Complete all the required fields and submit your suggestion. The Unjournal team will review your submission and consider it for future evaluation. You can reach out to us at contact@unjournal.org with any questions or concerns.
People on our team may find it more useful to suggest research to The Unjournal directly via Airtable. See the internal guide for instructions. (Please request document permission to access this explanation.)
Aside on setting the prioritization ratings: In making your subjective prioritization rating, please consider "What percentile do you think this paper (or project) is relative to the others in our database, in terms of 'relevance for The UJ to evaluate'?" (Note this is a redefinition; we previously considered these as probabilities.) We roughly plan to commission the evaluation of about 1 in 5 papers in the database, the "top 20%" according to these percentiles. Please don't consider the "publication status" or the "author's propensity to engage" in this rating. We will consider those as separate criteria.
Please don't enter only the papers you think are "very relevant"; please enter all research that you have spent any substantial time considering (more than a couple of minutes). If we all do this, our percentile ratings should be approximately evenly (uniformly) spread over the 1-100% range.
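As a rough, purely illustrative sketch of this percentile framing (the numbers, variable names, and function are hypothetical and not part of our official process), a suggester could sanity-check a proposed rating against the ratings already in the database:

```python
# Illustrative only: check that a proposed prioritization rating behaves like a
# percentile relative to the ratings already in the database (hypothetical numbers).

def percentile_rank(existing_ratings, new_rating):
    """Share of existing ratings at or below new_rating, on a 0-100 scale."""
    if not existing_ratings:
        return 50.0  # no reference set yet; default to the middle
    at_or_below = sum(r <= new_rating for r in existing_ratings)
    return 100.0 * at_or_below / len(existing_ratings)

# Hypothetical ratings already in the database; if everyone rates as a percentile,
# these should be spread roughly evenly over 1-100.
database_ratings = [12, 25, 33, 41, 48, 55, 62, 70, 78, 86, 93]

proposed = 80
print(f"A rating of {proposed} sits at about the "
      f"{percentile_rank(database_ratings, proposed):.0f}th percentile of the database.")
# With roughly 1 in 5 papers commissioned, ratings above ~80 suggest 'likely to evaluate'.
```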
Research Label: Provide a short, descriptive label for the research you are suggesting. This helps The Unjournal quickly identify the topic at a glance.
Research Importance: Explain why the research is important, its potential impact, and any specific areas that require thorough evaluation.
Research Link: Include a direct URL to the research paper. The Unjournal prefers research that is publicly hosted, such as in a working paper archive or on a personal website.
Peer Review Status: Inform about the peer review status of the research, whether it's unpublished, published without clear peer review, or published in a peer-reviewed journal.
"Rate the relevance": This represents your best-guess at how relevant this work is for The Unjournal to evaluate, as a percentile relative to other work we are considering.
Research Classification: Choose categories that best describe the research. This helps The Unjournal sort and prioritize suggestions.
Field of Interest: Select the outcome or field of interest that the research addresses, such as global health in low-income countries.
David Reinstein, 28 Mar 2024: I am proposing the following policies and approaches for our "Applied & Policy Stream". We will move forward with these for now on a trial basis, but they may be adjusted. Please offer comments and ask questions in this Google doc, flagging the email contact@unjournal.org.
Much of the most impactful research is not aimed at academic audiences and may never be submitted to academic journals. It is written in formats that are very different from traditional academic outputs, and cannot be easily judged by academics using the same standards. Nonetheless, this work may use technical approaches developed in academia, making it important to gain expert feedback and evaluation.
The Unjournal can help here. However, to avoid confusion, we want to make this clearly distinct from our main agenda, which focuses on impactful research aimed at academic audiences.
Thus we are trialing an "Applied & Policy Stream", which will be clearly labeled as separate from our main stream. This may constitute roughly 10-15% of the work that we cover. Below, we refer to this as the "policy stream" for brevity.
Our considerations for prioritizing this work are generally the same as for our academic stream: is it in the fields that we are focused on, using approaches that enable meaningful evaluation and rating? Is it already having an impact (e.g., influencing grant funding in globally important areas)? Does it have the potential for impact, and if so, is it high-quality enough that we should consider boosting its signal?
We will particularly prioritize policy and applied work that uses technical methods that need evaluation by research experts, often academics.
This could include the strongest work published on the EA Forum, as well as a range of further applied research from EA/GP/LT linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, Faunalytics, etc., as well as EA-adjacent organizations and relevant government white papers.
Ratings/metrics: As in the academic stream, this work will be evaluated for its credibility, usefulness, communication/logic, etc. However, we are not seeking to have this work assessed by the standards of academia in a way that yields a comparison to traditional journal tiers. Evaluators: please ignore these parts of our interface; if you are unsure whether something is relevant, feel free to ask.
Evaluator selection, number, pay: Generally, we want to continue to select academic research experts, or non-academic researchers with strong academic and methodological backgrounds, to do these evaluations. A key purpose of this policy stream is to bring research expertise, particularly from academia, to work that is not normally scrutinized by such experts.
Compensation may be flexible as well; in some cases the work may be more involved than for the academic stream, and in some cases less. As a starting point, we will offer the same compensation as for the academic stream.
Careful flagging and signposting: To preserve the reputation of our academic-stream evaluations, we need to make it clear, wherever people might see this work, that it is not being evaluated by the same standards as the academic stream and doesn't "count" towards those metrics.
The flowchart below focuses on the evaluation part of our process.
(Section updated 1 August 2023)
Submission/selection (multiple routes)
Author (A) submits work (W), creates new submission (submits a URL and DOI), through our platform or informally.
Author (or someone on their behalf) can complete a submission form; this includes a potential "request for embargo" or other special treatment.
Managers and field specialists select work (or the project is submitted independently of authors) and the management team agrees to prioritize it.
Prioritization
Following author submission ...
Manager(s) (M) and Field Specialists (FS) prioritize work for review (see the prioritization discussion above).
M assigns an Evaluation Manager (EM, typically part of our team) to the selected project.
EM invites evaluators (aka "reviewers") and shares the paper to be evaluated along with (optionally) a brief summary of why The Unjournal thinks it's relevant, and what we are asking.
Potential evaluators are given full access to (almost) all information submitted by the author and M, and notified of any embargo or special treatment granted.
EM may make special requests to the evaluator as part of a management policy (e.g., "signed/unsigned evaluation only," short deadlines, extra incentives as part of an agreed policy, etc.).
Evaluator accepts or declines the invitation to review, and if the former, agrees on a deadline (or asks for an extension).
If the evaluator accepts, the EM shares full guidelines/evaluation template and specific suggestions with the evaluator.
Evaluator completes an evaluation form.
Evaluator submits evaluation including numeric ratings and predictions, plus "CIs" (credible/confidence intervals) for these.
Possible addition (future plan): Reviewer asks for minor revisions and corrections; see "How revisions might be folded in..." in the fold below.
EM collates all evaluations/reviews, shares these with Author(s).
The EM must be very careful not to share evaluators' identities at this point.
This includes caution to avoid accidentally-identifying information, especially where
Author(s) read(s) the evaluations and are given two working weeks to submit responses.
If there is an embargo, there is more time to do this, of course.
EM creates evaluation summary and "EM comments."
EM or UJ team publishes each element on our PubPub space as a separate "pub" with a DOI for each (unless embargoed):
Summary and EM comments
With a prominent section for the "ratings data tables"
Authors and evaluators are informed once elements are on PubPub; next steps include promotion, checking bibliometrics, etc.
("Ratings and predictions data" to enter an additional public database.)
Note that we intend to automate and integrate many parts of the process into an editorial-management-like system in PubPub.
In our current (8 Feb 2023 pilot) phase, we have the evaluators consider the paper "as is," frozen at a certain date, with no room for revisions. The authors can, of course, revise the paper on their own and even pursue an updated Unjournal review; we would like to include links to the "permanently updated version" in the Unjournal evaluation space.
After the pilot, we may consider making minor revisions part of the evaluation process. This may add substantial value to the papers and process, especially where evaluators identify straightforward and easily-implementable improvements.
If "minor revisions" are requested:
... the author has four (4) weeks (strict) to make revisions if they want to, submit a new linked manuscript, and also submit their response to the evaluation.
We don't want to replicate the slow and inefficient processes of the traditional system. Essentially, we want evaluators to give a report and rating as the paper stands.
We also want to encourage papers as projects. The authors can improve it, if they like, and resubmit it for a new evaluation.
For either of these cases (1 or 2), authors are asked for permission.
Alternate Direct Evaluation track: "Work enters prestige archive" (NBER, CEPR, and some other cases).
Managers inform and consult the authors but permission is not needed. (Particularly relevant: we confirm with author that we have the latest updated version of the research.)
Following direct evaluation selection...
M or FS may add additional "evaluation suggestions" (see examples here) explaining why it's relevant, what to evaluate, etc., to be shared later with evaluators.
If requested (in either case), M decides whether to grant embargo or other special treatment, notes this, and informs authors.
EM (also, optionally) may add "evaluation suggestions" to share with the evaluators.
Even if evaluators chose to "sign their evaluation," their identity should not be disclosed to authors at this point. However, evaluators are told they can reach out to the authors if they desire.
Evaluations are shared with the authors as a separate doc, set of docs, file, or space; which the evaluators do not have automatic access to. (Going forward, this will be automated.)
It is made clear to authors that their responses will be published (and given a DOI, when possible).
Each evaluation, with summarized ratings at the top
The author response
All of the above are linked in a particular way, with particular settings; see notes
Previous/less emphasized: Sciety Group: curating evaluations and papers
Evaluations and author response are given DOIs and enter the bibliometric record
Future consideration:
"publication tier" of authors' responses as a workaround to encode aggregated evaluation
Hypothes.is annotation of hosted and linked papers and projects (aiming to integrate)
Sharing evaluation data on public Github repo (see data reporting here)
We aim to elicit expert judgment from Unjournal evaluators efficiently and precisely. We aim to communicate this quantitative information concisely and usefully, in ways that will inform policymakers, philanthropists, and future researchers.
In the short run (in our pilot phase), we will attempt to present simple but reasonable aggregations, such as simple averages of midpoints and confidence-interval bounds. However, going forward, we are consulting and incorporating the burgeoning academic literature on "aggregating expert opinion." (See, e.g., Hemming et al, 2017; Hanea et al, 2021; McAndrew et al, 2020; Marcoci et al, 2022.)
We are working on this in our public data presentation (Quarto notebook) here.
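To illustrate the "simple but reasonable aggregations" described above, here is a minimal sketch (with made-up ratings; this is not the code from our Quarto notebook) that averages evaluators' midpoints and 90% interval bounds:

```python
# Minimal sketch of the interim aggregation approach (hypothetical data):
# take unweighted means of the midpoint ratings and of the 90% interval bounds.

ratings = [
    {"midpoint": 72, "lower": 60, "upper": 82},  # evaluator 1
    {"midpoint": 65, "lower": 50, "upper": 80},  # evaluator 2
    {"midpoint": 78, "lower": 70, "upper": 88},  # evaluator 3
]

def simple_aggregate(ratings):
    """Unweighted means of midpoints and lower/upper interval bounds across evaluators."""
    n = len(ratings)
    return {key: sum(r[key] for r in ratings) / n for key in ("midpoint", "lower", "upper")}

print(simple_aggregate(ratings))
# -> roughly {'midpoint': 71.7, 'lower': 60.0, 'upper': 83.3}
```

More sophisticated methods from the expert-aggregation literature cited above could later replace these simple means.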
We are considering...
Syntheses of evaluations and author feedback
Input to prediction markets, replication projects, etc.
Less technical summaries and policy-relevant summaries, e.g., for the EA Forum, Asterisk magazine, or mainstream long-form outlets
This page is a work-in-progress
15 Dec 2023: Our main current process involves
Submitted and (internally/externally) suggested research
Prioritization ratings and discussion by Unjournal field specialists
Feedback from field specialist area teams
A final decision by the management team, guided by the above
See the linked document (also embedded below) for more details of the proposed process.
As we are paying evaluators and have limited funding, we cannot evaluate every paper and project. Papers enter our database through
submission by authors;
our own searches (e.g., searching syllabi, forums, working paper archives, and white papers); and
suggestions from other researchers, practitioners, and members of the public, and recommendations from high-impact research-users. We have posted more detailed instructions for suggesting research.
Our management team rates the suitability of each paper according to the criteria discussed below.
We have followed a few procedures for finding and prioritizing papers and projects. In all cases, we require more than one member of our research-involved team (field specialist, managers, etc.) to support a paper before prioritizing it.
We are building a grounded systematic procedure with criteria and benchmarks. We also aim to give managers and field specialists some autonomy in prioritizing key papers and projects. As noted elsewhere, we are considering targets for particular research areas and sources.
See our basic process (as of Dec. 2023) for prioritizing work:
See also (internal discussion):
Airtable: columns for "crucial_research", "considering" view, "confidence," and "discussion"
Airtable: see "sources"
Through October 2022: For the papers or projects at the top of our list, we contacted the authors and asked if they wanted to engage, only pursuing evaluation if agreed.
As of November 2022, we have a second track where, under certain conditions, we inform authors but do not request permission. For this track, we first focused on particularly relevant working papers.
July 2023: We expanded this process to some other sources, with some discretion.
In deciding which papers or projects to send out to paid evaluators, we have considered the following issues. We aim to communicate the team's answers for each paper or project to evaluators before they write their evaluations.
Consider: importance to global priorities, field relevance, open science, authors' engagement, and data and reasoning transparency. In gauging this relevance, the team may consider these criteria explicitly, but not too rigidly.
What are (some of) the authors' main claims that are worth carefully evaluating? What aspects of the evidence, argumentation, methods, interpretation, etc., is the team unsure about? What particular data, code, proof, etc., would they like to see vetted? If it has already been peer-reviewed in some way, why do they think more review is needed?
How well has the author engaged with the process? Do they need particular convincing? Do they need help making their engagement with The Unjournal successful?
See our fuller discussion of prioritization, scope, and strategic and sustainability concerns.
As noted in Process: prioritizing research, we ask people who suggest research to provide a numerical 0-100 rating.
We also ask people within our team to act as 'assessors', giving second and third opinions on this. This 'prioritization rating' is one of the criteria we will use to determine whether to commission research to be evaluated (along with author engagement, publication status, our capacity and expertise, etc.). Again, see the previous page for the current process.
We are working on a set of notes on this, fleshing this out and giving specific examples. At the moment this is available to members of our team only (ask for access to "Guidelines for prioritization ratings (internal)"). We aim to share a version of this publicly once it converges, and once we can get rid of arbitrary sensitive examples.
I. This is not the evaluation itself. It is not an evaluation of the paper's merit per se:
Influential work, and prestigious work in influential areas, may be highly prioritized regardless of its rigor and quality.
The prioritization rating might consider quality for work that seems potentially impactful but does not seem particularly prestigious or influential. Here, aspects like writing clarity, methodological rigor, etc., might put it 'over the bar'. However, even here, these will tend to be based on rapid and shallow assessments, and should not be seen as meaningful evaluations of research merit.
II. These ratings will be considered along with the discussion by the field team and the management. Thus, it is helpful if you give a justification and explanation for your stated rating.
Define/consider the following "attributes" of a piece of research:
Global decision-relevance/VOI: Is this research decision-relevant to high-value choices and considerations that are important for global priorities and global welfare?
Prestige/prominence: Is the research already prominent/valued (esp. in academia), highly cited, reported on, etc?
Influence: Is the work already influencing important real-world decisions and considerations?
Obviously, these are not binary factors; there is a continuum for each. But for the sake of illustration, consider the following flowcharts.
If the flowcharts do not render, please refresh your browser. You may have to refresh twice.
"Fully baked": Sometimes prominent researchers release work (e.g., on NBER) that is not particularly rigorous or involved, which may have been put together quickly. This might be research that links to a conference they are presenting at, to their teaching, or to specific funding or consulting. It may be survey/summary work, perhaps meant for less technical audiences. The Unjournal tends not to prioritize such work, or at least not consider it in the same "prestigious" basket (although there will be exceptions). In the flowchart above, we contrast this with their "fully-baked" work.
Decision-relevant, prestigious work: Suppose the research is both "globally decision-relevant" and prominent. Here, if the research is in our domain, we probably want to have it publicly evaluated. This is basically the case regardless of its apparent methodological strength. This is particularly true if it has recently been made public (as a working paper), if it has not yet been published in a highly-respected peer-reviewed journal, and if there are non-straightforward methodological issues involved.
Prestigious work that seems less globally-relevant: We generally will not prioritize this work unless it adds to our mission in other ways (see, e.g., our "sustainability" and "credibility" goals above). In particular, we will prioritize such research more if:
It is presented in innovative, transparent formats (e.g., dynamic documents/open notebooks, sharing code and data)
The research indirectly supports more globally-relevant research, e.g., through...
Providing methodological tools that are relevant to that "higher-value" work
(If the flowchart below does not render, please refresh your browser; you may have to refresh twice.)
Decision-relevant, influential (but less prestigious) work: E.g., suppose this research might be cited by a major philanthropic organization as guiding its decision-making, but the researchers may not have strong academic credentials or a track record. Again, if this research is in our domain, we probably want to have it publicly evaluated. However, depending on the rigor of the work and the way it is written, we may want to explicitly class this in our "non-academic/policy" stream.
Decision-relevant, less prestigious, less-influential work: What about less-prominent work with fewer academic accolades that is not yet having an influence, but nonetheless seems to be globally decision-relevant? Here, our evaluations seem less likely to have an influence unless the work seems potentially strong, implying that our evaluations, ratings, and feedback could boost potentially valuable neglected work. Here, our prioritization rating might focus more on our initial impressions of things like...
Methodological strength (this is a big one!)
Rigorous logic and communication
Open science and robust approaches
Again: the prioritization process is not meant to be an evaluation of the work in itself. It's OK to do this in a fairly shallow way.
In future, we may want to put together a loose set of methodological "suggestive guidelines" for work in different fields and areas, without being too rigid or prescriptive. (To do: we can draw from some existing frameworks for this [ref].)
Drawing attention to neglected high-priority research fields (e.g., animal welfare)
Thanks for your interest in evaluating research for The Unjournal!
The Unjournal is a nonprofit organization started in mid-2022. We commission experts to publicly evaluate and rate research. Read more about us here.
Write an evaluation of a specific research paper or project: essentially a standard, high-quality referee report.
See below for further details and guidance.
Why use your valuable time writing an Unjournal evaluation? There are several reasons: helping high-impact research users, supporting open science and open access, and getting recognition and financial compensation.
The Unjournal's goal is to make impactful research more rigorous, and rigorous research more impactful, while supporting open access and open science. We encourage better research by making it easier for researchers to get feedback and credible ratings. We evaluate research in high-impact areas that make a difference to global welfare. Your evaluation will:
Help authors improve their research, by giving early, high-quality feedback.
Help improve science by providing open-access, prompt, structured, public evaluations of impactful research.
Inform funding bodies and meta-scientists as we build a database of research quality, strengths and weaknesses in different dimensions. Help research users learn what research to trust, when, and how.
For more on our scientific mission, see the linked discussion.
Your evaluation will be made public and given a DOI. You have the option to be identified as the author of this evaluation or to remain anonymous, as you prefer.
You will be given a $200-$400 honorarium for providing a prompt and complete evaluation and feedback ($100-$300 base + $100 'promptness bonus'), in line with our compensation guidelines.
Note, Aug. 2024: we're adjusting the base compensation to reward strong work and experience. Minimum base compensation:
$100 + $100 for first-time evaluators
You will also be eligible for monetary prizes for "most useful and informative evaluation," plus other bonuses. We currently (Feb. 2024) set aside an additional $150 per evaluation for incentives, bonuses, and prizes.
We may occasionally offer additional payments for specifically requested evaluation tasks, or raise the base payments for particularly hard-to-source expertise.
July 2023: The above is our current policy; we are working to build an effective, fair, transparent, and straightforward system of honorariums, incentives, and awards for evaluators.
Feb. 2024: Note that we currently set aside an additional $150 per evaluation (i.e., per evaluator) for evaluator incentives, bonuses, and prizes. This may be revised upwards or downwards in the future (and any change will be announced and noted).
If you have been invited to be an evaluator and want to proceed, simply respond to the email invitation that we have sent you. You will then be sent a link to our evaluation form.
To sign up for our evaluator pool, see the linked form.
To learn more about our evaluation process, see our guidelines. If you are doing an evaluation, we highly recommend you read these guidelines carefully.
12 Feb 2024: We are moving to a hosted form/interface in PubPub. That form is still somewhat of a work in progress and may need some further guidance; we try to provide this below, but please contact us with any questions. If you prefer, you can also submit your response in a Google Doc and share it back with us. Click the link to make a new copy of that document directly.
Answer a short questionnaire about your background and our processes.
$300 + $100 for returning Unjournal evaluators and those with previous strong public review experience. We will be integrating other incentives and prizes into this, and are committed to a target average compensation per evaluation, including prizes.
In addition to soliciting research submissions by authors, we directly prioritize unsubmitted research for evaluation, with a specific process and set of rules, outlined below.
Choose a set of "top-tier working paper series" and medium-to-top-tier journals.
This program started with the NBER working paper series. We expanded this beyond NBER to research posted in other exclusive working paper archives and to work where all authors seem to be prominent, secure, and established. See the eligibility criteria below.
Identify relevant papers in this series, following our stated criteria (relevance, strength, and the other considerations above). For NBER this tends to include
recently released work in the early stages of the journal peer-review process, particularly if it addresses a timely subject; as well as
work that has been around for many years, is widely cited and influential, yet has never been published in a peer-reviewed journal.
We do this systematically and transparently; authors shouldn't feel singled out nor left out.
Notify the work's authors that The Unjournal plans to commission evaluations. We're not asking for permission, but
making them aware of The Unjournal, the process, and the authors' opportunities to engage with the evaluation and publicly respond to the evaluation before it is made public;
asking them to let us know if we have the most recent version of the paper, and if updates are coming soon;
Public benefit: Working papers (especially NBER) are already influencing policy and debate, yet they have not been peer-reviewed and may take years to go through this process, if ever (e.g., many NBER papers). However, it is difficult to understand the papers' limitations unless you happen to have attended an academic seminar where they were presented. Evaluating these publicly will provide a service.
1. Negative backlash: Some authors may dislike having their work publicly evaluated, particularly when there is substantial criticism. Academics complain a lot about unfair peer reviews, but the difference is that here the evaluations will be made public. This might lead The Unjournal to be the target of some criticism.
Responses:
Aside: in the future, we hope to work directly with working paper series, associations, and research groups to get their approval and engagement with Unjournal evaluations. We hope that having a large share of papers in your series evaluated will serve as a measure of confidence in your research quality. If you are involved in such a group and are interested in this, please reach out to us (contact@unjournal.org).
All NBER working papers are generally eligible, but watch for exceptions where authors seem vulnerable in their career. (And remember, we contact authors, so they can plead their case.)
We treat these on a case-by-case basis and use discretion. All CEPR members are reasonably secure and successful, but their co-authors might not be, especially if these co-authors are PhD students they are supervising.
In some areas and fields (e.g., psychology, animal product markets) the publication process is relatively rapid or it may fail to engage general expertise. In general, all papers that are already published in peer-reviewed journals are eligible for our direct track.
These are eligible (without author permission) if all authors
have tenured or "long term" positions at well-known, respected universities or other research institutions, or
have tenure-track positions at top universities (e.g., top-20 globally by some credible rankings), or
are clearly not pursuing an academic career (e.g., the "partner at the aid agency running the trial").
On the other hand, if one or more authors is a PhD student close to graduation or an untenured academic outside a "top global program," then we will ask for permission and potentially offer an embargo.
A possible exception to this exception: If the PhD student or untenured academic is otherwise clearly extremely high-performing by conventional metrics; e.g., an REStud "tourist" or someone with multiple published papers in top-5 journals. In such cases the paper might be considered eligible for direct evaluation.
letting the authors complete our forms if they wish, giving further information about the paper or e.g. adding a "permalink" to updated versions;
asking if there are authors in sensitive career positions justifying an embargo; and
asking the authors if there is specific feedback they would like to receive.
Reaching out to and commissioning evaluators, as in our regular process. Considerations:
Evaluators should be made aware that the authors have not directly requested this review, but have been informed it is happening.
As this will allow us to consider a larger set of papers more quickly, we can reach out to multiple evaluators more efficiently.
Specifically for NBER: This working paper series is highly influential and relied upon by policymakers and policy journalists. It's an elite outlet: only members of NBER are able to post working papers there.
Fear of public evaluation (safety in numbers): There may be some shyness or reluctance to participate in The Unjournal evaluation process (for reasons to do so, see our discussion). It is scary to be a first mover, and it may feel unfair to be among the few people to have an evaluation of your work out there in public (in spite of the Bayesian arguments presented in the previous link). There should be "safety" in numbers: having a substantial number of prominent papers publicly evaluated by The Unjournal will ease this concern.
Passive evaluation may be preferred to active consent: Academics (especially early-career) may also worry that they will seem weird or rebellious by submitting to The Unjournal, as this may be taken as "rejecting mainstream system norms." Again, this will be less of a problem if a substantial number of public evaluations of prominent papers are posted. You will be in good company. Furthermore, if we are simply identifying papers for evaluation, the authors of these papers cannot be seen as rejecting the mainstream path (as they did not choose to submit).
Piloting and building a track record or demonstration: The Unjournal needs a reasonably large set of high-quality, relevant work to evaluate in order to help us build our system and improve our processes. Putting out a body of curated evaluation work will also allow us to demonstrate the reasonableness and reliability of this process.
We will work to ensure that the evaluations we publish involve constructive dialogue, avoid unnecessary harshness, and provide reasons for their critiques. We also give authors the opportunity to respond.
We are focusing on more prominent papers, with authors in more secure positions. Additionally, we offer a potential "embargo" for sensitive career situations, e.g., those that might face early-career researchers.
2. Less author engagement: If authors do not specifically choose to have their work evaluated, they are less likely to engage fully with the process.
Response: This is something we will keep an eye on, weighing the benefits and costs.
3. Evaluator/referee reluctance: As noted above, evaluators may be more reluctant to provide ratings and feedback on work where the author has not instigated the process.
Response: This should largely be addressed by the fact that we allow evaluators to remain anonymous. A potential cost here is discouraging signed evaluations, which themselves have some benefits (as well as possible costs).
4. Slippery-slope towards "unfairly reviewing work too early": In some fields, working papers are released at a point where the author does not wish them to be evaluated, and where the author is not implicitly making strong claims about the validity of this work. In economics, working papers tend to be released when they are fairly polished, and the authors typically seek feedback and citations. The NBER series is a particularly prominent example. However, we don't want to extend the scope of direct evaluation too far.
Response: We will be careful with this. Initially, we are extending this evaluation process only to the NBER series. Next, we may consider direct evaluation of fairly prestigious publications in "actual" peer-reviewed journals, particularly in fields (such as psychology) where the peer-review process is much faster than in economics. As NBER is basically "USA-only," we have extended this to other series such as CEPR, while being sensitive to the prestige/vulnerability tradeoffs.
It's a norm in academia that people do reviewing work for free. So why is The Unjournal paying evaluators?
From a recent survey of economists:
We estimate that the average (median) respondent spends 12 (9) working days per year on refereeing. The top 10% of the distribution dedicates 25 working days or more, which is quite substantial considering refereeing is usually unpaid.
The peer-review process in economics is widely argued to be too slow and lengthy. However, there is evidence that payments may help improve this.
They note that few economics journals currently pay reviewers, and these payments tend to be small (e.g., the JPE and AER paid $100 at the time). However, they also note, citing several papers:
The existing evidence summarized in Table 5 suggests that offering financial incentives could be an effective way of reducing turnaround time.
The same survey notes that the work of reviewing is not distributed equally. To the extent that accepting to do a report is based on individual goodwill, the unpaid volunteer model could be seen to unfairly penalize more generous and sympathetic academics. Writing a certain number of referee reports per year is generally considered part of "academic service". Academics put this on their CVs, and it may lead to being on the board of a journal, which is valued to an extent. However, this is much less attractive for researchers who are not tenured university professors. Paying for this work would do a better job of including them in the process.
'Payment for good evaluation work' may also lead to fairer and more useful evaluations.
In the current system academics may take on this work in large part to try to impress journal editors and get favorable treatment from them when they submit their own work. They may also write reviews in particular ways to impress these editors.
For less high-prestige journals, to get reviewers, editors often need to lean on their personal networks, including those they have power relationships with.
Reviewers are also known to strategically try to get authors to cite and praise the reviewers' own work. They may be especially critical toward authors they see as rivals.
To the extent that reviewers are doing this as a service they are being paid for, these other motivations will be comparatively less important. The incentives will be more aligned with producing evaluations that are seen as valuable by the managers of the process, in order to be chosen for further paid work. (And, if evaluations are public, the managers can consider the public feedback on these reports as well.)
We are not "just another journal." We need to give incentives for people to put effort into a new system and help us break out of the old, inferior equilibrium.
In some senses, we are asking for more than a typical journal. In particular, our evaluations will be made public and thus need to be better communicated.
We cannot rely on 'reviewers taking on work to get better treatment from editors in the future.' This does not apply to our model, as we don't have editors making any sort of "final accept/reject decision".
Our "paying evaluators" policy brings in a wider set of evaluators, including non-academics. This is particularly relevant to our impact-focused goals.
Unjournal evaluators have the option of remaining anonymous. Where evaluators choose this, we will carefully protect this anonymity, aiming at a high standard of protection, as good as or better than traditional journals. We will give evaluators the option to take extra steps to safeguard this further. We are offering anonymity in perpetuity to those who request it, as well as anonymity on other, explicitly mutually-agreed terms.
If they choose to stay anonymous, there should be no way for authors to "guess" who has reviewed their work.
We will take steps to keep private any information that could connect the identity of an anonymous evaluator and their evaluation/the work they are evaluating.
We will take extra steps to make the possibility of accidental disclosure extremely small (this is never impossible of course, even in the case of conventional journal reviews). In particular, we will use pseudonyms or ID codes for these evaluators in any discussion or database that is shared among our management team that connects individual evaluators to research work.
If we ever share a list of The Unjournal's evaluators, this will not include anyone who wished to remain anonymous (unless they explicitly ask us to be on such a list).
Aside: In future, we may consider , and these tools will also be
We will do our best to warn anonymous evaluators of ways that they might inadvertently be identifying themselves in the evaluation content they provide.
We will provide platforms to enable anonymous and secure discussion between anonymous evaluators and others (authors, editors, etc.). Where an anonymous evaluator is involved, we will encourage these platforms to be used as much as possible.
This discussion is a work-in-progress
We are targeting global priorities-relevant research...
With the potential for impact, and with the potential for Unjournal evaluations to have an impact (see our high-level considerations and our prioritization ratings discussions).
Our initial focus is quantitative work that informs global priorities, especially in economics, policy, and social science, in line with our Theory of Change.
We give a data presentation of the work we have already covered and the work we are prioritizing; this will be continually updated.
But what does this mean in practice? What specific research fields, topics, and approaches are we likely to classify as 'relevant to evaluate'?
We give some lists and annotated examples below.
As of January 2024 The Unjournal focuses on ...
Research where the fundamental question being investigated involves human behavior and beliefs and the consequences of these. This may involve markets, production processes, economic constraints, social interactions, technology, the 'market of ideas', individual psychology, government processes, and more. However, the main research question should not revolve around issues outside of human behavior, such as physical science, biology, or computer science and engineering. These areas are out of our scope (at least for now).
Research that is fundamentally quantitative and uses
This generally involves the following academic fields:
Economics
Applied Statistics (and some other applied math)
Psychology
These discipline/field boundaries are not strict; they may adapt as we grow
These were chosen in light of two main factors:
Our founder and our team are most comfortable assessing and managing the consideration of research in these areas.
These fields seem to be particularly amenable to, and able to benefit from, our journal-independent evaluation approach. Other fields, such as biology, are already being 'served' by strong existing initiatives.
To do: We will give and explain some examples here
The Unjournal's mission is to prioritize
research with the strongest potential for a positive impact on global welfare
where public evaluation of this research will have the greatest impact
Given this broad goal, we consider research into any cause, topic, or outcome, as long as the research involves fields, methods, and approaches within our domain (see above), and as long as the work meets our other requirements (e.g., research must be publicly shared without a paywall).
While we don't have rigid boundaries, we are nonetheless focusing on certain areas:
(As of Jan. 2024) we have mainly commissioned evaluations of work involving development economics and health-related outcomes and interventions in low-and middle-income countries.
As well as research involving
Environmental economics, conservation, harm to human health
The social impact of AI and emerging technologies
Economics, welfare, and governance
We are currently prioritizing further work involving
Psychology, behavioral science, and attitudes: the spread of misinformation; other-regarding preferences and behavior; moral circles
Animal welfare: markets, attitudes
Methodological work informing high-impact research (e.g., methods for impact evaluation)
We are also considering prioritizing work involving
AI governance and safety
Quantitative political science (voting, lobbying, attitudes)
Political risks (including authoritarian governments and war and conflict)
To do: We will give and explain some examples here
Research that targets and addresses a single specific question or goal (or a small cluster of them). It should not mainly be a broad discussion and overview of other research or conceptual issues.
Other quantitative social science fields (perhaps Sociology)
Applied "business school" fields: finance, accounting, operations, etc.
Applied "policy and impact evaluation" fields
Life science/medicine where it targets human behavior/social science
The economics of innovation; scientific progress and meta-science
The economics of health, happiness, and wellbeing
Long-term growth and trends; the long-term future of civilization; forecasting
31 Aug 2023: Our present approach is a "working solution" involving some ad-hoc and intuitive choices. We are re-evaluating the metrics we are asking for as well as the interface and framing. We are gathering some discussion in this linked Gdoc, incorporating feedback from our pilot evaluators and authors. We're also talking to people with expertise as well as considering past practice and other ongoing initiatives. We plan to consolidate that discussion and our consensus and/or conclusions into the present (Gitbook) site.
Ultimately, we're trying to replace the question of "what tier of journal did a paper get into?" with "how highly was the paper rated?" We believe this is a more valuable metric. It can be more fine-grained. It should be less prone to gaming. It aims to reduce the randomness in the process driven by things like 'the availability of journal space in a particular field'. See our discussion of these issues.
To get to this point, we need academia and stakeholders to see our evaluations as meaningful. We want the evaluations to begin to have some value that is measurable in the way "publication in the AER" is seen to have value.
While there are some ongoing efforts towards journal-independent evaluation, these tend not to use comparable metrics. Typically, they either have simple tick-boxes (like "this paper used correct statistical methods: yes/no") or they enable descriptive evaluation without an overall rating. As we are not a journal, and we don't accept or reject research, we need another way of assigning value. We are working to determine the best way of doing this through quantitative ratings. We hope to be able to benchmark our evaluations against "traditional" publication outcomes. Thus, we think it is important to ask for both an overall quality rating and a journal ranking tier prediction.
In addition to the overall assessment, we think it will be valuable to have the papers rated according to several categories. This could be particularly helpful to practitioners who may care about some concerns more than others. It also can be useful to future researchers who might want to focus on reading papers with particular strengths. It could be useful in meta-analyses, as certain characteristics of papers could be weighed more heavily. We think the use of categories might also be useful to authors and evaluators themselves. It can help them get a sense of what we think research priorities should be, and thus help them consider an overall rating.
However, these ideas have been largely ad-hoc and based on the impressions of our management team (a particular set of mainly economists and psychologists). The process is still being developed. Any feedback you have is welcome. For example, are we overemphasizing certain aspects? Are we excluding some important categories?
We are also researching other frameworks, templates, and past practice; we hope to draw from validated, theoretically grounded projects.
In eliciting expert judgment, it is helpful to differentiate the level of confidence in predictions and recommendations. We want to know not only what you believe, but how strongly held your beliefs are. If you are less certain in one area, we should weigh the information you provide less heavily in updating our beliefs. This may also be particularly useful for practitioners. Obviously, there are challenges to any approach. Even experts in a quantitative field may struggle to convey their own uncertainty. They may also be inherently "poorly calibrated" (see discussions and tools for calibration). Some people may often be "confidently wrong": they might state very narrow "credible intervals" when the truth, where measurable, routinely falls outside these boundaries. People with greater discrimination may sometimes be underconfident. One would want to consider and potentially correct for poor calibration. As a side benefit, this may be interesting for research in and of itself, particularly as The Unjournal grows. We see 'quantifying one's own uncertainty' as a good exercise for academics (and everyone) to engage in.
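As a rough illustration of what 'calibration' means here (a sketch with made-up numbers, not an Unjournal tool), one can check how often an evaluator's stated 90% intervals contain the value eventually measured; a well-calibrated evaluator should be right roughly 90% of the time:

```python
# Illustrative calibration check (hypothetical data): how often do stated 90%
# intervals contain the value that is eventually measured?

intervals = [(40, 60), (55, 80), (30, 45), (70, 90), (20, 50)]  # stated 90% intervals
outcomes = [52, 85, 44, 72, 65]                                 # realized/measured values

hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, outcomes))
coverage = hits / len(intervals)

print(f"Empirical coverage: {coverage:.0%}")
# 60% here, well below 90% -> overconfident (intervals stated too narrowly)
```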
2 Oct 2023 -- We previously suggested 'weightings' for individual ratings, along with a note:
We give "suggested weights" as an indication of our priorities and a suggestion for how you might average these together into an overall assessment; but please use your own judgment.
We included these weightings for several reasons:
We have removed suggested weightings for each of these categories. We discuss the rationale at some length elsewhere.
Evaluators working before October 2023 saw a previous version of the table, which you can see below.
We previously gave evaluators two options for expressing their confidence in each rating:
Either:
The 90% Confidence/Credible Interval (CI) input you see below (now a 'slider' in PubPub V7) or
We had included the note:
We give the previous weighting scheme in a fold below for reference, particularly for those reading evaluations done before October 2023.
As well as:
Suggested weighting: 0. Why 0?
Elsewhere in that page we had noted:
As noted above, we give suggested weights (0–5) to indicate the importance of each category rating to your overall assessment, given The Unjournal's priorities. But you don't need to, and may not want to, use these weightings precisely.
The weightings were presented once again, along with each description, in the relevant section.
[FROM PREVIOUS GUIDELINES:]
You may feel comfortable giving your "90% confidence interval," or you may prefer to give a "descriptive rating" of your confidence (from "extremely confident" to "not confident").
Quantify how certain you are about this rating, either by giving a 90% confidence/credible interval or by using our descriptive confidence scale. (We prefer the 90% CI. Please don't give both.)
5 = Extremely confident, i.e., 90% confidence interval spans +/- 4 points or less
4 = Very confident: 90% confidence interval +/- 8 points or less
3 = Somewhat confident: 90% confidence interval +/- 15 points or less
2 = Not very confident: 90% confidence interval, +/- 25 points or less
[Previous...] Remember, we would like you to give a 90% CI or a confidence rating (1–5 dots), but not both.
The example in the diagram above illustrates the proposed correspondence.
And, for the 'journal tier' scale:
From "five dots" to "one dot":
5 = Extremely confident, i.e., 90% confidence interval spans +/- 4 points or less
4 = Very confident: 90% confidence interval +/- 8 points or less
3 = Somewhat confident: 90% confidence interval +/- 15 points or less
[Previous guidelines]: The description folded below focuses on the "Overall Assessment." Please try to use a similar scale when evaluating the category metrics.
95–100: Among the highest quality and most important work you have ever read.
90–100: This work represents a major achievement, making substantial contributions to the field and practice. Such work would/should be weighed very heavily by tenure and promotion committees, and grantmakers.
For example:
This work represents a strong and substantial achievement. It is highly rigorous, relevant, and well-communicated, up to the standards of the strongest work in this area (say, the standards of the top 5% of committed researchers in this field). Such work would/should not be decisive in a tenure/promotion/grant decision alone, but it should make a very solid contribution to such a case.
60–74.9: A very strong, solid, and relevant piece of work. It may have minor flaws or limitations, but overall it is very high-quality, meeting the standards of well-respected research professionals in this field.
40–59.9: A useful contribution, with major strengths, but also some important flaws or limitations.
20–39.9: Some interesting and useful points and some reasonable approaches, but only marginally so. Important flaws and limitations. Would need substantial refocus or changes of direction and/or methods in order to be a useful part of the research and policy discussion.
5–19.9: Among the lowest quality papers; not making any substantial contribution and containing fatal flaws. The paper may fundamentally address an issue that is not defined or obviously not relevant, or the content may be substantially outside of the authors' field of expertise.
0–4: Illegible, fraudulent, or plagiarized. Please flag fraud, and notify us and the relevant authorities.
The previous categories were 0–5, 5–20, 20–40, 40–60, 60–75, 75–90, and 90–100. Some evaluators found the overlap in this definition confusing.
This page explains the value of the metrics we are seeking from evaluators.
This page describes The Unjournal's evaluation guidelines, considering our priorities and criteria, the metrics we ask for, and how these are considered.
30 July 2024: The guidelines below apply to the evaluation form currently hosted on PubPub. We're adjusting this form somewhat; the new form is temporarily hosted in Coda (linked here). If you prefer, you are welcome to use the Coda form instead (just let us know).
If you have any doubts about which form to complete, or about what we are looking for, please ask the evaluation manager or email contact@unjournal.org.
You can download a pdf version of these guidelines (updated March 2024).
See the links below to access current policies of The Unjournal, accompanied by discussion and including templates for managers and editors.
People and organizations submit their own research or suggest research they believe may be high-impact. The Unjournal also directly monitors key sources of research and research agendas. Our team then systematically prioritizes this research for evaluation. See the link below for further details.
See the sections below:
For prospective evaluators: An overview of what we are asking; payment and recognition details
Guidelines for evaluators: The Unjournal's evaluation guidelines, considering our priorities and criteria, the metrics we ask for, and how these are considered.
Other sections and subsections provide further resources, consider future initiatives, and discuss our rationales.
We are considering asking evaluators, with compensation, to assist and engage in the process of "robustness replication." This may lead to some interesting follow-on possibilities as we build our potential collaboration with the Institute for Replication and others in this space.
We might ask evaluators discussion questions like these:
What is the most important, interesting, or relevant substantive claim made by the authors (particularly considering global priorities and potential interventions and responses)?
What statistical test or evidence does this claim depend on, according to the authors?
How confident are you in the substantive claim made?
"Robustness checks": What specific statistical test(s) or piece(s) of evidence would make you substantially more confident in the substantive claim made?
If a robustness replication "passed" these checks, how confident would you be then in the substantive claim? (You can also express this as a continuous function of some statistic rather than as a binary; please explain your approach.)
Background:
The Institute for Replication is planning to hire experts to do "robustness replications" of work published in top journals in economics and political science. Code and data sharing are now being enforced in many or all of these journals and other important outlets. We want to support their efforts and are exploring collaboration possibilities. We are also considering how best to guide potential future robustness replication work.
We choose an evaluation manager for each research paper or project. They commission and compensate expert evaluators to rate and discuss the research, following our evaluation template and guidelines. The original research authors are given a chance to publicly respond before we post these evaluations. See the link below for further details.
We make all of this evaluation work public on our PubPub page, along with an evaluation summary. We create DOIs for each element and submit this work to scholarly search engines. We also present a summary and analysis of our evaluation ratings data.
We outline some further details in the link below.
See the link below for a full 'flowchart' map of our evaluation workflow.
We wanted to make the overall rating better defined, and thus more useful to outsiders and comparable across raters.
We wanted to emphasize what we think is important (in particular, methodological reliability).
We didn't want evaluators to think we wanted them to weigh each category equally; some are clearly more important.
However, we decided to remove these weightings because:
They added clutter to an already overwhelming form and guidance document; "more numbers" can be particularly overwhelming.
The weights were ad hoc, and they may have suggested we have a more grounded "model of value" than we actually do. (There is also some overlap in our categories, something we are working on addressing.)
Some people misinterpreted our intent (e.g., they thought we were saying "relevance to global priorities" is not important).
A five-point 'Likert-style' measure of confidence, which we described qualitatively, explaining how we would convert it into CIs when reporting aggregations.
To make this process less confusing, to encourage careful quantification of uncertainty, and to enable better-justified aggregation of expert judgment, we are de-emphasizing the latter measure.
Still, to accommodate those who may not be familiar with or comfortable stating "90% CIs on their own beliefs" we offer further explanations, and we are providing tools to help evaluators construct these. As a fallback, we will still allow evaluators to give the 1-5 confidence measure, noting the correspondence to CIs, but we discourage this somewhat.
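As a rough illustration of that correspondence, the sketch below (Python; the half-widths follow the 'dots' descriptions given elsewhere in these guidelines, while the treatment of "1 = Not confident" and the clipping to the 0–100 scale are our own simplifying assumptions) converts a 1–5 confidence rating plus a midpoint into an approximate 90% interval.

```python
# Half-widths (in percentile points) implied by the dot descriptions in these guidelines.
# "1 = Not confident" describes the widest band; we treat it as +/-25 here,
# which is a simplifying assumption on our part.
HALF_WIDTH = {5: 4, 4: 8, 3: 15, 2: 25, 1: 25}

def dots_to_interval(midpoint, dots):
    """Convert a 1-5 confidence rating into an approximate 90% interval,
    clipped to the 0-100 percentile scale."""
    hw = HALF_WIDTH[dots]
    return max(0, midpoint - hw), min(100, midpoint + hw)

print(dots_to_interval(72, 4))  # (64, 80)
print(dots_to_interval(95, 3))  # (80, 100)
```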
The previous guidelines can be seen here; these may be useful in considering evaluations provided pre-2024.
[Fragment of the previous weighting table: suggested weights 5, 4, 3, 2, and 0**, paired with the values (45, 55), (10, 35), (40, 70), (30, 46), and (21, 65).]
1 = Not confident: 90% confidence interval +/- 25 points
This paper is substantially more rigorous or more insightful than existing work in this area in a way that matters for research and practice.
The work makes a major, perhaps decisive contribution to a case for (or against) a policy or philanthropic intervention.
[Further fragments of the previous table: "Why these guidelines/metrics? (holistic, most important!)", weight 5, paired with the values (39, 52) and (47, 54).]
Please see "For prospective evaluators" for an overview of the evaluation process, as well as details on compensation, public recognition, and more.
Write an evaluation of the target paper or project, similar to a standard, high-quality referee report. Please identify the paper's main claims and carefully assess their validity, leveraging your own background and expertise.
Give quantitative metrics and predictions as described below.
Answer a short questionnaire about your background and our processes.
In writing your evaluation and providing ratings, please consider the following.
In many ways, the written part of the evaluation should be similar to a report an academic would write for a traditional high-prestige journal (e.g., see some 'conventional guidelines' here). Most fundamentally, we want you to use your expertise to critically assess the main claims made by the authors. Are the claims well-supported? Are the assumptions believable? Are the methods appropriate and well-executed? Explain why or why not.
However, we'd also like you to pay some consideration to our priorities:
Advancing our knowledge and practice
Justification, reasonableness, validity, and robustness of methods
Logic and communication
Open, communicative, replicable science
See our guidelines below for more details on each of these. Please don't structure your review according to these metrics; just pay some attention to them.
Unless you were advised otherwise, this evaluation, including the review and quantitative metrics, will be given a DOI and, hopefully, will enter the public research conversation. Authors will be given two weeks to respond to reviews before the evaluations, ratings, and responses are made public. You can choose whether you want to be identified publicly as an author of the evaluation.
If you have questions about the authors' work, you can ask them anonymously: we will facilitate this.
We want you to evaluate the most recent/relevant version of the paper/project that you can access. If you see a more recent version than the one we shared with you, please let us know.
We may give early-career researchers the right to veto the publication of very negative evaluations or to embargo the release of these for a defined period. We will inform you in advance if this will be the case for the work you are evaluating.
You can reserve some "sensitive" content in your report to be shared with only The Unjournal management or only the authors, but we hope to keep this limited.
We designed this process to balance three considerations with three target audiences. Please consider each of these:
Crafting evaluations and ratings that help researchers and policymakers judge when and how to rely on this research. For Research Users.
Ensuring these evaluations of the papers are comparable to current journal tier metrics, to enable them to be used to determine career advancement and research funding. For Departments, Research Managers, and Funders.
Providing constructive feedback to Authors.
We discuss this, and how it relates to our impact and "theory of change", here.
We accept that in the near-term an Unjournal evaluation may not be seen to have substantial career value.
Furthermore, the work we are considering may tend to be at an earlier stage. Authors may submit work to us, thinking of this as a "pre-journal" step. The papers we select (e.g., from NBER) may also have been posted long before the authors planned to submit them to journals.
This may make the 'feedback for authors' and 'assessment for research users' aspects more important, relative to traditional journals' role. However, in the medium-term, a positive Unjournal evaluation should gain credibility and career value. This should make our evaluations an "endpoint" for a research paper.
We ask for a set of nine quantitative metrics. For each metric, we ask for a score and a 90% credible interval. We describe these in detail below. (We explain why we ask for these metrics here.)
For some questions, we ask for a percentile ranking from 0-100%. This represents "what proportion of papers in the reference group are worse than this paper, by this criterion". A score of 100% means this is essentially the best paper in the reference group. 0% is the worst paper. A score of 50% means this is the median paper; i.e., half of all papers in the reference group do this better, and half do this worse, and so on.
Here, the population of papers should be all serious research in the same area that you have encountered in the last three years.
Here, we are mainly considering research done by professional researchers with high levels of training, experience, and familiarity with recent practice, who have time and resources to devote months or years to each such research project or paper. These will typically be written as 'working papers' and presented at academic seminars before being submitted to standard academic journals. Although no credential is required, this typically includes people with PhD degrees (or upper-level PhD students). Most of this sort of research is done by full-time academics (professors, post-docs, academic staff, etc.) with a substantial research remit, as well as research staff at think tanks and research institutions (but there may be important exceptions).
This is a judgment call. Here are some criteria to consider: First, does the work come from the same academic field and research subfield, and does it address questions that might be addressed using similar methods? Second, does it deal with the same substantive research question, or a closely related one? If the research you are evaluating is on a very niche topic, the comparison reference group should be expanded to include work in other areas.
We are aiming for comparability across evaluators. If you suspect you are particularly exposed to higher-quality work in this category, compared to other likely evaluators, you may want to adjust your reference group downwards. (And of course vice-versa, if you suspect you are particularly exposed to lower-quality work.)
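To make the percentile-ranking concept concrete, here is a minimal sketch in Python (the scores and function are purely illustrative) of the "share of reference-group papers that are worse" interpretation described above.

```python
def percentile_rank(paper_score, reference_scores):
    """Percentage of reference-group papers that this paper beats
    on the chosen criterion (0% = worst, 50% = median, 100% = best)."""
    worse = sum(s < paper_score for s in reference_scores)
    return 100 * worse / len(reference_scores)

# Illustrative (made-up) quality scores for comparable serious work
# you have encountered in the last three years.
reference = [42, 55, 61, 68, 70, 74, 80, 88]
print(percentile_rank(71, reference))  # 62.5: better than 5 of the 8 reference papers
```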
For each metric, we ask you to provide a 'midpoint rating' and a 90% credible interval as a measure of your uncertainty. Our interface provides slider bars to express your chosen intervals:
See below for more guidance on uncertainty, credible intervals, and the midpoint rating as the 'median of your belief distribution'.
The table below summarizes the percentile rankings (each on a 0–100% scale):
Overall assessment
Advancing our knowledge and practice
Methods: justification, reasonableness, validity, robustness
Logic and communication
Open, collaborative, replicable science
Real-world relevance
Relevance to global priorities
Percentile ranking (0-100%)
Judge the quality of the research heuristically. Consider all aspects of quality, credibility, importance to knowledge production, and importance to practice.
Do the authors do a good job of (i) stating their main questions and claims, (ii) providing strong evidence and powerful approaches to inform these, and (iii) correctly characterizing the nature of their evidence?
Percentile ranking (0-100%)
Are the methods used well-justified and explained; are they a reasonable approach to answering the question(s) in this context? Are the underlying assumptions reasonable?
Are the results and methods likely to be robust to reasonable changes in the underlying assumptions? Does the author demonstrate this?
Avoiding bias and questionable research practices (QRP): Did the authors take steps to reduce bias from opportunistic reporting and QRP? For example, did they do a strong pre-registration and pre-analysis plan, incorporate multiple hypothesis testing corrections, and report flexible specifications?
Percentile ranking (0-100%)
To what extent does the project contribute to the field or to practice, particularly in ways that are relevant to global priorities and impactful interventions?
(Applied stream: please focus on "improvements that are actually helpful".)
Originality and cleverness should be weighted less heavily than at a typical journal, because The Unjournal focuses on impact. Papers that apply existing techniques and frameworks more rigorously than previous work, or apply them to new areas in ways that provide practical insights for global priorities (GP) and interventions, should be highly valued. More weight should be placed on 'contribution to GP' than on 'contribution to the academic field'.
Do the paper's insights inform our beliefs about important parameters and about the effectiveness of interventions?
Does the project add useful value to other impactful research?
We don't require surprising results; sound and well-presented null results can also be valuable.
Percentile ranking (0-100%)
Are the goals and questions of the paper clearly expressed? Are concepts clearly defined and referenced?
Is the reasoning "transparent"? Are assumptions made explicit? Are all logical steps clear and correct? Does the writing make the argument easy to follow?
Are the conclusions consistent with the evidence (or formal proofs) presented? Do the authors accurately state the nature of their evidence, and the extent to which it supports their main claims?
Are the data and/or analysis presented relevant to the arguments made? Are the tables, graphs, and diagrams easy to understand in the context of the narrative (e.g., no major errors in labeling)?
Percentile ranking (0-100%)
This covers several considerations:
Would another researcher be able to perform the same analysis and get the same results? Are the methods explained clearly and in enough detail to enable easy and credible replication? For example, are all analyses and statistical tests explained, and is code provided?
Is the source of the data clear?
Is the data made as available as is reasonably possible? If so, is it clearly labeled and explained?
Consistency
Do the numbers in the paper and/or code output make sense? Are they internally consistent throughout the paper?
Useful building blocks
Do the authors provide tools, resources, data, and outputs that might enable or enhance future work and meta-analysis?
Are the paper's chosen topic and approach likely to be useful to global priorities, cause prioritization, and high-impact interventions?
Does the paper consider real-world relevance and deal with policy and implementation questions? Are the setup, assumptions, and focus realistic?
Do the authors report results that are relevant to practitioners? Do they provide useful quantified estimates (costs, benefits, etc.) enabling practical impact quantification and prioritization?
Do they communicate (at least in the abstract or introduction) in ways policymakers and decision-makers can understand, without misleading or oversimplifying?
Percentile ranking (0-100%)
Are the assumptions and setup realistic and relevant to the real world? Does the paper consider the real-world relevance of the arguments and results presented, perhaps engaging policy and implementation questions?
Do the authors communicate their work in ways policymakers and decision-makers can understand, without misleading or oversimplifying?
Do the authors present practical impact quantifications, such as cost-effectiveness analyses? Do they report results that enable such analyses?
Percentile ranking (0-100%)
Could the paper's topic and approach potentially help inform global priorities, cause prioritization, and high-impact interventions?
Most work in our applied stream will not be targeting academic journals. Still, in some cases it might make sense to make this comparison; e.g., if particular aspects of the work might be rewritten and submitted to academic journals, or if the work uses certain techniques that might be directly compared to academic work. If you believe a comparison makes sense, please consider giving an assessment below, making reference to our guidelines and how you are interpreting them in this case.
To help universities and policymakers make sense of our evaluations, we want to benchmark them against how research is currently judged. So, we would like you to assess the paper in terms of journal rankings. We ask for two assessments:
a normative judgment about 'how well the research should publish';
a prediction about where the research will be published.
Journal ranking tiers are on a 0-5 scale, as follows:
0/5: "Won't publish/little to no value". Unlikely to be cited by credible researchers
1/5: OK/Somewhat valuable journal
2/5: Marginal B-journal/Decent field journal
3/5: Top B-journal/Strong field journal
4/5: Marginal A-Journal/Top field journal
5/5: A-journal/Top journal
We give some example journal rankings here, based on SJR and ABS ratings.
We encourage you to consider a non-integer score, e.g., 4.6 or 2.2.
As before, we ask for a 90% credible interval.
What journal ranking tier should this work be published in? (0.0–5.0, with lower and upper bounds)
What journal ranking tier will this work be published in? (0.0–5.0, with lower and upper bounds)
PubPub note: as of 14 March 2024, the PubPub form does not allow non-integer responses. Until this is fixed, please multiply these values by 10 and enter them using the 0-50 slider. (Or use the Coda form.)
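For example (illustrative only), a tier judgment of 4.6 with a 90% interval of (4.2, 4.9) would be entered on that temporary slider as 46 with bounds (42, 49):

```python
def to_slider(tier_value):
    """Convert a 0.0-5.0 journal-tier value to the temporary 0-50 integer slider."""
    return round(10 * tier_value)

print([to_slider(v) for v in (4.6, 4.2, 4.9)])  # [46, 42, 49]
```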
Journal ranking tier (0.0-5.0)
Assess this paper on the journal ranking scale described above, considering only its merit, giving some weight to the category metrics we discussed above.
Equivalently, where would this paper be published if:
the journal process were fair, unbiased, and free of noise, and status, social connections, and lobbying to get the paper published didn't matter;
journals assessed research according to the category metrics we discussed above.
Journal ranking tier (0.0-5.0)
We want policymakers, researchers, funders, and managers to be able to use The Unjournal's evaluations to update their beliefs and make better decisions. To do this well, they need to weigh multiple evaluations against each other and against other sources of information. Evaluators may feel confident about their rating for one category but less confident in another area. How much weight should readers give to each? In this context, it is useful to quantify the uncertainty.
But it's hard to interpret statements like "very certain" or "somewhat uncertain": different people may use the same phrases to mean different things. That's why we're asking you for a more precise measure, your credible intervals. These metrics are particularly useful for meta-science and meta-analysis.
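As one illustration of how these numbers can be used downstream, the sketch below (Python) aggregates several evaluators' midpoints, weighting each by the precision implied by their 90% interval (treating beliefs as roughly normal, so the half-width is about 1.645 standard deviations). This is only one possible aggregation rule, sketched under those assumptions; it is not a description of The Unjournal's actual procedure.

```python
Z90 = 1.645  # a central 90% interval spans roughly +/-1.645 standard deviations
             # if beliefs are treated as approximately normal (an assumption)

def precision_weighted_mean(ratings):
    """Aggregate (midpoint, lower, upper) triples, giving less weight
    to evaluators who report wider (less certain) intervals."""
    weights, weighted_sum = 0.0, 0.0
    for mid, lo, hi in ratings:
        sd = (hi - lo) / (2 * Z90)   # standard deviation implied by the 90% interval
        w = 1 / sd**2                # inverse-variance weight
        weights += w
        weighted_sum += w * mid
    return weighted_sum / weights

# Illustrative: three evaluators' overall-assessment ratings (midpoint, lower, upper).
ratings = [(70, 60, 80), (55, 35, 75), (62, 57, 67)]
print(round(precision_weighted_mean(ratings), 1))  # closest to the most confident rating
```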
You are asked to give a 'midpoint' and a 90% credible interval. Consider this as the smallest interval that you believe is 90% likely to contain the true value. See the fold below for further guidance.
You may understand the concepts of uncertainty and credible intervals, but you might be unfamiliar with applying them in a situation like this one.
You may have a certain best guess for the "Methods..." criterion. Still, even an expert can never be certain. E.g., you may misunderstand some aspect of the paper, there may be a method you are not familiar with, etc.
Your uncertainty over this could be described by some distribution, representing your beliefs about the true value of this criterion. Your "best guess" should be the central mass point of this distribution.
You are also asked to give a 90% credible interval. Consider this as the smallest interval that you believe is 90% likely to contain the true value.
For some questions, the "true value" refers to something objective, e.g. will this work be published in a top-ranked journal? In other cases, like the percentile rankings, the true value means "if you had complete evidence, knowledge, and wisdom, what value would you choose?"
For more information on credible intervals, the resource linked here may be helpful.
If you are "", your 90% credible intervals should contain the true value 90% of the time.
We also ask for the 'midpoint', the center dot on that slider. Essentially, we are asking for the median of your belief distribution. By this we mean the percentile ranking such that you believe "there's a 50% chance that the paper's true rank is higher than this, and a 50% chance that it actually ranks lower than this."
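For instance, if (hypothetically) you could summarize your beliefs about the paper's percentile ranking as a Beta distribution, the midpoint and 90% credible interval we ask for would simply be the 50th, 5th, and 95th percentiles of that belief distribution; a minimal sketch in Python:

```python
from scipy.stats import beta

# Hypothetical belief distribution over the paper's percentile ranking,
# rescaled to 0-1: Beta(14, 6) puts most of its mass around 0.70.
belief = beta(14, 6)

lower, midpoint, upper = (100 * belief.ppf(q) for q in (0.05, 0.50, 0.95))
print(f"midpoint ~ {midpoint:.0f}; 90% credible interval ~ ({lower:.0f}, {upper:.0f})")
```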
If you are "well calibrated", your 90% credible intervals should contain the true value 90% of the time. To understand this better, assess your ability, and then practice to get better at estimating your confidence in results. This web app will help you get practice at calibrating your judgments. We suggest you choose the "Calibrate your Judgment" tool, and select the "confidence intervals" exercise, choosing 90% confidence. Even a 10 or 20 minute practice session can help, and it's pretty fun.
We are now asking evaluators for "claim identification and assessment" where relevant. This is meant to help practitioners use this research to inform their funding, policymaking, and other decisions. It is not intended as a metric for judging research quality per se. This is not required, but we will reward this work.
Lastly, we ask evaluators about their background, and for feedback about the process.
For the two questions below, we will publish your responses unless you specifically ask for them to be kept anonymous.
How long have you been in this field?
How many proposals and papers have you evaluated? (For journals, grants, and other peer review.)
Answers to the questions below will not be made public:
How would you rate this template and process?
Do you have any suggestions or questions about this process or The Unjournal? (We will try to respond to your suggestions, and incorporate them in our practice.) [Open response]
Would you be willing to consider evaluating a revised version of this project?
12 Feb 2024: We are moving to a hosted form/interface in PubPub. That form is still somewhat of a work in progress and may need some further guidance; we try to provide this below, but please contact us with any questions. If you prefer, you can also submit your response in a Google Doc and share it back with us. Click here to make a new copy directly.
Length/time spent: This is up to you. We welcome detail, elaboration, and technical discussion.
The Econometric Society recommends a 2–3 page referee report; Berk et al. suggest this is relatively short, but confirm that brevity is desirable. In a recent survey (Charness et al., 2022), economists report spending (median and mean) about one day per report, with substantial shares reporting "half a day" and "two days." We expect that reviewers tend to spend more time on papers for high-status journals, and when reviewing work that is closely tied to their own agenda.
We have made some adjustments to this page and to our guidelines and processes; this is particularly relevant for considering earlier evaluations. See Adjustments to metrics and guidelines/previous presentations.
If you still have questions, please contact us, or see our FAQ on Evaluation (refereeing).
Our data protection statement is linked here.
Cite evidence and reference specific parts of the research when giving feedback.
Justify your critiques and claims in a reasoning-transparent way, rather than merely "passing judgment." Avoid comments like "this does not pass the smell test".
Provide specific, actionable feedback to the author where possible.
Try to restate the authors' arguments, clearly presenting the most reasonable interpretation of what they have written (see the linked discussion).
Be collegial and encouraging, but also rigorous. Criticize and question specific parts of the research without suggesting criticism of the researchers themselves.
We're happy for you to use whichever process and structure you feel comfortable with when writing your evaluation content.
Core
Briefly summarize the work in context
Highlight positive aspects of the paper and its strengths and contributions, considered in the context of existing research.
Remember: The Unjournal doesn't "publish" and doesn't "accept or reject." So don't give an 'Accept', 'Revise-and-Resubmit', or 'Reject'-type recommendation. We ask for quantitative metrics, written feedback, and expert discussion of the validity of the paper's main claims, methods, and assumptions.
Economics
Semi-relevant:
Report:
Open Science
(Conventional but open access; simple and brief)
(Open-science-aligned; perhaps less detail-oriented than we are aiming for)
(Journal-independent "PREreview"; detailed; targets ECRs)
General, other fields
(Conventional; general)
(extensive resources; only some of this is applicable to economics and social science)
"the 4 validities" and

Most importantly: Identify and assess the paper's most important and impactful claim(s). Are these supported by the evidence provided? Are the assumptions reasonable? Are the authors using appropriate methods?
Note major limitations and potential ways the work could be improved; where possible, reference methodological literature and discussion and work that models what you are suggesting.
Optional/desirable
Offer suggestions for increasing the impact of the work, for incorporating the work into global priorities research and impact evaluations, and for supporting and enhancing future work.
Discuss minor flaws and their potential revisions.
Desirable: formal
Please don't spend time copyediting the work. If you like, you can give a few specific suggestions and then suggest that the author look to make other changes along these lines.