See the links below for The Unjournal's current policies, accompanied by discussion and including templates for managers and editors.
People and organizations submit their own research or suggest research they believe may be high-impact. The Unjournal also directly monitors key sources of research and research agendas. Our team then systematically prioritizes this research for evaluation. See the link below for further details.
We choose an evaluation manager for each research paper or project. They commission and compensate expert evaluators to rate and discuss the research, following our evaluation template and guidelines. The original research authors are given a chance to publicly respond before we post these evaluations. See the link below for further details.
We make all of this evaluation work public on our PubPub page, along with an evaluation summary. We create DOIs for each element and submit this work to scholarly search engines. We also present a summary and analysis of our evaluation ratings data.
We outline some further details in the link below.
See Mapping evaluation workflow for a full 'flowchart' map of our evaluation workflow
We are also piloting several initiatives that involve a different process. See:
This page is a work-in-progress
15 Dec 2023: Our main current process involves
Submitted and (internally/externally) suggested research
Prioritization ratings and discussion by Unjournal field specialists
Feedback from field specialist area teams
A final decision by the management team, guided by the above
See this doc (also embedded below) for more details of the proposed process.
As noted in , we ask people who suggest research to provide a numerical 0-100 rating:
We also ask people within our team to act as 'assessors', giving second and third opinions on this. This 'prioritization rating' is one of the criteria we use to determine whether to commission research to be evaluated (along with author engagement, publication status, our capacity and expertise, etc.). Again, see the  for the current process.
We are working on a set of notes fleshing this out and giving specific examples. At the moment this is available to members of our team only (ask for access to "Guidelines for prioritization ratings (internal)"). We aim to share a version publicly once it converges and once we can remove sensitive examples.
I. This is not the evaluation itself. It is not an evaluation of the paper's merit per se:
Influential work, and prestigious work in influential areas, may be highly prioritized regardless of its rigor and quality.
For work that seems potentially impactful but not particularly prestigious or influential, the prioritization rating might also consider quality. Here, aspects like writing clarity, methodological rigor, etc., might put it 'over the bar'. However, even here these judgments will tend to be based on rapid and shallow assessments, and should not be seen as meaningful evaluations of research merit.
II. These ratings will be considered along with the discussion by the field team and the management. Thus it is helpful if you give a justification and explanation for your stated rating.
Define/consider the following ‘attributes’ of a piece of research:
Global decision-relevance/VOI: Is this research decision-relevant to high-value choices and considerations that are important for global priorities and global welfare?
Prestige/prominence: Is the research already prominent/valued (esp. in academia), highly cited, reported on, etc?
Influence: Is the work already influencing important real-world decisions and considerations?
Obviously, these are not binary factors; there is a continuum for each. But for the sake of illustration, consider the following flowcharts.
If the flowcharts do not render, please refresh your browser. You may have to refresh twice.
"Fully baked": Sometimes prominent researchers release work (e.g., on NBER) that is not particularly rigorous or involved, which may have been put together quickly. This might be research that links to a conference they are presenting at, to their teaching, or to specific funding or consulting. It may be survey/summary work, perhaps meant for less technical audiences. The Unjournal tends not to prioritize such work, or at least not consider it in the same "prestigious" basket (although there will be exceptions). In the flowchart above, we contrast this with their "fully-baked" work.
Decision-relevant, prestigious work: Suppose the research is both ‘globally decision-relevant’ and prominent. Here, if the research is in our domain, we probably want to have it publicly evaluated. This is basically the case regardless of its apparent methodological strength. This is particularly true if it has been recently made public (as a working paper), if it has not yet been published in a highly-respected peer-reviewed journal, and if there are non-straightforward methodological issues involved.
It is presented in innovative, transparent formats (e.g., dynamic documents/open notebooks, sharing code and data)
The research indirectly supports more globally-relevant research, e.g., through…
Providing methodological tools that are relevant to that ‘higher-value’ work
Drawing attention to neglected high-priority research fields (e.g., animal welfare)
(If the flowchart below does not render, please refresh your browser; you may have to refresh twice.)
Decision-relevant, influential (but less prestigious) work: E.g., suppose this research might be cited by a major philanthropic organization as guiding its decision-making, but the researchers may not have strong academic credentials or a track record. Again, if this research is in our domain, we probably want to have it publicly evaluated. However, depending on the rigor of the work and the way it is written, we may want to explicitly class this in our ‘non-academic/policy’ stream.
Decision-relevant, less prestigious, less-influential work: What about for less-prominent work with fewer academic accolades that is not yet having an influence, but nonetheless seems to be globally decision-relevant? Here, our evaluations seem less likely to have an influence unless the work seems potentially strong, implying that our evaluations, rating, and feedback could boost potentially valuable neglected work. Here, our prioritization rating might focus more on our initial impressions of things like …
Methodological strength (this is a big one!)
Rigorous logic and communication
Open science and robust approaches
Engagement with real-world policy considerations
Again: the prioritization process is not meant to be an evaluation of the work in itself. It’s OK to do this in a fairly shallow way.
In future, we may want to put together a loose set of methodological ‘suggestive guidelines’ for work in different fields and areas, without being too rigid or prescriptive. (To do: we can draw from some existing frameworks for this [ref].)
Our is quantitative work that informs , especially in . We want to see better research leading to better outcomes in the real world (see our '').
See (earlier) discussion in public call/EA forum discussion .
To reach these goals, we need to select "the right research" for evaluation. We want to choose papers and projects that are highly relevant, methodologically promising, and that will benefit substantially from our evaluation work. We need to optimize how we select research so that our efforts remain mission-focused and useful. We also want to make our process transparent and fair. To do this, we are building a coherent set of criteria and goals, and a specific approach to guide this process. We explore several dimensions of these criteria below.
Management access only: General discussion of prioritization in Gdoc . Private discussion of specific papers in our Coda resource. We incorporate some of this discussion below.
When considering a piece of research to decide whether to commission it to be evaluated, we can start by looking at its general relevance as well as the value of evaluating and rating it.
Our prioritization of a paper for evaluation should not be seen as an assessment of its quality, nor of its 'vulnerability'. Furthermore, specific and less intensive.
1. Why is it relevant and worth engaging with?
We consider (and prioritize) the importance of the research to global priorities; its relevance to crucial decisions; the attention it is getting, the influence it is having; its direct relevance to the real world; and the potential value of the research for advancing other impactful work. We de-prioritize work that has already been credibly (publicly) . We also consider the fit of the research with our scope (social science, etc.), and the likelihood that we can commission experts to meaningfully evaluate it. As noted , some 'instrumental goals' (, , driving change, ...) also play a role in our choices.
Some features we value, which might raise the probability that we consider a paper or project, include a commitment and contribution to open science, the authors' engagement with our process, and the logic, communication, and transparent reasoning of the work. However, if a prominent research paper is within our scope and seems to have strong potential for impact, we will prioritize it highly whether or not it has these qualities.
2. Why does it need (more) evaluation, and what are some key issues and claims to vet?
We ask the people who suggest particular research, and experts in the field:
What are (some of) the authors’ key/important claims that are worth evaluating?
What aspects of the evidence, argumentation, methods, and interpretation, are you unsure about?
What particular data, code, proofs, and arguments would you like to see vetted? If it has already been peer-reviewed in some way, why do you think more review is needed?
As we weigh research to prioritize for evaluation, we need to balance directly having a positive impact against building our ability to have an impact in the future.
Importance
What is the direct impact potential of the research?
This is a massive question many have tried to address (see sketches and links below). We respond to uncertainty around this question in several ways, including:
Consulting a range of sources, not only EA-linked sources.
Scoping what other sorts of work are representative inputs to GP-relevant work.
Get a selection of seminal GP publications; look back to see what they are citing and categorize by journal/field/keywords/etc.
Neglectedness
Where is the current journal system failing GP-relevant work the most . . . in ways we can address?
Tractability
“Evaluability” of research: Where does the UJ approach yield the most insight or value of information?
Existing expertise: Where do we have field expertise on the UJ team? This will help us commission stronger evaluations.
"Feedback loops": Could this research influence concrete intervention choices? Does it predict near-term outcomes? If so, observing these choices and outcomes and getting feedback on the research and our evaluation can yield strong benefits.
Consideration/discussion: How much should we include research with indirect impact potential (theoretical, methodological, etc.)?
Moreover, we need to consider how the research evaluation might support the sustainability of The Unjournal and the broader project of open evaluation. We may need to strike a balance between work informing the priorities of various audiences, including:
Relevance to stakeholders and potential supporters
Clear connections to impact; measurability
Support from relevant academic communities
Support from the open science community
Consideration/discussion: What will drive further interest and funding?
Finally, we consider how our choices will increase the visibility and solidify the credibility of The Unjournal and open evaluations. We consider how our work may help drive positive institutional change. We aim to:
Interest and involve academics—and build the status of the project.
Commission evaluations that will be visibly useful and credible.
‘Benchmark traditional publication outcomes’, track our predictiveness and impact.
Have strong leverage over research "outcomes and rewards."
Increase public visibility and raise public interest.
Bring in supporters and participants.
Achieve substantial output in a reasonable time frame and with reasonable expense.
Maintain goodwill and a justified reputation for being fair and impartial.
We hope we have identified the important considerations above, but we may be missing key points. We continue to engage in discussion and seek feedback to hone and improve our processes and approaches.
Prestigious work that seems less globally relevant: We generally will not prioritize this work unless it adds to our mission in other ways (see, e.g., our ‘sustainability’ and ‘credibility’ goals). In particular, we will prioritize such research more if:
Put broadly, we need to consider how this research allows us to achieve our own goals, in line with our . The research we select and evaluate should meaningfully drive positive change. One way we might see this process: “better research & more informative evaluation” → “better decision-making” → “better outcomes” for humanity and for non-human animals (i.e., the survival and flourishing of life, human civilization, and values).
Below, we adapt the (popular in effective altruism circles) to assess the direct impact of our evaluations.
EA and more or less adjacent: and overviews, .
Non-EA, e.g., .
We present and analyze the specifics surrounding our current evaluation data in
Below: An earlier template for considering and discussing the relevance of research. This was/is provided both for our own consideration and for sharing (in part?) with evaluators, to give . Think of these as bespoke evaluation notes for a .
As mentioned under , consider factors including importance to global priorities, relevance to the field, the commitment and contribution to open science, the authors’ engagement, and the transparency of data and reasoning. You may consider the explicitly, but not too rigidly.
Research can be "submitted" by authors (here) or "suggested" by others. For a walk-through on suggesting research, see this video example.
There are two main paths for making suggestions: through our survey form or through Airtable.
Anyone can suggest research using the survey form at https://bit.ly/ujsuggestr. (Note: if you want to submit your own research, go to bit.ly/ujsubmitr.) Please follow the steps below:
Begin by reviewing The Unjournal's guidelines on What research to target to get a sense of the research we cover and our priorities. Look for high-quality research that 1) falls within our focus areas and 2) would benefit from (further) evaluation.
When in doubt, we encourage you to suggest the research anyway.
Navigate to The Unjournal's Suggest Research Survey Form. Most of the fields here are optional. The fields ask for the following information:
Who you are: Let us know who is making the suggestion (you can also choose to stay anonymous).
If you leave your contact information, you will be eligible for financial "bounties" for strong suggestions.
If you are already a member of The Unjournal's team, additional fields will appear for you to link your suggestion to your profile in the Unjournal's database.
Research Label: Provide a short, descriptive label for the research you are suggesting. This helps The Unjournal quickly identify the topic at a glance.
Research Importance: Explain why the research is important, its potential impact, and any specific areas that require thorough evaluation.
Research Link: Include a direct URL to the research paper. The Unjournal prefers research that is publicly hosted, such as in a working paper archive or on a personal website.
Peer Review Status: Inform about the peer review status of the research, whether it's unpublished, published without clear peer review, or published in a peer-reviewed journal.
"Rate the relevance": This represents your best-guess at how relevant this work is for The Unjournal to evaluate, as a percentile relative to other work we are considering.
Research Classification: Choose categories that best describe the research. This helps The Unjournal sort and prioritize suggestions.
Field of Interest: Select the outcome or field of interest that the research addresses, such as global health in low-income countries.
Complete all the required fields and submit your suggestion. The Unjournal team will review your submission and consider it for future evaluation. You can reach out to us at contact@unjournal.org with any questions or concerns.
People on our team may find it more useful to suggest research to The Unjournal directly via Airtable. See this document for a guide. (Please request document permission to access this explanation.)
Aside on setting the prioritization ratings: In making your subjective prioritization rating, please consider “What percentile do you think this paper (or project) is relative to the others in our database, in terms of ‘relevance for The UJ to evaluate’?” (Note this is a redefinition; we previously considered these as probabilities.) We roughly plan to commission the evaluation of about 1 in 5 papers in the database, the ‘top 20%’ according to these percentiles. Please don’t consider the “publication status” or the “author's propensity to engage” in this rating. We will consider those as separate criteria.
Please don’t enter only the papers you think are ‘very relevant’; please enter all research that you have spent any substantial time considering (more than a couple of minutes). If we all do this, our percentile ratings should be approximately uniformly distributed, i.e., evenly spread over the 1-100% range.
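As a purely illustrative sketch (not part of our actual workflow; the papers, ratings, and cutoff logic below are invented for the example), this is roughly what selecting 'the top 20% by percentile rating' could look like:

```python
# Hypothetical example: short-listing roughly the top 20% of suggested papers
# by their prioritization percentile rating. All ratings are made up.
ratings = {"paper_a": 92, "paper_b": 35, "paper_c": 78, "paper_d": 15, "paper_e": 61}

n_to_commission = max(1, round(0.2 * len(ratings)))          # roughly 1 in 5
cutoff = sorted(ratings.values(), reverse=True)[n_to_commission - 1]
shortlist = [paper for paper, pct in ratings.items() if pct >= cutoff]
print(f"Cutoff percentile: {cutoff}; shortlisted: {shortlist}")
```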
David Reinstein, Nov 2024: Over the last six months we have considered and evaluated a small amount of work under this “Applied & Policy Stream”. We plan to continue this stream for the foreseeable future.
Much of the most impactful research is not aimed at academic audiences and may never be submitted to academic journals. It is written in formats that are very different from traditional academic outputs, and cannot be easily judged by academics using the same standards. Nonetheless, this work may use technical approaches developed in academia, making it important to gain expert feedback and evaluation.
The Unjournal can help here. However, to avoid confusion, we want to make this clearly distinct from our main agenda, which focuses on impactful academically-aimed research.
Our “Applied & Policy Stream” will be clearly labeled as separate from our main stream. This may constitute roughly 10 or 15% of the work that we cover. Below, we refer to this as the “applied stream” for brevity.
Our considerations for prioritizing this work are generally the same as for our academic stream – is it in the fields that we are focused on, using approaches that enable meaningful evaluation and rating? Is it already having impact (e.g., influencing grant funding in globally-important areas)? Does it have the potential for impact, and if so, is it high-quality enough that we should consider boosting its signal?
We will particularly prioritize policy and applied work that uses technical methods that need evaluation by research experts, often academics.
a range of applied research from EA/GP/LT linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, Faunalytics, etc., as well as EA-adjacent organizations and relevant government white papers.
Ratings/metrics: As in the academic stream, this work will be evaluated for its credibility, usefulness, communication/logic, etc. However, we are not seeking to have this work assessed by the standards of academia in a way that yields a comparison to traditional journal tiers. Evaluators: please ignore these parts of our interface; if you are unsure whether something is relevant, feel free to ask.
Evaluator selection, number, pay: Generally, we want to continue to select academic research experts or non-academic researchers with strong academic and methodological backgrounds to do these evaluations. The aim is to bring expert scrutiny, particularly from academia, to work that is not normally scrutinized by such experts.
The compensation may be flexible as well; in some cases the work may be more involved than for the academic stream and in some cases less involved. As a starting point we will begin by offering the same compensation as for the academic stream.
Careful flagging and signposting: To preserve the reputation of our academic-stream evaluations we need to make it clear, wherever people might see this work, that it is not being evaluated by the same standards as the academic stream and doesn't “count” towards those metrics.
This research is more likely to fall into the category of #direct-evaluation-of-impactful-work, "already influencing a substantial amount of funding in impact-relevant areas, or substantially influencing policy considerations".
If the research itself is being funded by a global-impact focused foundation or donor, this will also constitute a strong prima facie reason to commission an evaluation (without requiring the authors' consent). See this post on the EA Forum.
See sections below
For prospective evaluators: An overview of what we are asking; payment and recognition details
Guidelines for evaluators: The Unjournal's evaluation guidelines, considering our priorities and criteria, the metrics we ask for, and how these are considered.
Other sections and subsections provide further resources, consider future initiatives, and discuss our rationales.
As we are paying evaluators and have limited funding, we cannot evaluate every paper and project. Papers enter our database through:
submission by authors;
our own searches (e.g., searching syllabi, forums, working paper archives, and white papers); and
suggestions from other researchers, practitioners, and members of the public, and recommendations from . We have posted more detailed instructions for how to suggest research for evaluation.
Our management team rates the suitability of each paper according to the criteria discussed below and in the aforementioned linked post.
We have followed a few procedures for finding and prioritizing papers and projects. In all cases, we require more than one member of our research-involved team (field specialist, managers, etc.) to support a paper before prioritizing it.
We are building a grounded systematic procedure with criteria and benchmarks. We also aim to give managers and field specialists some autonomy in prioritizing key papers and projects. As noted elsewhere, we are considering targets for particular research areas and sources.
See our basic process (as of Dec. 2023) for prioritizing work: Process: prioritizing research
Through October 2022: For the papers or projects at the top of our list, we contacted the authors and asked if they wanted to engage, only pursuing evaluation if agreed.
In our Direct evaluation track, we inform authors but do not request permission. For this track, we have largely focused on NBER working papers.
In deciding which papers or projects to send out to paid evaluators, we have considered the following issues. We may share notes on these issues for each paper or project with evaluators before they write their evaluations.
Consider: global priority importance, field relevance, open science, authors’ engagement, data and reasoning transparency. In gauging this relevance, the team may consider the ITN framework, but not too rigidly.
What are (some of) the authors’ main claims that are worth carefully evaluating? What aspects of the evidence, argumentation, methods, interpretation, etc., is the team unsure about? What particular data, code, proof, etc., would they like to see vetted? If it has already been peer-reviewed in some way, why do they think more review is needed?
How well has the author engaged with the process? Do they need particular convincing? Do they need help making their engagement with The Unjournal successful?
See What research to target? for further discussion of prioritization, scope, and strategic and sustainability concerns.
You can request a conditional embargo by emailing us at contact@unjournal.org, or via the submission/response form. Please explain what sort of embargo you are asking for, and why. By default, we'd like Unjournal evaluations to be made public promptly.
However, we may make exceptions in special circumstances, particularly:
for very early-career researchers who are not clearly #high-professional-status-less-career-sensitive,
where the research is not obviously already influencing a substantial amount of funding in impact-relevant areas, or substantially influencing policy considerations
If there is an early-career researcher on the authorship team, we may allow authors to "embargo" the publication of the evaluation until a later date. Evaluators (referees) will be informed of this. This date can be contingent, but it should not be indefinite.
For example, we might grant an embargo that lasts until after a PhD/postdoc’s upcoming job market or until after publication in a mainstream journal, with a hard maximum of 14 months. (Of course, embargoes can be ended early at the request of the authors.)
In exceptional circumstances we may consider granting a ""
Note: the above are all exceptions to our regular rules; they are examples of embargoes we might or might not agree to.
Thanks for your interest in evaluating research for The Unjournal!
The Unjournal is a nonprofit organization started in mid-2022. We commission experts to publicly evaluate and rate research. Read more about us here.
Write an evaluation of a specific piece of research: essentially a standard, high-quality referee report.
research by filling in a structured form.
Answer a short questionnaire about your background and our processes.
See Guidelines for Evaluators for further details and guidance.
Why use your valuable time writing an Unjournal evaluation? There are several reasons: helping high-impact research users, supporting open science and open access, and getting recognition and financial compensation.
The Unjournal's goal is to make impactful research more rigorous, and rigorous research more impactful, while supporting open access and open science. We encourage better research by making it easier for researchers to get feedback and credible ratings. We evaluate research in high-impact areas that make a difference to global welfare. Your evaluation will:
Help authors improve their research, by giving early, high-quality feedback.
Help improve science by providing open-access, prompt, structured, public evaluations of impactful research.
Inform funding bodies and meta-scientists as we build a database of research quality, strengths and weaknesses in different dimensions. Help research users learn what research to trust, when, and how.
For more on our scientific mission, see here.
Your evaluation will be made public and given a DOI. You have the option to be identified as the author of this evaluation or to remain anonymous, as you prefer.
Evaluators are compensated for providing a complete evaluation and feedback ($100-$300 base + $100 'promptness bonus'), in line with our expected standards.
Note, Aug. 2024: we're adjusting the base compensation to reward strong work and experience.
$100 + $100 for first-time evaluators
$300 + $100 for return Unjournal evaluators and those with previous strong public review experience. We will be integrating other incentives and prizes into this, and are committed to $450 in average compensation per evaluation, including prizes.
You will also be eligible for monetary prizes for "most useful and informative evaluation," plus other bonuses. We currently (Feb. 2024) set aside an additional $150 per evaluation for incentives, bonuses, and prizes.
See also "submitting claims and expenses"
If you have been invited to be an evaluator and want to proceed, simply respond to the email invitation that we have sent you. You will then be sent a link to our evaluation form.
To sign up for our evaluator pool, see 'how to get involved'
To learn more about our evaluation process, see Guidelines for evaluators. If you are doing an evaluation, we highly recommend you read these guidelines carefully.
Unjournal evaluators have the option of remaining anonymous (see #anonymity-blinding-vs.-signed-reports). Where evaluators choose this, we will carefully protect their anonymity, aiming for a standard of protection as good as or better than that of traditional journals. We will give evaluators the option to take extra steps to safeguard this further. We offer anonymity in perpetuity to those who request it, as well as anonymity on other explicitly and mutually agreed terms.
If evaluators choose to stay anonymous, there should be no way for authors to 'guess' who has reviewed their work.
We will take steps to keep private any information that could connect the identity of an anonymous evaluator and their evaluation/the work they are evaluating.
We will take extra steps to make the possibility of accidental disclosure extremely small (this is never impossible of course, even in the case of conventional journal reviews). In particular, we will use pseudonyms or ID codes for these evaluators in any discussion or database that is shared among our management team that connects individual evaluators to research work.
If we ever share a list of Unjournal’s evaluators this will not include anyone who wished to remain anonymous (unless they explicitly ask us to be on such a list).
We will do our best to warn anonymous evaluators of ways that they might inadvertently be identifying themselves in the evaluation content they provide.
We will provide platforms to enable anonymous and secure discussion between anonymous evaluators and others (authors, editors, etc.) Where an anonymous evaluator is involved, we will encourage these platforms to be used as much as possible. In particular, see our (proposed) use of Cryptpad.
Aside: In future, we may consider allowing Evaluation Managers (formerly 'managing editors') to remain anonymous, and these tools will also be
31 Aug 2023: Our present approach is a "working solution" involving some ad-hoc and intuitive choices. We are re-evaluating the metrics we are asking for as well as the interface and framing. We are gathering some discussion in this linked Gdoc, incorporating feedback from our pilot evaluators and authors. We're also talking to people with expertise as well as considering past practice and other ongoing initiatives. We plan to consolidate that discussion and our consensus and/or conclusions into the present (Gitbook) site.
Ultimately, we're trying to replace the question "what tier of journal did a paper get into?" with "how highly was the paper rated?" We believe this is a more valuable metric: it can be more fine-grained, it should be less prone to gaming, and it reduces the randomness introduced by things like 'the availability of journal space in a particular field'. See our discussion of Reshaping academic evaluation: beyond the binary...
To get to this point, we need academia and stakeholders to see our evaluations as meaningful. We want the evaluations to begin to have measurable value, in the way that “publication in the AER” is seen to have value.
While there are some ongoing efforts towards journal-independent evaluation, these . Typically, they either have simple tick-boxes (like "this paper used correct statistical methods: yes/no") or they enable descriptive evaluation without an overall rating. As we are not a journal, and we don’t accept or reject research, we need another way of assigning value. We are working to determine the best way of doing this through quantitative ratings. We hope to be able to benchmark our evaluations to "traditional" publication outcomes. Thus, we think it is important to ask for both an overall quality rating and a journal ranking tier prediction.
In addition to the overall assessment, we think it will be valuable to have the papers rated according to several categories. This could be particularly helpful to practitioners who may care about some concerns more than others. It also can be useful to future researchers who might want to focus on reading papers with particular strengths. It could be useful in meta-analyses, as certain characteristics of papers could be weighed more heavily. We think the use of categories might also be useful to authors and evaluators themselves. It can help them get a sense of what we think research priorities should be, and thus help them consider an overall rating.
However, these ideas have been largely ad-hoc and based on the impressions of our management team (a particular set of mainly economists and psychologists). The process is still being developed. Any feedback you have is welcome. For example, are we overemphasizing certain aspects? Are we excluding some important categories?
We are also researching other frameworks, templates, and past practice; we hope to draw from validated, theoretically grounded projects such as RepliCATS.
In eliciting expert judgment, it is helpful to differentiate the level of confidence in predictions and recommendations. We want to know not only what you believe, but how strongly held your beliefs are. If you are less certain in one area, we should weigh the information you provide less heavily in updating our beliefs. This may also be particularly useful for practitioners.

Obviously, there are challenges to any approach. Even experts in a quantitative field may struggle to convey their own uncertainty. They may also be inherently "poorly calibrated" (see discussions and tools for calibration training). Some people may often be "confidently wrong." They might state very narrow "credible intervals", when the truth—where measurable—routinely falls outside these boundaries. People with greater discrimination may sometimes be underconfident.

As a side benefit, this may be interesting for research , particularly as The Unjournal grows. We see 'quantifying one's own uncertainty' as a good exercise for academics (and everyone) to engage in.
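As a small illustration of this calibration idea (a hypothetical sketch; the intervals and 'true' values below are invented, not drawn from any evaluation), one can check what fraction of an evaluator's stated 90% intervals actually contain the later-observed values:

```python
# Hypothetical calibration check for 90% credible intervals.
# A well-calibrated evaluator should see roughly 90% of measurable truths
# fall inside their stated intervals. All numbers are invented.
intervals = [(40, 70), (55, 80), (20, 45), (60, 90)]   # stated 90% intervals
truths = [65, 85, 30, 72]                               # later-observed values

covered = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
print(f"Empirical coverage: {covered / len(truths):.0%} (target: 90%)")
```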
39, 52
5
47, 54
5
45, 55
4
10, 35
3
40, 70
2
30,46
0**
21,65
We had included the note:
We give the previous weighting scheme in a fold below for reference, particularly for those reading evaluations done before October 2023.
As well as:
Suggested weighting: 0.
Elsewhere in that page we had noted:
As noted above, we give suggested weights (0–5) to suggest the importance of each category rating to your overall assessment, given The Unjournal's priorities.
The weightings were presented once again along with each description in the section "Category explanations: what you are rating".
39, 52
47, 54
45, 55
10, 35
40, 70
30,46
21,65
[FROM PREVIOUS GUIDELINES:]
You may feel comfortable giving your "90% confidence interval," or you may prefer to give a "descriptive rating" of your confidence (from "extremely confident" to "not confident").
Quantify how certain you are about this rating, either giving a 90% confidence/credibility interval or using our scale described below.
[Previous...] Remember, we would like you to give a 90% CI or a confidence rating (1–5 dots), but not both.
And, for the 'journal tier' scale:
[Previous guidelines]: The description folded below focuses on the "Overall Assessment." Please try to use a similar scale when evaluating the category metrics.
#more-reliable-precise-and-useful-metrics This page explains the value of the metrics we are seeking from evaluators.
Unjournal Evaluator Guidelines and Metrics - Discussion space
In addition to soliciting research submissions by authors, we have a process for sourcing and prioritizing unsubmitted research for evaluation. For some of this research we ask for author engagement but do not require their permission.
Aside: in the future, we hope to work directly with working paper series, associations, and research groups to get their approval and engagement with Unjournal evaluations. We hope that having a large share of papers in your series evaluated will serve as a measure of confidence in your research quality. If you are involved in such a group and are interested in this, please reach out to us.
Dec 7, 2024: We have updated some of our rules and guidelines on this page. These will be applied going forward (in future contacts with authors) but not retroactively.
All NBER working papers are generally eligible, with .
We treat these on a case-by-case basis and use discretion. All CEPR members are reasonably secure and successful, but their co-authors might not be, especially if these co-authors are PhD students they are supervising.
In some areas and fields (e.g., psychology, animal product markets) the publication process is relatively rapid or it may fail to engage general expertise. In general, all papers that are already published in peer-reviewed journals are eligible for our direct track.
These are eligible (without author permission) if all authors/all lead authors "have high professional status" or are otherwise less career-sensitive to the consequences of this evaluation.
We define this (at least for economics) as:
having a tenured or ‘long-term’ position at a well-known, respected university or other research institution, or
having a tenure-track position at a top university (e.g., top-20 globally by some credible ranking) and having published one or more papers in a "top-five-equivalent" journal, or
clearly not pursuing an academic career (e.g., the "partner at the aid agency running the trial").
On the other hand, if one or more authors is a PhD student or an untenured academic outside a "top global program,’’ then we will ask for permission and potentially offer an embargo.
An exception: if the PhD student or untenured academic is otherwise clearly extremely high-performing by conventional metrics (e.g., an REStud "tourist" or someone with multiple published papers in top-5 journals), the paper might be considered eligible for direct evaluation.
See also #direct-evaluation-of-impactful-work
Under review/R&R at a journal? The fact that a paper is under submission or in "revise and resubmit" at a top journal does not preclude us from evaluating it. In some cases it may be particularly important and helpful to evaluate work at this stage. But we'd like to be aware of this, as it can weigh into our considerations and timing.
We will also evaluate work directly, without requiring author permission, where it is clear that this research is already influencing a substantial amount of funding in impact-relevant areas, or substantially influencing policy considerations. Much of this work will be evaluated as part of our "Applied and Policy" Track.
This page describes The Unjournal's evaluation guidelines, considering our priorities and criteria, the metrics we ask for, and how these are considered.
These guidelines apply to the evaluation forms in Coda and ().
Please see for an overview of the evaluation process, as well as details on compensation, public recognition, and more.
Write an evaluation of the target research, similar to a standard, high-quality referee report. Please identify the paper's main claims and carefully assess their validity, leveraging your own background and expertise.
.
Answer a short questionnaire about your background and our processes.
In writing your evaluation and providing ratings, please consider the following.
In many ways, the written part of the evaluation should be similar to a report an academic would write for a traditional high-prestige journal (e.g., see some 'conventional guidelines' ). Most fundamentally, we want you to use your expertise to critically assess the main claims made by the authors. Are the claims well-supported? Are the assumptions believable? Are the methods appropriate and well-executed? Explain why or why not.
However, we'd also like you to pay some consideration to our priorities, including
Advancing our knowledge and supporting practitioners
Justification, reasonableness, validity, and robustness of methods
Logic and communication, intellectual modesty, transparent reasoning
Open, communicative, replicable science
If you have questions about the authors’ work, you can ask them anonymously: we will facilitate this.
We want you to evaluate the most recent/relevant version of the paper/project that you can access. If you see a more recent version than the one we shared with you, please let us know.
We designed this process to balance three considerations with three target audiences. Please consider each of these:
Crafting evaluations and ratings that help researchers and policymakers judge when and how to rely on this research. For Research Users.
Ensuring these evaluations of the papers are comparable to current journal tier metrics, to enable them to be used to determine career advancement and research funding. For Departments, Research Managers, and Funders.
Providing constructive feedback to Authors.
For some questions, we ask for a percentile ranking from 0-100%. This represents "what proportion of papers in the reference group are worse than this paper, by this criterion". A score of 100% means this is essentially the best paper in the reference group. 0% is the worst paper. A score of 50% means this is the median paper; i.e., half of all papers in the reference group do this better, and half do this worse, and so on.
Here, the population of papers should be all serious research in the same area that you have encountered in the last three years.
For each metric, we ask you to provide a 'midpoint rating' and a 90% credible interval as a measure of your uncertainty. Our interface provides slider bars to express your chosen intervals:
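To illustrate what the midpoint and 90% credible interval represent (a hypothetical sketch, not our interface: the Beta(8, 4) belief distribution is an invented example, and this assumes scipy is available), the midpoint can be read as the median of your belief distribution and the interval as its 5th to 95th percentiles:

```python
# Hypothetical sketch: a midpoint and 90% credible interval derived from an
# evaluator's belief distribution over a paper's percentile ranking (0-100).
from scipy import stats

belief = stats.beta(8, 4)                                       # invented belief over the 0-1 scale
midpoint = 100 * belief.ppf(0.50)                               # median of the belief distribution
lower, upper = 100 * belief.ppf(0.05), 100 * belief.ppf(0.95)   # 90% credible interval
print(f"midpoint ~ {midpoint:.0f}; 90% CI ~ [{lower:.0f}, {upper:.0f}]")
```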
The table below summarizes the percentile rankings.
Percentile ranking (0-100%)
Do the authors do a good job of (i) stating their main questions and claims, (ii) providing strong evidence and powerful approaches to inform these, and (iii) correctly characterizing the nature of their evidence?
Percentile ranking (0-100%)
Percentile ranking (0-100%)
(Applied stream: please focus on ‘improvements that are actually helpful’.)
Do the paper's insights inform our beliefs about important parameters and about the effectiveness of interventions?
Does the project add useful value to other impactful research?
Percentile ranking (0-100%)
Are the goals and questions of the paper clearly expressed? Are concepts clearly defined and referenced?
Are the conclusions consistent with the evidence (or formal proofs) presented? Do the authors accurately state the nature of their evidence, and the extent it supports their main claims?
Are the data and/or analysis presented relevant to the arguments made? Are the tables, graphs, and diagrams easy to understand in the context of the narrative (e.g., no major errors in labeling)?
Percentile ranking (0-100%)
This covers several considerations:
Would another researcher be able to perform the same analysis and get the same results? Are the methods explained clearly and in enough detail to enable easy and credible replication? For example, are all analyses and statistical tests explained, and is code provided?
Is the source of the data clear?
Is the data made as available as is reasonably possible? If so, is it clearly labeled and explained?
Consistency
Do the numbers in the paper and/or code output make sense? Are they internally consistent throughout the paper?
Useful building blocks
Do the authors provide tools, resources, data, and outputs that might enable or enhance future work and meta-analysis?
Does the paper consider real-world relevance and deal with policy and implementation questions? Are the setup, assumptions, and focus realistic?
Do the authors report results that are relevant to practitioners? Do they provide useful quantified estimates (costs, benefits, etc.) enabling practical impact quantification and prioritization?
Do they communicate (at least in the abstract or introduction) in ways policymakers and decision-makers can understand, without misleading or oversimplifying?
To help universities and policymakers make sense of our evaluations, we want to benchmark them against how research is currently judged. So, we would like you to assess the paper in terms of journal rankings. We ask for two assessments:
a normative judgment about 'how well the research should publish';
a prediction about where the research will be published.
Journal ranking tiers are on a 0-5 scale, as follows:
1/5: OK/Somewhat valuable journal
2/5: Marginal B-journal/Decent field journal
3/5: Top B-journal/Strong field journal
4/5: Marginal A-Journal/Top field journal
5/5: A-journal/Top journal
As before, we ask for a 90% credible interval.
Journal ranking tier (0.0-5.0)
Assess this paper on the journal ranking scale described above, considering only its merit and giving some weight to the category metrics we discussed above. In doing so, imagine that:
the journal process was fair, unbiased, and free of noise, and that status, social connections, and lobbying to get the paper published didn’t matter;
journals assessed research according to the category metrics we discussed above.
Journal ranking tier (0.0-5.0)
We want policymakers, researchers, funders, and managers to be able to use The Unjournal's evaluations to update their beliefs and make better decisions. To do this well, they need to weigh multiple evaluations against each other and other sources of information. Evaluators may feel confident about their rating for one category, but less confident in another area. How much weight should readers give to each? In this context, it is useful to quantify the uncertainty.
But it's hard to quantify statements like "very certain" or "somewhat uncertain" – different people may use the same phrases to mean different things. That's why we're asking you for a more precise measure: your credible intervals. These metrics are particularly useful for meta-science and meta-analysis.
We are now asking evaluators for “claim identification and assessment” where relevant. This is meant to help practitioners use the research to inform their funding, policymaking, and other decisions. It is not intended as a metric of research quality per se. This is not required, but we will reward this work.
Lastly, we ask evaluators about their background, and for feedback about the process.
Length/time spent: This is up to you. We welcome detail, elaboration, and technical discussion.
We are considering asking evaluators, with compensation, to assist and engage in the process of "robustness replication." This may lead to some interesting follow-on possibilities as we build our potential collaboration with the Institute for Replication and others in this space.
We might ask evaluators discussion questions like these:
What is the most important, interesting, or relevant substantive claim made by the authors, (particularly considering global priorities and potential interventions and responses)?
What statistical test or evidence does this claim depend on, according to the authors?
How confident are you in the substantive claim made?
"Robustness checks": What specific statistical test(s) or piece(s) of evidence would make you substantially more confident in the substantive claim made?
If a robustness replication "passed" these checks, how confident would you be then in the substantive claim? (You can also express this as a continuous function of some statistic rather than as a binary; please explain your approach.)
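To illustrate the parenthetical in the last question (a hypothetical sketch; the logistic form and parameter values are arbitrary choices of ours, not an Unjournal standard), confidence in a claim could be expressed as a smooth function of a robustness-check statistic rather than as a pass/fail verdict:

```python
# Hypothetical sketch: mapping a robustness-check t-statistic to a 0-1
# confidence level via a logistic curve. Parameters are arbitrary examples.
import math

def confidence_in_claim(t_statistic: float, midpoint: float = 2.0, slope: float = 1.5) -> float:
    """Return a 0-1 confidence level as a continuous function of the t-statistic."""
    return 1.0 / (1.0 + math.exp(-slope * (t_statistic - midpoint)))

for t in (1.0, 2.0, 3.0):
    print(f"t = {t:.1f} -> confidence ~ {confidence_in_claim(t):.2f}")
```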
Background:
The Institute for Replication is planning to hire experts to do "robustness replications" of work published in top journals in economics and political science. Code and data sharing is now being enforced at many of these journals and other important outlets. We want to support their efforts and are exploring collaboration possibilities. We are also considering how best to guide potential future robustness-replication work.
We're happy for you to use whichever process and structure you feel comfortable with when writing your evaluation content.
Remember: The Unjournal doesn’t “publish” and doesn’t “accept or reject.” So don’t give an 'Accept', 'Revise-and-Resubmit', or 'Reject'-type recommendation. We ask for quantitative metrics, written feedback, and expert discussion of the validity of the paper's main claims, methods, and assumptions.
Economics
Semi-relevant:
Report:
Open Science
General, other fields
It's a norm in academia that people do reviewing work for free. So why is The Unjournal paying evaluators?
From a recent survey:
We estimate that the average (median) respondent spends 12 (9) working days per year on refereeing. The top 10% of the distribution dedicates 25 working days or more, which is quite substantial considering refereeing is usually unpaid.
The peer-review process in economics is widely argued to be too slow and lengthy. But there is evidence that payments may help improve this.
In , they note that few economics journals currently pay reviewers, and that these payments tend to be small (e.g., the JPE and AER paid $100 at the time). However, they also note, citing several papers:
The existing evidence summarized in Table 5 suggests that offering financial incentives could be an effective way of reducing turnaround time.
notes that the work of reviewing is not distributed equally. To the extent that agreeing to write a report is based on individual goodwill, the unpaid volunteer model could be seen as unfairly penalizing more generous and sympathetic academics. Writing a certain number of referee reports per year is generally considered part of "academic service": academics put this on their CVs, and it may lead to a place on a journal's board, which is valued to an extent. However, this is much less attractive for researchers who are not tenured university professors; paying for this work would do a better job of including them in the process.
'Payment for good evaluation work' may also lead to fairer and more useful evaluations.
In the current system academics may take on this work in large part to try to impress journal editors and get favorable treatment from them when they submit their own work. They may also write reviews in particular ways to impress these editors.
For less high-prestige journals, to get reviewers, editors often need to lean on their personal networks, including those they have power relationships with.
Reviewers are also known to strategically try to get authors to cite and praise the reviewers' own work. They may be especially critical of authors they see as rivals.
To the extent that reviewers are doing this as a service they are being paid for, these other motivations will be comparatively less important. The incentives will be more aligned with producing evaluations that are seen as valuable by the managers of the process, in order to be chosen for further paid work. (And, if evaluations are public, the managers can consider public feedback on these reports as well.)
We are not ‘just another journal.’ We need to give incentives for people to put effort into a new system and help us break out of the old inferior equilibrium.
In some senses, we are asking for more than a typical journal. In particular, our evaluations will be made public and thus need to be better communicated.
We cannot rely on 'reviewers taking on work to get better treatment from editors in the future': this does not apply to our model, as we don't have editors making any sort of ‘final accept/reject decision’.
Paying evaluators brings in a wider set of evaluators, including non-academics. This is particularly relevant to our impact-focused goals.
Previous/less emphasized: Society
Evaluations and author responses are given DOIs and enter the bibliometric record.
Future consideration:
"publication tier" of authors' responses as a workaround to encode aggregated evaluation
Hypothes.is annotation of hosted and linked papers and projects (aiming to integrate: see: )
Sharing evaluation data on public Github repo (see )
We aim to elicit expert judgment from Unjournal evaluators efficiently and precisely. We aim to communicate this quantitative information concisely and usefully, in ways that will inform policymakers, philanthropists, and future researchers.
In the short run (in our pilot phase), we will attempt to present simple but reasonable aggregations, such as simple averages of midpoints and confidence-interval bounds. However, going forward, we are consulting and incorporating the burgeoning academic literature on "aggregating expert opinion." (See, e.g., ; ; ; .)
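For concreteness, here is a minimal sketch of the 'simple averages' approach mentioned above (our own illustration, not the final aggregation method; all values are invented):

```python
# Hypothetical sketch: averaging midpoints and 90% CI bounds across evaluators
# for a single rating category. All values are invented.
evaluations = [
    {"midpoint": 70, "ci": (55, 85)},
    {"midpoint": 80, "ci": (65, 92)},
    {"midpoint": 62, "ci": (40, 78)},
]

n = len(evaluations)
agg_mid = sum(e["midpoint"] for e in evaluations) / n
agg_lo = sum(e["ci"][0] for e in evaluations) / n
agg_hi = sum(e["ci"][1] for e in evaluations) / n
print(f"Aggregated midpoint: {agg_mid:.1f}; aggregated 90% CI: [{agg_lo:.1f}, {agg_hi:.1f}]")
```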
We are working on this in our public data presentation (Quarto notebook) .
We are considering...
Syntheses of evaluations and author feedback
Input to prediction markets, replication projects, etc.
A simplified rendering, skipping some steps and possibilities
The flowchart below focuses on the evaluation part of our process in detail. See for a more condensed flowchart.
(Section updated 1 August 2023)
Submission/selection (multiple routes)
Author (A) submits work (W), creates new submission (submits a URL and DOI), through our platform or informally.
Author (or someone on their behalf) can complete a submission form; this includes a potential "request for embargo" or other special treatment.
Managers and field specialists select work (or the project is submitted independently of authors) and the management team agrees to prioritize it.
For either of these cases (1 or 2), authors are asked for permission.
Alternate : "Work enters prestige archive" (NBER, CEPR, and some other cases).
Managers inform and consult the authors, but permission is not required. (Particularly relevant: we confirm with the authors that we have the latest updated version of the research.)
Prioritization
Following author submission ...
Manager(s) (M) and Field Specialists (FS) prioritize work for review (see ).
Following direct evaluation selection...
"evaluation suggestions" (see ) explaining why it's relevant, what to evaluate, etc., to be shared later with evaluators.
If requested (in either case), M decides whether to grant embargo or other special treatment, notes this, and informs authors.
An Evaluation Manager (EM – typically part of our ) is assigned to the selected project.
EM invites evaluators (aka "reviewers") and shares the paper to be evaluated along with (optionally) a brief summary of why The Unjournal thinks it's relevant, and what we are asking.
Potential evaluators are given full access to (almost) all information submitted by the author and M, and notified of any embargo or special treatment granted.
EM may make special requests to the evaluator as part of a management policy (e.g., "signed/unsigned evaluation only," short deadlines, extra incentives as part of an agreed policy, etc.).
EM (optionally) may add "evaluation suggestions" to share with the evaluators.
Evaluator accepts or declines the invitation to review, and if the former, agrees on a deadline (or asks for an extension).
If the evaluator accepts, the EM shares full guidelines/evaluation template and specific suggestions with the evaluator.
Evaluator completes .
Evaluator submits evaluation including numeric ratings and predictions, plus "CI's" for these.
Possible addition (future plan): Reviewer asks for minor revisions and corrections; see "How revisions might be folded in..." in the fold below.
EM collates all evaluations/reviews, shares these with Author(s).
The EM must be very careful not to share evaluators' identities at this point.
This includes caution to avoid accidentally-identifying information, especially where .
Even if evaluators chose to "sign their evaluation," their identity should not be disclosed to authors at this point. However, evaluators are told they can reach out to the
Evaluations are shared with the authors as a separate doc, set of docs, file, or space; which the . (Going forward, this will be automated.)
It is made clear to authors that their responses will be published (and given a DOI, when possible).
Author(s) read(s) the evaluations and are given two working weeks to submit responses.
If there is an embargo, there is more time to do this, of course.
EM creates evaluation summary and "EM comments."
EM or UJ team publishes each element on our PubPub space as a separate "pub", with a DOI for each (unless embargoed):
Summary and EM comments
With a prominent section for the "ratings data tables"
Each evaluation, with summarized ratings at the top
The author response
All of the above are linked in a particular way, with particular settings;
Authors and evaluators are informed once elements are on PubPub; next steps include promotion, checking bibliometrics, etc.
("Ratings and predictions data" to enter an additional public database.)
Note that we intend to automate and integrate much of this process into an editorial-management-like system in PubPub.
In our current (8 Feb 2023 pilot) phase, we have the evaluators consider the paper "as is," frozen at a certain date, with no room for revisions. The authors can, of course, revise the paper on their own and even pursue an updated Unjournal review; we would like to include links to the "permanently updated version" in the Unjournal evaluation space.
After the pilot, we may consider making minor revisions part of the evaluation process. This may add substantial value to the papers and process, especially where evaluators identify straightforward and easily-implementable improvements.
We don't want to replicate the slow and inefficient processes of the traditional system. Essentially, we want evaluators to give a report and rating as the paper stands.
(holistic, most important!)
The Calibrate Your Judgment app from Clearer Thinking is fairly helpful and fun for practicing and checking how good you are at expressing your uncertainty. It requires creating an account, but that doesn't take long. The 'Confidence Intervals' training seems particularly relevant for our purposes.
See our guidelines below for more details on each of these. Please don't structure your review according to these metrics; just pay some attention to them.
We discuss this, and how it relates to our impact and "theory of change," in the linked discussion.
We ask for a set of nine quantitative metrics. For each metric, we ask for a score and a 90% credible interval. We describe these in detail below.
See below for more guidance on uncertainty, credible intervals, and the midpoint rating as the 'median of your belief distribution'.
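To restate this compactly (our own notation, for illustration only): let F be the cumulative distribution function of your beliefs about the true value of a metric. Then the midpoint and the 90% credible interval can be written as:

```latex
% Illustrative notation, not part of the official guidelines.
\[
  \text{midpoint } m:\quad F(m) = 0.5
  \qquad \text{(the median of your belief distribution)}
\]
\[
  \text{90\% credible interval } [L, U]:\quad F(U) - F(L) = 0.90,
  \quad \text{e.g., } F(L) = 0.05,\ F(U) = 0.95 .
\]
```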
Judge the quality of the research heuristically. Consider all aspects of quality, credibility, importance to future impactful applied research, and practical relevance and usefulness.
Are the methods used well-justified and explained; are they a reasonable approach to answering the question(s) in this context? Are the underlying assumptions reasonable?
Are the results and methods likely to be robust to reasonable changes in the underlying assumptions?
Avoiding bias and questionable research practices (QRP): Did the authors take steps to reduce bias from opportunistic reporting? For example, did they do a strong pre-registration and pre-analysis plan, incorporate multiple hypothesis testing corrections, and report flexible specifications?
To what extent does the project contribute to the field or to practice, particularly in ways that are relevant to global priorities and impactful interventions?
Is the "? Are assumptions made explicit? Are all logical steps clear and correct? Does the writing make the argument easy to follow?
Are the paper’s chosen topic and approach relevant to global priorities and high-impact interventions?
Are the assumptions and setup realistic and relevant to the real world?
Could the paper's topic and approach help inform high-impact decisions and interventions?
Most of the work in our scope will not be targeting academic journals. Still, in some cases it might make sense to make this comparison; e.g., if particular aspects of the work might be rewritten and submitted to academic journals, or if the work uses certain techniques that might be directly compared to academic work. If you believe a comparison makes sense, please consider giving an assessment below, making reference to our guidelines and how you are interpreting them in this case.
0/5: "/little to no value". Unlikely to be cited by credible researchers
We give some example journal rankings below, based on SJR and ABS ratings.
We encourage you to give non-integer scores, e.g., 4.6 or 2.2.
PubPub note: as of 14 March 2024, the PubPub form is not allowing you to give non-integer responses. Until this is fixed, you may use the Coda form instead.
Equivalently, if:
You are asked to give a 'midpoint' and a 90% credible interval. Consider this interval as the range that you believe is 90% likely to contain the true value. See the fold below for further guidance.
You are also asked to give a 90% credible interval. Consider this as the range that you believe is 90% likely to contain the true value.
For more information on credible intervals, the resources linked below may be helpful.
If you are "", your 90% credible intervals should contain the true value 90% of the time.
If you are "", your 90% credible intervals should contain the true value 90% of the time. To understand this better, assess your ability, and then practice to get better at estimating your confidence in results. will help you get practice at calibrating your judgments. We suggest you choose the "Calibrate your Judgment" tool, and select the "confidence intervals" exercise, choosing 90% confidence. Even a 10 or 20 minute practice session can help, and it's pretty fun.
For the two questions below, we will publish your responses unless you specifically ask for them to be kept anonymous.
Answers to the questions
12 Feb 2024: We are moving to a hosted form/interface in PubPub. That form is still somewhat a work-in-progress, and may need some further guidance; we try to provide this below, but please contact us with any questions. If you prefer, you can also submit your response in a Google Doc and share it back with us. Click the link to make a new copy of that document directly.
Standard guidance recommends a 2–3 page referee report; some suggest this is relatively short, but confirm that brevity is desirable. In surveys, economists report spending (median and mean) about one day per report, with substantial shares reporting "half a day" and "two days." We expect that reviewers tend to spend more time on papers for high-status journals, and when reviewing work that is closely tied to their own agenda.
We have made some adjustments to this page and to our guidelines and processes; this is particularly relevant for considering earlier evaluations.
If you still have questions, please contact us, or see our FAQ.
Our data protection statement is linked below.
(Conventional but open access; simple and brief)
(Open-science-aligned; perhaps less detail-oriented than we are aiming for)
(Journal-independent “PREreview”; detailed; targets ECRs)
(Conventional; general)
(extensive resources; only some of this is applicable to economics and social science)
‘the 4 validities’ and
Less technical summaries and policy-relevant summaries, e.g., for the , , or mainstream long-form outlets
We also want to encourage treating papers as ongoing projects. The authors can improve the work, if they like, and resubmit it for a new evaluation.
Overall assessment (0–100%)
Claims, strength and characterization of evidence (0–100%)
Methods: Justification, reasonableness, validity, robustness (0–100%)
Advancing knowledge and practice (0–100%)
Logic and communication (0–100%)
Open, collaborative, replicable science (0–100%)
Relevance to global priorities (0–100%)
What journal ranking tier should this work be published in? (0.0–5.0, with lower and upper bounds of a 90% credible interval)
What journal ranking tier will this work be published in? (0.0–5.0, with lower and upper bounds of a 90% credible interval)
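Purely for illustration, a single evaluator's submission could be represented as follows. The field names and numbers are hypothetical (this is not our actual form or database schema); each entry pairs a midpoint with the lower and upper bounds of a 90% credible interval.

```python
# Hypothetical representation of one evaluation's ratings; field names and
# values are illustrative only. Percentile metrics use a 0-100 scale; the
# journal-tier questions use a 0.0-5.0 scale. Each entry is
# (midpoint, lower bound, upper bound) of a 90% credible interval.
evaluation_ratings = {
    "overall_assessment": (78, 65, 88),
    "claims_and_evidence": (70, 55, 82),
    "methods": (74, 60, 85),
    "advancing_knowledge": (68, 50, 80),
    "logic_and_communication": (80, 70, 90),
    "open_science": (60, 45, 75),
    "relevance_to_global_priorities": (85, 72, 93),
    "journal_tier_should": (3.6, 3.0, 4.2),
    "journal_tier_will": (3.1, 2.4, 3.8),
}

# Basic sanity check: each midpoint should lie inside its stated interval.
for name, (mid, low, high) in evaluation_ratings.items():
    assert low <= mid <= high, f"{name}: midpoint outside credible interval"
```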
Text to accompany the Impactful Research Prize discussion
Note: This section largely repeats content in our guide for researchers/authors, especially our FAQ on "why engage."
Jan. 2024: We have lightly updated this page to reflect our current systems.
We describe the nature of the work we are looking to evaluate, along with examples, in this forum post. Update 2024: This is now better characterized under "What research to target?" and "What specific areas do we cover?".
If you are interested in submitting your work for public evaluation, we are looking for research that is relevant to global priorities, especially quantitative social science and impact evaluations. Work that would benefit from further feedback and evaluation is also of interest.
Your work will be evaluated using our evaluation guidelines and metrics. You can read these here before submitting.
Important Note: We are not a journal. By having your work evaluated, you will not be giving up the opportunity to have your work published in a journal. We simply operate a system that allows you to have your work independently evaluated.
If you think your work fits our criteria and would like it to be publicly evaluated, please submit your work through this form.
If you would like to submit more than one of your papers, you will need to complete a new form for each paper you submit.
By default, we would like Unjournal evaluations to be made public. We think public evaluations are generally good for authors, as explained here. However, in special circumstances and particularly for very early-career researchers, we may make exceptions.
If there is an early-career researcher on the author team, we will allow authors to "embargo" the publication of the evaluation until a later date. This date is contingent, but not indefinite. The embargo lasts until after a PhD/postdoc’s upcoming job search or until it has been published in a mainstream journal, unless:
the author(s) give(s) earlier permission for release; or
a fixed upper limit of 14 months is reached (see the illustrative sketch below).
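As a rough illustration of how the cap interacts with the contingent end date (hypothetical dates, not a statement of policy): the embargo ends at the earlier of the agreed contingent event and the 14-month limit.

```python
# Illustrative only; dates are hypothetical. The embargo ends at the earlier
# of (a) the agreed contingent event (e.g., the end of a job search or
# publication in a mainstream journal) and (b) a fixed cap of 14 months.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    year, month = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(year, month + 1, min(d.day, 28))

embargo_start = date(2024, 3, 1)       # e.g., when the evaluation is accepted
contingent_event = date(2025, 9, 15)   # e.g., end of the author's job search
cap = add_months(embargo_start, 14)    # fixed upper limit: 2025-05-01

embargo_ends = min(contingent_event, cap)
print(embargo_ends)  # 2025-05-01 -- the 14-month cap binds in this example
```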
If you would like to request an exception to a public evaluation, you will have the opportunity to explain your reasoning in the submission form.
See "Conditional embargos & exceptions" for more detail, and examples.
The Unjournal presents an additional opportunity for evaluation of your work with an emphasis on impact.
Substantive feedback will help you improve your work—especially useful for young scholars.
Ratings can be seen as markers of credibility for your work that could help your career advancement at least at the margin, and hopefully help a great deal in the future. You also gain the opportunity to publicly respond to critiques and correct misunderstandings.
You will gain visibility and a connection to the EA/Global Priorities communities and the Open Science movement.
You can take advantage of this opportunity to gain a reputation as an ‘early adopter and innovator’ in open science.
You can win prizes: You may win a “best project prize,” which could be financial as well as reputational.
Entering into our process will make you more likely to be hired as a paid reviewer or editorial manager.
We will encourage media coverage.
If we consider your work for public evaluation, we may ask for some of the items below, although most are optional. We will aim to make this a very light touch for authors.
A link to a non-paywalled, hosted version of your work (in any format—PDFs are not necessary) that can be given a Digital Object Identifier (DOI). Again, we will not be "publishing" this work, just evaluating it.
A link to data and code, if possible. We will work to help you to make it accessible.
Assignment of two evaluators who will be paid to assess your work. We will likely keep their identities confidential, although this is flexible depending on the reviewer. Where it seems particularly helpful, we will facilitate a confidential channel to enable a dialogue with the authors. One person on our managing team will handle this process.
Have evaluators publicly post their evaluations (i.e., 'reviews') of your work on our platform. As noted above, we will ask them to provide feedback, thoughts, suggestions, and some quantitative ratings for the paper.
By completing the submission form, you are providing your permission for us to post the evaluations publicly unless you request an embargo.
You will have a two-week window to respond through our platform before anything is posted publicly. Your responses can also be posted publicly.
For more information on why authors may want to engage and what we may ask authors to do, please see For researchers/authors.
This discussion is a work-in-progress
We are targeting global priorities-relevant research...
With the potential for impact, and with the potential for Unjournal evaluations to have an impact (see our high-level considerations and our prioritization ratings discussions).
Our focus is quantitative work that informs global priorities (see linked discussion), especially in the areas outlined below, informing our Theory of Change.
We give a data presentation of the work we have already covered and the work we are prioritizing here, which will be continually updated.
But what does this mean in practice? What specific research fields, topics, and approaches are we likely to classify as 'relevant to evaluate'?
We give some lists and annotated examples below.
As of January 2024 The Unjournal focuses on ...
Research where the fundamental question being investigated involves human behavior and beliefs and the consequences of these. This may involve markets, production processes, economic constraints, social interactions, technology, the 'market of ideas', individual psychology, government processes, and more. However, the main research question should not revolve around issues outside of human behavior, such as physical science, biology, or computer science and engineering. These areas are out of our scope (at least for now).
Research that is fundamentally quantitative. It will generally involve or consider measurable inputs, choices, and outcomes; specific categorical or quantitative questions; analytical and mathematical reasoning; hypothesis testing and/or belief updating, etc.
Research that targets and addresses a single specific question or goal, or a small cluster of these. It should not mainly be a broad discussion and overview of other research or conceptual issues.
This generally involves the following academic fields:
Economics
Applied Statistics (and some other applied math)
Psychology
Political Science
Other quantitative social science fields (perhaps Sociology)
Applied "business school" fields: finance, accounting, operations, etc.
Applied "policy and impact evaluation" fields
Life science/medicine where it targets human behavior/social science
These discipline/field boundaries are not strict; they may adapt as we grow
These were chosen in light of two main factors:
Our founder and our team are most comfortable assessing and managing the consideration of research in these areas.
These fields seem to be particularly amenable to, and able to benefit from, our journal-independent evaluation approach. Other fields, such as biology, are already being 'served' by strong initiatives like Peer Community In.
To do: We will give and explain some examples here
The Unjournal's mission is to prioritize
research with the strongest potential for a positive impact on global welfare
where public evaluation of this research will have the greatest impact
Given this broad goal, we consider research into any cause, topic, or outcome, as long as the research involves fields, methods, and approaches within our domain (see above), and as long as the work meets our other requirements (e.g., research must be publicly shared without a paywall).
While we don't have rigid boundaries, we are nonetheless focusing on certain areas:
(As of Jan. 2024) we have mainly commissioned evaluations of work involving development economics and health-related outcomes and interventions in low-and middle-income countries.
As well as research involving
Environmental economics, conservation, harm to human health
The social impact of AI and emerging technologies
Economics, welfare, and governance
Catastrophic risks; predicting and responding to these risks
The economics of innovation; scientific progress and meta-science
The economics of health, happiness, and wellbeing
We are currently prioritizing further work involving
Psychology, behavioral science, and attitudes: the spread of misinformation; other-regarding preferences and behavior; moral circles
Animal welfare: markets, attitudes
Methodological work informing high-impact research (e.g., methods for impact evaluation)
We are also considering prioritizing work involving
AI governance and safety
Quantitative political science (voting, lobbying, attitudes)
Political risks (including authoritarian governments and war and conflict)
Institutional decisionmaking and policymaking
Long-term growth and trends; the long-term future of civilization; forecasting
To do: We will give and explain some examples here