Kickstarter incentive: After the first 8 quality submissions (or by Jan. 1, 2025, whichever comes later) we will award a prize of $500 to the strongest evaluation.
The Unjournal is seeking academics, researchers, and students to submit structured evaluations of the most impactful research. Strong evaluations will be posted or linked on our PubPub community, offering readers a perspective on the implications, strengths, and limitations of the research. These evaluations can be submitted using this form for academic-targeted research or this form for applied-stream work; evaluators can publish under their name or maintain anonymity; we also welcome collaborative evaluation work. We will facilitate, promote, and encourage these evaluations in several ways, described below.
We are particularly looking for people with research training, experience, and expertise in quantitative social science and statistics, including cost-benefit modeling and impact evaluation. This could include professors, other academic faculty, postdocs, researchers outside of academia, quantitative consultants and modelers, PhD students, and students aiming towards PhD-level work (pre-docs, research MSc students, etc.). But anyone is welcome to give this a try — when in doubt, please go for it.
We are also happy to support collaborations and group evaluations. There is a good track record for this — see “What is a PREreview Live Review?”, ASAPBio’s Crowd preprint review, I4replication.org, and repliCATS for examples in this vein. We may also host live events and/or facilitate asynchronous collaboration on evaluations.
Instructors/PhD, MRes, Predoc programs: We are also keen to work with students and professors to integrate ‘independent evaluation assignments’ (aka ‘learn to do peer reviews’) into research training.
Your work will support The Unjournal’s core mission — improving impactful research through journal-independent public evaluation. In addition, you’ll help research users (policymakers, funders, NGOs, fellow researchers) by providing high quality detailed evaluations that rate and discuss the strengths, limitations, and implications of research.
Doing an independent evaluation can also help you. We aim to provide feedback to help you become a better researcher and reviewer. We’ll also give prizes for the strongest evaluations. Lastly, writing evaluations will help you build a portfolio with The Unjournal, making it more likely we will commission you for paid evaluation work in the future.
We focus on rigorous, globally-impactful research in quantitative social science and policy-relevant research. (See “What specific areas do we cover?” for details.) We’re especially eager to receive independent evaluations of:
Research we publicly prioritize: see our public list of research we've prioritized or evaluated.
Research we previously evaluated (see the public list, as well as https://unjournal.pubpub.org/).
Work that other people and organizations suggest as having high potential for impact/value of information (also see Evaluating Pivotal Questions)
You can also suggest research yourself here and then do an independent evaluation of it.
We’re looking for careful methodological/technical evaluations that focus on research credibility, impact, and usefulness. We want evaluators to dig into the weeds, particularly in areas where they have aptitude and expertise. See our guidelines.
The Unjournal’s structured evaluation forms: We encourage evaluators to do these using either:
Our Academic (main) stream form: if you are evaluating research aimed at an academic journal, or
Our ‘Applied stream’ form: if you are evaluating research that is probably not aimed at an academic journal. This may include somewhat less technical work, such as reports from policy organizations and think tanks, or impact assessments and cost-benefit analyses.
Other public evaluation platforms: We are also open to engaging with evaluations done on existing public evaluation platforms such as PREreview.org. Evaluators: If you prefer to use another platform, please let us know about your evaluation using one of the forms above. If you like, you can leave most of our fields blank, and provide a link to your evaluation on the other public platform.
Academic (~PhD) assignments and projects: We are also looking to build ties with research-intensive university programs; we can help you structure academic assignments and provide external reinforcement and feedback. Professors, instructors, and PhD students: please contact us (contact@unjournal.org).
We will encourage all these independent evaluations to be publicly hosted, and will share links to them. We will further promote the strongest independent evaluations, potentially on our own platforms (such as unjournal.pubpub.org).
However, when we host or link these, we will keep them clearly separated and signposted as distinct from the commissioned evaluations; independent evaluations will not be considered official, and their ratings won’t be included in our ‘main data’ (see the dashboard here).
Bounties: We will offer prizes for the ‘most valuable independent evaluations’.
As a start, after the first eight quality submissions (or by Jan. 1, 2025, whichever comes later), we will award a prize of $500 to the most valuable evaluation.
Further details tbd.
All evaluation submissions will be eligible for these prizes and “grandfathered in” to any prizes announced later. We will announce and promote the prize winners (unless they opt for anonymity).
Evaluator pool: People who submit evaluations can elect to join our evaluator pool. We will consider and (time-permitting) internally rate these evaluations. People who do the strongest evaluations in our focal areas are likely to be commissioned as paid evaluators for The Unjournal.
We’re also moving towards a two-tiered base rate: we will offer a higher rate to people who can demonstrate previous strong review/evaluation work. These independent evaluations will count towards this ‘portfolio’.
Our PubPub page provides examples of strong work, including the prize-winning evaluations.
We will curate guidelines and learning materials from relevant fields and from applied work and impact-evaluation. For a start, see "Conventional guidelines for referee reports" in our knowledge base.
We are reaching out to PhD programs and pre-PhD research-focused programs. Some curricula already involve “mock referee report” assignments. We hope professors will encourage their students to do these through our platform. In return, we’ll offer the incentives and promotion mentioned above, as well as resources, guidance, and some further feedback.
5. Fostering a positive environment for anonymous and signed evaluations
We want to preserve a positive and productive environment. This is particularly important because we will be accepting anonymous content. We will take steps to ensure that the system is not abused. If the evaluations have an excessively negative tone, have content that could be perceived as personal attacks, or have clearly spurious criticism, we will ask the evaluators to revise this, or we may decide not to post or link it.
Crowdsourced feedback can add value in itself; encouraging this can enable some public evaluation and discussion of work that The Unjournal doesn’t have the bandwidth to cover.
Improving our evaluator pool and evaluation standards in general.
Students and ECRs can practice and (where possible) get feedback on independent evaluations.
They can demonstrate their abilities publicly, enabling us to recruit and commission the strongest evaluators.
Examples will help us build guidelines, resources, and insights into ‘what makes an evaluation useful’.
This provides us with opportunities to engage with academia, especially in PhD programs and research-focused instruction.
The Unjournal commissions public evaluations of impactful research in the quantitative social sciences. We are an alternative and a supplement to traditional academic peer-reviewed journals – separating evaluation from journals unlocks a range of benefits. We ask expert evaluators to write detailed, constructive, critical reports. We also solicit a set of structured ratings focused on research credibility, methodology, careful and calibrated presentation of evidence, reasoning transparency, replicability, relevance to global priorities, and usefulness for practitioners (including funders, project directors, and policymakers who rely on this research). While we have mainly targeted impactful research from academia, our ‘applied stream’ covers impactful work that uses formal quantitative methods but is not aimed at academic journals. So far, we’ve commissioned about 50 evaluations of 24 papers, and published these evaluation packages on our PubPub community, linked to academic search engines and bibliometrics.
Did you just write a brilliant peer review for an economics (or social science, policy, etc.) journal? Your work shouldn’t be wasted; there should be a way to share your insights and get credit!
Consider transforming these insights into a public "independent evaluation" for The Unjournal. This will benefit the community and help make research better and more impactful. We can also share your work and provide you with feedback. This will help you build a portfolio with The Unjournal, making it more likely we'll hire you for paid work and compensate you at the higher rate. And we offer prizes for the best work.
You can do this either anonymously or under your own name.
To say this in more detail:
Journal peer review is critical for assessing and improving research, but too often these valuable discussions remain hidden behind closed doors. By publishing a version of your review, you can:
1. Amplify the impact of your reviewing efforts by contextualizing the research for a broader audience.
2. Facilitate more transparent academic discussions around the strengths and limitations of the work.
3. Get public recognition for your peer review contributions, which are often unseen and unrewarded.
4. Reduce overall reviewing burdens by allowing your assessment to be reused.
5. Support a culture of open scholarship by modeling constructive feedback on public research.
According to the COPE discussion document “Who ‘owns’ peer reviews?” (emphasis added):
While the depth of commentary may vary greatly among reviews, given the minimal thresholds set by copyright law, it can be presumed that most reviews meet the requirements for protection as an “original work of authorship”. As such, in the absence of an express transfer of copyright or a written agreement between the reviewer and publisher establishing the review as a “work for hire”, it may be assumed that, by law, the reviewer holds copyright to their reviewer comments and thus is entitled to share the review however the reviewer deems fit...
The COPE council notes precisely the benefits we are aiming to unlock. They mention an 'expectation of confidentiality' that seems incompletely specified.
For example, reviewers may wish to publish their reviews in order to demonstrate their expertise in a subject matter and to contribute to their careers as a researcher. Or they may see publication of their reviews as advancing discourse on the subject and thus acting for the benefit of science as a whole. Nevertheless, a peer reviewer’s comments are significantly different from many other works of authorship in that they are expressly solicited as a work product by a journal and—whatever the peer review model—are subject to an expectation of confidentiality. However, without an express agreement between the journal and the reviewer, it is questionable whether such obligation of confidentiality should be considered to apply only until a final decision is reached on the manuscript, or to extend indefinitely.
Several journals explicitly agree that reviewers are welcome to publish the content of their reviews, with some important caveats. The Publish Your Reviews initiative gathered public statements from several journals and publishers confirming that they support reviewers posting their comments externally. However, they generally ask reviewers to remove any confidential information before sharing their reviews. This includes: the name of the journal, the publication recommendation (e.g., accept, revise, or reject), and any other details the journal or authors considered confidential, such as unpublished data.
For these journals, we are happy to accept and share/link the verbatim content as part of an independent Unjournal evaluation.
But even for journals that have not signed on to this, as the COPE document notes, your peer review is your intellectual property; it is not owned by the journal!
There may be some terms and conditions you agreed to as part of submitting a referee report. Please consult these carefully.
However, you are still entitled to share your own expert opinions on publicly shared research. You may want to rewrite the review somewhat. You should make it clear that it refers to the publicly shared (working paper/preprint) version of the research, not the one the journal shared with you in confidence. As above, you should probably not mention the journal name, the decision, or any other sensitive information. You don't even need to mention that you reviewed the paper for a journal.
Even if a journal considers the specific review confidential, this doesn't prevent the reviewer from expressing their independent assessment elsewhere.
As an expert reviewer, you have unique insights that can improve the quality and impact of research. Making your assessment available through The Unjournal amplifies the reach and value of your efforts. You can publish evaluations under your name or remain anonymous.
Ready to make your peer reviews work harder for science? Consider submitting an independent evaluation, for recognition, rewards, and to improve research. Contact us anytime at contact@unjournal.org for guidance... We look forward to unlocking your valuable insights!