Independent evaluations (trial)


Kickstarter incentive: After the first 8 quality submissions (or by Jan. 1, 2025, whichever comes later) we will award a prize of $500 to the strongest evaluation.


Initiative: ‘independent evaluations’

The Unjournal is seeking academics, researchers, and students to submit structured evaluations of the most impactful research. Strong evaluations will be posted or linked on our PubPub community, offering readers a perspective on the implications, strengths, and limitations of the research. These evaluations can be submitted using this form for academic-targeted research or this form for applied-stream work. Evaluators can publish under their own name or maintain anonymity, and we also welcome collaborative evaluation work. We will facilitate, promote, and encourage these evaluations in several ways, described below.

Who should do these evaluations?

We are particularly looking for people with research training, experience, and expertise in quantitative social science and statistics, including cost-benefit modeling and impact evaluation. This could include professors, other academic faculty, postdocs, researchers outside of academia, quantitative consultants and modelers, PhD students, and students aiming towards PhD-level work (pre-docs, research MSc students, etc.). But anyone is welcome to give this a try — when in doubt, please go for it.

We are also happy to support collaborations and group evaluations. There is a good track record for this — see “What is a PREreview Live Review?”, ASAPBio’s Crowd preprint review, I4replication.org, and repliCATS for examples in this vein. We may also host live events and/or facilitate asynchronous collaboration on evaluations.

Instructors and PhD/MRes/predoc programs: We are also keen to work with students and professors to integrate ‘independent evaluation assignments’ (aka ‘learn to do peer review’) into research training.

Why should you do an evaluation?

Your work will support The Unjournal’s core mission — improving impactful research through journal-independent public evaluation. In addition, you’ll help research users (policymakers, funders, NGOs, fellow researchers) by providing high-quality, detailed evaluations that rate and discuss the strengths, limitations, and implications of research.

Doing an independent evaluation can also help you. We aim to provide feedback to help you become a better researcher and reviewer. We’ll also give prizes for the strongest evaluations. Lastly, writing evaluations will help you build a portfolio with The Unjournal, making it more likely we will commission you for paid evaluation work in the future.

Which research?

We focus on rigorous, globally impactful research in quantitative social science, as well as policy-relevant work. (See “What specific areas do we cover?” for details.) We’re especially eager to receive independent evaluations of:

  1. Research we publicly prioritize: see our public list of research we’ve prioritized or evaluated.

  2. Research we previously evaluated (see our public list, as well as https://unjournal.pubpub.org/).

  3. Work that other people and organizations suggest as having high potential for impact/value of information (also see Evaluating Pivotal Questions).

You can also suggest research yourself and then do an independent evaluation of it.

What sort of ‘evaluations’ and what formats?

We’re looking for careful methodological/technical evaluations that focus on research credibility, impact, and usefulness. We want evaluators to dig into the weeds, particularly in areas where they have aptitude and expertise. See our guidelines.

The Unjournal’s structured evaluation forms: We encourage evaluators to submit using either:

  • Our Academic (main) stream form: if you are evaluating research aimed at an academic journal.

  • Our ‘Applied stream’ form: if you are evaluating research that is probably not aimed at an academic journal. This may include somewhat less technical work, such as reports from policy organizations and think tanks, or impact assessments and cost-benefit analyses.

See here for guidance on using these forms for independent evaluations.

Other public evaluation platforms: We are also open to engaging with evaluations done on existing public evaluation platforms such as PREreview.org. If you prefer to use another platform, please let us know about your evaluation using one of the forms above; if you like, you can leave most of our fields blank and simply link to your evaluation on the other platform.

Academic (~PhD) assignments and projects: We are also looking to build ties with research-intensive university programs; we can help you structure academic assignments and provide external reinforcement and feedback. Professors, instructors, and PhD students: please contact us (contact@unjournal.org).
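For a rough illustration, the sketch below shows one way a structured evaluation record of this kind could be represented in code. This is a hypothetical example only: the field names, the 0–100 scale, and the 90% credible-interval format are assumptions for illustration, not The Unjournal’s actual form schema; the authoritative fields are those in the forms and guidelines linked above.

```python
# Hypothetical sketch only: field names and rating format are assumptions,
# not The Unjournal's actual form schema.
from dataclasses import dataclass, field


@dataclass
class Rating:
    midpoint: float   # best-guess score on an assumed 0-100 scale
    ci_lower: float   # lower bound of an assumed 90% credible interval
    ci_upper: float   # upper bound of that interval


@dataclass
class StructuredEvaluation:
    paper_title: str
    evaluator_name: str | None        # None if the evaluator stays anonymous
    overall: Rating
    dimension_ratings: dict[str, Rating] = field(default_factory=dict)
    written_report: str = ""          # detailed, constructive, critical text


# Example record, rating a few of the dimensions discussed on this page.
ev = StructuredEvaluation(
    paper_title="Example working paper",
    evaluator_name=None,
    overall=Rating(72, 60, 83),
    dimension_ratings={
        "methods_credibility": Rating(70, 55, 82),
        "reasoning_transparency": Rating(80, 68, 90),
        "relevance_to_global_priorities": Rating(65, 50, 78),
    },
    written_report="Strengths, limitations, and implications discussed here.",
)
```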

How will The Unjournal engage?

1. Posting and signal-boosting

We will encourage all these independent evaluations to be publicly hosted, and we will share links to them. We will further promote the strongest independent evaluations, potentially on our own platforms (such as unjournal.pubpub.org).

However, when we host or link these, we will keep them clearly separated and signposted as distinct from our commissioned evaluations. Independent evaluations will not be considered official, and their ratings won’t be included in our ‘main data’ (see our dashboard).

2. Offering incentives

Bounties: We will offer prizes for the ‘most valuable independent evaluations’.

As a start, after the first eight quality submissions (or by Jan. 1, 2025, whichever comes later), we will award a prize of $500 to the most valuable evaluation. Further details TBD.

All evaluation submissions will be eligible for these prizes and “grandfathered in” to any prizes announced later. We will announce and promote the prize winners (unless they opt for anonymity).

Evaluator pool: People who submit evaluations can elect to join our evaluator pool. We will consider and (time-permitting) internally rate these evaluations. People who do the strongest evaluations in our focal areas are likely to be commissioned as paid evaluators for The Unjournal.

We’re also moving towards a two-tiered base rate for commissioned evaluations: we will offer a higher rate to people who can demonstrate previous strong review/evaluation work. These independent evaluations will count towards this ‘portfolio’.

3. Providing materials, resources and guidance/feedback

Our PubPub page provides examples of strong work, including the prize-winning evaluations.

We will curate guidelines and learning materials from relevant fields and from applied work and impact evaluation. For a start, see “Conventional guidelines for referee reports” in our knowledge base.

4. Partnering with academic institutions

We are reaching out to PhD programs and pre-PhD research-focused programs. Some curricula already involve “mock referee report” assignments. We hope professors will encourage their students to complete these assignments through our platform. In return, we’ll offer the incentives and promotion mentioned above, as well as resources, guidance, and some further feedback.

5. Fostering a positive environment for anonymous and signed evaluations

We want to preserve a positive and productive environment. This is particularly important because we will be accepting anonymous content, and we will take steps to ensure that the system is not abused. If an evaluation has an excessively negative tone, contains content that could be perceived as a personal attack, or makes clearly spurious criticisms, we will ask the evaluator to revise it, or we may decide not to post or link it.

How does this benefit The Unjournal and our mission?

  1. Crowdsourced feedback can add value in itself; encouraging it enables public evaluation and discussion of work that The Unjournal doesn’t have the bandwidth to cover.

  2. Improving our evaluator pool and evaluation standards in general.

    1. Students and early-career researchers (ECRs) can practice and (where possible) get feedback on independent evaluations.

    2. They can demonstrate their abilities publicly, enabling us to recruit and commission the strongest evaluators.

  3. Examples will help us build guidelines, resources, and insights into ‘what makes an evaluation useful’.

  4. This gives us opportunities to engage with academia, especially PhD programs and research-focused instruction.

About The Unjournal


The Unjournal commissions public evaluations of impactful research in quantitative social science fields. We are an alternative and a supplement to traditional academic peer-reviewed journals – separating evaluation from journals unlocks a range of benefits. We ask expert evaluators to write detailed, constructive, critical reports. We also solicit a set of structured ratings focused on research credibility, methodology, careful and calibrated presentation of evidence, reasoning transparency, replicability, relevance to global priorities, and usefulness for practitioners (including funders, project directors, and policymakers who rely on this research). While we have mainly targeted impactful research from academia, our ‘applied stream’ covers impactful work that uses formal quantitative methods but is not aimed at academic journals. So far, we’ve commissioned about 50 evaluations of 24 papers, and published these evaluation packages on our PubPub community, linked to academic search engines and bibliometrics.
