For prospective evaluators


Thanks for your interest in evaluating research for The Unjournal!

Who we are

The Unjournal is a nonprofit organization founded in mid-2022. We commission experts to publicly evaluate and rate research.

What we are asking you to do

  1. Write an evaluation of a specific research paper: essentially a standard, high-quality referee report.

  2. Rate the research by filling in a structured form.

  3. Answer a short questionnaire about your background and our processes.

See Guidelines for evaluators for further details and guidance.

Why be an evaluator?

Why use your valuable time writing an Unjournal evaluation? There are several reasons: helping high-impact research users, supporting open science and open access, and receiving recognition and financial compensation.

Helping research users, helping science

The Unjournal's goal is to make impactful research more rigorous, and rigorous research more impactful, while supporting open access and open science. We encourage better research by making it easier for researchers to get feedback and credible ratings. We evaluate research in high-impact areas that make a difference to global welfare. Your evaluation will:

  1. Help authors improve their research, by giving early, high-quality feedback.

  2. Help improve science by providing open-access, prompt, structured, public evaluations of impactful research.

  3. Inform funding bodies and meta-scientists as we build a database of research quality, strengths and weaknesses in different dimensions. Help research users learn what research to trust, when, and how.

Public recognition

Your evaluation will be made public and given a DOI. You have the option to be identified as the author of this evaluation or to remain anonymous, as you prefer.

Financial compensation

Our current baseline compensation has two tiers, designed to reward strong previous public evaluation and review work, whether for us or for others. These tiers are not based on academic seniority or credentials.

  1. $100 base + $100 promptness bonus for first-time evaluators without demonstrated public review experience.

  2. $300 base + $100 promptness bonus for returning Unjournal evaluators and for those with strong previous public review experience (for The Unjournal or through other initiatives).

Other ways to show evaluation experience

In addition to public evaluations and referee reports, we can accept critical syntheses, literature-review papers, and essays as examples of evaluation experience. You can also share an example of a strong referee report you have previously written, provided it would be suitable for making public with the required permissions. (See also Reviewers from previous journal submissions for a discussion of publicly sharing these.)

Additional rewards and incentives

We may occasionally offer additional payments for specifically requested evaluation tasks, or raise the base payments for particularly hard-to-source expertise.

What do I do next?

  • If you have been invited to be an evaluator and want to proceed, simply respond to the email invitation that we have sent you. You will then be sent a link to our evaluation form.

To learn more about our evaluation process, see Guidelines for evaluators. If you are doing an evaluation, we highly recommend that you read these guidelines carefully.

Note on the evaluation platform (13 Feb 2024)

12 Feb 2024: We are moving to a hosted form/interface in PubPub. That form is still somewhat a work in progress and may need further guidance; we try to provide this below, but please contact us with any questions. Alternatively, you can submit your response in a Google Doc and share it back with us.

We offer compensation for providing a prompt and complete evaluation and feedback ($100-$300 base + $100 'promptness bonus') in line with our expected standards.

We will be integrating other incentives and prizes into this, and are committed to $450 in average compensation per evaluation, including prizes. You will also be eligible for monetary prizes for the "most useful and informative evaluation," plus other bonuses.

See also "submitting claims and expenses."

To sign up for our evaluator pool, see 'how to get involved'.

For more on our scientific mission, see the 'Why Unjournal?' section.