Evaluation ('refereeing')


Evaluation guidelines and criteria

We generally refer to "evaluation" rather than "refereeing" because The Unjournal does not publish work; it only links to, rates, and evaluates it.

For more information about what we are asking evaluators to do, see:

  • For prospective evaluators

  • Guidelines for evaluators

Choosing and working with evaluators

How do we choose evaluators?

  • We follow standard procedures, considering complementary expertise, interest, and cross-citations, as well as checking for conflicts of interest. (See our internal guidelines for choosing evaluators.)

  • We aim to consult those who have opted in to our evaluator pool first.

  • We favor evaluators with a track record of careful, in-depth, and insightful evaluation, while giving early-career researchers (ECRs) a chance to build such a record.

Why do we pay evaluators?

For several reasons... (for more discussion, see Why pay evaluators (reviewers)?)

  • It's equitable, especially for those not getting "service credit" for their refereeing work from their employer.

  • Paying evaluators can reduce the conflicts of interest that are arguably inherent to the traditional process, where reviewers work for free.

  • We need to use explicit incentives while The Unjournal grows.

  • We can use payment as an incentive for high-quality work, and to access a wider range of expertise, including people not interested in submitting their own work to The Unjournal.

To claim your evaluator payment, see "Submitting and paying expenses claims".

Evaluator concerns

Can I submit an evaluation anonymously? How will you protect my anonymity?

Yes, we allow evaluators to choose whether they wish to remain anonymous or "sign" their evaluations. See Protecting anonymity.

I'm concerned about making my evaluation public; what if I make a mistake or write something I later regret?

To limit this concern:

  1. You can choose to make your evaluation anonymous. You can make this decision from the outset (this is preferable) or later, after you've completed your review.

  2. Your evaluation will be shared with the authors before it is posted, and they will be given two weeks to respond before we post. If they point to what they believe are major misstatements in your evaluation, we will give you the chance to correct these.

  3. It is well known that referee reports and evaluations are subject to mistakes. We expect that most people who read your evaluation will take this into account.

  4. You can add an addendum or revision to your evaluation later on (see below).

Can I redact my evaluation after it's published through The Unjournal?

We will post your evaluation on PubPub and give it a DOI. It cannot be redacted in the sense that this initial version will remain on the internet in some format. But you can add an addendum or revision later, which we will post and link, and the DOI can be adjusted to point to the revised version.

Are the research authors involved in The Unjournal's review process and do they give consent?

See the For research authors FAQ as well as the "Direct evaluation" track.

We have two main ways that papers and research projects enter the Unjournal process:

  1. Authors submit their own work; if we believe the work is relevant, we assign evaluators, and so on.

  2. We select research that seems potentially influential, impactful, and relevant for evaluation. In some cases, we request the authors' permission before sending out the papers for evaluation. In other cases (such as where senior authors release papers in the prestigious NBER and CEPR working paper series), we contact the authors and request their engagement before proceeding, but we don't ask for permission.

For either track, authors are invited to be involved in several ways:

  • Authors are informed of the process and given an opportunity to identify particular concerns, request an embargo, etc.

  • Evaluators can be put in touch with authors (anonymously) for clarification questions.

  • Authors are given a two-week window to respond to the evaluations (this response is published as well) before the evaluations are made public. They can also respond after the evaluations are released.

Can I share this evaluation? What else can I do with it?

If you are writing a signed evaluation, you can share it or link to it on your own pages. Please wait until after we have given the author a chance to respond and have posted the evaluation package.

Otherwise, if you are remaining anonymous, please do not disclose your connection to this report.

Going forward:

  • We may later invite you to be involved in replication exercises (e.g., through the Institute for Replication) . . .

  • . . . and to help us judge prizes (e.g., the Impactful Research Prize (pilot)).

  • As a general principle, we hope and intend always to see that you are fairly compensated for your time and effort.

Evaluation value

What value do these evaluations provide, and for whom?

The evaluations provide at least three types of value, helping advance several paths in our theory of change:

  1. For readers and users: Unjournal evaluations assess the reliability and usefulness of the paper along several dimensions—and make this public, so other researchers and policymakers can make use of these assessments.

  2. For careers and improving research: Evaluations provide metrics of quality. In the medium term, these should provide increased and accelerated career value, improving the research process. We aim to build metrics that are credibly comparable to the current "tier" of journal a paper is published in. But we aim to do this better in several ways:

     • More quickly, more reliably, more transparently, and without the unproductive overhead of dealing with journals (see "reshaping evaluation").

     • Allowing flexible, transparent formats (such as dynamic documents), thus improving the research process, benefiting research careers, and hopefully improving the research itself in impactful areas.

  3. Feedback and suggestions for authors: We expect that evaluators will provide feedback that is relevant to the authors, to help them make the paper better.

Evaluation quality and misunderstandings

What should I prioritize in my evaluation process?

See "What value do these evaluations provide, and for whom?" above, as well as our guidelines for evaluators.

"This paper is great; I would accept it without changes." What should I write/do?

We still want your evaluation and ratings. Here are some things to consider as an evaluator in this situation:

A paper or project is not merely a good to be judged on a single scale. How useful is it, and to whom or for what? We'd like you to discuss its value in relation to previous work, its implications, and what it suggests for research and practice.

Even if the paper is great...

  1. Would you accept it in the “top journal in economics”? If not, why not?

  2. Would you hire someone based on this paper?

  3. Would you fund a major intervention (as a government policymaker, major philanthropist, etc.) based on this paper alone? If not, why not?

  4. What are the most important and informative results of the paper?

  5. Can you quantify your confidence in these 'crucial' results, and their replicability and generalizability to other settings? Can you state probabilistic bounds (confidence or credible intervals) on the quantitative results (e.g., 80% bounds on QALYs, DALYs, or WELLBYs per $1,000)? (See the sketch after this list.)

  6. Would any other robustness checks or further work have the potential to increase your confidence (narrow your belief bounds) in this result? Which?

  7. Do the authors make it easy to reproduce the statistical (or other) results of the paper from shared data? Could they do more in this respect?

  8. Communication: Did you understand all of the paper? Was it easy to read? Are there any parts that could have been better explained?

  9. Is it communicated in a way that would be useful to policymakers? To other researchers in this field, or in the discipline more generally?
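As a purely hypothetical sketch of item 5 (the intervention and all figures are invented for illustration, not drawn from any actual evaluation), an evaluator's probabilistic bounds might be stated as:

$$
P\bigl(3 \le \text{WELLBYs per } \$1{,}000 \le 12\bigr) = 0.80
$$

That is, the evaluator judges there to be an 80% chance that the intervention's true effect lies between 3 and 12 WELLBYs per $1,000 spent; a narrower interval would signal greater confidence in the paper's headline estimate.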
