Why pay evaluators (reviewers)?


Last updated 11 months ago


It's a norm in academia that people do reviewing work for free. So why is The Unjournal paying evaluators?

From a recent survey of economists:

We estimate that the average (median) respondent spends 12 (9) working days per year on refereeing. The top 10% of the distribution dedicates 25 working days or more, which is quite substantial considering refereeing is usually unpaid.

General reasons to pay reviewers

Economics, turnaround times

The peer-review process in economics is widely argued to be too slow, and there is evidence that payments may help speed it up.

In Charness et al.'s full report, they note that few economics journals currently pay reviewers, and that these payments tend to be small (e.g., the JPE and AER paid $100 at the time). However, they also note, citing several papers:

The existing evidence summarized in Table 5 suggests that offering financial incentives could be an effective way of reducing turnaround time.

Equity and inclusivity

The report cited above notes that the work of reviewing is not distributed equally. To the extent that agreeing to write a report rests on individual goodwill, the unpaid volunteer model arguably penalizes the most generous and sympathetic academics. Writing a certain number of referee reports per year is generally considered part of "academic service": academics list it on their CVs, and it may lead to a (somewhat valued) seat on a journal's editorial board. But this bargain is much less attractive to researchers who are not tenured university professors; paying for the work would do a better job of including them in the process.

Incentivizing useful, unbiased evaluations

'Payment for good evaluation work' may also lead to fairer and more useful evaluations.

In the current system, academics may take on this work largely to impress journal editors and earn favorable treatment when they submit their own work. They may also slant their reviews in particular ways to impress those editors.

For less prestigious journals, editors often need to lean on their personal networks to recruit reviewers, including people with whom they have power relationships.

Reviewers are also known to strategically push authors to cite and praise the reviewer's own work, and they may be especially critical toward authors they see as rivals.

To the extent that reviewers are paid for this work as a service, these other motivations will carry comparatively less weight. The incentives will be better aligned with producing evaluations that the managers of the process find valuable, in order to be chosen for further paid work. (And if evaluations are public, the managers can weigh public feedback on these reports as well.)

Reasons for The Unjournal to pay evaluators

  1. We are not ‘just another journal.’ We need to offer incentives for people to put effort into a new system and help us break out of the old, inferior equilibrium.

  2. In some senses, we are asking for more than a typical journal. In particular, our evaluations will be made public and thus need to be better communicated.

  3. We cannot rely on 'reviewers taking on work to get better treatment from editors in the future.' This does not apply to our model, as we don't have editors making any sort of ‘final accept/reject decision.’

  4. Paying evaluators brings in a wider set of evaluators, including non-academics. This is particularly relevant to our impact-focused goals.
