
Policies/issues discussion


This page is mainly for The Unjournal management, advisory board and staff, but outside opinions are also valuable.

Unjournal team members:

  • Priority 'ballot issues' are given in our 'Survey form', linked to the Airtable (ask for link)

  • Key discussion questions are in the broad_issue_stuff view of the questions table, which links to the discussion Google Docs (see the sketch below for one way to pull these programmatically)
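
For Unjournal team members who want to pull these discussion questions programmatically rather than through the Airtable interface, here is a minimal sketch using the pyairtable library. The base ID, the API-token environment variable, and the field names are placeholder assumptions; only the questions table and broad_issue_stuff view names come from the note above.

```python
# Minimal sketch (assumptions: the pyairtable package, an AIRTABLE_API_KEY
# environment variable, and a placeholder base ID). The "questions" table and
# "broad_issue_stuff" view names come from the note above; the field names
# ("Question", "Discussion doc") are hypothetical.
import os

from pyairtable import Api

api = Api(os.environ["AIRTABLE_API_KEY"])            # personal access token
table = api.table("appXXXXXXXXXXXXXX", "questions")  # placeholder base ID

# Fetch only the records surfaced in the 'broad_issue_stuff' view.
for record in table.all(view="broad_issue_stuff"):
    fields = record["fields"]
    print(fields.get("Question"), "->", fields.get("Discussion doc"))
```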

Considering papers/projects

Direct-evaluation track: when to proceed with papers that have "R&Rs" at a journal?

'Policy work' not (mainly) intended for academic audiences?

We are considering a second stream to evaluate non-traditional, less formal work that is not written with academic standards in mind. This could include the strongest work published on the EA Forum; a range of further applied research from EA/GP/LT-linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, and Faunalytics, as well as from EA-adjacent organizations; and relevant government white papers. See the comments; see also Pete Slattery's proposal, which namechecks The Unjournal.

E.g., for

We further discuss the case for this stream, and sketch and consider some potential policies for it.

Evaluation procedure and guidelines

Internal discussion space:

Feedback and discussion vs. evaluations

DR: I suspect that signed reviews (cf. blog posts) provide good feedback and evaluation. However, when it comes to ratings (quantitative measures of a paper's value), my impression from existing initiatives and conversations is that people are reluctant to award anything less than 'full marks' (5/5).

Why Single-blind?

  • Power dynamics: referees don't want to be 'punished', may want to flatter powerful authors

  • Connections and friendships may inhibit honesty

  • 'Powerful referees signing critical reports' could hurt early-career researchers (ECRs)

Why signed reports?

  • Public reputation incentive for referees

    • (But note single-blind paid review has some private incentives.)

  • Fosters better public dialogue

  • Inhibits obviously unfair and impolite 'trashing'

Compromise approaches

  • Author and/or referee choose whether it should be single-blind or signed

  • Random trial: we can compare empirically whether signed reviews are less informative (see the sketch after this list)

  • Use a mix (1 signed, 2 anonymous reviews) for each paper
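
To make the 'random trial' idea above concrete, here is a rough illustrative sketch of how signed and anonymous reviews could be compared empirically. The assignment rule, the placeholder ratings, and the use of rating spread as a rough proxy for informativeness are assumptions made purely for illustration; this is not a settled analysis plan.

```python
# Illustrative sketch only: randomly assign each paper to a 'signed' or an
# 'anonymous' review arm, then compare the ratings the two arms produce.
# The papers, ratings, and the spread-based 'informativeness' proxy are all
# hypothetical placeholders; real ratings would come from actual evaluations.
import random
import statistics

random.seed(0)

papers = [f"paper_{i:02d}" for i in range(1, 21)]
assignment = {p: random.choice(["signed", "anonymous"]) for p in papers}

# Placeholder ratings on a 0-100 scale (two evaluations per paper).
ratings = {p: [random.gauss(70, 10) for _ in range(2)] for p in papers}

for arm in ("signed", "anonymous"):
    arm_ratings = [r for p in papers if assignment[p] == arm for r in ratings[p]]
    print(f"{arm:<9} n={len(arm_ratings):2d} "
          f"mean={statistics.mean(arm_ratings):5.1f} "
          f"sd={statistics.stdev(arm_ratings):4.1f}")
```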

Anonymity of evaluators

We may revisit our "evaluators decide if they want to be anonymous" policy. Changes will, of course, never apply retroactively: we will carefully keep our promises. However, we may consider requesting that certain evaluators/evaluations be anonymous, or that names be published. A mix of anonymous and signed reviews might be ideal, leveraging some of the benefits of each.

Which metrics and predictions to ask, and how?

(See also: Unjournal Evaluator Guidelines & Metrics.)

Discussion amongst evaluators, initial and revised judgments?

Revisions as part of process?

Timing of releasing evaluations

Should we wait until all commissioned evaluations and the authors' responses are in, and release these as a group? Or should we sometimes release a subset if we anticipate a long delay in the others? (If we did the latter, we would still stick by our guarantee to give authors two weeks to respond before release.)

Non-anonymity of Managing Editors

Considerations

My memory is that, when submitting a paper, I usually learn who the Senior Editor was but not who the Managing Editor was. But there are important differences in our case. For a traditional journal, the editors make an 'accept/reject/R&R' decision; the referee's role is technically an advisory one. In our case, there is no such decision to be made. For The Unjournal, Managing Editors (MEs) choose evaluators, correspond with them, explain our processes, possibly suggest what aspects to evaluate, and perhaps put together a quick summary of the evaluations to be bundled into our output. But we don't make any 'accept/reject/R&R' decisions: once a paper is in our system and on our track, there should be a fairly standardized approach. Because of this, my thinking is:

  1. We don't really need many 'layers of editor': a single Managing Editor (or co-MEs) who informally consults other people on the UJ team should be enough.

  2. ME anonymity is probably not necessary; there is less room for COI, bargaining, pleading, reputation issues, etc.

Presenting and hosting our output

We are also researching other frameworks, templates, and past practices; we hope to draw from validated, theoretically grounded projects such as RepliCATS.

See the 'IDEAS protocol' and Marcoci et al. (2022).

Use of Hypothes.is and collaborative annotation
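
As a rough illustration of what collaborative annotation could involve in practice, the sketch below pulls public Hypothes.is annotations for a single document URL via the Hypothesis search API. The target URL is a placeholder; this is an example of the public API, not a description of an existing Unjournal integration.

```python
# Minimal sketch: fetch public Hypothes.is annotations for one document via
# the Hypothesis search API. The target URL is a placeholder, and no API token
# is used, so only public annotations are returned.
import requests

TARGET_URL = "https://example.org/some-hosted-evaluation"  # placeholder

resp = requests.get(
    "https://api.hypothes.is/api/search",
    params={"uri": TARGET_URL, "limit": 20},
    timeout=30,
)
resp.raise_for_status()

for row in resp.json().get("rows", []):
    print(row.get("user"), ":", (row.get("text") or "")[:80])
```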