Policies/issues discussion

This page is mainly for The Unjournal's management, advisory board, and staff, but outside opinions are also valuable.

Unjournal team members:

  • Priority 'ballot issues' are given in our 'Survey form', linked to the Airtable (ask for link)

  • Key discussion questions are in the broad_issue_stuff view in the questions table, linking to the discussion Google Docs

Considering papers/projects

Direct-evaluation track: when should we proceed with papers that have "R&Rs" (revise-and-resubmit decisions) at a journal?

'Policy work' not (mainly) intended for academic audiences?

We are considering a second stream to evaluate non-traditional, less formal work that is not written with academic standards in mind. This could include the strongest work published on the EA Forum, as well as a range of further applied research from EA/GP/LT-linked organizations (such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, and Faunalytics), work from EA-adjacent organizations, and relevant government white papers. See comments here; see also Pete Slattery's proposal here, which name-checks The Unjournal.


We further discuss the case for this stream and sketch some potential policies for it HERE.

Evaluation procedure and guidelines

Internal discussion space: Unjournal Evaluator Guidelines & Metrics

Feedback and discussion vs. evaluations

DR: I suspect that signed reviews (cf. blog posts) provide good feedback and evaluation. However, when it comes to ratings (quantitative measures of a paper's value), my impression from existing initiatives and conversations is that, when reviews are signed, people are reluctant to award anything less than full marks (5/5).

Why single-blind?

  • Power dynamics: referees don't want to be 'punished' and may want to flatter powerful authors

  • Connections and friendships may inhibit honesty

  • 'Powerful referees signing critical reports' could hurt early-career researchers (ECRs)

Why signed reports?

  • Public reputation incentive for referees

    • (But note single-blind paid review has some private incentives.)

  • Fosters better public dialogue

  • Inhibits obviously unfair and impolite 'trashing'

Compromise approaches

  • The author and/or referee chooses whether the review is single-blind or signed

  • Random trial: we can compare empirically whether signed reviews are less informative (see the sketch after this list)

  • Use a mix (e.g., one signed and two anonymous reviews) for each paper
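
As a rough illustration of the random-trial idea above, here is a minimal Python sketch. It assumes we randomly assign each commissioned evaluation to a 'signed' or 'anonymous' arm and treat compressed ratings (low spread, clustered near the top of the scale) as one proxy for less-informative reviews. The data and the choice of test statistic are hypothetical, not a settled protocol.

```python
# Hypothetical sketch (not an agreed Unjournal protocol): compare rating
# spread between randomly assigned "signed" and "anonymous" evaluation arms.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder ratings (0-100 scale) from each randomized arm.
signed = np.array([88.0, 92, 90, 85, 95, 91, 89, 93])
anonymous = np.array([70.0, 82, 55, 90, 64, 77, 85, 60])

# Test statistic: how much more spread the anonymous ratings have.
# Lower spread among signed reviews would suggest rating compression.
observed = anonymous.std() - signed.std()

# Permutation test: shuffle arm labels to build the null distribution.
pooled = np.concatenate([signed, anonymous])
n_signed = len(signed)
perm_stats = np.empty(10_000)
for i in range(perm_stats.size):
    rng.shuffle(pooled)
    perm_stats[i] = pooled[n_signed:].std() - pooled[:n_signed].std()

p_value = (perm_stats >= observed).mean()
print(f"Observed spread difference: {observed:.2f} (one-sided p = {p_value:.3f})")
```

In practice we would want other informativeness proxies too (e.g., predictive accuracy for replication or citation outcomes), but a spread comparison is a simple first check.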

Anonymity of evaluators

We may revisit our policy of letting evaluators decide whether they wish to remain anonymous. Changes will, of course, never apply retroactively: we will carefully keep our promises. However, we may consider asking certain evaluators to remain anonymous, or to sign their evaluations. A mix of anonymous and signed reviews might be ideal, leveraging some of the benefits of each.

Which metrics and predictions to ask, and how?

We are also researching other frameworks, templates, and past practices; we hope to draw on validated, theoretically grounded projects such as repliCATS.
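
To make the question more concrete, here is a minimal sketch of one possible shape for a structured evaluation record, pairing each quantitative metric with a 90% credible interval to capture evaluator uncertainty (in the spirit of repliCATS-style elicitation). The field and metric names are illustrative assumptions, not our settled template.

```python
# Illustrative sketch only: one possible structure for an evaluation record,
# pairing each metric with a 90% credible interval. Field and metric names
# are hypothetical, not a settled Unjournal template.
from dataclasses import dataclass, field

@dataclass
class MetricJudgment:
    midpoint: float   # best-guess rating, e.g., on a 0-100 scale
    ci_lower: float   # lower bound of the evaluator's 90% credible interval
    ci_upper: float   # upper bound of the 90% credible interval

    def __post_init__(self):
        if not (self.ci_lower <= self.midpoint <= self.ci_upper):
            raise ValueError("midpoint must lie within the credible interval")

@dataclass
class EvaluationRecord:
    paper_id: str
    evaluator_anonymous: bool
    metrics: dict[str, MetricJudgment] = field(default_factory=dict)

# Example usage with hypothetical metric names:
record = EvaluationRecord(
    paper_id="example-2024-001",
    evaluator_anonymous=True,
    metrics={
        "overall_assessment": MetricJudgment(78, 60, 90),
        "methods_robustness": MetricJudgment(70, 50, 85),
    },
)
```

Asking for intervals rather than point ratings alone lets us distinguish confident judgments from uncertain ones, and supports later aggregation across evaluators.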

Discussion amongst evaluators, initial and revised judgments?

See the 'IDEAS protocol' and Marcoci et al. (2022).

Revisions as part of process?

See "Considering for future: enabling minor revisions".

Timing of releasing evaluations

Should we wait until all commissioned evaluations are in, along with the authors' responses, and release these as a group? Or should we sometimes release a subset of these if we anticipate a long delay in the others? (If we did the latter, we would still stick by our guarantee to give authors two weeks to respond before release.)

Non-anonymity of managing editors

Considerations

My memory is that when submitting a paper, I usually learn who the senior editor was but not the managing editor. But there are important differences in our case. For a traditional journal, the editors make an 'accept/reject/R&R' decision; the referee's role is technically an advisory one. In our case, there is no such decision to be made. For The Unjournal, managing editors (MEs) choose evaluators, correspond with them, explain our processes, possibly suggest which aspects to evaluate, and perhaps put together a quick summary of the evaluations to be bundled into our output. But we don't make any 'accept/reject/R&R' decisions: once a paper is in our system and on our track, there should be a fairly standardized approach. Because of this, my thinking is:

  1. We don't really need so many 'layers of editors': a single managing editor (or co-MEs) who informally consults other people on the Unjournal team should be enough.

  2. ME anonymity is probably not necessary; there is less room for conflicts of interest, bargaining, pleading, reputation issues, etc.

Presenting and hosting our output
