Mapping evaluation workflow


The flowchart below focuses on the evaluation part of our process in detail. See Evaluation workflow – Simplified for a more condensed flowchart.

Describing key steps in the flowchart

(Section updated 1 August 2023)

  1. Submission/selection (multiple routes)

    1. Author (A) submits work (W), creating a new submission (a URL and DOI), either through our platform or informally.

      • Author (or someone on their behalf) can complete a submission form; this includes a potential "request for embargo" or other special treatment.

    2. Managers and field specialists select work (or the project is submitted independently of authors) and the management team agrees to prioritize it.

      • For either of these cases (1 or 2), authors are asked for permission.

    3. Alternate "Direct evaluation" track: work enters a prestige archive (NBER, CEPR, and some other cases).

      • Managers inform and consult the authors, but permission is not required. (Particularly relevant: we confirm with the authors that we have the latest updated version of the research.)

  2. Prioritization

    • Following author submission ...

      • Manager(s) (M) and Field Specialists (FS) prioritize work for review (see Project selection and evaluation).

    • Following direct evaluation selection...

      • "evaluation suggestions" (see ) explaining why it's relevant, what to evaluate, etc., to be shared later with evaluators.

    • If requested (in either case), M decides whether to grant an embargo or other special treatment, notes this, and informs the authors.

  3. An Evaluation Manager (EM – typically part of our management team or advisory board) is assigned to the selected project.

  4. EM invites evaluators (aka "reviewers") and shares the paper to be evaluated along with (optionally) a brief summary of why The Unjournal thinks it's relevant, and what we are asking.

    • Potential evaluators are given full access to (almost) all information submitted by the author and M, and notified of any embargo or special treatment granted.

    • EM may make special requests to the evaluator as part of a management policy (e.g., "signed/unsigned evaluation only," short deadlines, extra incentives as part of an agreed policy, etc.).

    • EM may (optionally) add "evaluation suggestions" to share with the evaluators.

  5. Evaluator accepts or declines the invitation to review, and if the former, agrees on a deadline (or asks for an extension).

    • If the evaluator accepts, the EM shares full guidelines/evaluation template and specific suggestions with the evaluator.

  6. Evaluator completes the evaluation, following the template and guidelines provided.

  7. Evaluator submits the evaluation, including numeric ratings and predictions, plus credible intervals ("CIs") for these (see the illustrative sketch below, after this list).

    • Possible addition (future plan): Reviewer asks for minor revisions and corrections; see "How revisions might be folded in..." in the fold below.

  8. EM collates all evaluations/reviews and shares these with the author(s).

    • The EM must be very careful not to share evaluators' identities at this point.

      • This includes taking care to avoid accidentally-identifying information, especially where evaluators have requested anonymity.

      • Even if evaluators chose to "sign their evaluation," their identity should not be disclosed to authors at this point. However, evaluators are told they can reach out to the authors directly if they wish to identify themselves.

    • Evaluations are shared with the authors as a separate doc, set of docs, file, or space, which the EM prepares and shares. (Going forward, this will be automated.)

    • It is made clear to authors that their responses will be published (and given a DOI, when possible).

  9. Author(s) read the evaluations and are given two working weeks to submit their responses.

    • If there is an embargo, there is more time to do this, of course.

  10. EM creates evaluation summary and "EM comments."

  11. EM or UJ team publishes each element on our PubPub space as a separate "pub" with a DOI for each (unless embargoed):

    1. Summary and EM comments

      • With a prominent section for the "ratings data tables"

    2. Each evaluation, with summarized ratings at the top

    3. The author response

      • All of the above are linked in a particular way, with particular settings; see notes.

  12. Authors and evaluators are informed once elements are on PubPub; next steps include promotion, checking bibliometrics, etc.

  13. ("Ratings and predictions data" to enter an additional public database.)

Note that we intend to automate and integrate many parts of this process into an editorial-management-like system in PubPub.
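
To make the structured outputs of this workflow concrete (the ratings, predictions, and credible intervals from step 7, and the separate DOI-bearing "pubs" from step 11 that feed the public database in step 13), here is a minimal, hypothetical sketch in Python. The names and fields are illustrative assumptions only, not The Unjournal's actual schema or PubPub's API.

```python
# Hypothetical sketch only: an illustrative data model for one evaluation
# package. Field names and types are assumptions, not The Unjournal's schema.

from dataclasses import dataclass
from typing import Optional


@dataclass
class RatingWithCI:
    """A numeric rating or prediction with the evaluator's credible interval."""
    midpoint: float   # e.g., a 0-100 rating or a predicted value
    ci_lower: float   # lower bound of the stated credible interval
    ci_upper: float   # upper bound of the stated credible interval


@dataclass
class Evaluation:
    """One evaluator's submission (step 7)."""
    evaluator_id: str                     # internal ID; kept private unless the evaluator signs
    signed: bool                          # did the evaluator choose to sign their evaluation?
    text: str                             # the written evaluation/report
    ratings: dict[str, RatingWithCI]      # rating category -> rating with CI
    predictions: dict[str, RatingWithCI]  # prediction category -> prediction with CI


@dataclass
class EvaluationPackage:
    """The elements published as separate DOI-bearing 'pubs' (steps 11-13)."""
    paper_doi: str                         # DOI of the evaluated work
    em_summary: str                        # EM summary and comments, incl. ratings data tables
    evaluations: list[Evaluation]          # each published with summarized ratings at the top
    author_response: Optional[str] = None  # published with its own DOI when possible
    embargo_until: Optional[str] = None    # ISO date if an embargo was granted
```

A record along these lines could then be exported to the public "ratings and predictions" database mentioned in step 13; the exact structure would follow whatever the PubPub-based system supports.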

Consideration for the future: enabling "minor revisions"

In our current (8 Feb 2023 pilot) phase, we have the evaluators consider the paper "as is," frozen at a certain date, with no room for revisions. The authors can, of course, revise the paper on their own and even pursue an updated Unjournal review; we would like to include links to the "permanently updated version" in the Unjournal evaluation space.

After the pilot, we may consider making minor revisions part of the evaluation process. This may add substantial value to the papers and process, especially where evaluators identify straightforward and easily-implementable improvements.

How revisions might be folded into the above flow

If "minor revisions" are requested:

  • ... the author has four (4) weeks (strict) to make any revisions they wish, submit a new linked manuscript, and submit their response to the evaluation.

  • Optional: Reviewers can comment on any minor revisions and adjust their rating.

Why would we (potentially) consider only minor revisions?

We don't want to replicate the slow and inefficient processes of the traditional system. Essentially, we want evaluators to give a report and rating as the paper stands.

We also want to encourage authors to treat papers as permanent-beta projects. The authors can improve the paper, if they like, and resubmit it for a new evaluation.
