
Mapping evaluation workflow

The flowchart below focuses on the evaluation part of our process.

Description of the key steps above (updated 10 May '23)

  1. Work enters our pipeline in one of three ways:
    1. Author (A) submits work (W), creating a new submission (a URL and DOI), through our platform or informally.
      • The author (or someone on their behalf) can complete a submission form; this includes a potential 'request for embargo' or other special treatment.
    2. Managers select work to prioritize, or the project is submitted independently of the authors.
      • In either of these cases (1 or 2), authors are asked for permission.
    3. Alternate 'direct evaluation' track: the work enters a prestige archive (currently NBER).
      • Here authors are informed and consulted, but their permission is not needed.
  2. A manager (M) processes the submission:
    • (Following author submission) ...
    • (Following direct evaluation selection) ...
      • M fills in additional information explaining why the work is relevant, what to evaluate, etc.
    • If an embargo or special treatment was requested (in either case), M decides whether to grant it, notes this, and informs the authors.
  3. M assigns an Evaluations Manager (EM), typically a member of our management team or advisory board, to the selected project.
  4. The EM invites Evaluators (aka 'reviewers'), sharing the paper to be evaluated, a brief summary of why the UJ thinks it is relevant, and what we are asking.
    • Potential evaluators are given full access to (almost) all information submitted by the author and the manager, and are notified of any embargo/special treatment granted.
    • The EM may make special requests to the evaluator (e.g., 'signed/unsigned evaluation only', short deadlines, special focus, extra incentives, etc.).
  5. The evaluator accepts or declines the invitation to review and agrees on a deadline (or asks for an extension).
    • If they accept, the EM shares the full guidelines/evaluation template and specific suggestions with the evaluator.
  6. The evaluator completes an evaluation form; we aim to embed this in a system, e.g., Kotahi: submit/eval/mgmt.
  7. The evaluator submits the evaluation, including numeric ratings and predictions, with 'CIs' for these.
    • Possible future addition: the reviewer asks for 'minor revisions and corrections'; see 'Considering for future: enabling minor revisions' below.
  8. The EM collates all evaluations/ratings, shares these with the author, and notifies the evaluators that this was done.
    • Be very careful not to share evaluators' identities at this point.
      • Especially where evaluators chose anonymity, take extra care that there is no accidentally-identifying information.
      • Even if evaluators chose to 'sign their evaluation', do not disclose their identity to the authors at this point; instead, tell evaluators they can reach out to the authors if they wish.
    • Share the evaluations with the authors as a separate doc/file/space, one the evaluators do not have automatic access to.
    • Make it clear to the authors that their responses will be published (and given a DOI when we can).
  9. The author(s) read the evaluations and are given two weeks to submit responses.
    • If there is an embargo, there is more time to do this, of course.
  10. The EM creates an evaluation summary and 'EM comments'.
  11. The UJ team publishes each element on our PubPub space as a separate 'pub', with a DOI for each (unless embargoed):
    1. The summary and EM comments, with a prominent section for the 'ratings data/tables'.
    2. Each evaluation (with summarized ratings at the top).
    3. The author response.
    • All of the above are linked in a particular way, with particular settings; see notes.
  12. Inform the authors and evaluators once this is up on PubPub; promote it, check bibliometrics, etc.
  13. (The 'ratings and predictions data' also enter an additional public database.)
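The numbered steps above form an essentially linear pipeline: each project moves through the same stages in the same order. As a rough sketch only (all names here are hypothetical and unrelated to our actual platform or tooling), the flow could be modeled like this:

```python
from enum import Enum, auto

class Stage(Enum):
    """Hypothetical model of the workflow stages described above."""
    SUBMITTED = auto()              # 1. work enters (author, manager, or direct track)
    PROCESSED = auto()              # 2. manager adds context, rules on embargo requests
    EM_ASSIGNED = auto()            # 3. Evaluations Manager assigned
    EVALUATORS_INVITED = auto()     # 4-5. invitations sent and accepted/declined
    EVALUATIONS_SUBMITTED = auto()  # 6-7. forms completed; ratings, predictions, CIs in
    SHARED_WITH_AUTHOR = auto()     # 8. collated evaluations shared (identities withheld)
    AUTHOR_RESPONDED = auto()       # 9. author response (two-week window)
    SUMMARIZED = auto()             # 10. EM summary and comments
    PUBLISHED = auto()              # 11-12. pubs with DOIs on PubPub; notifications
    ARCHIVED = auto()               # 13. ratings data enters the public database

# Enum members iterate in definition order, so the pipeline order is just:
ORDER = list(Stage)

def advance(stage: Stage) -> Stage:
    """Move a project to the next stage; the final stage is terminal."""
    i = ORDER.index(stage)
    if i == len(ORDER) - 1:
        raise ValueError(f"{stage.name} is the final stage")
    return ORDER[i + 1]
```

This sketch omits the branch points the text describes (embargoes, declined invitations, the possible minor-revisions loop); it only captures the happy path.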

Considering for future: enabling 'minor revisions'

In our current phase (the Pilot, as of 8 Feb 2023), we have the evaluators consider the paper 'as is', frozen at a certain date, with no room for revisions. The authors can, of course, revise the paper on their own and even pursue an updated Unjournal review; we would like to include links to the 'permanently updated version' in the Unjournal evaluation space.
After the pilot, we may consider making minor revisions part of the evaluation process. This may add substantial value to the papers and the process, especially where evaluators identify straightforward, easily implementable improvements.
How revisions might be folded into the above flow
If 'minor revisions' are requested:
  • ... the author has four weeks (strict) to make these revisions if they want to, submit a new linked manuscript, and also submit their response to the evaluations.
  • Optional: reviewers can comment on any minor revisions and adjust their ratings.
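If this option is adopted, the strict four-week window could be enforced mechanically. A minimal sketch, assuming (hypothetically) that we record the date the revision request was made; none of these names correspond to anything in our actual systems:

```python
from datetime import date, timedelta

# Strict window per the flow above: four weeks from the revision request.
REVISION_WINDOW = timedelta(weeks=4)

def revision_deadline(requested_on: date) -> date:
    """Last day the author may submit the revised manuscript and response."""
    return requested_on + REVISION_WINDOW

def revisions_accepted(requested_on: date, submitted_on: date) -> bool:
    """True if the revised manuscript arrived within the strict window."""
    return submitted_on <= revision_deadline(requested_on)
```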

Why would we (potentially) consider only 'minor' revisions?

We don't want to replicate the slow and inefficient processes of the traditional system; we want evaluators to give a report and rating on the paper as it stands.
We also want to encourage treating papers as permanent-beta projects: the authors can improve a paper, if they like, and resubmit it for a new evaluation.