Update Feb. 2024: We are moving the discussion of the details of this process to an internal Coda link (here, accessible by team members only). We will present an overview in broad strokes below.
See also Mapping evaluation workflow for an overview and flowchart of our full process (including the evaluation manager role).
Compensation: As of December 2023, evaluation managers are compensated a minimum of $300 per project, and up to $500 for detailed work. Further work on 'curating' the evaluation, engaging further with authors and evaluators, writing detailed evaluation summary content, etc., can earn up to an additional $200.
If you are the evaluation manager, please follow the process described in our private Coda space (here).
Engage with our previous discussion of the paper: why we prioritized this work, what sort of evaluators would be appropriate, and what to ask them to do.
Inform and engage with the paper's authors, asking them for updates to the work and for any specific requests for feedback. The process varies depending on whether the work is part of our "Direct evaluation" track or whether we require the authors' permission.
Find potential evaluators with relevant expertise and contact them. We generally seek two evaluators per paper.
Suggest research-specific issues for evaluators to consider. Guide evaluators on our process.
Read the evaluations as they come in, and suggest additions or clarifications where necessary.
Rate the evaluations for awards and bonus incentives.
Share the evaluations with the authors, requesting their response.
Optionally, provide a brief "evaluation manager's report" (synthesis, discussion, implications, process) to accompany the evaluation package.
See also: Protecting anonymity
We give the authors two weeks to respond before publishing the evaluation package (and they can always respond afterwards).
Once the evaluations are up on PubPub, reach out to the evaluators again with the link, in case they want to view their evaluation and the others. The evaluators may be allowed to revise their evaluation, e.g., if the authors find an oversight in it. (We are working on a policy for this.)
At the moment (Nov. 2023) we don't have an explicit 'revise and resubmit' step in the process. Authors are encouraged to share the changes they plan to make, along with a (perma)link to where their revisions can be found. They are also welcome to independently (re)submit an updated version of their work for a later Unjournal evaluation.
Mapping collaborator networks through Research Rabbit
We use a website called Research Rabbit (RR).
Our RR database contains papers we are considering evaluating. To check for potential conflicts of interest (COI), we use the following steps (a rough scripted equivalent is sketched after the note below):
After choosing a paper, we select the button "these authors." This presents all the authors for that paper.
After this, we choose "select all," and click "collaborators." This presents all the people that have collaborated on papers with the authors.
Finally, by using the "filter" function, we can determine whether the potential evaluator has ever collaborated with an author from the paper.
If a potential evaluator has no COI, we will add them to our list of possible evaluators for this paper.
Note: Coauthorship is not a disqualifier for a potential evaluator; however, we think it should be avoided where possible. If it cannot be avoided, we will note it publicly.
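For larger batches of candidates, the same check can be scripted. The sketch below is purely illustrative and not part of our actual tooling: it assumes you have exported the paper's authors and their collaborators (e.g., from RR) into CSV files with a "name" column; the file names, column name, and evaluator names are hypothetical.

```python
# Hypothetical sketch of the coauthorship/COI filter described above.
# Assumes two CSV exports (authors.csv, collaborators.csv), each with a "name" column.
import csv


def load_names(path):
    """Read the 'name' column of a CSV export into a set of normalized names."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["name"].strip().lower() for row in csv.DictReader(f)}


def has_coi(candidate, authors_path="authors.csv", collaborators_path="collaborators.csv"):
    """Return True if the candidate evaluator is an author of the paper or a past coauthor."""
    authors = load_names(authors_path)
    collaborators = load_names(collaborators_path)
    return candidate.strip().lower() in (authors | collaborators)


if __name__ == "__main__":
    # Example usage with made-up evaluator names.
    for evaluator in ["Jane Doe", "John Smith"]:
        flag = "potential COI" if has_coi(evaluator) else "no COI found"
        print(f"{evaluator}: {flag}")
```

This simple name-matching check can miss COIs (name variants, non-coauthor relationships), so it supplements rather than replaces the manual Research Rabbit review.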
Did the people who suggested the paper suggest any evaluators?
We prioritize our "evaluator pool" (people who signed up)
Expertise in the aspects of the work that need evaluation
Interest in the topic/subject
Conflicts of interest (especially co-authorships)
Secondary concerns: Likely alignment and engagement with Unjournal priorities. Good writing skills. Time and motivation to write the review promptly and thoroughly.