
Evaluation (refereeing)

Evaluation guidelines and criteria

We refer to "evaluation" because The Unjournal does not publish work; it only links, rates, and evaluates it.
What we are asking evaluators (referees) to do: Guidelines for Evaluators

Choosing and working with evaluators

How do we choose evaluators?

  • We follow standard procedures, considering complementary expertise, interest, and cross-citations, as well as confirming lack of conflict-of-interest. (See our internal guidelines for choosing evaluators.)
  • We aim to consult those who have opted-in to our referee pool first.
  • We will favor evaluators with a track record of careful, in-depth, and insightful evaluation—while giving early-career researchers (ECRs) a chance to build such a record.

Why do we pay evaluators?

  • It's equitable, especially for those not getting "service credit" for their refereeing work from their employer.
  • While researchers currently write reports for prominent traditional journals for free, perhaps in exchange for goodwill when they submit their own work or a desire to impress prominent editors...
    1. we need to use explicit incentives as The Unjournal grows.
    2. paying evaluators can reduce adverse selection and conflicts of interest—arguably inherent to the traditional process.
  • We can use payment as an incentive for high-quality work.
  • We can use payments to access a wider range of expertise, including people not interested in submitting their own work to The Unjournal.
To claim your evaluator payment...

Evaluator concerns (anonymity, reputation, etc.)

Anonymity/blinding vs. signed reports

Can I submit an evaluation anonymously?

Yes, we allow evaluators to choose whether they wish to remain anonymous or "sign" their evaluations.

How do we protect the anonymity of evaluators who request it?

Making mistakes, making adjustments

I'm concerned about making my evaluation public; what if I make an error or write something in a way that I later regret?

In the typical journal process, reviewers may make mistakes in their reports; these are usually mitigated by having multiple reviewers and by editor mediation. Furthermore, in standard journals, reviews are typically not made public. As an Unjournal evaluator, you might be concerned about having an error or poor judgment enter the public record. How worried should you be, and what can be done to limit this concern?
  1. You can choose to make your evaluation anonymous. You can make this decision from the outset (this is preferable) or later, after you've completed your review. This is your choice.
  2. Your evaluation will be shared with the authors before it is posted, and they will be given two weeks to respond before we post. If they cite what they believe are any major misstatements in your evaluation, we will give you the chance to correct these.
  3. It is well-known that referee reports and evaluations are subject to mistakes. We expect most people who read your evaluation will take this into account.
  4. You can add an addendum or revision to your evaluation later on (see below).

Can I redact my evaluation after it's published through The Unjournal?

We will put your evaluation on PubPub and give it a DOI. It cannot be redacted in the sense that this initial version will remain on the internet in some format. But you can certainly add an addendum to the document later, which we will post and link, and the DOI can be adjusted to point to the revised version.

Authors' involvement

We currently (May 2023) have two ways that papers and research projects enter the Unjournal process:
  1. Authors submit their work (perhaps after our reaching out to them); if we believe the work is relevant, we assign evaluators, and so on. We can also agree with authors to "embargo" evaluations until a later date, under certain conditions. In this case, evaluators are informed of the request.
  2. Alternatively, we select a set of working papers released in the prominent NBER series for evaluation (see note), where these papers seem particularly influential, potentially impactful, and relevant for evaluation. For these, we contact the authors before sending out the papers for evaluation and request the authors' engagement, but we don't ask for permission.
For either track, authors are invited to be involved in several ways:
  • Authors are informed of the process and given an opportunity to identify particular concerns, request an embargo, etc.
  • Evaluators can be put in touch with authors (anonymously) for clarification questions.
  • Authors are given a two-week window to respond to the evaluations (this response is published as well) before the evaluations are made public. They can also respond on our platform after the evaluations are released.

The value of the evaluations

Can I share this evaluation? What else can I do with it?

If you are writing a signed (not anonymous) evaluation, you can share it or link it on your own pages. Please wait to do this until after we have given the author a chance to respond and posted the package. (Note that as of May 2023, we put the evaluation up on our PubPub with a DOI and try to get it out to scholarly search engines and bibliometric databases.)
Otherwise, if you are remaining anonymous, please do not disclose your connection to this report.
Going forward:
  • We may later invite you to write and evaluate more about this piece of research . . .
  • . . . and to help us judge prizes (e.g., the Impactful Research Prize).
  • We may ask if you want to be involved in replication exercises (e.g., through the Institute for Replication).
  • As a general principle, we hope and intend always to see that you are fairly compensated for your time and effort.

What value do these evaluations provide (and how should evaluators think about this)? Who is the audience? How much is this process a "service for authors"?

The evaluations provide at least three types of value, helping advance several paths in our theory of change:
  1. For readers and users: Unjournal evaluations assess the reliability and usefulness of the paper along several dimensions—and make this public, so other researchers and policymakers can learn from this intellectual process and apply this learning in their practice and decisionmaking.
  2. For careers and improving research: Evaluations provide metrics of quality. In the medium term, these should provide increased and accelerated career value, improving the research process. We aim to build metrics that are credibly comparable to the current "tier" of journal a paper is published in. But we aim to do this better in several ways:
    • More quickly, more reliably, more transparently, and without the unproductive overhead of dealing with journals (see 'reshaping evaluation')
    • Allowing flexible, transparent formats (such as dynamic documents), thus improving the research process, benefiting research careers, and hopefully improving the research itself in impactful areas.
  3. Feedback and suggestions for authors: We expect that evaluators will provide feedback that is relevant to the authors, to help them make the paper better.
Is "feedback for authors" more important for Unjournal evaluations than for traditional journals?
In the near term, while an Unjournal evaluation may not yet seem to have substantial career value . . .
  • Work The Unjournal considers might tend to be at an earlier stage relative to papers submitted to journals, as authors who submit work may see this as a "pre-journal" step.
  • Also, the papers we select (from NBER, currently) might be posted before authors submit them to journals.
  • On the other hand, we also tend to consider (as of July 2023) a lot of NBER papers that seem close to being submitted (or are currently in submission) at top traditional journals.
Medium-term, if a positive Unjournal evaluation has "value," it may be equally likely that the evaluation is seen as an "endpoint" for that paper or project.
Also note that the modal outcome of submitting to a traditional journal is rejection, so authors value the feedback from that process as well.
Can The Unjournal "do feedback to authors better" than traditional journals?
Maybe we can?
  • We pay evaluators.
  • The evaluations are public, and some sign their evaluations.
    • → Evaluators may be more motivated to be careful and complete.
On the other hand . . .
  • For public evaluations, evaluators might err on the side of caution and hold back candid criticism.
  • At standard journals, referees do want to impress editors, and often (but not always) leave very detailed comments and suggestions.

What evaluators should prioritize

What should I prioritize in my evaluation process?

Given the above considerations, and considering that we want to encourage authors to see value in engaging with The Unjournal, we encourage evaluators to prioritize the three components roughly equally:
  • Making the evaluations and ratings useful for readers and users
  • Making them meaningful for assessing academics
  • Communicating useful feedback and suggestions to researchers

Public evaluations

How are these hosted and shared?

(2023 update)
  • Currently: on our PubPub page
    • Each evaluation and response gets a DOI, linking these into all relevant systems, including Google Scholar.
  • Previously: Kotahi, linked to a Sciety group (we aim to mirror PubPub content to our Sciety group and vice-versa)
  • Potentially: hosted/mirrored on our own dedicated web page
  • Ideally: We will present the "ratings data" in clear, comparable formats, as well as providing the raw data for meta-analysts and others (this is partially available in the PubPub content, and we are working on making this more systematic).

General criteria/guidelines

  • E.g., "Selection": Traditionally, reviewers may be more likely to accept an assignment when they have a particular interest in the paper under consideration.
  • E.g., for wider audiences, such as on Wikipedia, the EA Forum, or Asterisk magazine, with potential further compensation.
  • 22 Dec 2022: We are still developing and improving this system.