Why these guidelines/metrics?

This page explains the value of the metrics we are seeking from evaluators.

31 Aug 2023: Our present approach is a "working solution" involving some ad hoc and intuitive choices. We are re-evaluating the metrics we ask for, as well as the interface and framing. We are gathering discussion in a linked Gdoc ("Unjournal Evaluator Guidelines and Metrics - Discussion space"), incorporating feedback from our pilot evaluators and authors. We are also talking to people with relevant expertise and considering past practice and other ongoing initiatives. We plan to consolidate that discussion and our consensus and/or conclusions into the present (GitBook) site.

Why numerical ratings?

Ultimately, we're trying to replace the question "what tier of journal did a paper get into?" with "how highly was the paper rated?" We believe this is a more valuable metric. It can be more fine-grained, and it should be less prone to gaming. It should also reduce the randomness that comes from things like 'the availability of journal space in a particular field'. See our discussion "Reshaping academic evaluation: beyond the binary".

To get to this point, we need academia and stakeholders to see our evaluations as meaningful. We want the evaluations to carry value that is measurable, in the way that "publication in the AER" is seen to have value.

While there are some ongoing efforts towards journal-independent evaluation, these are limited. Typically, they either have simple tick-boxes (like "this paper used correct statistical methods: yes/no") or they enable descriptive evaluation without an overall rating. As we are not a journal and we don't accept or reject research, we need another way of assigning value. We are working to determine the best way of doing this through quantitative ratings. We hope to be able to benchmark our evaluations against "traditional" publication outcomes. Thus, we think it is important to ask for both an overall quality rating and a journal ranking tier prediction.

Why these categories?

In addition to the overall assessment, we think it will be valuable to have papers rated according to several categories. This could be particularly helpful to practitioners who care about some concerns more than others. It can also be useful to future researchers who want to focus on reading papers with particular strengths, and in meta-analyses, where papers with certain characteristics could be weighted more heavily. The categories may also be useful to authors and evaluators themselves: they convey what we think research priorities should be, and they can help evaluators build up an overall rating.

However, these ideas have been largely ad-hoc and based on the impressions of our management team (a particular set of mainly economists and psychologists). The process is still being developed. Any feedback you have is welcome. For example, are we overemphasizing certain aspects? Are we excluding some important categories?

We are also researching other frameworks, templates, and past practice; we hope to draw from validated, theoretically grounded projects such as RepliCATS.

Why ask for credible intervals?

In eliciting expert judgment, it is helpful to differentiate the level of confidence in predictions and recommendations. We want to know not only what you believe, but how strongly you hold those beliefs. If you are less certain in one area, we should weigh the information you provide less heavily in updating our beliefs. This may also be particularly useful for practitioners.

Obviously, there are challenges to any approach. Even experts in a quantitative field may struggle to convey their own uncertainty, and they may be inherently "poorly calibrated" (see discussions and tools for calibration training). Some people are often "confidently wrong": they state very narrow "credible intervals", yet the truth, where measurable, routinely falls outside these boundaries. People with greater discrimination may sometimes be underconfident. One would want to consider and adjust for this.

As a side benefit, this elicitation may be interesting for research purposes, particularly as The Unjournal grows. We see 'quantifying one's own uncertainty' as a good exercise for academics (and everyone) to engage in.
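To illustrate the "weigh less heavily" idea, here is a minimal sketch under assumptions of our own (roughly normal beliefs and a simple inverse-variance rule; this is not The Unjournal's official aggregation method) of how stated 90% intervals could feed into a precision-weighted average of ratings:

```python
# A minimal, hypothetical sketch: combine evaluators' ratings, weighting each by the
# precision implied by its 90% credible interval. Assumes each evaluator's belief is
# roughly normal, so a 90% interval spans about +/- 1.645 standard deviations.

def implied_sd(lower: float, upper: float) -> float:
    """Convert a 90% credible interval into an implied standard deviation."""
    return (upper - lower) / (2 * 1.645)

def precision_weighted_mean(ratings: list[tuple[float, float, float]]) -> float:
    """Each entry is (midpoint, ci_lower, ci_upper); weight = 1 / implied variance."""
    weights = [1 / implied_sd(lo, hi) ** 2 for _, lo, hi in ratings]
    return sum(w * mid for w, (mid, _, _) in zip(weights, ratings)) / sum(weights)

# A confident evaluator (narrow interval) moves the aggregate more than an uncertain one:
evaluations = [(80, 75, 85), (60, 40, 80)]
print(round(precision_weighted_mean(evaluations), 1))  # 78.8, dominated by the narrower interval
```

Under these assumptions, an evaluator who reports a very wide interval has relatively little influence on the aggregate, which is exactly the behavior described above.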

"Weightings" for each rating category (removed for now)


2 Oct 2023: We previously suggested 'weightings' for individual ratings, along with the following note:

We give "suggested weights" as an indication of our priorities and a suggestion for how you might average these together into an overall assessment; but please use your own judgment.

We included these weightings for several reasons:

  • People have been found [reference needed] to do a more careful job at prediction (and thus perhaps at overall rating too) if the outcome of interest is built up from components that are each judged separately.

  • We wanted to make the overall rating better defined, and thus more useful to outsiders and more comparable across raters.

  • We wanted to emphasize what we think is important (in particular, methodological reliability).

  • We didn't want evaluators to think we expected them to weigh each category equally; some are clearly more important than others.

However, we decided to remove these weightings because:

  1. They added clutter to an already overwhelming form and guidance document; 'more numbers' can be particularly overwhelming.

  2. These weights were ad hoc, and they may suggest we have a more grounded 'model of value' than we actually do. (There is also some overlap in our categories, something we are working on addressing.)

  3. Some people misinterpreted our intent (e.g., they thought we were saying 'relevance to global priorities' is not important).

Adjustments to metrics and guidelines/previous presentations

Oct 2023 update - removed "weightings"
Dec. 2023: Hiding/de-emphasizing 'confidence Likerts'

We previously gave evaluators two options for expressing their confidence in each rating:

Either:

  1. The 90% Confidence/Credible Interval (CI) input you see below (now a 'slider' in PubPub V7) or

  2. A five-point 'Likert-style' measure of confidence, which we described qualitatively, explaining how we would convert it into CIs when reporting aggregations.

To make this process less confusing, to encourage careful quantification of uncertainty, and to enable better-justified aggregation of expert judgment, we are de-emphasizing the latter measure.

Still, to accommodate those who may not be familiar with, or comfortable stating, "90% CIs on their own beliefs", we offer further explanations, and we are providing tools to help evaluators construct these. As a fallback, we will still allow evaluators to give the 1–5 confidence measure, noting its correspondence to CIs, but we somewhat discourage this.
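As a rough illustration (a hypothetical helper of our own, not part of the guidelines), the dots-to-CI correspondence listed under the previous guidelines further down this page could be applied mechanically like this; the half-width for '1 dot' is an assumed stand-in, since that level is only defined as wider than +/- 25 points:

```python
# Hypothetical helper: translate a 1-5 'confidence dots' rating plus a midpoint rating
# into an implied 90% interval, using the correspondence stated in the previous guidelines.
# The value for 1 dot is an assumed stand-in; the guidelines only say "more than +/- 25".

DOTS_TO_HALF_WIDTH = {5: 4, 4: 8, 3: 15, 2: 25, 1: 40}

def dots_to_interval(midpoint: float, dots: int) -> tuple[float, float]:
    """Implied 90% interval for a 0-100 rating, clipped to the scale."""
    half = DOTS_TO_HALF_WIDTH[dots]
    return max(0.0, midpoint - half), min(100.0, midpoint + half)

print(dots_to_interval(62.0, 4))  # (54.0, 70.0)
```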

Pre-October 2023 'ratings with weights' table, provided for reference (no longer in use)

Category (importance) | Sugg. Wgt.* | Rating (0-100) | 90% CI | Confidence (alternative to CI)
Overall assessment | (holistic, most important!) | 44 | 39, 52 |
1. Advancing our knowledge and practice | 5 | 50 | 47, 54 |
2. Methods: justification, reasonableness, validity, robustness | 5 | 51 | 45, 55 |
3. Logic and communication | 4 | 20 | 10, 35 |
4. Open, collaborative, replicable science and methods | 3 | 60 | 40, 70 |
5. Engaging with real-world impact quantification; practice, realism, and relevance | 2 | 35 | 30, 46 |
6. Relevance to global priorities | 0** | 30 | 21, 65 |

(The rating and CI entries are illustrative examples.)

We had included the note:

We give the previous weighting scheme in a fold below for reference, particularly for those reading evaluations done before October 2023.

As well as:

Suggested weighting: 0.

Elsewhere in that page we had noted:

As noted above, we give suggested weights (0–5) to suggest the importance of each category rating to your overall assessment, given The Unjournal's priorities.
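For concreteness, a weighted average along these lines (our own illustrative arithmetic, not an official formula) would combine the example category ratings in the table above with the suggested weights like this:

```python
# Hypothetical illustration of the retired weighting scheme: a weighted average of the
# example category ratings in the table above, using the suggested weights. The holistic
# "overall assessment" row is excluded, since it had no numeric weight.

example = [  # (category, suggested weight, example rating)
    ("Advancing our knowledge and practice", 5, 50),
    ("Methods: justification, reasonableness, validity, robustness", 5, 51),
    ("Logic and communication", 4, 20),
    ("Open, collaborative, replicable science and methods", 3, 60),
    ("Engaging with real-world impact quantification", 2, 35),
    ("Relevance to global priorities", 0, 30),
]

total_weight = sum(weight for _, weight, _ in example)
weighted_average = sum(weight * rating for _, weight, rating in example) / total_weight
print(round(weighted_average, 1))  # 43.9
```

With these example numbers the average lands near the illustrative overall rating of 44, but the note above asked evaluators to use their own judgment rather than a mechanical average.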

Pre-2024 ratings and uncertainty elicitation, provided for reference (no longer in use)

Category (importance) | Rating (0-100) | 90% CI | Confidence (alternative to CI)
Overall assessment (holistic, most important!) | 44 | 39, 52 |
1. Advancing our knowledge and practice | 50 | 47, 54 |
2. Methods: justification, reasonableness, validity, robustness | 51 | 45, 55 |
3. Logic and communication | 20 | 10, 35 |
4. Open, collaborative, replicable science and methods | 60 | 40, 70 |
5. Engaging with real-world impact quantification; practice, realism, and relevance | 35 | 30, 46 |
6. Relevance to global priorities | 30 | 21, 65 |

(The rating and CI entries are illustrative examples.)

[FROM PREVIOUS GUIDELINES:]

You may feel comfortable giving your "90% confidence interval," or you may prefer to give a "descriptive rating" of your confidence (from "extremely confident" to "not confident").

[Previous guidelines] "1–5 dots": Explanation and relation to CIs

5 = Extremely confident, i.e., 90% confidence interval spans +/- 4 points or less

4 = Very confident: 90% confidence interval +/- 8 points or less

3 = Somewhat confident: 90% confidence interval +/- 15 points or less

2 = Not very confident: 90% confidence interval, +/- 25 points or less

1 = Not confident: (90% confidence interval +/- more than 25 points)

[Previous...] Remember, we would like you to give a 90% CI or a confidence rating (1–5 dots), but not both.

[Previous guidelines] Example of confidence dots vs CI

[A diagram on the original page illustrated the proposed correspondence between the confidence dots and CIs.]

And, for the 'journal tier' scale:

[Previous guidelines]: Reprising the confidence intervals for this new metric

From "five dots" to "one dot":

5 = Extremely confident, i.e., 90% confidence interval spans +/– 4 points or less*

4 = Very confident: 90% confidence interval +/– 8 points or less

3 = Somewhat confident: 90% confidence interval +/– 15 points or less

2 = Not very confident: 90% confidence interval, +/– 25 points or less

1 = Not confident: 90% confidence interval spans more than +/– 25 points

Previous 'descriptions of ratings intervals'

[Previous guidelines]: The description folded below focuses on the "Overall Assessment." Please try to use a similar scale when evaluating the category metrics.

Top ratings (90–100)

95–100: Among the highest quality and most important work you have ever read.

90–100: This work represents a major achievement, making substantial contributions to the field and practice. Such work would/should be weighed very heavily by tenure and promotion committees, and grantmakers.

For example:

  • Most work in this area in the next ten years will be influenced by this paper.

  • This paper is substantially more rigorous or more insightful than existing work in this area in a way that matters for research and practice.

  • The work makes a major, perhaps decisive contribution to a case for (or against) a policy or philanthropic intervention.

Near-top (75–89) (*)

This work represents a strong and substantial achievement. It is highly rigorous, relevant, and well-communicated, up to the standards of the strongest work in this area (say, the standards of the top 5% of committed researchers in this field). Such work would/should not be decisive in a tenure/promotion/grant decision alone, but it should make a very solid contribution to such a case.

Middle ratings (40–59, 60–74) (*)

60–74.9: A very strong, solid, and relevant piece of work. It may have minor flaws or limitations, but overall it is very high-quality, meeting the standards of well-respected research professionals in this field.

40–59.9: A useful contribution, with major strengths, but also some important flaws or limitations.

Low ratings (5–19, 20–39) (*)

20–39.9: Some interesting and useful points and some reasonable approaches, but only marginally so. Important flaws and limitations. Would need substantial refocus or changes of direction and/or methods in order to be a useful part of the research and policy discussion.

5–19.9: Among the lowest-quality papers: not making any substantial contribution and containing fatal flaws. The paper may address a question that is ill-defined or obviously not relevant, or the content may be substantially outside the authors' field of expertise.

0–4: Illegible, fraudulent, or plagiarized. Please flag fraud, and notify us and the relevant authorities.

(*) 20 Mar 2023: We adjusted these ranges to avoid overlap.

The previous categories were 0–5, 5–20, 20–40, 40–60, 60–75, 75–90, and 90–100; some evaluators found the overlapping boundaries confusing.

See also

Calibration training tools

The Calibrate Your Judgment app from Clearer Thinking is fairly helpful and fun for practicing and checking how well you express your uncertainty. It requires creating an account, but that doesn't take long. The 'Confidence Intervals' training seems particularly relevant for our purposes.

Notes and cross-references:

  • We have removed the suggested weightings for the rating categories; the rationale is discussed at some length above.

  • The weightings were also presented alongside each category description in the section "Category explanations: what you are rating."

  • Evaluators working before October 2023 saw a previous version of the ratings table, reproduced above for reference.

  • The previous guidelines, also reproduced above, may be useful in considering evaluations provided pre-2024.

  • The previous guidelines asked evaluators to quantify how certain they were about each rating, either by giving a 90% confidence/credibility interval or by using the 1–5 dots scale described above.
