Prioritization ratings: discussion

As noted in Process: prioritizing research, we ask people who suggest research to provide a numerical 0-100 rating.

We also ask people within our team to act as 'assessors', giving second and third opinions on this. This 'prioritization rating' is one of the criteria we use to determine whether to commission the research for evaluation (along with author engagement, publication status, our capacity and expertise, etc.). Again, see the previous page for the current process.

So what goes into this "prioritization rating", and what does it mean?

We are working on a set of notes that fleshes this out and gives specific examples. At the moment, these notes are available to members of our team only (ask for access to "Guidelines for prioritization ratings (internal)"). We aim to share a version publicly once it converges and once we can remove any sensitive examples.

Some key points

I. This is not the evaluation itself. It is not an evaluation of the paper's merit per se:

  • Influential work, and prestigious work in influential areas, may be highly prioritized regardless of rigor and quality.

  • For work that seems potentially impactful but not particularly prestigious or influential, the prioritization rating might also consider quality. Here, aspects like clarity of writing and methodological rigor might put the work 'over the bar'. Even so, these judgments will tend to be based on rapid, shallow assessments and should not be seen as meaningful evaluations of research merit.

II. These ratings will be considered along with the discussion by the field team and the management. Thus, it is helpful if you give a justification and explanation for your stated rating.

One possible way of considering the rating criteria

Key attributes/factors

Define/consider the following ‘attributes’ of a piece of research:

  1. Global decision-relevance/VOI: Is this research decision-relevant to high-value choices and considerations that are important for global priorities and global welfare?

  2. Prestige/prominence: Is the research already prominent and valued (especially in academia), highly cited, reported on, etc.?

  3. Influence: Is the work already influencing important real-world decisions and considerations?

Obviously, these are not binary factors; there is a continuum for each. But for the sake of illustration, consider the following flowcharts.

If the flowcharts do not render, please refresh your browser. You may have to refresh twice.

Prestigious work

"Fully baked": Sometimes prominent researchers release work (e.g., on NBER) that is not particularly rigorous or involved, which may have been put together quickly. This might be research that links to a conference they are presenting at, to their teaching, or to specific funding or consulting. It may be survey/summary work, perhaps meant for less technical audiences. The Unjournal tends not to prioritize such work, or at least not consider it in the same "prestigious" basket (although there will be exceptions). In the flowchart above, we contrast this with their "fully-baked" work.

Decision-relevant, prestigious work: Suppose the research is both 'globally decision-relevant' and prominent. Here, if the research is in our domain, we probably want to have it publicly evaluated, basically regardless of its apparent methodological strength. This is particularly true if it has recently been made public (as a working paper), if it has not yet been published in a highly respected peer-reviewed journal, and if non-straightforward methodological issues are involved.

Prestigious work that seems less globally relevant: We generally will not prioritize this work unless it adds to our mission in other ways (see, e.g., our 'sustainability' and 'credibility' goals here). In particular, we will prioritize such research more if:

  • It is presented in innovative, transparent formats (e.g., dynamic documents/open notebooks, sharing code and data)

  • The research indirectly supports more globally relevant research, e.g., through…

    • Providing methodological tools that are relevant to that ‘higher-value’ work

    • Drawing attention to neglected high-priority research fields (e.g., animal welfare)

Less prestigious work

(If the flowchart below does not render, please refresh your browser; you may have to refresh twice.)

Decision-relevant, influential (but less prestigious) work: For example, suppose research is cited by a major philanthropic organization as guiding its decision-making, but the researchers do not have strong academic credentials or a track record. Again, if this research is in our domain, we probably want to have it publicly evaluated. However, depending on the rigor of the work and the way it is written, we may want to explicitly class it in our 'non-academic/policy' stream.

Decision-relevant, less prestigious, less influential work: What about less-prominent work, with fewer academic accolades, that is not yet having an influence but nonetheless seems globally decision-relevant? Our evaluations seem less likely to have an influence here unless the work is potentially strong; in that case, our evaluations, ratings, and feedback could boost valuable but neglected work. Accordingly, our prioritization rating might focus more on our initial impressions of things like…

  • Methodological strength (this is a big one!)

  • Rigorous logic and communication

  • Open science and robust approaches

  • Engagement with real-world policy considerations

Again: the prioritization process is not meant to be an evaluation of the work itself. It's OK to do this in a fairly shallow way.
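
To summarize the two flowcharts in another form, below is a minimal, purely illustrative sketch (in Python) of the triage logic described in this section. The boolean attributes, helper names, and outcome labels are our own hypothetical simplification for illustration; in practice the rating is a judgment call made alongside team discussion, not a mechanical rule.

```python
# Purely illustrative: a rough encoding of the triage logic in the two
# flowcharts above. Attribute names, the boolean simplification, and the
# outcome labels are a hypothetical sketch, not The Unjournal's formal rule.

from dataclasses import dataclass

@dataclass
class ResearchAttributes:
    decision_relevant: bool   # 1. global decision-relevance / value of information
    prestigious: bool         # 2. prestige/prominence (esp. in academia)
    influential: bool         # 3. already influencing real-world decisions
    seems_strong: bool        # shallow impression of methods, clarity, openness
    supports_mission: bool    # e.g., innovative formats; aids higher-value work

def triage(r: ResearchAttributes) -> str:
    if r.prestigious and r.decision_relevant:
        # Prioritize basically regardless of apparent methodological strength.
        return "prioritize for public evaluation"
    if r.prestigious:
        # Seems less globally relevant: prioritize only if it furthers
        # other goals (transparent formats, supporting higher-value work).
        return "prioritize" if r.supports_mission else "deprioritize"
    if r.decision_relevant and r.influential:
        # E.g., work guiding a major philanthropic funder; perhaps the
        # 'non-academic/policy' stream, depending on rigor and presentation.
        return "prioritize (possibly policy stream)"
    if r.decision_relevant:
        # Neither prominent nor yet influential: lean on shallow impressions
        # of strength, logic, open-science practice, and policy engagement.
        return "prioritize" if r.seems_strong else "deprioritize"
    return "deprioritize"

# Example: a decision-relevant working paper from a prominent team
print(triage(ResearchAttributes(
    decision_relevant=True, prestigious=True, influential=False,
    seems_strong=False, supports_mission=False)))
# -> prioritize for public evaluation
```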

In the future, we may want to put together a loose set of methodological 'suggestive guidelines' for work in different fields and areas, without being too rigid or prescriptive. (To do: we can draw from some existing frameworks for this [ref].)