"Pivotal questions"


Express your interest or suggest a pivotal question using this form.

We will try to keep a public record of our progress — see this forum sequence and this public database of PQs.

The Pivotal Questions project in brief

The Unjournal commissions public evaluations of impactful research in quantitative social science fields. We are seeking pivotal questions to guide our choice of research papers to commission for evaluation. We're contacting organizations that aim to use evidence to do the most good, and asking:

  • Which open questions most affect your policies and funding recommendations?

  • For which questions would research yield the highest ‘value of information’?

The Unjournal has focused on finding research that seems relevant to impactful questions and crucial considerations, and then commissioning experts to publicly evaluate it. (For more about our process, see here.) Our field specialist teams search and monitor prominent research archives (like NBER) and consider agendas from impactful organizations, while keeping an eye on forums and social media.

We're now exploring turning this on its head: identifying pivotal questions first, and then identifying and evaluating a cluster of research that informs them. This could offer a more efficient and observable path to impact. (For context, see our 'logic model' flowchart for our theory of change.)

The process

Elicit questions

The Unjournal will ask impact-focused, research-driven organizations such as Open Philanthropy and Charity Entrepreneurship to identify specific questions that impact their funding, policy, and research-direction choices. For example, if GiveWell is considering recommending a charity running a CBT intervention in West Africa, they’d like to know: “How much does a 16-week course of non-specialist psychotherapy increase self-reported happiness, compared to the same amount spent on direct cash transfers?” We’re looking for the questions with the highest value of information (VOI) for the organization’s work over the next few years.
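
To make “value of information” concrete, here is a deliberately simplified sketch of the idea: compare the expected impact of deciding under current beliefs with the expected impact of deciding after the question has been resolved. Everything below (the payoffs, the prior, the decision framing) is a hypothetical illustration, not anyone's actual estimate.

```python
# Toy value-of-information (VOI) calculation. All numbers are hypothetical
# and only illustrate the concept, not any organization's model.

# A funder must allocate a grant to one of two programs. Payoffs (in
# arbitrary "impact units") depend on whether psychotherapy outperforms
# cash transfers ("high" effect) or not ("low" effect).
payoffs = {
    "fund_psychotherapy": {"high": 10.0, "low": 2.0},
    "fund_cash_transfers": {"high": 6.0, "low": 6.0},
}
prior_high = 0.4  # funder's current belief that the effect is "high"

def expected_value(action: str, p_high: float) -> float:
    return p_high * payoffs[action]["high"] + (1 - p_high) * payoffs[action]["low"]

# Expected impact of deciding now, under current beliefs.
value_now = max(expected_value(a, prior_high) for a in payoffs)

# Expected impact if research resolved the question before the decision:
# pick the best action in each state, weighted by the prior.
value_with_info = (
    prior_high * max(payoffs[a]["high"] for a in payoffs)
    + (1 - prior_high) * max(payoffs[a]["low"] for a in payoffs)
)

voi = value_with_info - value_now
print(f"Decide now: {value_now:.2f}; with the answer: {value_with_info:.2f}; VOI = {voi:.2f}")
```

Questions with a large gap between these two numbers are the ones we most want suggested.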

We have some requirements — the questions should relate to The Unjournal’s coverage areas and engage rigorous research in economics, social science, policy, or impact quantification. Ideally, organizations will identify at least one piece of publicly-available research that relates to their question. But we are doing this mainly to help these organizations, so we will try to keep it simple and low-effort for them.

More examples of questions
  • If The Center for Humane Technology is considering a political campaign for AI safety in California, they could consider: “How much do television and social media advertisements increase the vote share for ballot initiatives supporting the regulation of technology and business for safety reasons?”

  • OP might be considering funding organizations that promote democracy, largely because they think democracies may be more resilient to global catastrophes. As a tractable proxy, they may want to know: “By what percentage does a country being a democracy reduce the loss of life in a natural disaster on the scale of a 7+ magnitude earthquake?”

  • If a CE project is considering promoting farmed fish welfare legislation in India, they might ask: “As the price of India-farmed fish increases by 10%, how much will consumption of farmed fish decline?”

We will work to minimize the effort required from these organizations; e.g., by leveraging their existing writings and agendas to suggest potential high value-of-information questions. We will also crowdsource questions (via EA Forum, social media, etc.), offering bounties for valuable suggestions.

Select, refine, and get feedback on the target questions

The Unjournal team will discuss the suggested questions, leveraging our field specialists’ expertise. We’ll rank these questions, prioritizing at least one for each organization.

We’ll work with the organization to specify the priority question precisely and in a useful way. We want to be sure that (1) evaluators will interpret these questions as intended, and (2) the answers that come out are likely to actually be helpful. We’ll make these lists of questions public and solicit general feedback — on their relevance, on their framing, on key sub-questions, and on pointers to relevant research.

Where practicable, we will operationalize each target question as a claim on a public prediction market, to be resolved by the evaluations and synthesis described below. If a question is well operationalized, and we have a clear approach to ‘resolving’ it after the evaluations and synthesis, we will post it on a reputation-based market like Metaculus or Manifold; Metaculus is offering ‘minitaculus’ platforms, such as this one on Sudan, to enable these more flexible questions.
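
For illustration, an operationalized target question might be written up as a structured, resolvable claim along the following lines. This is a minimal sketch under our own assumptions; the field names and example values are hypothetical, not a fixed Unjournal format.

```python
from dataclasses import dataclass, field

@dataclass
class TargetQuestion:
    """Illustrative (hypothetical) structure for an operationalized pivotal question."""
    question: str                 # precise, quantitative phrasing
    metric: str                   # what is measured, and in what units
    resolution_criteria: str      # how the evaluations/synthesis will resolve it
    resolution_date: str          # when we expect to be able to resolve it
    suggested_by: str             # the organization that proposed it
    related_research: list[str] = field(default_factory=list)  # links or DOIs

example = TargetQuestion(
    question=("How much does a 16-week course of non-specialist psychotherapy "
              "increase self-reported happiness, relative to an equal-cost "
              "direct cash transfer?"),
    metric="Difference in self-reported well-being, in standard deviation units",
    resolution_criteria="Resolved from the synthesis report's central estimate",
    resolution_date="(hypothetical) end of the pilot period",
    suggested_by="(hypothetical) global health funder",
)
```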

Elicit stakeholder beliefs

We will ask (and help) the organizations and interested parties to specify their own beliefs about these questions, aka their 'priors'. We may adapt the Metaculus interface for this.
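
As a sketch of what specifying a ‘prior’ could involve, one possible (assumed) format is a central estimate plus a credible interval for the quantity of interest; the identifiers and numbers below are purely illustrative.

```python
# One possible (assumed) format for recording a stakeholder's prior beliefs:
# a central estimate plus a 90% credible interval for the quantity of interest.
stakeholder_prior = {
    "question_id": "psychotherapy-vs-cash",  # hypothetical identifier
    "organization": "Example Funder",        # hypothetical organization
    "quantity": "effect of psychotherapy vs. cash transfers (SD units)",
    "p05": 0.05,     # 5th percentile
    "median": 0.20,  # central estimate
    "p95": 0.45,     # 95th percentile
}
```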

Source and prioritize research informing the target questions

Once we’ve converged on the target question, we’ll do a variation of our usual evaluation process.

For each question, we will prioritize roughly two to five research papers. These may be suggested by the organization that proposed the question, sourced by The Unjournal, or discovered through community feedback.

Commission expert evaluations of research, informing the target questions

As we normally do, we’ll have evaluation managers recruit expert evaluators. However, we’ll ask the evaluators to focus on the target question, and to consider the target organization’s priorities.

We’ll also encourage the evaluators to discuss and revise their quantitative judgments. This is inspired by the repliCATS project, and by some evidence suggesting that the (mechanistically aggregated) estimates of experts after deliberation tend to be more accurate than their independent estimates (also mechanistically aggregated). We may also facilitate collaborative evaluations and ‘live reviews’, following the examples of ASAPBio, PREreview, and others.
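
For readers unfamiliar with the term, ‘mechanistic aggregation’ here just means combining evaluators’ independent quantitative estimates with a fixed rule (such as taking the median), rather than asking the group to negotiate a single consensus number. A minimal sketch with made-up numbers:

```python
import statistics

# Hypothetical estimates (e.g., of an effect size) from three evaluators,
# recorded before and after a structured discussion.
independent_estimates = [0.10, 0.35, 0.20]
post_discussion_estimates = [0.18, 0.28, 0.22]

# "Mechanistic" aggregation: apply a fixed rule (here, the median) rather
# than asking evaluators to negotiate a single consensus number.
aggregate_before = statistics.median(independent_estimates)
aggregate_after = statistics.median(post_discussion_estimates)

print(f"Median of independent estimates:  {aggregate_before:.2f}")
print(f"Median after deliberation:        {aggregate_after:.2f}")
```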

Get feedback from paper authors and from the target organization(s)

We will contact both the research authors (as per our standard process) and the target organizations for their responses to the evaluations, and for follow-up questions. We’ll foster a productive discussion between them (while preserving anonymity as requested, and being careful not to overtax people’s time and generosity).

Prepare a Synthesis Report

We will ask evaluation managers to write a report summarizing the research investigated.

These reports should synthesize “What do the research, evaluations, and responses say about the question/claim?” They should provide an overall metric relating to the truth value of the target question (or similar for the parameter of interest). In cases where we integrate prediction markets, they should decisively resolve the market claim.
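
As a toy illustration of what “decisively resolving” a market claim from a synthesis report could look like, under made-up numbers (the real resolution criteria would be fixed when the question is operationalized):

```python
# Hypothetical resolution of an operationalized claim from a synthesis report.
claim = "The effect exceeds 0.15 SD units"
claim_threshold = 0.15
synthesis_central_estimate = 0.22  # overall metric reported in the synthesis

resolution = "YES" if synthesis_central_estimate > claim_threshold else "NO"
print(f"{claim!r} resolves {resolution} "
      f"(central estimate = {synthesis_central_estimate:.2f})")
```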

Next, we will share these synthesis reports with authors and organizations for feedback.

(Where applicable) Resolve the prediction markets

Complete and publish the ‘target question evaluation packages’

We’ll put up each evaluation on our Unjournal.pubpub.org page, bringing them into academic search tools, databases, bibliometrics, etc. We’ll also curate them, linking them to the relevant target question and to the synthesis report.

We will produce, share, and promote further summaries of these packages. This could include forum and blog posts summarizing the results and insights, as well as interactive and visually appealing web pages. We may also produce less technical content, perhaps submitting work to outlets like Asterisk, Vox, or worksinprogress.co.

‘Operationalizable’ questions

At least initially, we’re planning to ask for questions that could be definitively answered and/or measured quantitatively. We will help organizations and other suggesters refine their questions to make this the case. These should resemble questions that could be posted on forecasting platforms such as Manifold Markets or Metaculus. They should also resemble the ‘claim identification’ we currently request from evaluators.

We give detailed guidance with examples on the pages ‘Operationalizable’ questions and Why do we want these pivotal questions to be ‘operationalizable’?

How you can help us

Give us feedback on this proposal

We’re still refining this idea, and looking for your suggestions about what is unclear, what could go wrong, what might make this work better, what has been tried before, and where the biggest wins are likely to be. We’d appreciate your feedback! (Feel free to email contact@unjournal.org to make suggestions or arrange a discussion.)

Suggest organizations and people we should reach out to

Suggest target questions

If you work for an impact-focused research organization and you are interested in participating in our pilot, please reach out to us at contact@unjournal.org to flag your interest and/or complete this form. We would like to see:

  • A specific, operationalized, high-value claim or research question you’d like to be evaluated, that falls within our scope (~quantitative social science, economics, policy, and impact measurement)

  • A brief description of what your organization does (linking your ‘about us’ page is fine)

  • A brief explanation of why this question is particularly high-value for your organization or your work, and, if applicable, how you have tried to answer it

  • If possible, a link to at least one research paper that relates to this question

  • Optionally, your current beliefs about this question (your ‘priors’)

Please also let us know how you would like to engage with us on refining this question and addressing it. Do you want to follow up with a 1-1 meeting? How much time are you willing to put in? Who, if anyone, should we reach out to at your organization?

Remember that we plan to make all of this analysis and evaluation public. However, we will not make any of your input public without your consent.

If you don’t represent an organization, we still welcome your suggestions, and will try to give feedback.


Again, please remember that we currently focus on quantitative ~social science fields, including economics, policy, and impact modeling (see here for more detail on our coverage). Questions surrounding (for example) technical AI safety, microbiology, or measuring animal sentience are less likely to be in our domain.

If you want to talk about this first, or if you have any questions, please send an email or schedule a meeting with David Reinstein, our co-founder and director.
