Communicating results

Curating and publishing evaluations, linked to research

Aggregating evaluators' ratings and predictions

We aim to elicit expert judgment from Unjournal evaluators efficiently and precisely, and to communicate this quantitative information concisely and usefully, in ways that will inform policymakers, philanthropists, and future researchers.

In the short run (in our pilot phase), we will attempt to present simple but reasonable aggregations, such as simple averages of midpoints and confidence-interval bounds. However, going forward, we are consulting and incorporating the burgeoning academic literature on "aggregating expert opinion." (See, e.g., Hemming et al., 2017; Hanea et al., 2021; McAndrew et al., 2020; Marcoci et al., 2022.)
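To illustrate the pilot-phase approach, the sketch below averages evaluators' ratings given as (lower bound, midpoint, upper bound) triples. The function name, data shape, and numbers are hypothetical, not The Unjournal's actual pipeline; it shows only the "simple averages of midpoints and confidence-interval bounds" idea.

```python
# Hypothetical sketch of naive rating aggregation.
# Each evaluator supplies (lower_bound, midpoint, upper_bound)
# for one metric, e.g., on a 0-100 scale. Illustrative only.

def aggregate_ratings(ratings):
    """Average the midpoints and each CI bound across evaluators."""
    n = len(ratings)
    lo = sum(r[0] for r in ratings) / n
    mid = sum(r[1] for r in ratings) / n
    hi = sum(r[2] for r in ratings) / n
    return lo, mid, hi

# Three evaluators' ratings for one paper (made-up numbers):
ratings = [(60, 75, 85), (50, 70, 90), (65, 80, 92)]
print(aggregate_ratings(ratings))
```

Note that averaging interval bounds ignores correlation between evaluators and can misstate collective uncertainty, which is one reason the expert-aggregation literature cited above proposes more principled pooling methods.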

We are working on this in our public data presentation (Quarto notebook) here.

Other communication

We are considering:

  • Syntheses of evaluations and author feedback

  • Input to prediction markets, replication projects, etc.

  • Less technical summaries and policy-relevant summaries, e.g., for the EA Forum, Asterisk magazine, or mainstream long-form outlets
