Communicating results
Curating and publishing evaluations, linked to research
Previously (less emphasized): our Sciety group, curating evaluations and papers
Evaluations and author responses are given DOIs and enter the bibliometric record (see the Crossref lookup sketch below)
Future considerations:
"publication tier" of authors' responses as a workaround to encode aggregated evaluation
Hypothes.is annotation of hosted and linked papers and projects (we aim to integrate Hypothes.is for collaborative annotation; see the annotation-fetch sketch below)
Sharing evaluation data in a public GitHub repo (see data reporting here)
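
To make the DOI point above concrete, here is a minimal sketch that checks a work's entry in the bibliometric record via the public Crossref REST API. This assumes the evaluation's DOI is registered with Crossref; the DOI shown is a hypothetical placeholder, not a real Unjournal record.

```python
# Minimal sketch, assuming the evaluation's DOI is registered with Crossref.
# The DOI below is a hypothetical placeholder, not a real Unjournal record.
import requests

def crossref_record(doi: str) -> dict:
    """Fetch the registered metadata for a DOI from the Crossref REST API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]  # Crossref wraps the work in "message"

record = crossref_record("10.1234/example-evaluation")  # hypothetical DOI
print(record.get("title"), record.get("type"), record.get("created"))
```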
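For the planned Hypothes.is integration, the sketch below pulls the public annotations attached to a paper's URL through the Hypothes.is search API (a public endpoint). The paper URL is a hypothetical placeholder.

```python
# Minimal sketch using the public Hypothes.is search API.
# PAPER_URL is a hypothetical placeholder for a hosted or linked paper.
import requests

HYPOTHESIS_SEARCH = "https://api.hypothes.is/api/search"
PAPER_URL = "https://example.org/hosted-paper"  # hypothetical placeholder

def fetch_annotations(paper_url: str, limit: int = 50) -> list[dict]:
    """Return public annotations attached to the given paper URL."""
    resp = requests.get(
        HYPOTHESIS_SEARCH,
        params={"uri": paper_url, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["rows"]  # each row is one annotation

for ann in fetch_annotations(PAPER_URL):
    # Each annotation carries the annotator handle and the note text.
    print(ann["user"], "-", ann.get("text", "")[:80])
```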
Aggregating evaluators' ratings and predictions
We aim to elicit expert judgment from Unjournal evaluators efficiently and precisely, and to communicate this quantitative information concisely and usefully, in ways that inform policymakers, philanthropists, and future researchers.
In the short run (in our pilot phase), we will attempt to present simple but reasonable aggregations, such as simple averages of midpoints and confidence-interval bounds. However, going forward, we are consulting and incorporating the burgeoning academic literature on "aggregating expert opinion." (See, e.g., Hemming et al., 2017; Hanea et al., 2021; McAndrew et al., 2020; Marcoci et al., 2022.)
We are working on this in our public data presentation (Quarto notebook) here.
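
As a concrete illustration, here is a minimal sketch of the simple pilot-phase aggregation described above, plus one illustrative weighted alternative in the spirit of the literature cited above. The Rating fields, the 0-100 scale, and the example numbers are hypothetical, not our actual schema or data.

```python
# Minimal sketch of the pilot-phase aggregation: simple averages of
# midpoints and confidence-interval bounds. Field names, the 0-100 scale,
# and the example numbers are hypothetical, not The Unjournal's schema.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    midpoint: float  # evaluator's best guess on a 0-100 scale
    lower: float     # lower bound of the evaluator's 90% credible interval
    upper: float     # upper bound of the evaluator's 90% credible interval

def aggregate_simple(ratings: list[Rating]) -> Rating:
    """Average midpoints and interval bounds separately (pilot-phase approach)."""
    return Rating(
        midpoint=mean(r.midpoint for r in ratings),
        lower=mean(r.lower for r in ratings),
        upper=mean(r.upper for r in ratings),
    )

def aggregate_precision_weighted(ratings: list[Rating]) -> float:
    """Illustrative alternative: weight each midpoint by the inverse width of
    the evaluator's interval, so more confident evaluators count for more.
    One simple idea from the expert-aggregation literature, not our method."""
    weights = [1.0 / max(r.upper - r.lower, 1e-9) for r in ratings]
    return sum(w * r.midpoint for w, r in zip(weights, ratings)) / sum(weights)

ratings = [Rating(80, 70, 90), Rating(65, 50, 85), Rating(72, 60, 80)]
agg = aggregate_simple(ratings)
print(f"simple: midpoint={agg.midpoint:.1f}, 90% CI=({agg.lower:.1f}, {agg.upper:.1f})")
print(f"precision-weighted midpoint: {aggregate_precision_weighted(ratings):.1f}")
```

Note that averaging interval bounds is crude: it preserves the average interval width but ignores disagreement between evaluators' midpoints, so it can understate overall uncertainty. The pooling methods in the literature cited above address this, and are part of what we are incorporating.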
Other communication
We are considering...
Syntheses of evaluations and author feedback
Input to prediction markets, replication projects, etc.
Less technical summaries and policy-relevant summaries, e.g., for the EA Forum, Asterisk magazine, or mainstream long-form outlets