Communicating results

Curating and publishing evaluations, linked to research

  • Unjournal PubPub page

    • Previous/less emphasized: Sciety group (curating evaluations and papers)

  • Evaluations and author responses are given DOIs and enter the bibliometric record

    • Future consideration:

      • "publication tier" of authors' responses as a workaround to encode aggregated evaluation

  • Sharing evaluation data in a public GitHub repo (see data reporting here); a minimal loading sketch follows this list
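
A minimal sketch of pulling that shared data into an analysis session is below. The repository, file path, and column names are hypothetical placeholders, not the actual Unjournal data layout; consult the repo itself for the real schema.

```python
# Minimal sketch: load evaluation ratings from a public GitHub repo.
# NOTE: the repository, path, and column names are hypothetical placeholders.
import pandas as pd

RAW_CSV_URL = (
    "https://raw.githubusercontent.com/example-org/unjournal-data/"  # hypothetical repo
    "main/evaluations/ratings.csv"                                   # hypothetical path
)

ratings = pd.read_csv(RAW_CSV_URL)

# Count how many evaluators rated each paper
# (assumes hypothetical 'paper_id' and 'evaluator_id' columns).
print(ratings.groupby("paper_id")["evaluator_id"].nunique())
```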

Aggregating evaluators' ratings and predictions

We aim to elicit expert judgment from Unjournal evaluators efficiently and precisely, and to communicate this quantitative information concisely and usefully, in ways that inform policymakers, philanthropists, and future researchers.

In the short run (our pilot phase), we will present simple but reasonable aggregations, such as simple averages of midpoints and confidence-interval bounds. Going forward, we are consulting and incorporating the burgeoning academic literature on "aggregating expert opinion" (see, e.g., Hemming et al., 2017; Hanea et al., 2021; McAndrew et al., 2020; Marcoci et al., 2022).
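
For concreteness, here is a minimal sketch of that short-run approach, assuming each evaluator reports a midpoint and a 90% credible interval for a rating on a 0-100 scale. The numbers are illustrative, not real evaluation data.

```python
# Minimal sketch of the pilot-phase aggregation: simple (unweighted) averages
# of evaluators' midpoints and confidence-interval bounds for one rating
# category. The values below are illustrative, not real Unjournal data.
from statistics import mean

# Each tuple: (lower 90% CI bound, midpoint, upper 90% CI bound), 0-100 scale.
evaluator_ratings = [
    (55, 70, 82),
    (60, 75, 88),
    (48, 66, 80),
]

lowers, midpoints, uppers = zip(*evaluator_ratings)

aggregate = {
    "ci_lower": mean(lowers),     # ~54.3
    "midpoint": mean(midpoints),  # ~70.3
    "ci_upper": mean(uppers),     # ~83.3
}
print(aggregate)
```

More principled pooling methods from the expert-aggregation literature cited above would replace these simple means; this sketch only captures the pilot-phase baseline.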

We are working on this in our public data presentation (Quarto notebook), available here.

Other communication

We are considering...

  • Syntheses of evaluations and author feedback

  • Input to prediction markets, replication projects, etc.

  • Less technical and policy-relevant summaries, e.g., for the EA Forum, Asterisk magazine, or mainstream long-form outlets

  • Hypothes.is annotation of hosted and linked papers and projects (aiming to integrate; see hypothes.is for collaborative annotation)
