7 Feb 2023: We have considered and put together the following:
See: Guidelines for evaluators
... including descriptive and quantitative elements (ratings and predictions). With feedback from evaluators and others, we are continuing to build and improve these guidelines.
See sections below.
Status: 7 Feb 2023
Volunteer pool of 80+ reviewers (see Airtable), responding to How to get involved and other outreach
For our initial 3 focal pilot papers, we have a total of 8 completed evaluations (2 of these are complete subject to some final checks).
For the remaining 7 pilot papers, roughly 6 evaluators have agreed so far (we aim for 2–3 per paper).
Identify a small set of papers or projects as representative first cases; use these to help test the system we are building in a concrete manner.
In doing the above, we are also collecting a longer list of key papers, projects, authors, topics, issues, etc.
1. Post on the EA Forum (and elsewhere) and share the form (see view at the bottom of this section) to promote our call for papers further, with a bounty.
2. Search for most-cited papers (within our domain) among EA-aligned researchers and organizations.
3. Dig into existing lists, reviews, and syllabi, such as:
GPI research agenda (includes many posed questions)
Open Philanthropy "questions that might affect our grantmaking" (needs updating? few academic links)
Syllabi: Pablo's list; Economics focus list; David Rhys-Bernard's syllabus (link to my commented/highlighted version)
5. Appeal directly to authors and research groups
6. Cross-promote with How to get involved
Pete Slattery: "Do a complete test run using a single paper and team…" Thus, we aim to identify a small set of papers (or projects), maybe 2–3, that seem like good test and example cases, and to offer a bounty for those we choose as test cases.
Note that much of the work identified here has already been peer-reviewed and "published." While we envision that The Unjournal may assess papers already published in traditional journals, these are probably not the best cases for the PoC. Thus, we de-prioritize them for now.
7 Feb 2023: We have an organized founding/management committee, as well as an advisory board. We are focusing on pushing research through the evaluation pipeline, communicating this output, and making it useful. We have a working division of labor, e.g., assigning "managing editors" to specific papers. We are likely to expand our team after our pilot, conditional on further funding.
The creation of an action plan can be seen in the Gdoc discussion
Assemble a list of the most relevant and respected people, using more or less objective criteria and justification.
Ask them to join the founding committee.
Ask them to join the list of supporters.
Add people who have made past contributions.
28 May 2022: The above has mostly been done, at least in terms of people attending the first meeting. We probably need a more systematic approach to getting the list of supporters.
Further posts on social media, academic websites and message boards, etc.
The steps we've taken and our plans; needs updating
This page and its sub-pages await updating
18 Jun 2023: This needs updating
Initial evaluations; feedback on the process
Revise process; further set of evaluations
Disseminate and communicate (research, evaluations, processes); get further public feedback
Further funding; prepare for scaling-up
Management: updates and calls to action in a Gdoc shared via email.
Set up the basic platforms for posting and administering reviews and evaluations and offering curated links and categorizations of papers and projects.
7 Feb 2023
Set up Kotahi
Configured it for submission and management
Mainly configured for evaluation, but it needs bespoke configuration to be efficient and easy for evaluators, particularly for the quantitative ratings and predictions. Thus, we are using Google Docs (or cryptpads) for the pilot. We will configure Kotahi further with additional funds.
Evaluations are curated on Sciety, which integrates them with the publicly hosted research.
7 Feb 2023: We are working on:
The best ways to get evaluations from "submissions on Kotahi" into Sciety,
... with the curated link to the publicly-hosted papers (or projects) on a range of platforms, including NBER
Ways to get DOIs for each evaluation and author response
Ways to integrate evaluation details as 'collaborative annotations' (with hypothes.is) into the hosted papers
(We currently use a hypothes.is workaround to have these feed into Sciety, so they show up as 'evaluated pre-prints' in its public database, gaining a DOI.)
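For context, below is a minimal sketch of how an evaluation summary could be posted as a hypothes.is annotation on a hosted paper via the public hypothes.is API. This is an illustration only, not necessarily our exact workflow: the API token, paper URL, group ID, and annotation text are placeholders, and downstream ingestion into Sciety depends on the group configuration.

```python
# Minimal sketch: post an evaluation summary as a hypothes.is annotation
# on a publicly hosted paper, using the hypothes.is REST API.
# The token, URI, group, and text below are placeholders (assumptions).
import requests

API_URL = "https://api.hypothes.is/api/annotations"
API_TOKEN = "YOUR_HYPOTHESIS_API_TOKEN"  # personal developer token (placeholder)

payload = {
    # Publicly hosted paper or preprint being annotated (placeholder URL)
    "uri": "https://www.nber.org/papers/wXXXXX",
    # Body of the annotation: a short evaluation summary plus a link to the full evaluation
    "text": "Unjournal evaluation summary, with a link to the full evaluation and its DOI.",
    "tags": ["UnjournalEvaluation"],
    # "__world__" is the public group; a dedicated group ID could be used instead
    "group": "__world__",
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
print("Created annotation:", resp.json()["id"])
```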