Pilot steps

See sections below.

Pilot: Building a founding committee


7 Feb 2023: We have an organized founding/management committee, as well as an advisory board (see Our team). We are focusing on pushing research through the evaluation pipeline, communicating this output, and making it useful. We have a working division of labor, e.g., among "managing editors," for specific papers. We are likely to expand our team after our pilot, conditional on further funding.

Progress: the team (continual update)

Key elements of plan

Put together founding committee, meetings, public posts, and feedback (done)
  1. Build a "founding committee" of 5–8 experienced and enthusiastic EA-aligned or adjacent researchers at EA orgs, research academics, and practitioners (e.g., draw from speakers at recent EA Global meetings).

How was this founding committee recruited?

  • The creation of an action plan can be seen in the Gdoc discussion

Three key relevant areas from which to draw candidates

DR: I think we need to draw people from a few relevant areas:

  1. Academia, in subject fields relevant to The Unjournal: economics, quantitative social science, and perhaps others

  2. Effective altruism, to assess the value and scope of the journal and the research

  3. Open science, academic reform, and applied metascience: people with practical ideas and knowledge

  4. Additionally, people with strong knowledge of journal and bibliometric processes and systems

First: direct outreach to a list of pivotal, prominent people

  1. Assemble a list of the most relevant and respected people, using more or less objective criteria and justification.

    1. Ask them to join the founding committee.

    2. Ask them to join the list of supporters.


28 May 2022: The above has mostly been done, at least in terms of people attending the first meeting. We probably need a more systematic approach to getting the list of supporters.

Second: public call for interest

Further posts on social media, academic websites and message boards, etc.

See also public Gdoc

Setting up evaluation guidelines for pilot papers

7 Feb 2023: We have considered and put together our Guidelines for evaluators, including descriptive and quantitative elements (ratings and predictions). With feedback from evaluators and others, we are continuing to build and improve these guidelines.

'Evaluators': Identifying and engaging

Status: 7 Feb 2023

  1. Volunteer pool of 80+ reviewers (see Airtable), responding to How to get involved and other outreach.

  2. For our initial 3 focal pilot papers, we have a total of 8 completed evaluations (2 of these are complete subject to some final checks). Status is tracked in the Airtable (Reviewer_process view in participants_reviewers).

  3. For the remaining 7 pilot papers, we have roughly 6 agreed evaluators so far (we aim for 2–3 per paper).
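The evaluator status tracked in the Airtable above can also be queried programmatically via Airtable's REST API. Below is a minimal sketch; the base ID, table name, and `paper_id` field are hypothetical placeholders, not our actual schema:

```python
import urllib.parse
import urllib.request

AIRTABLE_API = "https://api.airtable.com/v0"

def reviewer_status_url(base_id: str, table: str, paper_id: str) -> str:
    """Build the Airtable list-records URL for one paper's evaluator records.

    Airtable's list-records endpoint accepts a filterByFormula query
    parameter; here we filter on a hypothetical 'paper_id' field.
    """
    formula = f"{{paper_id}} = '{paper_id}'"
    query = urllib.parse.urlencode({"filterByFormula": formula})
    return f"{AIRTABLE_API}/{base_id}/{urllib.parse.quote(table)}?{query}"

def fetch_records(url: str, api_key: str) -> bytes:
    """Fetch matching records (requires a real API key; sketch only)."""
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

This keeps the tracking in Airtable itself as the source of truth while letting scripts (e.g., a status dashboard) read from it.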

  • Create private Airtable with lists of names and organizations.

  • Added element: list of supporter names for credibility, with little or no commitment.

  • Host a meeting (and shared collaboration space/document), to come to a consensus on a set of practical principles. [26 May 2022: First meeting held, writing up shared notes.]

  • Post and present our consensus (coming out of this meeting) on key fora. After a brief followup period (~1 week), consider adjusting the above consensus plan in light of the feedback, repost, and move forward.

  • Excerpts from our successful ACX grant application, reiterated in a follow-up FTX Future Fund application (for further funding; unsuccessful).

  • Add people who have made past contributions.

See also:

  • Our team
  • "Procedure for choosing committee"
  • EA Forum question post: Soliciting lists and names
  • Discussion of "the committee and consensus"

Pilot: Setting up platforms

Set up the basic platforms for posting and administering reviews and evaluations, and for offering curated links and categorizations of papers and projects.

Progress reports

Update 7 Sep 2022; partial update 22 Dec 2022

  • We are setting up processes and forms in Kotahi.

    • The submissions form is pretty useable (but imperfect; e.g., we need to ask people to click 'Submit a URL instead' on page one).

  • Evaluations form: using a Gdoc for now; trying out Airtable, Qualtrics, and other solutions, aiming to integrate it into Kotahi.

  • See Mapping evaluation workflow for how projects will enter, be evaluated, and 'output'.

  • We will outline specific requests for developers.

  • Sciety group set up with Hypothes.is feed; working on processing first evaluations.

Submission, evaluation and management platform

7 Feb 2023

  • Set up Kotahi.

  • Configured it for submission and management.

  • Mainly configured it for evaluation, but it needs bespoke configuration to be efficient and easy for evaluators, particularly for the quantitative ratings and predictions. Thus we are using Google Docs (or cryptpads) for the pilot; we will configure Kotahi further with additional funds.

Sciety group (curated evaluations and research)

Evaluations are curated in our Sciety.org group, which integrates them with the publicly hosted research.

7 Feb 2023: We are working on:

  • the best ways to get evaluations from submissions on Kotahi into Sciety,

  • ... with curated links to the publicly hosted papers (or projects) on a range of platforms, including NBER,

  • ways to get DOIs for each evaluation and author response.

(We currently use a Hypothes.is workaround to feed these into Sciety, so they show up as 'evaluated preprints' in its public database, gaining a DOI.)
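Because the workaround posts evaluations as annotations in a Hypothes.is group, they can be retrieved via the public Hypothes.is search API. A minimal sketch; the group ID below is a placeholder, not The Unjournal's real group:

```python
import json
import urllib.parse
import urllib.request

HYPOTHESIS_SEARCH = "https://api.hypothes.is/api/search"

def group_annotations_url(group_id: str, limit: int = 20) -> str:
    """Build a Hypothes.is search-API URL for one group's annotations.

    The /api/search endpoint supports 'group' and 'limit' parameters;
    the group ID passed in is assumed, not our actual group.
    """
    query = urllib.parse.urlencode({"group": group_id, "limit": limit})
    return f"{HYPOTHESIS_SEARCH}?{query}"

def fetch_annotations(group_id: str) -> list:
    """Fetch the group's annotations (network call; sketch only)."""
    with urllib.request.urlopen(group_annotations_url(group_id)) as resp:
        payload = json.load(resp)
    # The search API returns matches under the 'rows' key.
    return payload.get("rows", [])
```

A downstream script could map each returned annotation to the evaluated preprint before the Sciety feed picks it up.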


Pilot: Identifying key research

(Note for the Management Team)

Test-case research for proof of concept

Identify a small set of papers or projects as representative first cases; use these to test the system we are building in a concrete manner.


    In doing the above, we are also collecting a longer list of key papers, projects, authors, topics, issues, etc.

Steps taken

1. Post on the EA Forum (and in other places) promoting our call for papers, with a bounty (see the view at the bottom of this section).

    2. Search for most-cited papers (within our domain) among EA-aligned researchers and organizations.

3. Dig into existing lists, reviews, and syllabi, such as:

  • GPI's research agenda (includes many posed questions)

  • Open Philanthropy's "questions that might affect our grantmaking" (needs updating? few academic links)

4. Consider "Mistakes" pages (e.g., GiveWell's)? These mainly cover operational mistakes, so they are less relevant.

5. Appeal directly to authors and research groups.

6. Cross-promote with The EA Behavioral Science Newsletter.

Pivot: direct focus on NBER working papers

1. Pete Slattery: "Do a complete test run using a single paper and team…" Thus, we aim to identify a small set of papers (or projects), maybe 2–3, that seem like good test and example cases, and offer a bounty for projects we choose as test cases.

2. Note that much of the work identified here has already been peer-reviewed and "published." While we envision that The Unjournal may assess papers that are already published in traditional journals, these are probably not the best cases for the proof of concept. Thus, we de-prioritize them for now.

We are also considering ways to integrate evaluation details as 'collaborative annotations' (via Hypothes.is) into the hosted papers.

Syllabi: Pablo's list; an economics-focused list; David Rhys-Bernard's syllabus (link to a commented/highlighted version).
Related links:

  • What “pivotal” and useful research ... would you like to see assessed? (Bounty for suggestions) — EA Forum
  • The twelve-month plan
  • Unjournal: Call for participants and research — EA Forum

Action and progress

The steps we've taken and our plans; needs updating.

This page and its sub-pages await updating.

See also Plan of action.

See also Updates (earlier).

Gantt Chart of next steps

18 Jun 2023: This needs updating.

  • Management: updates and CTA in Gdoc shared in emails

  • Initial evaluations; feedback on the process

  • Revise process; further set of evaluations

  • Disseminate and communicate (research, evaluations, processes); get further public feedback

  • Further funding; prepare for scaling up

Sub-pages:

  • Pilot: Building a founding committee
  • Pilot: Identifying key research
  • Pilot: Setting up platforms