See the vision and broad plan presented here (and embedded below), updated August 2023.
Pilot targets
What we need our pilot (~12 months) to demonstrate
We actually "do something."
We can provide credible reviews and ratings that have value as measures of research quality comparable to (or better than) traditional journal systems.
We identify important work that informs global priorities.
We boost work in innovative, transparent, and replicable formats (especially dynamic documents).
Authors engage with our process and find it useful.
(As a stretch goal) Universities, grantmakers, and other arbiters assign value to Unjournal ratings.
Status: Mostly completed/decided for pilot phase; will review after initial trial
Rules for reviews/assessments
To be done on the chosen open platform (Kotahi/Sciety) unless this proves infeasible (10 Dec 2022 update)
Share, advertise, and promote this; hold efficient meetings and presentations
Establish links to all open-access bibliometric initiatives (to the extent feasible)
Harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, to the coin-based ResearchHub, and to similar platforms.
Build a "founding committee" of 5–8 experienced and enthusiastic EA-aligned/adjacent researchers at EA orgs, research academics, and practitioners (e.g., draw from speakers at recent EA Global meetings).
Host a meeting (and a shared collaboration space/document) to come to a consensus on a set of practical principles.
Post and present our consensus (coming out of this meeting) on key fora. After a brief "follow-up period" (~1 week), consider adjusting the above consensus plan in light of feedback, and repost (and move forward).
Set up the basic platforms for posting and administering reviews and evaluations, and for offering curated links and categorizations of papers and projects. Note: I am strongly leaning towards https://prereview.org/ as the main platform, which has indicated willingness to give us a flexible ‘experimental space’. Update: Kotahi/Sciety now seems the more flexible solution.
Reach out to researchers in relevant areas and organizations and ask them to "submit" their work for "feedback and potential positive evaluations and recognition," and for a chance at a prize. The Unjournal will not be an exclusive outlet. Researchers are free to also submit the same work to 'traditional journals' at any point. However, whether submitted elsewhere or not, papers accepted by The Unjournal must be publicly hosted, with a DOI. Ideally the whole project is maintained and updated, with all materials, in a single location.
21 Sep 2022 status: Steps 1–3 mostly completed. We have a good working and management group. We have decided on a platform and are configuring it, with an interim workaround in place. We have reached out to researchers and organizations and received some good responses, but we need more channels to disseminate and advertise this. We have identified and are engaging with four papers for the initial pilot. We aim to put out a larger prize-driven call soon and take in about 10 more papers or projects.
The approach below is largely integrated into the Unjournal proposal; it is also a suggestion for how organizations like RP might get feedback and boost credibility:
Host the article (or dynamic research project or 'registered report') on OSF or another platform that provides time stamping and DOIs (see my resources list in Airtable for a start; a DOI-check sketch follows this list)
Link this to PREreview (or similar tool or site) to solicit feedback and evaluation without requiring exclusive publication rights (again, see Airtable list)
Directly solicit feedback from EA-adjacent partners in academia and other EA-research orgs
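To make the hosting requirement concrete, below is a minimal sketch (Python) of the kind of check an editor could run to confirm a submission's DOI is registered and time-stamped. The Crossref REST endpoint (https://api.crossref.org/works/) is real; the helper name, the placeholder DOI, and the particular fields surfaced are illustrative assumptions.

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def check_doi(doi: str) -> dict:
    """Return basic registration metadata for a DOI (raises if unregistered)."""
    resp = requests.get(CROSSREF_WORKS + doi, timeout=30)
    resp.raise_for_status()  # a 404 here means Crossref has no record of this DOI
    msg = resp.json()["message"]
    return {
        "title": (msg.get("title") or ["<untitled>"])[0],
        "registered": msg.get("created", {}).get("date-time"),  # registration time stamp
        "resolves_to": msg.get("URL"),  # the URL the DOI resolves to
    }

# Hypothetical usage with a placeholder OSF-style DOI:
# print(check_doi("10.31219/osf.io/xxxxx"))
```

Note that a DOI registered with DataCite rather than Crossref would need the analogous DataCite lookup instead.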
Next steps towards this approach:
Build our own systems (assign "editors") to do this without bias and with incentives
Build standard metrics for interpreting these reviews (possibly incorporating prediction markets); see the aggregation sketch after this list
Encourage these reviewers to leave their feedback through PREreview or another platform
Also: Commit to publishing academic reviews, or share them in our internal group for further evaluation, reassessment, or benchmarking against the ‘PREreview’-type reviews above (perhaps taking the relevant FreeOurKnowledge pledge).
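As one illustration of a standard metric (the sketch referenced above): suppose each evaluator gives a 0–100 rating plus a 90% interval, and we combine evaluators by inverse-variance weighting, so more confident evaluators count for more. The scale, the interval interpretation, and the weighting rule are all illustrative assumptions, not a settled Unjournal methodology.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    midpoint: float  # evaluator's best-guess rating on a 0-100 scale
    lower: float     # lower bound of their 90% interval
    upper: float     # upper bound of their 90% interval

def aggregate(ratings: list[Rating]) -> tuple[float, float]:
    """Inverse-variance-weighted mean of the ratings, with its standard error."""
    weighted_sum, total_weight = 0.0, 0.0
    for r in ratings:
        # Treat a 90% interval as roughly +/- 1.645 standard deviations.
        sd = max((r.upper - r.lower) / (2 * 1.645), 1e-6)
        weight = 1.0 / sd**2
        weighted_sum += weight * r.midpoint
        total_weight += weight
    return weighted_sum / total_weight, (1.0 / total_weight) ** 0.5

# A confident 80 and an uncertain 60 aggregate to roughly 79:
print(aggregate([Rating(80, 75, 85), Rating(60, 40, 80)]))
```

Prediction-market or Metaculus-style forecasts could enter the same pipeline as additional "evaluators" with their own intervals.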
Status: We are still working with Google Docs and building an external survey interface. We plan to integrate this with PubPub over the coming months (August/September 2023).
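For concreteness, here is a minimal sketch of the kind of structured record such a survey interface might collect per evaluation before syncing to PubPub. Every field name here is a hypothetical placeholder, not PubPub's or The Unjournal's actual schema.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class EvaluationRecord:
    paper_doi: str        # DOI of the evaluated work
    evaluator_id: str     # anonymized evaluator handle
    overall_rating: int   # overall quality score, 0-100
    ci_lower: int         # lower bound of the evaluator's 90% interval
    ci_upper: int         # upper bound of the evaluator's 90% interval
    written_review: str = ""  # free-text evaluation
    category_ratings: dict = field(default_factory=dict)  # e.g., {"methods": 70}

record = EvaluationRecord("10.31219/osf.io/xxxxx", "evaluator-01", 78, 70, 86)
print(json.dumps(asdict(record), indent=2))  # JSON payload ready for an API sync
```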