Plan of action
Building a 'best feasible plan'...
1. We actually 'do something'
2. We can provide credible reviews and ratings that have value as measures of research quality comparable to (or better than) traditional journal systems
3. We identify important work that informs global priorities
4. We boost work in innovative and transparent/replicable formats (especially dynamic documents)
5. Authors engage with our process and find it useful
6. (As a push) Universities, grantmakers, and other arbiters assign value to Unjournal ratings
Updated: Partial update 10 Dec 2022.
Define the broad scope of our research interest and key overriding principles; light-touch, to also be attractive to aligned academics. ✔ ⏳
Build "editorial-board-like" teams with subject/area expertise
⏳
Status: Mostly completed/decided for pilot phase
- Which projects enter the review system (relevance, minimal quality, stakeholders, any red lines or 'musts')
- How projects are to be submitted
- How reviewers are to be assigned and compensated
Status: Mostly completed/decided for pilot phase; will review after initial trial
- To be done on the chosen open platform (Kotahi/Sciety) unless otherwise infeasible (10 Dec 2022 update)
- Share, advertise, and promote this; hold efficient meetings and presentations
- Establish links to all open-access bibliometric initiatives (to the extent feasible)
- Harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, coin-based 'ResearchHub', etc.
Status: Mostly completed/decided for pilot phase; will review after the initial trial
The key elements of the plan:
Build a ‘founding committee’ of 5-8 experienced and enthusiastic EA-aligned/adjacent researchers at EA orgs, research academics, and practitioners (e.g., draw from speakers at recent EA Global meetings).
1. Host a meeting (and shared collaboration space/document) to come to a consensus/set of practical principles.
2. Post and present our consensus (coming out of this meeting) on key fora. After a brief 'follow-up period' (~1 week), consider adjusting the above consensus plan in light of the feedback, then repost (and move forward).
3. Set up the basic platforms for posting and administering reviews and evaluations, and for offering curated links and categorizations of papers and projects. Note: I am strongly leaning towards https://prereview.org/ as the main platform, which has indicated willingness to give us a flexible 'experimental space'. Update: Kotahi/Sciety seems a more flexible solution.
4. Reach out to researchers in relevant areas and organizations and ask them to 'submit' their work for 'feedback and potential positive evaluations and recognition', and for a chance at a prize. The Unjournal will not be an exclusive outlet: researchers are free to also submit the same work to 'traditional journals' at any point. Their work must be publicly hosted, with a DOI (a minimal intake-check sketch follows this list). Ideally the 'whole project' is maintained and updated, with all materials, in a single location.

21 Sep 2022 status: steps 1–3 mostly completed. We have a good working and management group. We have decided on a platform and are configuring it, with an interim workaround in place. We have reached out to researchers/orgs and received some good responses, but we need to find more platforms to disseminate and advertise this. We have identified and are engaging with four papers for the initial piloting. We aim to put out a larger prize-driven call soon and take in about ten more papers/projects.
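As a purely illustrative aside on the 'publicly hosted, with a DOI' requirement in step 4: below is a minimal sketch of how intake could check that a submission's DOI resolves and pull basic citation metadata via doi.org content negotiation. The helper name, the submission record, and the placeholder DOI are assumptions for illustration; this is not existing Unjournal tooling.

```python
# Illustrative sketch only (not existing Unjournal tooling): verify that a
# submitted work has a resolvable DOI and fetch basic citation metadata
# via DOI content negotiation (CSL-JSON from doi.org).
import requests


def fetch_doi_metadata(doi: str) -> dict | None:
    """Return CSL-JSON metadata for `doi`, or None if the DOI does not resolve."""
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=10,
    )
    if resp.status_code != 200:
        return None
    return resp.json()


if __name__ == "__main__":
    # Hypothetical submission record; the DOI below is a placeholder, not a real work.
    submission = {"title": "Example working paper", "doi": "10.1234/placeholder-doi"}
    meta = fetch_doi_metadata(submission["doi"])
    if meta is None:
        print("DOI does not resolve; ask the author to archive the work (e.g., on OSF).")
    else:
        print("DOI resolves:", meta.get("title"), "|", meta.get("publisher"))
```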
The approach below is largely integrated into the Unjournal proposal, but it is also a suggestion for how organizations like RP might consider how to get feedback and boost credibility.
1. Host the article (or dynamic research project or 'registered report') on OSF or another place allowing time-stamping and DOIs (see my resources list in Airtable for a start).
2. Link this to PREreview or similar tools/sites soliciting feedback and evaluation without requiring exclusive publication rights (again, see the Airtable list).
3. Directly solicit feedback from EA-adjacent partners in academia and other EA research orgs:
- We need to build our own systems (assigning 'editors') to do this without bias and with the right incentives,
- build standard metrics for interpreting these reviews (possibly incorporating prediction markets, etc.; a hedged sketch follows below), and
- encourage these reviewers to leave their feedback through PREreview or another platform.
Also: committing to publish academic reviews or 'share in our internal group' for further evaluation and reassessment/benchmarking of the 'PREreview'-type reviews above (perhaps taking the FreeOurKnowledge pledge relating to this).
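To make 'standard metrics for interpreting these reviews' more concrete, here is a hedged sketch of one way individual evaluator ratings and their stated uncertainty might be pooled into a single comparable score. The 0–100 scale, the inverse-width weighting, and the names (`Rating`, `pooled_score`) are illustrative assumptions, not an adopted Unjournal methodology.

```python
# Illustrative sketch only: one way to aggregate evaluator ratings into a
# summary "quality" metric with an uncertainty-weighted mean. The 0-100 scale,
# the inverse-width weights, and all field names are assumptions, not an
# adopted Unjournal standard.
from dataclasses import dataclass


@dataclass
class Rating:
    evaluator: str
    score: float        # overall quality rating on an assumed 0-100 scale
    ci_width: float     # evaluator's stated 90% credible-interval width


def pooled_score(ratings: list[Rating]) -> float:
    """Weighted mean of scores: tighter stated intervals count for more."""
    weights = [1.0 / max(r.ci_width, 1.0) for r in ratings]
    total = sum(weights)
    return sum(w * r.score for w, r in zip(weights, ratings)) / total


if __name__ == "__main__":
    ratings = [
        Rating("Reviewer A", score=78, ci_width=10),
        Rating("Reviewer B", score=85, ci_width=25),
        Rating("Reviewer C", score=62, ci_width=15),
    ]
    print(f"Pooled quality score: {pooled_score(ratings):.1f} / 100")
```

Whether weights should reflect stated uncertainty, reviewer track record, or accuracy on prediction platforms is exactly the kind of design choice the working group would need to settle.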