Passed on to LTFF and funding was awarded
A frozen version is available as a Dropbox Paper here
Start date = ~21 February 2022
The "Unjournal" will organize and fund 'public journal-independent evaluation’ of EA-relevant/adjacent research, encouraging this research by making it easier for academics and EA-organization researchers to get feedback and credible ratings.
Peer review is great, but academic publication processes are wasteful, slow, and rent-extracting, and they discourage innovation. (From my onscienceandacademia post.)
Academic publishers extract rents and discourage progress. But there is a coordination problem in ‘escaping’ this. Funders like Open Philanthropy and EA-affiliated researchers are not stuck; we can facilitate an exit.
The traditional binary ‘publish or reject’ system wastes resources (wasted effort and gamesmanship) and adds unnecessary risk. I propose an alternative, the “Evaluated Project Repo”: a system of credible evaluations, ratings, and published reviews (linked to an open research archive/curation). This will also enable more readable, reliable, and replicable research formats, such as dynamic documents; and allow research projects to continue to improve without “paper bloat”. (I also propose some ‘escape bridges’ from the current system.)
Global priorities and EA research organizations are looking for ‘feedback and quality control’, dissemination, and external credibility. We would gain substantial benefits from supporting and working with the Evaluated Project Repo (or related peer-evaluation systems), rather than (only) submitting our work to traditional journals. We should also place some direct value on the results of open science and open access, and on the strong impact we may have in supporting these.
I am asking for funding to help replace this system, with EA 'taking the lead'. My goal is permanent and openly-hosted research projects, and efficient journal-independent peer review, evaluation, and communication. (I have been discussing and presenting this idea publicly for roughly one year, and have received a great deal of feedback. I return to this in the next section.)
I propose the following 12-month Proof of Concept: Proposal for EA-aligned research 'unjournal' collaboration mechanism
1. Build a ‘founding committee’ of 5-8 experienced and enthusiastic EA-aligned/adjacent researchers at EA orgs, research academics, and practitioners (e.g., draw from speakers at recent EA Global meetings).
Update 1 Aug 2022: mainly DONE; to do: consult EAG speakers
I will publicly share my procedure for choosing this group. (In the long run we will aim at a transparent and impartial process for choosing ‘editors and managers’, as well as at decentralized forms of evaluation and filtering.)
2. Host a meeting (and shared collaboration space/document), to come to a consensus/set of principles on
A cluster of EA-relevant research areas we want to start with
A simple outreach strategy
How we determine which work is 'EA-interesting’
How we will choose ‘reviewers’ and avoid conflicts-of-interest
How we will evaluate, rate, rank, and give feedback on work
The platforms we will work with
How to promote and communicate the research work (to academics, policymakers, and the EA community)
Update 1 Aug 2022: 2 meetings so far; agreed on going-forward policies for most of the above
3. Post and present our consensus (on various fora, especially in the EA, Open Science, and relevant academic communities, as well as via proactive interviews with key players). Solicit feedback. Have a brief ‘follow-up period’ (1 week) to consider adjusting the above consensus plan in light of the feedback.
Update 1 Aug 2022: Done somewhat; waiting to have 2+ papers assessed before we engage more
4. Set up the basic platforms, links
Note: I am strongly leaning towards https://prereview.org/ as the main platform, which has indicated willingness to give us a flexible ‘experimental space’.
Update 1 Aug 2022: Going with Kotahi and Sciety as a start; partially setup
5. Reach out to researchers in relevant areas and organizations and ask them to 'submit' their work for 'feedback and potential positive evaluations and recognition', and for a chance at a prize.
The unjournal will *not be an exclusive outlet.* Researchers are free to also submit the same work to 'traditional journals' at any point.
Their work must be publicly hosted, with a DOI. Ideally the 'whole project' is maintained and updated, with all materials, in a single location. We can help them host their work and obtain DOIs through (e.g.) Zenodo; even hosted 'dynamic documents' can be assigned DOIs.
Update 1 Aug 2022: Did a 'bounty' and some searching of our own; plan a 'big public call' after pilot evaluations of 2+ papers
Researchers are encouraged to write and present work in 'reasoning transparent' (as well as 'open science' transparent) ways. They are encouraged to make connections with core EA ideas and frameworks, but without being too heavy-handed. Essentially, we are asking them to connect their research to 'the present and future welfare of humanity/sentient beings'.
Reviews will, by default, be made public and connected with the paper. However, our committee will discuss (i) whether/when authors are allowed to withdraw/hide their work, and (ii) when reviews will be ‘signed’ vs. anonymous. In my conversations with researchers, some have been reluctant to ‘put themselves out there for public criticism’, while others seem more OK with this. We aim to have roughly 25 research papers/projects reviewed/evaluated and 'communicated' (to EA audiences) in the first year.
Update July 2022: scaled back to 15 papers
My suggestions on the above, as a starting point...
Given my own background, I would lean towards ‘empirical social science’ (including Economics) and impact evaluation and measurement (especially for ‘effective charitable interventions’)
Administration should be light-touch, to also be attractive to aligned academics
We should build "editorial-board-like" teams with subject/area expertise
We should pay reviewers for their work (I propose $250 for 5 hours of quality reviewing work)
Create a set of rules for 'submission and management': which projects enter the review system (relevance, minimal quality, stakeholders, any red lines or 'musts'); how projects are to be submitted (see above, but let's be flexible); and how reviewers are to be assigned and compensated (or 'given equivalent credit')
Rules for reviews/assessments
Reviews to be done on the chosen open platform (likely Prereview) unless otherwise infeasible
Share, advertise, promote the reviewed work
Establish links to all open-access bibliometric initiatives to the extent feasible
Each research paper/project should be introduced in at least one EA Forum post
Laying these out: I have responses to some of these; others will require further consideration.
Will researchers find it useful to submit/share their work? From my experience (i) as an academic economist and (ii) working at Rethink Priorities, and from my conversations with peers, I think people would find this very useful. I would have found it useful (and still would).
i. FEEDBACK IS GOLD: It is very difficult to get anyone to actually read your paper, and to get actual useful feedback on your work. The incentive is to publish, not to read; papers are dense and require specific knowledge; people may be reluctant to criticize peers; and economists tend to be free-riders. It is hard to engage seminar audiences on the more detailed aspects of the work, and then one gets feedback on the ‘presentation’ not the ‘paper’. We often use ‘submission to a journal’ as a way to get feedback, but this is slow, not the intended use of the journal process (I’ve been told), and often results in less-useful feedback. (A common perception is that the referee ‘decides what decision to make and then fleshes out a report to justify it’.)
ii. ACADEMICS NEED SOURCES OF TIMELY VALIDATION: The publication process is extremely slow and complicated in Economics (and other fields, in my experience), requiring years of submissions and responses to multiple journals. This imposes a lot of risk on an academic’s career, particularly pre-tenure. Having an additional credible source validating the strength of one’s work could help reduce this risk. If we do this right, I think hiring and tenure committees would consider it an important source of quality information.
iii. EA ORGS/FUNDERS need both, but the traditional journal process is costly in time and hassle. I think researchers and research managers at RP would be very happy to get feedback through this, as well as an assessment of the quality of their work and suggestions for alternative methods and approaches. We would also benefit from external signals of the quality of our work, in justifying it to funders such as Open Philanthropy. (OP themselves would value this greatly, I believe. They are developing their own systems for assessing the quality of their funded work, but I expect they would prefer an external source.) However, it is costly for us at RP to submit to academic journals: the process is slow, bureaucratic, and noisy, and traditional journals will typically not evaluate work with EA priorities and frameworks in mind. (Note that I suggest the Unjournal make these priorities a factor while also assessing the work’s rigor in ways that reflect justifiable concerns in academic disciplines.)
I assume that similar concerns apply to other EA research organizations.
iv. OPEN SCIENCE AND DYNAMIC FORMATS
IMO the best and most transparent way to present data-driven work (as well as much quantitative work) is in a dynamic document, where narrative, code, and results are presented in concert. Readers can ‘unfold for further details’. The precise reasoning, data, and generation of each result can be traced. These documents can also be updated and improved over time. Many researchers, particularly those involved in Open Science, find this the most attractive way to work and to present their work. However, the ‘frozen pdf prison’ and ‘use our bureaucratic system’ approaches of traditional journals make this very difficult to use. As the ‘unjournal’ does not host papers, but merely assesses work with DOIs (which can refer to, e.g., a hosted web page, frozen at a particular point in time for review), we can facilitate this.

Will researchers find it ‘safe’ to share their work?
A large share of economists and academics tend to be conservative, risk-averse, and leader-following. But there are important exceptions, as well as substantial groups that seek to be particularly innovative and iconoclastic.
The key concerns we will need to address (at least for some researchers): (i) Will my work be ‘trashed publicly in a way that hurts my reputation’? I think this concern is stronger for early-career researchers; more experienced researchers will have a thicker skin and realize that it is common knowledge that some people disagree with their approaches. (ii) Will this tag me as ‘weird or non-academic’? This might be addressed by our making connections to academic bodies and established researchers.

How to get quality reviews and avoid slacking/free-riding by reviewers? Ideas:
compensation and rewarding quality as an incentive,
recruiting reviewers who seem to have intrinsic motivations,
publishing some ‘signed’ reviews (but there are tradeoffs here as we want to avoid flattery)
longer run: an integrated system of ‘rating the reviews’, a la StackExchange (I know there are some innovations in process here that we’d love to link with)
QUANTIFY and CALIBRATE
We will ask referees to give a set of quantitative ratings in addition to their detailed feedback and discussion. These ratings should be stated explicitly relative to other work they have seen, both within the Unjournal and in general. Referees might be encouraged to ‘calibrate’: first being given a set of (previously traditionally-published) papers to rank and rate. They should later be reminded of the distribution of the evaluations they have given.
Within our system, evaluations themselves could be stated ‘relative to the other evaluations given by the same referee.’
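As an illustration only (not a settled part of the proposal), here is a minimal sketch of what ‘relative to the other evaluations given by the same referee’ could mean in practice, assuming ratings are stored as simple (referee, paper, score) records; the field names, 0-100 scale, and numbers are invented for this example.

```python
# Minimal sketch (illustrative only, not settled policy): re-express each
# referee's ratings relative to their own distribution, so that a "70" from
# a harsh referee and a "70" from a generous one are not treated as equal.
# Field names and the 0-100 scale are assumptions for this example.
from collections import defaultdict
from statistics import mean, pstdev

ratings = [
    # (referee_id, paper_id, overall score on an assumed 0-100 scale)
    ("ref_A", "paper_1", 80),
    ("ref_A", "paper_2", 60),
    ("ref_A", "paper_3", 70),
    ("ref_B", "paper_1", 95),
    ("ref_B", "paper_2", 90),
    ("ref_B", "paper_3", 85),
]

scores_by_referee = defaultdict(list)
for referee, _, score in ratings:
    scores_by_referee[referee].append(score)

def relative_score(referee: str, score: float) -> float:
    """Z-score of one rating within the same referee's own set of ratings."""
    scores = scores_by_referee[referee]
    spread = pstdev(scores)
    if spread == 0:
        return 0.0  # referee gave identical scores; no spread to compare against
    return (score - mean(scores)) / spread

for referee, paper, score in ratings:
    print(f"{referee} on {paper}: raw={score}, relative={relative_score(referee, score):+.2f}")
```

Here the two referees rank the papers identically once their different baselines are removed; the point is only that some such within-referee normalization is straightforward to compute from stored ratings.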
BENCHMARK: We will also encourage or require referees to provide a predicted/equivalent “traditional publication outcome”, and possibly incentivize these predictions. (We could also consider running public prediction markets on this in the longer run, as has been done in other contexts.) This should be systematized. It could be stated as: “this project is of sufficient quality that it has a 25% probability of being published in a journal of the rated quality of Nature, and a 50% probability of being published in a journal such as the Journal of Public Economics or better, within the next 3 years.” (We can also elicit statements about the impact factor, etc.)
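Purely as an illustration (the scoring rule and all numbers below are my own assumptions, not commitments of the proposal), a minimal sketch of how such elicited publication probabilities could later be checked against realized outcomes, using a simple Brier score and a coarse calibration table:

```python
# Minimal sketch (illustrative only): score referees' predicted "traditional
# publication outcome" probabilities against what actually happened, once the
# outcomes are known. The Brier score is one simple option; the predictions
# and outcomes below are invented for the example.

predictions = [
    # (paper_id, predicted probability of publication in a journal of the
    #  stated tier within 3 years, realized outcome: 1 = published, 0 = not)
    ("paper_1", 0.50, 1),
    ("paper_2", 0.25, 0),
    ("paper_3", 0.80, 1),
    ("paper_4", 0.60, 0),
]

def brier_score(preds) -> float:
    """Mean squared gap between predicted probability and realized outcome (lower is better)."""
    return sum((p - outcome) ** 2 for _, p, outcome in preds) / len(preds)

print(f"Brier score over {len(predictions)} predictions: {brier_score(predictions):.3f}")

# A coarse calibration check: within each probability band, did roughly that
# share of papers actually end up published?
bands = {"low (<0.4)": [], "mid (0.4-0.7)": [], "high (>0.7)": []}
for _, prob, outcome in predictions:
    band = "low (<0.4)" if prob < 0.4 else ("mid (0.4-0.7)" if prob <= 0.7 else "high (>0.7)")
    bands[band].append(outcome)

for band, outcomes in bands.items():
    if outcomes:
        print(f"{band}: realized publication rate = {sum(outcomes) / len(outcomes):.2f} (n={len(outcomes)})")
```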
I expect most/many academics who submit their work will also submit it to traditional journals, at least in the first year or so of this project (though ultimately we hope this 'unjournal' system of journal-independent evaluation provides a signal of quality that will supersede The Musty Old Journal). This will thus provide a way to validate the above predictions, as well as to independently establish a connection between our ratings and the ‘traditional’ outcomes.

PRIZE as a powerful signal/scarce commodity

The “prize for best submissions” (perhaps a graded monetary prize for the top 5 submissions in the first year) will provide a commitment device and a credible signal, to enhance the attractiveness and prestige of this.
We may try to harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, the coin-based 'ResearchHub', etc.
Will the evaluations be valued by gatekeepers (universities, grantmakers, etc.) and policy-makers? This will ultimately depend on the credibility factors mentioned above. I expect they will have value to EA and open-science-oriented grantmakers fairly soon, especially if the publicly-posted reviews are of a high apparent quality.
I expect academia to take longer to come on board. In the medium run they are likely to value it as ‘a factor in career decisions’ (though not as much as a traditional journal publication), particularly if our Unjournal finds participation and partnership with credible established organizations and prominent researchers.
I am optimistic because of my impression that non-traditional-journal outcomes (arXiv and impact factors, conference papers, cross-journal outlets, distill.pub) are becoming the source of value in several important disciplines.

How will we choose referees? How to avoid conflicts of interest (and the perception of this)?
This is an important issue. I believe there are ‘pretty good’ established protocols for this. I’d like to build specific prescribed rules for doing this and make them transparent. We may be able to leverage tools, e.g., those involving GPT-3, like elicit.org.
COI: We should partition the space of potential researchers and reviewers, and/or establish ‘distance measures’ (which may themselves be reported along with the review). There should be specified rules, e.g., ‘no one from the same organization, or from an organization that is partnering with the author’s organization’. Ideally, EA-org researchers’ work should be reviewed by academic researchers, and to some extent vice versa.
How to support EA ideas, frameworks, and priorities while maintaining (actual and perceived) objectivity and academic rigor
(Needs discussion)
Why hasn’t this been done before? I believe it involves a collective action problem, as well as a coordination/lock-in problem, that can be solved by bringing together the compatible interests of two groups. Academic researchers have expertise and credibility, but they are locked into traditional and inefficient systems. EA organizations/researchers have a direct interest in feedback and in fostering this research; they have some funding and are not locked into traditional systems.
Yonatan Cale restating my claim:
Every Econ researcher (interested in publishing) pays a price for having the system set up badly. The price isn't high enough for any one researcher to have an incentive to fix the system for themselves, but as a group they would be very happy if someone fixed this systematic problem (and they would in theory be willing to "pay" for it, because the price of "fixing the system" is way lower than the sum of the prices that each of them pays individually).
‘Sustainability’ … Who will pay for these reviews in the longer run?
Once this catches on… universities will pay to support this (they will save massively on journal subscriptions); governments supporting Open Science will fund it; authors/research orgs will pay a reasonable submission fee to partly/fully cover the cost of the reviews; and EA-aligned research funders will support it.
But we need to show a proof-of-concept and build credibility. The ACX grant funds can help make this happen.
My CV (https://daaronr.github.io/markdown-cv/) should make this clear.
I have been an academic economist for 15-20 years, and I have been deeply involved in the research and publication process, with particular interests in open science and dynamic documents (PhD, UC Berkeley; Lecturer, University of Essex; Senior Lecturer, University of Exeter). My research has mainly been in Economics, but has also involved other disciplines (especially Psychology).
I’m a Senior Economist at Rethink Priorities, where I’ve worked for the past year, engaging with a range of researchers and practitioners at RP and other EA groups.
My research has involved EA-relevant themes since the latter part of my PhD. I’ve been actively involved with the EA community since about 2016, when I received a series of ESRC ‘impact grants’ for the innovationsinfundraising.org and giveifyouwin.org projects, working with George Howlett and the CEA.
I’ve been considering and discussing this proposal for many years with colleagues in Economics and other fields, and presenting it publicly and soliciting feedback over the past year— mainly through https://bit.ly/unjournal, social media, EA and open science Slack groups and conferences (presenting this at a GPI lunch and at the COS/Metascience conference, as well as in an EA Forum post and the onscienceandacademia post mentioned above).
I have had long 1-1 conversations on this idea with a range of knowledgeable and relevant EAs, academics, open-science practitioners, and technical/software developers, including:
Cooper Smout, head of https://freeourknowledge.org/, which I’d like to ally with (through their pledges, and through an open-access journal Cooper is putting together, which the Unjournal could feed into, for researchers needing a ‘journal with an impact factor’)
Participants in the GPI seminar luncheon
Daniela Saderi of PreReview
Paolo Crosetto (Experimental Economics, French National Research Institute for Agriculture, Food and Environment) https://paolocrosetto.wordpress.com/
Cecilia Tilli, Foundation to Prevent Antibiotics Resistance and EA research advocate
Sergey Frolov (Physicist), Prof. J.-S. Caux, Physicist and head of https://scipost.org/
Peter Slattery, Behaviourworks Australia
Alex Barnes, Business Systems Analyst, https://eahub.org/profile/alex-barnes/
Gavin Taylor and Paola Masuzzo of IGDORE (biologists and advocates of open science)
William Sleegers (Psychologist and Data Scientist, Rethink Priorities)
Nathan Young https://eahub.org/profile/nathan-young/
Edo Arad https://eahub.org/profile/edo-arad/ (mathematician and EA research advocate)
Hamish Huggard (Data science, ‘literature maps’)
Yonatan Cale, who helped me put this proposal together by asking a range of challenging questions and offering his feedback. https://il.linkedin.com/in/yonatancale
My online CV (https://daaronr.github.io/markdown-cv/) has links to almost everything else. I am @givingtools on Twitter and david_reinstein on the EA Forum; see my post on this: https://forum.effectivealtruism.org/posts/Z2jPENrHpY9QSQBDQ/proposal-alternative-to-traditional-academic-journals-for-ea. I read/discuss this on my podcast, e.g., see https://anchor.fm/david-reinstein/episodes/Journal-slaying-The-Evaluated-Project-Repo-aka-the-Unjournal--httpbit-lyunjournal-Future-EA-Forum-post-e149uc2
Feel free to give a simple number, a range, a complicated answer, or a list of what could be done with how much.
Over a roughly one-year ‘pilot’ period, I propose the following. Note that most of the costs will not be incurred in the event of the ‘failure modes’ I consider; e.g., if we can’t find qualified and relevant reviewers and authors, these payments will not be made.
$15k: Pay reviewers for their time for doing 50 reviews of 25 papers (2 each), at 250 USD per review (I presume this is 4-5 hours of concentrated work) --> 12,500 USD
$5k to find ways to ’buy off” 100 hours of my time (2-3 hours per week over some 50 weeks) to focus on managing the project, setting up rules/interfaces, choosing projects to review, assigning reviewers, etc. (I will do this either through paying my employer directly or by ‘buying time’ with delivery meals, Uber rides, etc.)
$5k to ‘buy off’ 100 hours of time from other ‘co-editors’ to help, and for a board to meet/review the initial operating principles
$5k: to hire about 100 hours of technical support for 1 year to help authors host and format their work, to tailor the ‘experimental’ space that PreReview has promised us, and potentially to work with the EA Forum and other interfaces
$2.5k: Hire clerical/writing/copy editing support as needed
$7.5k: rewards for ‘authors of the best papers/projects’ (e.g., 5 * 1000 USD … perhaps with a range of prizes) … and/or additional incentives for ‘best reviews’ (e.g., 5 * 250 USD)
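For reference, summing the headline figures above (and assuming the items do not overlap; as noted, much of this would be unspent or returned under the failure modes described):

15,000 + 5,000 + 5,000 + 5,000 + 2,500 + 7,500 = 40,000 USD

i.e., roughly $40k for the pilot year.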
We have an action plan (mainly for EA organizations) and a workspace in the GitBook here: https://app.gitbook.com/o/-MfFk4CTSGwVOPkwnRgx/s/-MkORcaM5xGxmrnczq25/ This also nests several essays discussing the idea, including the collaborative document (with many comments and suggestions) at https://bit.ly/unjournal
Most of the measures of ‘small success’ are scalable; the funds I am asking for (referee payments, some of my time, etc.) will not be spent / will be returned to you if we do not receive quality submissions and commitments to review and assist in the management.
My own forecasts (I’ve done some calibration training, but these are somewhat off-the-cuff): 80% that we will find relevant authors and referees, and that this will be a useful resource for improving and assessing the credibility of EA-relevant research
60% that we will get the academic world substantially involved in such a way that it becomes reasonably well known, and quality academic researchers are asking to ‘submit their work’ to this without our soliciting their work.
50% that this becomes among the top/major ways that EA-aligned research organizations seek feedback on their work (and the work that they fund — see OpenPhil), and a partial alternative to academic publication
10-25% that this becomes a substantial alternative to traditional publication (or is at the core of such a sea-change) in important academic fields and sub-fields within the next 1-3 years. (This estimate is low in part because I am fairly confident a system along these lines will replace the traditional journal, but less confident that it will happen so soon, and still less confident that my particular work on this will be at the center of it.)
Yes
Yes
Rethink Priorities will act as fiscal sponsor for this, to help administer payments. They will also receive $5,000 to cover roughly two hours/week of Reinstein's time on this project.
Administering payments to referees, researchers, etc.
We will need to make small payments to (say) 20–50 different referees, 5–10 committee members and "editorial managers," 5–10 research prize winners, as well as clerical and IT assistants.
LTFF:
Please let us know how you would like your grant communicated on the ACX blog, e.g., if you'd like Scott to recommend that readers help you in some way (see this post for examples).
See #acx-and-ltff-media
Peter Slattery: on EA Forum, fork moved to .
Other comments, especially post-grant, in this Gdoc discussion space (embedded below) will be integrated back.