I (David Reinstein) am an economist who left UK academia after 15 years to pursue a range of projects (see my web page). One of these is The Unjournal:
The Unjournal (with funding from the Long Term Future Fund and the Survival and Flourishing Fund) organizes and funds public-journal-independent feedback and evaluation, paying reviewers for their work. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
We are looking for your involvement...
We want researchers who are interested in doing evaluation work for The Unjournal. We pay evaluators for each evaluation, and we award monetary prizes for the strongest work. Right now we are particularly looking for economists and people with quantitative and policy-evaluation skills. We describe what we are asking evaluators to do here: essentially a regular peer review with some different emphases, plus a set of quantitative ratings and predictions. Your evaluation content will be made public (and receive a DOI, etc.), but you can choose whether or not to remain anonymous.
To sign up to be part of the pool of evaluators or to get involved in The Unjournal project in other ways, please fill out this brief form or email contact@unjournal.org.
We welcome suggestions for particularly impactful research that would benefit from (further) public evaluation. We choose research for public evaluation based on an initial assessment of methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch these criteria here, and discuss some potential examples here (see research we have chosen and evaluated at unjournal.pubpub.org, and a larger list of research we're considering here).
If you have research—your own or others'—that you would like us to assess, please fill out this form. You can submit your own work here (or by contacting contact@unjournal.org). Authors of evaluated papers will be eligible for our Impactful Research Prizes.
We are looking for both feedback on and involvement in The Unjournal project. Feel free to reach out at contact@unjournal.org.
View our data protection statement
A "curated guide" to this GitBook; updated June 2023
You can now ask questions of this GitBook using a chatbot: click the search bar or press cmd-k and choose "ask Gitbook."
For authors, evaluators, etc.
Writeups of the main points for a few different audiences
Important benefits of journal-independent public evaluation and The Unjournal's approach, with links to deeper commentary
How we choose papers/projects to evaluate, how we assign evaluators, and so on
Groups we work with; comparing approaches
What research are we talking about? What will we cover?
These are of more interest to people within our team; we are sharing these in the spirit of transparency.
A "best feasible plan" for going forward
Successful proposals (ACX, SFF), other applications, initiatives
Note: we have moved some of this "internal interest content" over to our Coda.io knowledge base.
Key resources and links for managers, advisory board members, staff, team members and others involved with The Unjournal project.
19 Feb 2024: We are not currently hiring, but we expect to do so in the future.
To indicate your potential interest in roles at The Unjournal, such as those described below, please fill out this quick survey form and link (or upload) your CV or webpage.
If you already filled out this form for a role that has changed titles, don’t worry. You will still be considered for relevant and related roles in the future.
If you add your name to this form, we may contact you to offer you the opportunity to do paid project work and paid work tasks.
Furthermore, if you are interested in conducting paid research evaluation for The Unjournal, or in joining our advisory board, please complete the form linked here.
Feel free to contact contact@unjournal.org with any questions.
Administration, operations and management roles
Research & operations-linked roles & projects
Standalone project: Impactful Research Scoping (temp. pause)
Express interest in any of these roles in our survey form.
The Unjournal, a not-for-profit collective under the umbrella and fiscal sponsorship of the Open Collective Foundation, is an equal-opportunity employer and contractor. We are committed to creating an inclusive environment for all employees, volunteers, and contractors. We do not discriminate on the basis of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetic information, disability, age, or veteran status.
See our data protection statement linked here.
In addition to the jobs and paid projects listed here, we are expanding our management team, advisory board, field specialist team pool, and evaluator pool. Most of these roles involve compensation/honorariums. See Advisory/team roles (research, management)
We are not a journal!
The Unjournal seeks to make rigorous research more impactful and impactful research more rigorous. We are a team of researchers, practitioners, and open science advocates led by David Reinstein.
The Unjournal encourages better research by making it easier for researchers to get feedback and credible ratings. We coordinate and fund public, journal-independent expert evaluation of publicly hosted research. We publish evaluations, ratings, manager summaries, author responses, and links to the evaluated research on our PubPub page.
As the name suggests, we are not a journal!
We work independently of traditional academic journals. We're building an open platform and a sustainable system for feedback, ratings, and assessment. We're currently focusing on quantitative work that informs global priorities.
How to get involved?
We're looking for research to evaluate, as well as evaluators. You can submit research here, or suggest research using this form. We offer financial prizes for suggesting research we end up evaluating. If you want to be an evaluator, apply here. You can use the same form to express your interest in joining our management team, advisory board, or reviewer pool. For more information, see our how to get involved guide.
Why The Unjournal? Peer review is great, but conventional academic publication processes are wasteful, slow, and rent-extracting. They discourage innovation and prompt researchers to focus more on "gaming the system" than on the quality of their research. We will provide an immediate alternative, and at the same time, offer a bridge to a more efficient, informative, useful, and transparent research evaluation system.
Does The Unjournal charge any fees?
No. We're a US-registered tax-exempt 501(c)(3) nonprofit, and we don't charge fees for anything. We compensate evaluators for their time and we even award prizes for strong research and evaluation work, in contrast to most traditional journals. We do so thanks to funding from the Long-Term Future Fund and Survival and Flourishing Fund.
At some point in the future, we might consider sliding-scale fees for people or organizations submitting their work for Unjournal evaluation, or for other services. If we do this, it would simply be a way to cover the compensation we pay evaluators and to cover our actual costs. Again, we are a nonprofit and we will stay that way.
Research submission/identification and selection: We identify, solicit, and select relevant research work hosted on any open platform in any format. Authors are encouraged to present their work in the ways they find most comprehensive and understandable. We support the use of dynamic documents and other formats that foster replicability and open science. (See: the benefits of dynamic docs.)
Paid evaluators (AKA "reviewers"): We compensate evaluators (essentially, reviewers) for providing thorough feedback on this work. (Read more: Why do we pay?)
Eliciting quantifiable and comparable metrics: We aim to establish and generate credible measures of research quality and usefulness. We benchmark these against traditional previous measures (such as journal tiers) and assess the reliability, consistency, and predictive power of these measures. (Read more: Why quantitative metrics?)
Public evaluation: We publish the evaluation packages (including reports, ratings, author responses, and manager summaries) on our PubPub community. Making evaluations public facilitates dialogue and broader engagement with the research.
Linking, not publishing: Our process is not "exclusive." Authors can submit their work to a journal (or other evaluation service) at any time. This approach also allows us to benchmark our evaluations against traditional publication outcomes.
Prizes: We award financial prizes and hold public events to recognize the most credible, impactful, useful, and insightful research, as well as strong engagement with our evaluation process.
Transparency: We aim for maximum transparency in our processes and judgments.
Academics and funders have complained about these problems for years, and continue to do so every day on social media. We're fairly confident our critiques of the traditional review and publication process will resonate with most readers.
So why haven't academia and the research community been able to move to something new? There is a difficult collective action problem. Individual researchers and universities find it risky to move unilaterally. But we believe we have a good chance of finally changing this model and moving to a better equilibrium. How? We will:
Take risks: Many members of The Unjournal management are not traditional academics; we can stick our necks out. We are also recruiting established senior academics who are less professionally vulnerable.
Bring in new interests, external funding, and incentives: There are a range of well-funded and powerful organizations—such as the Sloan Foundation and Open Philanthropy—with a strong inherent interest in high-impact research being reliable, robust, and reasoning-transparent. This support can fundamentally shift existing incentive structures.
Allow less risky "bridging steps": As noted above, The Unjournal allows researchers to submit their work to traditional journals. In fact, this will provide a benchmark to help build our quantitative ratings and demonstrate their value.
Communicate with researchers and stakeholders to make our processes easy, clear, and useful to them.
Make our output useful, in the meantime: It may take years for university departments and grant funders to incorporate journal-independent evaluations as part of their metrics and reward systems. The Unjournal can be somewhat patient: our evaluation, rating, feedback, and communication are already providing a valuable service to authors, policymakers, and other researchers.
Leverage new technology: A new set of open-access and AI-powered tools makes what we are trying to do easier and more useful every day.
Reward early adopters with prizes and recognition: We can replace "fear of standing out" with "fear of missing out." In particular, authors and research institutions that commit to publicly engaging with evaluations and critiques of their work should be commended and rewarded. And we are doing this.
This GitBook is a knowledge base that supplements our main public page, unjournal.org. It serves as a platform to organize our ideas and resources and track our progress towards our dual objectives:
Making "peer evaluation and rating" of open projects into a standard high-status outcome in academia and research, specifically within economics and social sciences. This stands in contrast to the conventional binary choice of accepting or rejecting papers to be published as PDFs and other static formats.
Building a cohesive and efficient system for publishing, accruing credibility, and eliciting feedback for research aligned with effective altruism and global priorities. Our ultimate aim is to make rigorous research more impactful, and impactful research more rigorous.
See Content overview
The Unjournal (see our #in-a-nutshell) wants your involvement, help, and feedback. We offer rewards and strive to compensate people for their time and effort.
Join our team: Complete this form to apply for our...
Evaluator pool: to be eligible to be commissioned and paid to evaluate and rate research, mainly in quantitative social science and policy
Field specialist teams: help identify, prioritize, and manage research evaluation in a particular field or cause area
Management team or advisory board, to be part of our decision-making
Do an Independent Evaluation to build your portfolio, receive guidance, and be eligible for promotion and prizes. See details at Independent evaluations (trial).
Suggest "Pivotal questions" for us to focus on
Give us feedback: Is anything unclear? What could be improved? Email contact@unjournal.org. We will offer rewards for the most useful suggestions.
David Reinstein is the founder and co-director of The Unjournal. The organization is currently looking for field specialists and evaluators, as well as suggestions for relevant work for The Unjournal to evaluate.
The Unjournal team is building a system for credible, public, journal-independent feedback and evaluation of research.
We maintain an open call for participants for four different roles:
Management Committee members (involving honorariums for time spent)
Advisory Board members (no time commitment)
Field Specialists (who will often also be on the Advisory Board)
A pool of Evaluators (who will be paid for their time and their work; we also draw evaluators from outside this pool)
The roles are explained in more detail here. You can express your interest (and enter our database) here.
We will reach out to evaluators (a.k.a. "reviewers") on a case-by-case basis, as appropriate for each paper or project being assessed. This depends on expertise, the evaluator's interest, and the absence of conflicts of interest.
Time commitment: Case-by-case basis. For each evaluation, here are some guidelines for the amount of time to spend.
Compensation: We pay a minimum of $200 (updated Aug. 2024) for a prompt and complete evaluation, $400 for experienced evaluators. We offer additional prizes and incentives, and are committed to an average compensation of at least $450 per evaluator. See here for more details.
Who we are looking for: We are putting together a list of people interested in being an evaluator and doing paid referee work for The Unjournal. We generally prioritize the pool of evaluators who signed up for our database before reaching out more widely.
Interested? Please fill out this form (about 3–5 min, same form for all roles or involvement).
Ready to get started doing evaluations and building a track record? See our new Independent evaluations (trial) initiative, offering prizes and recognition for the best work. You can evaluate work in our public database, or suggest and evaluate work.
We are looking for high-quality, globally pivotal research projects to evaluate, particularly those embodying open science practices and innovative formats. We are putting out a call for relevant research. Please suggest research here. (We offer bounties and prizes for useful suggestions.) For details of what we are looking for, and some potential examples, see this post and accompanying links.
You can also put forward your own work.
Note: This is under continual refinement; see our policies for more details.
As of December 2023, the prizes below have been chosen and will be soon announced. We are also scheduling an event linked to this prize. However, we are preparing for even larger author and evaluator prizes for our next phase. Submit your research to The Unjournal or serve as an evaluator to be eligible for future prizes (details to be announced).
Submit your work to be eligible for our “Unjournal: Impactful Research Prize” and a range of other benefits including the opportunity for credible public evaluation and feedback.
First-prize winners will be awarded $, and runners-up will receive $1000.
Note: these are the minimum amounts; we will increase these if funding permits.
Prize winners will have the opportunity (but not the obligation) to present their work at an online seminar and prize ceremony co-hosted by The Unjournal, Rethink Priorities, and EAecon.
To be eligible for the prize, submit a link to your work for public evaluation here.
Please choose “new submission” and “Submit a URL instead.”
The latter link requires an ORCID ID; if you prefer, you can email your submission to contact@unjournal.org.
The Unjournal, with funding from the Long Term Future Fund and the Survival and Flourishing Fund, organizes and funds public-journal-independent feedback and evaluation. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation, and aim to expand this widely. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
We aim to publicly evaluate 15 papers (or projects) within our pilot year. This award will honor researchers doing robust, credible, transparent work with a global impact. We especially encourage the submission of research in "open" formats such as hosted dynamic documents (Quarto, R-markdown, Jupyter notebooks, etc.).
The research will be chosen by our management team for public evaluation by 2–3 carefully selected, paid reviewers based on an initial assessment of a paper's methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch out these criteria here.
All evaluations, including quantitative ratings, will be made public by default; however, we will consider "embargos" on this for researchers with sensitive career concerns (the linked form asks about this). Note that submitting your work to The Unjournal does not imply "publishing" it: you can submit it to any journal before, during, or after this process.
If we choose not to send your work out to reviewers, we will try to at least offer some brief private feedback.
All work evaluated by The Unjournal will be eligible for the prize. Engagement with The Unjournal, including responding to evaluator comments, will be a factor in determining the prize winners. We also have a slight preference for giving at least one of the awards to an early-career researcher, but this need not be determinative.
Our management team and advisory board will vote on the prize winners in light of the evaluations, with possible consultation of further external expertise.
Deadline: Extended until 5 December (to ensure eligibility).
Note: In a subsection below, Recap: submissions, we outline the basic requirements for submissions to The Unjournal.
The prize winners for The Unjournal's Impactful Research Prize were selected through a multi-step, collaborative process involving both the management team and the advisory board. The selection was guided by several criteria, including the quality and credibility of the research, its potential for real-world impact, and the authors' engagement with The Unjournal's evaluation process.
Initial Evaluation: All papers that were evaluated by The Unjournal were eligible for the prize. The discussion, evaluations, and ratings provided by external evaluators played a significant role in the initial shortlisting.
Management and Advisory Board Input: Members of the management committee and advisory board were encouraged to write brief statements about papers they found particularly prize-worthy.
Meeting and Consensus: A "prize committee" meeting was held with four volunteers from the management committee to discuss the shortlisted papers and reach a consensus. The committee considered both the papers and the content of the evaluations. Members of the committee allocated a total of 100 points among the 10 candidate papers; we used this to narrow the field to a shortlist of five papers (a simple sketch of this point-tallying appears below).
Point Voting: The above shortlist and the notes from the accompanying discussion were shared with all management committee and advisory board members. Everyone in this larger group was invited to allocate up to 100 points among the shortlisted papers (and asked to allocate fewer points if they were less familiar with the papers and evaluations).
Special Considerations: We decided that at least one of the winners had to be a paper submitted by the authors or one where the authors substantially engaged with The Unjournal's processes. However, this constraint did not prove binding. Early-career researchers were given a slight advantage in our consideration.
Final Selection: The first and second prizes were given to the papers with the first- and second-most points, respectively.
This comprehensive approach aimed to ensure that the prize winners were selected in a manner that was rigorous, fair, and transparent, reflecting the values and goals of The Unjournal.
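To make the point-allocation and tallying step concrete, here is a minimal sketch in Python. The voters, papers, and point allocations are entirely made up for illustration; this is not our actual voting data or software, just the arithmetic of summing allocated points and ranking papers.

```python
# Illustrative sketch only: hypothetical voters, papers, and point allocations.
# Each voter may allocate up to 100 points across the candidate papers;
# papers are then ranked by their point totals.
from collections import Counter

votes = {
    "voter_a": {"paper_1": 50, "paper_2": 30, "paper_3": 20},
    "voter_b": {"paper_2": 40, "paper_4": 40, "paper_5": 20},
    "voter_c": {"paper_1": 25, "paper_2": 25, "paper_3": 20, "paper_6": 30},
}

totals = Counter()
for allocation in votes.values():
    # Enforce the "up to 100 points per voter" rule described above.
    assert sum(allocation.values()) <= 100
    totals.update(allocation)

ranking = totals.most_common()                 # papers sorted by total points
shortlist = [paper for paper, _ in ranking[:5]]
first_prize, runner_up = ranking[0][0], ranking[1][0]

print("Shortlist:", shortlist)
print("First prize:", first_prize, "| Runner-up:", runner_up)
```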
The Unjournal was founded by David Reinstein, who maintains this wiki/GitBook and other resources.
The information below may be outdated.
See our team page at Unjournal.org for an updated view of our team members.
Team members can see more details in our Coda page.
(A note on terminology)
See description under .
David Reinstein, Founder and Co-director
, Interdisciplinary Researcher at ; Co-director
, Social Scientist and Associate Professor in the Guelph Institute of Development Studies and Department of Political Science at the University of Guelph, Canada
, Economics PhD student at the University of California, Merced
, Research Author at the Department of Psychology, (India)
, Senior Research Fellow, Institute of Environment & Sustainability, Lee Kuan Yew School of Public Policy, National University of Singapore
, Research Scientist (fellow) at (South Africa)
, Research Author at the Department of Economics at (India)
Nov. 2023 update: We have paused this process to focus on our other open positions. We hope to come back to hiring researchers to implement these projects soon.
We are planning to hire 3–7 researchers for a one-off paid project.
There are two opportunities: Contracted Research (CR) and Independent Projects (IP).
Project Outline
What specific research themes in economics, policy, and social science are most important for global priorities?
What projects and papers are most in need of further in-depth public evaluation, attention, and scrutiny?
Where does "Unjournal-style evaluation" have the potential to be one of the most impactful uses of time and money? By impactful, we mean in terms of some global conception of value (e.g., the well-being of living things, the survival of human values, etc.).
This is an initiative that aims to identify, summarize, and conduct an in-depth evaluation of the most impactful themes in economics, policy, and social science to answer the above questions. Through a systematic review of selected papers and potential follow-up with authors and evaluators, this project will enhance the visibility, understanding, and scrutiny of high-value research, fostering both rigorous and impactful scholarship.
Contracted Research (CR): This is the main opportunity, a unique chance to contribute to the identification and in-depth evaluation of impactful research themes in economics, policy, and social science. We're looking for researchers and research users who can commit to a one-off 15–20 hours of work. CR candidates will:
Summarize a research area or theme, its status, and why it may be relevant to global priorities (~4 hours).
We are looking for fairly narrow themes. Examples might include:
The impact of mental health therapy on well-being in low-income countries.
The impact of cage-free egg regulation on animal welfare.
Public attitudes towards AI safety regulation.
Identify a selection of papers in this area that might be high-value for UJ evaluation (~3 hours).
Choose at least four of these from NBER/"top-10 working paper" series (or from work submitted to The Unjournal – we can share this – or from work where the author has expressed interest to you).
For a single paper, or a small set of these papers (or projects) (~6 hours):
Read the paper fairly carefully and summarize it, explaining why it is particularly relevant.
Discuss one or more aspects of the paper that need further scrutiny or evaluation.
Identify 3 possible evaluators, and explain why they might be particularly relevant to evaluate this work. (Give a few sentences we could use in an email to these evaluators).
Possible follow-up task: email and correspond with the authors and evaluators (~3 hours).
We are likely to follow up on your evaluation suggestions. We also may incorporate your writing into our web page and public posts; you can choose whether you want to be publicly acknowledged or remain anonymous.
Independent Projects (IP)
We are also inviting applications to do similar work as an “Independent Project” (IP), a parallel opportunity designed for those eager to engage but not interested in working under a contract, or not meeting some of the specific criteria for the Contracted Research role. This involves similar work to above.
If you are accepted to do an IP, we will offer some mentoring and feedback. We will also offer prize rewards/bounties for particularly strong IP work. We will also consider working with professors and academic supervisors on these IP projects, as part of university assignments and dissertations.
You can apply to the CR and IP positions together; we will automatically consider you for each.
Get Involved!
Kickstarter incentive: After the first 8 quality submissions (or by Jan. 1, 2025 - whichever comes later) we will award a prize of $500 to the strongest evaluation.
Note on .
The Unjournal is seeking academics, researchers, and students to submit structured evaluations of the most impactful research. Strong evaluations will be posted or linked on our PubPub page, offering readers a perspective on the implications, strengths, and limitations of the research. These evaluations can be submitted using our structured forms (one for academic-targeted research and one for applied and policy work); evaluators can publish their name or maintain anonymity; we also welcome collaborative evaluation work. We will facilitate, promote, and encourage these evaluations in several ways, described below.
We are particularly looking for people with research training, experience, and expertise in quantitative social science and statistics, including cost-benefit modeling and impact evaluation. This could include professors, other academic faculty, postdocs, researchers outside of academia, quantitative consultants and modelers, PhD students, and students aiming towards PhD-level work (pre-docs, research MSc students, etc.). But anyone is welcome to give this a try — when in doubt, please go for it.
We are also happy to support collaborations and group evaluations; there is a good track record for this (see, for example, ASAPbio's collaborative review initiatives). We may also host live events and/or facilitate asynchronous collaboration on evaluations.
Instructors/PhD, MRes, Predoc programs: We are also keen to work with students and professors to integrate ‘independent evaluation assignments’ (aka ‘learn to do peer reviews’) into research training.
Your work will support The Unjournal's core mission — improving impactful research through journal-independent public evaluation. In addition, you'll help research users (policymakers, funders, NGOs, fellow researchers) by providing high-quality, detailed evaluations that rate and discuss the strengths, limitations, and implications of research.
Doing an independent evaluation can also help you. We aim to provide feedback to help you become a better researcher and reviewer. We’ll also give prizes for the strongest evaluations. Lastly, writing evaluations will help you build a portfolio with The Unjournal, making it more likely we will commission you for paid evaluation work in the future.
We focus on rigorous, globally impactful work in quantitative social science and policy-relevant research. (See our research scope for details.) We're especially eager to receive independent evaluations of:
Research we publicly prioritize: see the research we've prioritized or evaluated.
Research we previously evaluated (see our PubPub page)
Work that other people and organizations suggest as having high potential for impact/value of information
The Unjournal's structured evaluation forms: We encourage evaluators to use one of our two structured forms, described below (one for academic-targeted research, one for applied and policy work).
Bounties: We will offer prizes for the ‘most valuable independent evaluations’.
All evaluation submissions will be eligible for these prizes and “grandfathered in” to any prizes announced later. We will announce and promote the prize winners (unless they opt for anonymity).
Evaluator pool: People who submit evaluations can elect to join our evaluator pool. We will consider and (time-permitting) internally rate these evaluations. People who do the strongest evaluations in our focal areas are likely to be commissioned as paid evaluators for The Unjournal.
We are reaching out to PhD programs and pre-PhD research-focused programs. Some curricula already involve "mock referee report" assignments. We hope professors will encourage their students to do these through our platform. In return, we'll offer the incentives and promotion mentioned above, as well as resources, guidance, and some further feedback.
5. Fostering a positive environment for anonymous and signed evaluations
We want to preserve a positive and productive environment. This is particularly important because we will be accepting anonymous content. We will take steps to ensure that the system is not abused. If the evaluations have an excessively negative tone, have content that could be perceived as personal attacks, or have clearly spurious criticism, we will ask the evaluators to revise this, or we may decide not to post or link it.
Crowdsourced feedback can add value in itself; encouraging this can enable some public evaluation and discussion of work that The Unjournal doesn't have the bandwidth to cover.
Improving our evaluator pool and evaluation standards in general.
Students and ECRs can practice and (if possible) get feedback on independent evaluations
They can demonstrate this ability publicly, enabling us to recruit and commission the strongest evaluators.
Examples will help us build guidelines, resources, and insights into ‘what makes an evaluation useful’.
This provides us with opportunities to engage with academia, especially in PhD programs and research-focused instruction.
I was in academia for about 20 years (PhD Economics, UC Berkeley; Lecturer, University of Essex; Senior Lecturer, University of Exeter). I saw how the journal system was broken.
Academics constantly complain about it (but don't do anything to improve it).
Most conversations are not about research, but about 'who got into what journal' and 'tricks for getting your paper into journals'.
Open science and replicability are great, and dynamic documents make research a lot more transparent and readable. But these goals and methods are very hard to apply within the traditional journal system and its 'PDF prisons'.
Now I'm working outside academia and can stick my neck out. I have the opportunity to help fix the system. I work with research organizations and large philanthropists involved with effective altruism and global priorities. They care about the results of research in areas that are relevant to global priorities. They want research to be reliable, robust, reasoning-transparent, and well-communicated. Bringing them into the equation can change the game.
5 Sep 2024: The Unjournal is still looking to build our team and evaluator pool. Please consider the roles below and/or contact us at contact@unjournal.org.
Activities of those on the management committee may involve a combination of the following (although you can choose your focus):
Contributing to the decision-making process regarding research focus, reviewer assignment, and prize distribution.
Collaborating with other committee members on the establishment of rules and guidelines, such as determining the metrics for research evaluation and defining the mode of assessment publication.
Helping plan The Unjournal’s future path.
Helping monitor and prioritize research for The Unjournal to evaluate (i.e., acting as a field specialist; see further discussion below). Acting as an evaluation manager for research in your area.
Time commitment: A minimum of 15–20 hours per year.
Compensation: We have funding for a $57.50 per hour honorarium for the first 20 hours, with possible additional compensation. Evaluation managers will be further compensated (at roughly $300–$450 per paper).
Who we are looking for: All applicants are welcome. We are especially interested in those involved in global priorities research (and related fields), policy research and practice, open science and meta-science, bibliometrics and scholarly publishing, and any other academic research. We want individuals with a solid interest in The Unjournal project and its goals, and the ability to meet the minimal time commitment. Applying is extremely quick, and those not chosen will be considered for other roles and work going forward.
Beyond direct roles within The Unjournal, we're building a larger, more passive advisory board to be part of our network, to offer occasional feedback and guidance. There is essentially no minimum time commitment for advisory board members—only opportunities to engage. We sketch some of the expectations in the fold below.
Dec 2024: We're introducing a lower-commitment "Unjournal Research Affiliate" role, targeted at PhD students, but available more broadly. See below.
FSs will focus on a particular area of research, policy, or impactful outcome. They will keep track of new or under-considered research with potential for impact and explain and assess the extent to which The Unjournal can add value by commissioning its evaluation. They will "curate" this research and may also serve as evaluation managers for this work.
Time commitment: There is no specific time obligation—only opportunities to engage. We may also consult you occasionally on your areas of expertise. Perhaps 1–4 hours a month is a reasonable starting expectation for people already involved in doing or using research, plus potential additional paid assignments.
Who we are looking for: For the FS roles, we are seeking active researchers, practitioners, and stakeholders with a strong publication record and/or involvement in research and/or research-linked policy and prioritization processes. For the AB, we also seek people with connections to academic, governmental, or relevant non-profit institutions, and/or involvement in open science, publication, and research evaluation processes: people who can offer relevant advice, experience, and guidance, or help communicate our goals, processes, and progress.
Join our Slack and Coda, but there is no expectation to check in regularly.
Suggest 2–3 papers per term (Spring, Summer, Autumn), and explain why these have potential for impact. This comes with modest compensation; see above.
Give a second opinion (assessment) for 3 papers per term.
Vote on 5–10 papers per term; check in to vote once a month or more.
Understand and be able to explain The Unjournal to your colleagues/on social media
Join at least one (one hour) FS or Unjournal general meeting per year.
Evaluation management is not expected, but we may consult URAs for occasional help in suggesting evaluators, with potential compensation.
If you are interested in discussing any of the above in person, please email us (contact@unjournal.org) to arrange a conversation.
See description under .
, Infectious Disease Researcher, London School of Hygiene and Tropical Medicine
, Associate Professor of Marketing, London Business School
, Applied Researcher (Global Health & Development) at Founder's Pledge
, Professor of Economics, UC Santa Barbara
, Post-Doctoral Researcher in the Department of Quantitative Methods and Economic Theory at the University of Chieti (Italy)
, Metascience Program Lead, Federation of American Scientists
, Managing Editor at : writing and research on global health, development, and nutrition
, Professor of Statistics and Political Science at Columbia University (New York)
, Associate Professor, University of Melbourne (Australia): expert judgment, biosciences, applied probability, uncertainty quantification
, Late-Stage PhD Student in Information Systems at the University of Cologne, Germany
, PhD Student, Applied Economics, University of Minnesota
Postdoctoral researcher, Institute for Interactive Systems and Data Science at Graz University of Technology (Austria)
, Associate Researcher, INRAE, Member, Toulouse School of Economics (France)
, Data Scientist, Economist Consultant; PhD University of British Columbia (Economics)
The table below shows all the members of our team (including field specialists) taking on a research-monitoring role (see above for a description of this role).
, Research Specialist: Data science, metascience, aggregation of expert judgment
, Operations generalist
, Generalist assistance
, Communications (academic research/policy)
, Communications and copy-editing
, Communications and consulting
, technical software support
Red Bermejo, Mikee Mercado, Jenny Siers – consulting (through ) on strategy, marketing, and task management tools
We are a member of . They are working with us to update PubPub and incorporate new features (editorial management, evaluation tools, etc.) that will be particularly useful to The Unjournal and other members.
Abel Brodeur, Founder/chair of the
The
, Assistant Professor in the Department of Economics at the University of Toronto
Other academic and policy economists, such as , , , , , and
Cooper Smout, head of
, Center for Open Science
, Faculty Director, Berkeley Initiative for Transparency in the Social Sciences (BITSS)
Daniel Saderi,
, who helped me put this proposal together by asking a range of challenging questions and offering feedback
, Experimental Psychologist at the Human-Technology Interaction group at Eindhoven University of Technology (Netherlands), has also completed research with the Open Science Collaboration and the Peer Reviewers’ Openness Initiative
Paolo Crosetto (Experimental Economics, French National Research Institute for Agriculture, Food and Environment)
Sergey Frolov (Physicist), Prof. J.-S. Caux, Physicist and head of
Alex Barnes, Business Systems Analyst,
Nathan Young; considering connecting The Unjournal to Metaculus predictions
Edo Arad (mathematician and EA research advocate)
See also (in ACX grant proposal).
We will compensate you for your time at a rate reflecting your experience and skills ($25–$65/hour). This work also has the potential to serve as a "work sample" for future roles at The Unjournal, as it is highly representative of the work our team members are commissioned to do.
If you are interested in involvement in either the CR or IP side of this project, please let us know.
You can also suggest research yourself and then do an independent evaluation of it.
We're looking for careful methodological/technical evaluations that focus on research credibility, impact, and usefulness. We want evaluators to dig into the weeds, particularly in areas where they have aptitude and expertise. See our evaluation guidelines.
Our academic form: use this if you are evaluating research aimed at an academic journal or similar outlet.
Our applied/policy form: use this if you are evaluating research that is probably not aimed at an academic journal. This may include somewhat less technical work, such as reports from policy organizations and think tanks, or impact assessments and cost-benefit analyses.
Other public evaluation platforms: We are also open to engaging with evaluations done on existing public evaluation platforms. Evaluators: if you prefer to use another platform, please let us know about your evaluation using one of the forms above. If you like, you can leave most of our fields blank and provide a link to your evaluation on the other public platform.
Academic (~PhD) assignments and projects: We are also looking to build ties with research-intensive university programs; we can help you structure academic assignments and provide external reinforcement and feedback. Professors, instructors, and PhD students: please contact us (contact@unjournal.org).
We will encourage all these independent evaluations to be publicly hosted, and will share links to them. We will further promote the strongest independent evaluations.
However, when we host or link these, we will keep them clearly separated and signposted as distinct from the commissioned evaluations; independent evaluations will not be considered official, and their ratings won't be included in our main ratings data (see our dashboard).
As a start, after the first eight quality submissions (or by Jan. 1, 2025, whichever comes later), we will award a prize of $500 to the most valuable evaluation.
Further details tbd.
We're also moving towards a two-tiered base rate: we will offer a higher rate to people who can demonstrate previous strong review/evaluation work. These independent evaluations will count towards this 'portfolio'.
Our PubPub page provides examples of strong work.
We will curate guidelines and learning materials from relevant fields and from applied work and impact evaluation.
The Unjournal commissions public evaluations of impactful research in quantitative social science fields. We are an alternative and a supplement to traditional academic peer-reviewed journals; separating evaluation from journals unlocks a range of benefits. We ask expert evaluators to write detailed, constructive, critical reports. We also solicit a set of structured ratings focused on research credibility, methodology, careful and calibrated presentation of evidence, reasoning transparency, replicability, relevance to global priorities, and usefulness for practitioners (including funders, project directors, and policymakers who rely on this research). While we have mainly targeted impactful research from academia, our applied stream covers impactful work that uses formal quantitative methods but is not aimed at academic journals. So far, we've commissioned about 50 evaluations of 24 papers, and published these evaluation packages on our PubPub page, linked to academic search engines and bibliometrics.
Our guidelines for evaluators also provide some guidance on the nature of the work and the time involved.
Compensation: We have put together a preliminary/trial compensation formula; we aim to fairly compensate people for time spent on work done to support The Unjournal, and to provide incentives for suggesting and helping to prioritize research for evaluation. In addition, evaluation managers will be compensated at roughly $300–$450 per project.
Interested? Please fill out this form (about 3–5 min, using the same form for all roles).
We will compensate you for the time you spend on this process (details TBD), particularly to the extent that the time you spend does not contribute to your other work or research.
See:
PhD students (and others) may be interested in getting involved but worried about making a large time commitment. But getting busy students and researchers involved even minimally could keep us in touch with the cutting edge of research and help us forge collaborations in academia. So we're offering the Unjournal Research Affiliate (URA) role, which is similar to the Field Specialist role but with less responsibility and time commitment. It comes with no base honorarium, but URAs are eligible for the compensation for suggesting papers, high-value assessment prioritization, etc.
If you are interested in discussing any of the above in person, please email us (contact@unjournal.org) to arrange a conversation.
We invite you to fill out our expression-of-interest form (the same as that linked above) to leave your contact information and outline which parts of the project interest you.
Note: These descriptions are under continual refinement; see our policies for more details.
Dec 2024: We are still looking to bring in more field specialists to build our teams in all areas, but particularly in the quantitative social science and economics/behavioral modeling of catastrophic risks, AI governance, and AI safety.
In addition to the "work roles," we are looking to engage researchers, research users, meta-scientists, and people with experience in open science, open access, and management of initiatives similar to The Unjournal.
We are continually looking to enrich our general team and board, including our #management-committee-members and #advisory-board-members-abm-and-field-specialists-fs. These roles come with some compensation and incentives.
(Please see links and consider submitting an expression of interest).
These are principally not research roles, but familiarity with research and research environments will be helpful, and there is room for research involvement depending on the candidate’s interest, background, and skills/aptitudes.
There is currently one such role:
See the freelance communications role below (as of November 2023, still seeking freelancers).
Further note: We previously considered a “Management support and administrative professional” role. We are not planning to hire for this role currently. Those who indicated interest will be considered for other roles.
As of November 2023, we are soliciting applications from freelancers with skills in particular areas.
The Unjournal is looking to work with a proficient writer who is adept at communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. As we are in our early stages, this is a generalist role. We need someone to help us explain what The Unjournal does and why, make our processes easy to understand, and ensure our outputs (evaluations and research synthesis) are accessible and useful to non-specialists. We seek someone who values honesty and accuracy in communication; someone who has a talent for simplifying complex ideas and presenting them in a clear and engaging way.
The work is likely to include:
Promotion and general explanation
Spread the word about The Unjournal, our approach, our processes, and our progress in press releases and short pieces, as well as high-value emails and explanations for a range of audiences
Make the case for The Unjournal to potentially skeptical audiences in academia/research, policy, philanthropy, effective altruism, and beyond
Keeping track of our progress and keeping everyone in the loop
Help produce and manage our external (and some internal) long-form communications
Help produce and refine explanations, arguments, and responses
Help provide reports to relevant stakeholders and communities
Making our rules and processes clear to the people we work with
Explain our procedures and policies for research submission, evaluation, and synthesis; make our systems easy to understand
Help us build flexible communications templates for working with research evaluators, authors, and others
Other communications, presentations, and dissemination
Write and organize content for grants applications, partnership requests, advertising, hiring, and more
Potentially: compose non-technical write-ups of Unjournal evaluation synthesis content (in line with interest and ability)
Most relevant skills, aptitudes, interests, experience, and background knowledge:
Understanding of The Unjournal project
Strong written communications skills across a relevant range of contexts, styles, tones, and platforms (journalistic, technical, academic, informal, etc.)
Familiarity with academia and research processes and institutions
Familiarity with current conversations and research on global priorities within government and policy circles, effective altruism, and relevant academic fields
Willingness to learn and use IT, project management, data management, web design, and text-parsing tools (such as those mentioned below), with the aid of GPT/AI chat
Further desirable skills and experience:
Academic/research background in areas related to The Unjournal’s work
Operations, administrative, and project management experience
Experience working in a small nonprofit institution
Experience with promotion and PR campaigns and working with journalists and bloggers
Proposed terms:
Project-based contract "freelance" work
$30–$55/hour USD (TBD, depending on experience and capabilities). Hours for each project include some onboarding and upskilling time.
Our current budget can cover roughly 200 hours of this project work. We hope to increase and extend this (depending on our future funding and expenses).
This role is contract-based and supports remote and international applicants. We can contract people living in most countries, but we cannot serve as an immigration sponsor.
We are again considering applications for the 'evaluation metrics/meta-science' role. We will also consider all applicants for our field specialist positions, and for roles that may come up in the future.
The potential roles discussed below combine research-linked work with operations and administrative responsibilities. Overall, this may include some combination of:
Assisting and guiding the process of identifying strong and potentially impactful work in key areas, explaining its relevance, its strengths, and areas warranting particular evaluation and scrutiny
Interacting with authors, recruiting, and overseeing evaluators
Synthesizing and disseminating the results of evaluations and ratings
Aggregating and benchmarking these results
Helping build and improve our tools, incentives, and processes
Curating outputs relevant to other researchers and policymakers
Doing "meta-science" work
See also our field specialist team pool and evaluator pool. Most of these roles involve compensation/honorariums. See Advisory/team roles (research, management)
Express your interest here. (Nov. 2023: Note, we cannot guarantee that we will be hiring for this role, because of changes in our approach.)
Several expositions for different audiences, fleshing out ideas and plans
See/subscribe to our YouTube channel
See slide deck (Link: bit.ly/unjourrnalpresent; offers comment access)
Earlier discussion document, aimed at EA/global priorities, academic, and open-science audiences [link]
2021: A shorter outline posted on onscienceandacademia.org
Did you just write a brilliant peer review for an economics (or social science, policy, etc.) journal? Your work should not be wasted; there should be a way to share your insights and get credit!
Consider transforming these insights into a public "independent evaluation" for The Unjournal. This will benefit the community and help make research better and more impactful. We can share your work and provide you with feedback. This will help you build a portfolio with The Unjournal, making it more likely we'll hire you for paid work and compensate you at the higher rate. And we offer prizes for the best work.
You can either do this anonymously or sign your name.
To say this in more detail:
Journal peer review is critical for assessing and improving research, but too often these valuable discussions remain hidden behind closed doors. By publishing a version of your review, you can:
(1) amplify the impact of your reviewing efforts by contextualizing the research for a broader audience;
(2) facilitate more transparent academic discussions around the strengths and limitations of the work;
(3) get public recognition for your peer review contributions, which are often unseen and unrewarded;
(4) reduce overall reviewing burdens by allowing your assessment to be reused; and
(5) support a culture of open scholarship by modeling constructive feedback on public research.
According to a COPE Discussion document: Who “owns” peer reviews (emphasis added)
While the depth of commentary may vary greatly among reviews, given the minimal thresholds set by copyright law, it can be presumed that most reviews meet the requirements for protection as an “original work of authorship”. As such, in the absence of an express transfer of copyright or a written agreement between the reviewer and publisher establishing the review as a “work for hire”, it may be assumed that, by law, the reviewer holds copyright to their reviewer comments and thus is entitled to share the review however the reviewer deems fit...
The COPE council notes precisely the benefits we are aiming to unlock. They mention an 'expectation of confidentiality' that seems incompletely specified.
For example, reviewers may wish to publish their reviews in order to demonstrate their expertise in a subject matter and to contribute to their careers as a researcher. Or they may see publication of their reviews as advancing discourse on the subject and thus acting for the benefit of science as a whole. Nevertheless, a peer reviewer’s comments are significantly different from many other works of authorship in that they are expressly solicited as a work product by a journal and—whatever the peer review model—are subject to an expectation of confidentiality. However, without an express agreement between the journal and the reviewer, it is questionable whether such obligation of confidentiality should be considered to apply only until a final decision is reached on the manuscript, or to extend indefinitely.
Several journals explicitly agree that reviewers are welcome to publish the content of their reviews, with some important caveats. The Publish Your Reviews initiative gathered public statements from several journals and publishers confirming that they support reviewers posting their comments externally. However, they generally ask reviewers to remove any confidential information before sharing their reviews. This includes: the name of the journal, the publication recommendation (e.g., accept, revise, or reject), and any other details the journal or authors considered confidential, such as unpublished data.
For these journals, we are happy to accept and share/link the verbatim content as part of an independent Unjournal evaluation.
But even for journals that have not signed on to this, as the COPE document notes, your peer review is your intellectual property; it is not owned by the journal!
There may be some terms and conditions you agreed to as part of submitting a referee report. Please consult these carefully.
However, you are still entitled to share your own expert opinions on publicly shared research. You may want to rewrite the review somewhat. You should make it clear that it refers to the publicly shared (working paper/preprint) version of the research, not the version the journal shared with you in confidence. As above, you should probably not mention the journal name, the decision, or any other sensitive information. You don't even need to mention that you reviewed the paper for a journal.
Even if a journal considers the specific review confidential, this doesn't prevent the reviewer from expressing their independent assessment elsewhere.
As an expert reviewer, you have unique insights that can improve the quality and impact of research. Making your assessment available through The Unjournal amplifies the reach and value of your efforts. You can publish evaluations under your name or remain anonymous.
Ready to make your peer reviews work harder for science? Consider submitting an independent evaluation, for recognition, rewards, and to improve research. Contact us anytime at contact@unjournal.org for guidance... We look forward to unlocking your valuable insights!
What do we offer? How does it improve upon traditional academic review/publishing?
: The Unjournal's process reduces the high costs and "gaming" associated with standard journal publication mechanisms.
: We promote research replicability and robustness in line with the RRC agenda.
: We prioritize impactful work. Expert evaluators focus on reliability, robustness, and usefulness. This fosters a productive bridge between high-profile mainstream researchers and global-impact-focused organizations, researchers, and practitioners.
: We open up the evaluation process, making it more timely and transparent, and providing valuable public metrics and feedback for the benefit of authors, other researchers, and policymakers who may want to use the research.
: By separating evaluation from journal publication, we free research from static 'PDF prisons'. This enables "dynamic documents/notebooks" that boost transparency and replicability and improve research communication through web-based formats. It also facilitates "living projects": research that can continuously grow, improving in response to feedback and incorporating new data and methods in the same environment.
22 Aug 2024: we will be moving our latest updates to our
Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.
Join us on March 25 for an interactive workshop, featuring presentations from Macie Daley (Center for Open Science), (The Unjournal), (UC Santa Barbara), and The Unjournal’s Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.
The event will be held fully online on Zoom, on March 25, from 9:00 to 11:30 AM (EST) and from 9:30 PM to midnight (EST), to accommodate a range of time zones. UTC: 25 March, 1:00 to 3:30 PM, and 26 March, 1:30 to 4:00 AM. The event is timetabled: feel free to participate in any part you wish.
See the for all details, and to register.
With the completed set of evaluations of and , our pilot is complete:
10 research papers evaluated
21 evaluations
5 author responses
Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.
"Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out; we are looking for further partners. We will announce the details of this event once these are finalized.
Our pilot yielded a rich set of data and learning-by-doing. We plan to make use of this, including . . .
synthesizing and reporting on evaluators' and authors' comments on our process; adapting these to make it better;
analyzing the evaluation metrics for patterns, potential biases, and reliability measures;
"aggregating expert judgment" from these metrics;
tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and
drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).
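The following is a minimal sketch of what "aggregating expert judgment" from these metrics could look like, assuming each rating is reported as a midpoint with a 90% credible interval. The numbers and the simple precision-weighting rule below are illustrative assumptions only, not our settled method.

```python
# Illustrative sketch: pooling several evaluators' midpoints for one metric,
# weighting each rating by the precision implied by its 90% credible interval.
# The data and the weighting rule are assumptions for illustration only.

ratings = [
    {"midpoint": 78, "ci90": (65, 88)},
    {"midpoint": 62, "ci90": (40, 80)},
    {"midpoint": 71, "ci90": (60, 82)},
]

def pooled_midpoint(ratings):
    total_weight, weighted_sum = 0.0, 0.0
    for r in ratings:
        lo, hi = r["ci90"]
        width = max(hi - lo, 1e-9)      # narrower interval -> larger weight
        weight = 1.0 / width ** 2
        total_weight += weight
        weighted_sum += weight * r["midpoint"]
    return weighted_sum / total_weight

print(round(pooled_midpoint(ratings), 1))  # roughly 72.7 for these inputs
```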
discuss and report on the state of research in their areas, including where and when relevant research is posted publicly, and in what state;
the potential for Unjournal evaluation of this work as well as when and how we should evaluate it, considering potential variations from our basic approach; and
how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the aforementioned frameworks.
Most concretely, the field teams will divide up the space of research work to be scoped and prioritized among the members of the teams.
Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!
We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:
existing research into the research evaluation process, and on expert judgment elicitation and aggregation;
practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the “Four validities” in research design, etc.
Of course, our context and goals are somewhat distinct from the initiatives above.
We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.
(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)
We aim to bring a range of researchers and practitioners into these questions, and to engage in public discussion. Please reach out.
I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums and answer questions about The Unjournal, respond to doubts you may have, consider your suggestions and discuss alternative initiatives.
Some (other) ways to follow The Unjournal's progress
MailChimp link: Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.
Hope these updates are helpful. Let me know if you have suggestions.
Building a "best feasible plan"..
What is this Unjournal?... See .
See the vision and broad plan presented (and embedded below), updated August 2023.
Status: Mostly completed/decided for pilot phase
Which projects enter the review system (relevance, minimal quality, stakeholders, any red lines or "musts")
How projects are to be submitted
How reviewers are to be assigned and compensated
Status: Mostly completed/decided for pilot phase; will review after initial trial
To be done on the chosen open platform (Kotahi/Sciety) unless otherwise infeasible (10 Dec 2022 update)
Share, advertise, promote this; have efficient meetings and presentations
Establish links to all open-access bibliometric initiatives (to the extent feasible)
Harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, to coin-based 'ResearchHub', etc.
Status: Mostly completed/decided for pilot phase; will review after the initial trial
Status: We are still working with Google Docs and building an external survey interface. We plan to integrate this with PubPub over the coming months (August/Sept. 2023)
You can see this output most concisely (evaluations are listed as "supplements," at least for the time being).
For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook .
Remember, we assign individual DOIs to all of these outputs (evaluation, responses, manager syntheses) and aim to get the evaluation data into all bibliometrics and scholarly databases. So far, Google Scholar (The Google Scholar algorithm is a bit opaque—your tips are welcome.)
We will make decisions and award our pilot and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process to make these decisions. See our expanding notes, discussion, and links on ""
We are still inviting applications to help us build these frameworks and processes. Our next steps:
Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our "" and our .
Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.). We have begun to lay this out (the linked discussion spaces are private for now, but we aim to make things public whenever feasible). These "field teams" will
New members of our team: Welcome to our advisory board, as a field specialist.
As part of our scale-up (and in conjunction with supporting on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).
metrics used (e.g., "risk of bias") in systematic reviews and meta-analyses as well as databases such as .
Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) on "" to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing" ... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")
Check out our to read evaluations and author responses.
(David Reinstein) on Twitter or Mastodon, or the hashtag #unjournal (when I remember to use it).
Visit for an overview.
Alternatively, fill out this to get this newsletter and tell us some things about yourself and your interests. The data protection statement is linked .
Progress notes: We will keep track of important developments here before we incorporate them into the ." Members of the UJ team can add further updates here or in ; we will incorporate changes.
See also
Updated:
See for proposed specifics.
Define the broad scope of our research interest and key overriding principles. Light-touch, to also be attractive to aligned academics
Build "editorial-board-like" teams with subject or area expertise
See for a first pass.
See our .
See our .
Host article (or dynamic research project or 'registered report') on OSF or another place allowing time stamping & DOIs (see for a start)
Link this to (or similar tool or site) to solicit feedback and evaluation without requiring exclusive publication rights (again, see )
Also: Commit to publish academic reviews or share in our internal group for further evaluation and reassessment or benchmarking of the ‘PREreview’ type reviews above (perhaps taking the ).
TLDR: Unjournal promotes research replicability/robustness
Unjournal evaluations aim to support the "Reproducibility/Robustness-Checking" (RRC) agenda. We are directly engaging with the Institute for Replication (I4R) and the repliCATS project (RC), and building connections to Replication Lab/TRELiSS and Metaculus.
We will support this agenda by:
Promoting data and code sharing: We ask preprint authors to share their code and data, and we reward them for their transparency.
Promoting 'Dynamic Documents' and 'Living Research Projects': Breaking out of "PDF prisons" to achieve increased transparency.
Encouraging detailed evaluations: Unjournal evaluators are asked to:
highlight the key/most relevant research claims, results, and tests;
propose possible robustness checks and tests (RRC work); and
make predictions for these tests.
Implementing computational replication and robustness checking: We aim to work with I4R and other organizations to facilitate and evaluate computational replication and robustness checking.
Advocating for open evaluation: We prioritize making the evaluation process transparent and accessible for all.
While the replication crisis in psychology is well known, economics is not immune. Some very prominent and influential work has blatant errors, depends on dubious econometric choices or faulty data, is not robust to simple checks, or uses likely-fraudulent data. Roughly 40% of experimental economics work fails to replicate. Prominent commenters have argued that the traditional journal peer-review system does a poor job of spotting major errors and identifying robust work.
My involvement with the SCORE replication market project shed light on a key challenge (see Twitter posts): The effectiveness of replication depends on the claims chosen for reproduction and how they are approached. I observed that it was common for the chosen claim to miss the essence of the paper, or to focus on a statistical result that, while likely to reproduce, didn't truly convey the author's message.
Simultaneously, I noticed that many papers had methodological flaws (for instance, lack of causal identification or the presence of important confounding factors in experiments). But I thought that these studies, if repeated, would likely yield similar results. These insights emerged from only a quick review of hundreds of papers and claims. This indicates that a more thorough reading and analysis could potentially identify the most impactful claims and elucidate the necessary RRC work.
Indeed, detailed, high-quality referee reports for economics journals frequently contain such suggestions. However, these valuable insights are often overlooked and rarely shared publicly. Unjournal aims to change this paradigm by focusing on three main strategies:
Identifying vital claims for replication:
We plan to have Unjournal evaluators help highlight key "claims to replicate," along with proposing replication goals and methodologies. We will flag papers that particularly need replication in specific areas.
Public evaluation and author responses will provide additional insight, giving future replicators more than just the original published paper to work with.
Encouraging author-assisted replication:
The Unjournal's platform and metrics, promoting dynamic documents and transparency, simplify the process of reproduction and replication.
By emphasizing replicability and transparency at the working-paper stage (the current focus of Unjournal evaluations), we make authors more amenable to facilitating replication work in later stages, such as after traditional publication.
Predicting replicability and recognizing success:
We aim to ask Unjournal evaluators to make predictions about replicability. When these are successfully replicated, we can offer recognition. The same holds for repliCATS aggregated/IDEA group evaluations: To know if we are credibly assessing replicability, we need to compare these to at least some "replication outcomes."
The potential to compare these predictions to actual replication outcomes allows us to assess the credibility of our replicability evaluations. It may also motivate individuals to become Unjournal evaluators, attracted by the possibility of influencing replication efforts.
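As a hedged illustration of what such a comparison might involve (the predictions and outcomes below are invented), one standard approach is to score each stated replication probability against the observed binary outcome, for example with a Brier score:

```python
# Illustrative only: scoring hypothetical replication-probability predictions
# against hypothetical observed outcomes using the Brier score
# (lower is better; always guessing 50% scores 0.25).
predictions = [0.8, 0.6, 0.3, 0.9]   # predicted probability of replication
outcomes    = [1,   1,   0,   0]     # 1 = replicated, 0 = did not replicate

brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)
print(f"Brier score: {brier:.3f}")   # 0.275 for these made-up numbers
```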
By concentrating on NBER papers, we increase the likelihood of overlap with journals targeted by the Institute for Replication, thus enhancing the utility of our evaluations in aiding replication efforts.
By “Dynamic Documents” I mean papers/projects built with Quarto, R Markdown, or Jupyter notebooks (the most prominent tools), which run and report the data analysis (as well as math/simulations) in the same space where the results and discussion are presented (with ‘code blocks’ hidden if desired).
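For instance, a minimal Quarto document might look like the sketch below (the file and dataset names are hypothetical). The code chunk runs when the document is rendered, so the reported estimate always matches the underlying data and code.

````markdown
---
title: "Effect of treatment on outcome (hypothetical example)"
format: html
jupyter: python3
---

The estimate reported below is recomputed every time this document is rendered.

```{python}
#| echo: false
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")              # hypothetical dataset
fit = smf.ols("outcome ~ treatment", data=df).fit()
print(f"Estimated treatment effect: {fit.params['treatment']:.2f}")
```
````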
I consider some of the benefits of this format, particularly for EA-aligned organizations like Open Philanthropy: Benefits of Dynamic Documents
“Continually update a project” rather than start a “new extension paper” when you see what you could have done better.
The main idea is that each version is given a specific time stamp, and that is the object that is reviewed and cited. This is more or less already the case when we cite working papers/drafts/mimeos/preprints.
See #living-kaizend-research-projects, which further discusses the potential benefits.
Claim: Rating and feedback are better than an ‘all-or-nothing’ accept/reject process. Although people like to say “peer review is not binary”, the consequences are.
“Publication in a top journal” is used as a signal and a measuring tool for two major purposes. First, policymakers, journalists, and other researchers look at where a paper is published to assess whether the research is credible and reputable. Second, universities and other institutions use these publication outcomes to guide hiring, tenure, promotion, grants, and other ‘rewards for researchers.’
Did you know?: More often than not, academic economists speak of the "supply of spaces in journals” and the “demand to publish in these journals”. Who is the consumer? Certainly not the perhaps-mythical creature known as the ‘reader’.
In economics, it is not unusual for years to pass between the first publicly circulated working paper and the final publication. During that time, the paper may be substantially improved, but it may not be known to or accepted by practitioners. Meanwhile, it provides little or no career value to the authors.
As a result, we see three major downsides:
Time spent gaming the system:
Researchers and academics spend a tremendous amount of time 'gaming' this process, at the expense of actually doing research.
Randomness in outcomes, unnecessary uncertainty and stress
Wasted feedback, including reviewer's time
I (Reinstein) have been in academia for about 20 years. Around the departmental coffee pot and during research conference luncheons, you might expect us to talk about theories, methods, and results. But roughly half of what we talk about is “who got into which journal and how unfair it is”; “which journal should we be submitting our papers to?”; how long are their “turnaround times?”; “how highly rated are these journals?”; and so on. We even exchange tips on how to ‘sneak into these journals’.
There is a lot of pressure, and even bullying, to achieve these “publication outcomes” at the expense of careful methodology.
The current system can sideline deserving work due to unpredictable outcomes. There's no guarantee that the cream will rise to the top, making research careers much more stressful—even driving out more risk-averse researchers—and sometimes encouraging approaches that are detrimental to good science.
A lot of ‘feedback’ is wasted, including the reviewers' time. Some reviewers write ten-page reports critiquing a paper in great detail, even when they reject it. These reports are sometimes very informative and useful for the author, and would also be very helpful for the wider public and research community in understanding the nature of the debate and the issues at stake.
However, researchers often have a very narrow focus on getting the paper published as quickly and in as high-prestige a journal as possible. Unless the review is part of a 'Revise and Resubmit' that the author wants to fulfill, they may not actually put the comments into practice or address them in any way.
Of course, the reviews may be misinformed, mistaken, or may misunderstand aspects of the research. However, if the paper is rejected (even if the reviewer was positive about the paper), the author has no opportunity or incentive to respond to the reviewer. Thus the misinformed reviewer may remain in the dark.
The other side of the coin: a lot of effort is spent currying favor with reviewers, who are often seen as overly fussy, and this effort does not always push in the direction of good science.
of the process and timings at top journals in economics. Hadavand et al. report an average of over 24 months between initial submission and final acceptance (and nearly three years until publication).
Our theory of change is shown above as a series of possible paths; we indicate what is arguably the most "direct" path in yellow. All of these paths begin with our setting up, funding, communicating, and incentivizing participation in a strong, open, efficient research evaluation system (in green, at the top). These processes all lead to impactful research being more in-depth, more reliable, more accessible, and more useful, and thus better informing decision-makers and leading to better decisions and outcomes (in green, at the bottom).
You can zoom in on #some-of-the-main-paths below
(Yellow) Faster and better feedback on impactful research improves this work and better informs policymakers and philanthropists.
(Blue) Our processes and incentives will foster ties between (a) mainstream and prominent academic and policy researchers and (b) global-priorities or EA-aligned researchers. This will improve the rigor, credibility, exposure, and influence of previously "EA niche" work while helping mainstream researchers better understand and incorporate ideas, principles, and methods from the EA and rationalist research communities (such as counterfactual impact, cause-neutrality, reasoning transparency, and so on.) This process will also nudge mainstream academics towards focusing on impact and global priorities, and towards making their research and outputs more accessible and useable.
(Pink) The Unjournal’s more efficient, open, and flexible processes will become attractive to academics and stakeholders. As we become better at "predicting publication outcomes," we will become a replacement for traditional processes, improving research overall—some of which will be highly impactful research.
Rigorous quantitative and empirical research in economics, business, public policy, and social science has the potential to improve our decision-making and enable a flourishing future. This can be seen in the research frameworks proposed by 80,000 Hours, Open Philanthropy, and The Global Priorities Institute (see discussions here). This research is routinely used by effective altruists working on global priorities or existential risk mitigation. It informs both philanthropic decisions (e.g., those influenced by GiveWell's Cost-Effectiveness Analyses, whose inputs are largely based on academic research) and national public policy. Unfortunately, the academic publication process is notoriously slow; for example, in economics, it routinely takes 2–6 years between the first presentation of a research paper and the eventual publication in a peer-reviewed journal. Recent reforms have sped up parts of the process by encouraging researchers to put working papers and preprints online.
However, working papers and preprints often receive at most only a cursory check before publication, and it is up to the reader to judge quality for themselves. Decision-makers and other researchers rely on peer review to judge the work’s credibility. This part remains slow and inefficient. Furthermore, it provides very noisy signals: A paper is typically judged by the "prestige of the journal it lands in" (perhaps after an intricate odyssey across journals), but it is hard to know why it ended up there. Publication success is seen to depend on personal connections, cleverness, strategic submission strategies, good presentation skills, and relevance to the discipline’s methods and theory. These factors are largely irrelevant to whether and how philanthropists and policymakers should consider and act on a paper’s claimed findings. Reviews are kept secret; the public never learns why a paper was deemed worthy of a journal, nor what its strengths and weaknesses were.
We believe that disseminating research sooner—along with measures of its credibility—is better.
We also believe that publicly evaluating its quality before (and in addition to) journal publication will add substantial additional value to the research output, providing:
a quality assessment (by experts in the field) that decision-makers and other researchers can read alongside the preprint, helping these users weigh its strengths and weaknesses and interpret its implications; and
faster feedback to authors focused on improving the rigor and impact of the work.
Various initiatives in the life sciences have already begun reviewing preprints. While economics took the lead in sharing working papers, public evaluation of economics, business, and social science research is rare. The Unjournal is the first initiative to publicly evaluate rapidly-disseminated work from these fields. Our specific priority: research relevant to global priorities.
The Unjournal’s open feedback should also be valuable to the researchers themselves and their research community, catalyzing progress. As the Unjournal Evaluation becomes a valuable outcome in itself, researchers can spend less time "gaming the journal system." Shared public evaluation will provide an important window to other researchers, helping them better understand the relevant cutting-edge concerns. The Unjournal will permit research to be submitted in a wider variety of useful formats (e.g., dynamic documents and notebooks rather than "frozen pdfs"), enabling more useful, replicable content and less time spent formatting papers for particular journals. We will also allow researchers to improve their work in situ and gain updated evaluations, rather than having to spin off new papers. This will make the literature more clear and less cluttered.
The Unjournal is delighted to announce the winners of our inaugural Impactful Research Prize. We are awarding our first prize to Takahiro Kubo (NIES Japan and Oxford University) and co-authors for their research titled "Banning wildlife trade can boost demand". The paper stood out for its intriguing question, the potential for policy impact, and methodological strength. We particularly appreciated the authors’ open, active, and detailed engagement with our evaluation process.
The second prize goes to Johannes Haushofer (NUS Singapore and Stockholm University) and co-authors for their work "The Comparative Impacts of Cash Transfers and a Psychotherapy Program on Psychological and Economic Wellbeing". Our evaluators rated this paper among the highest across a range of metrics. It was highly commended for its rigor, the importance of the topic, and the insightful discussion of cost-effectiveness.
We are recognizing exceptional evaluators for credible, insightful evaluations. Congratulations to Phil Trammell (Global Priorities Institute at the University of Oxford), Hannah Metzler (Complexity Science Hub Vienna), Alex Bates (independent researcher), and Robert Kubinec (NYU Abu Dhabi).
We would like to congratulate all of the winners on their contributions to open science and commitment to rigorous research. We also thank other authors who have submitted their work but have not been selected at this time - we received a lot of excellent submissions, and we are committed to supporting authors beyond this research prize.
Please see the full press release, as well as award details, below and linked here:
Traditional peer review is a closed process, with reviewers' and editors' comments and recommendations hidden from the public.
In contrast, Unjournal evaluations (along with authors' responses and evaluation manager summaries) are made public and easily accessible. We give each of these a separate DOI and work to make sure each enters the literature and bibliometric databases. We aim further to curate these, making it easy to see the evaluators' comments in the context of the research project (e.g., with sidebar/hover annotation).
Open evaluation is more useful:
to other researchers and students (especially those early in their careers). Seeing the dialogue helps them digest the research itself and understand its relationship to the wider field. It helps them understand the strengths and weaknesses of the methods and approaches used, and how much agreement there is over these choices. It gives an inside perspective on how evaluation works.
to people using the research, providing further perspectives on its value, strengths and weaknesses, implications, and applications.
Publicly posting evaluations and responses may also lead to higher quality and more reliability. Evaluators can choose whether or not they wish to remain anonymous; there are tradeoffs either way, but in either case, the fact that all the content is public may encourage evaluators to more fully and transparently express their reasoning and justifications. (And where they fail to do so, readers of the evaluation can take this into account.)
The fact that we are asking for evaluations and ratings of all the projects in our system—and not using "accept/reject"—should also drive more careful and comprehensive evaluation and feedback. At a traditional top-ranked journal, a reviewer may limit themselves to a few vague comments implying that the paper is "not interesting or strong enough to merit publication." This would not make sense within the context of The Unjournal.
We do not "accept or reject" papers; we are evaluating research, not "publishing" it. But then, how do other researchers and students know whether the research is worth reading? How can policymakers know whether to trust it? How can it help a researcher advance their career? How can grantmakers and organizations know whether to fund more of this research?
As an alternative to the traditional measure of worth—asking, "what tier did a paper get published in?"—The Unjournal provides metrics: We ask evaluators to provide a specific set of ratings and predictions about aspects of the research, as well as aggregate measures. We make these public. We aim to synthesize and analyze these ratings in useful ways, as well as make this quantitative data accessible to meta-science researchers, meta-analysts, and tool builders.
Feel free to check out our and (these are our pilot metrics; we aim to refine them).
These metrics are separated into different categories designed to help researchers, readers, and users understand things like:
How much can one believe the results stated by the authors (and why)?
How relevant are these results for particular real-world choices and considerations?
Is the paper written in a way that is clear and readable?
How much does it advance our current knowledge?
We also request overall ratings and predictions . . . of the credibility, importance, and usefulness of the work, and to help benchmark these evaluations to each other and to the current "journal tier" system.
Even here, the Unjournal metrics are precise in a sense that "journal publication tiers" are not. There is no agreed-upon metric of exactly how journals rank (e.g., within economics' "top-5" or "top field journals"). More importantly, there is no clear measure of the relative quality and trustworthiness of a paper within a particular journal.
In addition, there are issues of lobbying, career concerns, and timing, discussed elsewhere, which make the "tiers" system less reliable. An outsider doesn't know, for example:
Was a paper published in a top journal because of a special relationship and connections? Was an editor trying to push a particular agenda?
Was it published in a lower-ranked journal because the author needed to get some points quickly to fill their CV for an upcoming tenure decision?
In contrast, The Unjournal requires evaluators to give specific, precise, quantified ratings and predictions (along with an explicit metric of the evaluator's uncertainty over these appraisals).
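To make this concrete, a single evaluator's quantified ratings might be recorded along the lines sketched below. The field names, scales, and numbers are hypothetical illustrations, not The Unjournal's actual evaluation form.

```python
# Hypothetical sketch of one evaluator's quantified ratings with explicit
# uncertainty; field names and scales are invented for illustration only.
evaluation = {
    "paper": "Example working paper (2023 draft)",
    "ratings": {
        # midpoint on a 0-100 scale, plus a 90% credible interval
        "overall_assessment": {"midpoint": 78, "ci90": (65, 88)},
        "methods": {"midpoint": 70, "ci90": (55, 82)},
        "relevance": {"midpoint": 85, "ci90": (75, 92)},
    },
    "journal_tier_prediction": {
        # where the evaluator predicts the paper will be published,
        # versus where they think it merits publication
        "will_be_published": {"midpoint": 3.5, "ci90": (2.5, 4.5)},
        "merits": {"midpoint": 4.0, "ci90": (3.0, 4.8)},
    },
}
```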
Of course, our systems will not solve all problems associated with reviews and evaluations: power dynamics, human weaknesses, and limited resources will remain. But we hope our approach moves in the right direction.
We want to reduce the time between when research is done (and a paper or other research format is released) and when other people (academics, policymakers, journalists, etc.) have a credible measure of "how much to believe the results" and "how useful this research is."
Here's how The Unjournal can do this.
Public evaluations and ratings: Rather than waiting years to see "what tier journal a paper lands in," the public can simply consult The Unjournal to find credible evaluations and ratings.
Journal-independent review allows work to be rated separately in different areas: theoretical rigor and innovation, empirical methods, policy relevance, and so on, with separate ratings in each category by experts in that area. As a researcher in the current system, I cannot both submit my paper and get public evaluation from (for example) JET and the Journal of Development Economics for a paper engaging both areas.
The Unjournal, and journal-independent evaluation, can enable this through
commissioning a range of evaluators with expertise in distinct areas, and making this expertise known in the public evaluations;
asking specifically for multiple dimensions of quantitative (and descriptive) feedback and ratings (see especially under our ); and
allowing authors to gain evaluation in particular areas in addition to the implicit value of publication in specific traditional field journals.
We acknowledge the potential for "information hazards" when research methods, tools, and results become more accessible. This is of particular concern in the context of direct physical and biological science research, particularly in (although there is a case that specific ). ML/AI research may also fall into this category. Despite these potential risks, we believe that the fields we plan to cover—detailed above—do not primarily present such concerns.
In cases where our model might be extended to high-risk research—such as new methodologies contributing to terrorism, biological warfare, or uncontrolled AI—the issue of accessibility becomes more complex. We recognize that increasing accessibility in these areas might potentially pose risks.
While we don't expect these concerns to be raised frequently about The Unjournal's activities, we remain committed to supporting thoughtful discussions and risk assessments around these issues.
See also .
Early evaluation: We will evaluate potentially impactful research soon after it is released (as a working paper, preprint, etc.). We will encourage authors to submit their work for our evaluation, and we will directly commission the evaluation of work from the highest-prestige authors.
We will pay evaluators, with further incentives for timeliness (as well as carefulness, thoroughness, communication, and insight). There is reason to believe that these incentives for promptness and other qualities are likely to work.
See
You can now ask questions of this GitBook using a chatbot: open the search bar and choose 'Ask GitBook'.
We organize and fund public, journal-independent feedback, rating, and evaluation of academic work. We focus on work that is highly relevant to global priorities, especially in economics, social science, and impact evaluation. We encourage better research by making it easier for researchers to get credible feedback. See here for more details.
No. The Unjournal does not charge any fees. In fact, unlike most traditional journals, we compensate evaluators for their time, and award prizes for strong work.
We are a nonprofit organization. We do not charge fees for access to our evaluations, and work to make them as open as possible. In future, we may consider sliding-scale fees for people submitting their work for Unjournal evaluation. If so, this would simply be to cover our costs and compensate evaluators. We are a nonprofit and will stay that way.
No. We do not publish research. We just commission public evaluation and rating of relevant research that is already publicly available. Having your work evaluated in The Unjournal does not prevent you from submitting it to any publication.
We have grants from philanthropists and organizations who are interested in our priority research areas. We hope that our work will provide enough value to justify further direct funding. We may also seek funding from governments and universities supporting the open-access agenda.
Sure! Please contact us at theunjournal@gmail.com.
FAQ for authors of research that The Unjournal selected for public evaluation, and for authors considering submitting their work to The Unjournal for evaluation
You can fill out this form to submit your work, or email contact@unjournal.org with questions.
We generally seek evaluators (aka 'reviewers') with research interests in your area and with complementary expertise. You, the author, can suggest areas you would like feedback on.
The evaluators write detailed and helpful evaluations, and submit them either "signed" or anonymously. Using our evaluation forms, they provide quantitative ratings on several dimensions, such as methods, relevance, and communication. They predict what journal tier the research will be published in, and what tier it should be published in. Here are the Guidelines for evaluators.
These evaluations and ratings are typically made public (see unjournal.pubpub.org), but you will have the right to respond before (or after) these are posted.
To consider your research we only need a link to a publicly hosted version of your work, ideally with a DOI. We will not "publish" your paper. The fact that we are handling your paper will not limit you in any way. You can submit it to any journal before, during, or after the process.
You can request a conditional embargo by emailing us at contact@unjournal.org, or via the submission form. Please explain what sort of embargo you are asking for, and why. By default, we would like Unjournal evaluations to be made public promptly. However, we may make exceptions in special circumstances, particularly for very early-career researchers whose career prospects are particularly vulnerable.
If there is a vulnerable early-career researcher on the authorship team, we may allow authors to "embargo" the publication of the evaluation until a later date. Evaluators (referees) will be informed of this. This date can be contingent, but it should not be indefinite.
For further details on this, and some examples, see 'Conditional embargos' & exceptions.
We may ask for some of the below, but these are mainly optional. We aim to make the process very light touch for authors.
A link to a non-paywalled, hosted version of your work (in ) which .
Responses to .
We may ask for a link to data and code, if possible. Note that our project is not principally about replication, and we are not insisting on this. However, sharing code, data, and materials is .
We also allow you to respond to evaluations, and we give your response its own DOI.
By submitting your research and engaging with public evaluation, you send a powerful public signal that you are confident in your work, open to constructive criticism, and motivated to seek the truth!
And there are further benefits:
Substantive feedback will help you improve your research. Such feedback is often very hard to get, especially for young scholars. It's hard to get anyone to read your paper; we can help!
Being evaluated by The Unjournal is a sign of impact. We select our research based on potential global relevance.
Ratings are markers of credibility for your work that could help your career.
The chance to publicly respond to criticism and correct misunderstandings.
Increasing the visibility of your work, which may lead to additional citations. We publicize our evaluations and the original papers on our social media feed, and occasionally in notebook and
A connection to the Open Science/Open Access and EA/Global Priorities communities. This may lead to grant opportunities, open up new ambitious projects, and attract strong PhD students to your research groups.
A reputation as an early adopter and innovator in open science.
Prizes: You may win an Impactful Research Prize (pilot) (publicity, reputation, and substantial financial prizes). These prizes are tied, in part, to your engagement with The Unjournal.
Undervalued or updated work: Your paper may have been "under-published". Perhaps there are a limited set of prestigious journals in your field. You now see ways you could improve the research. The Unjournal can help; we will also consider 'post-peer-review publication' evaluation.
Innovative formats: Journals typically require you to submit a LaTeX or MS Word file, and to use their fussy formats and styles. You may want to use tools like Quarto that integrate your code and data, allow you to present dynamic content, and enhance reproducibility. The Unjournal welcomes these tools, and we can evaluate research in virtually any format.
There are risks and rewards to any activity, of course. Here we consider some risks you may weigh against the benefits mentioned above.
Exclusivity
Public negative feedback
, and perhaps they might enforce these more strongly if they fear competition from The Unjournal.
However, The Unjournal is not exclusive. Having your paper reviewed and evaluated in The Unjournal will not limit your options; you can still submit your work to traditional journals.
Our evaluations are public. While there has been some movement towards open review, this is still not standard. Typically when you submit your paper, reviews are private. With The Unjournal, you might get public negative evaluations.
We think this is an acceptable risk. Most academics expect that opinions will differ about a piece of work, and everyone has received negative reviews. Thus, getting public feedback — in The Unjournal or elsewhere — should not particularly harm you or your research project.
Nonetheless, we are planning some exceptions for early-career researchers (see #can-i-ask-for-evaluations-to-be-private).
Unjournal evaluations should be seen as signals of research quality. Like all such signals, they are noisy. But submitting to The Unjournal shows you are confident in your work, and not afraid of public feedback.
Within our "Direct evaluation" track, The Unjournal directly chooses papers (from prominent archives, well-established researchers, etc.) to evaluate. We don't request authors' permission here.
As you can see in our evaluation workflow, on this track, we engage with authors especially at two points:
Informing the authors that the evaluation will take place, requesting they engage, and giving them the opportunity to request a conditional embargo or specific types of feedback.
Of particular interest: are we looking at the most recent version of the paper/project, or is there a further revised version we should be considering instead?
After the evaluations have been completed, the authors are given two weeks to respond, and have their response posted along with our 'package'. (Authors can also respond after we have posted the evaluations, and we will put their response in the same 'package', with a DOI etc.)
Once we receive unsolicited work from an author or authors, we keep it in our database and have our team decide on prioritization. If your paper is prioritized for evaluation, The Unjournal will notify you.
At present, we do not have a system to automatically share the status of author submissions with authors. We hope to put one in place. You can email us for clarification and updates.
You can still submit your work to any traditional journal.
The Unjournal aims to evaluate the most recent version of a paper. We reach out to authors to ensure we have the latest version at the start of the evaluation process.
If substantial updates are made to a paper during the evaluation process, authors are encouraged to share the updated version. We then inform our evaluators and ask if they wish to revise their comments.
If the evaluators can't or don't respond, we will note this and still link the newest version.
Authors are encouraged to respond to evaluations by highlighting major revisions made to their paper, especially those relevant to the evaluators' critiques. If authors are not ready to respond to evaluations, we can post a placeholder response indicating that responses and/or a revised version of the paper are forthcoming.
Re-evaluation: If authors and evaluators are willing to engage, The Unjournal is open to re-evaluating a revised version of a paper after publishing the evaluations of the initial version.
We share evaluations with the authors and give them a chance to respond before we make the evaluations public (and again afterward, at any point). We add these to our evaluation packages on PubPub. Evaluation manager's (public) reports and our further communications incorporate the paper, the evaluations, and the authors' responses.
Authors' responses could bring several benefits...
Personally: a chance to correct misconceptions and explain your approach and planned next steps. If you spot any clear errors or omissions, we give evaluators a chance to adjust their reports in response. Your response can also help others form a more accurate and positive view of the research, including the evaluators as well as future journal referees and grant funders.
For research users: an informed, balanced perspective on how to judge the work.
For other researchers: a better understanding of the methodological issues and approaches. This can serve to start a public dialogue and discussion that helps build the field and research agenda. Ultimately, we aim to facilitate a back-and-forth between authors, evaluators, and others.
Evaluations may raise substantive doubts and questions, make specific suggestions, and ask about (e.g.) data, context, or assumptions. There's no need to respond to every evaluator point. Respond only where you have something substantive to say: clarifying doubts, explaining the justification for your particular choices, and giving your thoughts on the suggestions (which will you incorporate, or not, and why?).
A well-written author response might have a clear narrative and group responses into themes.
Try to have a positive tone (no personal attacks etc.) but avoid formality, over-politeness, or flattery. Revise-and-resubmit responses at standard journals sometimes begin each paragraph with "thank you for this excellent suggestion". Feel free to skip this; we want to focus on the substance.
Examples: We've received several detailed and informative author responses, such as:
See links below accessing current policies of The Unjournal, accompanied by discussion and including templates for managers and editors.
People and organizations submit their own research or suggest research they believe may be high-impact. The Unjournal also directly monitors key sources of research and research agendas. Our team then systematically prioritizes this research for evaluation. See the link below for further details.
We choose an evaluation manager for each research paper or project. They commission and compensate expert evaluators to rate and discuss the research, following our evaluation template and . The original research authors are given a chance to publicly respond before we post these evaluations. See the link below for further details.
We make all of this evaluation work public on , along with an evaluation summary. We create DOIs for each element and submit this work to scholarly search engines. We also present a summary and analysis of our .
We outline some further details in the link below.
See for a full 'flowchart' map of our evaluation workflow
We are also piloting several initiatives that involve a different process. See:
As we are paying evaluators and have limited funding, we cannot evaluate every paper and project. Papers enter our database through:
submission by authors;
our own searches (e.g., searching syllabi, forums, working paper archives, and white papers); and
suggestions from other researchers, practitioners, and members of the public, and recommendations from . We have posted more detailed instructions for .
Our management team rates the suitability of each paper according to the criteria discussed below and .
We have followed a few procedures for finding and prioritizing papers and projects. In all cases, we require more than one member of our research-involved team (field specialist, managers, etc.) to support a paper before prioritizing it.
We are building a grounded systematic procedure with criteria and benchmarks. We also aim to give managers and field specialists some autonomy in prioritizing key papers and projects. As noted elsewhere, we are considering targets for particular research areas and sources.
See our basic process (as of Dec. 2023) for prioritizing work:
Through October 2022: For the papers or projects at the top of our list, we contacted the authors and asked if they wanted to engage, only pursuing evaluation if they agreed.
In our direct evaluation track, we inform authors but do not request permission. For this track, we have largely focused on working papers.
What are (some of) the authors’ main claims that are worth carefully evaluating? What aspects of the evidence, argumentation, methods, interpretation, etc., is the team unsure about? What particular data, code, proof, etc., would they like to see vetted? If it has already been peer-reviewed in some way, why do they think more review is needed?
How well has the author engaged with the process? Do they need particular convincing? Do they need help making their engagement with The Unjournal successful?
In deciding which papers or projects to send out to paid evaluators, we have considered the following issues. We aim to share notes on these for each paper or project with evaluators before they write their evaluations.
Consider: , field relevance, open science, authors’ engagement, data and reasoning transparency. In gauging this relevance, the team may consider the , but not too rigidly.
See for further discussion of prioritization, scope, and strategic and sustainability concerns.
Our is quantitative work that informs , especially in . We want to see better research leading to better outcomes in the real world (see our '').
See (earlier) discussion in public call/EA forum discussion .
To reach these goals, we need to select "the right research" for evaluation. We want to choose papers and projects that are highly relevant, methodologically promising, and that will benefit substantially from our evaluation work. We need to optimize how we select research so that our efforts remain mission-focused and useful. We also want to make our process transparent and fair. To do this, we are building a coherent set of criteria and goals, and a specific approach to guide this process. We explore several dimensions of these criteria below.
Management access only: General discussion of prioritization in Gdoc . Private discussion of specific papers in our Coda resource. We incorporate some of this discussion below.
When considering a piece of research to decide whether to commission it to be evaluated, we can start by looking at its general relevance as well as the value of evaluating and rating it.
Our prioritization of a paper for evaluation should not be seen as an assessment of its quality, nor of its 'vulnerability'. Furthermore, specific and less intensive.
Why is it relevant and worth engaging with?
We consider (and prioritize) the importance of the research to global priorities; its relevance to crucial decisions; the attention it is getting and the influence it is having; its direct relevance to the real world; and the potential value of the research for advancing other impactful work. We de-prioritize work that has already been credibly (publicly) evaluated. We also consider the fit of the research with our scope (social science, etc.), and the likelihood that we can commission experts to meaningfully evaluate it. As noted , some 'instrumental goals' (, , driving change, ...) also play a role in our choices.
Some features we value, that might raise the probability we consider a paper or project include the commitment and contribution to open science, the authors' engagement with our process, and the logic, communication, and transparent reasoning of the work. However, if a prominent research paper is within our scope and seems to have a strong potential for impact, we will prioritize it highly, whether or not it has these qualities.
2. Why does it need (more) evaluation, and what are some key issues and claims to vet?
We ask the people who suggest particular research, and experts in the field:
What are (some of) the authors’ key/important claims that are worth evaluating?
What aspects of the evidence, argumentation, methods, and interpretation, are you unsure about?
What particular data, code, proofs, and arguments would you like to see vetted? If it has already been peer-reviewed in some way, why do you think more review is needed?
As we weigh research to prioritize for evaluation, we need to balance directly having a positive impact against building our ability to have an impact in the future.
Importance
What is the direct impact potential of the research?
This is a massive question many have tried to address (see sketches and links below). We respond to uncertainty around this question in several ways, including:
Consulting a range of sources, not only EA-linked sources.
Scoping what other sorts of work are representative inputs to GP-relevant work.
Getting a selection of seminal GP publications, looking back to see what they cite, and categorizing these sources by journal/field/keywords/etc.
Neglectedness
Where is the current journal system failing GP-relevant work the most . . . in ways we can address?
Tractability
“Evaluability” of research: Where does the UJ approach yield the most insight or value of information?
Existing expertise: Where do we have field expertise on the UJ team? This will help us commission stronger evaluations.
"Feedback loops": Could this research influence concrete intervention choices? Does it predict near-term outcomes? If so, observing these choices and outcomes and getting feedback on the research and our evaluation can yield strong benefits.
Consideration/discussion: How much should we include research with indirect impact potential (theoretical, methodological, etc.)?
Moreover, we need to consider how the research evaluation might support the sustainability of The Unjournal and the broader project of open evaluation. We may need to strike a balance between work informing the priorities of various audiences, including:
Relevance to stakeholders and potential supporters
Clear connections to impact; measurability
Support from relevant academic communities
Support from open science
Consideration/discussion: What will drive further interest and funding?
Finally, we consider how our choices will increase the visibility and solidify the credibility of The Unjournal and open evaluations. We consider how our work may help drive positive institutional change. We aim to:
Interest and involve academics—and build the status of the project.
Commission evaluations that will be visibly useful and credible.
‘Benchmark traditional publication outcomes’, track our predictiveness and impact.
Have strong leverage over research "outcomes and rewards."
Increase public visibility and raise public interest.
Bring in supporters and participants.
Achieve substantial output in a reasonable time frame and with reasonable expense.
Maintain goodwill and a justified reputation for being fair and impartial.
We hope we have identified the important considerations above, but we may be missing key points. We continue to engage in discussion and seek feedback to hone and improve our processes and approaches.
An important part of making this a success will be to spread the word, get positive attention for the project, bring important players on board, build network externalities, and change the equilibrium. We are also looking for specific feedback and suggestions from "mainstream academics" in Economics, Psychology, and policy/program evaluation, as well as from the Open Science and EA communities.
See
This page is a work-in-progress
15 Dec 2023: Our main current process involves
Submitted and (internally/externally) suggested research
Prioritization ratings and discussion by Unjournal field specialists
Feedback from field specialist area teams
A final decision by the management team, guided by the above
See (also embedded below) for more details of the proposed process.
Put broadly, we need to consider how this research allows us to achieve our own goals in line with our . The research we select and evaluate should meaningfully drive positive change. One way we might see this process: “better research & more informative evaluation” → “better decision-making” → “better outcomes” for humanity and for non-human animals (i.e., the survival and flourishing of life and human civilization and values).
Below, we adapt the importance/neglectedness/tractability (ITN) framework (popular in effective altruism circles) to assess the direct impact of our evaluations.
EA and more or less adjacent: and overviews, .
Non-EA, e.g., .
We present and analyze the specifics surrounding our current evaluation data in
Below: An earlier template for considering and discussing the relevance of research. This was/is provided both for our own consideration and for sharing (in part?) with evaluators, to give . Think of these as bespoke evaluation notes for a .
As mentioned above, consider factors including importance to global priorities, relevance to the field, the commitment and contribution to open science, the authors’ engagement, and the transparency of data and reasoning. You may consider these criteria explicitly, but not too rigidly.
The project space is unjournal.org, which I'd love to share with the public ... to make it easy, it can be announced as "" as in "bitly dot com EA unjournal"... and everyone should let me know if they want editor access to the gitbook; also, I made a quick 'open comment space' in the Gdoc .
We generally refer to "evaluation" instead of "refereeing" because The Unjournal does not publish work; it only links, rates, and evaluates it.
For more information about what we are asking evaluators to do, see:
We follow standard procedures, considering complementary expertise, interest, and cross-citations, as well as checking for conflicts of interest. (See our internal guidelines for choosing evaluators.)
We aim to consult those who have opted-in to our evaluator pool first.
We favor evaluators with a track record of careful, in-depth, and insightful evaluation, while giving early-career researchers (ECRs) a chance to build such a record.
For several reasons... (for more discussion, see Why pay evaluators (reviewers)?)
It's equitable, especially for those not getting "service credit" for their refereeing work from their employer.
Paying evaluators can reduce biases and conflicts of interest that are arguably inherent to the traditional process, where reviewers work for free.
We need to use explicit incentives while The Unjournal grows.
We can use payment as an incentive for high-quality work, and to access a wider range of expertise, including people not interested in submitting their own work to The Unjournal.
Yes, we allow evaluators to choose whether they wish to remain anonymous or "sign" their evaluations. See Protecting anonymity.
To limit this concern:
You can choose to make your evaluation anonymous. You can make this decision from the outset (this is preferable) or later, after you've completed your review.
Your evaluation will be shared with the authors before it is posted, and they will be given two weeks to respond before we post. If they identify what they believe are major misstatements in your evaluation, we will give you the chance to correct these.
It is well-known that referee reports and evaluations are subject to mistakes. We expect most people who read your evaluation will take this into account.
You can add an addendum or revision to your evaluation later on (see below).
We will put your evaluation on PubPub and give it a DOI. It cannot be redacted in the sense that this initial version will remain on the internet in some format. But you can add an addendum to the document later, which we will post and link, and the DOI can be adjusted to point to the revised version.
See the For research authors FAQ as well as the "Direct evaluation" track.
We have two main ways that papers and research projects enter the Unjournal process:
Authors submit their work; if we believe the work is relevant, we assign evaluators, and so on.
We select research that seems potentially influential, impactful, and relevant for evaluation. In some cases, we request the authors' permission before sending out the papers for evaluation. In other cases (such as when senior authors release papers in the prestigious NBER and CEPR series), we contact the authors and request their engagement before proceeding, but we don't ask for permission.
For either track, authors are invited to be involved in several ways:
Authors are informed of the process and given an opportunity to identify particular concerns, request an embargo, etc.
Evaluators can be put in touch with authors (anonymously) for clarification questions.
Authors are given a two-week window to respond to the evaluations (this response is published as well) before the evaluations are made public. They can also respond after the evaluations are released.
If you are writing a signed evaluation, you can share it or link it on your own pages. Please wait to do this until after we have given the author a chance to respond and posted the package.
Otherwise, if you are remaining anonymous, please do not disclose your connection to this report.
Going forward:
We may later invite you to . . .
. . . and to help us judge prizes (e.g., the Impactful Research Prize (pilot)).
We may ask if you want to be involved in replication exercises (e.g., through the Institute for Replication).
As a general principle, we hope and intend always to see that you are fairly compensated for your time and effort.
The evaluations provide at least three types of value, helping advance several paths in our theory of change:
For readers and users: Unjournal evaluations assess the reliability and usefulness of the paper along several dimensions—and make this public, so other researchers and policymakers can judge when and how to rely on it.
For careers and improving research: Evaluations provide metrics of quality. In the medium term, these should provide increased and accelerated career value, improving the research process. We aim to build metrics that are credibly comparable to the current "tier" of journal a paper is published in. But we aim to do this better in several ways:
More quickly, more reliably, more transparently, and without the unproductive overhead of dealing with journals (see 'reshaping evaluation')
Allowing flexible, transparent formats (such as dynamic documents), thus improving the research process, benefiting research careers, and hopefully improving the research itself in impactful areas.
Feedback and suggestions for authors: We expect that evaluators will provide feedback that is relevant to the authors, to help them make the paper better.
See our guidelines for evaluators.
We still want your evaluation and ratings. Some things to consider as an evaluator in this situation.
A paper or project is not simply a good to be judged on a single scale. How useful is it, and to whom or for what? We'd like you to discuss its value in relation to previous work, its implications, what it suggests for research and practice, etc.
Even if the paper is great...
Would you accept it in the “top journal in economics”? If not, why not?
Would you hire someone based on this paper?
Would you fund a major intervention (as a government policymaker, major philanthropist, etc.) based on this paper alone? If not, why not?
What are the most important and informative results of the paper?
Can you quantify your confidence in these 'crucial' results, and in their replicability and generalizability to other settings? Can you state your probabilistic bounds (confidence or credible intervals) on the quantitative results (e.g., 80% bounds on QALYs, DALYs, or WELLBYs per $1000)?
Would any other robustness checks or further work have the potential to increase your confidence (narrow your belief bounds) in this result? Which?
Do the authors make it easy to reproduce the statistical (or other) results of the paper from shared data? Could they do more in this respect?
Communication: Did you understand all of the paper? Was it easy to read? Are there any parts that could have been better explained?
Is it communicated in a way that would be useful to policymakers? To other researchers in this field, or in the discipline more generally?
Research can be "submitted" by authors (here) or "suggested" by others. For a walk-through on suggesting research, see this video example.
There are two main paths for making suggestions: through our survey form or through Airtable.
Anyone can suggest research using the survey form at https://bit.ly/ujsuggestr. (Note: if you want to submit your own research, go to bit.ly/ujsubmitr.) Please follow the steps below:
Begin by reviewing The Unjournal's guidelines on What research to target to get a sense of the research we cover and our priorities. Look for high-quality research that 1) falls within our focus areas and 2) would benefit from (further) evaluation.
When in doubt, we encourage you to suggest the research anyway.
Navigate to The Unjournal's Suggest Research survey form. Most of the fields are optional. The fields ask for the following information:
Who you are: Let us know who is making the suggestion (you can also choose to stay anonymous).
If you leave your contact information, you will be eligible for financial "bounties" for strong suggestions.
If you are already a member of The Unjournal's team, additional fields will appear for you to link your suggestion to your profile in the Unjournal's database.
Research Label: Provide a short, descriptive label for the research you are suggesting. This helps The Unjournal quickly identify the topic at a glance.
Research Importance: Explain why the research is important, its potential impact, and any specific areas that require thorough evaluation.
Research Link: Include a direct URL to the research paper. The Unjournal prefers research that is publicly hosted, such as in a working paper archive or on a personal website.
Peer Review Status: Inform about the peer review status of the research, whether it's unpublished, published without clear peer review, or published in a peer-reviewed journal.
"Rate the relevance": This represents your best-guess at how relevant this work is for The Unjournal to evaluate, as a percentile relative to other work we are considering.
Research Classification: Choose categories that best describe the research. This helps The Unjournal sort and prioritize suggestions.
Field of Interest: Select the outcome or field of interest that the research addresses, such as global health in low-income countries.
Complete all the required fields and submit your suggestion. The Unjournal team will review your submission and consider it for future evaluation. You can reach out to us at contact@unjournal.org with any questions or concerns.
People on our team may find it more useful to suggest research to The Unjournal directly via Airtable. See this document for a guide. (Please request permission to access this explanation.)
Aside on setting the prioritization ratings: In making your subjective prioritization rating, please consider: “What percentile do you think this paper (or project) is relative to the others in our database, in terms of relevance for The Unjournal to evaluate?” (Note: this is a redefinition; we previously considered these as probabilities.) We roughly plan to commission the evaluation of about 1 in 5 papers in the database, the ‘top 20%’ according to these percentiles. Please don’t consider the "publication status" or the "author's propensity to engage" in this rating. We will consider those as separate criteria.
Please don’t enter only the papers you think are ‘very relevant’; please enter all research that you have spent any substantial time considering (more than a couple of minutes). If we all do this, our percentile ratings should be approximately uniformly distributed, i.e., evenly spread over the 1-100% range.
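As a rough illustration of this intent (a hedged sketch with made-up paper names and scores, not The Unjournal's actual tooling), the snippet below converts a pool of raw relevance judgments into percentile ratings that are evenly spread by construction, with roughly the top 20% flagged for commissioning.

```python
# Minimal sketch: turning raw relevance judgments into percentile ratings.
# The paper names and scores below are hypothetical.
raw_scores = {"paper_A": 7.2, "paper_B": 3.1, "paper_C": 9.0, "paper_D": 5.5, "paper_E": 6.4}

ranked = sorted(raw_scores, key=raw_scores.get)  # from least to most relevant
n = len(ranked)

# Each paper's percentile: roughly the share of the pool it is judged more relevant than.
percentiles = {paper: round(100 * (i + 1) / n) for i, paper in enumerate(ranked)}
print(percentiles)  # evenly spread over the range by construction

# Roughly the 'top 20%' of the pool would be considered for commissioned evaluation.
to_commission = [p for p, pct in percentiles.items() if pct > 80]
print(to_commission)
```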
You can request a conditional embargo by emailing us at contact@unjournal.org, or via the submission/response form. Please explain what sort of embargo you are asking for, and why. By default, we'd like Unjournal evaluations to be made public promptly.
However, we may make exceptions in special circumstances, particularly:
for very early-career researchers who are not clearly #high-professional-status-less-career-sensitive,
where the research is not obviously already influencing a substantial amount of funding in impact-relevant areas, or substantially influencing policy considerations
If there is an early-career researcher on the authorship team, we may allow authors to "embargo" the publication of the evaluation until a later date. Evaluators (referees) will be informed of this. This date can be contingent, but it should not be indefinite.
For example, we might grant an embargo that lasts until after a PhD/postdoc’s upcoming job market or until after publication in a mainstream journal, with a hard maximum of 14 months. (Of course, embargoes can be ended early at the request of the authors.)
In exceptional circumstances we may consider granting a ""
Note: the above are all exceptions to our regular rules; they are examples of embargoes we might or might not agree to.
David Reinstein, Nov 2024: Over the last six months we have considered and evaluated a small amount of work under this “Applied & Policy Stream”. We plan to continue this stream for the foreseeable future.
Much of the most impactful research is not aimed at academic audiences and may never be submitted to academic journals. It is written in formats that are very different from traditional academic outputs, and cannot be easily judged by academics using the same standards. Nonetheless, this work may use technical approaches developed in academia, making it important to gain expert feedback and evaluation.
The Unjournal can help here. However, to avoid confusion, we want to make this clearly distinct from our main agenda, which focuses on impactful academically-aimed research.
Our “Applied & Policy Stream” will be clearly labeled as separate from our main stream. This may constitute roughly 10-15% of the work that we cover. Below, we refer to this as the “applied stream” for brevity.
Our considerations for prioritizing this work are generally the same as for our academic stream – is it in the fields that we are focused on, using approaches that enable meaningful evaluation and rating? Is it already having impact (e.g., influencing grant funding in globally-important areas)? Does it have the potential for impact, and if so, is it high-quality enough that we should consider boosting its signal?
We will particularly prioritize policy and applied work that uses technical methods that need evaluation by research experts, often academics.
Examples include a range of applied research from EA/GP/LT-linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, Faunalytics, etc., as well as EA-adjacent organizations and relevant government white papers.
Ratings/metrics: As in the academic stream, this work will be evaluated for its credibility, usefulness, communication/logic, etc. However, we are not seeking to have this work assessed by the standards of academia in a way that yields a comparison to traditional journal tiers. Evaluators: please ignore those parts of our interface; if you are unsure whether something is relevant, feel free to ask.
Evaluator selection, number, pay: Generally we want to continue to select academic research experts or non-academic researchers with strong academic and methodological backgrounds to do these evaluations. This brings expert attention, particularly from academia, to work that is not normally scrutinized by such experts.
The compensation may be flexible as well; in some cases the work may be more involved than for the academic stream and in some cases less involved. As a starting point we will begin by offering the same compensation as for the academic stream.
Careful flagging and signposting: To preserve the reputation of our academic-stream evaluations we need to make it clear, wherever people might see this work, that it is not being evaluated by the same standards as the academic stream and doesn't “count” towards those metrics.
This research is more likely to fall into the category of #direct-evaluation-of-impactful-work, "already influencing a substantial amount of funding in impact-relevant areas, or substantially influencing policy considerations".
If the research itself is being funded by a global-impact focused foundation or donor, this will also constitute a strong prima facie reason to commission an evaluation (without requiring the authors' consent). See this post on the EA Forum.
This discussion is a work-in-progress
We are targeting global priorities-relevant research...
With the potential for impact, and with the potential for Unjournal evaluations to have an impact (see our high-level considerations and our prioritization ratings discussions).
Our focus is quantitative work that informs global priorities (see the linked discussion), in line with our Theory of Change.
We give a data presentation of the work we have already covered and the work we are prioritizing here, which will be continually updated.
But what does this mean in practice? What specific research fields, topics, and approaches are we likely to classify as 'relevant to evaluate'?
We give some lists and annotated examples below.
As of January 2024 The Unjournal focuses on ...
Research where the fundamental question being investigated involves human behavior and beliefs and the consequences of these. This may involve markets, production processes, economic constraints, social interactions, technology, the 'market of ideas', individual psychology, government processes, and more. However, the main research question should not revolve around issues outside of human behavior, such as physical science, biology, or computer science and engineering. These areas are out of our scope (at least for now).
Research that is fundamentally quantitative and uses rigorous formal or empirical methods. It will generally involve or consider measurable inputs, choices, and outcomes; specific categorical or quantitative questions; analytical and mathematical reasoning; hypothesis testing and/or belief updating, etc.
Research that targets and addresses a single specific question or goal, or a small cluster of these. It should not mainly be a broad discussion and overview of other research or conceptual issues.
This generally involves the following academic fields:
Economics
Applied Statistics (and some other applied math)
Psychology
Political Science
Other quantitative social science fields (perhaps Sociology)
Applied "business school" fields: finance, accounting, operations, etc.
Applied "policy and impact evaluation" fields
Life science/medicine where it targets human behavior/social science
These discipline/field boundaries are not strict; they may adapt as we grow
These were chosen in light of two main factors:
Our founder and our team are most comfortable assessing and managing the consideration of research in these areas.
These fields seem to be particularly amenable to, and able to benefit from our journal-independent evaluation approach. Other fields, such as biology, are already being 'served' by strong initiatives like Peer Communities In.
To do: We will give and explain some examples here
The Unjournal's mission is to prioritize
research with the strongest potential for a positive impact on global welfare
where public evaluation of this research will have the greatest impact
Given this broad goal, we consider research into any cause, topic, or outcome, as long as the research involves fields, methods, and approaches within our domain (see above), and as long as the work meets our other requirements (e.g., research must be publicly shared without a paywall).
While we don't have rigid boundaries, we are nonetheless focusing on certain areas:
(As of Jan. 2024) we have mainly commissioned evaluations of work involving development economics and health-related outcomes and interventions in low-and middle-income countries.
As well as research involving
Environmental economics, conservation, harm to human health
The social impact of AI and emerging technologies
Economics, welfare, and governance
Catastrophic risks; predicting and responding to these risks
The economics of innovation; scientific progress and meta-science
The economics of health, happiness, and wellbeing
We are currently prioritizing further work involving
Psychology, behavioral science, and attitudes: the spread of misinformation; other-regarding preferences and behavior; moral circles
Animal welfare: markets, attitudes
Methodological work informing high-impact research (e.g., methods for impact evaluation)
We are also considering prioritizing work involving
AI governance and safety
Quantitative political science (voting, lobbying, attitudes)
Political risks (including authoritarian governments and war and conflict)
Institutional decisionmaking and policymaking
Long-term growth and trends; the long-term future of civilization; forecasting
To do: We will give and explain some examples here
As noted in Process: prioritizing research, we ask people who suggest research to provide a numerical 0-100 prioritization rating.
We also ask people within our team to act as 'assessors', giving second and third opinions. This 'prioritization rating' is one of the criteria we use to determine whether to commission research to be evaluated (along with author engagement, publication status, our capacity and expertise, etc.). Again, see the previous page for the current process.
We are working on a set of notes on this, fleshing this out and giving specific examples. At the moment this is available to members of our team only (ask for access to "Guidelines for prioritization ratings (internal)"). We aim to share a version of this publicly once it converges, and once we can get rid of arbitrary sensitive examples.
I. This is not the evaluation itself. It is not an evaluation of the paper's merit per se:
Influential work, and prestigious work in influential areas, may be highly prioritized regardless of its rigor and quality.
The prioritization rating might consider quality for work that seems potentially impactful but does not seem particularly prestigious or influential. Here, aspects like writing clarity and methodological rigor might put it 'over the bar'. However, even here these will tend to be based on rapid and shallow assessments, and should not be seen as meaningful evaluations of research merit.
II. These ratings will be considered along with the discussion by the field team and the management. Thus, it is helpful if you give a justification and explanation for your stated rating.
Define/consider the following ‘attributes’ of a piece of research:
Global decision-relevance/VOI: Is this research decision-relevant to high-value choices and considerations that are important for global priorities and global welfare?
Prestige/prominence: Is the research already prominent/valued (esp. in academia), highly cited, reported on, etc?
Influence: Is the work already influencing important real-world decisions and considerations?
Obviously, these are not binary factors; there is a continuum for each. But for the sake of illustration, consider the following flowcharts.
If the flowcharts do not render, please refresh your browser. You may have to refresh twice.
"Fully baked": Sometimes prominent researchers release work (e.g., on NBER) that is not particularly rigorous or involved, which may have been put together quickly. This might be research that links to a conference they are presenting at, to their teaching, or to specific funding or consulting. It may be survey/summary work, perhaps meant for less technical audiences. The Unjournal tends not to prioritize such work, or at least not consider it in the same "prestigious" basket (although there will be exceptions). In the flowchart above, we contrast this with their "fully-baked" work.
Decision-relevant, prestigious work: Suppose the research is both ‘globally decision-relevant’ and prominent. Here, if the research is in our domain, we probably want to have it publicly evaluated. This is basically the case regardless of its apparent methodological strength. This is particularly true if it has been recently made public (as a working paper), if it has not yet been published in a highly-respected peer-reviewed journal, and if there are non-straightforward methodological issues involved.
Prestigious work that seems less globally-relevant: We generally will not prioritize this work unless it adds to our mission in other ways (see, e.g., our ‘sustainability’ and ‘credibility’ goals here). In particular we will prioritize such research more if:
It is presented in innovative, transparent formats (e.g., dynamic documents/open notebooks, sharing code and data)
The research indirectly supports more globally-relevant research, e.g., through…
Providing methodological tools that are relevant to that ‘higher-value’ work
Drawing attention to neglected high-priority research fields (e.g., animal welfare)
(If the flowchart below does not render, please refresh your browser; you may have to refresh twice.)
Decision-relevant, influential (but less prestigious) work: E.g., suppose this research might be cited by a major philanthropic organization as guiding its decision-making, but the researchers may not have strong academic credentials or a track record. Again, if this research is in our domain, we probably want to have it publicly evaluated. However, depending on the rigor of the work and the way it is written, we may want to explicitly class this in our ‘non-academic/policy’ stream.
Decision-relevant, less prestigious, less-influential work: What about less-prominent work with fewer academic accolades that is not yet having an influence, but nonetheless seems to be globally decision-relevant? Here, our evaluations seem less likely to have an influence unless the work is potentially strong, in which case our evaluations, ratings, and feedback could boost potentially valuable neglected work. Here, our prioritization rating might focus more on our initial impressions of things like…
Methodological strength (this is a big one!)
Rigorous logic and communication
Open science and robust approaches
Engagement with real-world policy considerations
Again: the prioritization process is not meant to be an evaluation of the work in itself. It’s OK to do this in a fairly shallow way.
In future, we may want to put together a loose set of methodological ‘suggestive guidelines’ for work in different fields and areas, without being too rigid or prescriptive. (To do: we can draw from some existing frameworks for this [ref].)
The Unjournal commissions public expert evaluation of open-access research: to make rigorous research more impactful and impactful research more rigorous. We prioritize work that concretely informs global priorities, focusing on economics, quantitative social science, and policy.
Today's research evaluation process is out-of-date. Traditional journals capture rents while discouraging innovative formats. The current publish-or-perish incentive diverts researcher energy away from their core work and towards playing strategic journal-submission games.
The Unjournal provides open, rigorous evaluation, focusing on credibility, robustness, transparency, communication, and usefulness. We make it easier for researchers to get feedback and credible assessment of their work, so they can focus on doing better and more useful research. We publish and disseminate these evaluation reports and benchmarked ratings, often far earlier than traditional journals. This helps policymakers understand what research to trust and how to use it, and it helps other researchers and students learn from the critique and discussion.
Click on the cards below to find out more about our mission, organizational structure, and ways to collaborate, or ask the chatbot for answers to your questions.
In addition to soliciting research submissions by authors, we have a process for sourcing and prioritizing unsubmitted research for evaluation. For some of this research we ask for author engagement but do not require their permission.
Aside: in the future, we hope to work directly with working paper series, associations, and research groups to get their approval and engagement with Unjournal evaluations. We hope that having a large share of papers in your series evaluated will serve as a measure of confidence in your research quality. If you are involved in such a group and are interested in this, please reach out to us (contact@unjournal.org).
Dec 7, 2024: We have updated some of our rules and guidelines on this page. These will be applied going forward (in future contacts with authors) but not retroactively.
All NBER working papers are generally eligible, with some exceptions.
We treat these on a case-by-case basis and use discretion. All CEPR members are reasonably secure and successful, but their co-authors might not be, especially if these co-authors are PhD students they are supervising.
In some areas and fields (e.g., psychology, animal product markets) the publication process is relatively rapid or it may fail to engage general expertise. In general, all papers that are already published in peer-reviewed journals are eligible for our direct track.
These are eligible (without author permission) if all authors/all lead authors "have high professional status" or are otherwise less career-sensitive to the consequences of this evaluation.
We define this (at least for economics) as:
having a tenured or ‘long-term’ position at a well-known, respected university or other research institution, or
having a tenure-track position at a top university (e.g., top-20 globally by some credible ranking) and having published one or more papers in a "top-five-equivalent" journal, or
clearly not pursuing an academic career (e.g., the "partner at the aid agency running the trial").
On the other hand, if one or more authors is a PhD student or an untenured academic outside a "top global program," then we will ask for permission and potentially offer an embargo.
An exception: if the PhD student or untenured academic is otherwise clearly extremely high-performing by conventional metrics (e.g., an REStud "tourist" or someone with multiple published papers in top-five journals), the paper might be considered eligible for direct evaluation.
See also #direct-evaluation-of-impactful-work
Under review/R&R at a journal? The fact that a paper is under submission or in "revise and resubmit" at a top journal does not preclude us from evaluating it. In some cases it may be particularly important and helpful to evaluate work at this stage. But we'd like to be aware of this, as it can weigh into our considerations and timing.
We will also evaluate work directly, without requiring author permission, where it is clear that this research is already influencing a substantial amount of funding in impact-relevant areas, or substantially influencing policy considerations. Much of this work will be evaluated as part of our "Applied and Policy" Track.
'Dynamic documents' are projects or papers developed using tools such as R Markdown or Jupyter notebooks (the two most prominent tools).
The salient features and benefits of this approach include:
Integrated data analysis and reporting means the data analysis (as well as math/simulations) is done and reported in the same space where the results and discussion are presented. This is made possible by folding or concealing 'code blocks'.
Transparent reporting means you can track exactly what is being reported and how it was constructed:
Making the process a lot less error-prone
Helping readers understand it better (see 'explorable explanations')
Helping replicators and future researchers build on it
Other advantages of these formats (over PDFs for example) include:
Convenient ‘folding blocks’
Margin comments and links
Integrating interactive tools
Some quick examples from my own work in progress (but other people have done it much better)
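As a generic illustration (a minimal sketch of our own, not one of the linked examples), here is what a Quarto-style dynamic document with a Python cell might look like; the cell option shown is what allows the code block to be concealed or folded while the reported numbers stay tied to the analysis that produced them.

````
---
title: "Minimal dynamic-document sketch"
format: html
---

The mean effect reported below is computed in the concealed code cell,
so the prose and the number cannot drift apart.

```{python}
#| echo: false  # conceal this code block (or use `code-fold: true` to fold it instead)
outcomes = [2.1, 2.4, 1.9, 2.6]  # hypothetical study outcomes
mean_effect = sum(outcomes) / len(outcomes)
print(f"Mean estimated effect: {mean_effect:.2f}")
```
````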
See sections below
For prospective evaluators: An overview of what we are asking; payment and recognition details.
Guidelines for evaluators: The Unjournal's evaluation guidelines, considering our priorities and criteria, the metrics we ask for, and how these are considered.
Other sections and subsections provide further resources, consider future initiatives, and discuss our rationales.
31 Aug 2023: Our present approach is a "working solution" involving some ad-hoc and intuitive choices. We are re-evaluating the metrics we are asking for, as well as the interface and framing. We are gathering discussion and incorporating feedback from our pilot evaluators and authors. We're also talking to people with relevant expertise and considering past practice and other ongoing initiatives. We plan to consolidate that discussion and our consensus and/or conclusions into the present (GitBook) site.
Ultimately, we're trying to replace the question "what tier of journal did a paper get into?" with "how highly was the paper rated?" We believe this is a more valuable metric: it can be more fine-grained, it should be less prone to gaming, and it aims to reduce the randomness that comes from things like 'the availability of journal space in a particular field'. (See also our discussion of 'reshaping evaluation'.)
To get to this point, we need to have academia and stakeholders see our evaluations as meaningful. We want the evaluations to begin to have some value that is measurable in the way “publication in the AER” is seen to have value.
While there are some ongoing efforts towards journal-independent evaluation, these are limited. Typically, they either use simple tick-boxes (like "this paper used correct statistical methods: yes/no") or they enable descriptive evaluation without an overall rating. As we are not a journal and we don't accept or reject research, we need another way of assigning value. We are working to determine the best way of doing this through quantitative ratings. We hope to be able to benchmark our evaluations against "traditional" publication outcomes. Thus, we think it is important to ask for both an overall quality rating and a journal ranking tier prediction.
In addition to the overall assessment, we think it will be valuable to have the papers rated according to several categories. This could be particularly helpful to practitioners who may care about some concerns more than others. It also can be useful to future researchers who might want to focus on reading papers with particular strengths. It could be useful in meta-analyses, as certain characteristics of papers could be weighed more heavily. We think the use of categories might also be useful to authors and evaluators themselves. It can help them get a sense of what we think research priorities should be, and thus help them consider an overall rating.
However, these ideas have been largely ad-hoc and based on the impressions of our management team (a particular set of mainly economists and psychologists). The process is still being developed. Any feedback you have is welcome. For example, are we overemphasizing certain aspects? Are we excluding some important categories?
We are also researching other frameworks, templates, and past practice; we hope to draw from validated, theoretically grounded projects.
In eliciting expert judgment, it is helpful to differentiate the level of confidence in predictions and recommendations. We want to know not only what you believe, but how strongly held your beliefs are. If you are less certain in one area, we should weigh the information you provide there less heavily in updating our beliefs. This may also be particularly useful for practitioners. Obviously, there are challenges to any approach. Even experts in a quantitative field may struggle to convey their own uncertainty, and they may be inherently "poorly calibrated" (see discussions of, and tools for, calibration). Some people may often be "confidently wrong": they state very narrow "credible intervals" when the truth—where measurable—routinely falls outside these boundaries. People with greater discrimination may sometimes be underconfident. One would want to consider and adjust for this. As a side benefit, this may be interesting for meta-research, particularly as The Unjournal grows. We see 'quantifying one's own uncertainty' as a good exercise for academics (and everyone) to engage in.
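As a hedged illustration of what 'checking calibration' can mean in practice (the intervals and realized values below are hypothetical, not Unjournal data), the sketch computes how often an evaluator's stated 90% intervals actually contain the realized values.

```python
# Minimal sketch: empirical coverage of an evaluator's stated 90% intervals.
intervals = [(0.30, 0.46), (0.10, 0.35), (0.40, 0.70)]  # hypothetical stated 90% intervals
realized = [0.50, 0.20, 0.65]                           # hypothetical true values, once measurable

hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, realized))
coverage = hits / len(realized)

# A well-calibrated evaluator's 90% intervals should contain the truth about 90% of the time;
# much lower coverage suggests overconfidence, much higher suggests underconfidence.
print(f"Empirical coverage of stated 90% intervals: {coverage:.0%}")
```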
We had included the note:
We give the previous weighting scheme in a fold below for reference, particularly for those reading evaluations done before October 2023.
We had also noted, elsewhere on that page:
[FROM PREVIOUS GUIDELINES:]
You may feel comfortable giving your "90% confidence interval," or you may prefer to give a "descriptive rating" of your confidence (from "extremely confident" to "not confident").
[Previous...] Remember, we would like you to give a 90% CI or a confidence rating (1–5 dots), but not both.
And, for the 'journal tier' scale:
[Previous guidelines]: The description folded below focuses on the "Overall Assessment." Please try to use a similar scale when evaluating the category metrics.
We have removed suggested weightings for each of these categories; we discuss the rationale for this at some length.
Evaluators working before October 2023 saw a previous version of the table, which you can see in the fold below.
The previous guidelines are preserved below for reference; these may be useful in considering evaluations provided pre-2024.
Suggested weighting: 0.
As noted above, we give suggested weights (0–5) to suggest the importance of each category rating to your overall assessment, given The Unjournal's priorities.
The weightings were presented once again along with each description in that section.
Quantify how certain you are about this rating, either by giving a 90% confidence/credible interval or by using our descriptive confidence scale.
A very strong, solid, and relevant piece of work. It may have minor flaws or limitations, but overall it is very high-quality, meeting the standards of well-respected research professionals in this field.
This page explains the value of the metrics we are seeking from evaluators.
The calibration training app from Clearer Thinking is fairly helpful and fun for practicing and checking how good you are at expressing your uncertainty. It requires creating an account, but that doesn't take long. The 'Confidence Intervals' training seems particularly relevant for our purposes.
Example of the previous ratings table: the overall assessment row (holistic, most important!) carried an illustrative 90% interval of 39-52 and a suggested weight of 5; the remaining category rows carried illustrative intervals (47-54, 45-55, 10-35, 40-70, 30-46, 21-65) and suggested weights (5, 4, 3, 2, 0**).
Thanks for your interest in evaluating research for The Unjournal!
The Unjournal is a nonprofit organization started in mid-2022. We commission experts to publicly evaluate and rate research. Read more about us here.
Write an evaluation of a specific research paper or project: essentially a standard, high-quality referee report.
Give quantitative ratings and predictions about the research by filling in a structured form.
Answer a short questionnaire about your background and our processes.
See Guidelines for Evaluators for further details and guidance.
Why use your valuable time writing an Unjournal evaluation? There are several reasons: helping high-impact research users, supporting open science and open access, and getting recognition and financial compensation.
The Unjournal's goal is to make impactful research more rigorous, and rigorous research more impactful, while supporting open access and open science. We encourage better research by making it easier for researchers to get feedback and credible ratings. We evaluate research in high-impact areas that make a difference to global welfare. Your evaluation will:
Help authors improve their research, by giving early, high-quality feedback.
Help improve science by providing open-access, prompt, structured, public evaluations of impactful research.
Inform funding bodies and meta-scientists as we build a database of research quality, strengths and weaknesses in different dimensions. Help research users learn what research to trust, when, and how.
For more on our scientific mission, see here.
Your evaluation will be made public and given a DOI. You have the option to be identified as the author of this evaluation or to remain anonymous, as you prefer.
You will be compensated for providing a prompt and complete evaluation and feedback ($100-$300 base + $100 'promptness bonus'), in line with our expected standards.
Note, Aug. 2024: we're adjusting the base compensation to reward strong work and experience.
$100 + $100 for first-time evaluators
$300 + $100 for return Unjournal evaluators and those with previous strong public review experience. We will be integrating other incentives and prizes into this, and are committed to $450 in average compensation per evaluation, including prizes.
You will also be eligible for monetary prizes for "most useful and informative evaluation," plus other bonuses. We currently (Feb. 2024) set aside an additional $150 per evaluation for incentives, bonuses, and prizes.
See also "submitting claims and expenses"
If you have been invited to be an evaluator and want to proceed, simply respond to the email invitation that we have sent you. You will then be sent a link to our evaluation form.
To sign up for our evaluator pool, see 'how to get involved'
To learn more about our evaluation process, see Guidelines for evaluators. If you are doing an evaluation, we highly recommend you read these guidelines carefully.
Previous/less emphasized: Sciety group: curating evaluations and papers
Evaluations and author responses are given DOIs and enter the bibliometric record.
Future consideration:
"publication tier" of authors' responses as a workaround to encode aggregated evaluation
Hypothes.is annotation of hosted and linked papers and projects (aiming to integrate; see hypothes.is for collaborative annotation)
Sharing evaluation data on public Github repo (see data reporting here)
We aim to elicit expert judgment from Unjournal evaluators efficiently and precisely. We aim to communicate this quantitative information concisely and usefully, in ways that will inform policymakers, philanthropists, and future researchers.
In the short run (in our pilot phase), we will attempt to present simple but reasonable aggregations, such as simple averages of midpoints and confidence-interval bounds. However, going forward, we are consulting and incorporating the burgeoning academic literature on "aggregating expert opinion." (See, e.g., Hemming et al, 2017; Hanea et al, 2021; McAndrew et al, 2020; Marcoci et al, 2022.)
We are working on this in our public data presentation (Quarto notebook) here.
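As a minimal sketch of such a "simple but reasonable aggregation" (our own illustration with invented numbers, not The Unjournal's production pipeline), consider averaging midpoints and interval bounds across evaluators for a single category:

```python
# Minimal sketch: simple averaging of midpoints and 90% interval bounds across evaluators.
# Hypothetical ratings from three evaluators for one category, on a 0-100 percentile scale:
# (midpoint, lower bound, upper bound).
ratings = [
    (62, 50, 75),
    (70, 55, 85),
    (55, 40, 68),
]

n = len(ratings)
mean_mid = sum(r[0] for r in ratings) / n
mean_lo = sum(r[1] for r in ratings) / n
mean_hi = sum(r[2] for r in ratings) / n

print(f"Aggregated midpoint: {mean_mid:.1f}")
print(f"Aggregated 90% interval: [{mean_lo:.1f}, {mean_hi:.1f}]")
# More sophisticated aggregation (e.g., performance-weighted pooling from the
# expert-elicitation literature cited above) could replace these simple averages.
```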
We are considering...
Syntheses of evaluations and author feedback
Input to prediction markets, replication projects, etc.
Less technical summaries and policy-relevant summaries, e.g., for the EA Forum, Asterisk magazine, or mainstream long-form outlets
This page describes The Unjournal's evaluation guidelines, considering our priorities and criteria, the metrics we ask for, and how these are considered.
These guidelines apply to the evaluation forms in Coda here (academic stream) and here (applied stream).
Please see For prospective evaluators for an overview of the evaluation process, as well as details on compensation, public recognition, and more.
Write an evaluation of the target paper or project, similar to a standard, high-quality referee report. Please identify the paper's main claims and carefully assess their validity, leveraging your own background and expertise.
Give quantitative ratings and predictions using the structured form.
Answer a short questionnaire about your background and our processes.
In writing your evaluation and providing ratings, please consider the following.
In many ways, the written part of the evaluation should be similar to a report an academic would write for a traditional high-prestige journal (e.g., see some 'conventional guidelines' here). Most fundamentally, we want you to use your expertise to critically assess the main claims made by the authors. Are the claims well-supported? Are the assumptions believable? Are the methods appropriate and well-executed? Explain why or why not.
However, we'd also like you to pay some consideration to our priorities, including
Advancing our knowledge and supporting practitioners
Justification, reasonableness, validity, and robustness of methods
Logic and communication, intellectual modesty, transparent reasoning
Open, communicative, replicable science
See our guidelines below for more details on each of these. Please don't structure your review according to these metrics; just pay some attention to them.
If you have questions about the authors’ work, you can ask them anonymously: we will facilitate this.
We want you to evaluate the most recent/relevant version of the paper/project that you can access. If you see a more recent version than the one we shared with you, please let us know.
We designed this process to balance three considerations with three target audiences. Please consider each of these:
Crafting evaluations and ratings that help researchers and policymakers judge when and how to rely on this research. For Research Users.
Ensuring these evaluations of the papers are comparable to current journal tier metrics, to enable them to be used to determine career advancement and research funding. For Departments, Research Managers, and Funders.
Providing constructive feedback to Authors.
We discuss this, and how it relates to our impact and "theory of change", here.
We ask for a set of nine quantitative metrics. For each metric, we ask for a score and a 90% credible interval. We describe these in detail below. (We explain why we ask for these metrics here.)
For some questions, we ask for a percentile ranking from 0-100%. This represents "what proportion of papers in the reference group are worse than this paper, by this criterion". A score of 100% means this is essentially the best paper in the reference group. 0% is the worst paper. A score of 50% means this is the median paper; i.e., half of all papers in the reference group do this better, and half do this worse, and so on.
Here, the population of papers should be all serious research in the same area that you have encountered in the last three years.
For each metric, we ask you to provide a 'midpoint rating' and a 90% credible interval as a measure of your uncertainty. Our interface provides slider bars to express your chosen intervals:
See below for more guidance on uncertainty, credible intervals, and the midpoint rating as the 'median of your belief distribution'.
The table below summarizes the percentile rankings.
Overall assessment: 0-100%
Claims, strength and characterization of evidence: 0-100%
Methods (justification, reasonableness, validity, robustness): 0-100%
Advancing knowledge and practice: 0-100%
Logic and communication: 0-100%
Open, collaborative, replicable science: 0-100%
Relevance to global priorities: 0-100%
Overall assessment
Percentile ranking (0-100%)
Judge the quality of the research heuristically. Consider all aspects of quality, credibility, importance to future impactful applied research, and practical relevance and usefulness.
Claims, strength and characterization of evidence
Percentile ranking (0-100%)
Do the authors do a good job of (i) stating their main questions and claims, (ii) providing strong evidence and powerful approaches to inform these, and (iii) correctly characterizing the nature of their evidence?
Methods: justification, reasonableness, validity, robustness
Percentile ranking (0-100%)
Are the methods used well-justified and explained? Are they a reasonable approach to answering the question(s) in this context? Are the underlying assumptions reasonable?
Are the results and methods likely to be robust to reasonable changes in the underlying assumptions?
Avoiding bias and questionable research practices (QRPs): Did the authors take steps to reduce bias from opportunistic reporting? For example, did they do a strong pre-registration and pre-analysis plan, incorporate multiple hypothesis testing corrections, and report flexible specifications?
Advancing knowledge and practice
Percentile ranking (0-100%)
To what extent does the project contribute to the field or to practice, particularly in ways that are relevant to global priorities and impactful interventions?
(Applied stream: please focus on ‘improvements that are actually helpful’.)
Do the paper's insights inform our beliefs about important parameters and about the effectiveness of interventions?
Does the project add useful value to other impactful research?
Logic and communication
Percentile ranking (0-100%)
Are the goals and questions of the paper clearly expressed? Are concepts clearly defined and referenced?
Is the "? Are assumptions made explicit? Are all logical steps clear and correct? Does the writing make the argument easy to follow?
Are the conclusions consistent with the evidence (or formal proofs) presented? Do the authors accurately state the nature of their evidence, and the extent it supports their main claims?
Are the data and/or analysis presented relevant to the arguments made? Are the tables, graphs, and diagrams easy to understand in the context of the narrative (e.g., no major errors in labeling)?
Open, collaborative, replicable science
Percentile ranking (0-100%)
This covers several considerations:
Would another researcher be able to perform the same analysis and get the same results? Are the methods explained clearly and in enough detail to enable easy and credible replication? For example, are all analyses and statistical tests explained, and is code provided?
Is the source of the data clear?
Is the data made as available as is reasonably possible? If so, is it clearly labeled and explained?
Consistency
Do the numbers in the paper and/or code output make sense? Are they internally consistent throughout the paper?
Useful building blocks
Do the authors provide tools, resources, data, and outputs that might enable or enhance future work and meta-analysis?
Relevance to global priorities
Percentile ranking (0-100%)
Are the paper’s chosen topic and approach relevant to global priorities, cause prioritization, and high-impact interventions?
Does the paper consider real-world relevance and deal with policy and implementation questions? Are the setup, assumptions, and focus realistic?
Do the authors report results that are relevant to practitioners? Do they provide useful quantified estimates (costs, benefits, etc.) enabling practical impact quantification and prioritization?
Do they communicate (at least in the abstract or introduction) in ways policymakers and decision-makers can understand, without misleading or oversimplifying?
To help universities and policymakers make sense of our evaluations, we want to benchmark them against how research is currently judged. So, we would like you to assess the paper in terms of journal rankings. We ask for two assessments:
a normative judgment about 'how well the research should publish';
a prediction about where the research will be published.
Journal ranking tiers are on a 0-5 scale, as follows:
0/5: Little to no value; unlikely to be cited by credible researchers
1/5: OK/Somewhat valuable journal
2/5: Marginal B-journal/Decent field journal
3/5: Top B-journal/Strong field journal
4/5: Marginal A-Journal/Top field journal
5/5: A-journal/Top journal
We give some example journal rankings here, based on SJR and ABS ratings.
We encourage you to give non-integer scores, e.g., 4.6 or 2.2.
As before, we ask for a 90% credible interval.
What journal ranking tier should this work be published in? (Scale: 0.0-5.0; 90% credible interval: lower, upper)
What journal ranking tier will this work be published in? (Scale: 0.0-5.0; 90% credible interval: lower, upper)
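As a hedged illustration of the expected format (the numbers below are invented), a pair of journal-tier answers might look like this, with non-integer midpoints and 90% credible intervals on the 0.0-5.0 scale:

```python
# Minimal sketch of the two journal-ranking-tier answers (hypothetical numbers).
tier_should = {"midpoint": 4.2, "lower": 3.5, "upper": 4.8}  # normative: where it *should* publish
tier_will = {"midpoint": 3.6, "lower": 2.8, "upper": 4.4}    # predictive: where it *will* publish

for name, answer in [("should", tier_should), ("will", tier_will)]:
    # Sanity check: values on the 0.0-5.0 scale, with lower <= midpoint <= upper.
    assert 0.0 <= answer["lower"] <= answer["midpoint"] <= answer["upper"] <= 5.0, name
print("Both answers are on the 0.0-5.0 scale with lower <= midpoint <= upper.")
```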
PubPub note: as of 14 March 2024, the PubPub form does not allow non-integer responses. Until this is fixed, you can use the Coda form instead.
Journal ranking tier (0.0-5.0)
Assess this paper on the journal ranking scale described above, considering only its merit, giving some weight to the category metrics we discussed above.
Equivalently, consider where the work should publish if:
the journal process were fair, unbiased, and free of noise, and status, social connections, and lobbying to get the paper published didn’t matter; and
journals assessed research according to the category metrics we discussed above.
Journal ranking tier (0.0-5.0)
We want policymakers, researchers, funders, and managers to be able to use The Unjournal's evaluations to update their beliefs and make better decisions. To do this well, they need to weigh multiple evaluations against each other and other sources of information. Evaluators may feel confident about their rating for one category, but less confident in another area. How much weight should readers give to each? In this context, it is useful to quantify the uncertainty.
But it's hard to quantify statements like "very certain" or "somewhat uncertain"; different people may use the same phrases to mean different things. That's why we're asking you for a more precise measure: your credible intervals. These metrics are particularly useful for meta-science and meta-analysis.
You are asked to give a 'midpoint' and a 90% credible interval. Consider the interval as the range that you believe is 90% likely to contain the true value. See the fold below for further guidance.
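As a quick numerical illustration (hypothetical numbers, not from the fold above), the sketch below shows how a midpoint and a 90% credible interval summarize a belief distribution: the midpoint is the median of your beliefs, and the interval runs from roughly the 5th to the 95th percentile.

```python
# Minimal sketch: midpoint and 90% credible interval as summaries of a belief distribution.
import random
import statistics

random.seed(0)
# Hypothetical belief about a paper's percentile ranking, represented by draws from a
# distribution summarizing the evaluator's uncertainty (clamped to the 0-100 scale).
belief_draws = [min(100, max(0, random.gauss(mu=60, sigma=12))) for _ in range(100_000)]
belief_draws.sort()

midpoint = statistics.median(belief_draws)           # the 'midpoint rating'
lower = belief_draws[int(0.05 * len(belief_draws))]  # ~5th percentile
upper = belief_draws[int(0.95 * len(belief_draws))]  # ~95th percentile

print(f"Midpoint (median of belief distribution): {midpoint:.0f}")
print(f"90% credible interval: [{lower:.0f}, {upper:.0f}]")
```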
We are now asking evaluators for “claim identification and assessment” where relevant. This is meant to help practitioners use this research to inform their funding, policymaking, and other decisions. It is not intended as a metric to judge the research quality per se. This is not required but we will reward this work.
See guidelines and examples here.
Lastly, we ask evaluators about their background, and for feedback about the process.
Length/time spent: This is up to you. We welcome detail, elaboration, and technical discussion.
If you still have questions, please contact us, or see our FAQ on Evaluation ('refereeing').
Our data protection statement is linked here.
On this page we link to and discuss answers to the questions: Which research is most impactful? Which research should be prioritized?
At The Unjournal, we are open to various approaches to the question "what is the most impactful research?" Perhaps looking at some of the research we have already evaluated, and the research we are prioritizing (public link coming soon), will give you some insights. However, it seems fair that we should give at least one candidate description or definition.
"The direct global impact of a work of research is determined by the value of the information that it provides in helping individuals, governments, funders, and policymakers make better decisions. While research may not definitively answer key questions it should leave us more informed (in a Bayesian sense, 'more concentrated belief distributions') about these. We will measure the value of these 'better choices' in terms of the extent these "
The above comes close to how some people on The Unjournal team think about research impact and prioritization, but we don't plan to adopt an official guiding definition.
Note the above definition is meant to exclude more basic research, which may also be high value, but which mainly serves as a building block for other research. In fact, The Unjournal does consider the value of research as an input into other research, particularly when it directly influences policy-relevant research (e.g., see "Replicability & Generalisability: A Guide to CEA discounts").
It also excludes the value of "learning the truth" as an intrinsic good; we have tended not to make this a priority.
For more guidance on how we apply this, see our #high-level-considerations-for-prioritizing-research.
Syllabi and course outlines that address global prioritization
Those listed below are at least somewhat tied to Effective Altruism.
"Existing resources (economics focused)" page in "Economics for EA and vice versa" Gitbook
Stafforini's list of EA syllabi here
(To be included here)
We next consider organizations that take a broad focus on helping humans, animals, and the future of civilization. Some of these have explicitly set priorities and research agendas, although the level of specificity varies. Most of the organizations below have some connections to Effective Altruism; over time, we aim to also look beyond this EA focus.
"Research agenda draft for GPI economics"
Social Science Research Topics for Global Health and Wellbeing; posted on the EA Forum as "Social science research we'd like to see on global health and wellbeing"
Social Science Research Topics for Animal Welfare posted on the EA Forum as Social science research we'd like to see on animal welfare
“Technical and Philosophical Questions That Might Affect Our Grantmaking” is a fairly brief discussion and overview linking mostly to OP-funded research.
To be expanded, cataloged, and considered in more detail
Happier Lives Institute research agenda ("Research Priorities," 2021): A particularly well organized discussion. Each section has a list of relevant academic literature, some of which is recent and some of which is applied or empirical.
Animal Charity Evaluators: Their "Methodology" and "Research briefs" are particularly helpful, and connect to a range of academic and policy research
Effective Thesis Project "research agendas": This page is particularly detailed and contains a range of useful links to other agendas!
How effective altruism can help psychologists maximize their impact (Gainsburg et al, 2021)
"What’s Worth Knowing? Economists’ Opinions about Economics" (Andre and Falk, 2022): The survey, as reported in the paper, does not suggest a particular agenda, but it does suggest a direction . . . economists would generally like to see more work in certain applied areas.
Ten Years and Beyond: Economists Answer NSF's Call for Long-Term Research Agendas (Compendium, 2011): economists responded to the NSF's call to "describe grand challenge questions . . . that transcend near-term funding cycles and are likely to drive next-generation research in the social, behavioral, and economic sciences.”
A simplified rendering, skipping some steps and possibilities
The flowchart below focuses on the evaluation part of our process in detail. See Evaluation workflow – Simplified for a more condensed flowchart.
(Section updated 1 August 2023)
Submission/selection (multiple routes)
Author (A) submits work (W), creates new submission (submits a URL and DOI), through our platform or informally.
Author (or someone on their behalf) can complete a submission form; this includes a potential "request for embargo" or other special treatment.
Managers and field specialists select work (or the project is submitted independently of authors) and the management team agrees to prioritize it.
For either of these cases (1 or 2), authors are asked for permission.
Alternate Direct Evaluation track: "Work enters prestige archive" (NBER, CEPR, and some other cases).
Managers inform and consult the authors, but permission is not required. (Particularly relevant: we confirm with the authors that we have the latest updated version of the research.)
Prioritization
Following author submission ...
Manager(s) (M) and Field Specialists (FS) prioritize work for review (see Project selection and evaluation).
Following direct evaluation selection...
"evaluation suggestions" (see examples here) explaining why it's relevant, what to evaluate, etc., to be shared later with evaluators.
If requested (in either case), M decides whether to grant embargo or other special treatment, notes this, and informs authors.
Managers assign an Evaluation Manager (EM – typically part of our management team or advisory board) to the selected project.
EM invites evaluators (aka "reviewers") and shares the paper to be evaluated along with (optionally) a brief summary of why The Unjournal thinks it's relevant, and what we are asking.
Potential evaluators are given full access to (almost) all information submitted by the author and M, and notified of any embargo or special treatment granted.
EM may make special requests to the evaluator as part of a management policy (e.g., "signed/unsigned evaluation only," short deadlines, extra incentives as part of an agreed policy, etc.).
EM may (optionally) add "evaluation suggestions" to share with the evaluators.
Evaluator accepts or declines the invitation to review, and if the former, agrees on a deadline (or asks for an extension).
If the evaluator accepts, the EM shares full guidelines/evaluation template and specific suggestions with the evaluator.
Evaluator completes the evaluation.
Evaluator submits the evaluation, including numeric ratings and predictions, plus credible intervals ("CIs") for these (see the illustrative sketch below).
Possible addition (future plan): Reviewer asks for minor revisions and corrections; see "How revisions might be folded in..." in the fold below.
EM collates all evaluations/reviews, shares these with Author(s).
The EM must be very careful not to share evaluators' identities at this point.
This includes caution to avoid accidentally identifying information, especially where the evaluator has chosen to remain anonymous.
Even if evaluators chose to "sign their evaluation," their identity should not be disclosed to authors at this point. However, evaluators are told they can reach out to the
Evaluations are shared with the authors as a separate doc, set of docs, file, or space. (Going forward, this will be automated.)
It is made clear to authors that their responses will be published (and given a DOI, when possible).
Author(s) read the evaluations and are given two working weeks to submit responses.
If there is an embargo, there is more time to do this, of course.
EM creates evaluation summary and "EM comments."
EM or UJ team publishes each element on our PubPub space as a separate "pub" with a DOI for each (unless embargoed):
Summary and EM comments
With a prominent section for the "ratings data tables"
Each evaluation, with summarized ratings at the top
The author response
All of the above are linked in a particular way, with particular settings; see notes
Authors and evaluators are informed once elements are on PubPub; next steps include promotion, checking bibliometrics, etc.
("Ratings and predictions data" to enter an additional public database.)
Note that we intend to automate and integrate much of this process into an editorial-management-like system in PubPub.
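As a concrete illustration of the kind of "ratings and predictions data" mentioned above, here is a minimal sketch of how a single evaluation's quantitative output might be stored. The field names, scales, and values are hypothetical placeholders, not The Unjournal's actual schema.

```python
# Illustrative sketch only: hypothetical field names and values, not an official schema.
evaluation_record = {
    "paper_doi": "10.xxxx/placeholder",   # placeholder identifier
    "evaluator": "anon-417",              # pseudonym, protecting anonymity
    "ratings": {
        # midpoint plus a 90% credible interval, on a 0-100 scale
        "overall_assessment": {"midpoint": 78, "ci90": [65, 88]},
        "methods": {"midpoint": 72, "ci90": [60, 85]},
    },
    "predictions": {
        # e.g., a predicted "traditional journal tier" on a 0-5 scale
        "journal_tier": {"midpoint": 3.5, "ci90": [2.5, 4.5]},
    },
}
```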
In our current (8 Feb 2023 pilot) phase, we have the evaluators consider the paper "as is," frozen at a certain date, with no room for revisions. The authors can, of course, revise the paper on their own and even pursue an updated Unjournal review; we would like to include links to the "permanently updated version" in the Unjournal evaluation space.
After the pilot, we may consider making minor revisions part of the evaluation process. This may add substantial value to the papers and process, especially where evaluators identify straightforward and easily-implementable improvements.
We don't want to replicate the slow and inefficient processes of the traditional system. Essentially, we want evaluators to give a report and rating as the paper stands.
We also want to encourage treating papers as permanent-beta projects: the authors can improve them, if they like, and resubmit them for a new evaluation.
Unjournal evaluators have the option of remaining anonymous (see ). Where evaluators choose this, we will carefully protect their anonymity, aiming at a standard of protection as good as or better than that of traditional journals, and we will give evaluators the option to take extra steps to safeguard this further. We offer anonymity in perpetuity to those who request it, as well as anonymity on other explicitly and mutually agreed terms.
If they choose to stay anonymous, authors should have no way to 'guess' who has reviewed their work.
We will take steps to keep private any information that could connect the identity of an anonymous evaluator and their evaluation/the work they are evaluating.
We will take extra steps to make the possibility of accidental disclosure extremely small (this is never impossible of course, even in the case of conventional journal reviews). In particular, we will use pseudonyms or ID codes for these evaluators in any discussion or database that is shared among our management team that connects individual evaluators to research work.
If we ever share a list of Unjournal’s evaluators this will not include anyone who wished to remain anonymous (unless they explicitly ask us to be on such a list).
We will do our best to warn anonymous evaluators of ways that they might inadvertently be identifying themselves in the evaluation content they provide.
We will provide platforms to enable anonymous and secure discussion between anonymous evaluators and others (authors, editors, etc.) Where an anonymous evaluator is involved, we will encourage these platforms to be used as much as possible. In particular, see .
Aside: In future, we may consider , and these tools will also be
Text to accompany the Impactful Research Prize discussion
Note: This section largely repeats content in our , especially our
Jan. 2024: We have lightly updated this page to reflect our current systems.
We describe the nature of the work we are looking to evaluate, along with examples, in . Update 2024: This is now better characterized under and .
If you are interested in submitting your work for public evaluation, we are looking for research which is relevant to global priorities—especially quantitative social sciences—and impact evaluations. Work that would benefit from further feedback and evaluation is also of interest.
Your work will be evaluated using our evaluation guidelines and metrics. You can read these before submitting.
Important Note: We are not a journal. By having your work evaluated, you will not be giving up the opportunity to have your work published in a journal. We simply operate a system that allows you to have your work independently evaluated.
If you think your work fits our criteria and would like it to be publicly evaluated, please submit your work through .
If you would like to submit more than one of your papers, you will need to complete a new form for each paper you submit.
By default, we would like Unjournal evaluations to be made public. We think public evaluations are generally good for authors, as explained . However, in special circumstances and particularly for very early-career researchers, we may make exceptions.
If there is an early-career researcher on the author team, we will allow authors to "embargo" the publication of the evaluation until a later date. This date is contingent, but not indefinite. The embargo lasts until after a PhD/postdoc’s upcoming job search or until it has been published in a mainstream journal, unless:
the author(s) give(s) earlier permission for release; or
a fixed upper limit of 14 months is reached (sketched below).
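A minimal sketch of the embargo logic described above, under stated assumptions: the dates and the rough 14-month approximation are hypothetical inputs for illustration, not an official calculator.

```python
from datetime import date, timedelta

def embargo_end(evaluation_date, job_search_end=None, journal_publication=None,
                author_release=None):
    """Illustrative sketch: the embargo lifts at the earliest of the author's
    permission to release, the end of the job search, mainstream-journal
    publication, or the fixed ~14-month upper limit."""
    hard_cap = evaluation_date + timedelta(days=14 * 30)  # rough 14-month cap
    candidates = [d for d in (author_release, job_search_end, journal_publication) if d]
    return min(candidates + [hard_cap])

# Hypothetical example: the job search ends well before the 14-month cap.
print(embargo_end(date(2024, 1, 15), job_search_end=date(2024, 9, 1)))  # 2024-09-01
```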
If you would like to request an exception to a public evaluation, you will have the opportunity to explain your reasoning in the submission form.
The Unjournal presents an additional opportunity for evaluation of your work with an emphasis on impact.
Substantive feedback will help you improve your work—especially useful for young scholars.
Ratings can be seen as markers of credibility for your work that could help your career advancement at least at the margin, and hopefully help a great deal in the future. You also gain the opportunity to publicly respond to critiques and correct misunderstandings.
You will gain visibility and a connection to the EA/Global Priorities communities and the Open Science movement.
You can take advantage of this opportunity to gain a reputation as an ‘early adopter and innovator’ in open science.
You can win prizes: You may win a “best project prize,” which could be financial as well as reputational.
Entering into our process will make you more likely to be hired as a paid reviewer or editorial manager.
We will encourage media coverage.
If we consider your work for public evaluation, we may ask for some of the items below, although most are optional. We will aim to make this a very light touch for authors.
A link to a non-paywalled, hosted version of your work (in any format—PDFs are not necessary) that can be given a Digital Object Identifier (DOI). Again, we will not be "publishing" this work, just evaluating it.
A link to data and code, if possible. We will work to help you to make it accessible.
Assignment of two evaluators who will be paid to assess your work. We will likely keep their identities confidential, although this is flexible depending on the reviewer. Where it seems particularly helpful, we will facilitate a confidential channel to enable a dialogue with the authors. One person on our managing team will handle this process.
Have evaluators publicly post their evaluations (i.e., 'reviews') of your work on our platform. As noted above, we will ask them to provide feedback, thoughts, suggestions, and some quantitative ratings for the paper.
By completing the submission form, you are providing your permission for us to post the evaluations publicly unless you request an embargo.
You will have a two-week window to respond through our platform before anything is posted publicly. Your responses can also be posted publicly.
Why are we seeking these pivotal questions to be 'operationalizable'?
This is in line with our existing focus on this type of research:
The Unjournal evaluates (largely empirical) research that clearly poses and answers specific impactful questions, rather than research that seeks to define a question, survey a broad landscape of other research, open routes to further inquiry, etc.
I think this will help us focus on fully-baked questions, where the answer is likely to provide actual value to the target organization and others (and avoid the old ‘ trap).
It offers potential for benchmarking and validation (e.g., using prediction markets), specific routes to measure our impact (updated beliefs, updated decisions), and informing the we’re asking from evaluators (see footnote above).
However, as this initiative progresses, we may allow a wider range of questions, e.g., more open-ended, multi-outcome, non-empirical (perhaps 'normative'), and best-practice questions.
Express your interest/suggest a pivotal question using .
The Unjournal commissions public evaluations of impactful research in quantitative social science fields. We are seeking pivotal questions to guide our choice of research papers to commission for evaluation. We're contacting organizations that aim to use evidence to do the most good, and asking:
Which open questions most affect your policies and funding recommendations?
For which questions would research yield the highest ‘value of information’?
The Unjournal has focused on finding research that seems relevant to impactful questions and crucial considerations, and then commissioning experts to publicly evaluate it. (For more about our process, see .) Our field specialist teams search and monitor prominent research archives (like ), and consider suggestions, while keeping an eye on forums and social media.
We're now exploring turning this on its head: identifying pivotal questions first, and then evaluating a cluster of research that informs these. This could offer a more efficient and observable path to impact. (For context, see our .)
The Unjournal will ask impact-focused, research-driven organizations such as Open Philanthropy and Charity Entrepreneurship to identify specific questions that impact their funding, policy, and research-direction choices. For example, if GiveWell is considering recommending a charity running a CBT intervention in West Africa, they'd like to know "how much does a 16-week course of non-specialist psychotherapy increase self-reported happiness, compared to the same amount spent on direct cash transfers?" We're looking for the questions with the highest value-of-information (VOI) for the organization's work over the next few years.
We have some requirements — the questions should relate to The Unjournal’s coverage areas and engage rigorous research in economics, social science, policy, or impact quantification. Ideally, organizations will identify at least one piece of publicly-available research that relates to their question. But we are doing this mainly to help these organizations, so we will try to keep it simple and low-effort for them.
We will work to minimize the effort required from these organizations; e.g., by leveraging their existing writings and agendas to suggest potential high value-of-information questions. We will also crowdsource questions (via EA Forum, social media, etc.), offering bounties for valuable suggestions.
The Unjournal team will discuss the suggested questions, leveraging our field specialists’ expertise. We’ll rank these questions, prioritizing at least one for each organization.
We’ll work with the organization to specify the priority question precisely and in a useful way. We want to be sure that (1) evaluators will interpret these questions as intended, and (2) the answers that come out are likely to actually be helpful. We’ll make these lists of questions public and solicit general feedback — on their relevance, on their framing, on key sub-questions, and on pointers to relevant research.
Where practicable, we will operationalize the target questions as a claim on a prediction market (for example, Metaculus) to be resolved by the evaluations and synthesis below.
Where feasible, post these on public prediction markets (such as Metaculus)
We will ask (and help) the organizations and interested parties to specify their own beliefs about these questions, aka their 'priors'. We may adapt the Metaculus interface for this.
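To make this concrete, here is a minimal sketch of one way an organization's prior might be recorded. All field names, numbers, and the organization are hypothetical placeholders (echoing the GiveWell-style example above), not an actual Unjournal or Metaculus format.

```python
# Illustrative only: hypothetical fields and values, not an official schema.
prior_submission = {
    "question": ("Effect of a 16-week course of non-specialist psychotherapy on "
                 "self-reported happiness, relative to an equal-cost cash transfer"),
    "organization": "Example Org",
    "units": "standard deviations of the happiness measure",
    "median": 0.15,                   # central estimate
    "ci90": [-0.05, 0.40],            # 90% credible interval
    "resolution_date": "2026-12-31",  # when the claim should be assessed
}
```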
Once we’ve converged on the target question, we’ll do a variation of our usual evaluation process.
We will contact both the research authors (as per our standard process) and the target organizations for their responses to the evaluations, and for follow-up questions. We'll foster a productive discussion between them (while preserving anonymity as requested, and being careful not to overtax people's time and generosity).
These reports should synthesize “What do the research, evaluations, and responses say about the question/claim?” They should provide an overall metric relating to the truth value of the target question (or similar for the parameter of interest). In cases where we integrate prediction markets, they should decisively resolve the market claim.
Next, we will share these synthesis reports with authors and organizations for feedback.
We give detailed guidance with examples below:
Why do we want these pivotal questions to be 'operationalizable'?
A brief description of what your organization does (linking your ‘about us’ page is fine)
A brief explanation of why this question is particularly high-value for your organization or your work, and, if applicable, how you have tried to answer it
If possible, a link to at least one research paper that relates to this question
Optionally, your current beliefs about this question (your ‘priors’)
Please also let us know how you would like to engage with us on refining this question and addressing it. Do you want to follow up with a 1-1 meeting? How much time are you willing to put in? Who, if anyone, should we reach out to at your organization?
Remember that we plan to make all of this analysis and evaluation public. However, we will not make any of your input public without your consent.
It's a norm in academia that people do reviewing work for free. So why is The Unjournal paying evaluators?
From a recent survey:
We estimate that the average (median) respondent spends 12 (9) working days per year on refereeing. The top 10% of the distribution dedicates 25 working days or more, which is quite substantial considering refereeing is usually unpaid.
The peer-review process in economics is widely argued to be too slow and lengthy, but there is evidence that payments may help improve this.
In the same work, they note that few economics journals currently pay reviewers, and that these payments tend to be small (e.g., the JPE and AER paid $100 at the time). However, they also note, citing several papers:
The existing evidence summarized in Table 5 suggests that offering financial incentives could be an effective way of reducing turnaround time.
It has also been noted that the work of reviewing is not distributed equally. To the extent that agreeing to write a report is based on individual goodwill, the unpaid volunteer model could be seen to unfairly penalize more generous and sympathetic academics. Writing a certain number of referee reports per year is generally considered part of "academic service": academics put this on their CVs, and it may lead to a (somewhat valued) seat on a journal's board. However, this is much less attractive for researchers who are not tenured university professors; paying for this work would do a better job of including them in the process.
'Payment for good evaluation work' may also lead to fairer and more useful evaluations.
In the current system academics may take on this work in large part to try to impress journal editors and get favorable treatment from them when they submit their own work. They may also write reviews in particular ways to impress these editors.
For less high-prestige journals, to get reviewers, editors often need to lean on their personal networks, including those they have power relationships with.
Reviewers are also known to strategically try to get authors to cite and praise the reviewer's own work. They may be especially critical of authors they see as rivals.
To the extent that reviewers are doing this as a service they are being paid for, these other motivations will be comparatively somewhat less important. The incentives will be more in line with doing evaluations that are seen as valuable by the managers of the process, in order to get chosen for further paid work. (And, if evaluations are public, the managers can consider the public feedback on these reports as well.)
We are not ‘just another journal.’ We need to give incentives for people to put effort into a new system and help us break out of the old inferior equilibrium.
In some senses, we are asking for more than a typical journal. In particular, our evaluations will be made public and thus need to be better communicated.
We cannot rely on 'reviewers taking on work to get better treatment from editors in the future.' This does not apply to our model, as we don't have editors make any sort of 'final accept/reject decision.'
Paying evaluators brings in a wider set of evaluators, including non-academics; this is particularly relevant to our impact-focused goals.
UNICEF strategic plan: Not easy to link to research; they have a large number of priorities, goals, and principles; see their infographic.
See "" for more detail, and examples.
For more information on why authors may want to engage and what we may ask authors to do, please see .
If the question is well operationalized, and we have a clear approach to 'resolving it' after the evaluations and synthesis, we will post it on a reputation-based market like or . Metaculus is offering 'minitaculus' platforms such as to enable these more flexible questions.
For each question, we will prioritize roughly two to five research papers. These may be suggested by the organization that proposed the question, sourced by The Unjournal, or discovered through community feedback ().
As we normally do, we'll have evaluation managers recruit evaluators. However, we'll ask the evaluators to focus on the target question, and to consider the target organization's priorities.
We'll also encourage discussion among the evaluators. This is inspired by the , and some evidence suggesting that the (mechanistically aggregated) estimates of experts after deliberation tend to be more accurate than their independent estimates (also mechanistically aggregated). We may also facilitate collaborative evaluations and 'live reviews', following the examples of , , and others.
We'll ask the evaluation managers to write a synthesis report summarizing the research investigated.
We'll put up each evaluation on our PubPub page, bringing them into academic search tools, databases, bibliometrics, etc. We'll also curate them, linking them to the relevant target question and to the synthesis report.
We will produce, share, and promote further summaries of these packages. This could include forum and blog posts summarizing the results and insights, as well as interactive and visually appealing web pages. We may also produce less technical content, perhaps submitting work to outlets like, , or .
At least initially, we’re planning to ask for questions that could be definitively answered and/or measured quantitatively. We will help organizations and other suggesters refine their questions to make this the case. These should resemble questions that could be posted on forecasting platforms such as or . These should also resemble the we currently request from evaluators.
We’re still refining this idea, and looking for your suggestions about what is unclear, what could go wrong, what might make this work better, what has been tried before, and where the biggest wins are likely to be. We’d appreciate your feedback! (Feel free to email to make suggestions or arrange a discussion.)
If you work for an impact-focused research organization and you are interested in participating in our pilot, please reach out to us at contact@unjournal.org to flag your interest and/or complete . We would like to see:
A specific, operationalizable, high-value claim or research question you'd like to see evaluated, that falls within our scope (~quantitative social science, economics, policy, and impact measurement)
If you don't represent an organization, we still welcome your suggestions, and will try to give feedback.
Again, please remember that we currently focus on quantitative ~social sciences fields, including economics, policy, and impact modeling (see for more detail on our coverage). Questions surrounding (for example) technical AI safety, microbiology, or measuring animal sentience are less likely to be in our domain.
If you want to talk about this first, or if you have any questions, please send an email or arrange a call with David Reinstein, our co-founder and director.
At least initially, we’re planning to ask for questions that could be definitively answered and/or measured quantitatively, and we will help organizations and other suggesters refine their questions to make this the case. These should approximately resemble questions that could be posted on forecasting platforms such as Manifold Markets or Metaculus. These should also somewhat resemble the 'claim identification' we currently request from evaluators.
Phil Tetlock's "Clairvoyance Test" is particularly relevant. As one description puts it:
if you handed your question to a genuine clairvoyant, could they see into the future and definitively tell you [the answer]? Some questions like 'Will the US decline as a world power?'...'Will an AI exhibit a goal not supplied by its human creators?' struggle to pass the Clairvoyance Test… How do you tell one type of AI goal from another, and how do you even define it?... In the case of whether the US might decline as a world power, you'd want to get at the theme with multiple well-formed questions such as 'Will the US lose its #1 position in the IMF's annual GDP rankings before 2050?'....
Metaculus and Manifold: .
Some questions are important, but difficult to make specific, focused, and operationalizable. For example (from 80,000 Hours’ list of “research questions”):
“What can economic models … tell us about recursive self improvement in advanced AI systems?”
“How likely would catastrophic long-term outcomes be if everyone in the future acts for their own self-interest alone?”
“How could AI transform domestic and mass politics?”
Other questions are easier to operationalize or break down into several specific sub-questions. For example (again from 80,000 Hours’ “research questions”):
Could advances in AI lead to risks of very bad outcomes, like suffering on a massive scale? Is it the most likely source of such risks?
I rated this a 3/10 in terms of how operationalized it was. The word “could” is vague. “Could” might suggest some reasonable probability outcome (1%, 0.1%, 10%), or it might be interpreted as “can I think of any scenario in which this holds?” “Very bad outcomes” also needs a specific measure.
However, we can reframe this to be more operationalized. E.g., here are some fairly well-operationalized questions:
What is the risk of a catastrophic loss (defined as the death of at least 10% of the human population over any five year period) occurring before the year 2100?
How does this vary depending on the total amount of money invested in computing power for building advanced AI capabilities over the same period?
Here are some highly operationalizable questions developed by the Farm Animal Welfare team at Open Phil:
What percentage of plant-based meat alternative (PBMA) units/meals sold displace a unit/meal of meat?
What percentage of people will be [vegetarian or vegan] in 20, 50, or 100 years?
And a few more posed and addressed by Our World in Data:
How much of global greenhouse gas emissions come from food? (full article)
What share of global CO₂ emissions come from aviation? (full article)
However, note that many of the above questions are descriptive or predictive. We are also very interested in causal questions such as
What is the impact of an increase (decrease) in blood lead level by one “natural log unit” on children’s learning in the developing world (measured in standard deviation units)?
7 Feb 2023: We have an organized founding/management committee, as well as an advisory board (see Our team). We are focusing on pushing research through the evaluation pipeline, communicating this output, and making it useful. We have a working division of labor, e.g., among "managing editors," for specific papers. We are likely to expand our team after our pilot, conditional on further funding.
The creation of an action plan can be seen in the Gdoc discussion "Procedure for choosing committee"
Assemble a list of the most relevant and respected people, using more or less objective criteria and justification.
Ask to join founding committee.
Ask to join list of supporters.
Add people who have made past contributions.
28 May 2022: The above has mostly been done, at least in terms of people attending the first meeting. We probably need a more systematic approach to getting the list of supporters.
Further posts on social media, academic websites and message boards, etc.
The steps we've taken and our plans; needs updating
This page and its sub-pages await updating
See also Plan of action
See also Updates (earlier)
18 Jun 2023: This needs updating
Initial evaluations; feedback on the process
Revise process; further set of evaluations
Disseminate and communicate (research, evaluations, processes); get further public feedback
Further funding; prepare for scaling-up
Management: updates and CTA in gdoc shared in emails
Passed on to LTFF and funding was awarded
frozen version as Dropbox paper here
Passed on to LTFF and funding was awarded
Start date = ~21 February 2022
The "Unjournal" will organize and fund 'public journal-independent evaluation’ of EA-relevant/adjacent research, encouraging this research by making it easier for academics and EA-organization researchers to get feedback and credible ratings.
Peer review is great, but academic publication processes are wasteful, slow, and rent-extracting, and they discourage innovation. From my post on onscienceandacademia:
Academic publishers extract rents and discourage progress. But there is a coordination problem in ‘escaping’ this. Funders like Open Philanthropy and EA-affiliated researchers are not stuck, we can facilitate an exit.
The traditional binary ‘publish or reject’ system wastes resources (wasted effort and gamesmanship) and adds unnecessary risk. I propose an alternative, the “Evaluated Project Repo”: a system of credible evaluations, ratings, and published reviews (linked to an open research archive/curation). This will also enable more readable, reliable, and replicable research formats, such as dynamic documents; and allow research projects to continue to improve without “paper bloat”. (I also propose some ‘escape bridges’ from the current system.)
Global priorities and EA research organizations are looking for ‘feedback and quality control’, dissemination, and external credibility. We would gain substantial benefits from supporting, and working with the Evaluated Project Repo (or with related peer-evaluation systems), rather than (only) submitting our work to traditional journals. We should also put some direct value on results of open science and open access, and the strong impact we may have in supporting this.
I am asking for funding to help replace this system, with EA 'taking the lead'. My goal is permanent and openly-hosted research projects, and efficient journal-independent peer review, evaluation, and communication. (I have been discussing and presenting this idea publicly for roughly one year, and gained a great deal of feedback. I return to this in the next section).
I propose the following 12-month Proof of Concept: Proposal for EA-aligned research 'unjournal' collaboration mechanism
Build a ‘founding committee’ of 5-8 experienced and enthusiastic EA-aligned/adjacent researchers at EA orgs, research academics, and practitioners (e.g., draw from speakers at recent EA Global meetings).
Update 1 Aug 2022, mainly DONE, todo: consult EAG speakers
I will publicly share my procedure for choosing this group (in the long run we will aim at a transparent and impartial process for choosing 'editors and managers', as well as at decentralized forms of evaluation and filtering).
2. Host a meeting (and shared collaboration space/document), to come to a consensus/set of principles on
A cluster of EA-relevant research areas we want to start with
A simple outreach strategy
How we determine which work is 'EA-interesting’
How we will choose ‘reviewers’ and avoid conflicts-of-interest
How we will evaluate, rate, rank, and give feedback on work
The platforms we will work with
How to promote and communicate the research work (to academics, policymakers, and the EA community)
Update 1 Aug 2022: 2 meetings so far; agreed on going-forward policies for most of the above
3. Post and present our consensus (on various fora especially in the EA, Open Science, and relevant academic communities, as well as pro-active interviews with key players). Solicit feedback. Have a brief ‘followup period’ (1 week) to consider adjusting the above consensus plan in light of the feedback.
Update 1 Aug 2022: Done somewhat; waiting to have 2+ papers assessed before we engage more
4. Set up the basic platforms, links
Note: I am strongly leaning towards https://prereview.org/ as the main platform, which has indicated willingness to give us a flexible 'experimental space'.
Update 1 Aug 2022: Going with Kotahi and Sciety as a start; partially setup
5. Reach out to researchers in relevant areas and organizations and ask them to 'submit' their work for 'feedback and potential positive evaluations and recognition', and for a chance at a prize.
The unjournal will *not be an exclusive outlet.* Researchers are free to also submit the same work to 'traditional journals' at any point.
Their work must be publicly hosted, with a DOI. Ideally the 'whole project' is maintained and updated, with all materials, in a single location. We can help enable them to host their work and enable DOI's through (e.g.) Zenodo; even hosted 'dynamic documents' can be DOI'd.
Update 1 Aug 2022: Did a 'bounty' and some searching of our own; plan a 'big public call' after pilot evaluations of 2+ papers
Researchers are encouraged to write and present work in 'reasoning transparent' (as well as 'open science' transparent) ways. They are encouraged to make connections with core EA ideas and frameworks, but without being too heavy-handed. Essentially, we are asking them to connect their research to 'the present and future welfare of humanity/sentient beings'.
Reviews will, by default, be made public and connected with the paper. However, our committee will discuss I. whether/when authors are allowed to withdraw/hide their work, and II. when reviews will be ‘signed’ vs anonymous. In my conversations with researchers, some have been reluctant to ‘put themselves out there for public criticism’, while others seem more OK with this. We aim to have roughly 25 research papers/projects reviewed/evaluated and 'communicated' (to EA audiences) in the first year.
Update July 2022: scaled back to 15 papers
My suggestions on the above, as a starting point...
Given my own background, I would lean towards ‘empirical social science’ (including Economics) and impact evaluation and measurement (especially for ‘effective charitable interventions’)
Administration should be light-touch, to also be attractive to aligned academics
We should build "editorial-board-like" teams with subject/area expertise
We should pay reviewers for their work (I propose $250 for 5 hours of quality reviewing work)
Create a set of rules for 'submission and management', 'which projects enter the review system' (relevance, minimal quality, stakeholders, any red lines or 'musts'), how projects are to be submitted (see above, but let's be flexible), how reviewers are to be assigned and compensated (or 'given equivalent credit')
Rules for reviews/assessments
Reviews to be done on the chosen open platform (likely Prereview) unless otherwise infeasible
Share, advertise, promote the reviewed work
Establish links to all open-access bibliometric initiatives to the extent feasible
Each research paper/project should be introduced in at least one EA Forum post
Laying these out; I have responses to some of these, while others will require further consideration.
Will researchers find it useful to submit/share their work? From my experience (i) as an academic economist and (ii) working at Rethink Priorities, and my conversations with peers, I think people would find this very useful. I would have (and still would).
i. FEEDBACK IS GOLD: It is very difficult to get anyone to actually read your paper, and to get actual useful feedback on your work. The incentive is to publish, not to read; papers are dense and require specific knowledge; people may be reluctant to criticize peers; and economists tend to be free-riders. It is hard to engage seminar audiences on the more detailed aspects of the work, and then one gets feedback on the 'presentation', not the 'paper'. We often use 'submission to journal' as a way to get feedback, but this is slow, not the intended use of the journal process (I've been told), and often results in less-useful feedback. (A common perception is that the referee 'decides what decision to make and then fleshes out a report to justify it'.)
ii. ACADEMICS NEED SOURCES OF TIMELY VALIDATION: The publication process is extremely slow and complicated in Economics (and other fields, in my experience), requiring years of submissions and responses to multiple journals. This imposes a lot of risk on an academic's career, particularly pre-tenure. Having an additional credible source validating the strength of one's work could help reduce this risk. If we do this right, I think hiring and tenure committees would consider it an important source of quality information.
iii. EA ORGS/FUNDERS need both, but the traditional journal process is costly in time and hassle. I think researchers and research managers at RP would be very happy to get feedback through this, as well as an assessment of the quality of their work, and suggestions for alternative methods and approaches. We would also benefit from external signals of the quality of our work, in justifying this to funders such as Open Philanthropy. (OP themselves would value this greatly, I believe. They are developing their own systems for assessing the quality of their funded work, but I expect they would prefer an external source.) However, it is costly for us at RP to submit to academic journals: the process is slow and bureaucratic and noisy, and traditional journals will typically not evaluate work with EA priorities and frameworks in mind. (Note that I suggest the unjournal make these priorities a factor while also assessing the work's rigor in ways that invoke justifiable concerns in academic disciplines.)
I assume that similar concerns apply to other EA research organizations.
iv. OPEN SCIENCE AND DYNAMIC FORMATS
IMO the best and most transparent way to present data-driven work (as well as much quantitative work) is in a dynamic document, where narrative, code, and results are presented in concert. Readers can 'unfold for further details'. The precise reasoning, data, and generation of each result can be traced. These can also be updated and improved with time. Many researchers, particularly those involved in Open Science, find this the most attractive way to work and present their work. However, 'frozen pdf prison' and 'use our bureaucratic system' approaches make this very difficult to use in traditional journals. As the 'unjournal' does not host papers, but merely assesses work with DOIs (which can be, e.g., a hosted web page, frozen at the point of review), we can facilitate this.
Will researchers find it 'safe' to share their work?
A large group of economists and academics tend to be conservative, risk-averse, and leader-following. But there are important exceptions, and also substantial groups that seek to be particularly innovative and iconoclastic.
The key concerns we will need to address (at least for some researchers): i. Will my work be 'trashed publicly in a way that hurts my reputation'? I think this is more of a concern for early-career researchers; more experienced researchers will have thicker skin and realize that it's common knowledge that some people disagree with their approaches. ii. Will this tag me as 'weird or non-academic'? This might be addressed by our making connections to academic bodies and established researchers.
How do we get quality reviews and avoid slacking/free-riding by reviewers? Ideas:
compensation and rewarding quality as an incentive,
recruiting reviewers who seem to have intrinsic motivations,
publishing some ‘signed’ reviews (but there are tradeoffs here as we want to avoid flattery)
longer run: an integrated system of 'rating the reviews', a la StackExchange (I know there are some innovations in process here we'd love to link with)
QUANTIFY and CALIBRATE
We will ask referees to give a set of quantitative ratings in addition to their detailed feedback and discussion. These should be stated explicitly relative to other work they have seen, both within the Unjournal and in general. Referees might be encouraged to 'calibrate': first being given a set of (previously traditionally-published) papers to rank and rate. They should later be reminded of the distribution of the evaluations they have given.
Within our system, evaluations themselves could be stated ‘relative to the other evaluations given by the same referee.’
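To illustrate the 'relative to the same referee' idea, here is a minimal sketch (not an Unjournal tool) that re-expresses one referee's raw ratings as z-scores against their own rating distribution, so that a generous rater and a harsh rater become comparable.

```python
from statistics import mean, pstdev

def relative_ratings(ratings):
    """Re-express one referee's raw ratings as z-scores relative to
    that referee's own rating distribution (illustrative sketch only)."""
    mu, sigma = mean(ratings), pstdev(ratings)
    if sigma == 0:
        return [0.0] * len(ratings)
    return [round((r - mu) / sigma, 2) for r in ratings]

# A hypothetical referee who rates generously: the raw scores cluster high,
# but the relative scores still single out their strongest and weakest calls.
print(relative_ratings([88, 92, 75, 90, 95]))  # [0.0, 0.58, -1.88, 0.29, 1.01]
```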
BENCHMARK: We will also encourage or require referees to provide a predicted/equivalent "traditional publication outcome", and possibly incentivize these predictions. (We could also consider running public prediction markets on this in the longer run, as has been done in other contexts.) This should be systematized. It could be stated as "this project is of sufficient quality that it has a 25% probability of being published in a journal of the rated quality of Nature, and a 50% probability of being published in a journal such as the Journal of Public Economics or better … within the next 3 years." (We can also elicit statements about the impact factor, etc.)
I expect most/many academics who submit their work will also submit it to traditional journals, at least in the first year or so of this project (but ultimately we hope this 'unjournal' system of journal-independent evaluation provides a signal of quality that will supersede The Musty Old Journal). This will thus provide us a way to validate the above predictions, as well as independently establish a connection between our ratings and the 'traditional' outcomes.
PRIZE as a powerful signal/scarce commodity: The "prize for best submissions" (perhaps a graded monetary prize for the top 5 submissions in the first year) will provide a commitment device and a credible signal, to enhance the attractiveness and prestige of this.
We may try to harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, the coin-based 'ResearchHub', etc.
Will the evaluations be valued by gatekeepers (universities, grantmakers, etc.) and policy-makers? This will ultimately depend on the credibility factors mentioned above. I expect they will have value to EA and open-science-oriented grantmakers fairly soon, especially if the publicly-posted reviews are of a high apparent quality.
I expect academia to take longer to come on board. In the medium run they are likely to value it as ‘a factor in career decisions’ (but not as much as a traditional journal publication); particularly if our Unjournal finds participation and partnership with credible established organizations and prominent researchers.
I am optimistic because of my impression that non-traditional-journal outcomes (arXiv and impact factors, conference papers, cross-journal outlets, distill.pub) are becoming the source of value in several important disciplines.
How will we choose referees? How do we avoid conflicts of interest (and the perception of these)?
This is an important issue. I believe there are ‘pretty good’ established protocols for this. I’d like to build specific prescribed rules for doing this, and make it transparent. We may be able to leverage tools, e.g., those involving GPT3 like elicit.org.
COI: We should partition the space of potential researchers and reviewers, and/or establish 'distance measures' (which may themselves be reported along with the review). There should be specified rules, e.g., 'no one from the same organization or an organization that is partnering with the author's organization'. Ideally, EA-org researchers' work should be reviewed by academic researchers, and to some extent vice-versa.
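As a minimal illustration of the kind of prescribed rule this could involve, the sketch below flags the simplest conflicts (shared or partnered organizations). The function, field names, and example organizations are hypothetical; real screening would likely use richer 'distance measures'.

```python
def has_coi(evaluator_org, author_org, partner_orgs=frozenset()):
    """Flag the simplest conflicts of interest: the evaluator shares the
    author's organization, or belongs to an organization that partners
    with the author's organization. Illustrative rule only."""
    return evaluator_org == author_org or evaluator_org in partner_orgs

# Hypothetical examples: same organization is flagged; an unrelated one is not.
print(has_coi("Example Research Org", "Example Research Org"))            # True
print(has_coi("Some University", "Example Research Org",
              partner_orgs=frozenset({"Partner Org"})))                   # False
```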
How to support EA ideas, frameworks, and priorities while maintaining (actual and perceived) objectivity and academic rigor
(Needs discussion)
Why hasn’t this been done before? I believe it involves a collective action problem, as well as a coordination/lock-in problem that can be solved by bringing together the compatible interests of two groups. Academic researchers have expertise, credibility, but they are locked into traditional and inefficient systems. EA organizations/researchers have a direct interest in feedback and fostering this research, and have some funding and are not locked into traditional systems.
Yonatan Cale restating my claim:
Every Econ researcher (interested in publishing) pays a price for having the system set up badly; the price isn't high enough for any one researcher to have an incentive to fix the system for themselves, but as a group, they would be very happy if someone would fix this systematic problem (and they would in theory be willing to "pay" for it, because the price of "fixing the system" is way lower than the sum of the prices that each one of them pays individually)
‘Sustainability’ … Who will pay for these reviews in the longer run
Once this catches on…Universities will pay to support this; they will save massively on journal subscriptions. Governments supporting Open Science will fund this. Authors/research orgs will pay a reasonable submission fee to partly/fully cover the cost of the reviews. EA-aligned research funders will support this.
But we need to show a proof-of-concept and build credibility. The ACX grant funds can help make this happen.
My CV (https://daaronr.github.io/markdown-cv/) should make this clear.
I have been an academic economist for 15-20 years, and I have been deeply involved in the research and publication process, with particular interests in open science and dynamic documents (PhD, UC Berkeley; Lecturer, University of Essex; Senior Lecturer, University of Exeter). My research has mainly been in Economics, but has also involved other disciplines (especially Psychology).
I’m a Senior Economist at Rethink Priorities, where I’ve worked for the past year, engaging with a range of researchers and practitioners at RP and other EA groups
My research has involved EA-relevant themes since the latter part of my PhD. I’ve been actively involved with the EA community since about 2016, when I received a series of ESRC ‘impact grants’ for the innovationsinfundraising.org and giveifyouwin.org projects, working with George Howlett and the CEA
I’ve been considering and discussing this proposal for many years with colleagues in Economics and other fields, and presenting it publicly and soliciting feedback over the past year— mainly through https://bit.ly/unjournal, social media, EA and open science Slack groups and conferences (presenting this at a GPI lunch and at the COS/Metascience conference, as well as in an EA Forum post and the onscienceandacademia post mentioned above).
I have had long 1-1 conversations on this idea with a range of knowledgeable and relevant EAs, academics, open-science practitioners, and technical/software developers, including:
Cooper Smout, head of https://freeourknowledge.org/, which I'd like to ally with (through their pledges, and through an open access journal Cooper is putting together, which the Unjournal could feed into, for researchers needing a 'journal with an impact factor')
Participants in the GPI seminar luncheon
Daniela Saderi of PreReview
Paolo Crosetto (Experimental Economics, French National Research Institute for Agriculture, Food and Environment) https://paolocrosetto.wordpress.com/
Cecilia Tilli, Foundation to Prevent Antibiotics Resistance and EA research advocate
Sergey Frolov (Physicist), Prof. J.-S. Caux, Physicist and head of https://scipost.org/
Peter Slattery, Behaviourworks Australia
Alex Barnes, Business Systems Analyst, https://eahub.org/profile/alex-barnes/
Gavin Taylor and Paola Masuzzo of IGDORE (biologists and advocates of open science)
William Sleegers (Psychologist and Data Scientist, Rethink Priorities)
Nathan Young https://eahub.org/profile/nathan-young/
Edo Arad https://eahub.org/profile/edo-arad/ (mathematician and EA research advocate)
Hamish Huggard (Data science, ‘literature maps’)
Yonatan Cale, who helped me put this proposal together through asking a range of challenging questions and offering his feedback. https://il.linkedin.com/in/yonatancale
My online CV (https://daaronr.github.io/markdown-cv/) has links to almost everything else. I am @givingtools on Twitter and david_reinstein on the EA Forum; see my post on this: https://forum.effectivealtruism.org/posts/Z2jPENrHpY9QSQBDQ/proposal-alternative-to-traditional-academic-journals-for-ea. I also read and discuss this on my podcast; e.g., see https://anchor.fm/david-reinstein/episodes/Journal-slaying-The-Evaluated-Project-Repo-aka-the-Unjournal--httpbit-lyunjournal-Future-EA-Forum-post-e149uc2
Feel free to give either a simple number, or a range, a complicated answer, or a list of what could be done with how much
Over a roughly one-year ‘pilot’ period, I propose the following. Note that most of the costs will not be incurred in the event of the ‘failure modes’ I consider. E.g., if we can’t find qualified and relevant reviewers and authors, these payments will not be made
$15k: Pay reviewers for their time for doing 50 reviews of 25 papers (2 each), at 250 USD per review (I presume this is 4-5 hours of concentrated work) --> 12,500 USD
$5k: to find ways to 'buy off' 100 hours of my time (2-3 hours per week over some 50 weeks) to focus on managing the project, setting up rules/interface, choosing projects to review, assigning reviewers, etc. (I will do this either through paying my employer directly or 'buying time' by getting delivery meals, Uber rides, etc.)
$5k: to 'buy off' 100 hours of time from other 'co-editors' to help, and for a board to meet/review the initial operating principles
$5k: to hire about 100 hours of technical support for 1 year to help authors host and format their work, to tailor the 'experimental' space that PreReview has promised us, and potentially to work with the EA forum and other interfaces
$2.5k: Hire clerical/writing/copy editing support as needed
$7.5k: rewards for ‘authors of the best papers/projects’ (e.g., 5 * 1000 USD … perhaps with a range of prizes) … and/or additional incentives for ‘best reviews’ (e.g., 5 * 250 USD)
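A quick tally of the headline line items above (bookkeeping only; note that the reviewer line is budgeted at $15k, while 50 reviews at $250 each works out to $12,500 of actual payments).

```python
# Sum of the headline budget figures listed above (USD).
budget = {
    "reviewer payments": 15_000,
    "buying off my time": 5_000,
    "co-editors and board time": 5_000,
    "technical support": 5_000,
    "clerical/copy-editing support": 2_500,
    "author and reviewer prizes": 7_500,
}
print(sum(budget.values()))  # 40000
```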
We have an action plan (mainly for EA organizations) and a workspace in the GitBook here: https://app.gitbook.com/o/-MfFk4CTSGwVOPkwnRgx/s/-MkORcaM5xGxmrnczq25/ This also nests several essays discussing the idea, including the collaborative document (with many comments and suggestions) at https://bit.ly/unjournal
Most of the measures of 'small success' are scalable; the funds I am asking for (referee payments, some of my time, etc.) will not be spent, or will be returned to you, if we do not receive quality submissions and commitments to review and assist in the management.
My own forecasts (I've done some calibration training, but these are somewhat off-the-cuff): 80% that we will find relevant authors and referees, and that this will be a useful resource for improving and assessing the credibility of EA-relevant research
60% that we will get the academic world substantially involved in such a way that it becomes reasonably well known, and quality academic researchers are asking to ‘submit their work’ to this without our soliciting their work.
50% that this becomes among the top/major ways that EA-aligned research organizations seek feedback on their work (and the work that they fund — see OpenPhil), and a partial alternative to academic publication
10-25% that this becomes a substantial alternative (or is at the core of such a sea-change) to traditional publication in important academic fields and sub-fields within the next 1-3 years. (This estimate is low in part because I am fairly confident a system along these lines will replace the traditional journal, but less confident that it will be so soon, and still less confident my particular work on this will be at the center of it.) \
Yes
Yes
Peter Slattery: on EA Forum, fork moved to .
Other comments, especially post-grant, in this Gdoc discussion space (embedded below) will be integrated back.
Should research projects be improved and updated 'in the same place', rather than with 'extension papers'?
Small changes and fixes: The current system makes it difficult to make minor updates – even obvious corrections – to published papers. This makes these papers less useful and less readable. If you find an error in your own published work, there is also little incentive to note it and ask for a correction, even if this were possible.
In contrast, a 'living project' could be corrected and updated in situ. If future and continued evaluations matter, authors will have the incentive to do so.
Lack of incentives for updates and extensions: If academic researchers see major ways to improve and build on their past work, these can be hard to get published and get credit for. The academic system rewards novelty and innovation, and top journals are reluctant to publish 'the second paper' on a topic. As this would count as 'a second publication' (for tenure etc.), authors may be accused of double-dipping, and journals and editors may punish them for this.
Clutter and confusion in the literature: Because of the above, researchers often try to spin an improvement to a previous paper as very new and different. They sometimes publish a range of papers getting at similar things, using similar methods, across different journals. This makes it hard for other researchers and readers to understand which paper they should read.
In contrast, a 'living project' can keep these in one place. The author can lay out different chapters and sections in ways that make the full work most useful.
But we recognize there may also be downsides to 'all extensions and updates in a single place'...
Some discussion follows. Note that the Unjournal enables this but does not require it.
"Progress notes": We will keep track of important developments here before we incorporate them into the ." Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.
The SFF grant is now 'in our account' (all is public and made transparent on our OCF page). This makes it possible for us to
move forward in filling staff and contractor positions (see below); and
increase evaluator compensation and incentives/rewards (see below).
We are circulating a press release sharing our news and plans.
Our "Pilot Phase," involving ten papers and roughly 20 evaluations, is almost complete. We just released the evaluation package for "The Governance Of Non-Profits And Their Social Impact: Evidence from a Randomized Program In Healthcare In DRC.” We are now waiting on one last evaluation, followed by author responses and then "publishing" the final two packages at https://unjournal.pubpub.org/. (Remember: we publish the evaluations, responses and synthesis; we link the research being evaluated.)
We will make decisions and award our Impactful Research Prize (and possible seminars) and evaluator prizes soon after. The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will be largely driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
We continue to develop processes and policy around which research to prioritize. For example, we are considering whether we should set targets for different fields, for related outcome "cause categories," and for research sources. This discussion continues among our team and with stakeholders. We intend to open up the discussion further, making it public and bringing in a range of voices. The objective is to develop a framework and a systematic process to make these decisions. See our expanding notes and discussion on What is global-priorities relevant research?
In the meantime, we are moving forward with our post-pilot “pipeline” of research evaluation. Our management team is considering recent prominent and influential working papers from the National Bureau of Economics Research (NBER) and beyond, and we continue to solicit submissions, suggestions, and feedback. We are also reaching out to users of this research (such as NGOs, charity evaluators, and applied research think tanks), asking them to identify research they particularly rely on and are curious about. If you want to join this conversation, we welcome your input.
We are also considering hiring a small number of researchers to each do a one-off (~16 hours) project in “research scoping for evaluation management.” The project is sketched at Unjournal - standalone work task: Research scoping for evaluation management; essentially, summarizing a research theme and its relevance, identifying potentially high-value papers in this area, choosing one paper, and curating it for potential Unjournal evaluation.
We see a lot of value in this task and expect to actually use and credit this work.
If you are interested in applying to do this paid project, please let us know through our CtA survey form here.
Of course, we can't commission the evaluation of every piece of research under the sun (at least not until we get the next grant :) ). Thus, within each area, we need to find the right people to monitor and select the strongest work with the greatest potential for impact, and where Unjournal evaluations can add the most value.
This is a big task and there is a lot of ground to cover. To divide and conquer, we’re partitioning this space (looking at natural divisions between fields, outcomes/causes, and research sources) among our management team and among what we now call "field specialists," who will:
focus on a particular area of research, policy, or impactful outcome;
keep track of new or under-considered research with potential for impact;
explain and assess the extent to which The Unjournal can add value by commissioning this research to be evaluated; and
“curate” these research objects: adding them to our database, considering what sorts of evaluators might be needed, and what the evaluators might want to focus on; and
potentially serve as an evaluation manager for this same work.
Field specialists will usually also be members of our Advisory Board, and we are encouraging expressions of interest for both together. (However, these don’t need to be linked in every case.)
Interested in a field specialist role or other involvement in this process? Please fill out this general involvement form (about 3–5 minutes).
We are also considering how to set priorities for our evaluators. Should they prioritize:
Giving feedback to authors?
Helping policymakers assess and use the work?
Providing a 'career-relevant benchmark' to improve research processes?
We discuss this topic here, considering how each choice relates to our Theory of Change.
We want to attract the strongest researchers to evaluate work for The Unjournal, and we want to encourage them to do careful, in-depth, useful work. We've increased the base compensation for (on-time, complete) evaluations to $400, and we are setting aside $150 per evaluation for incentives, rewards, and prizes.
Please consider signing up for our evaluator pool (fill out the good old form).
As part of The Unjournal’s general approach, we keep track of (and keep in contact with) other initiatives in open science, open access, robustness and transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate from other efforts. This Airtable view gives a preliminary breakdown of similar and partially-overlapping initiatives, and tries to catalog the similarities and differences to give a picture of who is doing what, and in what fields.
Gary Charness, Professor of Economics, UC Santa Barbara
Nicolas Treich, Associate Researcher, INRAE, Member, Toulouse School of Economics (animal welfare agenda)
Anca Hanea, Associate Professor, expert judgment, biosciences, applied probability, uncertainty quantification
Jordan Dworkin, Program Lead, Impetus Institute for Meta-science
Michael Wiebe, Data Scientist, Economist Consultant; PhD University of British Columbia (Economics)
We're working with PubPub to improve our process and interfaces. We plan to take on a KFG membership to help us work with them closely as they build their platform to be more attractive and useful for The Unjournal and other users.
Our next hiring focus: Communications. We are looking for a strong writer who is comfortable communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. Project-based.
We've chosen (and are in the process of contracting) a strong quantitative meta-scientist and open science advocate for the project: “Aggregation of expert opinion, forecasting, incentives, meta-science.” (Announcement coming soon.)
We are also expanding our Management Committee and Advisory Board; see calls to action.
Update from David Reinstein, Founder and Co-Director
With the recent news, we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to consider whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?
Over the next 18 months, we aim to:
Build awareness: (Relevant) people and organizations should know what The Unjournal is.
Build credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.
Expand our scale and scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers and projects over the next 18 months while also producing other valuable outputs and metrics.
I sketch these goals HERE, along with our theory of change, specific steps and approaches we are considering, and some "wish-list wins." Please feel free to add your comments and questions.
While we wait for the new grant funding to come in, we are not sitting on our hands. Our "pilot phase" is nearing completion. Two more sets of evaluations have been posted on our PubPub.
With three more evaluations already in progress, this will yield a total of 10 evaluated papers. Once these are completed, we will choose, announce, and award the recipients of the Impactful Research Prize and the evaluator prizes, and organize online presentations/discussions (maybe linked to an "award ceremony"?).
No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:
Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our "evaluation output"
Communications specialists
Administrative and Operations personnel
Tech support/software developers
Here's a brief and rough description of these roles. And here’s a quick form to indicate your potential interest and link your CV/webpage.
You can also/alternately register your interest in doing (paid) research evaluation work for The Unjournal, and/or in being part of our advisory board, here.
We also plan to expand our Management Committee; please reach out if you are interested or can recommend suitable candidates.
We are committed to enhancing our platforms as well as our evaluation and communication templates. We're also exploring strategies to nurture more beneficial evaluations and predictions, potentially in tandem with replication initiatives. A small win: our Mailchimp signup should now be working, and this update should be automatically integrated.
We are delighted to welcome Jordan Dworkin (FAS) and Nicolas Treich (INRAE/TSE) to our Advisory Board, and Anirudh Tagat (Monk Prayogshala) to our Management Committee!
Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation."
Treich's current research agenda largely focuses on the intersection of animal welfare and economics.
Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science. He is also an active participant in the COS SCORE project.
The Unjournal was recommended/approved for a substantial grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale, and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...
To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.
Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.
Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous call-to-action (see HERE) is still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)
We have published a total of 12 evaluations and ratings of five papers and projects, as well as three author responses. Four of these packages can be found on our PubPub page (most concise list here), and one on our Sciety page here (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.
The two most recently released evaluations (of Haushofer et al, 2020; and Barker et al, 2022) both surround "Is CBT effective for poor households?" [link: EA Forum post]
Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
See the evaluation summaries and ratings, with linked evaluations HERE (Haushofer et al) and HERE (Barker et al).
We are now up to twelve total evaluations of five papers. Most of these are on our PubPub page (we are currently aiming to have all of the work hosted both at PubPub and on Sciety, and gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme, as noted in a recent EA Forum Post:
Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
These are part of Unjournal's 'direct NBER evaluation' stream.
More evaluations coming out soon on themes including global health and development, the environment, governance, and social media.
To round out our initial pilot: We're particularly looking to evaluate papers and projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.
You can now 'chat' with this page, ask questions, and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens."
See our latest post on the EA Forum
Our new platform (unjournal.pubpub.org), enabling DOIs and CrossRef (bibliometrics)
Evaluations of "Artificial Intelligence and Economic Growth"; "self-correcting science"
More evaluations soon
We are pursuing collaborations with replication and robustness initiatives such as the "Institute for Replication" and repliCATS
We are now 'fiscally sponsored' by the Open Collective Foundation; see our page HERE. (Note, this is an administrative thing, it's not a source of funding)
Our Sciety Group is up...
With our first posted evaluation package ("Long Term Cost-Effectiveness of Resilient Foods," Denkenberger et al.): evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.
Two more evaluations 'will be posted soon' (waiting for final author responses).
Working on getting six further papers (projects) evaluated, most of which are part of our NBER "Direct evaluation" track
Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments (a rough sketch of one possible approach follows this list)
Building our platforms, and considering ways to better format and integrate evaluations
with the original research (e.g., through Hypothes.is collaborative annotation)
into the bibliometric record (through DOI's etc)
and with each other.
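As a companion to the list above: here is a minimal, purely illustrative sketch (in Python) of one way evaluators' quantitative judgments could be pooled, assuming each evaluator reports a midpoint and a 90% credible interval and treating each rating as roughly normal. The inverse-variance weighting, the function names, and the example numbers are assumptions for illustration, not The Unjournal's adopted method.

```python
# Purely illustrative: pooling evaluators' ratings (midpoint + 90% interval).
# Treating each rating as normal and using inverse-variance weights is a
# simplifying assumption, not The Unjournal's chosen aggregation method.
from math import sqrt

Z_90 = 1.645  # approximate z-value for a 90% interval under normality


def aggregate(ratings: list[tuple[float, float, float]]) -> tuple[float, float, float]:
    """ratings: list of (midpoint, lower_90, upper_90); returns pooled (mid, lo, hi)."""
    weights, mids = [], []
    for mid, lo, hi in ratings:
        sd = (hi - lo) / (2 * Z_90)   # implied standard deviation for this evaluator
        weights.append(1 / sd ** 2)   # inverse-variance weight
        mids.append(mid)
    total_w = sum(weights)
    pooled_mid = sum(w * m for w, m in zip(weights, mids)) / total_w
    pooled_sd = sqrt(1 / total_w)
    return pooled_mid, pooled_mid - Z_90 * pooled_sd, pooled_mid + Z_90 * pooled_sd


# Example: two evaluators rate a paper 70 [60, 80] and 80 [65, 95] on a 0-100 scale.
print(aggregate([(70, 60, 80), (80, 65, 95)]))
```

Note that naive inverse-variance pooling assumes independent, well-calibrated evaluators and tends to narrow the pooled interval; a real aggregation approach would need to handle disagreement and correlation between evaluators more carefully.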
We are seeking grant funding for our continued operation and expansion (see grants and proposals below). We're appealing to funders interested in Open Science and in impactful research.
We're considering collaborations with other compatible initiatives, including...
replication/reproducibility/robustness-checking initiatives,
prediction and replication markets,
and projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).
We are now under the Open Collective Foundation 'fiscal sponsorship' (this does not entail funding, only a legal and administrative home). We are postponing the deadline for judging the Impactful Research Prize and the prizes for evaluators. Submission of papers and the processing of these has been somewhat slower than expected.
EA Forum: "Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA": recent post and AMA (answering questions about the Unjournal's progress, plans, and relation to effective-altruism-relevant research
March 9-10: David Reinstein will present at the COS Unconference, session on "Translating Open Science Best Practices to Non-academic Settings". See agenda. David will discuss The Unjournal for part of this session.
Evaluators: We have a strong pool of evaluators.
Recall that we pay at least $250 per evaluation, typically more in total (around $350), and we are looking to increase this compensation further. Please fill out THIS FORM (about 3–5 minutes) if you are interested.
Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.
Hope these updates are helpful. Let me know if you have suggestions.
(See sections below)
As part of The Unjournal’s general approach, we keep track of and maintain contact with other initiatives in open science, open access, robustness/transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate from other efforts.
The Airtable view below gives a preliminary breakdown of some initiatives that are the most similar to—or partially overlap—ours, and tries to catalog the similarities and differences to give a picture of who is doing what and in what fields.
See especially eLife and Peer Communities In
We are considering asking evaluators, with compensation, to assist and engage in the process of "robustness replication." This may lead to some interesting follow-on possibilities as we build our potential collaboration with the Institute for Replication and others in this space.
We might ask evaluators discussion questions like these:
What is the most important, interesting, or relevant substantive claim made by the authors (particularly considering global priorities and potential interventions and responses)?
What statistical test or evidence does this claim depend on, according to the authors?
How confident are you in the substantive claim made?
"Robustness checks": What specific statistical test(s) or piece(s) of evidence would make you substantially more confident in the substantive claim made?
If a robustness replication "passed" these checks, how confident would you be then in the substantive claim? (You can also express this as a continuous function of some statistic rather than as a binary; please explain your approach.)
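To illustrate the 'continuous function' option in the question above: an evaluator might map a robustness-check statistic to an updated confidence level rather than giving a binary pass/fail. The sketch below is hypothetical; the logistic form, the parameter values, and the function names are illustrative assumptions, not part of our evaluator guidelines.

```python
# Hypothetical illustration: expressing confidence in a substantive claim as a
# continuous function of a robustness-check statistic (e.g., a t-statistic),
# rather than as a binary pass/fail. The functional form and parameters are
# arbitrary choices for illustration.
import math


def updated_confidence(t_stat: float,
                       prior_confidence: float = 0.6,
                       midpoint: float = 2.0,
                       steepness: float = 1.5) -> float:
    """Map a robustness-check t-statistic to a confidence level in [0, 1].

    prior_confidence: confidence before seeing the robustness check.
    midpoint: t-statistic at which the check leaves confidence unchanged.
    steepness: how sharply confidence responds to the statistic.
    """
    # Logistic adjustment centered at `midpoint`: stronger evidence pushes
    # confidence up, weaker evidence pushes it down; result is clamped to [0, 1].
    shift = 1.0 / (1.0 + math.exp(-steepness * (t_stat - midpoint))) - 0.5
    return min(1.0, max(0.0, prior_confidence + shift))


# Example: weak, borderline, and strong robustness-check results.
for t in (0.5, 2.0, 4.0):
    print(f"t = {t:.1f} -> confidence = {updated_confidence(t):.2f}")
```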
Background:
The Institute for Replication is planning to hire experts to do "robustness replications" of work published in top journals in economics and political science. Code and data sharing are now enforced at many or all of these journals and other important outlets. We want to support their efforts and are exploring collaboration possibilities. We are also considering how best to guide potential future robustness-replication work.
We're happy for you to use whichever process and structure you feel comfortable with when writing your evaluation content.
Remember: The Unjournal doesn’t “publish” and doesn’t “accept or reject.” So don’t give an "Accept," "Revise-and-Resubmit," or "Reject"-type recommendation. We ask for quantitative metrics, written feedback, and expert discussion of the validity of the paper's main claims, methods, and assumptions.
Semi-relevant: Econometric Society: Guidelines for referees
Report: Improving Peer Review in Economics: Stocktaking and Proposal (Charness et al 2022)
Open Science
PLOS (Conventional but open access; simple and brief)
Peer Community In... Questionnaire (Open-science-aligned; perhaps less detail-oriented than we are aiming for)
Open Reviewers Reviewer Guide (Journal-independent “PREreview”; detailed; targets ECRs)
General, other fields
The Wiley Online Library (Conventional; general)
"Peer review in the life sciences (Fraser)" (extensive resources; only some of this is applicable to economics and social science)
Collaborative template: RRR assessment peer review
Introducing Structured PREreviews on PREreview.org
‘the 4 validities’ and seaboat
The Peer Communities In (PCI) organization and the Peer Community Journal, a diamond open access journal, have considerable overlap with The Unjournal model. They started out (?) as a "recommendation system" but have now established the PCI Journal to "publish unconditionally, free of charge, exclusively, immediately (as soon as possible) [and on an opt-in basis] . . . any article recommended by a PCI."
Especially relevant to The Unjournal are these aspects of their program:
The standard "recommender" model has an approved recommender volunteer to play the role of managing editor for a paper and make the decisions; authors are consulted to recommend reviewers.
This might bring up concerns about conflict of interest, e.g., I become "recommender" for a friend or for the stuff that supports my agenda.
There are 17 "Peer Communities in" (i.e., research areas)—mainly in life sciences (some seem to have just started; there are no public preprints up).
Authors must
They (opt-in) "publish" the article rather than being an "overlay journal," to improve their indexing possibilities (but this is opt-in; you can also submit elsewhere and there are "PCI-friendly" journals).
They depend on volunteer evaluations.
Their evaluation is 0/1 and descriptive rather than quantitative.
Sciety is essentially a hub for curating the sort of evaluations that Unjournal aims to do. Users can access research works that have been publicly evaluated.
There are several initiatives for public, and sometimes journal-independent, peer evaluation, including around two dozen groups listed on Sciety, such as Biophysics Colab, Rapid Reviews COVID-19, and eLife. However, these are nearly exclusively in biology and related areas.
Sciety’s mission is to grow a network of researchers who evaluate, curate and consume scientific content in the open. In doing so, we will support several long-term changes to scientific communication:
Change peer review to better recognize its scholarly contribution
Shift the publishing decision from editors to authors
Move evaluation and curation activity from before to after publication
Our community-driven technology effort is producing an application that can support the changes in behaviour required to secure this future.
Link: "Asterisk is a quarterly magazine of clear writing and clear thinking about things that matter."
Asterisk is a new quarterly journal of ideas from in and around Effective Altruism. Our goal is to provide clear, engaging, and deeply researched writing about complicated questions. This might look like a superforecaster giving a detailed explanation of the reasoning they use to make a prediction, a researcher discussing a problem in their work, or a deep dive into something the author noticed didn’t quite make sense. While everything we publish should be useful (or at least interesting) to committed EAs, our audience is the wider penumbra of people who care about improving the world but aren't necessarily routine readers of, say, the EA Forum.
Includes "Speculative pieces with 'epistemic signposts'"
Possible scope for collaboration or sharing
Followup on crucial research—I will share non-sensitive parts of Airtable
Sharing a database/CRM of
Experts to vouch
Interested academics and good writers
Shared thinking on "what is relevant" and "how to classify things"
Unjournal could "feed in" to Asterisk: Academic article, then you do a writeup; they have funding, can pay authors ~$4,000 for 4,000 words; can't guarantee that academic work will feed into Asterisk
Passing back and forth relevant work and directions to go in
Some shared cross-promotion (e.g., at universities and in policy circles, where both Unjournal and Asterisk are relevant)
eLife is a fairly well-respected (?) journal in the life sciences. Their New Model (originally called "publish, review, curate") was big news. Their three-month update seems fairly stable and successful. Here's their FAQ. Their model is similar to ours in many ways, but it's mainly or exclusively for life sciences. They use Sciety for curation.
They don't have explicit quantitative metrics, but an "eLife assessment . . . is written with the help of a common vocabulary to ensure consistency," which may proxy this.
Evaluators (reviewers) are not compensated. ("We offer remuneration to our editors but not to our peer reviewers.")
Reviewers' names are not displayed. ("All public comments posted alongside a preprint will be signed by eLife and not by individuals, putting the onus on eLife as an organisation and community to ensure that the outputs of our peer-review process are of the highest standard.")
They charge a $2,000 APC. Presumably, this is true for all "reviewed preprints" on the eLife website, whether or not you request it become a "version of record."
The evaluation is non-exclusive unless you request that the reviewed preprint be a "'Version of Record' that will be sent to indexers like PubMed and can be listed in funding, job applications and more."
Some share of the work they cover consists of registered reports.
Ben West on how to make the EA Forum involve more rigorous processes
Ozzy at QURI — I will contact him about "how to do better evaluation"
7 Feb 2023 We have considered and put together:
See: Guidelines for evaluators
... including descriptive and quantitative (rating and prediction elements). With feedback from evaluators and others, we are continuing to build and improve these guidelines.
See sections below.
Rethink Priorities will act as fiscal sponsor for this, to help administer payments. They will also receive $5,000 to cover roughly two hours/week of Reinstein's time on this project.
Administering payments to referees, researchers, etc.
We will need to make small payments to (say) 20–50 different referees, 5–10 committee members and "editorial managers," 5–10 research prize winners, as well as clerical and IT assistants.
LTFF:
Please let us know how you would like your grant communicated on the ACX blog, e.g., if you'd like Scott to recommend that readers help you in some way (see this post for examples).
See #acx-and-ltff-media
Organization/discussion following a thread in...
Some conversation highlights:
Kris Gulati: Recently I've been talking to more global priorities-aligned researchers to get to know what people are working on. I noticed they're somewhat scattered around (Stanford, PSE, Chicago, Oxford etc.). Additionally, sometimes established academics don't always entirely grasp global priorities-focused work and so it can be tough to get feedback on ideas from supervisors or peers when it's pretty different to the more orthodox research many academics focus on. One way of remedying this is to have an informal seminar series where people interested in GP work present early stages ideas and can receive feedback on their ideas, etc.
David Mannheim: Yes, this seems very promising. And I think that it would be pretty easy to get a weekly seminar series together on this on Zoom.
Robin Hanson: Why limit it to PhD students? All researchers can gain from feedback, and can offer it.
Eva Vivalt: Sounds great. GPI also has seminars on global priorities research in philosophy and economics that might be of interest. . . . [followed by some notes of caution] I'm just worried about stretching the senior people too thin. I've been attending the econ ones remotely for a while, and only this semester did I finally feel like it was really mature; a senior person less would be a setback. I fully think there should be many groups even within econ, and at different institutions; that would be a healthy ecosystem.
Kris, responding to Dave Rhys-Bernard: If GPI's seminar series is meant to be private, then it's worth running something additional, given we can get a decent critical mass of attendance and some senior people are happy to attend.
DR: I think a focus on empirical economics, social science, and program evaluation would be most promising (and I could help with this). We could also incorporate "applications of economic theory and decision theory." Maybe I would lean away from philosophy and "fundamental theory," as GPI's seminar seems to concentrate on that.
Rethink Priorities would probably (my guess) be willing to attach our name to it and a decent number of our research staff would attend. I think GPI might be interested, hopefully Open Philanthropy and other organizations. Robin Hanson and other academics have expressed interest.
We could try to run a minimal "proof of concept" online seminar once per month, for example
Start with...
Presentations of strong, solid working papers and research projects from reputable authors
EA-adjacent and -aligned academics and academically connected EA-org researchers (at RP, Open Phil, GPI, etc.)
"Job-markety PhD students"
Make it desirable to present
Selective choices, "awards" with small stipends? Or "choose a donation"?
Guarantee of strong feedback and expertise
Communication/coverage of work
Encourage experts to attend and give concrete feedback
Open and saved chat window
Write up feedback; consider drawing future presenters from the attending crowd
DR: I would not publicize this until we get #the-featured-research-seminar-series off the ground . . . just do it informally, perhaps.
15-20 minute presentations
Provide or link writeup for follow-up comments and #collaborative-annotated-feedback
Such a seminar needs association with credible people and orgs to get participation.
Do we need any funding for small honorariums or some such?
Do we want to organize "writeups" and publicity? Should we gain support for this?
Economic theory and empirics involve largely different groups of researchers . . . I think a focus on empirical and applied work would have a critical mass.
See draft above
I think a really neat feature, as a companion to the seminar, could be that the author would (ideally) post a working paper or project website and everyone would leave collaborative annotation comments on it.
This kind of feedback could be golden for the author.
GPI lunchtime seminar (not public)
EA Global talks
RP has internal talks; hope to expand this
Identify a small set of papers or projects as representative first-cases; use to help test the system we are building in a concrete manner.
In doing the above, we are also collecting a longer list of key papers, projects, authors, topics, issues, etc.
Post on EA Forum (and other places) and present form (see view at bottom of this section) promoting our call for papers further, with bounty.
2. Search for most-cited papers (within our domain) among EA-aligned researchers and organizations.
3. Dig into existing lists, reviews, and syllabi, such as:
GPI research agenda (includes many posed questions)
Open Philanthropy "questions that might affect our grantmaking" (needs updating? few academic links)
Syllabi: Pablo's list; Economics focus list; David Rhys-Bernard's syllabus (link to my commented/highlighted version)
5. Appeal directly to authors and research groups
6. Cross-promote with How to get involved
Pete Slattery: "Do a complete test run using a single paper and team…" Thus, we aim to identify a small set of papers (or projects), maybe 2–3, that seem like good test and example cases, and offer a bounty for projects we choose as test cases.
Note that much of the work identified here has already been peer-reviewed and "published." While we envision that The Unjournal may assess papers that are already published in traditional journals, these are probably not the best case for the PoC. Thus, we de-prioritize these for now.
Set up the basic platforms for posting and administering reviews and evaluations and offering curated links and categorizations of papers and projects.
7 Feb 2023
Set up Kotahi page HERE
Configured it for submission and management
Mainly configured for evaluation, but it needs bespoke configuration to be efficient and easy for evaluators, particularly for the quantitative ratings and predictions. Thus we are using Google Docs (or cryptpads) for the pilot. We will configure Kotahi further with additional funds.
Evaluations are curated in our Sciety.org group, which integrates these with the publicly-hosted research.
7 Feb 2023: We are working on
The best ways to get evaluations from "submissions on Kotahi" into Sciety,
... with the curated link to the publicly-hosted papers (or projects) on a range of platforms, including NBER
Ways to get DOIs for each evaluation and author response
Ways to integrate evaluation details as 'collaborative annotations' (with hypothes.is) into the hosted papers
(We currently use a hypothes.is workaround to have this feed into Sciety so these show up as ‘evaluated pre-prints’ in their public database, gaining a DOI.)
Notes, exploring the platform.
Status: 7 Feb 2023
Volunteer pool of 80+ reviewers (see Airtable), responding to How to get involved and other outreach
For our initial 3 focal pilot papers we have a total of 8 completed evaluations (2 of these are completed subject to some final checks).
For the remaining 7 pilot papers, we have roughly 6 agreed evaluators so far (we aim for 2-3 per paper)
Our brief application to a regranting program (first round) is linked here and embedded below, verbatim.
Content from this grant application is linked here and embedded below, verbatim.
Update Feb. 2024: We are moving the discussion of the details of this process to an internal Coda link (here, accessible by team members only). We will present an overview in broad strokes below.
See also Mapping evaluation workflow for an overview and flowchart of our full process (including the evaluation manager role).
Compensation: As of December 2023, evaluation managers are compensated a minimum of $300 per project, and up to $500 for detailed work. Further work on 'curating' the evaluation, engaging further with authors and evaluators, writing detailed evaluation summary content, etc., can earn up to an additional $200.
If you are the evaluation manager, please follow the process described in our private Coda space here.
Engage with our previous discussion of the papers: why we prioritized this work, what sort of evaluators would be appropriate, and what to ask them to do.
Inform and engage with the paper's authors, asking them for updates and requests for feedback. The process varies depending on whether the work is part of our "Direct evaluation" track or whether we require authors' permission.
Find and contact potential evaluators with relevant expertise. We generally seek two evaluators per paper.
Suggest research-specific issues for evaluators to consider. Guide evaluators on our process.
Read the evaluations as they come in, suggest additions or clarifications if necessary.
Rate the evaluations for awards and bonus incentives.
Share the evaluations with the authors, requesting their response.
Optionally, provide a brief "evaluation manager's report" (synthesis, discussion, implications, process) to accompany the evaluation package.
See also: Protecting anonymity
We give the authors two weeks to respond before publishing the evaluation package (and they can always respond afterwards).
Once the evaluations are up on PubPub, reach out to the evaluators again with the link, in case they want to view their evaluation and the others. The evaluators may be allowed to revise their evaluation, e.g., if the authors find an oversight in the evaluation. (We are working on a policy for this.)
At the moment (Nov. 2023) we don't have any explicit 'revise and resubmit' procedure, as part of the process. Authors are encouraged to share changes they plan to make, and a (perma)-link to where their revisions can be found. They are also welcome to independently (re)-submit an updated version of their work for a later Unjournal evaluation.
Notable:
PREreview.org
Mercatus commissions external reviews
NBER, CEPR etc -- very loosely filtered within member network
World Bank, Federal Reserve, etc. Internal review?
Open Philanthropy?
This page should explain, or link to, clear and concise explanations of the key resources, tools, and processes relevant to members of The Unjournal team and others involved.
5 Sep 2024: Much of the information below is out of date. We have moved most of this content to our internal (Coda) system (but may move some of it back into hidden pages here to enable semantic search)
See also (and integrate): Jordan's 'Onboarding notes'
The main platforms for the management team are outlined below with links provided.
Please ask for group access, as well as access to private channels, especially "management-policies". Each channel should have a description and some links at the top.
We are no longer using Airtable; the process and instructions have been moved into Coda.
See Tech scoping
Management team: You don't need to edit the GitBook if you don't want to, but we're trying to use it as our main place to 'explain everything' to ourselves and others. We will try to link all content here. Note you can use 'search' and 'lens' to look for things.
Access to PubPub is mainly needed only for doing 'full-service evaluation manager work'.
Please ask for access to this drive. This drive contains meeting notes, discussion, grant applications and tech details.
This is for submitting invoices for your work.
The main platforms needed for the advisory board are outlined below with links provided.
Members of the advisory board can join our Slack (if they want). They can have access to private channels (subject to ) other than the 'management-policies' channel
We are no longer using Airtable (except to recover some older content); the process and instructions have been moved into Coda.io.
In addition to the management team platforms explained above, additional information for how to use the platforms specifically for managing evaluations is outlined below.
We are no longer using Airtable; the process and instructions have been moved into Coda.
For details on our current PubPub process, please see this Google Doc (in the Google Drive under "hosting and tech").
Airtable: Get to know its features; it's super-useful. E.g., 'views' provide different pictures of the same information. 'Link' field types connect different tables by their primary keys, allowing information and calculations to flow back and forth.
Airtable table descriptions: these can be seen by hovering over the '(i)' symbol for each tab. Many of the columns in each tab also have descriptions.
Additional Airtable security: We keep more sensitive information in this Airtable encrypted, or move it to a separate table that only David Reinstein has access to.
Use discretion in sharing: advisory board members might be authors, evaluators, job candidates, or parts of external organizations we may partner with
We designed and disseminated a survey taken by over 1,400 economists in order to (i) understand their experiences with peer review and (ii) collect opinions about potential proposals to improve the system.
...
We reviewed the existing literature about peer review, drawing on sources from inside and outside of economics. ... We then built a (non-comprehensive) themed bibliography,
... we took the additional step of preparing a list of over 160 proposals.
Other peer-review models: Our current peer-review system relies on the feedback of a limited number of ad hoc referees, given after a full manuscript has been produced. We consider several changes that could be made to this model, including:
Post-publication peer review: Submissions could be published immediately and then subjected to peer review, or they could be subject to continued evaluation at the conclusion of the standard peer-review process.
Peer review of registered reports: Empirical papers could be conditionally accepted before the results are known, based on their research question and design. A limited number of journals have started to offer publication tracks for registered reports.
Crowdsourced peer review and prediction markets: Rather than relying on a small number of referees, the wisdom of crowds could be leveraged to provide assessments of a manuscript's merits.
Non-economists and non-academics as referees: Besides enlarging the size of the pool of referees who assess a paper, the diversity of the pool could be increased by seeking the opinion of researchers from other disciplines or non-academics, such as policy makers.
Collaborative peer review platforms: Communication between authors, reviewers, and editors could be made more interactive, with the implementation of new channels for real-time discussion. Collaborative platforms could also be set up to solicit feedback before journal submission occurs.
This initiative, and the EA/global-priorities-focused Unjournal, will interact with the EA Forum and build on initiatives emerging there.
Some of these links come from a conversation with Aaron Gertler
Here's where to suggest new Forum features.
Here's an example of a PR FAQ post that led us to develop a new feature.
Note: Reinstein and Hamish Huggard have worked on tools to help transform R-markdown and bookdown files. Some work can be found on this Repo (but may need some explanation).
Jaime Sevilla has thoughts on creating a peer-review system for the Forum. (See embedded doc below, link here.)
To create a quick and easy prototype to test, you fork the EA Forum and use that fork as a platform for the Unjournal project (maybe called something like "The Journal of Social Impact Improvement and Assessment").
People (ideally many from EA) would use the Forum-like interface to submit papers to this Unjournal.
These papers would look like EA Forum posts, but with an included OSF link to a PDF version. Any content (e.g., slides or video) could be embedded in the submission.
All submissions would be reviewed by a single admin (you?) for basic quality standards.
Most drafts would be accepted to The Unjournal.
Any accepted drafts would be publicly "peer reviewed." They would achieve peer-reviewed status when >x (3?) people from a predetermined or elected board of editors or experts had publicly or anonymously reviewed the paper by commenting publicly on the post. Reviews might also involve rating the draft on relevant criteria (INT?). Public comment/review/rating would also be possible.
Draft revisions would be optional but could be requested. These would simply be new posts with version X/v X appended to the title.
All good comments or posts to the journal would receive upvotes, etc., so authors, editors and commentators would gain recognition, status and "points" from participation. This is sufficient for generating participation in most forums and notably lacking in most academic settings.
Good papers submitted to the journal would be distinguished by being more widely read, engaged with, and praised than others. If viable, they would also win prizes. As an example, there might be a call for papers on solving issue x with a reward pool of grant/unconditional funding of up to $x for winning submissions. The top x papers submitted to The Unjournal in response to that call would get grant funding for further research.
A change in rewards/incentives (from "I had a paper accepted/cited" to "I won a prize") seems to have various benefits.
It still works for traditional academic metrics—grant money is arguably even more prized than citations and publication in many settings
It works for non-academics who don't care about citations or prestigious journal publications.
As a metric, "funds received" would probably better track researchers' actual impact than their citations and acceptance in a top journal. People won't pay for more research that they don't value, but they will cite or accept that to a journal for other reasons.
Academics could of course still cite the DOIs and get citations tracked this way.
Reviewers could be paid per-review by research commissioners.
Here is a quick example of how it could work for the first run: Open Philanthropy calls for research on something they want to know about (e.g., interventions to reduce wild animal suffering). They commit to provide up to $100,000 in research funding for good submissions and $10,000 for review support. Ten relevant experts apply and are elected to the expert editorial boards to review submissions. They will receive 300 USD per review and are expected to review at least x papers. People submit papers; these are reviewed; OP awards follow-up prizes to the winning papers. The cycle repeats with different funders, and so on.
I suppose I like the above because it seems pretty easy and actionable to do as a test run for something to refine and scale. I estimate that I could probably do it myself if I had 6–12 months to focus on it. However, I imagine that I am missing a few key considerations, as I am usually over-optimistic! Feel free to point those out and offer feedback.
9 Apr 2024: This section outlines our management structure and polices. More detailed content is being moved to our private (Coda.io) knowledge base.
Tech, tools and resources has been moved to its own section.
Updated 11 Jan 2023
The official administrators are David Reinstein (working closely with the Operations Lead) and Gavin Taylor; both have control and oversight of the budget.
Major decisions are made by majority vote by the Founding Committee (aka the ‘Management Committee’).
Members: #management-committee
Advisory board members are kept informed and consulted on major decisions, and relied on for particular expertise.
Advisory Board Members: #advisory-board
Did the people who suggested the paper suggest any evaluators?
We prioritize our "evaluator pool" (people who signed up; see "how to get involved")
Expertise in the aspects of the work that need evaluation
Interest in the topic/subject
Conflicts of interest (especially co-authorships)
Secondary concerns: Likely alignment and engagement with Unjournal's priorities. Good writing skills. Time and motivation to write the evaluation promptly and thoroughly.
Mapping collaborator networks through Research Rabbit
We use a website called Research Rabbit (RR).
Our RR database contains papers we are considering evaluating. To check potential COI, we use the following steps (a rough programmatic sketch of an equivalent check follows below):
After choosing a paper, we select the button "these authors." This presents all the authors for that paper.
After this, we choose "select all," and click "collaborators." This presents all the people that have collaborated on papers with the authors.
Finally, by using the "filter" function, we can determine whether the potential evaluator has ever collaborated with an author from the paper.
If a potential evaluator has no COI, we will add them to our list of possible evaluators for this paper.
Note: Coauthorship is not a disqualifier for a potential evaluator; however, we think it should be avoided where possible. If it cannot be avoided, we will note it publicly.
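For illustration only, here is how the same coauthorship check could be run programmatically on an exported collaborator list. Research Rabbit itself is used through its web interface; the CSV file, its 'collaborator' column, and the helper functions below are hypothetical assumptions.

```python
# Hypothetical sketch: checking a candidate evaluator against a CSV export of
# the paper authors' collaborators (the file and column name are assumptions).
import csv


def load_collaborators(path: str) -> set[str]:
    """Read a CSV with a 'collaborator' column and return a set of normalized names."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["collaborator"].strip().lower() for row in csv.DictReader(f)}


def has_coi(candidate_evaluator: str, collaborators: set[str]) -> bool:
    """Flag a potential conflict if the candidate has coauthored with any of the authors."""
    return candidate_evaluator.strip().lower() in collaborators


# Example usage (placeholder file name and evaluator):
# collaborators = load_collaborators("paper_author_collaborators.csv")
# print(has_coi("Jane Doe", collaborators))
```

As noted above, a match would not automatically disqualify an evaluator, but it would be flagged and, if unavoidable, noted publicly.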
This page is mainly for The Unjournal management, advisory board and staff, but outside opinions are also valuable.
Unjournal team members:
Priority 'ballot issues' are given in our 'Survey form', linked to the Airtable (ask for link)
Key discussion questions are in the broad_issue_stuff view in the questions table, linking discussion Google Docs.
We are considering a second stream to evaluate non-traditional, less formal work, not written with academic standards in mind. This could include the strongest work published on the EA Forum, as well as a range of further applied research from EA/GP/LT linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, Faunalytics, etc., as well as EA-adjacent organizations and relevant government white papers. See comments here; see also Pete Slattery’s proposal here, which namechecks the Unjournal.
E.g., for
We further discuss the case for this stream and sketch and consider some potential policies for this HERE.
Internal discussion space: Unjournal Evaluator Guidelines & Metrics
DR: I suspect that signed reviews (cf blog posts) provide good feedback and evaluation. However, when it comes to rating (quantitative measures of a paper's value), my impression from existing initiatives and conversations is that people are reluctant to award anything less than 5/5 'full marks'.
Power dynamics: referees don't want to be 'punished', may want to flatter powerful authors
Connections and friendships may inhibit honesty
'Powerful referees signing critical reports' could hurt ECRs
Public reputation incentive for referees
(But note single-blind paid review has some private incentives.)
Fosters better public dialogue
Inhibits obviously unfair and impolite 'trashing'
Author and/or referee choose whether it should be single-blind or signed
Random trial: We can compare empirically (are signed reviews less informative?)
Use a mix (1 signed, 2 anonymous reviews) for each paper
We may revisit our "evaluators decide if they want to be anonymous" policy. Changes will, of course, never apply retroactively: we will carefully keep our promises. However, we may consider requesting that certain evaluators/evaluations specifically be anonymous, or that they publish their names. A mix of anonymous and signed reviews might be ideal, leveraging some of the benefits of each.
We are also researching other frameworks, templates, and past practices; we hope to draw from validated, theoretically grounded projects such as RepliCATS.
See the 'IDEAS protocol' and Marcoci et al, 2022
#considering-for-future-enabling-minor-revisions
Should we wait until all commissioned evaluations are in, as well as authors' responses, and release these as a group, or should we sometimes release a subset of these if we anticipate a long delay in others? (If we did this, we would still stick by our guarantee to give authors two weeks to respond before release.)
15 Aug 2023: We are organizing some meetings and working groups, and building some private spaces ... where we are discussing 'which specified research themes and papers/projects we should prioritize for UJ evaluation.'
This is guided by concerns we discuss in other sections.
Research we prioritize, along with short comments and ratings on its prioritization, is currently maintained in our Airtable database (under 'crucial_research'). We consider 'who covers and monitors what' (in our core team) in the 'mapping_work' table. This exercise suggested some loose teams and projects. I link some (private) Gdocs for those project discussions below. We aim to make a useful discussion version/interface public when this is feasible.
Team members and field specialists: You should have access to a Google Doc called "Unjournal Field Specialists+: Proposed division (discussion), meeting notes", where we are dividing up the monitoring and prioritization work.
Some of the content in the sections below will overlap.
'Impactful, Neglected, Evaluation-Tractable' work in the global health & RCT-driven intervention-relevant part of development economics
Mental health and happiness; HLI suggestions
GiveWell-specific recommendations and projects
Governance/political science
Global poverty: Macro, institutions, growth, market structure
Evidence-based policy organizations, their own assessments and syntheses (e.g., 3ie)
How to consider and incorporate adjacent work in epidemiology and medicine
Syllabi (and ~agendas): Economics and global priorities (and adjacent work)
Microeconomic theory and its applications? When/what to consider?
The economics of animal welfare (market-focused; 'ag econ'), implications for policy
Attitudes towards animals/animal welfare; behavior change and 'go veg' campaigns
Impact of political and corporate campaigns
Environmental economics and policy
Moral psychology/psychology of altruism and moral circles
Innovation, R&D, broad technological progress
Meta-science and scientific productivity
Social impact of AI (and other technology)
Techno-economic analysis of impactful products (e.g., cellular meat, geo-engineering)
Pandemics and other biological risks
Artificial intelligence; AI governance and strategy (is this in the UJ wheelhouse?)
International cooperation and conflict
Long term population, growth, macroeconomics
Normative/welfare economics and philosophy (should we cover this?)
Empirical methods (should we consider some highly-relevant subset, e.g., meta-analysis?)
Psychology: How can UJ source and evaluate credible work in psychology? What to cover, when, who, with what standards...
You can also search and query this Gitbook (press control-K or command -k)
About The Unjournal
The Unjournal in a nutshell (and more)
Get involved
Apply to join our team, join our evaluator pool, do an independent evaluation, submit research, and more
Our team
Management, advisors, research field specialists, contractors
Our plan of action
The Unjournal pilot and beyond
Research with impact potential database
Includes work we evaluated and are considering
FAQs for Authors
For authors submitting research, & whose work we're considering or evaluating.
Unjournal evaluations+
Evaluations, ratings, summaries, responses (PubPub)
Map of our workflow
Flowchart and description of our intake and evaluation processes
Guidelines for evaluators
What we ask (and pay) evaluators to do
You can also press ⌘k or control-k to search or query our site.
The Unjournal is now an independent 501(c)(3) organization. We have new (and hopefully simpler and easier) systems for submitting expenses.
Evaluators: to claim your payment for evaluation work, please complete this very brief form.
You will receive your payment via a Wise transfer (they may ask you for your bank information if you don't have an account with them).
We aim to process all payments within one week.
Confidentiality: Please note that even though you are asked to provide your name and email, your identity will be visible only to The Unjournal administrators, for the purposes of making this payment. The form asks for the title of the paper you are evaluating; if you are uncomfortable providing this, please let us know and we can find another approach.
This information should be moved to a different section