A "curated guide" to this GitBook; updated June 2023
You can now ask questions of this GitBook using a chatbot: click the search bar or press Cmd-K and choose "Ask GitBook."
Frequently Asked Questions (FAQ): for authors, evaluators, etc.
Explanations & outreach: writeups of the main points for a few different audiences.
Why Unjournal?: the important benefits of journal-independent public evaluation and The Unjournal's approach, with links to deeper commentary.
Our policies (evaluation & workflow): how we choose papers/projects to evaluate, how we assign evaluators, and so on.
Parallel/partner initiatives and resources: groups we work with; comparing approaches.
What is global-priorities-relevant research?: what research are we talking about, and what will we cover?
These are of more interest to people within our team; we are sharing these in the spirit of transparency.
Plan of action: a "best feasible plan" for going forward.
Grants and proposals: successful proposals (ACX, SFF), other applications, and initiatives.
UJ Team (resources, onboarding): key resources and links for managers, advisory board members, staff, team members, and others involved with The Unjournal project.
Note: we have moved some of this "internal interest content" over to our Coda.io knowledge base.
19 Feb 2024: We are not currently hiring, but we expect to do so in the future.
To indicate your potential interest in roles at The Unjournal, such as those described below, please fill out this quick survey form and link (or upload) your CV or webpage.
If you already filled out this form for a role that has changed titles, don’t worry. You will still be considered for relevant and related roles in the future.
If you add your name to this form, we may contact you with offers of paid project work and tasks.
Furthermore, if you are interested in conducting paid research evaluation for The Unjournal, or in joining our advisory board, please complete the form linked here.
Feel free to email contact@unjournal.org with any questions.
Administration, operations and management roles
Research & operations-linked roles & projects
Standalone project: Impactful Research Scoping (temporarily paused)
Express interest in any of these roles in our survey form.
The Unjournal, a not-for-profit collective under the umbrella and fiscal sponsorship of the Open Collective Foundation, is an equal-opportunity employer and contractor. We are committed to creating an inclusive environment for all employees, volunteers, and contractors. We do not discriminate on the basis of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetic information, disability, age, or veteran status.
See our data protection statement linked here.
In addition to the jobs and paid projects listed here, we are expanding our management team, advisory board, field specialist team pool, and evaluator pool. Most of these roles involve compensation/honorariums. See Advisory/team roles (research, management)
As of December 2023, the prizes below have been chosen and will soon be announced. We are also scheduling an event linked to this prize. However, we are preparing for even larger author and evaluator prizes for our next phase. Submit your research to The Unjournal or serve as an evaluator to be eligible for future prizes (details to be announced).
Submit your work to be eligible for our “Unjournal: Impactful Research Prize” and a range of other benefits including the opportunity for credible public evaluation and feedback.
First-prize winners will be awarded $ , and the runners-up will receive $1,000.
Note: these are the minimum amounts; we will increase these if funding permits.
Prize winners will have the opportunity (but not the obligation) to present their work at an online seminar and prize ceremony co-hosted by The Unjournal, Rethink Priorities, and EAecon.
To be eligible for the prize, submit a link to your work for public evaluation here.
Please choose “new submission” and “Submit a URL instead.”
The latter link requires an ORCID ID; if you prefer, you can email your submission to contact@unjournal.org.
The Unjournal, with funding from the Long Term Future Fund and the Survival and Flourishing Fund, organizes and funds public-journal-independent feedback and evaluation. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation, and aim to expand this widely. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
We aim to publicly evaluate 15 papers (or projects) within our pilot year. This award will honor researchers doing robust, credible, transparent work with a global impact. We especially encourage the submission of research in "open" formats such as hosted dynamic documents (Quarto, R-markdown, Jupyter notebooks, etc.).
The research will be chosen by our management team for public evaluation by 2–3 carefully selected, paid reviewers based on an initial assessment of a paper's methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch out these criteria here.
All evaluations, including quantitative ratings, will be made public by default; however, we will consider "embargoes" on this for researchers with sensitive career concerns (the linked form asks about this). Note that submitting your work to The Unjournal does not imply "publishing" it: you can submit it to any journal before, during, or after this process.
If we choose not to send your work out to reviewers, we will try to at least offer some brief private feedback.
All work evaluated by The Unjournal will be eligible for the prize. Engagement with The Unjournal, including responding to evaluator comments, will be a factor in determining the prize winners. We also have a slight preference for giving at least one of the awards to an early-career researcher, but this need not be determinative.
Our management team and advisory board will vote on the prize winners in light of the evaluations, with possible consultation of further external expertise.
Deadline: Extended until 5 December (to ensure eligibility).
Note: In a subsection below, Recap: submissions, we outline the basic requirements for submissions to The Unjournal.
The prize winners for The Unjournal's Impactful Research Prize were selected through a multi-step, collaborative process involving both the management team and the advisory board. The selection was guided by several criteria, including the quality and credibility of the research, its potential for real-world impact, and the authors' engagement with The Unjournal's evaluation process.
Initial Evaluation: All papers that were evaluated by The Unjournal were eligible for the prize. The discussion, evaluations, and ratings provided by external evaluators played a significant role in the initial shortlisting.
Management and Advisory Board Input: Members of the management committee and advisory board were encouraged to write brief statements about papers they found particularly prize-worthy.
Meeting and Consensus: A "prize committee" meeting was held with four volunteers from the management committee to discuss the shortlisted papers and reach a consensus. The committee considered both the papers and the content of the evaluations. Members of the committee allocated a total of 100 points among the 10 candidate papers. We used this to narrow the list down to a shortlist of five papers.
Point Voting: The above shortlist and the notes from the accompanying discussion were shared with all management committee and advisory board members. Everyone in this larger group was invited to allocate up to 100 points among the shortlisted papers (and asked to allocate fewer points if they were less familiar with the papers and evaluations).
Special Considerations: We decided that at least one of the winners had to be a paper submitted by the authors or one where the authors substantially engaged with The Unjournal's processes. However, this constraint did not prove binding. Early-career researchers were given a slight advantage in our consideration.
Final Selection: The first and second prizes were given to the papers with the first- and second-most points, respectively.
This comprehensive approach aimed to ensure that the prize winners were selected in a manner that was rigorous, fair, and transparent, reflecting the values and goals of The Unjournal.
I (David Reinstein) am an economist who left UK academia after 15 years to pursue a range of projects (see my web page). One of these is The Unjournal:
The Unjournal (with funding from the Long Term Future Fund and the Survival and Flourishing Fund) organizes and funds public-journal-independent feedback and evaluation, paying reviewers for their work. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.
We are looking for your involvement...
We want researchers who are interested in doing evaluation work for The Unjournal. We pay an average of at least $450 per evaluation, and we award monetary prizes for the strongest work. Right now we are particularly looking for economists and people with quantitative and policy-evaluation skills. We describe what we are asking evaluators to do here: essentially a regular peer review with some different emphases, plus providing a set of quantitative ratings and predictions. Your evaluation content would be made public (and receive a DOI, etc.), but you can choose whether or not to remain anonymous.
To sign up to be part of the pool of evaluators or to get involved in The Unjournal project in other ways, please fill out this brief form or email contact@unjournal.org.
We welcome suggestions for particularly impactful research that would benefit from (further) public evaluation. We choose research for public evaluation based on an initial assessment of methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch these criteria here, and discuss some potential examples here (see research we have chosen and evaluated at unjournal.pubpub.org, and a larger list of research we're considering here).
If you have research—your own or others'—that you would like us to assess, please fill out this form. You can submit your own work here (or by emailing contact@unjournal.org). Authors of evaluated papers will be eligible for our Impactful Research Prizes.
We are looking for both feedback on and involvement in The Unjournal project. Feel free to reach out at contact@unjournal.org.
View our data protection statement
Nov. 2023: We are currently prioritizing bringing in more people to build our teams in a few areas, particularly:
Catastrophic risks, AI governance and safety
Animal welfare: markets, attitudes
As well as:
Quantitative political science (voting, lobbying, attitudes)
Social impact of AI/emerging technologies
Macro/growth, finance, public finance
Long-term trends and demographics
In addition to the "work roles," we are looking to engage researchers, research users, meta-scientists, and people with experience in open science, open access, and management of initiatives similar to The Unjournal.
We are continually looking to enrich our general team and board, including our advisory board and field specialist teams. These roles come with some compensation and incentives.
(Please see the links above and consider submitting an expression of interest.)
These are principally not research roles, but familiarity with research and research environments will be helpful, and there is room for research involvement depending on the candidate’s interest, background, and skills/aptitudes.
There is currently one such role:
Writer and communications freelancer (as of November 2023, still seeking freelancers)
Further note: We previously considered a “Management support and administrative professional” role. We are not planning to hire for this role currently. Those who indicated interest will be considered for other roles.
As of November 2023, we are soliciting applications from freelancers with skills in particular areas.
The Unjournal is looking to work with a proficient writer who is adept at communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. As we are in our early stages, this is a generalist role. We need someone to help us explain what The Unjournal does and why, make our processes easy to understand, and ensure our outputs (evaluations and research synthesis) are accessible and useful to non-specialists. We seek someone who values honesty and accuracy in communication; someone who has a talent for simplifying complex ideas and presenting them in a clear and engaging way.
The work is likely to include:
Promotion and general explanation
Spread the word about The Unjournal, our approach, our processes, and our progress in press releases and short pieces, as well as high-value emails and explanations for a range of audiences
Make the case for The Unjournal to potentially skeptical audiences in academia/research, policy, philanthropy, effective altruism, and beyond
Keeping track of our progress and keeping everyone in the loop
Help produce and manage our external (and some internal) long-form communications
Help produce and refine explanations, arguments, and responses
Help provide reports to relevant stakeholders and communities
Making our rules and processes clear to the people we work with
Explain our procedures and policies for research submission, evaluation, and synthesis; make our systems easy to understand
Help us build flexible communications templates for working with research evaluators, authors, and others
Other communications, presentations, and dissemination
Write and organize content for grants applications, partnership requests, advertising, hiring, and more
Potentially: compose non-technical write-ups of Unjournal evaluation synthesis content (in line with interest and ability)
Most relevant skills, aptitudes, interests, experience, and background knowledge:
Understanding of The Unjournal project
Strong written communications skills across a relevant range of contexts, styles, tones, and platforms (journalistic, technical, academic, informal, etc.)
Familiarity with academia and research processes and institutions
Familiarity with current conversations and research on global priorities within government and policy circles, effective altruism, and relevant academic fields
Willingness to learn and use IT, project management, data management, web design, and text-parsing tools (such as those mentioned below), with the aid of GPT/AI chat
Further desirable skills and experience:
Academic/research background in areas related to The Unjournal’s work
Operations, administrative, and project management experience
Experience working in a small nonprofit institution
Experience with promotion and PR campaigns and working with journalists and bloggers
Proposed terms:
Project-based contract "freelance" work
$30–$55/hour USD (TBD, depending on experience and capabilities). Hours for each project include some onboarding and upskilling time.
Our current budget can cover roughly 200 hours of this project work. We hope to increase and extend this (depending on our future funding and expenses).
This role is contract-based and supports remote and international applicants. We can contract people living in most countries, but we cannot serve as an immigration sponsor.
I was in academia for about 20 years (PhD Economics, UC Berkeley; Lecturer, University of Essex; Senior Lecturer, University of Exeter). I saw how the journal system was broken.
Academics constantly complain about it (but don't do anything to improve it).
Most conversations are not about research, but about 'who got into what journal' and 'tricks for getting your paper into journals'.
Open science and replicability are great, and dynamic documents make research a lot more transparent and readable. But these goals and methods are very hard to apply within the traditional journal system and its 'PDF prisons'.
Now I'm working outside academia and can stick my neck out. I have the opportunity to help fix the system. I work with research organizations and large philanthropists involved with effective altruism and global priorities. They care about the results of research in areas that are relevant to global priorities. They want research to be reliable, robust, reasoning-transparent, and well-communicated. Bringing them into the equation can change the game.
We are again considering applications for the 'evaluation metrics/meta-science' role. We will also consider all applicants for our field specialist positions, and for roles that may come up in the future.
The potential roles discussed below combine research-linked work with operations and administrative responsibilities. Overall, this may include some combination of:
Assisting and guiding the process of identifying strong and potentially impactful work in key areas, explaining its relevance, its strengths, and areas warranting particular evaluation and scrutiny
Interacting with authors, recruiting, and overseeing evaluators
Synthesizing and disseminating the results of evaluations and ratings
Aggregating and benchmarking these results
Helping build and improve our tools, incentives, and processes
Curating outputs relevant to other researchers and policymakers
Doing "meta-science" work
See also our field specialist team pool and evaluator pool. Most of these roles involve compensation/honorariums. See Advisory/team roles (research, management)
Express your interest here. (Nov. 2023: Note, we cannot guarantee that we will be hiring for this role, because of changes in our approach.)
Nov. 2023 update: We have paused this project to focus on our field specialist positions. We hope to return to hiring researchers to implement these projects soon.
We are planning to hire 3–7 researchers for a one-off paid project.
There are two opportunities: Contracted Research (CR) and Independent Projects (IP).
Project Outline
What specific research themes in economics, policy, and social science are most important for global priorities?
What projects and papers are most in need of further in-depth public evaluation, attention, and scrutiny?
Where does "Unjournal-style evaluation" have the potential to be one of the most impactful uses of time and money? By impactful, we mean in terms of some global conception of value (e.g., the well-being of living things, the survival of human values, etc.).
This is an initiative that aims to identify, summarize, and conduct an in-depth evaluation of the most impactful themes in economics, policy, and social science to answer the above questions. Through a systematic review of selected papers and potential follow-up with authors and evaluators, this project will enhance the visibility, understanding, and scrutiny of high-value research, fostering both rigorous and impactful scholarship.
Contracted Research (CR) This is the main opportunity: a unique chance to contribute to the identification and in-depth evaluation of impactful research themes in economics, policy, and social science. We're looking for researchers and research users who can commit a one-off 15–20 hours. CR candidates will:
Summarize a research area or theme, its status, and why it may be relevant to global priorities (~4 hours).
We are looking for fairly narrow themes. Examples might include:
The impact of mental health therapy on well-being in low-income countries.
The impact of cage-free egg regulation on animal welfare.
Public attitudes towards AI safety regulation.
Identify a selection of papers in this area that might be high-value for UJ evaluation (~3 hours).
Choose at least four of these from among NBER/"top-10 working paper" series (or from work submitted to The Unjournal – we can share this – or from work whose authors have expressed interest to you).
For a single paper, or a small set of these papers (or projects) (~6 hours):
Read the paper fairly carefully and summarize it, explaining why it is particularly relevant.
Discuss one or more aspects of the paper that need further scrutiny or evaluation.
Identify 3 possible evaluators, and explain why they might be particularly relevant to evaluate this work. (Give a few sentences we could use in an email to these evaluators).
Possible follow-up task: email and correspond with the authors and evaluators (~3 hours).
We will compensate you for your time at a rate reflecting your experience and skills ($25–$65/hour). This work also has the potential to serve as a "work sample" for future roles at The Unjournal, as it is highly representative of what our field specialists and evaluators are commissioned to do.
We are likely to follow up on your evaluation suggestions. We also may incorporate your writing into our web page and public posts; you can choose whether you want to be publicly acknowledged or remain anonymous.
Independent Projects (IP)
We are also inviting applications to do similar work as an “Independent Project” (IP), a parallel opportunity designed for those eager to engage but not interested in working under a contract, or not meeting some of the specific criteria for the Contracted Research role. This involves similar work to above.
If you are accepted to do an IP, we will offer some mentoring and feedback. We will also offer prize rewards/bounties for particularly strong IP work. We will also consider working with professors and academic supervisors on these IP projects, as part of university assignments and dissertations.
You can apply to the CR and IP positions together; we will automatically consider you for each.
Get Involved!
If you are interested in involvement in either the CR or IP side of this project, please let us know through our survey form here.
Disambiguation: The Unjournal focuses on commissioning expert evaluations, guided by an ‘evaluation manager’ and compensating people for their work. (See the outline of our main process here). We plan to continue to focus on that mode. Below we sketch an additional parallel but separate approach.
The Unjournal is seeking academics, researchers, and students to submit structured evaluations of the most impactful research. Strong evaluations will be posted or linked on our PubPub community, offering readers a perspective on the implications, strengths, and limitations of the research. These evaluations can be submitted using this form for academic-targeted research or this form for applied-stream research; evaluators can publish under their name or maintain anonymity; we also welcome collaborative evaluation work. We will facilitate, promote, and encourage these evaluations in several ways, described below.
We are particularly looking for people with research training, experience, and expertise in quantitative social science and statistics, including cost-benefit modeling and impact evaluation. This could include professors, other academic faculty, postdocs, researchers outside of academia, quantitative consultants and modelers, PhD students, and students aiming towards PhD-level work (pre-docs, research MSc students, etc.). But anyone is welcome to give this a try — when in doubt, please go for it.
We are also happy to support collaborations and group evaluations. There is a good track record for this — see "What is a PREreview Live Review?", ASAPBio's Crowd preprint review, I4replication.org, and repliCATS for examples in this vein. We may also host live events and/or facilitate asynchronous collaboration on evaluations.
Instructors/PhD, MRes, Predoc programs: We are also keen to work with students and professors to integrate ‘independent evaluation assignments’ (aka ‘learn to do peer reviews’) into research training.
Your work will support The Unjournal's core mission — improving impactful research through journal-independent public evaluation. In addition, you'll help research users (policymakers, funders, NGOs, fellow researchers) by providing high-quality, detailed evaluations that rate and discuss the strengths, limitations, and implications of research.
Doing an independent evaluation can also help you. We aim to provide feedback to help you become a better researcher and reviewer. We’ll also give prizes for the strongest evaluations. Lastly, writing evaluations will help you build a portfolio with The Unjournal, making it more likely we will commission you for paid evaluation work in the future.
We focus on rigorous, globally impactful research in quantitative social science and policy. (See "What specific areas do we cover?" for details.) We're especially eager to receive independent evaluations of:
Research we publicly prioritize: see our public list of research we've prioritized or evaluated.
Research we previously evaluated (see the public list, as well as https://unjournal.pubpub.org/)
Work that other people and organizations suggest as having high potential for impact/value of information (also see Evaluating Pivotal Questions)
You can also suggest research yourself here and then do an independent evaluation of it.
We’re looking for careful methodological/technical evaluations that focus on research credibility, impact, and usefulness. We want evaluators to dig into the weeds, particularly in areas where they have aptitude and expertise. See our guidelines.
The Unjournal’s structured evaluation forms: We encourage evaluators to do these using either:
Our Academic (main) stream form: If you are evaluating research aimed at an academic journal or
Our ‘Applied stream’ form: If you are evaluating research that is probably not aimed at an academic journal. This may include somewhat less technical work, such as reports from policy organizations and think tanks, or impact assessments and cost-benefit analyses
Other public evaluation platforms: We are also open to engaging with evaluations done on existing public evaluation platforms such as PREreview.org. Evaluators: If you prefer to use another platform, please let us know about your evaluation using one of the forms above. If you like, you can leave most of our fields blank, and provide a link to your evaluation on the other public platform.
Academic (~PhD) assignments and projects: We are also looking to build ties with research-intensive university programs; we can help you structure academic assignments and provide external reinforcement and feedback. Professors, instructors, and PhD students: please contact us (contact@unjournal.org).
We will encourage all these independent evaluations to be publicly hosted, and will share links to these. We will further promote the strongest independent evaluations, potentially on our own platforms (such as unjournal.pubpub.org).
However, when we host or link these, we will keep them clearly separated and signposted as distinct from the commissioned evaluations; independent evaluations will not be considered official, and their ratings won't be included in our 'main data' (see our dashboard here).
Bounties: We will offer prizes for the ‘most valuable independent evaluations’.
As a start, after the first eight submissions (or by Jan. 1, 2025, whichever comes later), we will award a prize of $500 to the most valuable evaluation.
Further details TBD.
All evaluation submissions will be eligible for these prizes and “grandfathered in” to any prizes announced later. We will announce and promote the prize winners (unless they opt for anonymity).
Evaluator pool: People who submit evaluations can elect to join our evaluator pool. We will consider and (time-permitting) internally rate these evaluations. People who do the strongest evaluations in our focal areas are likely to be commissioned as paid evaluators for The Unjournal.
We’re also moving towards a two-tiered base rate: we will offer a higher rate to people who can demonstrate previous strong review/evaluation work. These independent evaluations will count towards this ‘portfolio’.
Our PubPub page provides examples of strong work, including the prize-winning evaluations.
We will curate guidelines and learning materials from relevant fields and from applied work and impact evaluation. For a start, see "Conventional guidelines for referee reports" in our knowledge base.
We are reaching out to PhD programs and pre-PhD research-focused programs. Some curricula already involve “mock referee report” assignments. We hope professors will encourage their students to do these through our platform. In return, we’ll offer the incentives and promotion mentioned above, as well as resources, guidance, and some further feedback.
Crowdsourced feedback can add value in itself; encouraging this can enable some public evaluation and discussion of work that The Unjournal doesn’t have the bandwidth to cover.
Improving our evaluator pool and evaluation standards in general.
Students and ECRs can practice and (if possible) get feedback on independent evaluations
They can demonstrate this ability publicly, enabling us to recruit and commission the strongest evaluators.
Examples will help us build guidelines, resources, and insights into ‘what makes an evaluation useful’.
This provides us opportunities to engage with academia, especially in PhD programs and research-focused instruction.
The Unjournal commissions public evaluations of impactful research in quantitative social sciences fields. We are an alternative and a supplement to traditional academic peer-reviewed journals – separating evaluation from journals unlocks a range of benefits. We ask expert evaluators to write detailed, constructive, critical reports. We also solicit a set of structured ratings focused on research credibility, methodology, careful and calibrated presentation of evidence, reasoning transparency, replicability, relevance to global priorities, and usefulness for practitioners (including funders, project directors, and policymakers who rely on this research).[2] While we have mainly targeted impactful research from academia, our ‘applied stream’ covers impactful work that uses formal quantitative methods but is not aimed at academic journals. So far, we’ve commissioned about 50 evaluations of 24 papers, and published these evaluation packages on our PubPub community, linked to academic search engines and bibliometrics.
5 Sep 2024: The Unjournal is still looking to build our team and evaluator pool. Please consider the roles below and express your interest here or contact us at contact@unjournal.org.
Activities of those on the management committee may involve a combination of the following (although you can choose your focus):
Contributing to the decision-making process regarding research focus, reviewer assignment, and prize distribution.
Collaborating with other committee members on the establishment of rules and guidelines, such as determining the metrics for research evaluation and defining the mode of assessment publication.
Helping plan The Unjournal’s future path.
Helping monitor and prioritize research for The Unjournal to evaluate (i.e., acting as a field specialist; see further discussion below). Acting as an evaluation manager for research in your area.
Time commitment: A minimum of 15–20 hours per year.
Compensation: We have funding for a $57.50 per hour honorarium for the first 20 hours, with possible additional compensation beyond that. Evaluation management work will be further compensated (at roughly $300–$450 per paper).
Who we are looking for: All applicants are welcome. We are especially interested in those involved in global priorities research (and related fields), policy research and practice, open science and meta-science, bibliometrics and scholarly publishing, and any other academic research. We want individuals with a solid interest in The Unjournal project and its goals, and the ability to meet the minimal time commitment. Applying is extremely quick, and those not chosen will be considered for other roles and work going forward.
Beyond direct roles within The Unjournal, we're building a larger, more passive advisory board to be part of our network, to offer occasional feedback and guidance, and to act as an evaluation manager when needed (see our evaluation workflow).
There is essentially no minimum time commitment for advisory board members—only opportunities to engage. We sketch some of the expectations in the fold below.
Field specialists (FSs) will focus on a particular area of research, policy, or impactful outcome. They will keep track of new or under-considered research with potential for impact, and explain and assess the extent to which The Unjournal can add value by commissioning its evaluation. They will "curate" this research and may also serve as evaluation managers for this work.
Some advisory board members will also be FSs, although some may not (e.g., because they don't have a relevant research focus).
Time commitment: There is no specific time obligation—only opportunities to engage. We may also consult you occasionally on your areas of expertise. Perhaps 1–4 hours a month is a reasonable starting expectation for people already involved in doing or using research, plus potential additional paid assignments.
Our Incentives and norms document also provides some guidance on the nature of the work and the time involved.
Compensation: We have put together a preliminary/trial compensation formula (incentives and norms); we aim to fairly compensate people for time spent on work done to support The Unjournal, and to provide incentives for suggesting and helping to prioritize research for evaluation. In addition, evaluation management work will be compensated at roughly $300–$450 per project.
Who we are looking for: For the FS roles, we are seeking active researchers, practitioners, and stakeholders with a strong publication record and/or involvement in research and/or research-linked policy and prioritization processes. For the advisory board, we also seek people with connections to academic, governmental, or relevant non-profit institutions, and/or involvement in open science, publication, and research evaluation processes: people who can offer relevant advice, experience, or guidance, or help communicate our goals, processes, and progress.
Interested? Please fill out this form (about 3–5 min, using the same form for all roles).
See: Unjournal Field Specialists: Incentives and norms (trial)
We invite you to fill in this form (the same as that linked above) to leave your contact information and outline which parts of the project interest you.
Note: These descriptions are under continual refinement; see our policies for more details.
We are not a journal!
The Unjournal seeks to make rigorous research more impactful and impactful research more rigorous. We are a team of researchers, practitioners, and open science advocates led by David Reinstein.
The Unjournal encourages better research by making it easier for researchers to get feedback and credible ratings. We coordinate and fund public, journal-independent evaluation of publicly hosted research. We publish evaluations, ratings, manager summaries, author responses, and links to evaluated research on our PubPub page.
As the name suggests, we are not a journal!
We are working independently of traditional academic journals to build an open platform and a sustainable system for feedback, ratings, and assessment. We are currently focusing on research relevant to global priorities, especially in economics, social science, and impact evaluation.
How to get involved?
We are looking for research papers to evaluate, as well as evaluators. If you want to suggest research, your own or someone else's, you can let us know using this form. If you want to be an evaluator, apply here. You can also express your interest in joining our management team, advisory board, or reviewer pool. For more information, check our guide on how to get involved.
Why The Unjournal?
Peer review is great, but conventional academic publication processes are wasteful, slow, and rent-extracting. They discourage innovation and prompt researchers to focus more on "gaming the system" than on the quality of their research. We provide an immediate alternative and, at the same time, offer a bridge to a more efficient, informative, useful, and transparent research evaluation system.
Does The Unjournal charge any fees?
No. We are a nonprofit organization (hosted by OCF) and we do not charge any fees for submitting and evaluating your research. We compensate evaluators for their time and even award prizes for strong research work, in contrast to most traditional journals. We do so thanks to funding from the Long-Term Future Fund and Survival and Flourishing Fund.
At some point in the future, we might consider sliding-scale fees for people or organizations submitting their work for Unjournal evaluation, or for other services. If we do this, it would simply be a way to cover the compensation we pay evaluators and to cover our actual costs. Again, we are a nonprofit and we will stay that way.
Research submission/identification and selection: We identify, solicit, and select relevant research to be hosted on any open platform in any format. Authors are encouraged to present their work in the ways they find most comprehensive and understandable. We support the use of dynamic documents and other formats that foster replicability and open science. (See: the benefits of dynamic docs.)
Paid evaluators (AKA "reviewers"): We compensate evaluators (essentially, reviewers) for providing thorough feedback on this work. (Read more: Why do we pay?)
Eliciting quantifiable and comparable metrics: We aim to establish and generate credible measures of research quality and usefulness. We intend to benchmark these against traditional measures (such as journal tiers) and assess the reliability, consistency, and predictive power of these measures; a toy benchmarking sketch follows this list. (Read more: Why quantitative metrics?)
Public evaluation: Reviews are typically public, including potential author responses. This facilitates dialogue and engagement.
Linking, not publishing: Our process is not "exclusive." Authors can submit their work to a journal (or other evaluation service) at any time. This approach also allows us to benchmark our evaluations against traditional publication outcomes.
Financial prizes: We award financial prizes, paired with public presentations, to works judged to be the strongest.
Transparency: We aim for maximum transparency in our processes and judgments.
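To make the benchmarking idea above concrete, here is a minimal sketch, assuming (purely for illustration) that each evaluated paper eventually lands in a journal we can place on a rough 1–5 tier scale, and asking whether Unjournal-style ratings rank papers similarly. The data, the tier scale, and the hand-rolled Spearman correlation are illustrative assumptions, not our actual pipeline.

```python
# Toy benchmarking sketch (invented data): do journal-independent ratings
# predict the "traditional" outcome (eventual journal tier)?

def ranks(xs):
    """Assign ranks 1..n; ties are broken by position, fine for a toy example."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        out[i] = rank
    return out

def spearman(xs, ys):
    """Spearman rank correlation, computed as Pearson correlation of ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rating = [82, 55, 70, 91, 64]   # hypothetical Unjournal-style overall ratings
tier = [4, 2, 3, 5, 2]          # hypothetical eventual journal tier (1 = low, 5 = top)
print(round(spearman(rating, tier), 2))  # 1.0 on this made-up, perfectly aligned data
```

A high rank correlation would suggest the ratings track what journals reward; systematic divergence would itself be informative about where the two systems value different things.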
Academics and funders have complained about this stuff for years and continue to do so every day on social media... and we suspect our critiques of the traditional review and publication process will resonate with readers.
So why haven't academia and the research community been able to move to something new? There is a difficult collective action problem. Individual researchers and universities find it risky to move unilaterally. But we believe we have a good chance of finally changing this model and moving to a better equilibrium because we will:
Take risks: Many members of The Unjournal management are not traditional academics; we can stick our necks out. We are also bringing on board established senior academics who are less professionally vulnerable.
Bring in new interests, external funding, and incentives: There are a range of well-funded and powerful organizations—such as the Sloan Foundation and Open Philanthropy—with a strong inherent interest in high-impact research being reliable, robust, and reasoning-transparent. This support can fundamentally shift existing incentive structures.
Allow less risky "bridging steps": As noted above, The Unjournal allows researchers to submit their work to traditional journals. In fact, this will provide a benchmark to help build our quantitative ratings and demonstrate their value.
Communicate with researchers and stakeholders to make our processes easy, clear, and useful to them.
Make our output useful: It may take years for university departments and grant funders to incorporate journal-independent evaluations as part of their metrics and reward systems. The Unjournal can be somewhat patient: our evaluation, rating, feedback, and communication will provide a valuable service to authors, policymakers, and other researchers in the meantime.
Leverage new technology: A new set of open-access tools (such as those funded by Sloan Scholarly Communications) makes what we are trying to do easier and more useful every day.
Reward early adopters with prizes and recognition: We can replace "fear of standing out" with "fear of missing out." In particular, authors and research institutions that commit to publicly engaging with evaluations and critiques of their work should be commended and rewarded. And we intend to do this.
This GitBook serves as a platform to organize our ideas and resources and track our progress towards The Unjournal's dual objectives:
Making "peer evaluation and rating" of open projects into a standard high-status outcome in academia and research, specifically within economics and social sciences. This stands in contrast to the conventional binary choice of accepting or rejecting papers to be published as PDFs and other static formats.
Building a cohesive and efficient system for publishing, accruing credibility, and eliciting feedback for research aligned with effective altruism and global priorities. Our ultimate aim is to make rigorous research more impactful, and impactful research more rigorous.
See Content overview
Please do weigh in; all suggestions and comments will be credited. See also Unjournal: public-facing FAQ (in progress); please copy contact@unjournal.org if you make any comments.
The Unjournal call for participants and research
See #in-a-nutshell for an overview of The Unjournal.
In brief (TLDR): If you are interested in being on The Unjournal's management committee, advisory board, or evaluator pool, please fill out this form (about 3–5 min).
If you want to suggest research for us to assess, please fill out this form. You can also submit your own work here, or by contacting contact@unjournal.org.
Please note that while data submitted through the above forms may be shared internally within our Management Team, it will not be publicly disclosed. Data protection statement linked here.
I am David Reinstein, founder and co-director of The Unjournal. We have an open call for committee members, board members, reviewers, and suggestions for relevant work for The Unjournal to evaluate.
The Unjournal team is building a system for credible, public, journal-independent feedback and evaluation of research.
We maintain an open call for participants for four different roles:
Management Committee members (involving honorariums for time spent)
Advisory Board members (no time commitment)
Field Specialists (who will often also be on the Advisory Board)
A pool of Evaluators (who will be paid for their time and their work; we also draw evaluators from outside this pool)
The roles are explained in more detail here. You can express your interest (and enter our database) here.
We will reach out to evaluators (a.k.a. "reviewers") on a case-by-case basis, as appropriate for each paper or project being assessed. This depends on expertise, the researcher's interest, and the absence of conflicts of interest.
Time commitment: Case-by-case basis. For each evaluation, here are some guidelines for the amount of time to spend.
Compensation: We pay a minimum of $200 (updated Aug. 2024) for a prompt and complete evaluation, $400 for experienced evaluators. We offer additional prizes and incentives, and are committed to an average compensation of at least $450 per evaluator. See here for more details.
Who we are looking for: We are putting together a list of people interested in being an evaluator and doing paid referee work for The Unjournal. We generally prioritize the pool of evaluators who signed up for our database before reaching out more widely.
Interested? Please fill out this form (about 3–5 min, same form for all roles or involvement).
We are looking for high-quality, globally pivotal research projects to evaluate, particularly those embodying open science practices and innovative formats. We are putting out a call for relevant research. Please suggest research here. (We offer bounties and prizes for useful suggestions.) For details of what we are looking for, and some potential examples, see this post and accompanying links.
You can also put forward your own work.
We provide a separate form for research suggestions here. We may follow up with you individually.
We invite you to fill in this form to leave your contact information and outline which parts of the project you may be interested in.
Note: This is under continual refinement; see our policies for more details.
The Unjournal was founded by David Reinstein, who maintains this wiki/GitBook and other resources.
See our "Team page" at Unjournal.org for an updated profile of our team members
See description under roles.
David Reinstein, Founder and Co-director
Gavin Taylor, Interdisciplinary Researcher at IGDORE; Co-director
Ryan Briggs, Social Scientist and Associate Professor in the Guelph Institute of Development Studies and Department of Political Science at the University of Guelph, Canada
Kris Gulati, Economics PhD student at the University of California, Merced
Hansika Kapoor, Research Author at the Department of Psychology, Monk Prayogshala (India)
Tanya O'Garra, Senior Research Fellow, Institute of Environment & Sustainability, Lee Kuan Yew School of Public Policy, National University of Singapore
Emmanuel Orkoh, Research Scientist (fellow) at North-West University (South Africa)
Anirudh Tagat, Research Author at the Department of Economics at Monk Prayogshala (India)
See description under roles.
Sam Abbott, Infectious Disease Researcher, London School of Hygiene and Tropical Medicine
Jonathan Berman, Associate Professor of Marketing, London Business School
Rosie Bettle, Applied Researcher (Global Health & Development) at Founder's Pledge
Gary Charness, Professor of Economics, UC Santa Barbara
Daniela Cialfi, Post-Doctoral Researcher in the Department of Quantitative Methods and Economic Theory at the University of Chieti (Italy)
Jordan Dworkin, Metascience Program Lead, Federation of American Scientists
Jake Eaton, Managing Editor at Asterisk Mag: writing and research on global health, development, and nutrition
Andrew Gelman, Professor of Statistics and Political Science at Columbia University (New York)
Anca Hanea, Associate Professor, University of Melbourne (Australia): expert judgment, biosciences, applied probability, uncertainty quantification
Alexander Herwix, Late-Stage PhD Student in Information Systems at the University of Cologne, Germany
Conor Hughes, PhD Student, Applied Economics, University of Minnesota
Jana Lasser, Postdoctoral researcher, Institute for Interactive Systems and Data Science at Graz University of Technology (Austria)
Nicolas Treich, Associate Researcher, INRAE, Member, Toulouse School of Economics (France)
Michael Wiebe, Data Scientist, Economist Consultant; PhD University of British Columbia (Economics)
The table below shows all the members of our team (including field specialists) taking on a research-monitoring role (see here for a description of this role).
Jordan Pieters, Operations generalist
Kynan Behan, Generalist assistance
Laura Sofia-Castro, Communications (academic research/policy)
Adam Steinberg, Communications and copy-editing
Toby Weed, Communications and consulting
Nesim Sisa, Technical software support
Red Bermejo, Mikee Mercado, Jenny Siers – consulting (through Anti-Entropy) on strategy, marketing, and task management tools
We are a member of Knowledge Futures. They are working with us to update PubPub and incorporate new features (editorial management, evaluation tools, etc.) that will be particularly useful to The Unjournal and other members.
See also List of people consulted (in ACX grant proposal).
An important part of making this a success will be spreading the word: getting positive attention for this project, getting important players on board, building network externalities, and changing the equilibrium. We are also looking for specific feedback and suggestions from "mainstream academics" in Economics, Psychology, and policy/program evaluation, as well as from the Open Science and EA communities.
See several expositions for different audiences, fleshing out ideas and plans.
See/subscribe to
22 Aug 2024: we will be moving our latest updates to our
Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.
Join us on March 25 for an interactive workshop featuring presentations from Macie Daley (Center for Open Science), speakers from The Unjournal and UC Santa Barbara, and The Unjournal's Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.
The event will be held fully online on Zoom, on March 25 from 9:00–11:30 AM (EST) and 9:30 PM–midnight (EST), to accommodate a range of time zones. UTC: 25 March, 1:00–3:30 PM, and 26 March, 1:30–4:00 AM. The event is timetabled: feel free to participate in any part you wish.
See the event page for all details, and to register.
With the completed sets of evaluations of the final two papers, our pilot is complete:
10 research papers evaluated
21 evaluations
5 author responses
Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.
"Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out; we are looking for further partners. We will announce the details of this event once these are finalized.
Our pilot yielded a rich set of data and learning-by-doing. We plan to make use of this, including:
synthesizing and reporting on evaluators' and authors' comments on our process; adapting these to make it better;
analyzing the evaluation metrics for patterns, potential biases, and reliability measures;
"aggregating expert judgment" from these metrics;
tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and
drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).
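As a concrete (and purely illustrative) picture of what "aggregating expert judgment" might involve: a minimal sketch, assuming for illustration that each evaluator reports a 0–100 midpoint rating plus a 90% credible interval, and that evaluators are combined by precision weighting. This is one textbook-style approach, not The Unjournal's settled method; all names and numbers below are hypothetical.

```python
# Illustrative only: one simple way to aggregate several evaluators' ratings.
# Assumes each evaluator gives a 0-100 midpoint and a 90% credible interval;
# the data and the precision-weighting rule are hypothetical, not Unjournal policy.
from dataclasses import dataclass


@dataclass
class Rating:
    midpoint: float  # evaluator's central 0-100 rating
    lower: float     # lower bound of their 90% credible interval
    upper: float     # upper bound of their 90% credible interval


def precision_weighted_mean(ratings):
    """Weight each evaluator by the inverse variance implied by their interval.

    Treating each evaluator's uncertainty as roughly normal, a 90% interval
    spans about 2 * 1.645 standard deviations.
    """
    weights = []
    for r in ratings:
        sigma = max((r.upper - r.lower) / (2 * 1.645), 1e-6)  # avoid zero width
        weights.append(1.0 / sigma ** 2)
    return sum(w * r.midpoint for w, r in zip(weights, ratings)) / sum(weights)


# Three hypothetical evaluations of one paper: the confident evaluators
# (narrow intervals) pull the aggregate more than the uncertain one.
paper = [Rating(78, 65, 88), Rating(62, 40, 80), Rating(70, 60, 80)]
print(round(precision_weighted_mean(paper), 1))  # ~72.0
```

A design question this raises: simple averaging treats evaluators symmetrically, while precision weighting lets self-reported uncertainty matter; benchmarking against later outcomes (as in the tracking bullet above) is one way to test which rule works better.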
discuss and report on the state of research in their areas, including where and when relevant research is posted publicly, and in what state;
the potential for Unjournal evaluation of this work as well as when and how we should evaluate it, considering potential variations from our basic approach; and
how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the aforementioned frameworks.
Most concretely, the field teams will divide up the space of research work to be scoped and prioritized among the members of the teams.
Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!
We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:
existing research into the research evaluation process, and on expert judgment elicitation and aggregation;
practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the “Four validities” in research design, etc.
Of course, our context and goals are somewhat distinct from the initiatives above.
We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.
(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)
We aim to bring a range of researchers and practitioners into these questions, as well as engaging in public discussion. Please reach out.
I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums and answer questions about The Unjournal, respond to doubts you may have, consider your suggestions and discuss alternative initiatives.
Some (other) ways to follow The Unjournal's progress
MailChimp link: Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.
Hope these updates are helpful. Let me know if you have suggestions.
Building a "best feasible plan"..
What is this Unjournal?...
See the vision and broad plan presented (and embedded below), updated August 2023.
Status: Mostly completed/decided for pilot phase
Which projects enter the review system (relevance, minimal quality, stakeholders, any red lines or "musts")
How projects are to be submitted
How reviewers are to be assigned and compensated
Status: Mostly completed/decided for pilot phase; will review after initial trial
To be done on the chosen open platform (Kotahi/Sciety) unless otherwise infeasible (10 Dec 2022 update)
Share, advertise, promote this; have efficient meetings and presentations
Establish links to all open-access bibliometric initiatives (to the extent feasible)
Harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, to coin-based 'ResearchHub', etc.
Status: Mostly completed/decided for pilot phase; will review after the initial trial
Status: We are still working with Google Docs and building an external survey interface. We plan to integrate this with PubPub over the coming months (August/Sept. 2023)
If you are interested in discussing any of the above in person, please email us (contact@unjournal.org) to arrange a conversation.
Research Specialist: Data science, metascience, aggregation of expert judgment
The project space is unjournal.org, which I'd love to share with the public. To make it easy, it can be announced as "bitly.com/EAunjournal" (as in "bitly dot com EA unjournal"). Everyone should let me know if they want editor access to the GitBook; also, I made a quick 'open comment space' in the Gdoc.
See slide deck (link offers comment access).
Nov 2022: Version targeted towards OSF/Open Science
Earlier discussion document, aimed at EA/global priorities, academic, and open-science audiences
2021: A shorter outline posted on
You can see this output most concisely on our PubPub community (evaluations are listed as "supplements," at least for the time being).
For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook.
Remember, we assign individual DOIs to all of these outputs (evaluations, responses, manager syntheses) and aim to get the evaluation data into all bibliometrics and scholarly databases. So far, these appear in Google Scholar (the Google Scholar algorithm is a bit opaque—your tips are welcome).
We will make decisions and award our pilot and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process to make these decisions. See our expanding notes, discussion, and links on this question.
We are still inviting applications for the roles that will help us build these frameworks and processes. Our next steps:
Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our linked notes and discussion documents.
Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and for different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.). We have begun to lay this out (the linked discussion spaces are private for now, but we aim to make things public wherever feasible). These "field teams" will help identify, prioritize, and curate research for evaluation in their areas.
New members of our team: we welcome our newest advisory board member, joining as a field specialist.
As part of our scale-up (and in conjunction with supporting PubPub on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).
We are also drawing on metrics used in systematic reviews and meta-analyses (e.g., "risk of bias"), as well as in research databases.
Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) onto his podcast to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing"... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")
Check out our PubPub page to read evaluations and author responses.
Follow me (David Reinstein) on Twitter or Mastodon, or the hashtag #unjournal (when I remember to use it).
Visit unjournal.org for an overview.
Alternatively, fill out this form to get this newsletter and tell us some things about yourself and your interests. The data protection statement is linked there.
Progress notes: We will keep track of important developments here before we incorporate them into the rest of this GitBook. Members of the UJ team can add further updates here or in the linked Gdoc; we will incorporate changes.
See the linked document for proposed specifics.
Define the broad scope of our research interest and key overriding principles. Light-touch, to also be attractive to aligned academics
Build "editorial-board-like" teams with subject or area expertise
See the linked notes for a first pass.
Host the article (or dynamic research project, or 'registered report') on OSF or another platform allowing time-stamping & DOIs
Link this to PREreview (or a similar tool or site) to solicit feedback and evaluation without requiring exclusive publication rights
Also: commit to publishing academic reviews, or share them in our internal group for further evaluation, reassessment, or benchmarking against the 'PREreview'-type reviews above.
"Progress notes": We will keep track of important developments here before we incorporate them into the ." Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.
The SFF grant is now 'in our account' (all is public and made transparent on our OCF page). This makes it possible for us to
move forward in filling staff and contractor positions (see below); and
increase evaluator compensation and incentives/rewards (see below).
We are circulating a press release sharing our news and plans.
Our "Pilot Phase," involving ten papers and roughly 20 evaluations, is almost complete. We just released the evaluation package for "The Governance Of Non-Profits And Their Social Impact: Evidence from a Randomized Program In Healthcare In DRC.” We are now waiting on one last evaluation, followed by author responses and then "publishing" the final two packages at https://unjournal.pubpub.org/. (Remember: we publish the evaluations, responses and synthesis; we link the research being evaluated.)
We will make decisions and award our Impactful Research Prize (and possible seminars) and evaluator prizes soon after. The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will be largely driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
We continue to develop processes and policies around which research to prioritize. For example, we are considering whether we should set targets for different fields, for related outcome "cause categories," and for research sources. This discussion continues among our team and with stakeholders. We intend to open up the discussion further, making it public and bringing in a range of voices. The objective is to develop a framework and a systematic process for making these decisions. See our expanding notes and discussion on "What is global-priorities-relevant research?"
In the meantime, we are moving forward with our post-pilot "pipeline" of research evaluation. Our management team is considering recent prominent and influential working papers from the National Bureau of Economic Research (NBER) and beyond, and we continue to solicit submissions, suggestions, and feedback. We are also reaching out to users of this research (such as NGOs, charity evaluators, and applied research think tanks), asking them to identify research they particularly rely on and are curious about. If you want to join this conversation, we welcome your input.
We are also considering hiring a small number of researchers to each do a one-off (~16 hours) project in "research scoping for evaluation management." The project is sketched at "Unjournal - standalone work task: Research scoping for evaluation management": essentially, summarizing a research theme and its relevance, identifying potentially high-value papers in this area, choosing one paper, and curating it for potential Unjournal evaluation.
We see a lot of value in this task and expect to actually use and credit this work.
If you are interested in applying to do this paid project, please let us know through our CtA survey form here.
Of course, we can't commission the evaluation of every piece of research under the sun (at least not until we get the next grant :) ). Thus, within each area, we need to find the right people to monitor and select the strongest work with the greatest potential for impact, and where Unjournal evaluations can add the most value.
This is a big task, and there is a lot of ground to cover. To divide and conquer, we're partitioning this space (looking at natural divisions between fields, outcomes/causes, and research sources) amongst our management team as well as among what we now call "field specialists," who will:
focus on a particular area of research, policy, or impactful outcome;
keep track of new or under-considered research with potential for impact;
explain and assess the extent to which The Unjournal can add value by commissioning this research to be evaluated; and
“curate” these research objects: adding them to our database, considering what sorts of evaluators might be needed, and what the evaluators might want to focus on; and
potentially serve as an evaluation manager for this same work.
Field specialists will usually also be members of our Advisory Board, and we are encouraging expressions of interest for both together. (However, these don't need to be linked in every case.)
Interested in a field specialist role or other involvement in this process? Please fill out this general involvement form (about 3–5 minutes).
We are also considering how to set priorities for our evaluators. Should they prioritize:
Giving feedback to authors?
Helping policymakers assess and use the work?
Providing a 'career-relevant benchmark' to improve research processes?
We discuss this topic here, considering how each choice relates to our Theory of Change.
We want to attract the strongest researchers to evaluate work for The Unjournal, and we want to encourage them to do careful, in-depth, useful work. We've increased the base compensation for (on-time, complete) evaluations to $400, and we are setting aside $150 per evaluation for incentives, rewards, and prizes.
Please consider signing up for our evaluator pool (fill out the good old form).
As part of The Unjournal’s general approach, we keep track of (and keep in contact with) other initiatives in open science, open access, robustness and transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate from other efforts. This Airtable view gives a preliminary breakdown of similar and partially-overlapping initiatives, and tries to catalog the similarities and differences to give a picture of who is doing what, and in what fields.
Gary Charness, Professor of Economics, UC Santa Barbara
Nicolas Treich, Associate Researcher, INRAE, Member, Toulouse School of Economics (animal welfare agenda)
Anca Hanea, Associate Professor, expert judgment, biosciences, applied probability, uncertainty quantification
Jordan Dworkin, Program Lead, Impetus Institute for Meta-science
Michael Wiebe, Data Scientist, Economist Consultant; PhD University of British Columbia (Economics)
We're working with PubPub to improve our process and interfaces. We plan to take on a KFG (Knowledge Futures Group) membership to help us work closely with them as they build their platform to be more attractive and useful for The Unjournal and other users.
Our next hiring focus: Communications. We are looking for a strong writer who is comfortable communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. Project-based.
We've chosen (and are in the process of contracting) a strong quantitative meta-scientist and open science advocate for the project: “Aggregation of expert opinion, forecasting, incentives, meta-science.” (Announcement coming soon.)
We are also expanding our Management Committee and Advisory Board; see calls to action.
Update from David Reinstein, Founder and Co-Director
With the recent news, we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to consider whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?
Over the next 18 months, we aim to:
Build awareness: (Relevant) people and organizations should know what The Unjournal is.
Build credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.
Expand our scale and scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers and projects over the next 18 months while also producing other valuable outputs and metrics.
I sketch these goals HERE, along with our theory of change, specific steps and approaches we are considering, and some "wish-list wins." Please feel free to add your comments and questions.
While we wait for the new grant funding to come in, we are not sitting on our hands. Our "pilot phase" is nearing completion. Two more sets of evaluations have been posted on our PubPub.
With three more evaluations already in progress, this will yield a total of 10 evaluated papers. Once these are completed, we will select, announce, and award the recipients of the Impactful Research Prize and the prizes for evaluators, and organize online presentations and discussions (maybe linked to an "award ceremony"?).
No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:
Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our "evaluation output"
Communications specialists
Administrative and Operations personnel
Tech support/software developers
Here's a brief and rough description of these roles. And here’s a quick form to indicate your potential interest and link your CV/webpage.
You can also, or instead, register your interest in doing (paid) research evaluation work for The Unjournal, and/or in joining our advisory board, here.
We also plan to expand our Management Committee; please reach out if you are interested or can recommend suitable candidates.
We are committed to enhancing our platforms as well as our evaluation and communication templates. We're also exploring strategies to nurture more beneficial evaluations and predictions, potentially in tandem with replication initiatives. A small win: our Mailchimp signup should now be working, and this update should be automatically integrated.
We are delighted to welcome Jordan Dworkin (FAS) and Nicolas Treich (INRAE/TSE) to our Advisory Board, and Anirudh Tagat (Monk Prayogshala) to our Management Committee!
Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation."
Treich's current research agenda largely focuses on the intersection of animal welfare and economics.
Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science. He is also an active participant in the COS SCORE project.
The Unjournal was recommended/approved for a substantial grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale, and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...
To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.
Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.
Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous call-to-action (see HERE) is still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)
We have published a total of 12 evaluations and ratings of five papers and projects, as well as three author responses. Four can be found on our PubPub page (most concise list here), and one on our Sciety page here (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.
The two most recently released evaluations (of Haushofer et al., 2020, and Barker et al., 2022) both address the question "Is CBT effective for poor households?" (see the linked EA Forum post).
Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
See the evaluation summaries and ratings, with linked evaluations HERE (Haushofer et al) and HERE (Barker et al).
We are now up to twelve total evaluations of five papers. Most of these are on our PubPub page (we aim to host all of the work on both PubPub and Sciety, gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme, as noted in a recent EA Forum post:
Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
These are part of Unjournal's 'direct NBER evaluation' stream.
More evaluations coming out soon on themes including global health and development, the environment, governance, and social media.
To round out our initial pilot: We're particularly looking to evaluate papers and projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.
You can now 'chat' with this page, ask questions, and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens."
See our latest post on the EA Forum
Our new platform (unjournal.pubpub.org), enabling DOIs and CrossRef (bibliometrics)
Evaluations of "Artificial Intelligence and Economic Growth"; "self-correcting science"
More evaluations soon
We are pursuing collaborations with replication and robustness initiatives such as the "Institute for Replication" and repliCATS
We are now 'fiscally sponsored' by the Open Collective Foundation; see our page HERE. (Note: this is an administrative arrangement, not a source of funding.)
Our Sciety Group is up, with our first posted evaluation package: "Long Term Cost-Effectiveness of Resilient Foods" (Denkenberger et al.), including evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.
Two more evaluations will be posted soon (we are waiting on final author responses).
Working on getting six further papers (projects) evaluated, most of which are part of our NBER "direct evaluation" track
Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments
Building our platforms, and considering ways to better format and integrate evaluations
with the original research (e.g., through Hypothes.is collaborative annotation)
into the bibliometric record (through DOIs, etc.)
and with each other.
We are seeking grant funding for our continued operation and expansion (see grants and proposals below). We're appealing to funders interested in Open Science and in impactful research.
We're considering collaborations with other compatible initiatives, including...
replication/reproducibility/robustness-checking initiatives,
prediction and replication markets,
and projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).
We are now under the Open Collective Foundation's 'fiscal sponsorship' (this does not entail funding, only a legal and administrative home). We are postponing the deadline for judging the Impactful Research Prize and the prizes for evaluators; submission and processing of papers has been somewhat slower than expected.
EA Forum: "Unjournal's 1st eval is up: Resilient foods paper (Denkenberger et al) & AMA": recent post and AMA (answering questions about the Unjournal's progress, plans, and relation to effective-altruism-relevant research
March 9-10: David Reinstein will present at the COS Unconference, session on "Translating Open Science Best Practices to Non-academic Settings". See agenda. David will discuss The Unjournal for part of this session.
Evaluators: We have a strong pool of evaluators.
Recall: we pay at least $250 per evaluation, and typically more in total (around $350), and we are looking to increase this compensation further. Please fill out THIS FORM (about 3–5 minutes) if you are interested.
Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.
Hope these updates are helpful. Let me know if you have suggestions.
The Unjournal is delighted to announce the winners of our inaugural Impactful Research Prize. We are awarding our first prize to Takahiro Kubo (NIES Japan and Oxford University) and co-authors for their research titled "Banning wildlife trade can boost demand". The paper stood out for its intriguing question, the potential for policy impact, and methodological strength. We particularly appreciated the authors’ open, active, and detailed engagement with our evaluation process.
The second prize goes to Johannes Haushofer (NUS Singapore and Stockholm University) and co-authors for their work "The Comparative Impacts of Cash Transfers and a Psychotherapy Program on Psychological and Economic Wellbeing". Our evaluators rated this paper among the highest across a range of metrics. It was highly commended for its rigor, the importance of the topic, and the insightful discussion of cost-effectiveness.
We are recognizing exceptional evaluators for credible, insightful evaluations. Congratulations to Phil Trammell (Global Priorities Institute at the University of Oxford), Hannah Metzler (Complexity Science Hub Vienna), Alex Bates (independent researcher), and Robert Kubinec (NYU Abu Dhabi).
We congratulate all of the winners on their contributions to open science and their commitment to rigorous research. We also thank the other authors who submitted their work but were not selected at this time; we received many excellent submissions, and we are committed to supporting authors beyond this research prize.
Please see the full press release, as well as award details, below and linked here: