Previous updates
The SFF grant is now 'in our account' (all is public and made transparent on our Open Collective page). This makes it possible for us to
move forward in filling staff and contractor positions (see below); and
increase evaluator compensation and incentives/rewards (see below).
We are circulating an announcement sharing our news and plans.
Our "Pilot Phase," involving ten papers and roughly 20 evaluations, is almost complete. We just released the evaluation package for .” We are now waiting on one last evaluation, followed by author responses and then "publishing" the final two packages at . (Remember: we publish the evaluations, responses and synthesis; we link the research being evaluated.)
We will decide and award our research prizes (and possibly hold seminars) and evaluator prizes soon after. The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise); the choices will be largely driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
We see a lot of value in this task and expect to actually use and credit this work.
Of course, we can't commission the evaluation of every piece of research under the sun (at least not until we get the next grant :) ). Thus, within each area, we need to find the right people to monitor the space and select the strongest work: work with the greatest potential for impact, where Unjournal evaluations can add the most value.
This is a big task and there is a lot of ground to cover. To divide and conquer, we’re partitioning this space (looking at natural divisions between fields, outcomes/causes, and research sources) among our management team and among what we now call “field specialists,” who will:
focus on a particular area of research, policy, or impactful outcome;
keep track of new or under-considered research with potential for impact;
explain and assess the extent to which The Unjournal can add value by commissioning evaluations of this research; and
“curate” these research objects: adding them to our database, considering what sorts of evaluators might be needed, and what the evaluators might want to focus on.
Field specialists will usually also be members of our Advisory Board, and we are encouraging expressions of interest for both together. (However, these don’t need to be linked in every case.)
We are also considering how to set priorities for our evaluators. Should they prioritize:
Giving feedback to authors?
Helping policymakers assess and use the work?
Providing a 'career-relevant benchmark' to improve research processes?
We've chosen (and are in the process of contracting) a strong quantitative meta-scientist and open science advocate for the project: “Aggregation of expert opinion, forecasting, incentives, meta-science.” (Announcement coming soon.)
Update from David Reinstein, Founder and Co-Director
Over the next 18 months, we aim to:
Build awareness: (Relevant) people and organizations should know what The Unjournal is.
Build credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.
Expand our scale and scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers and projects over the next 18 months while also producing other valuable outputs and metrics.
No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:
Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our "evaluation output"
Communications specialists
Administrative and Operations personnel
Tech support/software developers
We are committed to enhancing our platforms as well as our evaluation and communication templates. We're also exploring strategies to nurture more beneficial evaluations and predictions, potentially in tandem with replication initiatives. A small win: our Mailchimp signup should now be working, and this update should go out to subscribers automatically.
Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation."
Treich's current research agenda largely focuses on the intersection of animal welfare and economics.
To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.
Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.
Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
More evaluations coming out soon on themes including global health and development, the environment, governance, and social media.
To round out our initial pilot: We're particularly looking to evaluate papers and projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.
You can now 'chat' with this page, ask questions, and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens."
See our latest post on the EA Forum
More evaluations soon
Two more evaluations will be posted soon (we are waiting for final author responses).
Working on getting six further papers (projects) evaluated, most of which are part of our NBER "Direct evaluation" track
Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments
Building our platforms, and considering ways to better format and integrate evaluations
with the original research (e.g., through Hypothes.is collaborative annotation)
into the bibliometric record (through DOIs, etc.)
and with each other.
We're considering collaborations with other compatible initiatives, including...
replication/reproducibility/robustness-checking initiatives,
prediction and replication markets,
and projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).
Evaluators: We have a strong pool of evaluators.
Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.
Hope these updates are helpful. Let me know if you have suggestions.
We continue to develop processes and policy around which research to prioritize. For example, we are considering whether we should set targets for different fields, for related outcome "cause categories," and for research sources. This discussion continues among our team and with stakeholders. We intend to open up the discussion further, making it public and bringing in a range of voices. The objective is to develop a framework and a systematic process to make these decisions. See our expanding notes and discussion.
In the meantime, we are moving forward with our post-pilot “pipeline” of research evaluation. Our management team is considering recent prominent and influential working papers from the National Bureau of Economic Research (NBER) and beyond, and we continue to solicit submissions, suggestions, and feedback. We are also reaching out to users of this research (such as NGOs, charity evaluators, and applied research think tanks), asking them to identify research they particularly rely on and are curious about. If you want to join this conversation, we welcome your input.
We are also considering hiring a small number of researchers to each do a one-off (~16 hours) project in “research scoping for evaluation management.” The project essentially involves summarizing a research theme and its relevance, identifying potentially high-value papers in this area, choosing one paper, and curating it for potential Unjournal evaluation.
If you are interested in applying to do this paid project, please let us know.
potentially serve as an evaluation manager for this same work.
Interested in a field specialist role or other involvement in this process? Please fill out this short form (about 3–5 minutes).
We discuss this topic further, considering how each choice relates to our mission and goals.
We want to attract the strongest researchers to evaluate work for The Unjournal, and we want to encourage them to do careful, in-depth, useful work. We are raising the base compensation for (on-time, complete) evaluations to $400, and we are setting aside $150 per evaluation for incentives, rewards, and prizes.
Please consider signing up for our evaluator pool (fill out this short form).
As part of The Unjournal’s general approach, we keep track of (and keep in contact with) other initiatives in open science, open access, robustness and transparency, and encouraging impactful research. We want to stay coordinated: we want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate ourselves from other efforts. Our notes give a preliminary breakdown of similar and partially overlapping initiatives, and try to catalog the similarities and differences, giving a picture of who is doing what, and in what fields.
Professor of Economics, UC Santa Barbara
Associate Researcher, INRAE; Member, Toulouse School of Economics (animal welfare agenda)
Associate Professor; expert judgment, biosciences, applied probability, uncertainty quantification
Program Lead, Impetus Institute for Meta-science
Data Scientist, Economist Consultant; PhD, University of British Columbia (Economics)
We're working with PubPub to improve our process and interfaces. We plan to take on someone to help us work closely with them as they build their platform to be more attractive and useful for The Unjournal and other users.
Our next hiring focus: a communications role. We are looking for a strong writer who is comfortable communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. Project-based.
We are also expanding our Management Committee and Advisory Board.
With the new grant funding, we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to consider whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?
I sketch these goals, along with our theory of change, specific steps and approaches we are considering, and some "wish-list wins." Please feel free to add your comments and questions.
While we wait for the new grant funding to come in, we are not sitting on our hands. Our "pilot phase" is nearing completion. Two more sets of evaluations have been posted on our PubPub page.
With three more evaluations already in progress, this will yield a total of ten evaluated papers. Once these are completed, we will choose, announce, and award the recipients of our research prizes and the prizes for evaluators, and organize online presentations/discussions (maybe linked to an "award ceremony"?).
See the descriptions of these roles, where you can indicate your potential interest and link your CV/webpage.
You can also register your interest in doing (paid) research evaluation work for The Unjournal, and/or in being part of our advisory board.
We also plan to expand our team; please reach out if you are interested or can recommend suitable candidates.
We are delighted to welcome Dworkin (FAS) and Treich (INRA/TSE) to our advisory board, and Tagat (Monk Prayogshala) to our management team!
Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science.
The Unjournal was awarded a grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale, and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...
Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous call-to-action is still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)
We have published a total of 12 evaluations and ratings of five papers and projects, as well as three author responses. Four can be found on our PubPub page, and one on our Sciety page (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.
The two most recently released evaluations (of Haushofer et al., 2020, and Barker et al., 2022) both surround a common theme; see the related EA Forum post.
See the evaluation summaries and ratings, with linked evaluations and author responses.
We are now up to twelve total evaluations of five papers. Most of these are on our PubPub page (we are currently aiming to have all of the work hosted both at PubPub and on Sciety, gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme:
These are part of The Unjournal's pilot phase.
Our new platform (PubPub), enabling DOIs and CrossRef indexing (bibliometrics)
"self-correcting science"
We are pursuing collaborations with several replication and robustness initiatives.
We are now 'fiscally sponsored' by the Open Collective Foundation; see our page there. (Note: this is an administrative arrangement, not a source of funding.)
Our first evaluation package is up: "Long Term Cost-Effectiveness of Resilient Foods..." (Denkenberger et al.), with evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.
We are seeking grant funding for our continued operation and expansion (see below). We're appealing to funders interested in Open Science and in impactful research.
We are now under the 'fiscal sponsorship' of the Open Collective Foundation (this does not entail funding, only a legal and administrative home). We are postponing the deadline for judging our research prizes and the prizes for evaluators; submission and processing of papers have been somewhat slower than expected.
EA Forum: "recent post and AMA (answering questions about the Unjournal's progress, plans, and relation to effective-altruism-relevant research
March 9–10: David Reinstein will present in a session on "Translating Open Science Best Practices to Non-academic Settings"; he will discuss The Unjournal for part of this session.
Recall: we pay at least $250 per evaluation, typically more ($350 net), and we are looking to increase this compensation further. Please fill out this short form (about 3–5 min) if you are interested.