
Latest updates

We want to keep the community informed of our progress and the steps we are considering next. We want your feedback too! On this page, we summarize what the Unjournal has been up to most recently.
Sign up below (via MailChimp) to get these progress updates in your inbox about once per fortnight.
Some (other) ways to follow The Unjournal's progress
  • Our PubPub page contains the evaluations and ratings, author responses, and manager summaries
  • Follow @GivingTools (David Reinstein) on Twitter or Mastodon, or the hashtag #unjournal (when I remember to use it)
  • See Action and progress for an overview

Update on recent progress: 1 June 2023

Update from David Reinstein, Founder and Co-Director

A path to change

With the recent news, we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to consider whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?
Awareness ∩ Credibility ∩ Scale → Impact
Over the next 18 months, we aim to:
  1. Build Awareness: (Relevant) people and organizations should know what The Unjournal is.
  2. Build Credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations, and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.
  3. Expand our Scale and Scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers/projects over the next 18 months while also producing other valuable outputs and metrics.
I sketch these goals HERE, along with our theory of change, specific steps and approaches we are considering, and some 'wish list wins'. Please feel free to add your comments and questions.

The pipeline flows on

While we wait for the new grant funding to come in, we are not sitting idle. Our 'pilot phase' is nearing completion. Two more sets of evaluations have been posted on our PubPub.
With three more evaluations already in progress, this will yield a total of 10 evaluated papers. Once these are completed, we will decide the recipients for the Impactful Research Prize and the prizes for evaluators, announce and award these, and organize online presentations/discussions (maybe linked to an 'award ceremony'?).

Contracting, hiring, expansion

No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:
  • Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our ‘evaluation output’
  • Communications specialists.
  • Administrative and Operations personnel.
  • Tech support/software developers.
Here's a brief and rough description of these roles. And here's a quick form to indicate your potential interest and link your CV/webpage.
You can also (or instead) register your interest in doing (paid) research evaluation work for The Unjournal, and/or in joining our advisory board, here.
We also plan to expand our Management Team; please reach out if you are interested or can recommend suitable candidates.

Tech and initiatives

We are committed to enhancing our platforms, and our evaluation and communication templates. We're also exploring strategies to nurture more beneficial evaluations and predictions, potentially in tandem with replication initiatives. A small win: our Mailchimp signup should now be working, and this update should be automatically integrated.

Welcoming new team members

We are delighted to welcome Jordan Dworkin (FAS) and Nicholas Treich (INRA/TSE) to our Advisory Board, and Anirudh Tagat (Monk Prayogshala) to our Management Committee!
  • Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation".
  • Treich's current research agenda largely focuses on the intersection of animal welfare and economics.
  • Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science. He is also an active participant in the COS SCORE project.

Update on recent progress: 6 May 2023

Grant funding from the Survival and Flourishing Fund

The Unjournal was recommended/approved for a substantial grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...
To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.
Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.
Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous call-to-action (see HERE) is still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)

Evaluation 'output'

We have published a total of 12 evaluations and ratings of 5 papers/projects, as well as 3 author responses. Four can be found on our PubPub page (most concise list here), and one on our Sciety page here (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.
The two most recently released evaluations (of Haushofer et al., 2020; and Barker et al., 2022) both address the question "Is CBT effective for poor households?" [link: EA Forum post]
Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). The papers come to very different conclusions as to the efficacy of this intervention.
See the evaluation summaries and ratings, with linked evaluations, HERE (Haushofer et al.) and HERE (Barker et al.).

Update on recent progress: 22 April 2023

New 'output'

We are now up to twelve total evaluations of five papers. Most of these are on our PubPub page (we are currently aiming to have all of the work hosted both at PubPub and on Sciety, and gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme, as noted in a recent EA Forum Post:
Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
These are part of Unjournal's 'direct NBER evaluation' stream.
More evaluations are coming out soon, on themes including global health and development, the environment, governance, and social media.

Animal welfare

To round out our initial pilot: We're particularly looking to evaluate papers/projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.

New features of this gitbook: GPT-powered 'chat' Q&A

You can now 'chat' with this page, and ask questions and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens".

Update on recent progress: 17 Mar 2023

See our latest post on the EA Forum
  1. Our new platform, enabling DOIs and CrossRef (bibliometrics)
  2. More evaluations soon
  3. We are pursuing collaborations with replication and robustness initiatives such as the Institute for Replication and repliCATS
  4. We are now 'fiscally sponsored' by the Open Collective Foundation; see our page HERE. (Note: this is an administrative arrangement, not a source of funding.)

Update on recent progress: 19 Feb 2023

Content and 'publishing'

  1. Our Sciety Group is up.
  2. Our first evaluation is posted ("Long-Term Cost-Effectiveness of Resilient Foods" by Denkenberger et al.), with evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.
  3. Two more evaluations will be posted soon (we are waiting for final author responses).

Tip of the Spear ... right now we are:

  • Working on getting six further papers (projects) evaluated, most of which are part of our NBER 'direct evaluation' track
  • Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments
  • Building our platforms, and considering ways to better format and integrate evaluations
    • with the original research (e.g., through collaborative annotation)
    • into the bibliometric record (through DOIs, etc.)
    • and with each other.

Funding, plans, collaborations

We are seeking grant funding for our continued operation and expansion (see Grants and proposals below). We're appealing to funders interested in Open Science and in impactful research.
We're considering collaborations with other compatible initiatives, including...
  • replication/reproducibility/robustness-checking initiatives,
  • prediction and replication markets,
  • and projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).

Management and administration, deadlines

  • We are now under the Open Collective Foundation 'fiscal sponsorship' (this does not entail funding, only a legal and administrative home)
  • We are postponing the deadline for judging the Impactful Research Prize and the prizes for evaluators. Submission and processing of papers has been somewhat slower than expected.

Other news and media

Calls to action

See: How to be part of this. These are basically still all relevant.

  1. Evaluators: We have a strong pool of evaluators. However, at the moment we are particularly seeking evaluators:
  • with quantitative backgrounds, especially in economics, policy, and social science
  • comfortable with statistics, cost-effectiveness analysis, impact evaluation, and/or Fermi (Monte Carlo) models
  • willing to dig into details, identify a paper's key claims, and consider the credibility of the research methodology and its execution.
Recall that we pay at least $250 per evaluation, typically more ($350 net), and we are looking to increase this compensation further. Please fill out THIS FORM (about 3-5 minutes) if you are interested.
  2. Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.
Hope these updates are helpful. Let me know if you have suggestions.