Previous updates

Progress notes since last update

"Progress notes": We will keep track of important developments here before we incorporate them into the ." Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.

Update on recent progress: 21 July 2023

Funding

The SFF grant is now 'in our account' (everything is public and transparent on our OCF page). This makes it possible for us to

  • move forward in filling staff and contractor positions (see below); and

  • increase evaluator compensation and incentives/rewards (see below).

We are circulating a press release sharing our news and plans.

Timelines, and pipelines

Our "Pilot Phase," involving ten papers and roughly 20 evaluations, is almost complete. We just released the evaluation package for "The Governance Of Non-Profits And Their Social Impact: Evidence from a Randomized Program In Healthcare In DRC.” We are now waiting on one last evaluation, followed by author responses and then "publishing" the final two packages at https://unjournal.pubpub.org/. (Remember: we publish the evaluations, responses and synthesis; we link the research being evaluated.)

We will decide on and award our Impactful Research Prize and evaluator prizes soon after (and possibly organize seminars with the winners). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will be largely driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.

"What research should we prioritize for evaluation, and why?"

We continue to develop processes and policy around which research to prioritize. For example, we are considering whether we should set targets for different fields, for related outcome "cause categories," and for research sources. This discussion continues among our team and with stakeholders. We intend to open up the discussion further, making it public and bringing in a range of voices. The objective is to develop a framework and a systematic process to make these decisions. See our expanding notes and discussion on What is global-priorities relevant research?

In the meantime, we are moving forward with our post-pilot “pipeline” of research evaluation. Our management team is considering recent prominent and influential working papers from the National Bureau of Economic Research (NBER) and beyond, and we continue to solicit submissions, suggestions, and feedback. We are also reaching out to users of this research (such as NGOs, charity evaluators, and applied research think tanks), asking them to identify research they particularly rely on and are curious about. If you want to join this conversation, we welcome your input.

We are also considering hiring a small number of researchers to each do a one-off (~16 hours) project in “research scoping for evaluation management.” The project is sketched at Unjournal - standalone work task: Research scoping for evaluation management; essentially, summarizing a research theme and its relevance, identifying potentially high-value papers in this area, choosing one paper, and curating it for potential Unjournal evaluation.

We see a lot of value in this task and expect to actually use and credit this work.

If you are interested in applying to do this paid project, please let us know through our call-to-action (CtA) survey form here.

Call for "Field Specialists"

Of course, we can't commission the evaluation of every piece of research under the sun (at least not until we get the next grant :) ). Thus, within each area, we need to find the right people to monitor and select the strongest work with the greatest potential for impact, and where Unjournal evaluations can add the most value.

This is a big task and there is a lot of ground to cover. To divide and conquer, we’re partitioning this space (looking at natural divisions between fields, outcomes/causes, and research sources) amongst our management team as well as among what we now call...

"Field Specialists" (FSs), who will

  • focus on a particular area of research, policy, or impactful outcome;

  • keep track of new or under-considered research with potential for impact;

  • explain and assess the extent to which The Unjournal can add value by commissioning this research to be evaluated;

  • “curate” these research objects: adding them to our database, considering what sorts of evaluators might be needed, and what the evaluators might want to focus on; and

  • potentially serve as an evaluation manager for this same work.

Field specialists will usually also be members of our Advisory Board, and we are encouraging expressions of interest for both together. (However, these don’t need to be linked in every case.)

Interested in a field specialist role or other involvement in this process? Please fill out this general involvement form (about 3–5 minutes).

Setting priorities for evaluators

We are also considering how to set priorities for our evaluators. Should they prioritize:

  • Giving feedback to authors?

  • Helping policymakers assess and use the work?

  • Providing a 'career-relevant benchmark' to improve research processes?

We discuss this topic here, considering how each choice relates to our Theory of Change.

Increase in evaluator compensation, incentives/rewards

We want to attract the strongest researchers to evaluate work for The Unjournal, and we want to encourage them to do careful, in-depth, useful work. We've increased the base compensation for (on-time, complete) evaluations to $400, and we are setting aside $150 per evaluation for incentives, rewards, and prizes.

Please consider signing up for our evaluator pool (fill out the good old form).

Adjacent initiatives and 'mapping this space'

As part of The Unjournal’s general approach, we keep track of (and keep in contact with) other initiatives in open science, open access, robustness and transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate from other efforts. This Airtable view gives a preliminary breakdown of similar and partially-overlapping initiatives, and tries to catalog the similarities and differences to give a picture of who is doing what, and in what fields.

Also to report

  • Gary Charness, Professor of Economics, UC Santa Barbara

  • Nicolas Treich, Associate Researcher, INRAE, Member, Toulouse School of Economics (animal welfare agenda)

  • Anca Hanea, Associate Professor, expert judgment, biosciences, applied probability, uncertainty quantification

  • Jordan Dworkin, Program Lead, Impetus Institute for Meta-science

  • Michael Wiebe, Data Scientist and Economics Consultant; PhD, University of British Columbia (Economics)

Tech and platforms

We're working with PubPub to improve our process and interfaces. We plan to take on a KFG (Knowledge Futures Group) membership to help us work closely with them as they build their platform to be more attractive and useful for The Unjournal and other users.

Our hiring, contracting, and expansion continues

  • Our next hiring focus: Communications. We are looking for a strong writer who is comfortable communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. Project-based.

  • We've chosen (and are in the process of contracting) a strong quantitative meta-scientist and open science advocate for the project: “Aggregation of expert opinion, forecasting, incentives, meta-science.” (Announcement coming soon.)

  • We are also expanding our Management Committee and Advisory Board; see calls to action.

Potentially relevant events in the outside world

Update on recent progress: 1 June 2023

Update from David Reinstein, Founder and Co-Director

A path to change

With the recent news, we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to consider whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?

Over the next 18 months, we aim to:

  1. Build awareness: (Relevant) people and organizations should know what The Unjournal is.

  2. Build credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.

  3. Expand our scale and scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers and projects over the next 18 months while also producing other valuable outputs and metrics.

I sketch these goals HERE, along with our theory of change, specific steps and approaches we are considering, and some "wish-list wins." Please feel free to add your comments and questions.

The pipeline flows on

While we wait for the new grant funding to come in, we are not sitting on our hands. Our "pilot phase" is nearing completion. Two more sets of evaluations have been posted on our PubPub.

With three more evaluations already in progress, this will yield a total of ten evaluated papers. Once these are completed, we will select, announce, and award the recipients of the Impactful Research Prize and the evaluator prizes, and organize online presentations/discussions (maybe linked to an "award ceremony"?).

Contracting, hiring, expansion

No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:

  • Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our "evaluation output"

  • Communications specialists

  • Administrative and Operations personnel

  • Tech support/software developers

Here's a brief and rough description of these roles. And here’s a quick form to indicate your potential interest and link your CV/webpage.

You can also (or alternatively) register your interest in doing (paid) research evaluation work for The Unjournal, and/or in being part of our advisory board, here.

We also plan to expand our Management Committee; please reach out if you are interested or can recommend suitable candidates.

Tech and initiatives

We are committed to enhancing our platforms as well as our evaluation and communication templates. We're also exploring strategies to foster more useful evaluations and predictions, potentially in tandem with replication initiatives. A small win: our Mailchimp signup should now be working, and this update should go out to subscribers automatically.

Welcoming new team members

We are delighted to welcome Jordan Dworkin (FAS) and Nicolas Treich (INRAE/TSE) to our Advisory Board, and Anirudh Tagat (Monk Prayogshala) to our Management Committee!

  • Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation."

  • Treich's current research agenda largely focuses on the intersection of animal welfare and economics.

  • Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science. He is also an active participant in the COS SCORE project.

Update on recent progress: 6 May 2023

Grant funding from the Survival and Flourishing Fund

The Unjournal was recommended/approved for a substantial grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale, and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...

To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.

Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.

Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous call-to-action (see HERE) is still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)

Evaluation 'output'

We have published a total of 12 evaluations and ratings of five papers and projects, as well as three author responses. Four can be found on our PubPub page (most concise list here), and one on our Sciety page here (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.

The two most recently released evaluations (of Haushofer et al., 2020, and Barker et al., 2022) both address the question "Is CBT effective for poor households?" [link: EA Forum post]

Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.

See the evaluation summaries and ratings, with linked evaluations HERE (Haushofer et al) and HERE (Barker et al).

Update on recent progress: 22 April 2023

New 'output'

We are now up to twelve total evaluations of five papers. Most of these are on our PubPub page (we are currently aiming to host all of the work on both PubPub and Sciety, gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme, as noted in a recent EA Forum Post:

Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.

These are part of Unjournal's 'direct NBER evaluation' stream.

More evaluations coming out soon on themes including global health and development, the environment, governance, and social media.

Animal welfare

To round out our initial pilot: We're particularly looking to evaluate papers and projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.

New features of this GitBook: GPT-powered 'chat' Q&A

You can now 'chat' with this page, ask questions, and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens."

Update on recent progress: 17 Mar 2023

See our latest post on the EA Forum

  1. Our new platform (unjournal.pubpub.org), enabling DOIs and CrossRef (bibliometrics)

  2. More evaluations soon

  3. We are pursuing collaborations with replication and robustness initiatives such as the "Institute for Replication" and repliCATS

  4. We are now 'fiscally sponsored' by the Open Collective Foundation; see our page HERE. (Note: this is an administrative arrangement, not a source of funding.)

Update on recent progress: 19 Feb 2023

Content and 'publishing'

  1. Our Sciety Group is up...

  2. Our first evaluation package is posted: "Long Term Cost-Effectiveness of Resilient Foods" (Denkenberger et al.), with evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.

  3. Two more evaluations will be posted soon (we are waiting on final author responses).

Tip of the Spear ... right now we are:

  • Working on getting six further papers (projects) evaluated, most of which are part of our NBER "Direct evaluation" track

  • Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments (see the illustrative sketch after this list)

  • Building our platforms, and considering ways to better format and integrate evaluations

    • with the original research (e.g., through Hypothes.is collaborative annotation)

    • into the bibliometric record (through DOIs, etc.)

    • and with each other.
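
To make the aggregation idea concrete, here is a minimal, illustrative sketch of one way such a tool could pool evaluators' quantitative judgments: a precision-weighted average of midpoint ratings and 90% credible intervals. This is not The Unjournal's adopted method; the `Judgment` fields, the 0–100 scale, and the inverse-width weighting rule are assumptions made purely for illustration.

```python
# Toy example: pooling evaluators' midpoint ratings and 90% credible intervals.
# Hypothetical inputs and weighting rule -- not The Unjournal's actual procedure.
from dataclasses import dataclass
from typing import List


@dataclass
class Judgment:
    midpoint: float  # evaluator's central rating, e.g. on a 0-100 scale
    lower: float     # lower bound of their 90% credible interval
    upper: float     # upper bound of their 90% credible interval


def pooled_rating(judgments: List[Judgment]) -> dict:
    """Weight each evaluator by the inverse width of their interval,
    so tighter (more confident) intervals count for more."""
    weights = [1.0 / max(j.upper - j.lower, 1e-9) for j in judgments]
    total = sum(weights)
    mid = sum(w * j.midpoint for w, j in zip(weights, judgments)) / total
    # Crude pooled interval: weighted averages of the individual bounds.
    low = sum(w * j.lower for w, j in zip(weights, judgments)) / total
    high = sum(w * j.upper for w, j in zip(weights, judgments)) / total
    return {"midpoint": mid, "interval": (low, high)}


if __name__ == "__main__":
    ratings = [Judgment(80, 70, 90), Judgment(65, 40, 85), Judgment(72, 68, 78)]
    print(pooled_rating(ratings))
```

One could swap the inverse-width weights for equal weights, medians, or a formal Bayesian pooling rule; the sketch only shows the kind of inputs and outputs such a tool would handle.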

Funding, plans, collaborations

We are seeking grant funding for our continued operation and expansion (see grants and proposals below). We're appealing to funders interested in Open Science and in impactful research.

We're considering collaborations with other compatible initiatives, including...

  • replication/reproducibility/robustness-checking initiatives,

  • prediction and replication markets,

  • and projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).

Management and administration, deadlines

  • We are now under the Open Collective Foundation 'fiscal sponsorship' (this does not entail funding, only a legal and administrative home). We are postponing the deadline for judging the Impactful Research Prize and the prizes for evaluators; submission and processing of papers has been somewhat slower than expected.

Other news and media

Calls to action

See: How to get involved. These are basically still all relevant.

  1. Evaluators: We have a strong pool of evaluators.

However, at the moment we are particularly seeking evaluators:

  • with quantitative backgrounds, especially in economics, policy, and social science;

  • comfortable with statistics, cost-effectiveness analysis, impact evaluation, and/or Fermi and Monte Carlo models; and

  • willing to dig into details, identify a paper's key claims, and consider the credibility of the research methodology and its execution.

Recall: we pay at least $250 per evaluation, typically more in net (around $350), and we are looking to increase this compensation further. Please fill out THIS FORM (about 3–5 minutes) if you are interested.

  2. Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.

Hope these updates are helpful. Let me know if you have suggestions.
