The Unjournal is delighted to announce the winners of our inaugural Impactful Research Prize. We are awarding our first prize to Takahiro Kubo (NIES Japan and Oxford University) and co-authors for their research titled "Banning wildlife trade can boost demand". The paper stood out for its intriguing question, the potential for policy impact, and methodological strength. We particularly appreciated the authors’ open, active, and detailed engagement with our evaluation process.
We are recognizing exceptional evaluators for credible, insightful evaluations. Congratulations to Phil Trammell (Global Priorities Institute at the University of Oxford), Hannah Metzler (Complexity Science Hub Vienna), Alex Bates (independent researcher), and Robert Kubinec (NYU Abu Dhabi).
We would like to congratulate all of the winners on their contributions to open science and commitment to rigorous research. We also thank the other authors who submitted their work but were not selected at this time; we received many excellent submissions, and we are committed to supporting authors beyond this research prize.
Please see the full press release, as well as award details, below and :
March 25 2024: Workshop: Innovations in Research Evaluation, Replicability, and Impact
Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.
Join us on March 25 for an interactive workshop, featuring presentations from Macie Daley (Center for Open Science), (The Unjournal), (UC Santa Barbara), and The Unjournal’s Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.
The event will be held fully online on Zoom, on March 25 from 9:00–11:30 AM (EST) and 9:30 PM–midnight (EST), to accommodate a range of time zones. In UTC: March 25, 1:00–3:30 PM and March 26, 1:30–4:00 AM. The event is timetabled: feel free to participate in any part you wish.
See the for all details, and to register.
Jan 2024: Impactful Research and Evaluation Prizes winners announced
Aug. 30, 2023: "Pilot's done, what has been won (and learned)?"
Pilot = completed!
With the completed set of evaluations of and , our pilot is complete:
10 research papers evaluated
21 evaluations
5 author responses
You can see this output most concisely (evaluations are listed as "supplements," at least for the time being).
For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook .
Remember, we assign individual DOIs to all of these outputs (evaluations, responses, manager syntheses) and aim to get the evaluation data into all bibliometrics and scholarly databases. So far, Google Scholar has picked these up only partially. (The Google Scholar algorithm is a bit opaque; your tips are welcome.)
Following up on the pilot: prizes and seminars
We will make decisions and award our pilot and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.
"Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out; we are looking for further partners. We will announce the details of this event once these are finalized.
Other planned follow-ups from the pilot
Our pilot yielded a rich set of data and learning by doing. We plan to make use of this, including . . .
synthesizing and reporting on evaluators' and authors' comments on our process, and adapting the process accordingly;
analyzing the evaluation metrics for patterns, potential biases, and reliability measures;
"aggregating expert judgment" from these metrics;
tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and
drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).
The big scale-up
Evaluating more research: prioritization
We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process to make these decisions. See our expanding notes, discussion, and links on ""
We are still inviting applications for the to help us build these frameworks and processes. Our next steps:
Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our "" and our .
Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and for different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.).
We begin to lay out (the linked discussion spaces are private for now, but we aim to make things public whenever it's feasible). These "field teams" will
discuss and report on the state of research in their areas, including where and when relevant research is posted publicly, and in what state;
consider the potential for Unjournal evaluation of this work, as well as when and how we should evaluate it, considering potential variations from our basic approach; and
work out how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the aforementioned frameworks.
Most concretely, the field teams will divide up the space of research work to be scoped and prioritized among the members of the teams.
Growing The Unjournal Team
Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!
New members of our team: Welcome to our advisory board, as a field specialist.
Improving the evaluation process and metrics
As part of our scale-up (and in conjunction with supporting on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).
We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:
existing research into the research evaluation process, and on expert judgment elicitation and aggregation;
practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the “Four validities” in research design, etc.; and
metrics used (e.g., "risk of bias") in systematic reviews and meta-analyses as well as databases such as .
Of course, our context and goals are somewhat distinct from the initiatives above.
We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.
(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)
We aim to bring a range of researchers and practitioners into these questions, as well as engaging in public discussion. Please reach out.
"Spilling tea"
Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) on "" to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing" ... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")
I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums and answer questions about The Unjournal, respond to doubts you may have, consider your suggestions and discuss alternative initiatives.
Some (other) ways to follow The Unjournal's progress
Check out our to read evaluations and author responses.
MailChimp link: Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.
Alternatively, fill out this to get this newsletter and tell us some things about yourself and your interests. The data protection statement is linked .
Follow (David Reinstein) on Twitter or Mastodon, or the hashtag #unjournal (when I remember to use it).
Progress notes since last update
Progress notes: We will keep track of important developments here before we incorporate them into the official fortnightly "Update on recent progress." Members of the UJ team can add further updates here or in ; we will incorporate changes.
See also
Hope these updates are helpful. Let me know if you have suggestions.
Previous updates
Update on recent progress: 21 July 2023
Funding
The SFF grant is now 'in our account' (all is public and made transparent on our ). This makes it possible for us to
move forward in filling staff and contractor positions (see below); and
increase evaluator compensation and incentives/rewards (see below).
We are circulating a sharing our news and plans.
Timelines, and pipelines
Our "Pilot Phase," involving ten papers and roughly 20 evaluations, is almost complete. We just released the evaluation package for . We are now waiting on one last evaluation, followed by author responses and then "publishing" the final two packages at . (Remember: we publish the evaluations, responses, and synthesis; we link the research being evaluated.)
We will make decisions and award our and evaluator prizes (and possibly hold seminars) soon after. The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will be largely driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
"What research should we prioritize for evaluation, and why?"
We continue to develop processes and policy around which research to prioritize. For example, we are considering whether we should set targets for different fields, for related outcome "cause categories," and for research sources. This discussion continues among our team and with stakeholders. We intend to open up the discussion further, making it public and bringing in a range of voices. The objective is to develop a framework and a systematic process to make these decisions. See our expanding notes and discussion on
In the meantime, we are moving forward with our post-pilot “pipeline” of research evaluation. Our management team is considering recent prominent and influential working papers from the National Bureau of Economic Research () and beyond, and we continue to solicit submissions, suggestions, and feedback. We are also reaching out to users of this research (such as NGOs, charity evaluators, and applied research think tanks), asking them to identify research they particularly rely on and are curious about. If you want to join this conversation, we welcome your input.
(Paid) Research opportunity: to help us do this
We are also considering hiring a small number of researchers to each do a one-off (~16 hours) project in “research scoping for evaluation management.” The project is sketched at ; essentially, summarizing a research theme and its relevance, identifying potentially high-value papers in this area, choosing one paper, and curating it for potential Unjournal evaluation.
We see a lot of value in this task and expect to actually use and credit this work.
If you are interested in applying to do this paid project, please let us know .
Call for "Field Specialists"
Of course, we can't commission the evaluation of every piece of research under the sun (at least not until we get the next grant :) ). Thus, within each area, we need to find the right people to monitor and select the strongest work with the greatest potential for impact, and where Unjournal evaluations can add the most value.
This is a big task and there is a lot of ground to cover. To divide and conquer, we’re partitioning this space (looking at natural divisions between fields, outcomes/causes, and research sources) amongst our management team as well as among what we now call...
"Field Specialists" (FSs), who will
focus on a particular area of research, policy, or impactful outcome;
keep track of new or under-considered research with potential for impact;
explain and assess the extent to which The Unjournal can add value by commissioning this research to be evaluated; and
“curate” these research objects: adding them to our database, considering what sorts of evaluators might be needed, and what the evaluators might want to focus on.
Field specialists will usually also be members of our Advisory Board, and we are encouraging expressions of interest for both together. (However, these don’t need to be linked in every case.)
Interested in a field specialist role or other involvement in this process? Please fill out (about 3–5 minutes).
Setting priorities for evaluators
We are also considering how to set priorities for our evaluators. Should they prioritize:
Giving feedback to authors?
Helping policymakers assess and use the work?
Providing a 'career-relevant benchmark' to improve research processes?
We discuss this topic , considering how each choice relates to our .
Increase in evaluator compensation, incentives/rewards
We want to attract the strongest researchers to evaluate work for The Unjournal, and we want to encourage them to do careful, in-depth, useful work. We have increased the base compensation for (on-time, complete) evaluations to $400, and we are setting aside $150 per evaluation for incentives, rewards, and prizes. Details on this to come.
Please consider signing up for our evaluator pool (fill out).
Adjacent initiatives and 'mapping this space'
As part of The Unjournal’s general approach, we keep track of (and keep in contact with) other initiatives in open science, open access, robustness and transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate from other efforts. gives a preliminary breakdown of similar and partially-overlapping initiatives, and tries to catalog the similarities and differences to give a picture of who is doing what, and in what fields.
Also to report
New
, Professor of Economics, UC Santa Barbara
, Associate Researcher, INRAE, Member, Toulouse School of Economics (animal welfare agenda)
, Associate Professor, expert judgment, biosciences, applied probability, uncertainty quantification
Tech and platforms
We're working with PubPub to improve our process and interfaces. We plan to take on a to help us work with them closely as they build their platform to be more attractive and useful for The Unjournal and other users.
Our hiring, contracting, and expansion continues
Our next hiring focus: . We are looking for a strong writer who is comfortable communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. Project-based.
We've chosen (and are in the process of contracting) a strong quantitative meta-scientist and open science advocate for the project: “Aggregation of expert opinion, forecasting, incentives, meta-science.” (Announcement coming soon.)
We are also expanding our Management Committee and Advisory Board; see
Potentially relevant events in the outside world
Update on recent progress: 1 June 2023
Update from David Reinstein, Founder and Co-Director
A path to change
With the , we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to consider whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?
Over the next 18 months, we aim to:
Build awareness: (Relevant) people and organizations should know what The Unjournal is.
Build credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.
Expand our scale and scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers and projects over the next 18 months while also producing other valuable outputs and metrics.
I sketch these goals, along with our theory of change, specific steps and approaches we are considering, and some "wish-list wins." Please feel free to add your comments and questions.
The pipeline flows on
While we wait for the new grant funding to come in, we are not sitting on our hands. Our "pilot phase" is nearing completion. Two more sets of evaluations have been posted on our .
With three more evaluations already in progress, this will yield a total of 10 evaluated papers. Once these are completed, we will select, announce, and award the recipients of the and the evaluator prizes, and organize online presentations/discussions (maybe linked to an "award ceremony"?).
Contracting, hiring, expansion
No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:
Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our "evaluation output"
Communications specialists
Administrative and Operations personnel
of these roles. And to indicate your potential interest and link your CV/webpage.
You can also/alternately register your interest in doing (paid) research evaluation work for The Unjournal, and/or in being part of our advisory board, .
We also plan to expand our ; please reach out if you are interested or can recommend suitable candidates.
Tech and initiatives
We are committed to enhancing our platforms as well as our evaluation and communication templates. We're also exploring strategies to nurture more beneficial evaluations and predictions, potentially in tandem with replication initiatives.
A small win: our Mailchimp signup should now be working, and this update should be automatically integrated.
Welcoming new team members
We are delighted to welcome (FAS) and (INRA/TSE) to our , and (Monk Prayogshala) to our !
Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation."
Treich's current research agenda largely focuses on the intersection of animal welfare and economics.
Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science. He is also in the .
Update on recent progress: 6 May 2023
Grant funding from the Survival and Flourishing Fund
The Unjournal was awarded a grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale, and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...
To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.
Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.
Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous call-to-action (see ) is still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)
Evaluation 'output'
We have published a total of 12 evaluations and ratings of five papers and projects, as well as three author responses. Four can be found on our PubPub page (most concise list ), and one on our Sciety page (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.
The two most recently released evaluations (of Haushofer et al, 2020; and Barker et al, 2022) both surround "" [link: EA Forum post]
Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
See the evaluation summaries and ratings, with linked evaluations and
Update on recent progress: 22 April 2023
New 'output'
We are now up to twelve total evaluations of five papers. Most of these are on our (we are currently aiming to have all of the work hosted both at PubPub and on Sciety, and gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme, :
Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.
These are part of Unjournal's .
More evaluations coming out soon on themes including global health and development, the environment, governance, and social media.
Animal welfare
To round out our initial pilot: We're particularly looking to evaluate papers and projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.
New features of this GitBook: GPT-powered 'chat' Q&A
You can now 'chat' with this page, ask questions, and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens."
Update on recent progress: 17 Mar 2023
See our latest post on the EA Forum
Our new platform (), enabling DOIs and CrossRef (bibliometrics)
"self-correcting science"
More evaluations soon
Update on recent progress: 19 Feb 2023
Content and 'publishing'
Our is up...
With our ("Long Term Cost-Effectiveness of Resilient Foods," Denkenberger et al.): evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.
Two more evaluations 'will be posted soon' (waiting for final author responses).
Tip of the Spear ... right now we are:
Working on getting six further papers (projects) evaluated, most of which are part of our NBER
Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments
Building our platforms, and considering ways to better format and integrate evaluations
Funding, plans, collaborations
We are seeking grant funding for our continued operation and expansion (see below). We're appealing to funders interested in Open Science and in impactful research.
We're considering collaborations with other compatible initiatives, including...
with the original research (e.g., through Hypothes.is collaborative annotation); and
projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).
Management and administration, deadlines
We are now under the 'fiscal sponsorship' of the Open Collective Foundation (this does not entail funding, only a legal and administrative home). We are postponing the deadline for judging the and the prizes for evaluators. Submission of papers and the processing of these have been somewhat slower than expected.
Other news and media
EA Forum: recent post and AMA (answering questions about The Unjournal's progress, plans, and relation to effective-altruism-relevant research).
March 9-10: David Reinstein will present at the , session on "Translating Open Science Best Practices to Non-academic Settings". See . David will discuss The Unjournal for part of this session.
Calls to action
See: . These are basically still all relevant.
Evaluators: We have a strong pool of evaluators. However, at the moment we are particularly seeking evaluators:
with quantitative backgrounds, especially in economics, policy, and social science; and
comfortable with statistics, cost-effectiveness analysis, impact evaluation, and/or Fermi Monte Carlo models.
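As context for the last item: a "Fermi Monte Carlo" model is a rough order-of-magnitude estimate in which the uncertainty about each input is propagated by simulation. A minimal illustrative sketch, with entirely made-up numbers:

```python
import random

random.seed(0)  # reproducible draws

# Hypothetical Fermi estimate: cost per outcome for an intervention,
# propagating uncertainty in each input by simple Monte Carlo.
N = 50_000
cost_per_outcome = []
for _ in range(N):
    program_cost = random.lognormvariate(11, 0.3)    # median ~ $60k, uncertain
    people_reached = random.lognormvariate(7, 0.5)   # median ~ 1,100, uncertain
    effect_rate = random.betavariate(2, 8)           # share who benefit, mean 0.2
    cost_per_outcome.append(program_cost / (people_reached * effect_rate))

cost_per_outcome.sort()
median = cost_per_outcome[N // 2]
lo, hi = cost_per_outcome[int(0.05 * N)], cost_per_outcome[int(0.95 * N)]
print(f"median ${median:,.0f} per outcome (90% interval ${lo:,.0f}-${hi:,.0f})")
```

Evaluators comfortable with this style of reasoning can sanity-check the cost-effectiveness claims that often accompany the research we evaluate.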
Recall: we pay at least $250 per evaluation, typically more (around $350 net), and we are looking to increase this compensation further. Please fill out (about 3–5 min) if you are interested.
Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.
Hope these updates are helpful. Let me know if you have suggestions.