Updates (earlier)

22 Aug 2024: We are moving our latest updates to the 'News' section of our main home page.

March 25, 2024: Workshop: Innovations in Research Evaluation, Replicability, and Impact

Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.

Join us on March 25 for an interactive workshop, featuring presentations from Macie Daley (Center for Open Science), David Reinstein (The Unjournal), Gary Charness (UC Santa Barbara), and The Unjournal’s Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.

The event will be held fully online on Zoom on March 25, from 9:00 to 11:30 AM (EST) and from 9:30 PM to midnight (EST), to accommodate a range of time zones. In UTC: March 25, 1:00 PM to 3:30 PM, and March 26, 1:30 AM to 4:00 AM. The event follows a timetable; feel free to participate in any part you wish.

See the event page for full details and to register.

Jan 2024: Impactful Research and Evaluation Prize winners announced

Impactful Research Prize Winners

Aug. 30, 2023: "Pilot's done, what has been won (and learned)?"

Pilot = completed!

With the completed set of evaluations of "Do Celebrity Endorsements Matter? A Twitter Experiment Promoting Vaccination in Indonesia" and "The Governance of Non-Profits and Their Social Impact: Evidence from a Randomized Program in Healthcare in DRC," our pilot is complete:

  • 10 research papers evaluated

  • 21 evaluations

  • 5 author responses

You can see this output most concisely in our PubPub collection here (evaluations are listed as "supplements," at least for the time being).

For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook hosted here.

Remember, we assign individual DOIs to all of these outputs (evaluations, responses, manager syntheses) and aim to get the evaluation data into bibliometric and scholarly databases. So far, Google Scholar has picked up one of our outputs. (The Google Scholar algorithm is a bit opaque; your tips are welcome.)

Following up on the pilot: prizes and seminars

We will make decisions and award our pilot Impactful Research Prize and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise), driven largely by the ratings and predictions given by Unjournal evaluators. Once the decisions are made, we will make our decision process public and transparent.

Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.

"Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out; we are looking for further partners. We will announce the details of this event once these are finalized.

Other planned follow-ups from the pilot

Our pilot yielded a rich set of data and lessons from learning by doing. We plan to make use of this, including...

  • synthesizing and reporting on evaluators' and authors' comments on our process, and adapting the process in response;

  • analyzing the evaluation metrics for patterns, potential biases, and reliability measures;

  • "aggregating expert judgment" from these metrics;

  • tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and

  • drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).
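To make the "aggregating expert judgment" item above a bit more concrete, here is a minimal, purely illustrative Python sketch that combines several evaluators' ratings into a single summary, giving more weight to evaluators who state narrower (more confident) intervals. The data fields and the precision-weighting scheme are hypothetical assumptions for illustration; they are not The Unjournal's actual metrics or aggregation method.

```python
# Illustrative sketch only: one simple way to aggregate expert judgment.
# Field names and the precision-weighting scheme are hypothetical, not
# The Unjournal's actual method.
from statistics import mean

# Each evaluation: a 0-100 rating plus the evaluator's stated 90% interval.
evaluations = [
    {"rating": 72, "ci_lower": 60, "ci_upper": 84},
    {"rating": 65, "ci_lower": 50, "ci_upper": 80},
    {"rating": 80, "ci_lower": 74, "ci_upper": 86},
]

def aggregate(evals):
    """Return the simple mean and a precision-weighted mean of ratings.

    Narrower stated intervals (more confident evaluators) get more weight.
    """
    simple = mean(e["rating"] for e in evals)
    weights = [1.0 / max(e["ci_upper"] - e["ci_lower"], 1e-9) for e in evals]
    weighted = sum(w * e["rating"] for w, e in zip(weights, evals)) / sum(weights)
    return simple, weighted

print(aggregate(evaluations))  # simple mean ~72.3; weighted mean ~74.7
```

In practice we expect to draw on the literature on expert judgment elicitation and aggregation (mentioned below) rather than any single simple formula.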

The big scale-up

Evaluating more research: prioritization

We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process to make these decisions. See our expanding notes, discussion, and links on "what is global-priorities relevant research?"

We are still inviting applications for the paid standalone project helping us develop these frameworks and processes. Our next steps:

  1. Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our "high-level considerations" and our practical prioritization concerns and goals.

  2. Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and for different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.). We begin to lay out possible teams and discussions here (the linked discussion spaces are private for now, but we aim to make things public whenever feasible). These "field teams" will

    • discuss and report on the state of research in their areas, including where and when relevant research is posted publicly, and in what state;

    • assess the potential for Unjournal evaluation of this work, as well as when and how we should evaluate it, considering potential variations from our basic approach; and

    • advise on how to prioritize work in this area for evaluation, reporting general guidelines and principles and informing the aforementioned frameworks.

    Most concretely, the field teams will divide the space of research to be scoped and prioritized among their members.

Growing The Unjournal Team

Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!

New members of our team: We welcome Rosie Bettle (Founder's Pledge) to our advisory board as a field specialist.

Improving the evaluation process and metrics

As part of our scale-up (and in conjunction with supporting PubPub on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).

We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:

  • existing research into the research evaluation process, and on expert judgment elicitation and aggregation;

  • practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the "Four validities" in research design, etc.; and

  • metrics used (e.g., "risk of bias") in systematic reviews and meta-analyses as well as databases such as 3ie's Development Evidence Portal.

Of course, our context and goals are somewhat distinct from the initiatives above.

We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.

(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)

We aim to bring a range of researchers and practitioners into these questions and to engage in public discussion. Please reach out.

"Spilling tea"

Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) on "ReproducibiliTea" to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing" ... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")

I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums to answer questions about The Unjournal, respond to doubts you may have, consider your suggestions, and discuss alternative initiatives.

Some (other) ways to follow The Unjournal's progress

MailChimp link: Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.

Sign up to our mailing list to receive updates!

Alternatively, fill out this quick survey to get this newsletter and tell us some things about yourself and your interests. The data protection statement is linked here.

Progress notes since last update

Progress notes: We will keep track of important developments here before we incorporate them into these updates. Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.

See also Previous updates

Hope these updates are helpful. Let me know if you have suggestions.
