Updates (earlier)
Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.
Join us on March 25 for an interactive workshop, featuring presentations from Macie Daley (Center for Open Science), (The Unjournal), (UC Santa Barbara), and The Unjournal’s Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.
The event will be held fully online on Zoom, on March 25 from 9:00 to 11:30 AM (EST) and from 9:30 PM to midnight (EST), to accommodate a range of time zones. In UTC: March 25, 1:00 to 3:30 PM, and March 26, 1:30 to 4:00 AM. The event is timetabled: feel free to participate in any part you wish.
See the event page for all details and to register.
Impactful Research Prize Winners
With the completed set of evaluations of the last two papers, our pilot is complete:
10 research papers evaluated
21 evaluations
5 author responses
Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.
Our pilot yielded a rich set of data and learning by doing. We plan to make use of this, including:
synthesizing and reporting on evaluators' and authors' comments on our process, and adapting the process to make it better;
analyzing the evaluation metrics for patterns, potential biases, and reliability measures;
"aggregating expert judgment" from these metrics;
tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and
drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).
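To make the "aggregating expert judgment" item concrete, here is a minimal, purely illustrative Python sketch of one naive aggregation approach: averaging evaluators' ratings per paper and reporting the spread as a rough signal of (dis)agreement. The ratings, scale, and paper labels are hypothetical, and this is not our actual aggregation method, which is still under development.

```python
# Illustrative sketch only: naive aggregation of hypothetical evaluator ratings.
from statistics import mean, pstdev

# Hypothetical 0-100 "overall assessment" ratings, keyed by made-up paper labels.
ratings = {
    "paper_A": [78, 85, 70],
    "paper_B": [62, 55],
}

def aggregate(scores):
    """Summarize one paper's ratings: mean, spread across evaluators, and count."""
    return {
        "mean": round(mean(scores), 1),
        "spread": round(pstdev(scores), 1),  # crude disagreement signal
        "n_evaluators": len(scores),
    }

for paper, scores in ratings.items():
    print(paper, aggregate(scores))
```

More serious approaches would weight evaluators, model calibration, or use structured expert-elicitation methods (as in the RepliCATS/IDEAS work mentioned below); this sketch is only meant to show the kind of summary statistics we have in mind.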
Separately, our research-scoping "field teams" (see below) will discuss and report on:
the state of research in their areas, including where and when relevant research is posted publicly, and in what state;
the potential for Unjournal evaluation of this work, as well as when and how we should evaluate it, considering potential variations from our basic approach; and
how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the prioritization frameworks described below.
Most concretely, the field teams will divide up the space of research work to be scoped and prioritized among the members of the teams.
Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!
We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:
existing research into the research evaluation process, and on expert judgment elicitation and aggregation;
practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the "Four validities" in research design, etc.
Of course, our context and goals are somewhat distinct from the initiatives above.
We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.
(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)
We aim to bring a range of researchers and practitioners into these questions, and to engage in public discussion. Please reach out.
I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums and answer questions about The Unjournal, respond to doubts you may have, consider your suggestions and discuss alternative initiatives.
MailChimp link: Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.
See also Previous updates
Hope these updates are helpful. Let me know if you have suggestions.
You can see this output most concisely (evaluations are listed as "supplements," at least for the time being).
For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook.
Remember, we assign individual DOIs to all of these outputs (evaluations, responses, manager syntheses) and aim to get the evaluation data into all bibliometric and scholarly databases. So far, Google Scholar indexing remains a work in progress (the Google Scholar algorithm is a bit opaque; your tips are welcome).
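As a small illustration of what "getting evaluation data into scholarly databases" involves at the most basic level, the sketch below checks whether a DOI has registered metadata in Crossref via its public REST API. The DOI shown is a placeholder rather than a real Unjournal record, and this is only one sanity check; it says nothing about Google Scholar indexing, which works differently.

```python
# Illustrative only: check whether a DOI has registered metadata in Crossref.
# The DOI below is a placeholder, not an actual Unjournal output.
import json
import urllib.error
import urllib.request

def crossref_metadata(doi: str):
    """Return Crossref metadata for a DOI, or None if it is not registered."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)["message"]
    except urllib.error.HTTPError:  # e.g., a 404 for an unregistered DOI
        return None

meta = crossref_metadata("10.1234/placeholder-doi")  # placeholder DOI
print("registered in Crossref" if meta else "not found in Crossref")
```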
We will make decisions and award our pilot and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process for making these decisions. See our expanding notes, discussion, and links on this question.
We are still inviting applications for the field specialist roles helping us build these frameworks and processes. Our next steps:
Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our linked notes and discussion.
Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and for different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.). We have begun to lay out these agendas and discussion spaces (the linked discussion spaces are private for now, but we aim to make things public whenever feasible). These "field teams" will take on the scoping and prioritization tasks described earlier in this update.
New members of our team: welcome to those joining our advisory board and our field specialist team.
As part of our scale-up (and in conjunction with supporting on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).
In doing so, we will consider metrics used (e.g., "risk of bias") in systematic reviews and meta-analyses, as well as in related research databases.
Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) onto the podcast to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing" ... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")
Check out our to read evaluations and author responses.
Follow me (David Reinstein) on Twitter or Mastodon, or follow the hashtag #unjournal (when I remember to use it).
Alternatively, fill out this form to get the newsletter and tell us a bit about yourself and your interests. The data protection statement is linked there.
Progress notes: We will keep track of important developments here before we incorporate them into the main sections. Members of the UJ team can add further updates here; we will incorporate changes.