With the completed set of evaluations of "Do Celebrity Endorsements Matter? A Twitter Experiment Promoting Vaccination in Indonesia" and "The Governance of Non-Profits and Their Social Impact: Evidence from a Randomized Program in Healthcare in DRC," our pilot is complete:
- 10 research papers evaluated
- 21 evaluations
- 5 author responses
Just a peek at the content you can find in our lovely data notebook! Mind the interactive hover-overs, etc.
Remember, we assign individual DOIs to all of these outputs (evaluations, responses, manager syntheses) and aim to get the evaluation data into all bibliometric and scholarly databases. So far, Google Scholar has picked up one of our outputs. (The Google Scholar algorithm is a bit opaque—your tips are welcome.)
We will make decisions and award our pilot Impactful Research Prize and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our Management Team and Advisory Board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.
Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.
"Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out, we are looking for further partners. We will announce the details of this event once these are finalized.
Our pilot yielded a rich set of data and learning-by-doing. We plan to make use of this, including ...
- Synthesizing and reporting on evaluators' and authors' comments on our process, and adapting it to make it better
- Analyzing the evaluation metrics for patterns, potential biases, and reliability measures
- "Aggregating expert judgment" from these metrics
- Tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against
- Drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).
We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process to make these decisions. See our expanding notes, discussion, and links on "what is global-priorities relevant research?"
- 1. Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our "high-level considerations" and our practical prioritization concerns and goals.
- 2. Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and for different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.). We begin to lay out possible teams and discussions here (the linked discussion spaces are private for now, but we aim to make things public whenever it's feasible). These 'field teams' will:
- discuss and report on the state of research in their areas, including 'where and when relevant research is posted publicly, and in what state';
- assess the potential for Unjournal evaluation of this work, as well as when and how we should evaluate it, considering potential variations from our basic approach; and
- decide how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the aforementioned frameworks.
Most concretely, the field teams will divide up the space of research work to be scoped and prioritized among their members.
Our previous call for Field Specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!
As part of our scale-up (and in conjunction with supporting PubPub on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).
We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:
- existing research into the research evaluation process and on expert judgment elicitation and aggregation; and
- practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the "Four validities" in research design, etc.
Of course, our context and goals are somewhat distinct from the initiatives above.
We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.
(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)
We aim to bring a range of researchers and practitioners into these questions, as well as engaging in public discussion. Please reach out.
Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) on "ReproducibiliTea" to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing" ... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")
I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums and answer questions about The Unjournal, respond to doubts you may have, consider your suggestions and discuss alternative initiatives.
MailChimp link: Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.
Progress notes: We will keep track of important developments here before we incorporate them into the official fortnightly "Update on recent progress." Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.
Hope these updates are helpful. Let me know if you have suggestions.