
Updates (earlier)


22 Aug 2024: We will be moving our latest updates to our main home page 'news'.

March 25, 2024: Workshop on Innovations in Research Evaluation, Replicability, and Impact

Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.

Join us on March 25 for an interactive workshop, featuring presentations from Macie Daley (Center for Open Science), David Reinstein (The Unjournal), Gary Charness (UC Santa Barbara), and The Unjournal's Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.

The event will be held fully online on Zoom, on March 25 from 9:00–11:30 AM (EST) and again from 9:30 PM–midnight (EST), to accommodate a range of time zones. In UTC: March 25, 1:00–3:30 PM, and March 26, 1:30–4:00 AM. The event follows a set timetable; feel free to participate in any part you wish.

See the event page here for all details and to register.

Jan 2024: Impactful Research and Evaluation Prize winners announced

Impactful Research Prize Winners

Aug. 30, 2023: "Pilot's done, what has been won (and learned)?"

Pilot = completed!

With the completed set of evaluations of "Do Celebrity Endorsements Matter? A Twitter Experiment Promoting Vaccination in Indonesia" and "The Governance of Non-Profits and Their Social Impact: Evidence from a Randomized Program in Healthcare in DRC," our pilot is complete:

  • 10 research papers evaluated

  • 21 evaluations

  • 5 author responses

You can see this output most concisely in our PubPub collection here (evaluations are listed as "supplements," at least for the time being).

For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook hosted here.

[Screenshot: Just a peek at the content you can find in our lovely data notebook! Mind the interactive hover-overs etc.]

Remember, we assign individual DOIs to all of these outputs (evaluations, responses, manager syntheses) and aim to get the evaluation data into all bibliometrics and scholarly databases. So far, Google Scholar has picked up one of our outputs. (The Google Scholar algorithm is a bit opaque—your tips are welcome.)

Following up on the pilot: prizes and seminars

We will make decisions and award our pilot Impactful Research Prize and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.

Following this, we are considering holding an online workshop that will include a prize ceremony. Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular series.

"Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out; we are looking for further partners. We will announce the details of this event once these are finalized.

Other planned follow-ups from the pilot

Our pilot yielded a rich set of data and learning-by-doing. We plan to make use of this, including:

  • synthesizing and reporting on evaluators' and authors' comments on our process, and adapting the process to make it better;

  • analyzing the evaluation metrics for patterns, potential biases, and reliability measures;

  • "aggregating expert judgment" from these metrics;

  • tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and

  • drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).

The big scale-up

Evaluating more research: prioritization

We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process for making these decisions. See our expanding notes, discussion, and links on "What is global-priorities-relevant research?"

We are still inviting applications for the paid standalone project helping us build these frameworks and processes. Our next steps:

  • Building our frameworks and principles for prioritizing research for evaluation, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our "high-level considerations" and our practical prioritization concerns and goals.

  • Building research-scoping teams of field specialists. These teams will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and for different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.). We have begun to lay out possible teams and discussions here (the linked discussion spaces are private for now, but we aim to make things public whenever feasible). These "field teams" will:

    • discuss and report on the state of research in their areas, including where and when relevant research is posted publicly, and in what state;

    • assess the potential for Unjournal evaluation of this work, as well as when and how we should evaluate it, considering potential variations from our basic approach; and

    • determine how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the aforementioned frameworks.

    Most concretely, the field teams will divide up the space of research to be scoped and prioritized among their members.

Growing The Unjournal Team

Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!

New members of our team: Welcome Rosie Bettle (Founder's Pledge) to our advisory board, as a field specialist.

Improving the evaluation process and metrics

As part of our scale-up (and in conjunction with supporting PubPub on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).

We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:

  • existing research into the research evaluation process, and on expert judgment elicitation and aggregation;

  • practices from projects like RepliCATS/IDEAS, PREreview, BITSS Open Policy Analysis, the "Four validities" in research design, etc.; and

  • metrics used (e.g., "risk of bias") in systematic reviews and meta-analyses, as well as in databases such as 3ie's Development Evidence Portal.

Of course, our context and goals are somewhat distinct from the initiatives above.

We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.

(A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)

We aim to bring a range of researchers and practitioners into these questions, and to engage in public discussion. Please reach out.

"Spilling tea"

I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums and answer questions about The Unjournal, respond to doubts you may have, consider your suggestions and discuss alternative initiatives.

Some (other) ways to follow The Unjournal's progress

  • Visit Action and progress for an overview.

  • Check out our PubPub page to read evaluations and author responses.

  • Follow @GivingTools (David Reinstein) on Twitter or Mastodon, or the hashtag #unjournal (when I remember to use it).

  • MailChimp: Sign up to our mailing list to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.

Alternatively, fill out this quick survey to get this newsletter and tell us some things about yourself and your interests. The data protection statement is linked here.

Progress notes since last update

We will keep track of important developments here before we incorporate them into the main updates. Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.

See also Previous updates

Hope these updates are helpful. Let me know if you have suggestions.

