An Introduction to The Unjournal

We are not a journal!

In a nutshell

The Unjournal seeks to make rigorous research more impactful and impactful research more rigorous. We are a team of researchers, practitioners, and open science advocates led by David Reinstein.

The Unjournal encourages better research by making it easier for researchers to get feedback and credible ratings. We coordinate and fund public, journal-independent expert evaluation of hosted research. We publish evaluations, ratings, manager summaries, author responses, and links to evaluated research on our PubPub page.

As the name suggests, we are not a journal!

We work independently of traditional academic journals. We're building an open platform and a sustainable system for feedback, ratings, and assessment. We're currently focusing on quantitative work that informs global priorities, especially in economics, policy, and social science.

How to get involved?

We're looking for research to evaluate, as well as evaluators. You can submit research here, or suggest research using this form. We offer financial prizes for suggesting research we end up evaluating. If you want to be an evaluator, apply here. You can use the same form to express your interest in joining our management team, advisory board, or reviewer pool. For more information, see our how to get involved guide.

Why The Unjournal?

Peer review is great, but conventional academic publication processes are wasteful, slow, and rent-extracting. They discourage innovation and prompt researchers to focus more on "gaming the system" than on the quality of their research. We will provide an immediate alternative and, at the same time, offer a bridge to a more efficient, informative, useful, and transparent research evaluation system.

Does The Unjournal charge any fees?

No. We're a US-registered tax-exempt 501(c)(3) nonprofit, and we don't charge fees for anything. We compensate evaluators for their time, and, in contrast to most traditional journals, we even award prizes for strong research and evaluation work. We do so thanks to funding from the Long-Term Future Fund and the Survival and Flourishing Fund.

At some point in the future, we might consider sliding-scale fees for people or organizations submitting their work for Unjournal evaluation, or for other services. If we do, the fees would simply cover the compensation we pay evaluators and our other actual costs. Again, we are a nonprofit and we will stay that way.

How do we do this?

  1. Research submission/identification and selection: We identify, solicit, and select relevant research, which can be hosted on any open platform and in any format. Authors are encouraged to present their work in the ways they find most comprehensive and understandable. We support the use of dynamic documents and other formats that foster replicability and open science. (See: the benefits of dynamic docs.)

  2. Paid evaluators (AKA "reviewers"): We compensate evaluators (essentially, reviewers) for providing thorough feedback on this work. (Read more: Why do we pay?)

  3. Eliciting quantifiable and comparable metrics: We aim to establish credible measures of research quality and usefulness. We benchmark these against traditional measures (such as journal tiers) and assess their reliability, consistency, and predictive power. (Read more: Why quantitative metrics?)

  4. Public evaluation: We publish the evaluation packages (including reports, ratings, author responses, and manager summaries) on our PubPub community. Making evaluation public facilitates dialogue and feedback.

  5. Linking, not publishing: Our process is not "exclusive." Authors can submit their work to a journal (or other evaluation service) at any time. This approach also allows us to benchmark our evaluations against traditional publication outcomes.

  6. Prizes: We award financial prizes and hold public events to recognize the most credible, impactful, useful, and insightful research, as well as strong engagement with our evaluation process.

  7. Transparency: We aim for maximum transparency in our processes and judgments.

This is not an original idea, and there are others in this space. For example, our approach is closely related to eLife's "Publish, Review, Curate" model; see their updated (Oct 2022) model here. COS is also building a "lifecycle journal," and PREReview promotes public journal-independent evaluation. However, we cover a different research focus and make some different choices, discussed below. We also discuss other Parallel/partner initiatives and resources, many of whom we are building partnerships with; still, we think we are the only group funded to do this in this particular research area and focus. We are also taking a different approach from previous efforts, including funding evaluation (see Why pay evaluators (reviewers)?) and asking for quantified ratings and predictions (see Guidelines for evaluators).

Funding

29 Oct 2024: We have about a 9-12 month runway, which could be extended to cover our basic activities for a longer period. We are actively applying for grants and funding.

Our current support comes from:

  • Survival and Flourishing Fund (successful); funds deposited Summer 2023.

  • ACX/LTFF grant proposal (as submitted, successful): an ACX grant, passed to the Long-Term Future Fund, which awarded it; extended through mid-2023.

We have submitted some other grant applications; e.g., see our unsuccessful FTX application here. Other grant applications are linked below; we share these in the spirit of transparency.

Change is hard: overcoming academic inertia

Academics and funders have complained about this system for years, and continue to do so every day on social media; we're fairly confident our critiques of the traditional review and publication process will resonate with most readers. So why haven't academia and the research community been able to move to something new? There is a difficult collective action problem: individual researchers and universities find it risky to move unilaterally. But we believe we have a good chance of finally changing this model and moving to a better equilibrium. How? We will:

  • Take risks: Many members of The Unjournal management are not traditional academics; we can stick our necks out. We are also recruiting established senior academics who are less professionally vulnerable.

  • Allow less risky "bridging steps": As noted above, The Unjournal allows researchers to submit their work to traditional journals. In fact, this will provide a benchmark to help build our quantitative ratings and demonstrate their value.

  • Communicate with researchers and stakeholders to make our processes easy, clear, and useful to them.

  • Make our output useful, in the meantime: It may take years for university departments and grant funders to incorporate journal-independent evaluations as part of their metrics and reward systems. The Unjournal can be somewhat patient: our evaluation, rating, feedback, and communication are already providing a valuable service to authors, policymakers, and other researchers.

  • Leverage new technology: A new set of open-access and AI-powered tools makes what we are trying to do easier and more useful every day.

  • Reward early adopters with prizes and recognition: We can replace "fear of standing out" with "fear of missing out." In particular, authors and research institutions that commit to publicly engaging with evaluations and critiques of their work should be commended and rewarded. And we are doing this.

  • Bring in new interests, external funding, and incentives: A range of well-funded and powerful organizations, such as the Sloan Foundation and Open Philanthropy, have a strong inherent interest in high-impact research being reliable, robust, and reasoning-transparent. This support can fundamentally shift existing incentive structures.

Our objectives

This GitBook is a knowledge base that supplements our main public page, unjournal.org. It serves as a platform to organize our ideas and resources and to track our progress towards our dual objectives:

  1. Making "peer evaluation and rating" of open projects into a standard high-status outcome in academia and research, specifically within economics and social sciences. This stands in contrast to the conventional binary choice of accepting or rejecting papers to be published as PDFs and other static formats.

  2. Building a cohesive and efficient system for publishing, accruing credibility, and eliciting feedback for research aligned with effective altruism and global priorities. Our ultimate aim is to make rigorous research more impactful, and impactful research more rigorous.

Where do I find...? Where do I go next?

See Content overview

Key links:

  • Why Unjournal?

  • Guidelines for evaluators

  • Explanations & outreach

  • How to get involved

  • Slide deck

  • Our evaluation packages on PubPub

  • Unjournal.org (public-facing home page)

You can also search and query this GitBook (press Ctrl-K or Cmd-K).

Why should I submit my work to The Unjournal? Why should I engage? See the FAQ for research authors.