Formats, research stage, publication status

Formats

We will consider research in almost any format; it does not need to be a frozen PDF or even a linear paper. See Promoting 'Dynamic Documents' and 'Living Research Projects'.

Accessibility

We will only evaluate research objects that are openly accessible, without paywalls or logins. The NBER working paper series is a borderline case, as access is somewhat limited, but we are covering it for now because of its importance and because there are workarounds to the access limits.

Publication/peer review status

For some fields, the peer-review process is rapid and perhaps lower-intensity; there, 'post-peer-review evaluation' makes sense. In fields like economics, where peer review is very slow and high-intensity, we mostly consider evaluating work before it has been accepted by a peer-reviewed journal, i.e., 'working papers'. In such cases, our evaluations may contribute less to some parts of our value proposition (efficiency, author feedback and revision, early credibility signals).

But we sometimes consider (economics) papers already published in peer-reviewed journals, if we still see substantial value in having the work publicly evaluated and rated.

In particular, we may evaluate a journal-published paper if:

  1. The paper significantly influences policy or funding decisions and therefore merits additional, transparent scrutiny.

  2. Our evaluation process supports broader goals, such as fostering an impactful research community or refining our evaluation methods.

  3. We have reason to suspect that the journal's focus or field makes it less likely to consider critical aspects of the paper, such as empirical robustness or policy relevance. For example, a journal focused on the natural sciences may neglect issues of causal inference that are more familiar to quantitative social scientists and economists.
