Recap: submissions

Text to accompany the Impactful Research Prize discussion


Details of submissions to The Unjournal

Note: This section largely repeats content in our guide for researchers/authors, especially our FAQ for research authors.

Jan. 2024: We have lightly updated this page to reflect our current systems.

What we are looking for

We describe the nature of the work we are looking to evaluate, along with examples, in this forum post. Update 2024: This is now better characterized under "What research to target?" and "What specific areas do we cover?"

If you are interested in submitting your work for public evaluation: we are looking for research relevant to global priorities (especially in the quantitative social sciences) and for impact evaluations, particularly work that would benefit from further feedback and evaluation.

Your work will be evaluated using our evaluation guidelines and metrics. You can read these before submitting.

Important note: We are not a journal. Having your work evaluated here does not forfeit your opportunity to publish it in a journal; we simply operate a system that allows your work to be independently evaluated.

If you think your work fits our criteria and would like it to be publicly evaluated, please submit your work through this form.

If you would like to submit more than one of your papers, you will need to complete a new form for each paper you submit.

Conditional embargo on the publishing of evaluations

By default, we would like Unjournal evaluations to be made public. We think public evaluations are generally good for authors, as explained below. However, in special circumstances, and particularly for very early-career researchers, we may make exceptions.

If there is an early-career researcher on the author team, we will allow the authors to "embargo" publication of the evaluation until a later date. This date is contingent, but not indefinite: the embargo lasts until after the early-career researcher's upcoming job search, or until the work has been published in a mainstream journal, unless:

  • the author(s) give(s) earlier permission for release; or

  • a fixed upper limit of 14 months is reached.

If you would like to request an exception to a public evaluation, you will have the opportunity to explain your reasoning in the submission form.

Why might an author want to engage with The Unjournal?

  1. The Unjournal presents an additional opportunity for evaluation of your work with an emphasis on impact.

  2. Substantive feedback will help you improve your work—especially useful for young scholars.

  3. Ratings can be seen as markers of credibility for your work, which could help your career advancement, at least at the margin now, and hopefully a great deal in the future. You also gain the opportunity to publicly respond to critiques and correct misunderstandings.

  4. You will gain visibility and a connection to the EA/Global Priorities communities and the Open Science movement.

  5. You can take advantage of this opportunity to gain a reputation as an ‘early adopter and innovator’ in open science.

  6. You can win prizes: You may win a “best project prize,” which could be financial as well as reputational.

  7. Entering our process also makes it more likely that we will hire you as a paid evaluator or editorial manager.

  8. We will encourage media coverage.

What we might ask of authors

If we consider your work for public evaluation, we may ask for some of the items below, although most are optional. We will aim to make this a very light touch for authors.

  1. A link to a non-paywalled, hosted version of your work (in any format—PDFs are not necessary) that can be given a Digital Object Identifier (DOI). Again, we will not be "publishing" this work, just evaluating it.

  2. A link to data and code, if possible. We will work to help you to make it accessible.

  3. Assignment of two evaluators, whom we will pay to assess your work. We will likely keep their identities confidential, although this is flexible, depending on each evaluator's preference. Where it seems particularly helpful, we will facilitate a confidential channel to enable a dialogue with the authors. One member of our managing team will handle this process.

  4. Permission for evaluators to publicly post their evaluations (i.e., 'reviews') of your work on our platform. As noted above, we will ask them to provide feedback, thoughts, suggestions, and some quantitative ratings for the paper.

  • By completing the submission form, you are providing your permission for us to post the evaluations publicly unless you request an embargo.

  • You will have a two-week window to respond through our platform before anything is posted publicly. Your responses can also be posted publicly.

See "Conditional embargos & exceptions" for more detail and examples.

For more information on why authors may want to engage and what we may ask authors to do, please see our FAQ on "why engage."
