Plan of action

Building a "best feasible plan"...


What is this Unjournal? See our summary.

Post-pilot goals

See the vision and broad plan presented here (and embedded below), updated August 2023.

Pilot targets

What we need our pilot (~12 months) to demonstrate:
  1. We actually "do something."

  2. We can provide credible reviews and ratings that have value as measures of research quality comparable to (or better than) traditional journal systems.

  3. We identify important work that informs global priorities.

  4. We boost work in innovative, transparent, and replicable formats (especially dynamic documents).

  5. Authors engage with our process and find it useful.

  6. (As a stretch goal) Universities, grantmakers, and other arbiters assign value to Unjournal ratings.


Building the research "unjournal"

Setup and team

Status: Mostly completed/decided for pilot phase

Create a set of rules for "submission and management"

  • Which projects enter the review system (relevance, minimal quality, stakeholders, any red lines or "musts")

  • How projects are to be submitted

  • How reviewers are to be assigned and compensated

Status: Mostly completed/decided for pilot phase; will review after initial trial

Rules for reviews/assessments

  • To be done on the chosen open platform (Kotahi/Sciety) unless otherwise infeasible (10 Dec 2022 update)

  • Share, advertise, promote this; have efficient meetings and presentations

    • Establish links to all open-access bibliometric initiatives (to the extent feasible)

  • Harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, to coin-based 'ResearchHub', etc.

Status: Mostly completed/decided for pilot phase; will review after the initial trial

Further steps

Key next steps (pasted from FTX application)

The key elements of the plan:

Build a "founding committee" of 5–8 experienced and enthusiastic EA-aligned/adjacent researchers at EA orgs, research academics, and practitioners (e.g., drawing from speakers at recent EA Global meetings).

  1. Host a meeting (and a shared collaboration space/document) to come to a consensus: a set of practical principles.

  2. Post and present our consensus (coming out of this meeting) on key fora. After a brief "follow-up period" (~1 week), consider adjusting the consensus plan in light of feedback, and repost (and move forward).

  3. Set up the basic platforms for posting and administering reviews and evaluations, and for offering curated links and categorizations of papers and projects. Note: I am strongly leaning towards https://prereview.org/ as the main platform, which has indicated willingness to give us a flexible ‘experimental space’. Update: Kotahi/Sciety now seems a more flexible solution.

  4. Reach out to researchers in relevant areas and organizations and ask them to "submit" their work for "feedback and potential positive evaluations and recognition," and for a chance at a prize. The Unjournal will not be an exclusive outlet: researchers are free to also submit the same work to 'traditional journals' at any point. However, whether submitted elsewhere or not, papers accepted by The Unjournal must be publicly hosted, with a DOI. Ideally, the whole project is maintained and updated, with all materials, in a single location.

21 Sep 2022 status: Steps 1–3 are mostly completed. We have a good working and management group. We decided on a platform and are configuring it, and we have an interim workaround. We've reached out to researchers and organizations and received some good responses, but we need to find more channels to disseminate and advertise this. We've identified and are engaging with four papers for the initial piloting. We aim to put out a larger prize-driven call soon and take in about 10 more papers or projects.

Status: We are still working with Google Docs and building an external survey interface. We plan to integrate this with PubPub over the coming months (August/Sept. 2023).

See here for proposed specifics.

Pilot: Building a founding committee

Define the broad scope of our research interests and key overriding principles. Keep this light-touch, so as to also be attractive to aligned academics.

Build "editorial-board-like" teams with subject or area expertise

See here for a first pass.

See our guidelines for evaluators.

See our 12-month plan.

Aside: "Academic-level" work for EA research orgs (building on this post at onscienceandacademia.org)

The approach below is largely integrated into the Unjournal proposal, but it also suggests how organizations like RP might get feedback and boost credibility:

  1. Directly solicit feedback from EA-adjacent partners in academia and other EA-research orgs

Next steps towards this approach:

  • Build our own systems (assign "editors") to do this without bias and with incentives

  • Build standard metrics for interpreting these reviews, possibly incorporating prediction markets (see the sketch after this list)

  • Encourage them to leave their feedback through the PREreview or another platform
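
To make the "standard metrics" idea concrete, here is a minimal sketch (illustrative only, not an established Unjournal method) of one way to pool several evaluators' ratings into a single summary number. It assumes each evaluator reports a midpoint rating on a 0–100 scale plus a 90% credible interval, and it weights each rating by its implied precision, so tighter intervals count for more; the names and the weighting scheme are assumptions for illustration.

```python
# Illustrative sketch: precision-weighted pooling of evaluator ratings.
# Assumes each evaluator gives a 0-100 midpoint plus a 90% credible interval.
from dataclasses import dataclass
from math import sqrt

Z90 = 1.645  # under normality, +/- 1.645 sd spans a 90% interval

@dataclass
class Rating:
    midpoint: float  # central rating on a 0-100 scale
    lower: float     # lower bound of the 90% credible interval
    upper: float     # upper bound of the 90% credible interval

def pooled_rating(ratings: list[Rating]) -> tuple[float, float]:
    """Return the precision-weighted mean rating and its standard error."""
    total_weight = 0.0
    weighted_sum = 0.0
    for r in ratings:
        sd = (r.upper - r.lower) / (2 * Z90)  # implied standard deviation
        weight = 1.0 / (sd ** 2)              # precision: tighter CI, more weight
        total_weight += weight
        weighted_sum += weight * r.midpoint
    return weighted_sum / total_weight, sqrt(1.0 / total_weight)

# Example: the second evaluator is more confident (narrower interval),
# so they pull the pooled rating toward 80.
print(pooled_rating([Rating(70, 55, 85), Rating(80, 74, 86)]))
```

A prediction-market cross-link could enter the same framework by treating market prices (e.g., on whether a result replicates) as one more rating with its own implied uncertainty.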

Host the article (or dynamic research project or 'registered report') on OSF or another place allowing time-stamping and DOIs (see my resources list in Airtable for a start)

Link this to PREreview (or a similar tool or site) to solicit feedback and evaluation without requiring exclusive publication rights (again, see the Airtable list)

Also: Commit to publishing academic reviews, or share them in our internal group for further evaluation, reassessment, or benchmarking against the ‘PREreview’-type reviews above (perhaps taking the FreeOurKnowledge pledge relating to this).
