Tech scoping

Update: see the Gdoc on 'Editorial Management Tech needs' HERE, embedded at the bottom of this page. (To do: integrate these discussions.)

We are, or may be, eligible for some nonprofit discounts.

Core needs

  1. Hosting 'qualitative' evaluation content: A place to host the evaluations (as well as the authors' responses and the editors' comments), allowing the public to read them in an attractive and convenient way (and perhaps respond to them). Ideally, this will also make the quantitative ratings and predictions prominent and connected to the evaluations. This system needs to support evaluations of any research that is publicly hosted and has a DOI.

  2. DOIs, bibliometrics, Google Scholar: We need these evaluations to be visible in "bibliometric systems": they need DOIs, and they need to show up in Google Scholar and other search tools. The references cited in the evaluations (including the original paper) should also appear in the bibliometric record. Right now, CrossRef seems to be the leading system for this (see the lookup sketch after this list).

  3. Curation and organization: A place to bring together all of the evaluations we have done, serving as the hub of our project; it should explain the project, attract positive attention, and engage participants and readers.

  4. 'Editorial management': A tool to coordinate our management process (submissions, evaluations, et cetera).

  5. Hosting and open analysis of 'quantitative' ratings and metadata: A place to organize the evaluation data, particularly the quantitative ratings and predictions, in ways that people can analyze and use (a toy example follows this list).

    Ideally, this would also update automatically. We might also want to provide some analysis and visualization (although we could do that on one of the other pages, simply bringing in the data from where it is stored). An API and links to other data archives could be useful.
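To make the bibliometric need (item 2) concrete, here is a minimal sketch, assuming Python with the `requests` library, of looking up a work's record in CrossRef's public REST API: it checks that the DOI resolves to metadata and that the cited references appear in the bibliometric record. The DOI below is a placeholder, not a real Unjournal evaluation.

```python
# Minimal sketch: fetch a work's CrossRef record and inspect its metadata.
# The DOI used here is a placeholder, not a real Unjournal evaluation.
import requests

def crossref_record(doi: str) -> dict:
    """Fetch the public CrossRef metadata record for a DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]

if __name__ == "__main__":
    record = crossref_record("10.1234/example-evaluation")  # placeholder DOI
    print("Title:", record.get("title", ["(none)"])[0])
    print("References in record:", len(record.get("reference", [])))
    print("Times cited:", record.get("is-referenced-by-count", 0))
```

CrossRef's REST API is public and requires no API key, so this kind of check is easy to automate.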
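And for item 5, a toy example of the kind of open analysis we would like to enable. The CSV file name and column names here are hypothetical, not The Unjournal's actual schema; the sketch just assumes ratings were exported with one row per (paper, evaluator, criterion).

```python
# Toy example: summarize hypothetical quantitative ratings data with pandas.
# The file name and columns ("paper_id", "criterion", "rating") are
# assumptions, not The Unjournal's actual export format.
import pandas as pd

ratings = pd.read_csv("unjournal_ratings.csv")  # hypothetical export

# Mean rating and number of evaluators, per paper and criterion.
summary = (
    ratings.groupby(["paper_id", "criterion"])["rating"]
    .agg(["mean", "count"])
    .reset_index()
)
print(summary.head())
```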

Some other desirable capabilities

  • Ways to enable evaluators and others to collaboratively annotate preprints (see the Hypothes.is sketch after this list)

  • Integrations with other platforms including prediction markets and OSF

  • Platforms for people to engage in other ways, perhaps up- and down-rating evaluations, etc.
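As one illustration of the annotation point above: a minimal sketch, again in Python with `requests`, that pulls public Hypothes.is annotations on a given document via the Hypothes.is search API. The preprint URL below is a placeholder; public annotations require no API token.

```python
# Minimal sketch: list public Hypothes.is annotations on a document URL.
# The target URL below is a placeholder.
import requests

def fetch_annotations(target_url: str, limit: int = 50) -> list[dict]:
    """Search the public Hypothes.is API for annotations on a given URL."""
    resp = requests.get(
        "https://api.hypothes.is/api/search",
        params={"uri": target_url, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["rows"]

if __name__ == "__main__":
    for ann in fetch_annotations("https://example.org/preprint.pdf"):
        print(ann["user"], "->", ann.get("text", "")[:80])
```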

Further discussion HERE ("Unjournal platform discussion")

We have a partially built system combining Sciety, Kotahi, and Hypothes.is. However, other tools and bespoke work will be needed to achieve some of the goals stated above.

2023 Update: We have moved mainly to PubPub, at least for hosting evaluation output; see https://unjournal.pubpub.org/.

Partial restatement and update, 2023

(Embedded here: the 'Editorial Management Tech needs' Gdoc linked above.)

