
An Introduction to The Unjournal

We are not a journal!

Key links/FAQs
  • Guidelines for Evaluators

In a nutshell

The Unjournal seeks to make rigorous research more impactful and impactful research more rigorous. We are a team of researchers, practitioners, and open science advocates led by David Reinstein.

The Unjournal encourages better research by making it easier for researchers to get feedback and credible ratings. We coordinate and fund public, journal-independent evaluation of hosted papers and dynamically presented projects. We publish evaluations, ratings, manager summaries, author responses, and links to evaluated research on our PubPub page.

As the name suggests, we are not a journal!

We are working independently of traditional academic journals to build an open platform and a sustainable system for feedback, ratings, and assessment. We are currently focusing on quantitative work that informs global priorities in economics, policy, and other social sciences.

How to get involved?

We are looking for research papers to evaluate, as well as evaluators. If you want to suggest research, your own or someone else's, you can let us know using this form. If you want to be an evaluator, apply here. You can express your interest in being a member of the management team, advisory board, or reviewer pool. For more information, check our guide on how to get involved.

Why The Unjournal?

Peer review is great, but conventional academic publication processes are wasteful, slow, and rent-extracting. They discourage innovation and prompt researchers to focus more on "gaming the system" than on the quality of their research. We provide an immediate alternative and, at the same time, offer a bridge to a more efficient, informative, useful, and transparent research evaluation system.

Does The Unjournal charge any fees?

No. We are a nonprofit organization (hosted by OCF) and we do not charge any fees for submitting and evaluating your research. In contrast to most traditional journals, we compensate evaluators for their time and even award prizes for strong research work. We do so thanks to funding from the Long-Term Future Fund and the Survival and Flourishing Fund.

At some point in the future, we might consider sliding-scale fees for people or organizations submitting their work for Unjournal evaluation, or for other services. If we do, it would simply be a way to cover the compensation we pay evaluators and our actual costs. Again, we are a nonprofit and we will stay that way.

How do we do this?

  1. Research submission/identification and selection: We identify, solicit, and select relevant research work to be hosted on any open platform, in any format that can gain a time-stamped DOI. Authors are encouraged to present their work in the ways they find most comprehensive and understandable. We support the use of dynamic documents and other formats that foster replicability and open science. (See: the benefits of dynamic docs.)

  2. Paid evaluators (AKA "reviewers"): We compensate evaluators (essentially, reviewers) for providing thorough feedback on this work. (Read more: Why do we pay?)

This is not an original idea, and there are others in this space, but...

For example, this proposal is closely related to eLife's "Publish, Review, Curate" model; see their updated (Oct 2022) model here. COS is also building a "lifecycle journal" model. However, we cover a different research focus and make some different choices, discussed below. We also discuss other parallel and partner initiatives, many of which we are building partnerships with. However, we think we are the only group funded to do this in this particular research area/focus. We are also taking a different approach from previous efforts, including funding evaluation (see Why pay evaluators (reviewers)?) and asking for quantified ratings and predictions (see Guidelines for evaluators).

Funding

Our current funding comes from:

  • A Survival and Flourishing Fund grant (successful application).

  • An ACX/LTFF grant (ACX passed it to the Long Term Future Fund, who awarded it). This funding was extended through mid-2023.

We have submitted some other grant applications; e.g., see our unsuccessful FTX application. Other grant applications are linked below. We are sharing these in the spirit of transparency.

Change is hard: overcoming academic inertia

Academics and funders have complained about this for years (and continue to do so every day on social media), and we suspect our critiques of the traditional review and publication process will resonate with readers.

So why haven't academia and the research community been able to move to something new? There is a difficult collective action problem. Individual researchers and universities find it risky to move unilaterally. But we believe we have a good chance of finally changing this model and moving to a better equilibrium because we will:

  • Take risks: Many members of The Unjournal management are not traditional academics; we can stick our necks out. We are also bringing on board established senior academics who are less professionally vulnerable.

  • Bring in new interests, external funding, and incentives: There are a range of well-funded and powerful organizations—such as the Sloan Foundation and Open Philanthropy—with a strong inherent interest in high-impact research being reliable, robust, and reasoning-transparent. This support can fundamentally shift existing incentive structures.

Our webpage and our objectives

This GitBook serves as a platform to organize our ideas and resources and track our progress towards The Unjournal's dual objectives:

  1. Making "peer evaluation and rating" of open projects into a standard high-status outcome in academia and research, specifically within economics and social sciences. This stands in contrast to the conventional binary choice of accepting or rejecting papers to be published as PDFs and other static formats.

  2. Building a cohesive and efficient system for publishing, accruing credibility, and eliciting feedback for research aligned with effective altruism and global priorities. Our ultimate aim is to make rigorous research more impactful, and impactful research more rigorous.

Feedback and discussion

19 Feb 2024: We previously set up some discussion spaces; these have not been fully updated.

  • Please let me know if you wish to engage (email contact@unjournal.org)

Where do I find... / where do I go next?

See the Content overview.

Orphaned notes -- please ignore

  • We target these areas (1) because of our current management team's expertise and (2) because these seem particularly in need of The Unjournal's approach. However, we are open to expanding and branching out.

Please do weigh in; all suggestions and comments will be credited. See also the Unjournal public-facing FAQ (in progress); remember to notify contact@unjournal.org if you make any comments.

Eliciting quantifiable and comparable metrics: We aim to establish and generate credible measures of research quality and usefulness. We intend to benchmark these against traditional measures (such as journal tiers) and assess their reliability, consistency, and predictive power. (Read more: Why quantitative metrics?)

  • Public evaluation: Reviews are typically public, including potential author responses. This facilitates dialogue and enhances understanding.

  • Linking, not publishing: Our process is not "exclusive." Authors can submit their work to a journal (or other evaluation service) at any time. This approach also allows us to benchmark our evaluations against traditional publication outcomes.

  • Financial prizes: We award financial prizes, paired with public presentations, to works judged to be the strongest.

  • Transparency: We aim for maximum transparency in our processes and judgments.

  • Allow less risky "bridging steps": As noted above, The Unjournal allows researchers to submit their work to traditional journals. In fact, this will provide a benchmark to help build our quantitative ratings and demonstrate their value.
  • Communicate with researchers and stakeholders to make our processes easy, clear, and useful to them.

  • Make our output useful: It may take years for university departments and grant funders to incorporate journal-independent evaluations as part of their metrics and reward systems. The Unjournal can be somewhat patient: our evaluation, rating, feedback, and communication will provide a valuable service to authors, policymakers, and other researchers in the meantime.

  • Leverage new technology: A new set of open-access tools (such as those funded by Sloan Scholarly Communications) makes what we are trying to do easier, and makes formats other than static PDFs more useful every day.

  • Reward early adopters with prizes and recognition: We can replace "fear of standing out" with "fear of missing out." In particular, authors and research institutions that commit to publicly engaging with evaluations and critiques of their work should be commended and rewarded. And we intend to do this.

  • Please let me know if you want edit or comment access to the present Gitbook.

  • We are considering future outcomes like replication and citations.
  • We will also consider funding later rounds of review or evaluations of improved and expanded versions of previously evaluated work.


    Impactful Research Prize (pilot)


    As of December 2023, the prizes below have been chosen and will soon be announced. We are also scheduling an event linked to this prize. However, we are preparing for even larger author and evaluator prizes for our next phase. Submit your research to The Unjournal, or serve as an evaluator, to be eligible for future prizes (details to be announced).

    Submit your work to be eligible for our “Unjournal: Impactful Research Prize” and a range of other benefits including the opportunity for credible public evaluation and feedback.

    First-prize winners will be awarded $2,500, and runners-up will receive $1,000.

    Note: these are the minimum amounts; we will increase these if funding permits.

    Prize winners will have the opportunity (but not the obligation) to present their work at an online seminar and prize ceremony co-hosted by The Unjournal, Rethink Priorities, and EAecon.



    To be eligible for the prize, submit a link to your work for public evaluation here.

    • Please choose “new submission” and “Submit a URL instead.”

    The Unjournal, with funding from the Long Term Future Fund and the Survival and Flourishing Fund, organizes and funds public, journal-independent feedback and evaluation. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation, and aim to expand this widely. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.

    We aim to publicly evaluate 15 papers (or projects) within our pilot year. This award will honor researchers doing robust, credible, transparent work with a global impact. We especially encourage the submission of research in "open" formats such as hosted dynamic documents (Quarto, R-markdown, Jupyter notebooks, etc.).

    The research will be chosen by our management team for public evaluation by 2–3 carefully selected, paid reviewers, based on an initial assessment of a paper's methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch out these criteria here.

    All evaluations, including quantitative ratings, will be made public by default; however, we will consider "embargos" on this for researchers with sensitive career concerns (the linked form asks about this). Note that submitting your work to The Unjournal does not imply "publishing" it: you can submit it to any journal before, during, or after this process.

    If we choose not to send your work out to reviewers, we will try to at least offer some brief private feedback.

    All work evaluated by The Unjournal will be eligible for the prize. Engagement with The Unjournal, including responding to evaluator comments, will be a factor in determining the prize winners. We also have a slight preference for giving at least one of the awards to an early-career researcher, but this need not be determinative.

    Our management team and advisory board will vote on the prize winners in light of the evaluations, with possible consultation of further external expertise.

    Deadline: Extended until 5 December (to ensure eligibility).

    Note: In a subsection below, Recap: submissions, we outline the basic requirements for submissions to The Unjournal.

    How we chose the research prize winners (2023)

    The prize winners for The Unjournal's Impactful Research Prize were selected through a multi-step, collaborative process involving both the management team and the advisory board. The selection was guided by several criteria, including the quality and credibility of the research, its potential for real-world impact, and the authors' engagement with The Unjournal's evaluation process.

    1. Initial Evaluation: All papers that were evaluated by The Unjournal were eligible for the prize. The discussion, evaluations, and ratings provided by external evaluators played a significant role in the initial shortlisting.

    2. Management and Advisory Board Input: Members of the management committee and advisory board were encouraged to write brief statements about papers they found particularly prize-worthy.

    3. Meeting and Consensus: A "prize committee" meeting was held with four volunteers from the management committee to discuss the shortlisted papers and reach a consensus. The committee considered both the papers and the content of the evaluations in detail.

    This comprehensive approach aimed to ensure that the prize winners were selected in a manner that was rigorous, fair, and transparent, reflecting the values and goals of The Unjournal.

    Brief version of call

    I (David Reinstein) am an economist who left UK academia after 15 years to pursue a range of projects (see my web page). One of these is The Unjournal:

    The Unjournal (with funding from the Long Term Future Fund and the Survival and Flourishing Fund) organizes and funds public, journal-independent feedback and evaluation, paying reviewers for their work. We focus on research that is highly relevant to global priorities, especially in economics, social science, and impact evaluation. We encourage better research by making it easier for researchers to get feedback and credible ratings on their work.


    We are looking for your involvement...

    Evaluators

    We want researchers who are interested in doing evaluation work for The Unjournal. We pay an average of $400–$500 per complete and on-time evaluation, and we award monetary prizes for the strongest work. Right now we are particularly looking for economists and people with quantitative and policy-evaluation skills. We describe what we are asking evaluators to do here: essentially a regular peer review with some different emphases, plus a set of quantitative ratings and predictions. Your evaluation content will be made public (and receive a DOI, etc.), but you can choose whether to remain anonymous.

    To sign up to be part of the pool of evaluators, or to get involved in The Unjournal project in other ways, please fill out this form or email contact@unjournal.org.

    Research

    We welcome suggestions for particularly impactful research that would benefit from (further) public evaluation. We choose research for public evaluation based on an initial assessment of methodological strength, openness, clarity, relevance to global priorities, and the usefulness of further evaluation and public discussion. We sketch these criteria here, and discuss some potential examples (see research we have chosen and evaluated at unjournal.pubpub.org, and a larger list of research we're considering here).

    If you have research—your own or others'—that you would like us to assess, please fill out this form. You can also submit your own work here (or by contacting contact@unjournal.org). Authors of evaluated papers will be eligible for our Impactful Research Prizes (details here).

    Feedback

    We are looking for both feedback on and involvement in The Unjournal project. Feel free to reach out at contact@unjournal.org.

    View our data protection statement.

    The latter link requires an ORCID ID; if you prefer, you can email your submission to contact@unjournal.org.
    Members of the prize committee considered the papers and evaluations in detail, allocating a total of 100 points among the 10 candidate papers. We used this to narrow down a shortlist of five papers.
    4. Point Voting: The above shortlist and the notes from the accompanying discussion were shared with all management committee and advisory board members. Everyone in this larger group was invited to allocate up to 100 points among the shortlisted papers (and asked to allocate fewer points if they were less familiar with the papers and evaluations).

    5. Special Considerations: We decided that at least one of the winners had to be a paper submitted by the authors, or one where the authors substantially engaged with The Unjournal's processes. However, this constraint did not prove binding. Early-career researchers were given a slight advantage in our consideration.

    6. Final Selection: The first and second prizes were given to the papers with the first- and second-most points, respectively.


    How to get involved

    The Unjournal call for participants and research

    See In a nutshell for an overview of The Unjournal.


    In brief (TLDR): If you are interested in being on The Unjournal's management committee, advisory board, or evaluator pool, please fill out this form (about 3–5 min).

    If you want to suggest research for us to assess, please fill out this form. You can also submit your own work here, or by contacting contact@unjournal.org.

    Please note that while data submitted through the above forms may be shared internally within our Management Team, it will not be publicly disclosed. Our data protection statement is linked here.

    Overview and call

    I am David Reinstein, founder and co-director of The Unjournal. We have an open call for committee members, board members, reviewers, and suggestions for relevant work for The Unjournal to evaluate.

    The Unjournal team is building a system for credible, public, journal-independent feedback and evaluation of research.

    Briefly, The Unjournal's basic process is:
    • Identify, invite, or select contributions of relevant research that is publicly hosted on any open platform or archive in any format.

    We maintain an open call for participants for four different roles:

    1. Management Committee members (involving honorariums for time spent)

    2. Advisory Board members (no time commitment)

    3. Field Specialists (who will often also be on the Advisory Board)

    4. A pool of Evaluators (who will be paid for their time and work)

    You can express your interest (and enter our database) here.

    Some particular research area/field priorities (15 Aug 2023)

    We're interested in researchers and research-users who want to help us prioritize work for evaluation, and manage evaluations, considering:

    • research in any social science/economics/policy/impact-assessment area, and

    • research with the potential to be among the most globally impactful.

    Evaluators

    We will reach out to evaluators (a.k.a. "reviewers") on a case-by-case basis, as appropriate for each paper or project being assessed. This depends on expertise, the researcher's interest, and the absence of conflicts of interest.

    Time commitment: Case-by-case basis. For each evaluation, here are some guidelines for the amount of time to spend.

    Compensation: We pay a minimum of $200 (updated Aug. 2024) for a prompt and complete evaluation, and $400 for experienced evaluators. We offer additional prizes and incentives, and are committed to an average compensation of at least $450 per evaluator. See here for more details.

    Who we are looking for: We are putting together a list of people interested in being an evaluator and doing paid referee work for The Unjournal. We generally prioritize the pool of evaluators who signed up for our database before reaching out more widely.

    Interested? Please fill out this form (about 3–5 min; same form for all roles or involvement).

    Projects and papers

    We are looking for high-quality, globally pivotal research projects to evaluate, particularly those embodying open science practices and innovative formats. We are putting out a call for relevant research. Please suggest research here. (We offer bounties and prizes for useful suggestions; see this post.) For details of what we are looking for, and some potential examples, see here and the accompanying links.

    You can also put forward your own work.

    We provide a separate form for research suggestions. We may follow up with you individually.

    Contact us

    If you are interested in discussing any of the above in person, please email us (contact@unjournal.org) to arrange a conversation.

    We invite you to fill in this form to leave your contact information, as well as to outline which parts of the project you may be interested in.

    Note: This is under continual refinement; see our policies for more details.

    Content overview

    A "curated guide" to this GitBook; updated June 2023


    You can now ask questions of this GitBook using a chatbot: click the search bar or press cmd-k and choose "ask Gitbook."

    Some key sections and subsections

    Learn more about The Unjournal, our goals and policies

    • Frequently Asked Questions (FAQ): for authors, evaluators, etc.

    • Explanations & outreach: writeups of the main points for a few different audiences

    • Why Unjournal?: important benefits of journal-independent public evaluation and The Unjournal's approach, with links to deeper commentary

    • Our policies: evaluation & workflow: how we choose papers/projects to evaluate, how we assign evaluators, and so on

    Other resources and reading

    • Parallel/partner initiatives and resources: groups we work with; comparing approaches

    • What is global-priorities-relevant research?: what research are we talking about? What will we cover?

    Detail, progress, and internal planning

    These are of more interest to people within our team; we are sharing them in the spirit of transparency.

    • Plan of action: a "best feasible plan" for going forward

    • Grants and proposals: successful proposals (ACX, SFF), other applications, initiatives

    • UJ Team: resources, onboarding: key resources and links for managers, advisory board members, staff, team members, and others involved with The Unjournal project.

    Note: we have moved some of this "internal interest content" over to our Coda.io knowledge base.

    Research & operations-linked roles & projects


    We are again considering applications for the 'evaluation metrics/meta-science' role. We will also consider all applicants for our field specialist positions, and for roles that may come up in the future.

    The potential roles discussed below combine research-linked work with operations and administrative responsibilities. Overall, this may include some combination of:

    • Assisting and guiding the process of identifying strong and potentially impactful work in key areas, explaining its relevance, its strengths, and areas warranting particular evaluation and scrutiny

    • Interacting with authors, recruiting, and overseeing evaluators

    • Synthesizing and disseminating the results of evaluations and ratings

    • Aggregating and benchmarking these results

    • Helping build and improve our tools, incentives, and processes

    • Curating outputs relevant to other researchers and policymakers

    • Doing "meta-science" work


    See also our field specialist team pool and evaluator pool. Most of these roles involve compensation/honorariums. See Advisory/team roles (research, management).

    Possible role: Research and Evaluation Specialist (RES)

    Possible role details

    Potential focus areas include global health; development economics; markets for products with large externalities (particularly animal agriculture); attitudes and behaviors (altruism, moral circles, animal consumption, effectiveness, political attitudes, etc.); economic and quantitative analysis of catastrophic risks; the economics of AI safety and governance; aggregation of expert forecasts and opinion; international conflict, cooperation, and governance; etc.

    Work (likely to include a combination of):

    (Nov. 2023: Note, we cannot guarantee that we will be hiring for this role, because of changes in our approach.)

    Jobs and paid projects with The Unjournal


    19 Feb 2024: We are not currently hiring, but expect to do so in the future.

    To indicate your potential interest in roles at The Unjournal, such as those described below, please fill out this quick survey form and link (or upload) your CV or webpage.

    • If you already filled out this form for a role that has changed titles, don’t worry. You will still be considered for relevant and related roles in the future.

  • Identify and characterize research (in the area of focus) that is most relevant for The Unjournal to evaluate
  • Summarize the importance of this work, its relevance to global priorities and connections to other research, and its potential limitations (needing evaluation)

  • Help build and organize the pool of evaluators in this area

  • Assist evaluation managers or serve as evaluation manager (with additional compensation) for relevant papers and projects

  • Synthesize and communicate the progress of research in this area and insights coming from Unjournal evaluations and author responses; for technical, academic, policy, and intelligent lay audiences

  • Participate in Unjournal meetings and help inform strategic direction

  • Liaise and communicate with relevant researchers and policymakers

  • Help identify and evaluate prize winners

  • Meta-research and direct quantitative meta-analysis (see "Project" below)

  • Desirable skills and experience:

    Note: No single skill or experience is necessary independently. If in doubt, we encourage you to express your interest or apply.

    • Understanding of the relevant literature, methodology, and policy implications (to an upper-postgraduate level) in this or a related field and technical areas

    • Research and policy background and experience

    • Strong communication skills

    • Ability to work independently, as well as to build coalitions and cooperation

    • Skills in statistics, data science, and the "aggregation of expert beliefs"

    Proposed terms:

    • 300 hours (flexible, extendable) at $25–$55/hour USD (TBD, depending on experience and skills)

    • This is a contract role, open to remote and international applicants. However, the ability to attend approximately weekly meetings and check-ins at times compatible with the New York timezone is essential.

    Length and timing:

    • Flexible; to be specified and agreed with the contractor.

    • We are likely to hire one role starting in Summer 2023, and another starting in Autumn 2023.

    • Extensions, growth, and promotions are possible, depending on performance, fit, and our future funding.

    Advisory/team roles (research, management)
    Express your interest here
  • If you add your name to this form, we may contact you to offer you the opportunity to do paid project work and paid work tasks.

  • Furthermore, if you are interested in conducting paid research evaluation for The Unjournal, or in joining our advisory board, please complete the form linked herearrow-up-right.

    Feel free to contact contact@unjournal.org with any questions.

    Quick links to role descriptions below

    Administration, operations and management roles

    Research & operations-linked roles & projects

    Standalone project: Impactful Research Scoping (temp. pause)

    Additional information

    Express interest in any of these roles in our survey form.

    The Unjournal, a not-for-profit collective under the umbrella and fiscal sponsorship of the Open Collective Foundation, is an equal-opportunity employer and contractor. We are committed to creating an inclusive environment for all employees, volunteers, and contractors. We do not discriminate on the basis of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetic information, disability, age, or veteran status.

    See our data protection statement linked here.


    In addition to the jobs and paid projects listed here, we are expanding our management team, advisory board, field specialist team pool, and evaluator pool. Most of these roles involve compensation/honorariums. See Advisory/team roles (research, management).


  • Pay evaluators to give careful feedback on this work, with prizes and incentives for strong evaluation work.

  • Elicit quantifiable and comparable metrics of research quality as credible measures of value (see: evaluator guidelines). Synthesize the results of these evaluations in useful ways.

  • Publicly post and link all reviews of the work. Award financial prizes for the work judged strongest.

  • Allow evaluators to choose if they wish to remain anonymous or to "sign" their reviews.

  • Aim to be as transparent as possible in these processes.

  • A pool of Evaluators (who will be paid for their time and their work; we also draw evaluators from outside this pool)

    Some particular areas where we are hoping to expand our expertise (as of 15 Aug 2023) include:

    - Biological & pandemic risk

    - AI governance, AI safety

    - Animal welfare, markets for animal products

    - Long-term trends, demography

    - Macroeconomics/growth/(public) finance

    - Quantitative political science (voting, lobbying, etc.)

    - Social impact of new technology (including AI)


    Administration, operations and management roles

    These are principally not research roles, but familiarity with research and research environments will be helpful, and there is room for research involvement depending on the candidate’s interest, background, and skills/aptitudes.

    There is currently one such role:

    Communications, Writing, and Public Relations Specialist (As of November 2023, still seeking freelancers)

    Further note: We previously considered a “Management support and administrative professional” role. We are not planning to hire for this role currently. Those who indicated interest will be considered for other roles.


    hashtag
    Communications, Writing, and Public Relations Specialist

    As of November 2023, we are soliciting applications from freelancers with skills in particular areas.

    The Unjournal is looking to work with a proficient writer who is adept at communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. As we are in our early stages, this is a generalist role. We need someone to help us explain what The Unjournal does and why, make our processes easy to understand, and ensure our outputs (evaluations and research synthesis) are accessible and useful to non-specialists. We seek someone who values honesty and accuracy in communication; someone who has a talent for simplifying complex ideas and presenting them in a clear and engaging way.

    The work is likely to include:

    1. Promotion and general explanation

      • Spread the word about The Unjournal, our approach, our processes, and our progress in press releases and short pieces, as well as high-value emails and explanations for a range of audiences

      • Make the case for The Unjournal to potentially skeptical audiences in academia/research, policy, philanthropy, effective altruism, and beyond

    Most relevant skills, aptitudes, interests, experience, and background knowledge:

    • Understanding of The Unjournal project

    • Strong written communications skills across a relevant range of contexts, styles, tones, and platforms (journalistic, technical, academic, informal, etc.)

    • Familiarity with academia and research processes and institutions

    Further desirable skills and experience:

    • Academic/research background in areas related to The Unjournal’s work

    • Operations, administrative, and project management experience

    • Experience working in a small nonprofit institution

    Proposed terms:

    • Project-based contract "freelance" work

    • $30–$55/hour USD (TBD, depending on experience and capabilities). Hours for each project include some onboarding and upskilling time.

    • Our current budget can cover roughly 200 hours of this project work. We hope to increase and extend this (depending on our future funding and expenses).


    Advisory/team roles (research, management)

    circle-info

    Nov. 2023: We are currently prioritizing bringing in more field specialists to build our teams in a few areas, particularly in:

    • Catastrophic risks, AI governance and safety

    • Animal welfare: markets, attitudes

    As well as:

    • Quantitative political science (voting, lobbying, attitudes)

    • Social impact of AI/emerging technologies

    • Macro/growth, finance, public finance

    In addition to the "work roles," we are looking to engage researchers, research users, meta-scientists, and people with experience in open science, open access, and management of initiatives similar to The Unjournal.

    We are continually looking to enrich our general team and board, including our management committee and advisory board. These roles come with some compensation and incentives.

    (Please see links and consider submitting an expression of interest).

    Standalone project: Impactful Research Scoping (temp. pause)

    circle-info

    Nov. 2023 update: We have paused this process to focus on our field specialist positions. We hope to return to hiring researchers to implement these projects soon.

    hashtag
    Proposed projects

    We are planning to hire 3–7 researchers for a one-off paid project.

    There are two opportunities: Contracted Research (CR) and Independent Projects (IP).

    Project Outline

    • What specific research themes in economics, policy, and social science are most important for global priorities?

    • What projects and papers are most in need of further in-depth public evaluation, attention, and scrutiny?

    • Where does "Unjournal-style evaluation" have the potential to be one of the most impactful uses of time and money? By impactful, we mean in terms of some global conception of value (e.g., the well-being of living things, the survival of human values, etc.).

    This is an initiative that aims to identify, summarize, and conduct an in-depth evaluation of the most impactful themes in economics, policy, and social science to answer the above questions. Through a systematic review of selected papers and potential follow-up with authors and evaluators, this project will enhance the visibility, understanding, and scrutiny of high-value research, fostering both rigorous and impactful scholarship.

    Contracted Research (CR) This is the main opportunity: a unique chance to contribute to the identification and in-depth evaluation of impactful research themes in economics, policy, and social science. We’re looking for researchers and research users who can commit to a one-off 15–20 hours. CR candidates will:

    • Summarize a research area or theme, its status, and why it may be relevant to global priorities (~4 hours).

      • We are looking for fairly narrow themes. Examples might include:

        • The impact of mental health therapy on well-being in low-income countries.

    We will compensate you for your time at a rate reflecting your experience and skills ($25–$65/hour). This work also has the potential to serve as a “work sample” for future roles at The Unjournal, as it is highly representative of what our field specialists and evaluation managers are commissioned to do.

    We are likely to follow up on your evaluation suggestions. We also may incorporate your writing into our web page and public posts; you can choose whether you want to be publicly acknowledged or remain anonymous.

    Independent Projects (IP)

    We are also inviting applications to do similar work as an “Independent Project” (IP), a parallel opportunity designed for those eager to engage but not interested in working under a contract, or not meeting some of the specific criteria for the Contracted Research role. This involves work similar to that described above.

    If you are accepted to do an IP, we will offer some mentoring and feedback. We will also offer prize rewards/bounties for particularly strong IP work. We will also consider working with professors and academic supervisors on these IP projects, as part of university assignments and dissertations.

    You can apply to the CR and IP positions together; we will automatically consider you for each.

    Get Involved!

    If you are interested in involvement in either the CR or IP side of this project, please let us know through our survey form.

    Organizational roles and responsibilities

    circle-info

    5 Sep 2024: The Unjournal is still looking to build our team and evaluator pool. Please consider the roles below and express your interest herearrow-up-right or contact us at contact@unjournal.org.

    hashtag
    Management committee members

    Activities of those on the management committee may involve a combination of the following (although you can choose your focus):

    • Contributing to the decision-making process regarding research focus, reviewer assignment, and prize distribution.

    • Collaborating with other committee members on the establishment of rules and guidelines, such as determining the metrics for research evaluation and defining the mode of assessment publication.

    • Helping plan The Unjournal’s future path.

    Time commitment: A minimum of 15–20 hours per year.

    Compensation: We have funding for a $57.50 per hour honorarium for the first 20 hours, with possible compensation for additional work. Those who take on evaluation management work will be further compensated (at roughly $300–$450 per paper).

    Who we are looking for: All applicants are welcome. We are especially interested in those involved in global priorities research (and related fields), policy research and practice, open science and meta-science, bibliometrics and scholarly publishing, and any other academic research. We want individuals with a solid interest in The Unjournal project and its goals, and the ability to meet the minimal time commitment. Applying is extremely quick, and those not chosen will be considered for other roles and work going forward.

    hashtag
    Advisory board (AB) members

    Beyond direct roles within The Unjournal, we're building a larger, more passive advisory board to be part of our network, to offer occasional feedback and guidance, and to act as an evaluation manager when needed (see our evaluation workflow).

    There is essentially no minimum time commitment for advisory board members—only opportunities to engage. We sketch some of the expectations in the fold below.

    chevron-rightAdvisory board members: expectations (sketch: Aug. 15, 2023)hashtag

    As an AB member...

    • you agree to be listed on our page as being on the advisory board.

    hashtag
    Field specialists (FS)

    chevron-rightNov. 2023 prioritieshashtag

    We are currently prioritizing bringing in more field specialists to build our teams in a few areas, particularly including:

    • Catastrophic risks, AI governance and safety

    FSs will focus on a particular area of research, policy, or impactful outcome. They will keep track of new or under-considered research with potential for impact and explain and assess the extent to which The Unjournal can add value by commissioning its evaluation. They will "curate" this research and may also serve as evaluation managers for this work.

    circle-check

    Some advisory board members will also be FSs, although some may not (e.g., because they don't have a relevant research focus).

    Time commitment: There is no specific time obligation—only opportunities to engage. We may also consult you occasionally on your areas of expertise. Perhaps 1–4 hours a month is a reasonable starting expectation for people already involved in doing or using research, plus potential additional paid assignments.

    Our incentives and norms document also provides some guidance on the nature of the work and the time involved.

    Compensation: We have put together a preliminary/trial compensation formula; we aim to fairly compensate people for time spent on work done to support The Unjournal, and to provide incentives for suggesting and helping to prioritize research for evaluation. In addition, evaluation management work will be compensated at roughly $300–$450 per project.

    Who we are looking for: For the FS roles, we are seeking active researchers, practitioners, and stakeholders with a strong publication record and/or involvement in research and/or research-linked policy and prioritization processes. For the AB, we also seek people with connections to academic, governmental, or relevant non-profit institutions, and/or involvement in open science, publication, and research evaluation processes. We want people who can offer relevant advice, experience, and guidance, or who can help communicate our goals, processes, and progress.

    Interested? Please fill out our expression-of-interest form (about 3–5 min, using the same form for all roles).

    chevron-rightIf you become a field specialist, what happens next?hashtag

    You will be asked to fill out a short survey form to let us know what fields, topics, and sources of research you would like to "monitor" or dig into to help identify and curate work relevant for Unjournal evaluation, as well as outlining your areas of expertise (the form takes perhaps 5–20 minutes).

    This survey helps us understand when to contact you to ask if you want to be an evaluation manager on a paper we have prioritized for evaluation.

    Guided by this survey form (along with discussions we will have with you, and coordination with the team), we will develop an “assignment” that specifies the area you will cover. We will try to divide the space and not overlap between field specialists. This scope can be as broad or focused as you like.

    chevron-rightField specialist "area teams" hashtag

    We are organizing several teams of field specialists (and management and advisory board members). These teams will hold occasional online meetings (perhaps every 3 months) to discuss research to prioritize, and to help coordinate 'who covers what'. If team members are interested, further discussions, meetings, and seminars might be arranged, but this is very much optional.

    As of 25 Oct 2023, we have put together the following teams (organized around fields and outcomes):

    chevron-right"Monitoring" a research area or source as a field specialisthashtag

    The Unjournal's field specialists choose an area they want to monitor. By this we mean that a field specialist will:

    • Keep an eye on designated sources (e.g., particular working paper series) and fields (or outcomes or area codes), perhaps every month or so; consider new work, dig into archives

    See:

    hashtag
    Contact us

    If you are interested in discussing any of the above in person, please email us (contact@unjournal.org) to arrange a conversation.

    We invite you to fill out this form (the same as that linked above) to leave your contact information and outline which parts of the project interest you.

    Note: These descriptions are under continual refinement; see our policies for more details.

    Plan of action

    Building a "best feasible plan"...

    circle-exclamation

    What is this Unjournal?... See our summary.

    hashtag
    Post-pilot goals

    See the vision and broad plan presented here (and embedded below), updated August 2023.

    hashtag
    Pilot targets

    chevron-rightWhat we need our pilot (~12 months) to demonstratehashtag
    1. We actually "do something."

    2. We can provide credible reviews and ratings that have value as measures of research quality comparable to (or better than) traditional journal systems.

    Updated: Partial update 10 Dec 2022.

    hashtag
    Building a research "unjournal"

    See here for proposed specifics.

    hashtag
    Setup and team

    ✔️

    ✔️/⏳ Define the broad scope of our research interest and key overriding principles. Light-touch, to also be attractive to aligned academics

    ⏳ Build "editorial-board-like" teams with subject or area expertise

    Status: Mostly completed/decided for pilot phase

    hashtag
    Create a set of rules for "submission and management"

    • Which projects enter the review system (relevance, minimal quality, stakeholders, any red lines or "musts")

      • ⏳ See here for a first pass.

    • How projects are to be submitted

    Status: Mostly completed/decided for pilot phase; will review after initial trial

    hashtag
    Rules for reviews/assessments

    • To be done on the chosen open platform (Kotahi/Sciety) unless otherwise infeasible (10 Dec 2022 update)

    • Share, advertise, promote this; have efficient meetings and presentations

      • Establish links to all open-access bibliometric initiatives (to the extent feasible)

    See our guidelines for evaluators.

    Status: Mostly completed/decided for pilot phase; will review after the initial trial

    hashtag
    Further steps

    See our 12-month plan.

    chevron-rightKey next steps (pasted from FTX application)hashtag

    The key elements of the plan:

    Build a "founding committee" of 5–8 experienced and enthusiastic EA-aligned/adjacent researchers at EA orgs, research academics, and practitioners (e.g., draw from speakers at recent EA Global meetings).

    1. Host a meeting (and shared collaboration space/document), to come to a consensus/set of practical principles.

    chevron-rightAside: "Academic-level" work for EA research orgs (building on this post at onscienceandacademia.org)hashtag

    The approach below is largely integrated into the Unjournal proposal, but this is a suggestion for how organizations like RP might consider how to get feedback and boost credibility:

    1. Host the article (or dynamic research project or 'registered report') on OSF or another place allowing time stamping & DOIs (see my resources list in Airtable for a start)

    Status: We are still working with Google Docs and building an external survey interface. We plan to integrate this with PubPub over the coming months (August/Sept. 2023)

    Independent evaluations (trial)

    circle-info

    Disambiguation: The Unjournal focuses on commissioning expert evaluations, guided by an ‘evaluation manager’ and compensating people for their work. (See the outline of our main process). We plan to continue to focus on that mode. Below we sketch an additional parallel but separate approach.

    Note on other versions of this content.

  • Keeping track of our progress and keeping everyone in the loop

    • Help produce and manage our external (and some internal) long-form communications

    • Help produce and refine explanations, arguments, and responses

    • Help provide reports to relevant stakeholders and communities

  • Making our rules and processes clear to the people we work with

    • Explain our procedures and policies for research submission, evaluation, and synthesis; make our systems easy to understand

    • Help us build flexible communications templates for working with research evaluators, authors, and others

  • Other communications, presentations, and dissemination

    • Write and organize content for grants applications, partnership requests, advertising, hiring, and more

    • Potentially: compose non-technical write-ups of Unjournal evaluation synthesis content (in line with interest and ability)

  • Familiarity with current conversations and research on global priorities within government and policy circles, effective altruism, and relevant academic fields

  • Willingness to learn and use IT, project management, data management, web design, and text-parsing tools (such as those mentioned below), with the aid of GPT/AI chat

  • Experience with promotion and PR campaigns and working with journalists and bloggers

    This role is contract-based and supports remote and international applicants. We can contract people living in most countries, but we cannot serve as an immigration sponsor.

    Express interest herearrow-up-right
    Express your interest herearrow-up-right

    Long-term trends and demographics


    The impact of cage-free egg regulation on animal welfare.

  • Public attitudes towards AI safety regulation.

  • Identify a selection of papers in this area that might be high-value for UJ evaluation (~3 hours).

    • Choose at least four of these from among NBER/"top-10 working paper" series (or from work submitted to the UJ – we can share – or from work where the author has expressed interest to you).

  • For a single paper, or a small set of these papers (or projects) (~6 hours)

    • Read the paper fairly carefully and summarize it, explaining why it is particularly relevant.

    • Discuss one or more aspects of the paper that need further scrutiny or evaluation.

    • Identify 3 possible evaluators, and explain why they might be particularly relevant to evaluate this work. (Give a few sentences we could use in an email to these evaluators).

    • Possible follow-up task: email and correspond with the authors and evaluators (~3 hours).


    Reinstein's story in brief

    davidreinstein.orgarrow-up-right

    I was in academia for about 20 years (PhD Economics, UC Berkeley; Lecturer, University of Essex; Senior Lecturer, University of Exeter). I saw how the journal system was broken.

    • Academics constantly complain about it (but don't do anything to improve it).

    • Most conversations are not about research, but about 'who got into what journal' and 'tricks for getting your paper into journals'.

    • Open science and replicability are great, and dynamic documents make research a lot more transparent and readable. But these goals and methods are very hard to apply within the traditional journal system and its 'PDF prisons'.

    Now I'm working outside academia and can stick my neck out. I have the opportunity to help fix the system. I work with research organizations and large philanthropists involved with effective altruism and global priorities. They care about the results of research in areas that are relevant to global priorities. They want research to be reliable, robust, reasoning-transparent, and well-communicated. Bringing them into the equation can change the game.

    Helping monitor and prioritize research for The Unjournal to evaluate (i.e., acting as a field specialist; see further discussion below). Acting as an evaluation manager for research in your area.

  • you have the option (but not the expectation or requirement) to join our Slack, and to check in once in a while.
  • you will be looped in for your input on some decisions surrounding The Unjournal's policies and direction. Such communications might occur once per month, and you are not obligated to respond.

  • you may be invited to occasional video meetings (again optional).

  • you are “in our system” and we may consult you for other work.

  • you will be compensated for anything that requires a substantial amount of your time that does not overlap with your regular work.

  • Animal welfare: markets, attitudes

    As well as:

    • Quantitative political science (voting, lobbying, attitudes)

    • Social impact of AI/emerging technologies

    • Macro/growth, finance, public finance

    • Long-term trends and demographics

    Within your area, you keep a record of the research that seems relevant (and why, and what particularly needs evaluation, etc.) and enter it in our database. (Alternatively, you can pass your notes to us for recording.)

    We will compensate you for the time you spend on this process (details TBD), particularly to the extent that the time you spend does not contribute to your other work or research. (See our incentives and norms trial herearrow-up-right.)

  • Development economics (not health-focused)
  • Global health and development "health-related" outcomes and interventions in LMIC

  • Economics, welfare, and governance

  • Psychology, behavioral science, and attitudes

  • Innovation and meta-science

  • Environmental economics

  • Other teams are being organized (and we are particularly recruiting field specialists with interests and expertise in these areas):

    • Catastrophic risks, AI governance and safety

    • Animal welfare: markets, attitudes

    • Quantitative political science (voting, lobbying, attitudes)

    • The social impact of AI/emerging technologies

    • Macro/growth, finance, public finance

    • Long-term trends and demographics

    Let us know what you have been able to cover; if you need to reduce the scope, we can adjust it.

  • Suggest and input work into our database: papers, projects, and research that seem relevant for The Unjournal to evaluate. Give some quick ‘prioritization ratings’.

  • If you have time, give a brief note on why this work is relevant for UJ (impactful, credible, timely, openly presented, policy-relevant, etc.) and what areas need particular evaluation and feedback

  • Evaluation management work
  • We identify important work that informs global priorities.

  • We boost work in innovative, transparent, and replicable formats (especially dynamic documents).

  • Authors engage with our process and find it useful.

  • (As a push) Universities, grantmakers, and other arbiters assign value to Unjournal ratings.

  • How reviewers are to be assigned and compensated

    Harness and encourage additional tools for quality assessment, considering cross-links to prediction markets/Metaculus, to coin-based 'ResearchHub', etc.

    Post and present our consensus (coming out of this meeting) on key fora. After a brief "follow-up period" (~1 week), consider adjusting the above consensus plan in light of feedback, and repost (and move forward).

  • Set up the basic platforms for posting and administering reviews and evaluations and offering curated links and categorizations of papers and projects. Note: I am strongly leaning towards https://prereview.org/ as the main platform, which has indicated willingness to give us a flexible ‘experimental space’ Update: Kotahi/Sciety seems a more flexible solution.

  • Reach out to researchers in relevant areas and organizations and ask them to "submit" their work for "feedback and potential positive evaluations and recognition," and for a chance at a prize. The Unjournal will not be an exclusive outlet. Researchers are free to also submit the same work to 'traditional journals' at any point. However, whether submitted elsewhere or not, papers accepted by The Unjournal must be publicly hosted, with a DOI. Ideally the whole project is maintained and updated, with all materials, in a single location.

    21 Sep 2022 status: 1–3 mostly completed. We have a good working and management group. We decided on a platform and are configuring it, and we have an interim workaround. We've reached out to researchers and organizations and received some good responses, but we need to find more platforms to disseminate and advertise this. We've identified and are engaging with four papers for the initial piloting. We aim to put out a larger prize-driven call soon and take in about 10 more papers or projects.

  • Link this to PREreviewarrow-up-right (or similar tool or site) to solicit feedback and evaluation without requiring exclusive publication rights (again, see Airtable listarrow-up-right)

  • Directly solicit feedback from EA-adjacent partners in academia and other EA-research orgs

  • Next steps towards this approach:

    • Build our own systems (assign "editors") to do this without bias and with incentives

    • Build standard metrics for interpreting these reviews (possibly incorporating prediction markets)

    • Encourage them to leave their feedback through the PREreview or another platform

    Also: Commit to publish academic reviews or share in our internal group for further evaluation and reassessment or benchmarking of the ‘PREreview’ type reviews above (perhaps taking the FreeOurKnowledge pledge relating to thisarrow-up-right).

    hashtag
    Initiative: ‘independent evaluations’

    The Unjournalarrow-up-right is seeking academics, researchers, and students to submit structured evaluations of the most impactful research emerging in the social sciences. Strong evaluations will be posted or linked on our PubPub communityarrow-up-right, offering readers a perspective on the implications, strengths, and limitations of the research. These evaluations can be submitted using this formarrow-up-right for academic-targeted research or this formarrow-up-right for more applied work; evaluators can publish their name or maintain anonymity; we also welcome collaborative evaluation work. We will facilitate, promote, and encourage these evaluations in several ways, described below.

    hashtag
    Who should do these evaluations?

    We are particularly looking for people with research training, experience, and expertise in quantitative social science and statistics, including cost-benefit modeling and impact evaluation. This could include professors, other academic faculty, postdocs, researchers outside of academia, quantitative consultants and modelers, PhD students, and students aiming towards PhD-level work (pre-docs, research MSc students, etc.). But anyone is welcome to give this a try — when in doubt, please go for it.

    We are also happy to support collaborations and group evaluations. There is a good track record for this — see: “What is a PREreview Live Review?arrow-up-right”, ASAPBio’s Crowd preprint reviewarrow-up-right, I4replication.orgarrow-up-right and repliCATSarrow-up-right for examples in this vein. We may also host live events and/or facilitate asynchronous collaboration on evaluations.

    Instructors/PhD, MRes, Predoc programs: We are also keen to work with students and professors to integrate ‘independent evaluation assignments’ (aka ‘learn to do peer reviews’) into research training.

    hashtag
    Why should you do an evaluation?

    Your work will support The Unjournal’s core mission — improving impactful research through journal-independent public evaluation. In addition, you’ll help research users (policymakers, funders, NGOs, fellow researchers) by providing high-quality, detailed evaluations that rate and discuss the strengths, limitations, and implications of research.

    Doing an independent evaluation can also help you. We aim to provide feedback to help you become a better researcher and reviewer. We’ll also give prizes for the strongest evaluations. Lastly, writing evaluations will help you build a portfolio with The Unjournal, making it more likely we will commission you for paid evaluation work in the future.

    hashtag
    Which research?

    We focus on rigorous, globally impactful quantitative social science and policy-relevant research. (See “What specific areas do we cover?”arrow-up-right for details.) We’re especially eager to receive independent evaluations of:

    1. Research we publicly prioritize: see our public list of researcharrow-up-right we've prioritized or evaluated. (Also...)

    2. Research we previously evaluated (see public listarrow-up-right, as well as https://unjournal.pubpub.org/arrow-up-right )

    3. Work that other people and organizations suggest as having high potential for impact/value of information (also see)

    You can also suggest research yourself herearrow-up-right and then do an independent evaluation of it.

    hashtag
    What sort of ‘evaluations’ and what formats?

    We’re looking for careful methodological/technical evaluations that focus on research credibility, impact, and usefulness. We want evaluators to dig into the weeds, particularly in areas where they have aptitude and expertise. See our guidelinesarrow-up-right.

    The Unjournal’s structured evaluation forms: We encourage evaluators to do these using either:

    1. Our Academic (main) stream formarrow-up-right: if you are evaluating research aimed at an academic journal.

    2. Our ‘Applied stream’ formarrow-up-right: if you are evaluating research that is probably not aimed at an academic journal. This may include somewhat less technical work, such as reports from policy organizations and think tanks, or impact assessments and cost-benefit analyses.

    circle-info

    See here for guidance on using these forms for independent evaluationsarrow-up-right

    Other public evaluation platforms: We are also open to engaging with evaluations done on existing public evaluation platforms such as PREreview.orgarrow-up-right. Evaluators: If you prefer to use another platform, please let us know about your evaluation using one of the forms above. If you like, you can leave most of our fields blank, and provide a link to your evaluation on the other public platform.

    Academic (~PhD) assignments and projects: We are also looking to build ties with research-intensive university programs; we can help you structure academic assignments and provide external reinforcement and feedback. Professors, instructors, and PhD students: please contact us (contact@unjournal.orgenvelope).

    hashtag
    How will The Unjournal engage?

    hashtag
    1. Posting and signal-boosting

    We will encourage all these independent evaluations to be publicly hosted, and will share links to these. We will further promote the strongest independent evaluations, potentially re-hosting them on our platforms (such as unjournal.pubpub.orgarrow-up-right)

    However, when we host or link these, we will keep them clearly separated and signposted as distinct from the commissioned evaluations; independent evaluations will not be considered official, and their ratings won’t be included in our ‘main data’arrow-up-right (see dashboard herearrow-up-right; see discussion).

    hashtag
    2. Offering incentives

    Bounties: We will offer prizes for the ‘most valuable independent evaluations’.

    As a start, after the first eight quality submissions (or by Jan. 1 2025, whichever comes later), we will award a prize of $500 to the most valuable evaluation.

    Further details tbd. As a reference...

    All evaluation submissions will be eligible for these prizes and “grandfathered in” to any prizes announced later. We will announce and promote the prize winners (unless they opt for anonymity).

    Evaluator pool: People who submit evaluations can elect to join our evaluator pool. We will consider and (time-permitting) internally rate these evaluations. People who do the strongest evaluations in our focal areas are likely to be commissioned as paid evaluators for The Unjournal.

    We’re also moving towards a two-tiered base compensation for evaluations. We will offer a higher rate to people who can demonstrate previous strong review/evaluation work. These independent evaluations will count towards this ‘portfolio’.

    hashtag
    3. Providing materials, resources and guidance/feedback

    Our PubPub pagearrow-up-right provides examples of strong work, including the prize-winning evaluationsarrow-up-right.

    We will curate guidelines and learning materials from relevant fields and from applied work and impact-evaluation. For a start, see "Conventional guidelines for referee reports" in our knowledge base.arrow-up-right We plan to build and curate more of this...

    hashtag
    4. Partnering with academic institutions

    We are reaching out to PhD programs and pre-PhD research-focused programs. Some curricula already involve “mock referee report” assignments. We hope professors will encourage their students to do these through our platform. In return, we’ll offer the incentives and promotion mentioned above, as well as resources, guidance, and some further feedback.

    hashtag
    How does this benefit The Unjournal and our mission?

    1. Crowdsourced feedback can add value in itself; encouraging this can enable some public evaluation and discussion of work that The Unjournal doesn’t have the bandwidth to cover

    2. Improving our evaluator pool and evaluation standards in general.

      1. Students and ECRs can practice and (if possible) get feedback on independent evaluations

      2. They can demonstrate this ability publicly, enabling us to recruit and commission the strongest evaluators

    3. Examples will help us build guidelines, resources, and insights into ‘what makes an evaluation useful’.

    4. This provides us with opportunities to engage with academia, especially in PhD programs and research-focused instruction.

    hashtag
    About The Unjournal (unfold)

    The Unjournalarrow-up-right commissions public evaluations of impactful research in quantitative social sciences fields. We are an alternative and a supplement to traditional academic peer-reviewed journals – separating evaluation from journals unlocks a range of benefitsarrow-up-right. We ask expert evaluators to write detailed, constructive, critical reports. We also solicit a set of structured ratings focused on research credibility, methodology, careful and calibrated presentation of evidence, reasoning transparency, replicability, relevance to global priorities, and usefulness for practitioners (including funders, project directors, and policymakers who rely on this research).[2] While we have mainly targeted impactful research from academia, our ‘applied stream’arrow-up-right covers impactful work that uses formal quantitative methods but is not aimed at academic journals. So far, we’ve commissioned about 50 evaluations of 24 papers, and published these evaluation packages on our PubPub communityarrow-up-right, linked to academic search engines and bibliometrics.


    Our team

    hashtag
    See also: Governance of The Unjournal

    The Unjournal was founded by David Reinsteinarrow-up-right , who maintains this wiki/GitBook and other resources.

    circle-info

    See our "Team pagearrow-up-right" at Unjournal.org for an updated profile of our team members

    hashtag
    Management Committee

    (Note on terminology)

    See description under roles.

    • David Reinsteinarrow-up-right, Founder and Co-director

    • Gavin Taylorarrow-up-right, Interdisciplinary Researcher at IGDOREarrow-up-right; Co-director

    • Ryan Briggsarrow-up-right, Social Scientist and Associate Professor in the Guelph Institute of Development Studies and Department of Political Science at the University of Guelph, Canada

    hashtag
    Advisory board

    See description under rolesarrow-up-right.

    Sam Abbottarrow-up-right, Infectious Disease Researcher, London School of Hygiene and Tropical Medicine

    Jonathan Bermanarrow-up-right, Associate Professor of Marketing, London Business School

    Rosie Bettlearrow-up-right, Applied Researcher (Global Health & Development) at Founder's Pledge

    Gary Charnessarrow-up-right, Professor of Economics, UC Santa Barbara

    Daniela Cialfiarrow-up-right, Post-Doctoral Researcher in the Department of Quantitative Methods and Economic Theory at the University of Chieti (Italy)

    Jordan Dworkinarrow-up-right, Metascience Program Lead, Federation of American Scientists

    Jake Eatonarrow-up-right, Managing Editor at Asterisk Magarrow-up-right: writing and research on global health, development, and nutrition

    Andrew Gelmanarrow-up-right, Professor of Statistics and Political Science at Columbia University (New York)

    Anca Haneaarrow-up-right, Associate Professor, University of Melbourne (Australia): expert judgment, biosciences, applied probability, uncertainty quantification

    Alexander Herwixarrow-up-right, Late-Stage PhD Student in Information Systems at the University of Cologne, Germany

    Conor Hughesarrow-up-right, PhD Student, Applied Economics, University of Minnesota

    Jana Lasserarrow-up-right, Postdoctoral Researcher, Institute for Interactive Systems and Data Science at Graz University of Technology (Austria)

    Nicolas Treicharrow-up-right, Associate Researcher, INRAE, Member, Toulouse School of Economics (France)

    Michael Wiebearrow-up-right, Data Scientist, Economist Consultant; PhD University of British Columbia (Economics)

    hashtag
    Field Specialists

    The table below shows all the members of our team (including field specialists) taking on a research-monitoring role (see here for a description of this role).

    hashtag
    Staff, contractors, and consultants

    , Research Specialist: Data science, metascience, aggregation of expert judgment

    , Operations generalist

    , Generalist assistance

    , Communications (academic research/policy)

    , Communications and copy-editing

    , Communications and consulting

    , technical software support

    Red Bermejo, Mikee Mercado, Jenny Siers – consulting (through Anti-Entropyarrow-up-right) on strategy, marketing, and task management tools

    We are a member of Knowledge Futuresarrow-up-right. They are working with us to update PubPub and incorporate new features (editorial management, evaluation tools, etc.) that will be particularly useful to The Unjournal and other members.

    hashtag
    Other people and initiatives we are in touch with

    chevron-rightSubstantial advice, consultation, collaborative discussionshashtag
    • Abel Brodeur, Founder/chair of the Institute for Replicationarrow-up-right

    • The repliCATS projectarrow-up-right

    chevron-rightSome other people we have consulted/communicating, details, other noteshashtag
    • Cooper Smout, Free Our Knowledge (FoK): collaboration possibilities through their pledges, and through an open access journal Cooper is putting together, which The Unjournal could feed into, for researchers needing a ‘journal with an impact factor’

    See also List of people consultedarrow-up-right (in ACX grant proposal).

    hashtag

    hashtag

    Explanations & outreach

    Several expositions for different audiences, fleshing out ideas and plans

    hashtag
    TLDR: See In a nutshell

    hashtag
    Podcasts, presentations, and video

    circle-info

    See/subscribe to our YouTube channelarrow-up-right

    hashtag
    Journal independent evaluation and The Unjournal

    hashtag
    EA Anywhere (Youtube) – bridging the gap between EA and academia

    • See slide deck (Link: bit.ly/unjourrnalpresentarrow-up-right; offers comment access)

    hashtag
    ReproducibiliTea podcast

    hashtag
    Slide decks

    hashtag
    Presentation for EA Anywhere, online event, 5 Nov. 2023 1-2pm ET

    (Link: bit.ly/unjourrnalpresentarrow-up-right; offers comment access)

    chevron-rightEarlier slide deckshashtag

    July 2023: The slide deck below was last updated in late 2022 and needs some revision. Nonetheless, it illustrates many of the key points that remain relevant.

    Nov 2022: Version targeted towards OSF/Open Science

    hashtag
    "Slaying the journals": Google doc

    Earlier discussion document, aimed at EA/global priorities, academic, and open-science audiences

    hashtag
    "

    • 2021 A shorter outline posted on

    hashtag
    EA forum posts

    Press releases

    hashtag
    Impactful research prize winners

    Impactful Research Prize Winners

    hashtag
    SFF Grant

    Outreach texts

    An important part of making this a success will be to spread the word: to get positive attention for this project, to get important players on board, to leverage network externalities, and to change the equilibrium. We are also looking for specific feedback and suggestions from "mainstream academics" in Economics, Psychology, and policy/program evaluation, as well as from the Open Science and EA communities.

    hashtag
    Key points to convey

    See

    hashtag
    As social media blurbs

    chevron-rightGood news (funding)hashtag

    The "Unjournal" is happening, thanks to ACX and the LTFF! We will be organizing and funding:

    • Journal-independent peer review and rating of projects (not just "pdf-imprisoned papers"),

    • focusing on Economics, Psychology, and Impact Evaluation research,

    • relevant to the world's most pressing problems and most effective solutions.

    Target: Academics, not necessarily EA aligned. But I don’t think this is deceptive because the funders should give a tipoff to anyone who digs, and ultimately The Unjournal might also go beyond EA-relevant stuff.

    Tone: Factual, positive

    chevron-rightJournal rents and hoopshashtag

    Do you love for-profit journals

    • taking your labor and selling it back to your university library?

    • making you jump through arcane hoops to "format your article"?

    • forcing you through inscrutable sign-in processes?

    Then please don't bother with The Unjournal.

    Target: Academics, not necessarily EA aligned, who are frustrated with this stuff.

    Tone: Sarcastic, irreverent, trying to be funny

    chevron-rightBreaking out of the bad equilibriumhashtag

    Journals: Rent-extracting, inefficient, pdf-prisons, gamesmanship. But no researcher can quit them.

    Until The Unjournal: Rate projects, shared feedback, pay reviewers.

    No trees axed to print the latest "Journal of Fancy Manuscripts." We just evaluate the most impactful work.

    Target, Tone: Same as above, but less sarcastic, using language from Economics … maybe also appealing to library and university admin people?

    chevron-right(Longer version of above)hashtag

    Traditional academic journals: Rent-extracting, inefficient, delaying innovation. But no researcher or university can quit them.

    Or maybe we do have some escape bridges. We can try to Unjournal. Projects get rated, feedback gets shared, reviewers get paid. No trees get chopped down to print the latest "Journal of Fancy Manuscripts." We are starting small, but it only takes one domino.

    chevron-rightDisgruntled researchers, the wasteful journal gamehashtag

    Your paper got rejected after two glowing reviews? Up for tenure? How many more journals will you have to submit it to? Will you have to make the same points all over again? Or will the new referees tell you the exact opposite of the last ones?

    Don't worry, there's a new game in town: The Unjournal. Submit your work. Get it reviewed and rated. Get public feedback. Move on . . . or continue to improve your project and submit it wherever else you like.*

    *And we are not like the "Berkeley Electronic Press". We will never sell out, because we have nothing to sell.

    Aim, tone: Similar to the above

    chevron-rightProjects not (just) papershashtag

    Tired of the 'pdf prison'? Got...

    • a great web interface for your project, with expandable explanations,

    • an R-markdown dynamic document, with interactive tools, data, code,

    • or your software or data is the project.

    Can't submit it to a journal but need feedback and credible ratings? Try The Unjournal.

    Target: More open-science and tech-savvy people

    chevron-rightPeer reviewers should get paid and have their feedback matterhashtag

    Referee requests piling up? You'd better write brilliant reviews for that whopping $0, so the author can be annoyed at you before your comments disappear into the ether. Or you can help The Unjournal, where you get paid for your work, and reviews become part of the conversation.

    Aim tone: similar to 2–3

    chevron-rightResearch should target global prioritieshashtag

    Social science research:

    • builds methods of inferring evidence from data;

    • builds clear logical arguments;

    • helps us understand behavior, markets, and society; and

    • informs "policy" and decision making . . . but for whom and for what goal?

    The US government and traditional NGOs are often the key audience (and funders). "It's easier to publish about US data and US policy," they say. But most academics think more broadly than that. And Economics as a field has historically aimed at "the greatest social good." The Unjournal will prioritize research that informs the most effective interventions and global priorities, for humanity (and animals) now and in the future.

    Target: EAs and EA-aligned researchers, researchers who might be "converted"

    Tone: Straightforward, idealistic

    chevron-rightEA organizations/researchers need feedback and credibilityhashtag

    You are a researcher at an organization trying to find the most effective ways to improve the world, reduce suffering, prevent catastrophic risks, and improve the future of humanity. You, your team, your funders, and the policymakers you want to influence . . . they need to know if your methods and arguments are strong, and if your evidence is believable. It would be great if academic experts could give their honest feedback and evaluation. But who will evaluate your best work, and how will they make this credible? Maybe The Unjournal can help.

    Target: Researchers and research-related ops people at EA and EA-adjacent orgs. Perhaps OP in particular.

    Tone: Casual but straightforward

    hashtag
    How and where to promote and share

    chevron-rightPitch to ACX (and LTFF) mediahashtag
    • ACX will announce this; I shared some text:

    The Unjournal is in large part about shifting the equilibrium in academia/research. As I said in the application, I think most academics and researchers are happy and ready for this change but there's a coordination problem to resolve. (Everyone thinks "no one else will get on this boat," even though everyone agrees it's a better boat.) I would love to let ACX readers (especially those in research and academia) know there's a "new game in town." Some further key points (please let me know if you think these can be stated better):

    • The project space is unjournal.org, which I'd love to share with the public ... to make it easy, it can be announced as "bit.ly/eaunjournalarrow-up-right" as in "bitly dot com EA unjournal"... and everyone should let me know if they want editor access to the gitbook; also, I made a quick 'open comment space' in the Gdoc HEREarrow-up-right.

    • I'm looking for feedback and for people interested in being part of this, and for 'nominations' of who might be interested (in championing this, offering great ideas, being part of the committee)

    • We will put together a committee to build some consensus on a set of workable rules and standards (especially for "how to choose referees," "what metrics should they report," and "how to define the scope of EA-relevant work to consider"). But we won't "hold meetings forever"; we want to build an MVP soon.

    • I think this could be a big win for EA and RP "getting more relevant research," for improving academia (and ultimately replacing the outdated system of traditional journals), and for building stronger ties between the two groups.

    • Researchers should know:

      • We will pay reviewers to offer feedback, assessment, and metrics, and reviews will be public (but reviewers might be anonymous -- this is a discussion point).

      • We will offer substantial cash prizes for the best projects/papers, and will likely ask the winners to present their work at an online seminar

    • Post on ACX substack

    chevron-rightSocial media/forums, etc (see Airtable 'media_roll')hashtag

    Social media

    1. Twitter: Academia (esp. Econ, Psych, Global Health), Open science, EA

    2. Facebook

    EA Forum post (and maybe AMA?)

    EA orgs

    Open science orgs (OSF, BITSS, ...)

    Academic Economics (& other fields) boards/conferences/groups?

    Universities/groupings of universities

    Slack groups

    • Global EA

    • EA Psychology

    • Open science MooC?

    hashtag

    Updates (earlier)

    circle-info

    22 Aug 2024: we will be moving our latest updates to our

    hashtag
    March 25 2024: Workshop: Innovations in Research Evaluation, Replicability, and Impact

    Research evaluation is changing: New approaches go beyond the traditional journal model, promoting transparency, replicability, open science, open access, and global impact. You can be a part of this.

    Join us on March 25 for an interactive workshop, featuring presentations from Macie Daley (Center for Open Science), David Reinsteinarrow-up-right (The Unjournal), Gary Charnessarrow-up-right (UC Santa Barbara), and The Unjournal’s Impactful Research Prize and Evaluator Prize winners. Breakout discussions, Q&A, and interactive feedback sessions will consider innovations in open research evaluation, registered revisions, research impact, and open science methods and career opportunities.

    The event will be held fully online on Zoom, on March 25 from 9:00–11:30 AM (EST) and 9:30 PM–midnight (EST) to accommodate a range of time zones. UTC: 25 March 1:00–3:30 pm and 26 March 1:30–4:00 am. The event is timetabled: feel free to participate in any part you wish.

    See the event page herearrow-up-right for all details, and to register.

    Related articles and work

    We are not the only ones working and advocating in this space. For a small tip of the iceberg...

    hashtag
    Evidence base

    • The effect of publishing peer review reports on referee behavior in five scholarly journalsarrow-up-right

    • Improving Peer Review in Economics: Stocktaking and Proposalsarrow-up-right

    • What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economicsarrow-up-right

    hashtag

    hashtag


  • Kris Gulatiarrow-up-right, Economics PhD student at the University of California, Merced

  • Hansika Kapoorarrow-up-right, Research Author at the Department of Psychology, Monk Prayogshalaarrow-up-right (India)

  • Tanya O'Garraarrow-up-right, Senior Research Fellow, Institute of Environment & Sustainability, Lee Kuan Yew School of Public Policy, National University of Singapore

  • Emmanuel Orkoharrow-up-right, Research Scientist (fellow) at North-West Universityarrow-up-right (South Africa)

  • Anirudh Tagatarrow-up-right, Research Author at the Department of Economics at Monk Prayogshalaarrow-up-right (India)

  • Eva Vivaltarrow-up-right, Assistant Professor in the Department of Economics at the University of Toronto

  • Other academic and policy economists, such as Julian Jamisonarrow-up-right, Todd Kaplanarrow-up-right, Kate Rockettarrow-up-right, David Rhys-Bernardarrow-up-right, David Roodmanarrow-up-right, and Anna Dreber Almenbergarrow-up-right

  • Cooper Smout, head of https://freeourknowledge.org/arrow-up-right

  • Brian Nosekarrow-up-right, Center for Open Science

  • Ted Miguelarrow-up-right, Faculty Director, Berkeley Initiative for Transparency in the Social Sciences (BITSS)

  • Daniela Saderi, PREreviewarrow-up-right

  • Yonatan Calearrow-up-right, who helped me put this proposal together through asking a range of challenging questions and offering his feedback

  • Daniel Lakensarrow-up-right, Experimental Psychologist at the Human-Technology Interaction group at Eindhoven University of Technology (Netherlands), has also completed research with the Open Science Collaboration and the Peer Reviewers’ Openness Initiative

  • Participants in the GPI seminar luncheon
  • Paolo Crosetto (Experimental Economics, French National Research Institute for Agriculture, Food and Environment) https://paolocrosetto.wordpress.com/arrow-up-right

  • Cecilia Tilli, Foundation to Prevent Antibiotics Resistance and EA research advocate

  • Sergey Frolov (Physicist), Prof. J.-S. Caux, Physicist and head of https://scipost.org/arrow-up-right

  • Peter Slattery, Behaviourworks Australia

  • Alex Barnes, Business Systems Analyst, https://eahub.org/profile/alex-barnes/arrow-up-right

  • Paola Masuzzo of IGDORE (biologist and advocate of open science)

  • William Sleegers (Psychologist and Data Scientist, Rethink Priorities)

  • Nathan Young https://eahub.org/profile/nathan-young/arrow-up-right; considering connecting The Unjournal to Metaculus predictions

  • Edo Arad https://eahub.org/profile/edo-arad/arrow-up-right (mathematician and EA research advocate)

  • Hamish Huggard (Data science, ‘literature maps’)

    Jordan Pietersarrow-up-right
    Kynan Behanarrow-up-right
    Laura Sofia-Castroarrow-up-right
    Adam Steinbergarrow-up-right
    Toby Weedarrow-up-right
    Nesim Sisaarrow-up-right







    hashtag
    Jan 2024: Impactful Research and Evaluation Prizes winners announced

    Impactful Research Prize Winners

    hashtag
    Aug. 30, 2023: "Pilot's done, what has been won (and learned)?"

    hashtag
    Pilot = completed!

    With the completed set of evaluations of "Do Celebrity Endorsements Matter? A Twitter Experiment Promoting Vaccination in Indonesia"arrow-up-right and "The Governance of Non-Profits and Their Social Impact: Evidence from a Randomized Program in Healthcare in DRCarrow-up-right,” our pilot is complete:

    • 10 research papers evaluated

    • 21 evaluations

    • 5 author responses

    You can see this output most concisely in our PubPub collection herearrow-up-right (evaluations are listed as "supplements," at least for the time being).

    For a continuously updated overview of our process, including our evaluation metrics, see our "data journalism" notebook hosted herearrow-up-right.

    Just a peek at the content you can find in our lovely data notebook! Mind the interactive hover-overs etc.

    Remember, we assign individual DOIs to all of these outputs (evaluation, responses, manager syntheses) and aim to get the evaluation data into all bibliometrics and scholarly databases. So far, Google Scholar has picked up one of our outputs.arrow-up-right (The Google Scholar algorithm is a bit opaque—your tips are welcome.)

    hashtag
    Following up on the pilot: prizes and seminars

    We will make decisions and award our pilot Impactful Research Prizearrow-up-right and evaluator prizes soon (aiming for the end of September). The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise). The choices will largely be driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.

    Following this, we are considering holding an online workshop (that will include a ceremony for the awarding of prizes). Authors and (non-anonymous) evaluators will be invited to discuss their work and take questions. We may also hold an open discussion and Q&A on The Unjournal and our approach. We aim to partner with other organizations in academia and in the impactful-research and open-science spaces. If this goes well, we may make it the start of a regular thing.

    circle-info

    "Impactful research online seminar": If you or your organization would be interested in being part of such an event, please do reach out; we are looking for further partners. We will announce the details of this event once these are finalized.

    hashtag
    Other planned follow-ups from the pilot

    Our pilot yielded a rich set of data and learning by doing. We plan to make use of this, including . . .

    • synthesizing and reporting on evaluators' and authors' comments on our process; adapting these to make it better;

    • analyzing the evaluation metrics for patterns, potential biases, and reliability measures;

    • "aggregating expert judgment" from these metrics;

    • tracking future outcomes (traditional publications, citations, replications, etc.) to benchmark the metrics against; and

    • drawing insights from the evaluation content, and then communicating these (to policymakers, etc.).

    hashtag
    The big scale-up

    hashtag
    Evaluating more research: prioritization

    We continue to develop processes and policies around "which research to prioritize." For example, we are discussing whether we should set targets for different fields, for related outcome "cause categories," and for research sources. We intend to open up this discussion to the public to bring in a range of perspectives, experience, and expertise. We are working towards a grounded framework and a systematic process to make these decisions. See our expanding notes, discussion, and links on "what is global-priorities relevant research?arrow-up-right"

    We are still inviting applications for the paid standalone projectarrow-up-right helping us build these frameworks and processes. Our next steps:

    1. Building our frameworks and principles for prioritizing research to be evaluated, a coherent approach to implementation, and a process for weighing and reassessing these choices. We will incorporate previous approaches and a range of feedback. For a window into our thinking so far, see our "high-level considerationsarrow-up-right" and our practical prioritization concerns and goalsarrow-up-right.

    2. Building research-scoping teams of field specialists. These will consider agendas in different fields, subfields, and methods (psychology, RCT-linked development economics, etc.) and for different topics and outcomes (global health, attitudes towards animal welfare, social consequences of AI, etc.) We begin to lay out possible teams and discussions here arrow-up-right(the linked discussion spaces are private for now, but we aim to make things public whenever it's feasible). These "field teams" will

      • discuss and report on the state of research in their areas, including where and when relevant research is posted publicly, and in what state;

      • assess the potential for Unjournal evaluation of this work, as well as when and how we should evaluate it, considering potential variations from our basic approach; and

      • determine how to prioritize work in this area for evaluation, reporting general guidelines and principles, and informing the aforementioned frameworks.

      Most concretely, the field teams will divide up the space of research work to be scoped and prioritized among the members of the teams.

    hashtag
    Growing The Unjournal Team

    Our previous call for field specialists is still active. We received a lot of great applications and strong interest, and we plan to send out invitations soon. But the door is still open to express interest!

    New members of our team: Welcome Rosie Bettle (Founder's Pledge)arrow-up-right to our advisory board, as a field specialist.

    hashtag
    Improving the evaluation process and metrics

    As part of our scale-up (and in conjunction with supporting PubPubarrow-up-right on their redesigned platform), we're hoping to improve our evaluation procedure and metrics. We want to make these clearer to evaluators, more reliable and consistent, and more useful and informative to policymakers and other researchers (including meta-analysts).

    We don't want to reinvent the wheel (unless we can make it a bit more round). We will be informed by previous work, such as:

    • existing research into the research evaluation process, and on expert judgment elicitation and aggregation;

    • practices from projects like RepliCATS/IDEAS, PREreview BITSS Open Policy Analysis, the “Four validities” in research design, etc.; and

    • metrics used (e.g., "risk of bias") in systematic reviews and meta-analyses as well as databases such as 3ie's Development Evidence Portalarrow-up-right.

    Of course, our context and goals are somewhat distinct from the initiatives above.

    We also aim to consult potential users of our evaluations as to which metrics they would find most helpful.

    (A semi-aside: The choice of metrics and emphases could also empower efforts to encourage researchers to report policy-relevant parameters more consistently.)

    We aim to bring a range of researchers and practitioners into these questions, as well as engaging in public discussion. Please reach out.

    hashtag
    "Spilling tea"

    Yes, I was on a podcast, but I still put my trousers on one arm at a time, just like everyone else! Thanks to Will Ngiam for inviting me (David Reinstein) on "ReproducibiliTeaarrow-up-right" to talk about "Revolutionizing Scientific Publishing" (or maybe "evolutionizing" ... if that's a word?). I think I did a decent job of making the case for The Unjournal, in some detail. Also, listen to find out what to do if you are trapped in a dystopian skating rink! (And find out what this has to do with "advising young academics.")

    I hope to do more of this sort of promotion: I'm happy to go on podcasts and other forums and answer questions about The Unjournal, respond to doubts you may have, consider your suggestions and discuss alternative initiatives.

Some (other) ways to follow The Unjournal's progress

• Check out our PubPub page to read evaluations and author responses.

• Follow @GivingTools (David Reinstein) on Twitter or Mastodon, or the hashtag #unjournal (when I remember to use it).

• Visit our homepage for an overview.

Sign up below to get these progress updates in your inbox about once per fortnight, along with opportunities to give your feedback.

Alternatively, fill out this quick survey to get this newsletter and tell us some things about yourself and your interests. The data protection statement is linked there.

Progress notes since last update

Progress notes: We will keep track of important developments here before we incorporate them into the official fortnightly "Update on recent progress." Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.

    See also Previous updates

    Hope these updates are helpful. Let me know if you have suggestions.

Evidence base

• The effect of publishing peer review reports on referee behavior in five scholarly journals

• Improving Peer Review in Economics: Stocktaking and Proposals

• What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics


    Previous updates

Progress notes since last update

"Progress notes": We will keep track of important developments here before we incorporate them into the official fortnightly "Update on recent progress." Members of the UJ team can add further updates here or in this linked Gdoc; we will incorporate changes.

Update on recent progress: 21 July 2023

Funding

The SFF grant is now 'in our account' (all is public and made transparent on our OCF page). This makes it possible for us to:

    • move forward in filling staff and contractor positions (see below); and

    • increase evaluator compensation and incentives/rewards (see below).

We are circulating a press release sharing our news and plans.

Timelines and pipelines

Our "Pilot Phase," involving ten papers and roughly 20 evaluations, is almost complete. We just released the evaluation package for “The Governance of Non-Profits and Their Social Impact: Evidence from a Randomized Program in Healthcare in DRC.” We are now waiting on one last evaluation, followed by author responses, and then "publishing" the final two packages at unjournal.pubpub.org. (Remember: we publish the evaluations, responses, and synthesis; we link the research being evaluated.)

We will decide and award our Impactful Research Prizes (and possible seminars) and evaluator prizes soon after. The winners will be determined by a consensus of our management team and advisory board (potentially consulting external expertise), and the choices will be largely driven by the ratings and predictions given by Unjournal evaluators. After we make the choices, we will make our decision process public and transparent.

"What research should we prioritize for evaluation, and why?"

We continue to develop processes and policies around which research to prioritize. For example, we are considering whether we should set targets for different fields, for related outcome "cause categories," and for research sources. This discussion continues among our team and with stakeholders. We intend to open up the discussion further, making it public and bringing in a range of voices. The objective is to develop a framework and a systematic process for making these decisions. See our expanding notes and discussion on this topic.

In the meantime, we are moving forward with our post-pilot “pipeline” of research evaluation. Our management team is considering recent prominent and influential working papers from the National Bureau of Economic Research (NBER) and beyond, and we continue to solicit submissions, suggestions, and feedback. We are also reaching out to users of this research (such as NGOs, charity evaluators, and applied research think tanks), asking them to identify research they particularly rely on and are curious about. If you want to join this conversation, we welcome your input.

(Paid) Research opportunity: help us do this

We are also considering hiring a small number of researchers to each do a one-off (~16-hour) project in “research scoping for evaluation management.” The project is sketched in “Unjournal standalone work task: research scoping for evaluation management”; essentially: summarizing a research theme and its relevance, identifying potentially high-value papers in this area, choosing one paper, and curating it for potential Unjournal evaluation.

We see a lot of value in this task and expect to actually use and credit this work.

If you are interested in applying to do this paid project, please let us know through our call-to-action survey form.

Call for "Field Specialists"

    Of course, we can't commission the evaluation of every piece of research under the sun (at least not until we get the next grant :) ). Thus, within each area, we need to find the right people to monitor and select the strongest work with the greatest potential for impact, and where Unjournal evaluations can add the most value.

    This is a big task and there is a lot of ground to cover. To divide and conquer, we’re partitioning this space (looking at natural divisions between fields, outcomes/causes, and research sources) amongst our management team as well as among what we now call...

"Field Specialists" (FSs), who will:

    • focus on a particular area of research, policy, or impactful outcome;

    • keep track of new or under-considered research with potential for impact;

• explain and assess the extent to which The Unjournal can add value by commissioning this research to be evaluated;

• “curate” these research objects: adding them to our database, considering what sorts of evaluators might be needed and what the evaluators might want to focus on; and

• potentially serve as an evaluation manager for this same work.

Field specialists will usually also be members of our Advisory Board, and we are encouraging expressions of interest for both together. (However, these don’t need to be linked in every case.)

Interested in a field specialist role or other involvement in this process? Please fill out this general involvement form (about 3–5 minutes).

Setting priorities for evaluators

    We are also considering how to set priorities for our evaluators. Should they prioritize:

    • Giving feedback to authors?

    • Helping policymakers assess and use the work?

    • Providing a 'career-relevant benchmark' to improve research processes?

We discuss this topic in a linked post, considering how each choice relates to our Theory of Change.

Increase in evaluator compensation, incentives, and rewards

We want to attract the strongest researchers to evaluate work for The Unjournal, and we want to encourage them to do careful, in-depth, useful work. We've increased the base compensation for (on-time, complete) evaluations to $400, and we are setting aside $150 per evaluation for incentives, rewards, and prizes. Details on this to come.

Please consider signing up for our evaluator pool (fill out this form).

Adjacent initiatives and 'mapping this space'

As part of The Unjournal’s general approach, we keep track of (and keep in contact with) other initiatives in open science, open access, robustness and transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differ from other efforts. This Airtable view gives a preliminary breakdown of similar and partially overlapping initiatives, and tries to catalog the similarities and differences to give a picture of who is doing what, and in what fields.

Also to report

New Advisory Board members

• Gary Charness, Professor of Economics, UC Santa Barbara

• Nicolas Treich, Associate Researcher, INRAE; member, Toulouse School of Economics (animal welfare agenda)

• Anca Hanea, Associate Professor; expert judgment, biosciences, applied probability, uncertainty quantification

Tech and platforms

We're working with PubPub to improve our process and interfaces. We plan to take on a KFG (Knowledge Futures Group) membership to help us work closely with them as they build their platform to be more attractive and useful for The Unjournal and other users.

Our hiring, contracting, and expansion continues

• Our next hiring focus: Communications. We are looking for a strong writer who is comfortable communicating with academics and researchers (particularly in economics, social science, and policy), journalists, policymakers, and philanthropists. Project-based.

• We've chosen (and are in the process of contracting) a strong quantitative meta-scientist and open science advocate for the project “Aggregation of expert opinion, forecasting, incentives, meta-science.” (Announcement coming soon.)

• We are also expanding our Management Committee and Advisory Board; see our calls to action.

Potentially relevant events in the outside world

Update on recent progress: 1 June 2023

    Update from David Reinstein, Founder and Co-Director

A path to change

With the new grant funding, we now have the opportunity to move forward and really make a difference. I think The Unjournal, along with related initiatives in other fields, should become the place policymakers, grant-makers, and researchers go to judge whether research is reliable and useful. It should be a serious option for researchers looking to get their work evaluated. But how can we start to have a real impact?

    Over the next 18 months, we aim to:

    1. Build awareness: (Relevant) people and organizations should know what The Unjournal is.

    2. Build credibility: The Unjournal must consistently produce insightful, well-informed, and meaningful evaluations and perform effective curation and aggregation of these. The quality of our work should be substantiated and recognized.

    3. Expand our scale and scope: We aim to grow significantly while maintaining the highest standards of quality and credibility. Our loose target is to evaluate around 70 papers and projects over the next 18 months while also producing other valuable outputs and metrics.

I sketch these goals, along with our theory of change, specific steps and approaches we are considering, and some "wish-list wins." Please feel free to add your comments and questions.

The pipeline flows on

While we wait for the new grant funding to come in, we are not sitting on our hands. Our "pilot phase" is nearing completion. Two more sets of evaluations have been posted on our PubPub page.

With three more evaluations already in progress, this will yield a total of 10 evaluated papers. Once these are completed, we will decide, announce, and award the recipients of the Impactful Research Prize and the prizes for evaluators, and organize online presentations and discussions (maybe linked to an "award ceremony"?).

Contracting, hiring, expansion

    No official announcements yet. However, we expect to be hiring (on a part-time contract basis) soon. This may include roles for:

    • Researchers/meta-scientists: to help find and characterize research to be evaluated, identify and communicate with expert evaluators, and synthesize our "evaluation output"

    • Communications specialists

    • Administrative and Operations personnel

We've posted a brief, rough description of these roles, along with a quick form to indicate your potential interest and link your CV or webpage.

You can also (or alternatively) register your interest in doing (paid) research evaluation work for The Unjournal, and/or in being part of our advisory board.

We also plan to expand our Management Committee; please reach out if you are interested or can recommend suitable candidates.

Tech and initiatives

    We are committed to enhancing our platforms as well as our evaluation and communication templates. We're also exploring strategies to nurture more beneficial evaluations and predictions, potentially in tandem with replication initiatives. A small win: our Mailchimp signup should now be working, and this update should be automatically integrated.

Welcoming new team members

We are delighted to welcome Jordan Dworkin (FAS) and Nicolas Treich (INRA/TSE) to our Advisory Board, and Anirudh Tagat (Monk Prayogshala) to our Management Committee!

• Dworkin's work centers on "improving scientific research, funding, institutions, and incentive structures through experimentation."

• Treich's current research agenda largely focuses on the intersection of animal welfare and economics.

• Tagat investigates economic decision-making in the Indian context, measuring the social and economic impact of the internet and technology, and a range of other topics in applied economics and behavioral science. He is also an active participant in the COS SCORE project.

Update on recent progress: 6 May 2023

Grant funding from the Survival and Flourishing Fund

The Unjournal was recommended and approved for a substantial grant through the 'S-Process' of the Survival and Flourishing Fund. More details and plans to come. This grant will help enable The Unjournal to expand, innovate, and professionalize. We aim to build the awareness, credibility, scale, and scope of The Unjournal, and the communication, benchmarking, and useful outputs of our work. We want to have a substantial impact, building towards our mission and goals...

    To make rigorous research more impactful, and impactful research more rigorous. To foster substantial, credible public evaluation and rating of impactful research, driving change in research in academia and beyond, and informing and influencing policy and philanthropic decisions.

    Innovations: We are considering other initiatives and refinements (1) to our evaluation ratings, metrics, and predictions, and how these are aggregated, (2) to foster open science and robustness-replication, and (3) to provide inputs to evidence-based policy decision-making under uncertainty. Stay tuned, and please join the conversation.

Opportunities: We plan to expand our management and advisory board, increase incentives for evaluators and authors, and build our pool of evaluators and participating authors and institutions. Our previous calls to action are still relevant if you want to sign up to be part of our evaluation (referee) pool, submit your work for evaluation, etc. (We are likely to put out a further call soon, but all responses will be integrated.)

Evaluation 'output'

We have published a total of 12 evaluations and ratings of five papers and projects, as well as three author responses. Four of these can be found on our PubPub page (the most concise listing), and one on our Sciety page (we aim to mirror all content on both pages). All the PubPub content has a DOI, and we are working to get these indexed on Google Scholar and beyond.

The two most recently released evaluations (of Haushofer et al., 2020, and Barker et al., 2022) both address the question "Is CBT effective for poor households?" [link: EA Forum post]

    Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.

See the evaluation summaries and ratings, with linked evaluations, for Haushofer et al. and Barker et al.

Update on recent progress: 22 April 2023

New 'output'

We are now up to twelve total evaluations of five papers. Most of these are on our PubPub page (we are currently aiming to have all of the work hosted both at PubPub and on Sciety, gaining DOIs and entering the bibliometric ecosystem). The latest two are on an interesting theme, as noted in a recent EA Forum post:

    Two more Unjournal Evaluation sets are out. Both papers consider randomized controlled trials (RCTs) involving cognitive behavioral therapy (CBT) for low-income households in two African countries (Kenya and Ghana). These papers come to very different conclusions as to the efficacy of this intervention.

These are part of The Unjournal's 'direct NBER evaluation' stream.

    More evaluations coming out soon on themes including global health and development, the environment, governance, and social media.

Animal welfare

    To round out our initial pilot: We're particularly looking to evaluate papers and projects relevant to animal welfare and animal agriculture. Please reach out if you have suggestions.

New features of this GitBook: GPT-powered 'chat' Q&A

    You can now 'chat' with this page, ask questions, and get answers with links to other parts of the page. To try it out, go to "Search" and choose "Lens."

Update on recent progress: 17 Mar 2023

See our latest post on the EA Forum, covering:

1. Our new platform (PubPub), enabling DOIs and CrossRef (bibliometrics)

2. "Self-correcting science"

3. More evaluations soon

Update on recent progress: 19 Feb 2023

Content and 'publishing'

1. Our first posted evaluation is up: "Long Term Cost-Effectiveness of Resilient Foods" (Denkenberger et al.), with evaluations from Scott Janzwood, Anca Hanea, and Alex Bates, and an author response.

2. Two more evaluations 'will be posted soon' (waiting for final author responses).

Tip of the spear ... right now we are:

• Working on getting six further papers (projects) evaluated, most of which are part of our NBER "direct evaluation" track

    • Developing and discussing tools for aggregating and presenting the evaluators' quantitative judgments

    • Building our platforms, and considering ways to better format and integrate evaluations

Funding, plans, collaborations

    We are seeking grant funding for our continued operation and expansion (see below). We're appealing to funders interested in Open Science and in impactful research.

    We're considering collaborations with other compatible initiatives, including...

    • replication/reproducibility/robustness-checking initiatives,

    • prediction and replication markets,

    • and projects involving the elicitation and 'aggregation of expert and stakeholder beliefs' (about both replication and outcomes themselves).
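To illustrate the "aggregation of expert and stakeholder beliefs" idea above, here is a minimal sketch of how several evaluators' quantitative ratings might be combined. This is purely illustrative: the function, the interval format, and the aggregation rule are our own assumptions, not The Unjournal's actual method.

```python
from statistics import median

def aggregate_ratings(ratings):
    """Summarize several evaluators' 0-100 ratings of one paper.

    Each rating is a (lower, midpoint, upper) 90% credible interval.
    Uses the median midpoint (robust to a single outlier) and the
    widest pooled bounds, so evaluator disagreement stays visible.
    """
    lowers, mids, uppers = zip(*ratings)
    return {"midpoint": median(mids), "lower": min(lowers), "upper": max(uppers)}

# Three hypothetical evaluators' ratings for a single paper:
summary = aggregate_ratings([(55, 70, 85), (60, 75, 90), (40, 65, 80)])
print(summary)  # {'midpoint': 70, 'lower': 40, 'upper': 90}
```

Taking a median rather than a mean limits the influence of one outlying evaluator; many other rules (weighted opinion pooling, fitting full distributions) are possible, which is exactly the design space such collaborations would explore.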

Management and administration, deadlines

• We are now under the 'fiscal sponsorship' of the Open Collective Foundation (this does not entail funding, only a legal and administrative home). We are postponing the deadline for judging the Impactful Research Prize and the prizes for evaluators. Submission of papers, and the processing of these, has been somewhat slower than expected.

Other news and media

• EA Forum: recent post and AMA (answering questions about The Unjournal's progress, plans, and relation to effective-altruism-relevant research).

• March 9–10: David Reinstein will present at the COS Unconference, in the session "Translating Open Science Best Practices to Non-academic Settings"; see the agenda. David will discuss The Unjournal for part of this session.

Calls to action

See our previous calls to action; these are basically still all relevant.

1. Evaluators: We have a strong pool of evaluators. However, at the moment we are particularly seeking evaluators:

• with quantitative backgrounds, especially in economics, policy, and social science;

• comfortable with statistics, cost-effectiveness analysis, impact evaluation, and/or Fermi (Monte Carlo) models.

Recall, we pay at least $250 per evaluation, we typically pay more in net ($350), and we are looking to increase this compensation further. Please fill out this form (about 3–5 minutes) if you are interested.

2. Research to evaluate/prizes: We continue to be interested in submitted and suggested work. One area we would like to engage with more: quantitative social science and economics work relevant to animal welfare.

    Hope these updates are helpful. Let me know if you have suggestions.

    Impactful Research Prize Winners

The Unjournal is delighted to announce the winners of our inaugural Impactful Research Prize. We are awarding our first prize to Takahiro Kubo (NIES Japan and Oxford University) and co-authors for their research titled "Banning wildlife trade can boost demand". The paper stood out for its intriguing question, the potential for policy impact, and methodological strength. We particularly appreciated the authors’ open, active, and detailed engagement with our evaluation process.

The second prize goes to Johannes Haushofer (NUS Singapore and Stockholm University) and co-authors for their work "The Comparative Impacts of Cash Transfers and a Psychotherapy Program on Psychological and Economic Wellbeing". Our evaluators rated this paper among the highest across a range of metrics. It was highly commended for its rigor, the importance of the topic, and the insightful discussion of cost-effectiveness.

    We are recognizing exceptional evaluators for credible, insightful evaluations. Congratulations to Phil Trammell (Global Priorities Institute at the University of Oxford), Hannah Metzler (Complexity Science Hub Vienna), Alex Bates (independent researcher), and Robert Kubinec (NYU Abu Dhabi).

We congratulate all of the winners on their contributions to open science and their commitment to rigorous research. We also thank the other authors who submitted their work but were not selected at this time; we received many excellent submissions, and we are committed to supporting authors beyond this research prize.

Please see the full press release, as well as award details, below.

• Awareness ∩ Credibility ∩ Scale → Impact