This page is mainly for The Unjournal management, advisory board and staff, but outside opinions are also valuable.
Unjournal team members:
Priority 'ballot issues' are given in our 'Survey form', linked from the Airtable (ask for the link)
Key discussion questions are in the 'broad_issue_stuff' view of the 'questions' table, which links to discussion Google Docs
Considering papers/projects
Direct-evaluation track: when should we proceed with papers that have "R&Rs" at a journal?
'Policy work' not (mainly) intended for academic audiences?
We are considering a second stream to evaluate non-traditional, less formal work that is not written with academic standards in mind. This could include the strongest work published on the EA Forum, as well as a range of further applied research from EA/GP/LT-linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, and Faunalytics, as well as EA-adjacent organizations and relevant government white papers. See comments; see also Pete Slattery’s proposal, which namechecks the Unjournal.
E.g., for animal welfare...
We further discuss the case for this stream and sketch some potential policies for it.
Evaluation procedure and guidelines
Internal discussion space:
Feedback and discussion vs. evaluations
DR: I suspect that signed reviews (cf. blog posts) provide good feedback and evaluation. However, when it comes to ratings (quantitative measures of a paper's value), my impression from existing initiatives and conversations is that people are reluctant to award anything less than 5/5 'full marks'.
Why Single-blind?
Power dynamics: referees don't want to be 'punished', may want to flatter powerful authors
Connections and friendships may inhibit honesty
'Powerful referees signing critical reports' could hurt early-career researchers (ECRs)
Why signed reports?
Public reputation incentive for referees
(But note single-blind paid review has some private incentives.)
Fosters better public dialogue
Compromise approaches
Author and/or referee choose whether it should be single-blind or signed
Random trial: We can compare empirically (are signed reviews less informative?)
Use a mix (1 signed, 2 anonymous reviews) for each paper
Anonymity of evaluators
We may revisit our "evaluators decide if they want to be anonymous" policy. Changes will, of course, never apply retroactively: we will carefully keep our promises.
However, we may consider requesting that certain evaluations be anonymous, or that evaluators publish their names. A mix of anonymous and signed reviews might be ideal, leveraging some of the benefits of each.
Which metrics and predictions to ask, and how?
We are also researching other frameworks, templates, and past practices; we hope to draw on validated, theoretically grounded projects.
Discussion amongst evaluators, initial and revised judgments?
See the 'IDEAS protocol' and related work (2022)
Revisions as part of process?
Timing of releasing evaluations
Should we wait until all commissioned evaluations are in, as well as authors' responses, and release these as a group, or should we sometimes release a subset of these if we anticipate a long delay in others? (If we did this, we would still stick by our guarantee to give authors two weeks to respond before release.)
Non-Anonymity of Managing editors
Considerations
My memory is that when submitting a paper, I usually learn who the senior editor was, but not the managing editor. But there are important differences in our case. For a traditional journal, the editors make an 'accept/reject/R&R' decision; the referee's role is technically an advisory one. In our case, there is no such decision to be made. For The Unjournal, MEs are choosing evaluators, corresponding with them, explaining our processes, possibly suggesting what aspects to evaluate, and perhaps putting together a quick summary of the evaluations to be bundled into our output. But we don't make any 'accept/reject/R&R' decisions; once the paper is in our system and on our track, there should be a fairly standardized approach. Because of this, my thinking is:
We don’t really need so many ‘layers of editor’ … a single managing editor (or co-MEs), who consults other people on the UJ team informally, should be enough
Presenting and hosting our output
Status, expenses, and payments
Our status
The Unjournal is now an independent 501(c)(3) organization. We have new (and hopefully simpler and easier) systems for submitting expenses.
Evaluation manager process
Update, Feb. 2024: We are moving the detailed discussion of this process to an internal Coda page (accessible by team members only). We present an overview in broad strokes below.
See also our overview and flowchart of our full process (including the evaluation manager role).
Submitting for payments and expenses
Evaluators: to claim your payment for evaluation work, please complete this very brief form.
You will receive your payment via a Wise transfer (they may ask you for your bank information if you don't have an account with them).
We aim to process all payments within one week.
Confidentiality: Please note that even though you are asked to provide your name and email, your identity will be visible only to The Unjournal administrators, for the purposes of making this payment. The form asks for the title of the paper you are evaluating; if you are uncomfortable providing this, please let us know and we can find another approach.
Anonymity and 'salted hash' codes
This information should be moved to a different section
Why do we call it a 'salted hash'?
The 'hash' itself is a one-way encryption of either your name or your email. We store this information in a database shared only internally at The Unjournal. If you ask for full anonymity, this information is kept only on the hard drives of our co-manager, our operations RA, and potentially the evaluator.
But if we used a plain hash, anyone who knows your name or email could potentially 'check' whether the hash pertains to you. That's why we 'salt' it: before encrypting, we add an additional bit of 'salt', a password known only to our co-managers and operations RA. This better protects your anonymity.
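A minimal sketch of the idea in Python (illustrative only: the actual salt, hash function, and code length we use differ and are kept private):

```python
import hashlib

SALT = "example-secret-passphrase"  # hypothetical; the real salt is known only to co-managers and the ops RA

def salted_hash(identifier: str, salt: str = SALT) -> str:
    """One-way hash of an evaluator's name or email, mixed with the salt."""
    normalized = identifier.strip().lower()
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

# Without the salt, anyone could hash a guessed email and compare;
# with the salt, the code reveals nothing unless the salt itself leaks.
print(salted_hash("evaluator@example.com")[:12])
```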
What bank/payment information might we need?
Type: ABA [or another transfer type?]
Account Holder: (name)
Email:
Abartn: ????????? (the nine-digit ABA routing number)
City:
State:
Country:
Post Code:
First Line: (street address)
Legal Type: PRIVATE
Account Type: CHECKING [or SAVINGS?]
Account Number: ...
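For illustration, here is how the details above might be collected as a single record (a hypothetical sketch; the field names follow the list in this section and are not an official Wise API payload):

```python
# Hypothetical record of the payment details listed above.
payment_details = {
    "type": "ABA",                       # US bank transfer
    "account_holder": "Jane Evaluator",  # name, as it appears on the account
    "email": "evaluator@example.com",
    "abartn": "123456789",               # nine-digit ABA routing number
    "legal_type": "PRIVATE",
    "account_type": "CHECKING",
    "account_number": "000123456789",
    "address": {
        "city": "Springfield",
        "state": "IL",
        "country": "US",
        "post_code": "62704",
        "first_line": "123 Main St",
    },
}
```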
Additional invoice information
Compensation: As of December 2023, evaluation managers are compensated a minimum of $300 per project, and up to $500 for detailed work. Further work on 'curating' the evaluation, engaging further with authors and evaluators, writing detailed evaluation summary content, etc., can earn up to an additional $200.
If you are the evaluation manager, please follow the process described in our private Coda space.
In brief, evaluation managers:
Engage with our previous discussion of the paper: why we prioritized this work, what sort of evaluators would be appropriate, and what to ask them to do.
Inform and engage with the paper's authors, asking them for updates and for any feedback requests. The process varies depending on whether the work is part of our "Direct evaluation" track or whether we require the authors' permission.
Find and contact potential evaluators with relevant expertise. We generally seek two evaluators per paper.
Suggest research-specific issues for evaluators to consider. Guide evaluators on our process.
Read the evaluations as they come in, suggest additions or clarifications if necessary.
Rate the evaluations for awards and bonus incentives.
Share the evaluations with the authors, requesting their response.
Optionally, provide a brief "evaluation manager's report" (synthesis, discussion, implications, process) to accompany the evaluation package.
We give the authors two weeks to respond before publishing the evaluation package (and they can always respond afterwards).
Once the evaluations are up on PubPub, reach out to the evaluators again with the link, in case they want to view their own evaluation and the others. Evaluators may be allowed to revise their evaluation, e.g., if the authors find an oversight in it. (We are working on a policy for this.)
At the moment (Nov. 2023) we don't have any explicit 'revise and resubmit' procedure as part of the process. Authors are encouraged to share changes they plan to make, and a (perma)link to where their revisions can be found. They are also welcome to independently (re)submit an updated version of their work for a later Unjournal evaluation.
This page should explain or link clear and concise explanations of the key resources, tools, and processes relevant to members of The Unjournal team, and others involved.
19 Feb 2024: Much of the information below is out of date. We also plan to move most of this content to our internal (Coda) system.
The main platforms for the management team are outlined below with links provided.
Slack group and channels
Please ask for group access, as well as access to private channels, especially "management-policies". Each channel should have a description and some links at the top.
Slack is for quick conversations and coordination.
Airtable
Please ask for an invitation. Airtable is an interactive online relational database. Only our management team and selected others (with careful controls) should have access to the Airtable.
Each of the 'tables' in the Airtable is explained in the first ('readme') table...
The Airtable is serving several functions at the moment, including:
Project management (see "broad_goals" and "tasks" tables)
"CRM" & external comms (see "people-orgs", "participants_etc", "org-link", "text_templates", ...)
Editorial/evaluation management (discussed below)
Data storage ("output_eval")
Surveys and internal discussion/consensus ("questions", "responses")
Going forward, some of these functions may be replaced by other tools:
We're moving task management to ClickUp (nearly set up; onboarding soon).
Much of the evaluation management process will be moved to PubPub (coming soon, we hope).
ClickUp may also be used for much of the internal knowledge-base content, including much of the present "Management details" GitBook section.
Read the linked Google Doc (internal access only) for a concise understanding of what you need to know and use most. Feel free to ask sidebar questions.
Watch the linked recording (Dropbox: internal access only).
11 Oct 2023 partial update: We divided the Airtable into two bases, one focusing on research and evaluation, and the other on everything else.
GitBook (edit access optional)
Management team: You don't need to edit the GitBook if you don't want to, but we're trying to use it as our main place to 'explain everything' to ourselves and others. We will try to link all content here. Note that you can use 'search' and 'lens' to look for things.
PubPub
Access to PubPub is mainly needed for 'full-service' evaluation manager work.
Google drive: Gdocs and Gsheets
Please ask for access to this drive. It contains meeting notes, discussions, grant applications, and tech details.
Open Collective Foundation
This is for submitting invoices for your work.
Advisory board
The main platforms needed for the advisory board are outlined below with links provided.
Slack group and channels
Members of the advisory board can join our Slack (if they want). They can have access to private channels (subject to discretion), other than the 'management-policies' channel.
Airtable: with discretion
Advisory board members can be given access to our Airtable, subject to discretion: advisory board members might be authors, evaluators, job candidates, or part of external organizations we may partner with. Before adding an advisory board member to the Airtable, please remove or relocate any content related to their own research, their evaluation work, their job application status, etc.
Short of full Airtable access, AB members may also be given specific access to 'survey links' and key views.
Open Collective Foundation
This is for submitting invoices for your work.
Evaluation managers/managing evaluations
In addition to the management-team platforms explained above, here is how to use those platforms specifically for managing evaluations.
Airtable
To use Airtable for evaluation management, please see the section titled "Managing evaluations of research"; in the Google Drive, this is under "forms_templates_tips_guidelines_for_management".
PubPub
For details on our current PubPub process, please see the relevant document; in the Google Drive, it is under "hosting and tech".
Open Collective Foundation
This should be used for payments for evaluators. Please see the link below for how to do this.
Research-linked contractors
Evaluators
Authors
Notes:
Airtable: Get to know its features; it's super useful. E.g., 'views' provide different pictures of the same information. 'Link' field types connect different tables by their primary keys, allowing information and calculations to flow back and forth (see the sketch after these notes).
Airtable table descriptions: available for each table, e.g., by hovering over the '(i)' symbol for each tab. Many of the columns in each tab also have descriptions.
Additional Airtable security: We also keep more sensitive information in this Airtable encrypted, or moved to a different table that only David Reinstein has access to.
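As a rough analogy for how 'link' fields work (hypothetical data; Airtable does this through its UI rather than code), a link behaves like a foreign key joining two tables:

```python
# Two "tables" keyed by primary key, as in a relational database.
papers = {
    "rec_p1": {"title": "Cash transfers RCT", "evaluators": ["rec_e1", "rec_e2"]},
}
evaluators = {
    "rec_e1": {"name": "C. Wu", "fee": 400},
    "rec_e2": {"name": "E. Garcia", "fee": 450},
}

# A 'rollup' across the link: total evaluator fees for each paper.
for paper in papers.values():
    total = sum(evaluators[eid]["fee"] for eid in paper["evaluators"])
    print(f'{paper["title"]}: total evaluator fees = ${total}')
```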
Research scoping discussion spaces
15 Aug 2023: We are organizing some meetings and working groups, and building some private spaces ... where we are discussing 'which specified research themes and papers/projects we should prioritize for UJ evaluation.'
Research we prioritize, along with short comments and ratings on its prioritization, is currently maintained in our Airtable database (under 'crucial_research'). We consider 'who covers and monitors what' (in our core team) in the 'mapping_work' table. This exercise suggested some loose teams and projects. I link some (private) Google Docs for those project discussions below. We aim to make a useful discussion version/interface public when this is feasible.
Team members and field specialists: You should have access to a Google Doc called "Unjournal Field Specialists+: Proposed division (discussion), meeting notes", where we are dividing up the monitoring and prioritization work.
Some of the content in the sections below will overlap.
General discussions of prioritization
Development economics, global health, adjacent
'Impactful, Neglected, Evaluation-Tractable' work in the global health & RCT-driven intervention-relevant part of development economics
Mental health and happiness; HLI suggestions
Economics as a field, sub-areas
Syllabi (and ~agendas): Economics and global priorities (and adjacent work)
Microeconomic theory and its applications? When/what to consider?
Animal welfare
The economics of animal welfare (market-focused; 'ag econ'), implications for policy
Attitudes towards animals/animal welfare; behavior change and 'go veg' campaigns
Impact of political and corporate campaigns
The environment
Environmental economics and policy
Psychology and 'attitudes/behavioral'
How can UJ source and evaluate credible work in psychology? What to cover, when, who, with what standards...
Moral psychology/psychology of altruism and moral circles
Innovation, scientific progress, technology
Innovation, R&D, broad technological progress
Meta-science and scientific productivity
Social impact of AI (and other technology)
Catastrophic risks (economics, social science, policy)
Pandemics and other biological risks
Artificial intelligence; AI governance and strategy (is this in the UJ wheelhouse?)
International cooperation and conflict
Applied research/Policy research stream
See the discussion of this stream above.
Other
Long term population, growth, macroeconomics
Normative/welfare economics and philosophy (should we cover this?)
Empirical methods (should we consider some highly-relevant subset, e.g., meta-analysis?)
Things we consider in choosing evaluators (i.e., 'reviewers')
Did the people who suggested the paper suggest any evaluators?
We prioritize our "evaluator pool" (people who signed up)
Expertise in the aspects of the work that need evaluation
Interest in the topic/subject
Conflicts of interest (especially co-authorships)
Secondary concerns: likely alignment and engagement with Unjournal priorities; good writing skills; time and motivation to write a review promptly and thoroughly.
Avoiding COI
Mapping collaborator networks through Research Rabbit
Our Research Rabbit (RR) database contains papers we are considering evaluating. To check for potential COI, we use the following steps:
After choosing a paper, we select the button "these authors." This presents all the authors of that paper.
After this, we choose "select all" and click "collaborators." This presents everyone who has collaborated on papers with the authors.
Finally, by using the "filter" function, we can determine whether a potential evaluator has ever collaborated with an author of the paper (the sketch after this list illustrates the logic).
If a potential evaluator has no COI, we will add them to our list of possible evaluators for this paper.
Note: Coauthorship is not an automatic disqualifier for a potential evaluator; however, we think it should be avoided where possible. If it cannot be avoided, we will note it publicly.
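In code, the collaborator-overlap logic looks roughly like this (a hypothetical sketch with made-up names; in practice we do this through the Research Rabbit interface, not a script):

```python
# Authors of the paper under evaluation.
authors = {"A. Smith", "B. Jones"}

# collaborators[author] = everyone who has coauthored a paper with that author.
collaborators = {
    "A. Smith": {"C. Wu", "D. Patel"},
    "B. Jones": {"C. Wu", "E. Garcia"},
}

def has_coi(candidate: str) -> bool:
    """Flag a candidate evaluator who has coauthored with any author of the paper."""
    coauthor_pool = set().union(*(collaborators[a] for a in authors))
    return candidate in coauthor_pool

print(has_coi("C. Wu"))      # True: prior coauthorship, so a potential COI
print(has_coi("F. Okafor"))  # False: no recorded collaboration
```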
Governance of The Unjournal
Updated 11 Jan 2023
Administrators, accounts
The official administrators are David Reinstein (working closely with the Operations Lead) and Gavin Taylor; both have control and oversight of the budget.
Roles: Founding and management committee
Major decisions are made by majority vote by the Founding Committee (aka the ‘Management Committee’).
Members:
Roles: Advisory board
Advisory board members are kept informed and consulted on major decisions, and relied on for particular expertise.
Advisory Board Members:
Communication and style
Style
To aim for consistency of style across all UJ documentation, a short style guide for the GitBook has been posted. Feel free to suggest changes or additions using the comments.
Note that this document, like many others, is under construction and likely to change without notice. The plan is to make use of it for any outward-facing communications.