Mapping collaborator networks through Research Rabbit
We use a website called Research Rabbit (RR).
Our RR database contains papers we are considering evaluating. To check for potential conflicts of interest (COI), we use the following steps:
After choosing a paper, we select the button "these authors." This presents all the authors for that paper.
After this, we choose "select all," and click "collaborators." This presents all the people that have collaborated on papers with the authors.
Finally, by using the "filter" function, we can determine whether the potential evaluator has ever collaborated with an author of the paper (an illustrative sketch of this overlap check appears below).
If a potential evaluator has no COI, we will add them to our list of possible evaluators for this paper.
Note: Coauthorship is not a disqualifier for a potential evaluator; however, we think it should be avoided where possible. If it cannot be avoided, we will note it publicly.
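For intuition only, the coauthorship check described above boils down to a set-overlap test. The sketch below is a minimal, hypothetical illustration (the function name and author lists are made up); in practice we do this manually through Research Rabbit's interface rather than with a script.

```python
# Hypothetical sketch of the coauthorship-overlap check we perform manually
# in Research Rabbit. Names and data structures here are illustrative only.

def has_coi(paper_authors: set[str], evaluator_coauthors: set[str]) -> bool:
    """Return True if the candidate evaluator has coauthored work with
    any author of the paper under consideration."""
    return bool(paper_authors & evaluator_coauthors)

# Example usage with made-up names:
paper_authors = {"A. Smith", "B. Jones"}
evaluator_coauthors = {"C. Lee", "B. Jones"}  # people the evaluator has published with

if has_coi(paper_authors, evaluator_coauthors):
    print("Potential COI: avoid if possible, or note it publicly.")
else:
    print("No coauthorship COI found; add to the list of possible evaluators.")
```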
Updated 11 Jan 2023
The official administrators are David Reinstein (working closely with the Operations Lead) and Gavin Taylor; both have control and oversight of the budget.
Major decisions are made by majority vote by the Founding Committee (aka the ‘Management Committee’).
Members:
Advisory board members are kept informed and consulted on major decisions, and relied on for particular expertise.
Advisory Board Members:
9 Apr 2024: This section outlines our management structure and policies. More detailed content is being moved to our private (Coda.io) knowledge base.
Tech, tools and resources has been moved to its own section: Tech, tools and resources
The Unjournal is now an independent 501(c)(3) organization. We have new (and hopefully simpler and easier) systems for submitting expenses.
Evaluators: to claim your payment for evaluation work, please complete this very brief form.
You will receive your payment via a Wise transfer (they may ask you for your bank information if you don't have an account with them).
We aim to process all payments within one week.
Confidentiality: Although you are asked to provide your name and email, your identity will be visible only to The Unjournal administrators, for the purposes of making this payment. The form also asks for the title of the paper you are evaluating; if you are uncomfortable providing this, please let us know and we can find another approach.
This information should be moved to a different section
Update Feb. 2024: We are moving the discussion of the details of this process to an internal Coda link (here, accessible by team members only). We will present an overview in broad strokes below.
See also Mapping evaluation workflow for an overview and flowchart of our full process (including the evaluation manager role).
Compensation: As of December 2023, evaluation managers are compensated a minimum of $300 per project, and up to $500 for detailed work. Further work on 'curating' the evaluation, engaging further with authors and evaluators, writing detailed evaluation summary content, etc., can earn up to an additional $200.
If you are the evaluation manager, please follow the process described in our private Coda space here.
Engage with our previous discussion of the paper: why we prioritized this work, what sort of evaluators would be appropriate, and what to ask them to do.
Inform and engage with the paper's authors, asking them for updates and for any specific feedback requests. The process varies depending on whether the work is part of our "Direct evaluation" track or whether we require authors' permission.
Find potential evaluators with relevant expertise and contact them. We generally seek two evaluators per paper.
Suggest research-specific issues for evaluators to consider. Guide evaluators on our process.
Read the evaluations as they come in, suggest additions or clarifications if necessary.
Rate the evaluations for awards and bonus incentives.
Share the evaluations with the authors, requesting their response.
Optionally, provide a brief "evaluation manager's report" (synthesis, discussion, implications, process) to accompany the evaluation package.
See also: Protecting anonymity
We give the authors two weeks to respond before publishing the evaluation package (and they can always respond afterwards).
Once the evaluations are up on PubPub, reach out to the evaluators again with the link, in case they want to view their evaluation and the others. The evaluators may be allowed to revise their evaluation, e.g., if the authors find an oversight in the evaluation. (We are working on a policy for this.)
At the moment (Nov. 2023) we don't have an explicit 'revise and resubmit' procedure as part of the process. Authors are encouraged to share changes they plan to make, and a (perma)-link to where their revisions can be found. They are also welcome to independently (re)-submit an updated version of their work for a later Unjournal evaluation.
Did the people who suggested the paper suggest any evaluators?
We prioritize our "evaluator pool" (people who have signed up to evaluate for us).
Expertise in the aspects of the work that need evaluation
Interest in the topic/subject
Conflicts of interest (especially co-authorships)
Secondary concerns: Likely alignment and engagement with Unjournal's priorities. Good writing skills. Time and motivation to write the evaluation promptly and thoroughly.
This page is mainly for The Unjournal management, advisory board and staff, but outside opinions are also valuable.
Unjournal team members:
Priority 'ballot issues' are given in our 'Survey form', linked to the Airtable (ask for link)
Key discussion questions are in the broad_issue_stuff view of the questions table, linking discussion Google docs.
We are considering a second stream to evaluate non-traditional, less formal work, not written with academic standards in mind. This could include the strongest work published on the EA Forum, as well as a range of further applied research from EA/GP/LT linked organizations such as GPI, Rethink Priorities, Open Philanthropy, FLI, HLI, Faunalytics, etc., as well as EA-adjacent organizations and relevant government white papers. See comments here; see also Pete Slattery’s proposal here, which namechecks the Unjournal.
We further discuss the case for this stream and sketch some potential policies for it HERE.
Internal discussion space: Unjournal Evaluator Guidelines & Metrics
DR: I suspect that signed reviews (cf. blog posts) provide good feedback and evaluation. However, when it comes to ratings (quantitative measures of a paper's value), my impression from existing initiatives and conversations is that people are reluctant to award anything less than 5/5 'full marks'.
Power dynamics: referees don't want to be 'punished', may want to flatter powerful authors
Connections and friendships may inhibit honesty
'Powerful referees signing critical reports' could hurt ECRs
Public reputation incentive for referees
(But note single-blind paid review has some private incentives.)
Fosters better public dialogue
Inhibits obviously unfair and impolite 'trashing'
Author and/or referee choose whether it should be single-blind or signed
Random trial: We can compare empirically (are signed reviews less informative?)
Use a mix (1 signed, 2 anonymous reviews) for each paper (see the illustrative assignment sketch below)
We may revisit our "evaluators decide if they want to be anonymous" policy. Changes will, of course, never apply retroactively: we will carefully keep our promises. However, we may consider requesting that certain evaluators/evaluations specifically be anonymous, or that evaluators publish their names. A mix of anonymous and signed reviews might be ideal, leveraging some of the benefits of each.
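For illustration only: if we ran the random trial suggested above, assigning evaluators to 'signed' vs. 'anonymous' conditions could look something like the minimal sketch below. This is a hypothetical example, not an adopted Unjournal procedure; the function name and the "1 signed, 2 anonymous" split simply follow the 'use a mix' idea above.

```python
import random

# Hypothetical sketch: randomly assign evaluation conditions for one paper,
# following the "1 signed, 2 anonymous" mix suggested above. Illustrative only.

def assign_conditions(evaluators, n_signed=1, seed=None):
    """Randomly label n_signed evaluators as 'signed' and the rest as 'anonymous'."""
    rng = random.Random(seed)
    shuffled = list(evaluators)
    rng.shuffle(shuffled)
    return {
        name: ("signed" if i < n_signed else "anonymous")
        for i, name in enumerate(shuffled)
    }

# Example usage with made-up evaluator labels:
print(assign_conditions(["Evaluator A", "Evaluator B", "Evaluator C"], seed=1))
# Prints a mapping like {'Evaluator ...': 'signed', ...}; the exact assignment depends on the seed.
```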
We are also researching other frameworks, templates, and past practices; we hope to draw from validated, theoretically grounded projects such as RepliCATS.
See the 'IDEAS protocol' and Marcoci et al., 2022
Considering for the future: enabling minor revisions
Should we wait until all commissioned evaluations are in, as well as authors' responses, and release these as a group, or should we sometimes release a subset of these if we anticipate a long delay in others? (If we did this, we would still stick by our guarantee to give authors two weeks to respond before release.)
To aim for consistency of style in all UJ documentation, a short style guide for the GitBook has been posted here. Feel free to suggest changes or additions using the comments. Note this document, like so many, is under construction and likely to change without notice. The plan is to make use of it for any outward-facing communications.
15 Aug 2023: We are organizing some meetings and working groups, and building some private spaces ... where we are discussing 'which specified research themes and papers/projects we should prioritize for UJ evaluation.'
This is guided by concerns we discuss in other sections (e.g., 'what research to target', 'what is global priorities relevant research')
Research we prioritize, along with short comments and ratings on its prioritization, is currently maintained in our Airtable database (under 'crucial_research'). We consider 'who covers and monitors what' (in our core team) in the 'mapping_work' table. This exercise suggested some loose teams and projects; I link some (private) Gdocs for those project discussions below. We aim to make a useful discussion version/interface public when this is feasible.
Team members and field specialists: You should have access to a Google Doc called "Unjournal Field Specialists+: Proposed division (discussion), meeting notes", where we are dividing up the monitoring and prioritization work.
Some of the content in the sections below will overlap.
Unjournal: Which research? How to prioritize/process it?
'Impactful, Neglected, Evaluation-Tractable' work in the global health & RCT-driven intervention-relevant part of development economics
Mental health and happiness; HLI suggestions
GiveWell-specific recommendations and projects
Governance/political science
Global poverty: Macro, institutions, growth, market structure
Evidence-based policy organizations, their own assessments and syntheses (e.g., 3ie)
How to consider and incorporate adjacent work in epidemiology and medicine
Syllabi (and ~agendas): Economics and global priorities (and adjacent work)
Microeconomic theory and its applications? When/what to consider?
The economics of animal welfare (market-focused; 'ag econ'), implications for policy
Attitudes towards animals/animal welfare; behavior change and 'go veg' campaigns
Impact of political and corporate campaigns
Environmental economics and policy
Unjournal/Psychology research: discussion group: How can UJ source and evaluate credible work in psychology? What to cover, when, who, with what standards...
Moral psychology/psychology of altruism and moral circles
Innovation, R&D, broad technological progress
Meta-science and scientific productivity
Social impact of AI (and other technology)
Techno-economic analysis of impactful products (e.g., cellular meat, geo-engineering)
Pandemics and other biological risks
Artificial intelligence; AI governance and strategy (is this in the UJ wheelhouse?)
International cooperation and conflict
See discussion here.
Long term population, growth, macroeconomics
Normative/welfare economics and philosophy (should we cover this?)
Empirical methods (should we consider some highly-relevant subset, e.g., meta-analysis?)
This page should explain or link clear and concise explanations of the key resources, tools, and processes relevant to members of The Unjournal team, and others involved.
5 Sep 2024: Much of the information below is out of date. We have moved most of this content to our internal (Coda) system (but may move some of it back into hidden pages here to enable semantic search)
See also (and integrate): Jordan's 'Onboarding notes'
The main platforms for the management team are outlined below with links provided.
Please ask for group access, as well as access to private channels, especially "management-policies". Each channel should have a description and some links at the top.
We are no longer using Airtable; the process and instructions have been moved into Coda.
See Tech scoping
Management team: You don't need to edit the GitBook if you don't want to, but we're trying to use it as our main place to 'explain everything' to ourselves and others. We will try to link all content here. Note you can use 'search' and 'lens' to look for things.
Access to PubPub is mainly needed only for doing 'full-service evaluation manager' work.
Please ask for access to this drive. This drive contains meeting notes, discussion, grant applications and tech details.
This is for submitting invoices for your work.
The main platforms needed for the advisory board are outlined below with links provided.
Members of the advisory board can join our Slack (if they want). They can have access to private channels (subject to ) other than the 'management-policies' channel.
We are no longer using Airtable (except to recover some older content); the process and instructions have been moved into Coda.io.
In addition to the management team platforms explained above, additional information for how to use the platforms specifically for managing evaluations is outlined below.
We are no longer using Airtable; the process and instructions have been moved into Coda.
For details on our current PubPub process, please see this Google Doc (in the Google Drive, under "hosting and tech").
Airtable: Get to know its features; it's super-useful. E.g., 'views' provide different pictures of the same information. 'Link' field types connect different tables by their primary keys, allowing information and calculations to flow back and forth (see the illustrative sketch below).
Airtable table descriptions can be viewed by hovering over the '(i)' symbol for each tab. Many of the columns in each tab also have descriptions.
Additional Airtable security: We also keep more sensitive information in this Airtable encrypted, or move it to a different table that only David Reinstein has access to.
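For intuition only: the 'linked records' idea mentioned above amounts to one table's field storing another table's primary key, so a lookup can pull values across tables. The sketch below is a generic, hypothetical illustration in plain Python; it is not Airtable's actual API, and the table and field names are made up.

```python
# Hypothetical illustration of how a 'Link' field works conceptually:
# a record stores the primary key of a record in another table,
# and a lookup resolves it. Not Airtable's API; names are made up.

papers = {
    "paper_001": {"title": "Example study", "evaluator_ids": ["eval_a"]},
}
evaluators = {
    "eval_a": {"name": "Evaluator A", "expertise": "development economics"},
}

def lookup_evaluators(paper_id: str) -> list:
    """Follow the link field from a paper record to the linked evaluator records."""
    linked_ids = papers[paper_id]["evaluator_ids"]
    return [evaluators[eid]["name"] for eid in linked_ids]

print(lookup_evaluators("paper_001"))  # ['Evaluator A']
```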
Use discretion in sharing: advisory board members might be authors, evaluators, job candidates, or parts of external organizations we may partner with