The Peer Communities In (PCI) organization and the Peer Community Journal, a diamond open access journal, have considerable overlap with The Unjournal model. They started out (?) as a "recommendation system" but have since established the Peer Community Journal to "publish unconditionally, free of charge, exclusively, immediately (as soon as possible) [and on an opt-in basis] . . . any article recommended by a PCI."
Especially relevant to The Unjournal are these aspects of their program:
The standard "recommender" model has an approved recommender volunteer to play the role of managing editor for a paper and make the decisions; authors are consulted to recommend reviewers.
This might raise conflict-of-interest concerns, e.g., someone becoming a "recommender" for a friend's paper or for work that supports their own agenda.
There are 17 "Peer Communities in" (i.e., research areas)—mainly in life sciences (some seem to have just started; there are no public preprints up).
Authors must
They "publish" the article (on an opt-in basis) rather than acting as an "overlay journal," to improve indexing possibilities; authors can instead submit elsewhere, and there are "PCI-friendly" journals.
They depend on volunteer evaluations.
Their evaluation is 0/1 and descriptive rather than quantitative.
(See sections below)
As part of The Unjournal’s general approach, we keep track of and maintain contact with other initiatives in open science, open access, robustness/transparency, and encouraging impactful research. We want to be coordinated. We want to partner with other initiatives and tools where there is overlap, and clearly explain where (and why) we differentiate from other efforts.
The Airtable view below gives a preliminary breakdown of some initiatives that are the most similar to—or partially overlap—ours, and tries to catalog the similarities and differences to give a picture of who is doing what and in what fields.
See especially eLife and Peer Communities In
Sciety is essentially a hub for curating the sort of evaluations that Unjournal aims to do. Users can access research works that have been publicly evaluated.
There are several initiatives for public—and sometimes journal-independent—peer evaluation, including around two dozen such groups. However, these are nearly exclusively in biology and related areas.
Sciety’s mission is to grow a network of researchers who evaluate, curate and consume scientific content in the open. In doing so, we will support several long-term changes to scientific communication:
Change peer review to better recognize its scholarly contribution
Shift the publishing decision from editors to authors
Move evaluation and curation activity from before to after publication
Our community-driven technology effort is producing an application that can support the changes in behaviour required to secure this future.
eLife is a fairly well-respected (?) journal in the life sciences. Their New Model (originally called "publish, review, curate") was big news. Their three-month update suggests it is fairly stable and successful. Here's their FAQ. Their model is similar to ours in many ways, but it's mainly or exclusively for the life sciences. They use Sciety for curation.
They don't have explicit quantitative metrics, but an "eLife assessment . . . is written with the help of a common vocabulary to ensure consistency," which may proxy this.
Evaluators (reviewers) are not compensated. ("We offer remuneration to our editors but not to our peer reviewers.")
Reviewers' names are not displayed. ("All public comments posted alongside a preprint will be signed by eLife and not by individuals, putting the onus on eLife as an organisation and community to ensure that the outputs of our peer-review process are of the highest standard.")
They charge a $2,000 APC. Presumably, this is true for all "reviewed preprints" on the eLife website, whether or not you request it become a "version of record."
The evaluation is non-exclusive unless you request that the reviewed preprint be a "'Version of Record' that will be sent to indexers like PubMed and can be listed in funding, job applications and more."
Some share of the work they cover consists of registered reports.
Organization/discussion following a thread in...
Some conversation highlights:
Kris Gulati: Recently I've been talking to more global priorities-aligned researchers to get to know what people are working on. I noticed they're somewhat scattered around (Stanford, PSE, Chicago, Oxford, etc.). Additionally, established academics don't always entirely grasp global priorities-focused work, so it can be tough to get feedback on ideas from supervisors or peers when the work is pretty different from the more orthodox research many academics focus on. One way of remedying this is to have an informal seminar series where people interested in GP work present early-stage ideas and can receive feedback on them.
David Mannheim: Yes, this seems very promising. And I think that it would be pretty easy to get a weekly seminar series together on this on Zoom.
Robin Hanson: Why limit it to PhD students? All researchers can gain from feedback, and can offer it.
Eva Vivalt: Sounds great. GPI also has seminars on global priorities research in philosophy and economics that might be of interest. . . . [followed by some notes of caution] I'm just worried about stretching the senior people too thin. I've been attending the econ ones remotely for a while, and only this semester did I finally feel like it had really matured; losing a senior person would be a setback. I fully think there should be many groups, even within econ, and at different institutions; that would be a healthy ecosystem.
Kris, responding to Dave Rhys-Bernard: If GPI's seminar series is meant to be private, then it's worth running something additional, provided we can get a decent critical mass of attendance and some senior people are happy to attend.
DR: I think a focus on empirical economics, social science, and program evaluation would be most promising (and I could help with this). We could also incorporate "applications of economic theory and decision theory." Maybe I would lean away from philosophy and "fundamental theory," as GPI's seminar seems to concentrate on that.
Rethink Priorities would probably (my guess) be willing to attach our name to it and a decent number of our research staff would attend. I think GPI might be interested, hopefully Open Philanthropy and other organizations. Robin Hanson and other academics have expressed interest.
We could try to run a minimal proof-of-concept online seminar, e.g., once per month.
Start with...
Presentations of strong, solid working papers and research projects from reputable authors
EA-adjacent and -aligned academics and academically connected EA-org researchers (at RP, Open Phil, GPI, etc.)
"Job-markety PhD students"
Make it desirable to present
Selective choices, "awards" with small stipends? Or "choose a donation"?
Guarantee of strong feedback and expertise
Communication/coverage of work
Encourage experts to attend and give concrete feedback
Open and saved chat window
Write up feedback; consider drawing future presenters from the attending crowd
DR: I would not publicize this until we get #the-featured-research-seminar-series off the ground . . . just do it informally, perhaps.
15-20 minute presentations
Provide or link writeup for follow-up comments and #collaborative-annotated-feedback
Such a seminar needs association with credible people and orgs to get participation.
Do we need any funding for small honorariums or some such?
Do we want to organize "writeups" and publicity? Should we gain support for this?
Economic theory and empirics draw different groups of researchers . . . I think a focus on empirical and applied work would have a critical mass.
See draft above
I think a really neat feature, as a companion to the seminar, could be that the author would (ideally) post a working paper or project website and everyone would leave collaborative annotation comments on it.
This kind of feedback could be golden for the author.
GPI lunchtime seminar (not public)
EA Global talks
RP has internal talks; hope to expand this
Link: "Asterisk is a quarterly magazine of clear writing and clear thinking about things that matter."
Asterisk is a new quarterly journal of ideas from in and around Effective Altruism. Our goal is to provide clear, engaging, and deeply researched writing about complicated questions. This might look like a superforecaster giving a detailed explanation of the reasoning they use to make a prediction, a researcher discussing a problem in their work, or a deep dive into something the author noticed didn't quite make sense. While everything we publish should be useful (or at least interesting) to committed EAs, our audience is the wider penumbra of people who care about improving the world but aren't necessarily routine readers of, say, the EA Forum.
Includes "Speculative pieces with 'epistemic signposts'"
Possible scope for collaboration or sharing
Follow up on crucial research—I will share non-sensitive parts of the Airtable
Sharing a database/CRM of
Experts to vouch
Interested academics and good writers
Shared thinking on "what is relevant" and "how to classify things"
Unjournal could "feed in" to Asterisk: academic article first, then a writeup for Asterisk; they have funding and can pay authors ~$4,000 for 4,000 words; they can't guarantee that academic work will feed into Asterisk.
Passing back and forth relevant work and directions to go in
Some shared cross-promotion (e.g., at universities and in policy circles, where both Unjournal and Asterisk are relevant)
Ben West on how to make the EA Forum involve more rigorous processes
Ozzy at QURI — I will contact him about "how to do better evaluation"
This initiative and the EA/global-priorities-focused Unjournal will interact with the EA Forum and build on initiatives emerging there.
Some of these links come from a conversation with Aaron Gertler
Here's where to suggest new Forum features.
Here's an example of a PR FAQ post that led us to develop a new feature.
Note: Reinstein and Hamish Huggard have worked on tools to help transform R Markdown and bookdown files. Some of this work can be found in this repo (but may need some explanation).
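Their scripts aren't reproduced here, but as a rough illustration of the kind of transformation involved, here is a minimal sketch (in Python, with hypothetical file names, and not the actual Reinstein/Huggard tooling) that strips the YAML header and R code chunks from an .Rmd file so that the remaining markdown prose could be pasted into a forum-style post:

```python
# Minimal sketch: convert an R Markdown file to plain markdown prose by
# removing the YAML front matter and the R code chunks. This does NOT run
# the R code (knitr/bookdown handle that); it only keeps the prose.
import re
from pathlib import Path

CHUNK = "`" * 3  # the triple-backtick fence marker used by R Markdown chunks

def rmd_to_plain_markdown(rmd_text: str) -> str:
    # Drop a leading YAML block delimited by "---" lines.
    text = re.sub(r"\A---\n.*?\n---\n", "", rmd_text, flags=re.DOTALL)
    # Drop fenced R chunks such as {r setup, echo=FALSE} ... .
    text = re.sub(CHUNK + r"\{r[^}]*\}.*?" + CHUNK, "", text, flags=re.DOTALL)
    # Collapse the runs of blank lines left behind by the removals.
    return re.sub(r"\n{3,}", "\n\n", text).strip() + "\n"

if __name__ == "__main__":
    source = Path("paper.Rmd")  # hypothetical input file
    Path("paper.md").write_text(rmd_to_plain_markdown(source.read_text()))
```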
Jaime Sevilla has thoughts on creating a peer-review system for the Forum. (See embedded doc below, link here.)
To create a quick and easy prototype to test, you fork the EA Forum and use that fork as a platform for the Unjournal project (maybe called something like "The Journal of Social Impact Improvement and Assessment").
People (ideally many from EA) would use the Forum-like interface to submit papers to this Unjournal.
These papers would look like EA Forum posts, but with an included OSF link to a PDF version. Any content (e.g., slides or video) could be embedded in the submission.
All submissions would be reviewed by a single admin (you?) for basic quality standards.
Most drafts would be accepted to The Unjournal.
Any accepted drafts would be publicly "peer reviewed." They would achieve peer-reviewed status when >x (3?) people from a predetermined or elected board of editors or experts had publicly or anonymously reviewed the paper by commenting publicly on the post. Reviews might also involve rating the draft on relevant criteria (INT?). Public comment/review/rating would also be possible.
Draft revisions would be optional but could be requested. These would simply be new posts with "version X" / "vX" appended to the title.
All good comments or posts to the journal would receive upvotes, etc., so authors, editors, and commentators would gain recognition, status, and "points" from participation. This is sufficient to generate participation in most forums and is notably lacking in most academic settings.
Good papers submitted to the journal would be distinguished by being more widely read, engaged with, and praised than others. If viable, they would also win prizes. As an example, there might be a call for papers on solving issue x with a reward pool of grant/unconditional funding of up to $x for winning submissions. The top x papers submitted to The Unjournal in response to that call would get grant funding for further research.
A change in rewards/incentives (from "I had a paper accepted/cited" to "I won a prize") seems to have various benefits.
It still works for traditional academic metrics—grant money is arguably even more prized than citations and publication in many settings.
It works for non-academics who don't care about citations or prestigious journal publications.
As a metric, "funds received" would probably better track researchers' actual impact than their citations and acceptance in a top journal. People won't pay for more research that they don't value, but they may cite a paper, or accept it to a journal, for other reasons.
Academics could of course still cite the DOIs and get citations tracked this way.
Reviewers could be paid per-review by research commissioners.
Here is a quick example of how it could work for the first run: Open Philanthropy calls for research on something they want to know about (e.g., interventions to reduce wild animal suffering). They commit to provide up to $100,000 in research funding for good submissions and $10,000 for review support. Ten relevant experts apply and are elected to the expert editorial board to review submissions. They receive $300 per review and are expected to review at least x papers. People submit papers; these are reviewed; OP awards follow-up prizes to the winning papers. The cycle repeats with different funders, and so on. (A rough code sketch of these mechanics follows below.)
I suppose I like the above because it seems pretty easy and actionable as a test run for something to refine and scale. I estimate that I could probably do it myself if I had 6–12 months to focus on it. However, I imagine that I am missing a few key considerations, as I am usually over-optimistic! Feel free to point those out and offer feedback.
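To make those mechanics a bit more concrete, here is a minimal illustration (not part of the proposal; the field names, the three-review threshold, and the OSF link are placeholders drawn loosely from the numbers above) of how submission status and the first-run budget arithmetic might be represented:

```python
# Illustrative sketch of the prototype's data model and budget arithmetic.
from dataclasses import dataclass, field

REVIEWS_NEEDED = 3           # the ">x (3?)" board reviews for peer-reviewed status
PAY_PER_REVIEW = 300         # USD per review, from the example first run
REVIEW_SUPPORT_POOL = 10_000 # USD committed for review support
RESEARCH_PRIZE_POOL = 100_000  # USD awarded by the funder as follow-up prizes

@dataclass
class Submission:
    title: str
    osf_link: str                 # link to the PDF version on OSF
    version: int = 1              # revisions become new posts with a higher version
    board_reviews: list[str] = field(default_factory=list)  # reviewer names or "anonymous"
    ratings: list[int] = field(default_factory=list)        # scores on relevant criteria

    @property
    def peer_reviewed(self) -> bool:
        # A draft reaches "peer-reviewed" status once enough board members
        # have publicly commented on the post.
        return len(self.board_reviews) >= REVIEWS_NEEDED

# With $10,000 of review support at $300 per review, roughly 33 reviews can
# be funded, so ten board members would each cover three or four papers.
max_paid_reviews = REVIEW_SUPPORT_POOL // PAY_PER_REVIEW

sub = Submission("Interventions to reduce wild animal suffering", "https://osf.io/xxxxx")
sub.board_reviews += ["Reviewer A", "anonymous", "Reviewer C"]
print(sub.peer_reviewed, max_paid_reviews)  # True 33
```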
Notable:
PREreview.org
Mercatus commissions external reviews
NBER, CEPR, etc. -- very loosely filtered within their member networks
World Bank, Federal Reserve, etc. Internal review?
Open Philanthropy?
We designed and disseminated a survey taken by over 1,400 economists in order to (i) understand their experiences with peer review and (ii) collect opinions about potential proposals to improve the system.
...
We reviewed the existing literature about peer review, drawing on sources from inside and outside of economics. ... We then built a (non-comprehensive) themed bibliography ...
... we took the additional step of preparing a list of over 160 proposals.
Other peer-review models
Our current peer-review system relies on the feedback of a limited number of ad hoc referees, given after a full manuscript has been produced. We consider several changes that could be made to this model, including:
Post-publication peer review: Submissions could be published immediately and then subjected to peer review, or they could be subject to continued evaluation at the conclusion of the standard peer-review process.
Peer review of registered reports: Empirical papers could be conditionally accepted before the results are known, based on their research question and design. A limited number of journals have started to offer publication tracks for registered reports.
Crowdsourced peer review and prediction markets: Rather than relying on a small number of referees, the wisdom of crowds could be leveraged to provide assessments of a manuscript's merits.
Non-economists and non-academics as referees: Besides enlarging the size of the pool of referees who assess a paper, the diversity of the pool could be increased by seeking the opinion of researchers from other disciplines or non-academics, such as policy makers.
Collaborative peer review platforms: Communication between authors, reviewers, and editors could be made more interactive, with the implementation of new channels for real-time discussion. Collaborative platforms could also be set up to solicit feedback before journal submission occurs.