Why publish science in peer-reviewed journals?

The recent announcement of a new journal sponsored by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust generated a bit of discussion about the issues in the scientific publishing process it is designed to address—arbitrary editorial decisions, slow and unhelpful peer review, and so on. Left unanswered, however, is a more fundamental question: why do we publish scientific articles in peer-reviewed journals to begin with? What value does the existence of these journals add? In this post, I will argue that cutting journals out of scientific publishing to a large extent would be unconditionally a good thing, and that the only thing keeping this from happening is the absence of a “killer app”.

Google Scholar in 2015?

The publishing process as it stands currently

As most readers here are aware, the path to publishing a scientific paper has two major obstacles: first, the editor of a journal has to decide that a paper is potentially “interesting” enough for publication in their journal; if it passes that threshold, it is then sent out for “peer review” by two to four people chosen by the editor. The reviewers make a recommendation about whether or not the journal should publish the paper—if they all like it, chances are it will be accepted (potentially after additional experiments); if one of them hates it, chances are it will be rejected; if the reviews are mixed, the editor makes a judgment call. In total, this process involves a handful of people at most, and takes anywhere from a few months to a year (of course, if the paper is rejected, you generally start all over again).

The problems with this system have been pointed out ad nauseam; the most succinct statement of the issues I’ve seen is in a nice commentary by former British Medical Journal editor Richard Smith. To summarize, peer review is costly (in terms of time and money), random (the correlation in perceived “publishability” of a paper between two groups of reviewers is little better than zero), ineffective at detecting errors, biased towards established groups and against originality, and sometimes abused (in that reviewers can steal ideas from papers they review or block the publication of competitors).

We can do better

So why do we stick with this system, despite its many flaws? These days, there’s zero cost to self-publishing a paper online (for example, on a preprint server like arXiv or Nature Precedings), so going through a peer-reviewed journal for publication itself seems a bit silly. However, journals do perform a service of sorts: they filter papers, ostensibly based on interest to a community, and also ostensibly ensure that the details of an analysis are correct and clear enough to be replicated (though in practice, I doubt these filters work as intended).

So let’s take this goal–that of filtering papers based on quality, interest to a community, and reproducibility–as the legitimate service provided by peer-reviewed journals. When phrased like this, it’s simply absurd that our way of achieving this goal involves a handful of unaccountable, often anonymous, reviewers and editors, and takes so much time and money. Certainly the best judge of the interest of a paper to the community is, well, the community itself. Ditto for the best judge of the quality and reproducibility of a paper. So let’s imagine a different system. What features would this system have?

1. Immediate publication without peer review. This is simply a feature taken from preprint servers like arXiv, and addresses the issues of speed and cost of publication.

2. One-click recommendation of papers. Now we need to find a way to filter the papers in step 1. Imagine a feed of new papers (like the feed in reddit or Facebook); one simple and relatively effective filter is to allow individuals to express that they like a paper with a single click (again, like reddit, Facebook, or Google+). It seems stupid, but it requires little effort and is extremely effective in some situations.

3. Connection to a social network. Of course, in some cases I don’t really care if a lot of people like a paper; instead, what I want to know is: do people I trust like the paper? If I’m trying to find the best recent papers on copy number variation, I don’t necessarily care if a thousand people like a paper, but I would probably take a second look at a paper recommended by (my GNZ colleague and copy number variation expert) Don Conrad.

4. Effective search based on the collective opinion on a paper. Many times, I’m searching for the best papers in a field somewhat outside my own. One of the most useful features in Google Scholar in this regard is that it immediately tells you how many citations a paper has received; in general, this is highly correlated with the community opinion of a paper. This breaks down for new papers, all of which have zero citations. Often, I’d like to be able to search the relatively recent literature and sort based on the criteria in steps 2 and 3.

You can imagine additional sorts of features that would be useful in a system like this—comments, voting on comments themselves, encouragement of reproducible research via Sweave or some other mechanism—but the aspects above are probably essential.
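To make features 2-4 concrete, here is a minimal sketch of how such a ranking might work: a paper's score is its global like count, with likes from people in the reader's own network counting extra. All names here (`Paper`, `score`, `rank_feed`, the `trust_weight` parameter) are hypothetical illustrations, not any real system's API.

```python
# Hypothetical sketch: rank a feed of papers by global likes,
# upweighting likes from people the reader trusts (features 2-4 above).
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    liked_by: set = field(default_factory=set)  # user ids who "liked" it

def score(paper, my_network, trust_weight=5.0):
    """One like from someone I follow counts as `trust_weight` anonymous likes."""
    trusted = len(paper.liked_by & my_network)
    return len(paper.liked_by) + (trust_weight - 1) * trusted

def rank_feed(papers, my_network):
    """Sort a feed of new papers, best-scored first."""
    return sorted(papers, key=lambda p: score(p, my_network), reverse=True)
```

Under this toy scoring, a paper liked once by a trusted colleague outranks one liked by three strangers, which is exactly the behavior described in feature 3.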

Does a system like this perfectly address all of the issues with peer review mentioned above? No–my guess is that this sort of system would also be somewhat biased towards established research groups, just as peer review is. But for all other aspects, this sort of system seems superior.

How do we get there?

Many of the above ideas, of course, are not new (see, e.g., discussions by Cameron Neylon). And much of academia has bought into the “peer-reviewed journal” system for evaluation of individuals–that is, when evaluating an individual researcher for a grant or tenure, the quality of the journal in which a piece of work is published is often used as a proxy for the quality of the work itself. It’s easy to see how this system became entrenched, but it is obviously not compatible with the publishing model outlined above (nor is it an ideal system in its own right, unless you’re someone with a knack for getting crappy papers published in Nature). So what’s the way out?

One thing to note is that many of these ideas about community ranking could be incorporated into the “standard” publishing route. Indeed, some aspects of these ideas (comments, rating of articles) have been implemented by the innovative PLoS journals, but have been greeted with deafening yawns from the research community. Why is this? Certainly, it’s partially because these systems are non-trivial to use (you have to log in to some new system every time you want to rate a paper), but most importantly, there’s no sense of community—I’ll never see your comment on a paper, as relevant as it is, unless I come across it by chance, and there’s no mechanism to tell me when a paper is being read and liked by many people I trust. The current implementations of these ideas simply don’t perform the filtering mechanism that they’re designed to replace–if I see that a PLoS One paper is highly rated, this doesn’t help me at all; I’ve already found the paper!

This situation has created the perfect niche for a killer app—one that solves all of these issues and will actually be used. I’m not sure exactly what this will look like, but it will likely tap into already existing online identities and social networks (Google+? Facebook?), will require approximately no learning curve, and will be of genuine utility (i.e., it will deliver the good PLoS One papers to me, rather than waiting for me to find them). Once a system like this exists, and it can be shown that it’s possible to judge people (for grants, tenure, etc.) in this system based on the impact of their work, rather than the prestige of the journal in which the work is published, it’s an additional easy step to eliminate the journals altogether.

Conclusions

Before the internet, peer-reviewed journals and researchers had a happy symbiosis: scientists had no way of getting their best scientific results to the largest audience possible, and journals could perform that service while making a bit of profit. Now, that symbiosis has turned into parasitism: peer-reviewed journals actively prevent the best scientific results from being disseminated, siphoning off time and money that would be better spent doing other things. The funny thing is, somehow we’ve been convinced that this parasite is doing us a favor, and that we can’t survive any other way. It’s not, and we can.

Thanks to everyone, including Daniel Macarthur, Jonathan Pritchard, Daniel Gaffney, and others, who discussed these issues with me and/or gave me feedback on an earlier version of this post. [NB: Being named here in no way implies endorsement of the opinions expressed]


165 Responses to “Why publish science in peer-reviewed journals?”


  • Kasper Daniel Hansen

    On the comment system: having a well-designed comment system would be fantastic as a way to filter papers. However, such a system really has little to do with peer-review; there is no reason why such a system could not be implemented on top of what we already have. In fact, as I understand it, Faculty of 1000 is an attempt at filtering the literature in such a way.

  • In fact, as I understand it, Faculty of 1000 is an attempt at filtering the literature in such a way.

    Yes, this is true. PLoS journals also have commenting and rating systems, and Nature has a commenting system. All of these systems are relatively unused (though I have noticed people putting F1000 ratings for papers on their websites and CVs, so maybe these things are getting noticed). But I think these implementations are confusing cause and effect–the best online communities have commenting systems, but having comments does not in itself create a community.

    I agree that many of these things could be implemented on top of what we already have; all the pieces are in place, just waiting for the right software.

  • Noah Fahlgren

I am not defending the current system per se. It certainly has its faults, but I think the free-for-all system you suggest has its own problems. A couple of things jump out at me. 1) While peer review is “ineffective at detecting errors,” not peer reviewing is 100% ineffective at detecting errors before publication. Will errors be corrected after publication? 2) Social promotion of articles sounds like a good idea in principle, but browse through any of the PLoS journals and try to find a paper that has been rated by anyone. I can tell you they are hard to find. The vast majority of articles have no comments or ratings.

  • Your post makes sense. The current system of peer-review just defends the interests of the happy few: the publishing industry and tenured scientists.

One idea would be for professional review firms to be established for the sole purpose of reviewing scientific papers. They could employ scientists and pay them to review papers in a timely and constructive manner. Once a paper passes this review, the researcher can post their research online. This way, the papers published online have at least been verified through a professional review. Alternatively, journals could pay postdoctoral fellows to review papers for them in a timely and constructive manner. In most labs it is the postdocs who review the papers anyway, and providing this service on a pay-as-you-review basis would speed the process and produce higher quality reviews. The common theme of my comments is to move away from volunteer-based reviews to a payment structure.

  • Noah,

    Thanks for the comments.

    Will errors be corrected after publication?

    Any system has to have a way of pointing out errors by authors, judging whether they are indeed errors by the authors (and not by the reader), and correcting them. Currently, this involves pre-publication reading by 2-4 peer reviewers, and an entirely unclear way of pointing out and correcting errors (some journals have formal technical comments and some don’t. See this for an account of how hard it is to publish a critique of a paper). I’m not sure this is much better than nothing at all, and it’s easy to imagine alternatives to this, especially in the context of community review (allowing comments on a paper, for example, and the addition of new data to a paper to address a comment).

    browse through any of the PLoS journals and find a paper that has been rated by anyone. I can tell you they are hard to find. The vast majority of articles have no comments or ratings.

    Yes, this is true. The PLoS experiment with comments and rating has so far not worked. I don’t think this is an issue with commenting and rating systems, per se, as much as a problem with the implementation, though this is certainly the least tested point in my argument. The fact that no one comments on PLoS articles on the PLoS website does not mean that people don’t have opinions about articles; they just express them in different venues. The key is to have that venue.

  • For more on the same theme, +Peter Murray-Rust has written a couple of recent posts on the topic:
    http://blogs.ch.cam.ac.uk/pmr/2011/07/09/what-is-wrong-with-scientific-publishing-and-can-we-put-it-right-before-it-is-too-late/
    http://blogs.ch.cam.ac.uk/pmr/2011/07/10/what-is-wrong-with-scientific-publishing-an-illustrative-“true”-story/

    And there’s this letter to Nature Biotech on industry access to the literature:

    http://www.nature.com/nbt/journal/v29/n7/full/nbt.1909.html?WT.ec_id=NBT-201107

    Will cross-post this comment at GU as well.

  • Interesting idea and in broad outline Pickrell is certainly right–the traditional peer review system is dying. It’s certainly a good thing to free ourselves from the stranglehold of autocratic journal editors. A more open-source model is certainly the way to go eventually.

    But Abdallah’s point gets at the major weakness of the argument, and the major strength of the traditional system. When I receive an article to review, I do a lot more than “Like” it or write a comment. I spend hours with it. I read it several times and compose a lengthy analysis, often several pages long, with specific comments and detailed critique. I do this for only a handful of papers a year, and I do it only when an editor sends me an article–i.e., out of a sense of duty and obligation.

In dropping the conventional peer-review system, the main worry is that removing that obligation will leave us with a much more superficial reviewing system. Abdallah’s idea of professional review firms is interesting and would provide another job stream for the oversupply of PhDs. Hell, it’d probably pay better than your average postdoc.

  • I think that peer review, problematic as it is, does serve one function that would definitely NOT be taken care of by any online comment system. The reviewers at high quality journals are not bat-shit crazy. The commenters on online publications often are.

    What I mean is this: we have an idea of what such an open non-reviewed system would look like, in the world of science blogs. Watch what happens when one of those blogs publishes something on, say, climate. Before you can say “gee, how crazy can things get?” you see armies of trolls writing the most amazing crap in the comment section. If you use any analysis of comments like that to evaluate a paper, you will be led badly astray.

    Well, what about the social network thing? You mention you would be particularly impressed by the opinion of your friend who you know to be an expert. But many times, scientific results have to be USED, and decisions have to be made about what papers to rely on by people who may not know anyone who is known to be a reliable expert.

    So, while you can put a lot of weight on the opinion of your acquaintance known to be an expert (by the way, isn’t this exactly what editors do when they select reviewers?), much of the time that option won’t be available.

    One thing that I believe is a very good development is getting rid of the “sufficient interest” stage of the review process (e.g., PLoS ONE), and focusing on the “is it correct?” evaluation. The sufficient interest criterion is explicitly linked to the page limits of dead-tree journals. Any editor of such a journal will admit (brag) that they receive more high quality submissions than they have room for, so they must impose another criterion, and they use a subjective estimate of how interesting the paper will be.

  • Watch what happens when one of those blogs publishes something on, say, climate. Before you can say “gee, how crazy can things get?” you see armies of trolls writing the most amazing crap in the comment section. If you use any analysis of comments like that to evaluate a paper, you will be led badly astray.

This is not insurmountable. See, e.g., YouTube before and after it allowed you to vote up and down comments. Before: a wasteland. After: things that are actually kind of informative (or funny). Or see Amazon reviews. You can imagine ways to upweight comments by individuals who have given consistently useful comments in the past (imagine you could correlate an individual’s rating of a paper with its citation count 5 years later, for example). I understand that the commenter “noise” in some fields might be greater than others, but I’m not sure this is a fatal flaw.

  • One thing that I believe is a very good development is getting rid of the “sufficient interest” stage of the review process (e.g., PLoS ONE), and focusing on the “is it correct?” evaluation. The sufficient interest criterion is explicitly linked to the page limits of dead-tree journals.

    I completely agree with this, by the way. The one thing missing is the filter for high quality papers I’m interested in–PLoS One publishes a fire hose of papers. If there were a system in place (like the one I describe in points 2 and 3 in the post) to filter through these, this would be a very important first step.

Sounds similar to what http://www.scholasticahq.com is doing.

  • With regard to item #2 of your list, something like this already exists and is thriving in some niches. E.g. the arxiv/quant-ph community uses http://scirate.com as an overlay to the arXiv to vote for papers.

  • What I distrust about this system (and worries me about some of the comments) is that you seem to argue that the wisdom of crowds should be allowed to trump expertise.
    The current system is arranged the way it is so that knowledgeable people can recognize and comment on how to improve quality work as much as block poor work or work that doesn’t fit the publication.
    Is that system outdated? Hell yes, it worked when publishing was expensive and science was a practice of an elite cadre.
    The real problem is mandatory publishing for scholars, which makes these kinds of gatekeeper standards counter-productive and destructive. Change that and you will be getting somewhere.

  • Hi Joe,

    Great write up. I have to say that I am 100% for the idea. A fellow grad student and I actually discussed a very similar concept a few months ago, which led to another grad student friend of ours (we are all EvBio) forwarding us your article with the heading “Scooped by another UofC grad student”!! I think this system of academic publication will continue to gain support as more people from our generation (the ones that grew up using community-oriented sites like Wikipedia, Reddit, etc.) further infiltrate academia.

One important point my colleague and I discussed, but I don’t think you addressed, was the idea of editing. In my opinion, the whole purpose of a comment system would be rendered moot if the author of the manuscript wasn’t capable of addressing/incorporating some of the comments into the manuscript. I think an easy (but perhaps not perfect) way of addressing this problem would be to include “previous versions” of the manuscript, in addition to the most recent version, on the manuscript’s page. Such a system would be identical to the one used by Wikipedia to maintain edits to its pages, in that you can view the very first entry of any page if you so choose. If such a method were implemented, it would be important to modify the standard for citations to include the date the manuscript was accessed, similar to the protocol used for citing software, in order to ensure that authors citing the work don’t have to worry about later alterations to the manuscript.

    I think another potential benefit of community driven manuscripts would be the ability to direct academics towards relevant current literature in their field. For example, imagine Manuscript 1 makes some discovery and presents Hypotheses A, B, and C as potential explanations for the observed phenomenon. Later, Manuscript 2 addresses hypotheses A and C. It would be neat to have the ability to annotate Manuscript 1 at the point when Hypotheses A and C are discussed with a forwarding address to Manuscript 2. This could actually be driven by the community, with the authors of Manuscript 2 being able to add this annotation personally, or with other members of the community adding articles they have read, which relate to specific statements in the manuscript. Basically, it would be an academic form of SoundCloud (e.g. http://soundcloud.com/ministryofsound/dj-fresh-gold-dust-flux-pavillion-remix).

    Thanks for the write up, looking forward to the day this becomes a reality!

  • This may just be a psychological effect of the peer-review system, but when I write a paper, I know exactly the people I’m writing it for. The people who I think should care about a paper are the people that ideally will review it, but hopefully will read it once published. If I have trouble getting it published, I can email it to them to get their opinions on making it publishable.

    I don’t care what the general public thinks. Certainly somebody that I’ve never met, heard of, or had any contact with could offer me good comments, or derive something useful from my research. On the other hand, when I’m writing a paper, I’m writing for the people I’m interested in collaborating with, my friends, colleagues and people I might want a job from. I don’t think your proposed alternative solves *my* problem as well as peer review does.

I think most of the criticism directed towards the commenting system could easily be addressed by not allowing anonymous comments. If each person who wanted to comment was required to provide their name and academic credentials (perhaps via academia.edu), it would provide a sense of professionalism to the comments and prevent tRolLZ4lYFe from posting meaningless spam.

  • Michael,

    Thanks for the comments. I completely agree that editing (I think arXiv keeps track of different versions of a manuscript) and markup (a la PLoS) would be great features in a system like this. Of course, all of this is easier said than done! I think it’s inevitable that things head in this direction (PLoS One is already an important step), but hopefully it’ll be sooner rather than later.

  • Additionally regarding worries about comments, the “wisdom of crowds”, etc.–the problem with current commenting and rating systems on journals is not that they are overwhelmed with noise. As noted early in the thread, the problem is that no one at all uses them (not cranks, not trolls, not legitimate scientists). If a system like this starts having problems with too many comments, that’s 1) a good thing, in that the system works, and 2) a solvable problem.

  • Joel,

    I’ve always thought of the peer review process as exactly the thing keeping me from communicating with my audience, and I’m definitely not sure how it helps…

    Again, I don’t think any system like this will plausibly run into problems of too much involvement from people who aren’t interested in your line of work (unless it’s some politically controversial area or something), so a system like this would ideally be a direct line to the people you’re interested in.

  • my caveat: I like the idea, and I see the same issues with pre-publication peer-review as mentioned in your post, but..

    as a commenter mentioned above, any new system is going to have to answer the problem of updating and revision (which I see as a strength of the current system). I don’t see commenting as being able to point out the major, or even many of the minor issues with the paper. Even if it is, the incentive for the authors to make large revisions seems minimal.

    Webmedcentral (which I wrote about here: http://blog.openhelix.eu/?p=7439 ) seems to illustrate this issue. Granted, it’s not the killer app, but most of the papers published there are rarely if ever revised. One reason for that is even just getting reviews. Reviewers are recruited by the authors or from the viewership, and as one author who published there wrote ( http://blog.openhelix.eu/?p=7593 ),

    “So I spent about an hour scouring the literature looking for people I thought would be qualified and/or interested reviewers and then mass mailing about 50 of them. Net result: 1 review…”

He also mentioned that until PubMed indexes such research, it won’t be particularly helpful.

    So, this killer app is going to have to solve some other things for me:

    1. incentive for constructive and thorough review/commenting (not just up/down vote or short comments)
    2. incentive for authors to revise and update research and conclusion errors
    3. indexed in a central location like pubmed

  • Another issue is that as soon as your papers’ ratings become explicit selection criteria for grants or tenure, the rating system will be gamed. For relatively small personal financial cost, you could pay an army of Amazon Mechanical Turk workers to +1 your paper. http://goo.gl/IrnQv

    The system could require registration with a Captcha, but Mechanical Turk already gets around the Captcha, since the micro-tasks are performed by actual humans. You could introduce a requirement that registrants must have at least one publication, but then you’d prevent early-stage grad students from participating in the ratings.

    Although I agree that there are problems with using number of citations by peer-reviewed articles as a rating system, it is in effect a quality control step for the *ratings*, which is as important as quality control of the article itself.

    Now obviously the current system is gamed as well, with authors citing their own previous publications and reviews. But you could argue that the volume of self-citation is lower than it might be if the system could be computerized.

  • I like the idea, but I think this ignores the ‘accreditation’ role that journals play. A peer reviewed article is a unit of currency for grant proposals and tenure packages. The ‘glamour magz’ are sought after because they’re sought after–it’s a bubble phenomenon.

    How would tenure committees and grant foundations determine the quality of publications? (it could be done, but without addressing that specifically, most academics will run away).

  • Mike,

Yes, this is a point that several people have brought up with me, and it’s a valid one–even if there’s a “fitness optimum” somewhere else, if people have to go through a valley to get there, it’s not going to happen.

    I think the answer is that these sorts of things have to, at first, be built into the current publishing system. PLoS is starting this (article level metrics, etc.), but it could be done more systematically. If we get to a point where what people judging your tenure package or grant are looking at are 1) numbers of citations of your articles (not the number of citations of the journal), 2) download/usage statistics, 3) scientific reputation (judged by some metric other than number of articles in Nature; I don’t have a perfectly clear idea of how this would work, but many online systems have a notion of “reputation” that works), then at some point people will care more about those things than the prestige of the journals they’re publishing in. At that point, it’s trivial to simply eliminate journals altogether.

  • Thanks for your comment Joe, but you didn’t quite get my point. I am more concerned at the discounting of expertise and the failure of your approach to address the real problem of research publishing.

    What you missed when I referred to the wisdom of crowds isn’t that I object to the noise it generates, it’s the loss of a point of view and the further erosion of expertise.

    For all the noise about experts muting innovation and intellectual nepotism I think an equally strong argument must be made for the role editors and reviewers play in recognizing quality ideas and promoting them. A peer-reviewed journal article is a finished product of agreed-upon and certified merit. That is why it holds on despite its inefficiencies.

    Democratization in the way you propose simply slips the leash off research publishing and heightens the noise level of articles in print as a consequence.

    Academic requirements ensure that too much junk is already in press. It’s the publish or perish paradigm that should be the real target of reform, not the system of review. All that junk gets printed with the current limits in place. It really adds nothing to the betterment of science to encourage more publishing.

And in the end I see your proposed system of on-line publishing with this kind of social review as simply a technological formalization of what I do around my department and colleagues already. It is an important step before submitting for peer review, not a substitute.

    Reforming academic publishing and more openness are laudable goals. I am sorry, but I don’t see your cosmetic changes to review as real improvements. I do see many bad unintended side effects.

  • Alger,

    Thanks for the comments. You argue that editors (and to some extent, reviewers) can promote good work, which is true. In the current system, an editor that really likes a paper can speed it through this process because of their key role. In an alternative system based on community review, anyone with good standing in the community can bring attention to a paper.

  • For another interesting perspective, see:
    http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/

    wish I could vote up this comment :)

  • ignatiusloyola

    It is too bad that this guy ripped the idea off of me. I have approached the http://www.arXiv.org administrators about this idea, but they said that funding and time were the factors preventing something like it.

    For proof, check out http://www.reddit.com/r/hep, which was something I created many months ago as an example to show the arXiv admins. I am the sole moderator of r/hep.

  • blk, ignatiusloyola,

    Those are great, thanks! I particularly like the way the reddit-type site looks (I’ll take the “ripped off” comment as tongue-in-cheek :). If something like that had some social network type features, it would be almost exactly the sort of thing I’m talking about.

  • Doctor: “the cancer treatment I am recommending for you is not standard, in fact it’s brand new. It was voted up on Med-Reddit today. Oh, and I am also going to recommend that you NOT vaccinate your kids…it causes autism.”

  • “Oh, and I am also going to recommend that you NOT vaccinate your kids…it causes autism.”

    Certainly this must be ironic? The Wakefield vaccine->autism paper was published in Lancet…

  • Alan Weinrich

    As one of the peer reviewers implicitly maligned in the article, I need to respond. When I review a paper, the things I look for include whether the research followed a reasonable scientific process, how well the paper describes that process, and whether the conclusions are supported by the data presented in the paper. Most often, I provide very detailed comments to the authors, which frequently leave me feeling as if I should be included as a coauthor to the paper. I recall recommending rejection of only one paper, because it simply seemed irredeemable.
    In my daily work, I use published scientific information. If there were no screening mechanism or reliable peer review of that information, my work would be far less efficient and more prone to error. Frankly, the system proposed in the article scares me.
    I am not sold on the editorial and peer review processes that currently are used to screen and enhance papers to be published. However, I think a credible journal needs to assure its readers that someone who knows something about the topics addressed in a paper has reviewed the papers it publishes. The free-for-all system proposed in this article simply would not serve those who depend on knowing that published papers seemed credible to someone. With all their imperfections, the current peer review systems provide at least that minimum level of confidence.

  • Perhaps as a start, journals may encourage reviewers to make their comments public, or even make their identities public if the reviewers consent. I know PLoS ONE does this.

  • I built science.io to solve exactly these problems. Check it out!

  • I’m curious as to what you all think of Mendeley.

  • The solution to the accountability problem (peer review, etc.) is to require authors to put all their data (deidentified if necessary) online so that others can inspect it. While the system is currently imperfect, this is the biggest information asymmetry that could be easily eliminated and lead to better science.

  • I thought you might be interested in this post I wrote, arguing that scientists should use Quora to disseminate information; it has many of the advantages we would desire in an ideal publication system. http://www.quora.com/Why-should-open-scientists-or-researchers-use-Quora

    If you aren’t familiar with it, Quora is a website on which experts can answer questions (on any topic), and users can ‘recommend’ answers by voting them up, as well as keeping content quality high by voting down bad answers. It works remarkably well, in my opinion. Tell me what you think.

  • somewhat biased towards established research groups…

    This isn’t really a trivial point: unknown young researchers, groups from less wealthy countries, and so on would struggle with such an open system. If it were to happen tomorrow, the perceived-quality signal of the journal would be replaced by the reputation of the authors, lab, or institution that published the work.

    The current system has a lot of faults, but on the other hand, is it really a good idea to encourage even more publishing? It’s already an enormous task to keep up with the literature. In my Ph.D. days it was a quick look through Current Contents and an afternoon per week browsing the main journals in the library.

    In physics and maths the preprint system is effective, but in the biological sciences, where lab methods are so important, at least some sanity check on them before publishing is desirable – it doesn’t work 100% of the time, but it does work to some extent. Apart from a filter for high-quality work, a filter for acceptable quality also has its uses.

  • How about we create a Stack Exchange (http://stackexchange.com/) for papers? It’s hardly Q&A, but instead of a question you’d be submitting a paper.

  • Ken,

    Great stuff! I had the feeling this sort of idea would be more acceptable in comp. sci. and physics; I didn’t realize so many people were working on it there (see also the comments by blk and ignatiusloyola).

  • Keith,

    unknown young researchers, groups from the less rich countries and so on would struggle with such an open system

    I don’t see how this is any different from the current system. One can even imagine that these problems would be lessened in a community-based review system (no cost to publish, complete open access, no need to convince several reviewers and an editor that you’re legit before anyone can see your work, etc.)

  • I like Mendeley, Stack Exchange, et al. However, with all respect, they’re not the killer app (yet?)

  • Heng,

    Perhaps as a start, journals may encourage reviewers to make their comments public, or even make their identities public if the reviewers consent. I know PLoS ONE does this.

    This is potentially a nice idea. However, my opinion is that these sorts of fiddlings with the current system are not likely to help much. We’ve set up the system such that the journals are the police, judge, and jury of the literature (pre-publication), and we end up acting like the prison guards in some perverted Milgram experiment, as if our job were to find reasons to prevent the publication of a paper. We don’t need this system at all.

    For example, I saw a nice paper in Nature yesterday you might be familiar with :) I also noticed this: “Received 01 April 2009”. This is a shame.

  • Joe,

    If a small unknown group from Ghana (thinking here of some friends of mine) got through peer review to publish in Human Genetics then they would get noticed. If they published in a completely open environment they would much more likely be lost in the mass of studies.

    It’s a reality that a paper from your group carries with it some guarantee of quality; a paper from my friends’ group would not, unless at least some sort of guarantee is provided by the journal itself.

    A free-for-all often favors the rich and the strong, not necessarily the best. Although in a free-for-all it’s quite likely that “market forces” would just end up pushing us back to a system similar to what we have now. That’s been the case with blogs to some extent – Nature, SciAm, Wired, etc.

    I would bet that if peer review disappeared… it would come back! It would be too tempting to recreate “prestige” journals, and publish in them.

  • I also noticed this: “Received 01 April 2009”. This is a shame.

    Maybe the 2009 version contained problems that needed fixing and we are all better off for that?!

  • Keith,

    People read Human Genetics?! :)

    I think papers now get lost in the mass of studies; this is unlikely to change. The only thing that will change is the method for filtering that mass. For example, I simply cannot keep up with the flood of papers that come out in PLoS One. But when a blog picks one up, I often notice that it is of interest to me. There must be some way for me to find those PLoS One papers without this informal filter.

  • Maybe the 2009 version contained problems that needed fixing and we are all better off for that?!

    :) I’ll let Heng speak to that, but I can say I’ve seen pieces of that paper in talks for about two years now, and the main idea and results (using the sequentially Markov coalescent to infer human history) haven’t dramatically changed, as far as I can tell. I would have loved to have seen the actual methods two years ago. I think this comment takes for granted that anything that is published is static and cannot be changed in response to comments. It doesn’t have to work that way (again, see the way arXiv works).

  • Peer review is a relic of an age when publishing was paper-based and expensive, and search engines unheard of. The cost of publishing and disseminating a paper in those times was substantial (paper, ink, printing costs, transportation by carriage, ship, or railway), as was the cost of reading a paper (money, storage space for journals, scanning by eye), as was the cost of identifying interesting papers among colleagues (letter, or difficult and expensive travel).

    In that environment, peer review was a practical way to drastically limit the number of papers published while maintaining some “minimum standards”.

    All the scientific reasons for peer review are extinct. Peer review is no longer necessary.

    Not only is it unnecessary, but perhaps also detrimental to the scientific goal of disseminating the best ideas as widely as possible:

    1. It gives undue prominence to stale ideas, and forces scientists to keep their ideas hidden until they are “published”
    2. It creates two groups of people: those with early knowledge of new ideas/data (because they are editors, reviewers or authors), and everybody else
    3. It limits the flow of information, when papers are published in closed-access journals, where “fewer eyes” may scrutinize them
    4. It bounds the types of ideas that can be disseminated: long and involved ideas have to be broken up into pieces to conform to length limits, while small but useful ideas have to be attached to something more substantial to become “publishable”, instead of being broadcast right away.

    Peer-review must be scrapped right away; it persists only because (a) some people make money out of it, (b) some people are comfortable in their insulated scholarly communities, and don’t want their ideas to be scrutinized by the wider public, and (c) academics are at a loss as to how to replace peer review for career advancement. Point (c) is particularly troublesome, as it shows how far science has devolved since the days of the “gentlemen scholars”.

  • Dienekes,

    Well said.

  • Ben Temperton

    One of the benefits of the peer-review system as it presently stands is that a paper may be sent to a reviewer whose expertise differs from, but is just as relevant as, that of the manuscript’s intended audience. My experience with bioinformatic papers is that they tend to be reviewed by other bioinformaticians, who judge the work based on its technical merit. Review by non-bioinformaticians to evaluate the biological significance and likelihood of the findings of the new work is rarer.

    If a review system relies on comments from people who found the paper themselves, chances are it’ll be reviewed by people who deal with similar problems and similar approaches to solving them. In a worst-case scenario, scientific papers would only be reviewed by peers who shared their particular scientific clique, resulting in a self-serving review process. At least in the present system, a paper can be sent by an editor to someone outside of the direct sphere of interest for critique. It’s often these reviews that can force you to question the validity of your findings, resulting in more robust science.

  • But why does peer review need to be scrapped? There are alternatives already, like PLoS ONE. I disagree that peer review is always unhelpful as well – I have been grateful in the past for reviewers’ comments that have improved the end result. I have tried to return the favour myself.

    There are a lot of problems with the current system, which I suppose is evolving into a more modern system. I don’t think peer review is the main issue, especially as there are alternatives:

    1. Money. Yes, far too much is spent on subscriptions; there is no reason why scientific publishing of what is mainly publicly funded work should be a for-profit industry. It’s shocking to have to pay $40 for a single article or $200 for a single hardcopy issue. But this is not due to peer review, or there would be no open access funded by a small fee.

    2. Volume of stuff published. In the end you can get most stuff published somewhere. A non-peer-review system would not improve things here.

    3. Careers / grants. Emphasis on peer-reviewed publications and impact factors is a problem. It encourages rapid publication of lightweight papers rather than one comprehensive study (I think that’s called salami slicing, isn’t it?). It also encourages publishing essentially the same studies in different journals. This all aggravates problem 2.

    Whatever happens to the system, it has to deal with the mass; the quality studies will take care of themselves. Here I agree completely with Joe – what’s needed is the killer app that helps me keep up to date with the relevant studies in my field, whoever did them, while still maintaining some level of quality assessment. I just don’t think peer review is the big issue here.

  • You condemn peer review as “slow and unhelpful.” I don’t know what experience this is based on, but while this is certainly sometimes the case, my experience with peer review is overall good. I have in many instances received very helpful reviews that have substantially improved the paper. And it doesn’t always take long. I’ve had reviews as quickly as 2 weeks. It usually doesn’t take longer than 6 weeks. I think that’s an appropriate time. Sure, I’ve had the occasional case that really sucked. Like the case where it took more than a year to get the paper through, or that report where the reviewer only read the abstract (and said so). But these are exceptions.

    Roughly, in my experience the better the journal, the better the review process.

    Also, you’re missing the actual reason why researchers don’t embrace a review process based on thumbs-up recommendations. It’s very simply that the vast majority of papers would never get any rating. Peer review, despite all its flaws, guarantees that the paper was at least read by one person. If you let papers be judged by +1s, the papers that will be read are those by people who are well known, or people will recommend their friends’ papers. Is that what you want?

    The actual problem, as I argued here, is that peer review is tied to the journals. There is no good reason why review should be tied to the publishing process. I recommend establishing independent review agencies, from which an author (should they want one) could get a review and use it, for example, with a non-reviewed open access server.

  • I disagree that peer review is always unhelpful as well

    and

    You condemn peer review as “slow and unhelpful.”

    The “slow and unhelpful” comment was my summary of some of the reasons put forward for a new journal here. And I certainly would not say that peer review is always unhelpful. Of course you sometimes get good comments when people read your paper; that’s entirely uncontroversial. The argument is that this should be unlinked from whether the paper is published.

    And it doesn’t always take long. I’ve had reviews as quickly as 2 weeks. It usually doesn’t take longer than 6 weeks. I think that’s an appropriate time. Sure, I’ve had the occasional case that really sucked. Like the case where it took more than a year to get the paper through, or that report where the reviewer only read the abstract (and said so).

    Diagnosis: Stockholm syndrome :)

  • there is another debate: do we really need another “top quality” journal? Are Nature xxx, Cell, Science, PLoS Gen, PLoS Bio, etc. doing such a bad job at rapid review and publication? (Science has been criticised recently for publishing some papers too rapidly…)

    Last week we had the Royal Society (http://royalsocietypublishing.org/site/openbiology/), now this – do we need it?

    What I hope, and have hoped ever since the emergence of PLoS, is that we will end up with all the journals being forced to become free to read.

  • Daniel MacArthur

    Diagnosis: Stockholm syndrome :)

    Indeed. Many comments in this thread point out that peer review can sometimes be extremely helpful, which IMO isn’t controversial. The question isn’t, “is peer review sometimes useful?” – of course it is – but rather, “can we imagine alternative systems for evaluating and disseminating research results that would have better outcomes than the current system?”

    I personally think many journal editors and many peer reviewers do an amazing job of working within the existing system. Nonetheless, it is a source of constant wonder to me that so many scientists have come to regard a system that actively inhibits the rapid, free exchange of scientific information as an indispensable component of the scientific process. We can, and should, do better than this.

  • I personally think many journal editors and many peer reviewers do an amazing job of working within the existing system. Nonetheless, it is a source of constant wonder to me that so many scientists have come to regard a system that actively inhibits the rapid, free exchange of scientific information as an indispensable component of the scientific process. We can, and should, do better than this.

    This is spot on.

  • OK, I just read the Richard Smith assessment of peer review (http://breast-cancer-research.com/content/12/S4/S13). There are valid points in that article (although he uses too many anecdotes while condemning them at the same time, and Ioannidis has not “shown how much of what is published is false” – he has put forward his own research study, which may or may not be false!).

    One thing about peer review is that it more or less guarantees any scientist, from wherever, a fair and unbiased appraisal (despite the Smith anecdotes and small studies) of his/her work – any post-publication peer review system should try to preserve this aspect.

    Still though – my priority is a quality filter :)

  • Timothy Gawne

    To paraphrase Winston Churchill, peer-review is the worst possible system for publishing articles: except for all the other systems.

    I personally have found that most of the time peer review is helpful to me as an author. Even when I get an “irrational” review, it’s often because I didn’t explain what I was doing clearly enough. I have gotten some of my best ideas from peer-review comments.

    You will never get people on casual online social networks to read an article as carefully as someone doing a formal review at a journal.

    Now grant review for money is an entirely different issue, don’t get me started.

    My peeve is simply that there are too many journals and that we are all being pushed to publish at such a high rate that nobody can keep up with the literature and good ideas are sliced and diced 20 different ways. I think it should be harder to publish not easier!

    Of course, there is one place where peer review is not needed: when you have a clear practical result that stands on its own. When the Wright brothers flew their airplane, that didn’t need peer review. If a physicist creates an anti-gravity device that actually hovers in mid-air, peer review be damned. Maybe we need to think less of theoretical papers and more of real demonstrable results…

  • To paraphrase Winston Churchill, peer-review is the worst possible system for publishing articles: except for all the other systems.

    I was unaware of Churchill’s strong views on the merits of peer review :)

    I’m pretty sure the Churchill quote is that democracy is better than the other forms of government that have been tried. Presumably he was (perhaps intuitively) basing this on data like counts of people killed in revolutions, counts of heads of state murdered, and maybe some other things (or at least you can imagine some qualitative historical argument). What other systems of scientific publishing have been tried? And what is the data that they are inferior to peer review?

  • Just thought you should know: it’s

    ad nauseam

    with an ‘a’.

  • Razib Khan has a brief post about this at http://blogs.discovermagazine.com/gnxp/2011/07/beyond-peer-reviewed-journals/

    There is a brief discussion on G+ at https://plus.google.com/105365987579074972007/posts/j8CBSTQGBty

    anywhere else???

    One of the points on the other sites is that social sciences & physics already live comfortably with a “peerless” system, so why not biology? One possible reason is the numbers – would it be correct that there are far more research studies in the biological sciences?

    Take an example of gene environment studies, it is already very hard to wade through the published studies to weed out the false associations, it is a blessing that there were not 10x more papers published. Yes to an open system, but the quality issue is a big hurdle (until it’s fixed!)

    BTW there is another downside to peer review – it gets exploited by sellers of dubious products who do a small study, get it published somewhere/somehow, and then tout their product as “clinically proven in peer review study”. I could happily do without that.

    I think there is common ground here among just about all the comments, and maybe Churchill should have another say: for peer review, and (I HOPE) for exorbitant subscriptions, maybe it is the “end of the beginning”.

  • I would love to see a system such as what is proposed here (with some modifications to address issues people have mentioned). Unfortunately, there is no perfect system that will solve all the problems, just better ways than is being done right now. I agree that the publish or perish mentality is a big factor in many of the current problems.

    One big hurdle is money, as has been said. Currently, only those attending a wealthy school have access to a lot of journals. The less money the school has, the more one is cut off from resources. If one is not affiliated with a school, one is severely limited.

    This sort of approach could address this problem quite handily. The costs of running the operation are not trivial, but could be covered easily enough: have each university that supports research pay a small fee. It would not have to be much, considering the number of universities. If a university could dump even one high-priced journal in favor of this system, it would jump at the chance. Small establishments would really win in this situation, and the information would be vastly more available to those who need to read it.

    Of course, this is rather a pipe dream for now. A system like this would need a very large buy-in. The current attempts at similar systems (with the exception of PLoS) are not widely known or utilized enough to achieve this goal. But this could easily change in the future with the right system.

  • I’ve always published all my research work in peer-reviewed journals and cannot speak highly enough of the procedure, even if it takes time. The best comments, advice, assistance even, I have ever received has been from the reviewers.

  • I have gotten wonderful comments and advice from reviewers as well. However, I have also gotten an equal number of comments that were flat out wrong and comments that basically said, “I disagree with your conclusions because it goes against my understandings or common consensus, so regardless of the evidence you present, I reject your paper.” This is extremely frustrating for those of us not yet established and I am sure for more experienced people as well. I find it very disheartening to read a reviewer say that he thinks my work is robust, but he disagrees with my interpretation so the paper gets rejected, especially when that reviewer makes the exact same interpretation in their paper a year later. For a time, I dropped out of scientific research for this reason. I came back, but I know of others that dropped out and never came back.

    There has to be a way to retain the good editorial aspects of peer review and avoid the major pitfalls. Perhaps a requirement that the author present a certification from 2-4 people that the paper had been vetted and revised before allowing it to be posted? Of course, the authors will pick people friendly to their work, but that happens already and would avoid papers being rejected out of sheer professional rivalry as many have seen happen.

  • as i said in my post joe pickrell’s post has pushed me toward marginally assenting to the proposition that peer-review should go. mostly because he presents a relatively fully-fleshed alternative vision. of course implementation matters. this isn’t a slam-dunk from my perspective. right now.

    ultimately i think dan macarthur’s point really can’t be overemphasized. in fact, this is a general issue: those who wish to perturb the status quo in a given situation usually aren’t asserting that we live under a nazi regime and that change will lead to utopia. rather, they’re proposing that there will be a net gain in utility by shifting from system A to system B. we can disagree whether this is so, but is there really a point in asserting that “peer-review has benefits” or that “it has worked for me”? no shit. so? even in obviously tyrannical regimes there are upsides and some people benefit. peer-review isn’t nearly that bad, so the negative case isn’t going to be as strong. but that doesn’t mean that there isn’t a negative case, or that there isn’t a positive alternative vision. gains in human flourishing sometimes occur in a series of small steps.

    of course practically when you argue for a switch from the status quo you probably have to offer more than “somewhat better.” there’s an endowment effect: people prefer the devil they know, and are fearful of the fire they might face after leaving the frying pan. this isn’t irrational, as there are important gains to being able to plan for a situation with less future uncertainty. it probably is going to have to “get worse” before we imagine that it could “get better.”

    a lot of the + vs. – points here are acting as if the distribution of outcomes is the same. it isn’t, in that we know how peer-review has played out. this “killer app” scenario that pickrell outlines isn’t a known, and so facing the unknown we have a range of outcomes, from worse to better. as the peer-review system keeps getting bogged down by institutional sclerosis i suspect that more and more people will be open to the risk/reward proposition pickrell proposes, as the status quo outcome becomes so bad that it seems unlikely that the switch will lead to anything worse.

  • One of the points on the other sites is that social sciences & physics already live comfortably with a “peerless” system, so why not biology? One possible reason is the numbers – would it be correct that there are far more research studies in the biological sciences?

    this is true in relation to the physical sciences. but social sciences? i don’t think so. though i don’t have numbers off the top of my head.

  • If the social sciences and physics already have good “peerless” systems, how do they manage promotion and tenure concerns? Considering that I (and most others) have to deal with P&T committees that count pubs in peer-reviewed journals as the primary promotion criterion, I can publish all I want online, but it doesn’t count for anything unless it is in a peer-reviewed journal. I even have to provide special documentation to prove that an electronic journal is sufficiently peer-reviewed and respected before it is counted, even PLoS, and even then it is not counted as much as a paper publication. Paper publications, OTOH, are accepted without comment. So I am very curious how the social science and physics P&T committees deal with this issue.

  • At Kinexus Bioinformatics Corporation, we shall soon be launching Kinetica Online to provide open access to our databases and original research articles. I agree with Joe Pickrell that the current journal system is fast becoming obsolete on many fronts, including mounting costs (author page charges, user subscription fees, need for advertising revenues), publication speed (it can take up to a year or more for publication with typical manuscript review, resubmission and production times), labour (identification of suitable editors and expert reviewers as well as back-and-forth correspondence), environmental problems (e.g. use of paper and transportation of printed matter), and the fact that few scientists actually search online for articles based on the reputations of scientific journals. In the end, it is the number of times that a particular scientific paper is cited that counts, not the impact factor of the journal it appears in.

    With Kinetica Online, our goal is to make our databases and research articles freely and easily accessible. We plan to allow the readers to add commentaries of their own that flag any deficiencies or provide supporting findings. Such peer-review by the scientific community at large will be far more rigorous than reviews by two or three anonymous individuals that are solicited by journal editors. We hope to launch Kinetica Online by September of this year so it will be interesting to see how the scientific community will respond to this initiative.

    The recently announced plan for three major charitable research foundations to produce their own journal is a step in the same general direction. Perhaps various major universities in the world might launch their own websites for publication of the research findings of their faculty and let the broad global community provide peer review post-publication. The suitability of these faculty for promotion and tenure could be evaluated based on the number of times their publications have been downloaded, the feedback commentaries on their publications, and the number of times their work has been cited in other publications. Some universities already have their own publishing venues for books. Maybe now is the right time to extend this to scientific articles.

  • This is a great discussion, but it seems a bit hampered by the implicit assumption that things must be either/or: either we keep the existing system or turn it completely on its head.

    That’s not what Joe’s original post was about, and it’s not realistic. Nobody is proposing an abrupt revolt against peer review via a few gatekeepers; instead, they (and I’ll add myself to make it “we”) are arguing that the current system is wasteful, unjust, and counterproductive to the advancement of science, and that alternative models should therefore be pursued. The point is that people should not be complacent about the current system, and that those who have some power in this system (i.e., those with reputations and resources) bear the responsibility to name its faults and actively pursue other frameworks.

    That’s how I read the post, anyway. The critiques of the status quo that have been voiced here are only revolutionary in the sense that they don’t treat existing structures and traditions as sacrosanct. We as a community have a responsibility to make things better; this will require both a “killer app” (which, if it came to fruition, would naturally take some of the market from the old system) and researchers who aren’t afraid to take a small leap of faith away from the comfort of the establishment.

  • If the social sciences and physics already have good “peerless” systems, how do they manage promotion and tenure concerns?

    A good question, worthy of a study perhaps.

    I don’t know much about social sciences (except that its peer review was the subject of a famous hoax: http://en.wikipedia.org/wiki/Sokal_affair). In philosophy, peer-reviewed publication is definitely important for careers.

    Physics is a different case for several reasons – it’s a smaller circle, and certainly as far as the theoretical branch is concerned (as with maths), many articles containing novel “proofs” are kept strictly secret from everyone (let alone peers) before being published or presented at a conference.

    The patent system, I suppose, is an example of post-publication review – horribly unwieldy and expensive, but maybe some aspects could be useful.

    As for the benefits of PR, I get some of my best ideas from papers I review :)

  • Nathaniel Comfort had a point yesterday which is worth expanding. The current system involves a whole army of qualified and engaged reviewers, who spend what must amount to millions of work hours commenting on each other’s work. So far, social media commenting has not reached that level of serious engagement.

    So what’s needed isn’t a quick-fix killer app, but a new reward system for mutual involvement with scientific and scholarly manuscripts.

  • I’ll let Heng speak to that, but I can say I’ve seen pieces of that paper in talks for about two years now, and the main idea and results haven’t dramatically changed, as far as I can tell.

    You are right that the method and the major conclusion have not changed in the past two years. This is actually not the fault of the journals or the reviewers. They have offered great help, which I really appreciate. If someone has to take the blame, it is me. I am aware of the problems with the peer-review system, but this paper is probably not a good example against the system.

  • I would additionally address the probable ways in which academia will adapt to the future of publishing and how much it costs them.

    Firstly, consider the difference between the sort of online publishing described in this article – with its lack of peer review – and the sort of peer review that will become popular in academia even if the current publishing system collapses as you describe. If we want to affect the system in terms of “peerlessness,” then we would have to tailor the online publishing system in a particular way.

    Secondly, the application to facilitate such a system needs to be considered separately from the regulating forces of governments, academic institutions, and journals – unless it is designed explicitly alongside one of them. (In that case, an institution might be the first to officially sign on and support a “killer app” – or rather, “open platform” – for publishing, and set a trend.) So instead of imagining a new system that may evolve to take the place of the currently collapsing one, work could be done to design a replacement – since, while online and more open publishing systems will continue to emerge and flourish, it makes sense to ensure the source of most publications (academia) uses them.

    Finally, in practical terms, we need the manpower (moderators on this database-driven “killer app”) to start moving the literature onto the new platform – whatever it may be. This topic has been discussed elsewhere on the web in considerably more detail, without the obliquity of the “killer app” paraphrasing. If you’re one of the many who are fed up with powerful special interest groups skewing the scientific literature (“somehow we’ve been convinced that this parasite is doing us a favor”), then do you really believe a mere software application would suddenly divert the current practices of funding and then publishing? Funding and publishing are tied together – without the good press for bad science, how can bad science persist? Therefore, the plan we implement must include ties to the various types of institutions (the people publishing), the rules on publishing (is this new venue for publications easy to access, while the others sit in a separate, less accessible place?), and finally, perhaps, the software – or method – we use to interact with this body of information (helpful, but I don’t see it being the prerequisite). Those who write code for startups know that you need to know your clientele before writing the first line, lest you waste time on a product that will go unused.

  • I haven’t actually read the entire comment thread, but just in case no-one has mentioned this:

    It is not just about finding a way of rating papers to determine their quality. In my opinion, an important part of any replacement for journal-based peer review is adding career value to in-depth comments about a paper, and having such in-depth reviews be upvoted and promoted on the same page as the original paper.

    You need someone to point out deep methodological flaws in papers, and you need to encourage people to go through the paper with a fine-toothed comb (which, in theory, is what reviewers do in the current system). A system where detailed reviews are attached to named authors, and posted as comments on papers, would allow them to be cited, and rated, in the same way as a paper. This would recognise the value and work that goes into detailed reviews of papers, and encourage people to do them (and help avoid problems of superficiality in comments).

  • Noah Fahlgren

    Luke, I think you are spot-on.

  • I would see this as something along the lines of Wikipedia EXCEPT that only an author can change the content.

    ‘Critiques’ can be added as a set of URLs at the end, in the same way ‘references’ are in Wikipedia. The original author would be able to add a section in reply to any critique [or comment], either as an addendum on the original paper or on the same ‘page’ as the critique.

    ‘Comments’ would similarly be on a separate entry – you don’t want thousands of comments downloading to your browser when you look at the paper. Commenters’ access would be revoked if they use abusive language – strictly scientific. The author could request such censure based on a user’s comment.

    Voting should be ‘ticking the boxes which apply’, such as novel, exciting, useful, humdrum, of little interest, etc. [possibly with a star system – 0 to 5 stars]. It would be useful if the voting could be weighted so that someone in the same discipline has a bigger weight than any other scientist, who in turn would have a bigger weight than a lay person, etc. This would rely on some honesty when users register! Possibly the author could create a list of people he respects which would confer additional weighting, although this could lead to the equivalent of ‘ballot stuffing’!

    Papers would be highlighted as ‘new today’, ‘new this week’, etc.

    Possibly a ‘discussion with the author’ thread could be added for those who want an interactive dialogue with the author, possibly seeking clarification, or pointing out errors, suggesting ways of developing an idea, etc.

    I am sure Tim Berners-Lee could write it – it is just an extension of what he set out to do when he created the WWW!
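The weighted, box-ticking voting scheme sketched in the comment above is easy to prototype. Here is a minimal sketch; the reader categories, weights, and star values are invented for illustration and are not part of any existing system:

```python
from dataclasses import dataclass

# Hypothetical reader categories and weights, invented for illustration:
# a vote counts more the closer the voter's expertise is to the paper's field.
WEIGHTS = {"same_discipline": 3.0, "other_scientist": 2.0, "lay_reader": 1.0}

@dataclass
class Vote:
    stars: int        # 0-5 star rating
    reader_type: str  # key into WEIGHTS

def weighted_score(votes):
    """Weighted mean of star ratings under the scheme sketched above."""
    total_weight = sum(WEIGHTS[v.reader_type] for v in votes)
    if total_weight == 0:
        return 0.0
    return sum(v.stars * WEIGHTS[v.reader_type] for v in votes) / total_weight

votes = [Vote(5, "same_discipline"), Vote(3, "other_scientist"), Vote(1, "lay_reader")]
print(round(weighted_score(votes), 2))  # (5*3 + 3*2 + 1*1) / 6 ≈ 3.67
```

Guarding against the ‘ballot stuffing’ the commenter worries about would then be a matter of how reader types are verified at registration, not of the scoring itself.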

  • Dear Joe,
    Interesting discussion, but the 800 lb gorilla in this virtual room is generally ignored. It is the grant system itself that needs reform:

    Gordon, R. & B.J. Poulin (2009). Cost of the NSERC science grant peer review system exceeds the cost of giving every qualified researcher a baseline grant. Accountability in Research: Policies and Quality Assurance 16(1), 1-28.

    Granting is also based on peer review, and the pernicious effects occur in both publishing and granting. They are less serious in publishing precisely because there are so many journals, off and online. The general issue is the democratization of science, which has to include a Scientists’ Bill of Rights to avoid de Tocqueville’s tyranny of the majority.

    I would suggest that the current publishing system is rapidly collapsing under its own weight. Consider, contrary to comments above, that many journals now charge authors as much as $3000 to make each article freely available online, let alone the exorbitant fees for single articles not so covered (for those whose institutions don’t pay blanket subscriptions). The result is that our generally publicly funded work is not available at an affordable price to everyone.

    So let’s start with:

    Scientists’ Bill of Rights
    All scientists have the right to minimal baseline funding
    All scientists have the right to publish
    All scientists have the right to critique others with published comments
    All scientists have the right to know who is critiquing them
    All scientists have the right to teach

    This is a rough, incomplete start, I leave it for others to define “scientist” and amend these rights. Thanks.
    Yours, -Dick Gordon

  • adding career value to in-depth comments about a paper … A system where detailed reviews are attached to named authors, and posted as comments on papers, would allow them to be cited, and rated, in the same way as a paper.

    That’s a nice way to bias heavily toward positive reviews. There is no way most people are going to say what they really think of someone’s work when that someone will be writing a tenure review letter tomorrow and sitting on your grant study section a month from now. And this is part of the reason why no one comments on papers at PLoS.

  • but still that maintains some level of quality assessment

    Why are so many people wanting published papers to be quality-assessed for them??? I don’t need or want it! In my field, unless authors are lying through their teeth, I can read and tell what’s quality and what’s not. (And if they are lying, I’ll know soon enough.) And as long as I am interested in the subject, the fact that it is in BBRC or PLoS ONE will not prevent me from reading it any less than if it were in Nature or its wannabe clone PLoS Biology. For things that are outside my expertise/interests, I am perfectly happy to form my opinion just the way it works for all things non-science: based on a semi-random combination of formal reviews, blog posts, grapevine and my own biases. This is normal and is the way everything worked for centuries! And somehow the sky is still not falling, we are all still alive, and progress is being made here and there.

    Just get rid of peer review at once! Peer review is like communism – maybe a good idea on paper but the damn real world just doesn’t want to conform.

  • Perhaps various major universities in the world might launch their own websites for publication of the research findings from their faculty and let the broad global community provide peer-review post-publication. The suitability of these faculty for promotion and tenure could be evaluated based on the number of times their publications have been downloaded, the feedback commentaries on their publications, and the number of times their work has been cited in other publications.

    This is a nice idea, especially because of the alternative metrics for usage in promotion/tenure.

  • @DK

    A quality assessment is for practical reasons – with open publishing there will be much, much more to read but time will be the same. It’s hard enough to keep up right now even with the strict PR filter.

  • Dick,

    Good point about peer review on grants. At least in this venue there are some other models (individual-based funding a la HHMI rather than project-based funding, for example).

  • Bryan is right; it should not be controversial for academics to think about and set up better ways to communicate. For me, ideally something like what I describe would completely take over, but even in my best case scenario, this would be a relatively slow process that involves testing out different approaches and sticking with what works best.

  • A quality assessment is for practical reasons – with open publishing there will be much, much more to read but time will be the same. It’s hard enough to keep up right now even with the strict PR filter.

    This is somewhat misleading, in that we have a filter in place now–I read at least the title of everything in Nature Genetics, but only things in PLoS One that I stumble across. Is this journal-based filter ok? Sure, it works fine for me. Could it be better? Sure. Is this system worth the cost (in time and money) of maintaining it? In the absence of an alternative, yes. But we can and should think of alternatives.

  • Hi Joe

    Not sure what is misleading. OK with PLoS One you read what you stumble across – PLoS One has minimal requirements for publishing and already the list is long. If ALL publishing were like this you would be lucky to stumble across anything – including stuff that would have made it to Nature Genetics. I agree with various above: it needs improving but it’s not necessarily either/or.

    If everything changed tomorrow to a fully open system, without any effective quality indications, filters, whatever, I would expect the “market forces” to create “elite” sites containing what the site editors consider to be high quality studies. This could even happen as a post publication review, or it might invite submissions direct. A “reviewed” collection of published studies would be popular – what type of review, pre- or post-pub is not so important, but it will have a cost, I don’t see that as avoidable.

    PLoS One is great but it could become too successful… “nobody submits to PLoS One anymore, they publish too much!” (http://en.wikipedia.org/wiki/Yogi_Berra)

  • I agree that peer review has a lot of problems, but I think you’ve way overstated the case for reform (e.g. calling journals “parasites”, Daniel’s statement (in bold, no less!) that peer review is “a system that actively inhibits the rapid, free exchange of scientific information”).

    Good peer reviewers (you could argue what fraction are good, but I’d guess it’s >75%) read papers much more closely than the average reader skimming an RSS feed, and that in-depth study and the resulting comments are invaluable to researchers. While there are some high profile examples of careful crowdsourced dissection of papers after publication which should have been caught by peer review (e.g. longevity GWAS, arsenic life), there are a ton of papers published every week and almost none get so carefully studied by the community. They benefit hugely from getting the thoughtful input of a handful of unpaid colleagues.

    At several times above the point is made that nobody disputes that peer review is “sometimes useful”. So I guess the urgency of reform comes down to quantifying that sometimes. My personal experience has been that every paper I’ve written has been improved by the peer review process (if not by every single reviewer) and that every review I’ve written has improved the resulting paper.

    PS: Joe, do you have a reference for the statement that “the correlation in perceived “publishability” of a paper between two groups of reviewers is little better than zero”? It’s pretty bold, and I’d be interested to see the details.

    PPS: It’s also amusing that Heng & Richard’s paper was highlighted by Joe to make his point about how much peer review sucks – and then to read Heng’s own take on the process.

  • Anyone who has ever served as a reviewer for a 2nd or 3rd tier journal immediately understands why publishing without peer review is a BAD IDEA. Really low-quality work, frequently with fatal flaws, would flood the medium and only serve to undermine science. If we decide to take such a system seriously, then so can our critics. Peer review, with all of its flaws, is what sets science apart from every other discipline. It must exist.

    Regarding the current system being “biased towards established groups … and sometimes abused”: these things would only be exacerbated by publishing without peer review. Established investigators would invariably receive the best rankings, etc., regardless of quality. Probably the best empirical evidence for this point is the “member” journals, where those elected to some academy get to conduct their own peer review. The articles are among the most flawed and error-filled, yet highly cited, papers that you’ll find in any top-tier journal.

    In addition to this point, when work is of high quality, competing investigators could downgrade work for no good reason (and without peer review).

    The current system isn’t all that broken. Granted, sometimes your feelings will get hurt. Sometimes reviewers will make stupid comments or editors won’t recognize your obvious brilliance. The reality is that not everything you’ve invested lots of time and effort into will be deemed suitable for a top tier journal. But you’ve got to take your lumps (which will often times be deserved), pull up your big boy pants and move on.

  • Do you know about CiteULike? because:

    1) you can publish stuff there without peer review (you just need to provide a URL allowing people to download your paper – a URL from arXiv, for instance, or from your personal website);

    2) one-click recommendation à la YouTube is not implemented, but I’m going to ask them right away;

    3) it is a social network: you can see with whom you share most of the papers in your library, you can create a “connection” with those people, you can share papers via Twitter and Facebook, you can also search for papers having comments written by someone else, you can build “watchlists” to follow what papers your connections are posting/reading, …;

    4) about effectively searching based on the collective opinion, I don’t know how their search engine works and their source code is not free if I remember well, but they should be able to answer if they use “collective opinion” to sort the results of a given search.

    Moreover, it is a free service but they seem to have a business model that works. Eg. they just added a “Gold” section where you pay to have more options. Thus it should be sustainable.

  • Nature Precedings has been mentioned here a few times – as far as I can see it does everything that is being asked for by those who wish to scrap peer review… except for one important point: it’s not counted as a publication.

    However, to say (in bold) “peer-reviewed journals actively prevent the best scientific results from being disseminated” or “a system that actively inhibits the rapid, free exchange of scientific information” can’t be correct. Dissemination, free exchange, etc. are encouraged by Nature Precedings – and there is a brief (one-day) review to check for minimal quality and genuine science.

    http://precedings.nature.com/site/help
    What happens after a document is submitted?
    New submissions are reviewed by our curation team to ensure the quality and appropriateness of submitted documents. However, they are not subjected to peer review. Assuming that they satisfy our criteria (see below), submissions are posted immediately. The delay between submission and posting is usually no more than one working day, often much less.

    and: “We will only accept genuine contributions from qualified scientists”

    There is voting and commenting. Anyone who fits the criteria can go and publish their latest data today and have a citable, permanent link by tomorrow. From a brief look at some of the sections, though, it’s a pretty quiet place. Does anybody here have any direct experience with it?

    Then there is PLoS One – similar criteria, but it IS counted as a publication.

    So the faults regarding peer review are not to do with inhibiting the free exchange of ideas or actual results, but more to do with the dependence by institutions other than the journals on peer-reviewed publishing for careers and grants.

    Meanwhile, those who wish to encourage change and progress (well, who doesn’t!?) should actively promote Nature Precedings: use it, blog about papers on it, comment and vote on articles. Who knows – it could become an important resource, even for careers (having a well-regarded, commented and voted pre-publication article can do no harm on a CV, especially for younger scientists, or those from less fortunate countries).

  • It’s also amusing that Heng & Richard’s paper was highlighted by Joe to make his point about how much peer review sucks, and then read Heng’s own take on the process.

    If I saw a paper delayed by two years with the methodology and results largely unchanged, I would also assume this is the fault of the peer-review system. Probably my case is just not typical (who knows?). It is not a good example against the peer-review system, but it is not a good example against Joe, either. Sorry for the confusion. My fault.

  • Jeff,

    Joe, do you have a reference for the statement that “the correlation in perceived “publishability” of a paper between two groups of reviewers is little better than zero”?

    See here and here and citations in the introductions to those papers.

  • Daniel MacArthur

    There has been some conflation in this thread of two quite different concepts:

    1. the existence of closed-access journals (which are, IMO, a fundamental absurdity given the goal of science is to disseminate knowledge);
    2. the need for peer review (which I think we all agree is fundamentally a good idea, with the argument here being how flawed the current implementation is and how best to improve it).

    To emphasise that second point again: as I understand him, Joe isn’t actually arguing that peer review (in the broad sense of a detailed dissection of scientific ideas by one’s peers) should be abolished, but rather that the existing implementation is badly flawed and needs to be changed.

    I should have been much clearer in my (bolded) statement above. Here it is again:

    …it is a source of constant wonder to me that so many scientists have come to regard a system that actively inhibits the rapid, free exchange of scientific information as an indispensable component of the scientific process

    Peer review, in and of itself, needn’t inhibit the exchange of accurate scientific information. But there’s no question in my mind that its current implementation, a slow, opaque process with the results disseminated in largely closed-access journals, is a very bad thing for science as a whole.

  • Anyone who as ever served as a reviewer for a 2nd or 3rd tier journal immediately understands why publishing without peer review is a BAD IDEA….Peer review, with all of its flaws, is what sets science apart from every other discipline. It must exist.

    Believe it or not, I’ve reviewed papers at 2nd and 3rd tier journals…

    Systematic pre-publication peer review of scientific papers, as far as I understand, was implemented around the 1950s. It is simply not what sets science apart from every other discipline.

  • Noah Fahlgren

    I would say most people are arguing that post-publication, ad hoc peer review is possibly equivalent to no peer-review at all.

  • As Daniel says, “peer review” in some abstract sense (i.e. the reading and analysis of papers by your peers) is of course a good thing!

    I’m surprised by the number of people who write that they’re worried that no one would ever carefully read their papers if not formal peer reviewers at a journal. Does everyone write really dull papers??

  • re: CiteULike, I’m a member, but have rarely found it useful. Not totally sure why that is; I think it could be a lack of critical mass.

    I do like the idea of Nature Precedings. It’s missing critical mass, but it could eventually be extremely useful.

  • What if all those papers in 2nd and 3rd tier journals with fatal flaws were published and nobody read them (which, to be honest, is probably the case even after those fatal flaws are fixed)? Would that be so awful?

  • Joe,

    I’m not sure that your argument that peer review was “implemented around the 1950s” disproves my point. I could have qualified it with “in the 21st century”, but really?? In what field outside of academic publishing is peer review as central? None. It helps to prevent the laziness, stupidity and misconduct that weaken other fields.

    Bri

  • Bri,

    Sorry, should have been more explicit: science has existed for a long time, but systematic pre-publication peer review is a recent invention – a response to an increase in scientific output and a limited amount of journal space (as in actual, physical space). It is not an integral part of science itself. And since journal space is no longer limited (thanks to the internet), it’s worthwhile to reconsider how we publish.

    It helps to prevent the laziness, stupidity and misconduct that weakens other fields.

    I’m not sure there’s any concrete evidence of this.

  • “What if all those papers in 2nd and 3rd tier journals with fatal flaws were published and nobody read them (like, to be honest, is probably the case even after those fatal flaws are fixed). Would that be so awful?”

    Yes!! Believe it or not, there are charlatans out there who are more interested in selling an idea than seeking the truth. And many of them write entertaining papers that are widely cited. Making their publishing lives easier is a move in the wrong direction. And I’m less worried about articles being read than about them even being on the radar. Simple PubMed searches often find citations to prove premises-by-example for subsequent papers and grants. Would we then need user-rating thresholds for citing a post hoc “peer reviewed” paper in another work or grant? It is just too messy.

    And you are missing another point. Can anyone publish? The folks from “Ark Encounter” or the Blue Panthers?

  • Ha! You’re right, Daniel – 95, 100 comments and counting (probably more going up before I finish this) later, it becomes like a game of Chinese whispers!

    Re-reading the arguments – in synthesis PR is slow and expensive but a “free for all” would drown us all in data, hence the request for the “killer app” to fulfil some needs:

    So let’s take this goal–that of filtering papers based on quality, interest to a community, and reproducibility–as the legitimate service provided by peer-reviewed journals.

    I don’t see how PR inhibits the exchange of information, though, when we have things like Nature Precedings, and of course conferences. Unless you mean it slows down the process of high-profile exchange of information – in the sense of getting a paper widely read, indexed by PubMed, and citable; in that case, yes, PR slows things down.

    Closed-access journals are definitely a different argument – I actually think it is a more urgent one as far as the exchange of data is concerned, and yes, I agree it’s absurd nowadays.

    Post-publication, ad-hoc peer review could work – it’s not effective yet on Nature Precedings, but PLoS One says it will “publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership”. The commenting areas may be sparse, but the impact factor of 4.411 suggests that it is working. This must be the closest we could expect to no hold-up of publication by peer review – there has to be some quality control before publication. So is PLoS One the answer to the problem?

    Almost. The current implementations of these ideas simply don’t perform the filtering mechanism that they’re designed to replace–if I see that a PLoS One paper is highly rated, this doesn’t help me at all; I’ve already found the paper!

    Back to the “killer app” – something that will “deliver the good PLoS One papers to me, rather than waiting for me to find them”. At the moment it doesn’t exist for PLoS One, but it does exist for Nature, Cell, Science, etc. It’s not perfect and needs work, but it’s peer review.

    Some questions: if we moved to post-publication peer review, or any other type of system that reduced publication time, would we still be able to have Nature, Cell and Science? If so, how? And finally – would it matter?

  • Daniel MacArthur

    Bri,

    If someone is serious enough about wanting to get something into the “peer-reviewed literature”, no matter how awful it is, they will eventually find a journal that will publish it and reviewers lazy enough to ignore its flaws. The literature is a very big place, and the current peer review system doesn’t provide an absolute barrier to publishing complete dross; it’s a leaky membrane at best.

    The question about credentials is a good one, though: should people need a “real” institutional affiliation to publish? I can see arguments both ways here.

  • Note, the italic sentences in my previous posts represent quotes from Joe’s post, not my words (i’m not Johan Hari, or even David r…)

  • Can anyone publish? The folks from “Ark Encounter” or the Blue Panthers?

    I’m proposing a community-based system. One can quite easily imagine mechanisms to manage who is part of a community (academic credentials? recommendation from someone with academic credentials? other ideas?), so I don’t consider this objection a fatal flaw. In any case, I don’t really see this as being a problem– you don’t see arXiv overrun with people disproving the 2nd law of thermodynamics or anything.

  • Daniel MacArthur

    would we still be able to have Nature, Cell and Science?

    One could imagine these organisations moving towards filtering the published literature stream and “promoting” high-quality work, as well as providing news and opinion pieces, and sponsoring regular reviews of fast-moving fields. Not so different from their existing roles, really, with the exception that they were no longer the gatekeeper blocking research from entering the literature stream in the first place. They’d need to work harder to justify their subscription fees, though…

  • Keith,

    Yes, that’s a nice summary! I think something like PLoS One is very close to the answer–all we need is 1) a quality filter, which I think could be software rather than formal peer review, and 2) I guess a change in the culture, such that high-quality papers in PLoS One can be identified and rewarded.

  • Joe said: “I’m a member [of Citeulike], but have rarely found it useful. Not totally sure why that is; I think it could be a lack of critical mass”.

    I’ve asked the CiteULike team for access to their data, because I would rather think it’s pretty big by now… I’ll keep you posted.

    About Nature Precedings: indeed it seems interesting. But as you said, PLoS One is quite close to what you advocate. So, will you ask them to implement a voting scheme, for instance one close to that implemented in the StackOverflow engine (http://stackoverflow.com/)?

  • Tim,

    Yes, that’s an excellent point – a lot of these things could be worked into PLoS; I’ll try to contact them. Also: I was checking out CiteULike again (looks like the last time I used it was 2007!) and it does seem to have a lot of these features. The interface is a bit clunky though, no?

  • Well, up to a certain point interfaces are a matter of taste, but I think CiteULike has been doing a good job so far.

    I also realized that there is a way of publicly reviewing a given paper by rating it on a 5-star scale and optionally adding a review (different from a note, notes being usually less polished). Thus, if you can build a search query for papers highly rated by people who have similar papers to yours in their libraries, it seems you have just found the tool you were asking for. I will see if it is possible to easily build such a query (and if you start using CiteULike again, let me know your username ;).

    Moreover, CiteULike is well connected with publishers – e.g. it is sponsored by Springer, and PLoS One added CiteULike to its article metrics long ago.
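The kind of query described above – surface papers rated highly by people whose libraries resemble yours – can be sketched in a few lines. This is a toy illustration only: CiteULike’s real engine is closed source and may work quite differently, and every username, paper ID, and threshold below is made up:

```python
# Toy sketch: recommend papers rated highly by users with overlapping libraries.

def jaccard(a, b):
    """Overlap between two libraries (sets of paper IDs)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(my_library, others, ratings, min_similarity=0.2, min_stars=4):
    """Papers rated >= min_stars by users whose library similarity to mine
    is >= min_similarity, excluding papers already in my library."""
    recs = set()
    for user, library in others.items():
        if jaccard(my_library, library) >= min_similarity:
            recs |= {p for p in library - my_library
                     if ratings.get((user, p), 0) >= min_stars}
    return recs

me = {"paper1", "paper2", "paper3"}
others = {"alice": {"paper1", "paper2", "paper4"}, "bob": {"paper9"}}
ratings = {("alice", "paper4"): 5, ("bob", "paper9"): 5}
print(recommend(me, others, ratings))  # alice's library is similar, bob's is not
```

A real system would need fuzzier similarity measures and rating normalization, but the principle – filter by library overlap, then by rating – is the same.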

  • @Keith Grimaldi

    A quality assessment is for practical reasons – with open publishing there will be much, much more to read but time will be the same. It’s hard enough to keep up right now even with the strict PR filter.

    No, it’s not. There are already way, way too many books of all kinds published – yet somehow we all survive, and no one calls for a formal, across-the-board system of book quality assessment because it’s so hard to keep up with the books. Again, in my narrow field it takes no more than 15 min daily to keep up. And so it is in yours.

  • @DK

    15 mins per day would be nice!

    Books are not reviewed???

  • @Keith Grimaldi

    Just finding out what you need to read in order to keep up with your professional peers takes more than 15 min a day for you? (Reading is another matter, and we already read as much as we’re ever likely to read; that’s not going to change.)

    Books are not reviewed???

    AFAIK, no. I am not aware of any system for books resembling that for papers in journals. Post-publication reviews – of course. But again, those are spontaneous – the way it ought to be for papers as well.

  • Joe’s argument can be summarized simply: open source peer review over academic peer review.

    Open source codebases on github are FAR more robustly peer reviewed than the vast majority of academic papers.

    Did anyone do “academic peer review” of Linux? Of Rails? Of Django, JQuery, Protobuf, or any of the things on this list:

    https://github.com/popular/watched

    Answer: no. Yet many of those projects are not only used in academic papers, they are the foundation for huge websites that all of us use on a daily basis.

    Reason #1: for one thing, open source code usually actually works, whereas most papers are written to obscure exactly what the academic is doing so that they can retain that edge over the competition. Just try getting a raw dataset or key reagent out of some of these people.

    Reason #2: no one can stop you from putting something on github. We recognize at the beginning that there will always be haters for every project. Fine, this work isn’t for them, ignore. Can you imagine if a Python developer had to please some randomly chosen Java developer before showing his work to a mass audience?

    Yet that’s what academic peer review is. It’s 2-3 guys who often want to torpedo your stuff. It’s simply less robust, statistically, than 1000 expert eyes cloning and forking your git repository.

    Funding models need to catch up, but the technological counterexample is already there: open source peer review, not academic peer review.

  • @DK

    On aggregate, no, 15 min is not enough. It’s not just keeping up with the new stuff, it’s trawling through the new stuff; it is also, for example, searching the last 10 years of studies published on a particular topic, say a set of genes involved in inflammation. It’s hard now:

    1. PubMed -> 3,000+ titles
    2. Select 300+ abstracts & read, quickly (ok, scan)
    3. Select 50+ papers and read (really read) quickly
    4. Select 10-20 for detailed study

    Multiply 1 & 2 by 10 and it becomes… harder
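    The triage steps above amount to a funnel of successively stricter filters; a minimal sketch (with pass rates back-calculated from the commenter’s own illustrative counts, which are assumptions here) shows how the workload scales when the incoming volume grows tenfold:

    ```python
    # Literature triage as a funnel of successively stricter filters.
    # Pass rates are back-calculated from the illustrative counts above
    # (3,000 titles -> ~300 abstracts -> ~50 papers -> ~15 detailed reads).
    def triage(titles, pass_rates=(0.1, 0.17, 0.3)):
        """Return the number of items surviving each filtering stage."""
        stages = [titles]
        for rate in pass_rates:
            stages.append(round(stages[-1] * rate))
        return stages

    print(triage(3000))    # [3000, 300, 51, 15]
    print(triage(30000))   # ten times the volume at every scanning stage
    ```

    The point of the sketch: the expensive early scanning stages scale linearly with the incoming volume, so removing the pre-publication filter multiplies exactly the stages that already dominate a reader’s time.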

    Excluding self-publication, books are very severely vetted/reviewed before being accepted, much more so than journal articles, and the publisher’s investment per book is much higher. Have you not heard the stories of budding authors spending years before, if ever, finally finding a publisher?

  • Daniel Falush

    I think two things would need to be in place to make an end to journals start to be practical:

    (1) Somewhere inexperienced authors, and people still learning to write papers, could send their manuscripts to get them improved. Peer review is most useful for people who do not have access to strong internal review, which means most people outside major institutions (and some within them). Even for more experienced people, it would still be a good idea to have a system that helps authors find their own reviewers.

    (2) A formal system for recognizing high-quality and exceptionally high-quality papers after publication, based on a combination of community metrics and expert-panel opinion.

    There would also need to be other outlets to provide the things journals currently provide, like publicity for newsworthy papers. Above all, ending journals requires mechanisms that reduce, rather than increase, institutional inequality.

  • I had a look at the two links provided above on peer review being no better than chance (See here and here and citations in the introductions to those papers.)

    One is from 2000 and the other from 2010, and they give more or less the same result (which I found a bit alarming; the result, I mean).

    The 2000 paper has been cited 104 times according to Google; life would be nicer if there were a “killer app” that could filter those 104 papers into replication studies, note whether or not the results were the same, and also find any similar studies that did not cite the original. Program that!

  • Hi,
    Nice post!

    Well, maybe peer review is necessary to avoid adding more junk information to the web. I really think a minimal revision is necessary to guarantee the quality of publications. On the other hand, it is true that some reviewers take far too much time doing it, and sometimes they are not fair at all! Moreover, editors should sometimes pay more attention and select more appropriate reviewers, avoiding graduate students, for example. And what about anonymous manuscripts? Or signed reviews? Putting both at the same level… it could improve the system.

    Finally, if people upload research to the net without any control, and the rest of the scientific community decides whether to trust it… there will just be a natural regulation.

    We just need to start this revolution by citing “unpublished” data from the internet…

  • Daniel [Falush],

    I think your point 2 is reasonable (and probably would happen naturally if a community review system existed).

    Regarding point 1, I don’t think a scientific publication system should be designed with the goal of helping people improve their work; it should be designed for the rapid, efficient dissemination and evaluation of that work. Surely there are better ways to help inexperienced authors get advice than the current system?

  • Keith,

    I particularly like this from the PLoS One paper I linked to:

    The editors’ overall rejection rate was 48%: 88% when all reviewers for a manuscript agreed on rejection (7% of manuscripts) and 20% when all reviewers agreed that the manuscript should not be rejected (48% of manuscripts) [my emphasis]

    So maybe we already have a non-peer-reviewed system!

  • The comment by asdf is spot on.

  • Joe – it looks more and more like a lottery!

    Most here seem to agree that some sort of review is useful, with post-publication review being favoured to speed up the publication process. Despite the studies cited, the evidence is that peer review does work to some extent: papers in Nature tend to be high quality, while papers in low-impact-factor journals tend to be a mixed bag.

    Post pub peer review could work if there was a high level of cooperation. Submitters could send personal requests to 2 or 3 “peers” to review their published paper – openly, this could be an accepted “duty” of researchers, it’s open to abuse but let’s assume that most of us have some integrity. I think some sort of coercion is required, without that it would become too random and patchy (like Nature Precedings) – the process needs some sort of formality.

    Quoted from Bri: “Anyone who has ever served as a reviewer for a 2nd or 3rd tier journal immediately understands why publishing without peer review is a BAD IDEA. Really low quality, frequently with fatal flaws, would flood the medium and only serve to undermine science.”

    I agree with this completely. I have personally rejected many papers that were just outright bad, and I also agree that there are many charlatans and/or bad scientists out there just trying to get crap or the “least publishable unit” published to help them get grants, tenure (despite its diminishing value), brief recognition/fame, or to con (persuade) investors into giving them money. This is particularly true in the biomedical sciences, where there is much money to be made in drug development and/or selling some quack idea. In regard to asdf’s comment: open-source code is free by definition (hence no major money involved), whereas getting the FDA to approve a drug that could generate billions in sales revenue currently depends on convincing a panel of medical doctors/scientists to approve your drug, based mainly on studies that must be published in the peer-reviewed literature. Once the drug is approved for one illness, doctors are then free to prescribe it off-label, and many companies have been caught illegally marketing their drugs to doctors for off-label prescription as a way to increase sales (with millions of dollars at stake). I know, unfortunately from personal experience as a physician, that for most such people it is NOT about the science, but only about the money.

    I see “red” when I read published papers that are bad or even misleading/fraudulent and have somehow made it past peer review (some even get published in top-tier journals), and it is just crazy imagining what crap would get published without some sort of peer review (or an equivalent system) to prevent it. Why muck up the scientific and clinical literature further with crap, which will then mislead the public? Any change that involves completely throwing out peer review needs a well-thought-out plan to prevent the bottom-feeders and charlatans from publishing 1) total crap, 2) misleading studies to “favor” their drug, “treatment”, or “next big idea”, and/or 3) fraud.

  • @Gholson Lyon

    I was of the same opinion: keeping exploitative stuff out of the literature, at least to some extent, is one of peer review’s uses. However, if there were an open, efficient, and effective post-publication review, it could work even better. Imagine you publish a trashy study to support some product: today, you get it past peer review and into the journal, job done. In a post-publication system you would have to be careful; your paper’s flaws risk being exposed openly for all to see, and it is no longer a useful marketing tool. On the contrary…

    In the perfect future you will be able to “see red” and then express your feelings and expose the bad practices; that should at least make you feel better!

  • Keith has this right: the obvious solution to a literature full of incorrect, misinterpreted or fraudulent data is not to prevent such papers from existing (they’re simply not going away), but to mark them as such.

    The reason people get pissed when an obviously incorrect paper is published in Nature is that somehow the prestige of the journal is given to the paper, and there’s no way (or only a long, very complicated way) to reverse that. This is absurd.

  • Noah Fahlgren

    Ok, Keith and Joe make a good point here.

    Hopefully this is not what the future will look like though http://xkcd.com/386/

    :)

  • Keith writes:
    “Post pub peer review could work if there was a high level of cooperation. Submitters could send personal requests to 2 or 3 “peers” to review their published paper – openly, this could be an accepted “duty” of researchers, it’s open to abuse but let’s assume that most of us have some integrity. I think some sort of coercion is required, without that it would become too random and patchy (like Nature Precedings) – the process needs some sort of formality.”

    Michael Alcorn’s comments above about creating a Wikipedia-like system in which articles can be edited and revised sound like a great idea to me. This type of system would have several benefits, including:

    1. Centralization of research, allowing easy referencing of related works and making the entire research/literature-review process much easier.

    2. Allowing other researchers to publish extensive, citable reviews linked directly to the original article.

    To get back to Keith’s comment, what reviewers need is an incentive to put their time and energy into a critical review. What better incentive (besides money) than a citation? The anthropology journal Current Anthropology requires authors to obtain reviews prior to publication, which are then published along with the original article and a reply to the reviewers. Reviewers in this journal are frequently cited for their reviews. A Wikipedia-like publication system would allow the original publication, review, and reply cycle to be speedy and ongoing. Plus, it would give reviewers an incentive by making those reviews public and citable.

  • Great article. There is money to be made with this killer app $$$$$$

    Apologies if someone has brought up this point already.

    1. IMHO (from a grad student) the FUNDAMENTAL purpose of publication is to share information with everyone in the scientific community. Determining the reliability of that information has been widely discussed in this thread already.

    2. In sharing information you get kudos and progress your career.

    Now, the FUNDAMENTAL problem with humans is that they don’t want to share or contribute unless they are rewarded somehow. Internet forums and discussions are the best example of this, where lurking is a massive problem.

    http://en.wikipedia.org/wiki/Lurker
    http://en.wikipedia.org/wiki/1%25_rule_%28Internet_culture%29

    This is why reviewers take 3 months to read and review a paper: they are doing it for free! If reviewers got paid depending on how fast they responded…

    If science were remodeled so that it was based on your contribution to the community “in any form”, then people would be more likely to share and contribute. Those dinosaurs would also figure out how to use a computer and contribute online!… but one can only hope. At the moment, contribution is measured by traditional means, such as how many societies you have subscribed to! Contributing to your closed secret society doesn’t really get the information out there… blogging on the internet does!

    This would allow a free flow of information, including the most dreaded hypotheses, results that didn’t work, and non-replications of results from high-impact publications. It would save science a lot of time and money!

  • I wholeheartedly agree with this entire post. I would even go one step further: instead of publishing papers as a collection of 4+ figures, I hope in the future people publish individual figures/findings as they become confident in them. Many papers’ findings can be broken down into discrete chunks, and I think it’s a shame we have to wait for them to be bundled together to get all of them. Rather than waiting 2+ years for a result (in neuroscience at least), why not get smaller bits of information every six months? It might make citation harder, but information exchange would be much more rapid.

    I also like the idea of each lab having a findings feed where they can gather all the findings from their lab in one place, and publish it. Then you can just subscribe to that feed if you think it’s interesting, rather than having to search for papers from the lab.

    B Boulton, D. Gordon, asdf, and G Lyon all touched on key problems with the popular journal publication system. Peer review sounds great and appears to be implicitly required of anyone presenting their research. One recurring issue is money. The major journals delaying or preventing dissemination of valuable research, and the second- and third-tier journals releasing weak or fraudulent articles, both reflect valuing money over science. Some second-rate, third-rate, or online journals may suffer from those just trying to game the system for easy access to money. On the other hand, the major journals are becoming bloated and increasingly financially oriented. Originally, the primary goal of these journals was to publish as much as possible, but now there is an excess of research to publish. Money may help disseminate knowledge, but in the case of private journals, the focus on money produces inefficient and undesirable results.

    So the big journals focus on money. Many people employed by such large journals have the experience needed to review, rate, and control what is published there. However, these journals seem to be institutions whose primary goal is the maximization of profit (profit in the sense of more money). Thus all scientific work, and knowledge dissemination, is secondary to profit; this appears to be a common feature of many institutions, regardless of size or purpose.

    I agree that all papers must be reviewed, but I may be more extreme. First, researchers should publish their work in a free (as in accessible and without charge) fashion. Such a research archive could easily support degrees of “publishing”: categories for undergraduate, graduate, post-doc, industry, and independent research could be created. To prevent random spam and nonsense, an ID system such as OpenID would be critical (http://openid.net/). In fact, a basis for a submitter’s authenticity could be built into the ID itself, such as userID@some.uni.edu (swap uni with biz or another term). Those pursuing independent research could go to a “signing”-style party at a local authorizing institution to add physical authenticity credentials. Once an ID is made, individuals could begin submitting (more on that below) and receive further credentials from others who are already marked positively.

    Once an ID is created, one could begin joining specific groups by requesting permission to join (groups could then ask for references, etc.) or by answering requests from others. Publication would be possible immediately. Anyone with an account could comment; those without accounts could view research but not post comments. The process of review would be an online version of what happens in most university departments around the world. Any contributor could post detailed reviews or just short, narrow reviews of a paper (comments). The software for managing such a system already exists and is free to modify and reuse: http://sourceforge.net/projects/slashcode/. I recommend slashcode because of its built-in moderation and meta-moderation system.

    This system allows anyone with an ID to read and comment on a paper or post a full review. Slashcode is my suggestion because of its user-based moderation system for comments. That system was originally designed to minimize, or outright bury, abusive comments. For academic papers, abusive comments would be minimal (I assume), because they would be linked to one’s real identity; most moderation would instead weigh a comment’s contribution of meaningful analysis and criticism of the paper. This provides a rapid and efficient means for authors to get useful community feedback. The moderation system is based on karma, and each user’s karma is limited: as one’s karma builds, one can also moderate, which ‘costs’ karma. There is also a meta-moderation system to prevent abuses of the moderation system. All of this would require some customization, which is eased by slashcode’s GPL license.
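    As a rough illustration of the karma-limited moderation idea described here, the following is a toy sketch; the threshold, cost, and karma flows are invented for illustration and are not slashcode’s actual rules:

    ```python
    # Toy sketch of karma-limited comment moderation: moderating costs
    # karma, so no one can moderate indefinitely, and authors gain or
    # lose karma with their comments' scores, so good commentary
    # eventually earns the right to moderate. All numbers are invented.
    class Member:
        def __init__(self, name, karma=0):
            self.name = name
            self.karma = karma

        def can_moderate(self, threshold=5):
            # Only members with enough accumulated karma may moderate.
            return self.karma >= threshold


    class Comment:
        def __init__(self, author, text):
            self.author = author
            self.text = text
            self.score = 0


    def moderate(moderator, comment, delta, cost=1):
        """Spend karma to raise (+1) or lower (-1) a comment's score."""
        if moderator is comment.author or not moderator.can_moderate():
            return False
        moderator.karma -= cost          # moderating 'costs' karma
        comment.score += delta
        comment.author.karma += delta    # authors build karma from good comments
        return True
    ```

    For example, an established member with karma 10 can upmoderate a newcomer’s useful critique, which both raises the comment’s visibility and grants the newcomer karma; the newcomer, still below the threshold, cannot yet moderate anyone else.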

    Regarding the print journals’ purpose: they wouldn’t disappear. As scientists and various institutions transitioned to a practical, free system for organizing and disseminating research, the journals’ jobs would become easier. They would have institutional access to publications, and their staff would return to being editors, who would benefit from the ongoing community peer-review process. These editors, commentators, etc. would then review the top papers in their respective fields and pick the papers they deem worth publishing.

    Thus, those who relied on journals to organize and present top papers could continue to do so. The best option is not an either/or. The problem appears to be one of using new technologies more efficiently. Of course, the ongoing obstacles to major change seem to be cultural and institutional. Most of the code is already freely available for adaptation, so the biggest obstacle to change depends on the choices people are willing to make.

    I must admit I am new here (still an undergraduate). So, I have not been exposed to the inefficiencies and troubles described directly.

  • Hi Again Joe,

    Can you explain more why you feel like peer review gets between you and the audience? I have always felt that peer review was just a way of making sure the paper was really publishable and stood up to scrutiny, i.e. to avoid publishing an embarrassing piece of crap. Reviews aren’t fun, but I have never had a paper rejected for an “arbitrary” reason, nor have I ever felt that reviewers were not doing their best. I’ve had some annoying reviews from obviously incompetent reviewers, but the lead author just talked to the editor about those.

    In other news, have you heard of Philica? Philica is somewhere between a preprint archive (very popular in physics and astronomy) and an open access journal, with open reviews, and reviewers gain credibility on the basis of others reviewing them. Philica is open to all subject areas. I just found out about it today, and I aim to find out more: http://philica.com

  • Dr. Adamson,

    I have had more than one reviewer state that the conclusions in our paper went against established theory and that he disagreed with our interpretations, and so reject the paper. The rejection was not based on the work or the methods, but on disagreement with the current paradigm. To me, that is not an acceptable reason to reject a paper, and it runs counter to what science is all about.
    I have also seen conclusions from rejected papers make their way into papers by the reviewer. I have seen truly abysmal papers published because the author was established and knew the chief editor.
    Yes, these are anecdotal, and every system will have problems. But surely there are better ways to get the benefits of peer review while reducing, if not eliminating, some of these problems.

  • To Joe D.,

    That sucks and editors shouldn’t stand for it. Perhaps I’ll be an editor someday ;)

  • Joel J. Adamson:

    The thing about journal space is that it’s not scarce anymore. We now have powerful search/ranking technology to sort the wheat from the chaff. The right thing to do is to put it on the web and let others take a look and cite as they will. The example of open source peer review shows that this works for extremely complicated, technical things.

    The citation count of a paper is a more reliable indicator of its quality than a +/-, “wisdom of the crowd”-based system. An impact-factor- or acceptance-ratio- (for proceedings) weighted sum could be another: a citation from Nature is more important than a citation from an unrated journal.
    Let’s say I work in a lab with 10 other people, and we generously exchange ‘+’s on each other’s papers. And each of us does the same with all his scientific buddies, you know, to “support” each other. How can you remove or handle that noise?
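    The weighted sum the commenter has in mind might look something like this sketch; the impact-factor table and default weight are invented for illustration, not real values:

    ```python
    # Sketch of a venue-weighted citation score: each citation is weighted
    # by the impact factor of the citing venue, so a Nature citation counts
    # for more than one from an unrated journal. The numbers are made up.
    IMPACT = {"Nature": 36.0, "PLoS Genetics": 9.5}  # hypothetical lookup table

    def weighted_citations(citing_venues, default_weight=1.0):
        """Sum venue weights over the journals citing a paper; unknown
        venues get the low default weight."""
        return sum(IMPACT.get(v, default_weight) for v in citing_venues)

    score = weighted_citations(["Nature", "Unrated Journal", "PLoS Genetics"])
    # 36.0 + 1.0 + 9.5 = 46.5
    ```

    Under such a weighting, the ‘+’-exchange collusion the commenter describes is dampened: votes or citations from unvetted sources carry only the default weight, so a ring of mutual supporters adds little to the score.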

  • Joe,

    Great post. It’s great seeing the community reactions as well. You want some social media based peer review, well, there you have it!

    I’d like to echo asdf that the open source software community provides a great model, github in particular. I wrote a related blog post recently called “We need a github of science” (http://marciovm.com/i-want-a-github-of-science) that got a great reaction: 60k+ views, 100+ comments there and at Hacker News.

    That engagement and reach provide credibility. It’s in a different way than traditional peer-review, but it’s still there.

    GitHub did not become popular by convincing the open-source community to “switch” from existing behaviors. Rather, they first provided a service that is awesome as both a collaboration and broadcasting tool. Because this tool added value without requiring huge community buy-in, it was able to grow incrementally, until it reached a point where the community aspects were easier to realize. I think Mendeley’s approach is similar and commendable, and companies like Academia.edu and ResearchGate are also worth thinking about.

    The flaws of traditional peer review aren’t hard to find; the real challenge is coming up with a viable alternative that provides upfront value to practicing scientists.

    Marcio von Muhlen, PhD

    A novel yet naive proposal. Peer review is an unreliable filter, but no peer review is guaranteed to be no filter at all. It’s like refusing to use a Brita because it lets 0.01% through, and just drinking straight from the tap instead.
    Further, an upvote/downvote system would clearly be driven by people briefly perusing an article rather than reading it thoroughly to look for fundamental flaws. If something has a catchy title or conclusion, it’s bound to get a ton of upvotes.

    It’s nice to see that a young researcher like Joe (who is a rising star in the field), with a “killer” record of publications, still understands the flaws of the current publishing process.

    Many young researchers are so fed up with the current publishing process now.

    I recently had a paper reviewed at a “high-end” journal. The reviewers did not understand what I did, questioning methods that had been validated in a previous, highly cited PLoS Genetics paper and are now used by prominent people in my field. The paper was rejected, and there was nothing I could do.

    After reading many comments over the past few weeks, I’m wondering how much of your disgruntlement is related to the nature of your field. I come from a smaller field (evolutionary theory), where there is little competition and even less potential for commercial application of my research. My biggest problem is people who don’t understand the way my research is done (mathematical models), but often that’s my audience, and I just need to do a better job of explaining my work. Peer review only helps in this respect.

  • @Joel, while I agree that having colleagues comment on your papers before they are published is extremely helpful, should it be a valid method to decide whether it gets published at all? I completely agree that there is benefit from peer review, but there has to be a better way than the way it is currently implemented. The current method was not established to weed out inferior science and prevent publication of bad papers, but to choose the best papers to publish in a limited space, much like any editor of any magazine has to do. Since space is no longer a viable limitation, shouldn’t we adapt peer review to the current situation?
    Perhaps, rather than having the editors and reviewers make the decision, we could have the authors solicit their own reviews. The author could then rewrite as they chose and solicit reviews on the new manuscript. When the author deemed it ready, have it then published along with the comments of the most recent reviews as well as any response to the reviews the author might want to include.

  • Been away from this discussion, but this is exactly right:

    The current method was not established to weed out inferior science and prevent publication of bad papers, but to choose the best papers to publish in a limited space, much like any editor of any magazine has to do. Since space is no longer a viable limitation, shouldn’t we adapt peer review to the current situation?

    I can now imagine a system where journals still exist–they could identify the “best” papers arising naturally from an open system like the one I describe, and publish them, perhaps with commentary. This might be useful if I’m somewhat interested in a field related to my own, but don’t want to follow it closely, just hear about the coolest results.

    @Joe Pickrell. I agree. Journals would still provide a valuable service in this regard. Under this model, journals could be seen as an additional layer of peer review, choosing what they view as the best papers without limiting access to the others; that may even enhance the prestige of a publication.

  • have a look at the paper that is published in “nature” last month by li & durbin (doi:10.1038/nature10231). it took more than 2yrs to get published (they initially submitted the paper in apr 2009). mind you, this is when the paper is sent from the sanger institute…one wonders what would have happened to the fate of a similar paper submitted by an unknown mr.chan from southern chinese university to the same journal. i will leave it to the readers to judge but i would say rejected by the editor on the day of submission!!!!!

    this clearly illustrates what richard smith, ex-editor of the bmj and current plos board member, said so succinctly: “the current system of pre-publication peer-review does not work…..”

  • dave chamberlin

    A most extraordinary thread. I’m just a fascinated fly buzzing through the science blogs, listening in on the conversations and normally keeping my ignorant mouth shut. But this is truly a great idea waiting for the killer app. Imagine if this is actually pulled off. It would mean something truly momentous, for what is being thrown around here is a means to allow scientists on the cutting edge to communicate with each other faster and far more efficiently, and, most importantly, to replace the good-old-boy network with recognition for those most deserving of it. The problem blocking real change in the real world is, of course, our old nemesis: institutionalized bullshit that survives for its own good rather than the common good. Killer apps wait and wait for the folks with the money and the business acumen to get together with those who have actual experience creating a similar product. So to those with the business savvy, I introduce my all-star lineup of science bloggers: Razib Khan, Daniel MacArthur, and Dienekes Pontikos. Of course, this is my biased, incomplete list, based on my fascination with genetics, but I think their constructive comments are due not just to their considerable intelligence but to their experience creating an online communication network between creative scientists.

  • Daniel Falush

    If we are going to get rid of journals, how in practice is the guy who sent me the email today (below) going to get help improving his work? I would not claim that the service we offer at Molecular Ecology is brilliant, but we see manuscripts from people in a great variety of locations, with, in particular, an enormous variability in knowledge of the analysis of genetic data. And the process, while obviously intensely variable, does improve a fair proportion of them, including those we reject.

    I am not saying peer review is the only or best place one gets this; I always thought the Mathematical Genetics talks in Oxford offered something peer review rarely provides in terms of dialectic, for example. You’ve also done your PhD in a very privileged environment, especially in terms of the critical thinking around you, so you have probably had your needs better fulfilled through other routes.

    But without realistic proposals for how regular guys in out-of-the-way countries and institutions, without the most relevant expertise around them, are going to get a similar level of access to the help provided (albeit rather randomly) by peer review, I think it would be regressive to abolish journals because of their annoying aspects.

    Dear Dr. Falush:

    Please forgive my bold interruption to your routine. I am writing
    about our work in XXXXXXXXXX microbial metagenome on which we
    submitted a ms for consideration to publish in Molecular Ecology. You
    were the subject editor of the submission as you may remember. The ms
    was rejected. Yet we took comfort from your encouraging from your
    comment to us. “It is quite an interesting study system!” said you.
    Indeed we have not given up and continued to work on the system. Now
    we add more data and tried to improve according to the reviewers’
    comments. I am wondering if you are willing to take a look of our
    revised ms. Please advise us if the ms is sufficiently improved to be
    submitted to ME.

    I know this must be an unusual request for you. I look at your
    website and gladly know that you have been to this part of the world.
    Kyushu is just 2-hr flight time from XXXXX. I have sampled XXXX near
    Fukuoka and Miyazaki. You probably can appreciate a regular lab,
    small lab, trying to do good science.

    Thanks for your kind consideration. If you agree to look at our ms, I
    can send it in a week or so with point-to-point response to the
    reviewers’ comments to asssit you to clarify what has been
    improved/added. If for any reason you do not to wish to do so, I’d
    very much like to hear your opinion concerning this work.

    Best.

    XXXXX

  • Hi Daniel.

    Thanks for the comment. I agree, of course, that peer review sometimes helps people improve their papers. I don’t see, however, why preventing a paper from being published based on pre-publication review helps in any way.

    What if the authors in question had simply published their paper online, and interested people had pointed out potential problems? Then the authors could decide how to proceed themselves. Would that be so awful? As it is, it sounds like the authors are begging for permission to publish their paper, which, given that we have the internet, is an absurd state of affairs. You wrote “It is quite an interesting study system!”, but only you and maybe three other people have been allowed to see what’s interesting about it!

  • i, personally, don’t give any importance to whether one has published or not and where one has published. we all know how peer-review publication works and enough has already been said about it. i agree entirely with dave chamberlin’s comment that it is ‘institutional bullshit’ and i am sick & tired of it. the funding bodies are asking us what we have published and what the impact factors of the journals in which we have published are. i want to do some wacky expt that might lead nowhere, but may find real answer(s) to a fundamental problem. i refuse to repeat other people’s work in a different organism/system and get it published somewhere. the big guys say NO, NO WAY… i am getting more inclined to switch over to theoretical physics/quantum maths, where i don’t have to lick the feet of so-called, self-perceived big guys and can work independently at home and without grant support. if i do go that route, how i will pay my house mortgage is an entirely different subject though!!

  • Daniel Falush

    Dear Joe,

    Thank you very much for your reply.

    I think the guy should put the work online. Why not? I am not sure how likely it is for him to get useful help in that way but I agree that little or no harm can come of it.

    He is not asking for permission to show the work to the world, but rather asking what he needs to do to earn the badge “this work is good enough for Molecular Ecology”. This badge is valuable to him, and it is valuable to potential readers as well.

    I have a slightly different idea on the back of this. Sure, abolish peer review for people who need neither the help nor the immediate badge. But additionally create a stronger kind of peer-review journal where the editor takes both more responsibility and more credit for the paper. This could work by authors posting manuscripts on the journal’s internal website (they could put them online elsewhere as well if they wished). Editors would then actively choose papers they were prepared to edit – this is necessary if they are to take more responsibility – and do whatever is needed to improve them. The paper is finally published by the journal, but with a summary of the major changes made in the review process and with the editor credited nearly as an author (e.g. the editor’s name should come up in searches, and editors should get significant credit for the post-publication success of the papers they edit).

    Daniel

  • Hi Daniel,

    I largely agree with you. I can imagine a system where journals select papers they would like to publish from a pool of online work (perhaps from a system like the one I describe in the post), extensively review them and publish them with commentary. Different journals might take different approaches, and a journal like the one you describe would provide legitimate value to a paper.

    I think there’s a cultural issue here–you write that the authors in question are not writing to you for permission to show their work to others, but in biology people really do refuse to communicate except through journals. Maybe that, rather than the journals themselves, is the major issue here. I wonder whether biologists would have such dread of the publishing process if there were a more established culture of posting and sharing preprints.

  • Daniel Falush

    To lead this culture change by example, you can download a preprint of ours and even some nifty software to apply the methods it describes at http://www.paintmychromosomes.com

    Got the reviews today as it happens but more feedback always welcome.

  • Thanks!

  • One of my favorite “truisms” ever is the oft-quoted
    “Q: How do you know who is a good scientist?”
    “A: Good scientists publish in good journals.”
    “Q: ok then, how do you know what journals are good?”
    “A: Oh, because good scientists publish in them.”

    While I understand the arguments and complaints of the article, I do not think that an “Internet Popularity Contest” is the answer to improve scientific research. The reality is that most people online behave like childish idiots who are largely ignorant of anything outside of their own field, and who have little to lose by broadly trashing things they don’t like for various reasons, whether well-founded or not. Add to this internet culture the competitiveness of scientific funding and science in general, and I think the potential for abuse far outweighs the small potential gains.

    First of all, the “small number of people” reviewing papers is true only for A GIVEN paper. All of us, from time to time, are called on to review papers, just as we are called on to serve on grant study sections. It is a community service that we all have to do, and I think most of us treat this responsibility with respect. Moreover, the editors of a journal are not anonymous, and moves to publish reviewers’ names along with the article are clearly a good idea.

    Second, authors get to suggest who reviews their papers. This is done not just for the more sinister reason of keeping direct competitors from reviewing (and potentially trashing) a manuscript. It is also done, as most of my colleagues would agree, for more constructive reasons–i.e., we choose reviewers (and often editors) who we think will constructively critique our work PRIOR to publication. This is invaluable for improving the overall quality of science. Having a random group of anonymous internet posters bashing away ignorantly at a piece of work online isn’t any more helpful than no input at all.

    Third, the idea of having internet-based, non-reviewed literature defining one’s career achievements and progress is troublesome at best. Make no mistake- if you have gotten a poor score on your grant renewal, you can gripe and moan about how the study section didn’t understand your grant (which, by the way, is your fault, not theirs), or how they were prejudiced against your work because they were full of your competitors. In general, though, please rest assured that if you had several more publications with that renewal, your score would be higher. Now, in a world in which people just publish un-reviewed material online, do you really feel that this would not be subject to abuse? More to the point- do you think that every primary study section member should have to wade through mounds of un-reviewed “literature” just to see if the author has really shown what they say they have?

    There are many other problems with the proposed dismissal of peer-reviewed publication that come to mind, but they need not be listed. I do understand that the current system is not without flaws, and some big ones. One of them is the relative anonymity of reviewers, which can be fixed. As to the issue of getting things published quickly- how much quicker is necessary? Sharing very recent scientific ideas is exactly why we have meetings, web boards, webinars, newsgroups, seminars, etc. To suggest that such venues should supplant formal documentation and peer review seems incredibly naive to me.

  • Hi Bob,

    Thanks for your comments. I think an important point is that scientific researchers already are an online community: we almost exclusively exchange ideas online, and our reputations are based on what we publish online (of course, most people don’t publish online themselves; they send their papers to a third party which then puts them on the internet). To a very real extent, papers are judged in an “online popularity contest”–a count of citations.

    The reality is that most people online behave like childish idiots who are largely ignorant of anything outside of their own field, and who have little to lose by broadly trashing things they don’t like for various reasons, whether well-founded or not.

    First, this is, of course, not always the case. I’ve been quite happy with the comments on this site; the quality of comments depends on the quality of the people in the community. Second, peer reviewers can actually prevent the publication of a paper by trashing it for unfounded reasons; in an alternative system, people who trash a paper for no reason could not prevent other people from seeing it.

    Third, the idea of having internet-based, non-reviewed literature defining one’s career achievements and progress is troublesome at best.

    Consider people who write novels, or computer software–they are judged based on what happens *after* they publish their work (often without pre-publication review). I do not find this troublesome in any way. The proposal is not to eliminate critical review of papers, but rather to remove review from the decision about whether to publish a paper.

  • Joe,

    Perhaps we are both a bit overly cynical; me about the general tenor of debate and discourse in an unregulated online world, and you about the overall impact of the review process. The truth is, as so often the case, probably somewhere in the middle on both accounts.

    I have never had a paper that I could not get published, and I have published my fair share. I certainly have had colleagues who have had the occasional situation where a publication review process took much, much longer than necessary (for one of my colleagues it was 18 months AFTER the first REVIEW). Top-tier journals have refused to send our manuscripts out for review, but when we send the same manuscript to another top-tier journal, it gets published (and makes the cover). So yes, occasionally there are unfair or seemingly idiotic reviewers and editors, but on balance the system works. As someone else suggested, it is perhaps the “least terrible” of our alternatives.

    In the system where articles would be self-published, where would the workforce come from to critically review an article? You say that they would be reviewed after publication- who would do that? For free? If the journals would trawl through the sea of online “publications”, how would they get exclusive rights to publish something? If they cannot, why would they publish anything? As it is, there are already too many articles out there that make minimal steps forward; we already allow so many specialty journals that the “least publishable unit” is aggravatingly small. Widespread online publication would only make this worse- it would amount to having an infinite number of “specialty publications”.

    Consider people who write novels, or computer software–they are judged based on what happens *after* they publish their work (often without pre-publication review). I do not find this troublesome in any way. The proposal is not to eliminate critical review of papers, but rather to remove review from the decision about whether to publish a paper.

    People who write novels get paid directly by the publisher for the number of copies they sell. Are you suggesting that grant dollars be released based on a direct measure of how many downloads a particular paper has? I guarantee you that this will not have the “spreading the wealth” effect you might think it will. Given our funding climate, in fact, just the opposite would happen: the big labs would get more money, and the small ones ever less.

    Furthermore, this release of the review process from publication places all of the burden on grant review study sections to wade through an ever increasing mound of information, much of it probably useless, in order to determine who should be funded out of our limited resources. I’m not trying to be dogmatic here, but while the utopian ideal of spreading scientific knowledge in some free and unfettered way is nice, the reality is someone has to pay the bills. Outside of privatization, I don’t see how reducing the hurdles to publish helps the funding review process at all.

    Again, I’m not saying it is fun or even easy to struggle to publish or get funding. What I am saying is that I don’t think the reviewers in either case are wrong 100% of the time when they decide against a publication or grant application. Sometimes you/we the authors need to go back to the drawing board and work harder, or be better communicators. It is a bitter pill that a lot of us don’t want to swallow, but I guess I’ll be the bad guy here and suggest it should be a more widely taken prescription.

  • People who write novels get paid directly by the publisher for the number of copies they sell

    I would love it if Nature paid me each time someone bought a copy of my article!

    Like Razib wrote, a lot of this discussion seems to take place in a world where arXiv and other preprint servers don’t exist. The fact is that in other fields, rapid dissemination of results via the internet occurs alongside the “standard” journal system. This hybrid system would eliminate a lot of the problems with peer review that I mention, as well as give people the warm fuzzy feeling they get from knowing something has been reviewed with an unknown amount of scrutiny by unknown people (I don’t get that feeling, but I’ve been convinced by this thread that many people do!). So maybe that’s a good short-term goal that is more feasible–establish a culture of online pre-publication sharing and discussion in biology.

  • Sounds good. I’ll share my unpublished data with you if you share yours. You first. ;)

    Nice discussion, thanks for the views!

  • Thank you for this opinion piece, Joe; it is very informative and encouraging. I’m also experiencing for the first time the terrible bias that now seems to be the rule rather than the exception. Although I work as a professional musician, for many years I have been working on mathematical models not just of musical harmony but of wave motion in general. This unique perspective has, apparently, led me to certain discoveries that would otherwise have gone unnoticed. I wrote an article on mathematical modelling of approximate frequencies (important for musical tuning away from the whole numbers) which caught the attention of a prominent auditory neuroscientist. I’m now a member of his research team, and even in just a few months all of my suspicions (sorry, hypotheses) have been confirmed. Then I went through the rigmarole of submitting a paper to the Acoustical Society of America; the article not only got rejected in three days – way too short a time to understand the math – but the rejection said that the work was already well documented in the literature. This simply isn’t true. Certainly there has been much work done on similar problems using similar mathematics, but that is precisely the point: the existing mathematics is correct only insofar as it agrees with my model in exceptional cases. That is why I wrote the article to begin with. Besides, my professor gave me about 30 papers to read, spanning about the same number of years, and they all say the same thing in different ways. And when I complained to the journal – obviously I cannot debate the issue with anonymous “reviewers” – I received no reply. Well, the editor should then have to take responsibility for this decision. Obviously the choice of reviewers was unworthy.

    You’re right. True peer review should be in an open public forum. These people think that they can escape the responsibility of debate by setting up a closed environment. Isn’t it true that tadpoles who are caught in the same genetic pond turn into toads?

  • Ecologist: Well yeah. The reviewers at high-quality journals are not “bat-shit crazy”. They are usually quite intelligent, and they usually have an interest in protecting their own work from competition.

    If they have the slightest chance of rejecting or “stealing” a paper that improves on or shows the flaws of their own work – they will probably take it.

    Modern publishing is nothing but serfdom and censorship at best.

    As copyright industries are utterly failing throughout the world right now, there is no reason whatsoever not to put your work online on decentralized file sharing sites.

    https://thepiratebay.se/torrent/6734667/%5Bpaper%5D%5B1.00%5D%5BA_proposal_for_a_free__open_and_decentralized_publ

  • Here is a way of doing science that is truly decentralized – effectively killing off any unproductive gold-digging “publisher” out there. There is no reason to pay for publishing anything; in fact, paying only limits how many people can read your work.

    https://thepiratebay.se/torrent/6734667/%5Bpaper%5D%5B1.00%5D%5BA_proposal_for_a_free__open_and_decentralized_publ

  • I have a plan for a “killer-app.”

    Meritocracy is a proposal for a cloud review system that offers the infrastructure for open peer-review, creates a free marketplace for research & development, and involves students through apprenticeships. In an attempt to synthesize the concerns and solutions proposed within the open science community, and drawing much inspiration particularly from this thread, I am sharing with you a vision for how we can move forward.

    This plan is bound to evolve over time, but what makes it a “killer app” is its cloud-based, open-source approach, which invites stakeholders to shape its progress and create variant versions of the system that address the specific needs of each scientific community.

    I am eager to hear your thoughts on this initiative and hopefully work with you in the near future.

    http://igg.me/p/67101?a=416044

  • Any new system should encourage comments. A good intuition pump might be the journal “Behavioral and Brain Sciences”, which publishes two “target articles” per issue, followed by dozens of short commentaries which are citable publications themselves. On a server system, the length and content of the commentaries wouldn’t need to be restricted; they could contain new data, replications, or (importantly) failures to replicate the effect. — In my opinion, a new publishing system must REWARD PEOPLE FOR READING, COMMENTING & AUGMENTING papers, in order to reestablish a scientific discourse. At the moment, some people honestly tell me that they write more papers than they read.

  • Why publish?

    Because this is how researchers get evaluated! There are some countries I won’t name that take the idea as far as ranking applicants by calculating (number of publications) x (impact factor).

    Just how stupid this is, is not the topic; but once you’ve taken it into account, you understand that publishing is a necessity for researchers.

    So, you need to convince the powers that be in universities and other research institutes to revise their evaluation criteria. Sadly, they are most often the same happy few who control editorial boards, and are thus unlikely to have a poor opinion of the peer-review process. The system is locked in a vicious circle.

    When this is the case, evolution seldom happens by breaking the circle, but by the development of a parallel system (think Linux / Windows). For this to happen, those young researchers who have not yet turned to the dark side need to do their best to develop these parallel systems (arXiv, etc.). This requires additional effort, as publishing in established journals remains a necessity, and simple duplication leads to copyright issues.

    Undoubtedly, though, the process as it stands today will eventually disappear.
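
The ranking heuristic described in the comment above can be made concrete with a small sketch. This is a hypothetical illustration only: the function name and all numbers are invented, and real institutions vary in how (or whether) they compute such scores.

```python
# Hypothetical sketch of the evaluation heuristic quoted above:
#   score = (number of publications) x (impact factor).
# Function name and numbers are invented for illustration.

def applicant_score(n_publications: int, impact_factor: float) -> float:
    """Score an applicant by publication count times journal impact factor."""
    return n_publications * impact_factor

# Three quick papers in a journal with impact factor 1.5...
salami_slicer = applicant_score(3, 1.5)    # 4.5
# ...outscore one careful study in a journal with impact factor 4.0.
careful_author = applicant_score(1, 4.0)   # 4.0
print(salami_slicer > careful_author)      # True
```

Under this heuristic, slicing work into many small papers beats publishing one substantial study, which is exactly the perverse incentive the commenter objects to.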

  • I am a proponent of open source. However, this is a matter I have a big question mark over. Currently the number of publications in venues like PLoS ONE is a little out of control, and open source will push things in an even more out-of-control direction.

    From experience I can say that what is happening is that small-scale, poorly sampled studies get published in journals, and I believe the trend is picking up with open source. This majorly hampers high-quality research, because the novelty of a project ends up going away when some other research group rapidly publishes a similar but not well-substantiated idea, and that diminishes the perceived quality of the subsequently published yet simultaneously conducted study. I am sure others have experienced this.

    In the current research situation, it is becoming very difficult to keep track of research across the globe. Rapid unfiltered publication will make the situation worse. There has to be some filter at the source level itself; it can’t be merely how many recommendations an article gets. Peer review has truckloads of issues, yet if fairly done, it works. The problem is fair implementation, and whether editors can see when reviewers are letting personal philosophies bias their comments.

    Ultimately, let’s go back to the basic question of scientific publication. Do we really want to encourage the community to get everything out the way the media does, or do we want to retain the stance of being reliable? I would opt for the latter.

Comments are currently closed.
