Vigilant scientists [UPDATED -- March 5th, 2016]

In an editorial entitled “Vigilante science”, the editor-in-chief of Plant Physiology, Michael Blatt, makes the hyperbolic claim that anonymous post-publication peer review by the PubPeer community represents the most serious threat to the scientific process today.

We obviously disagree. We believe a greater problem, which PubPeer can help to address, is the flood of low-quality, overinterpreted and ultimately unreliable research in many scientific fields, especially the life sciences. In a famous paper (1), John Ioannidis explained how a combination of low statistical power and publication bias leads to the expectation that a majority of publications are unreproducible. And of course a paper may contain many, many problems in addition to bad statistics; just one example is the issue of contaminated cancer cell lines.

These arguments suggest that a large majority of publications are unreproducible. This statement may appear extreme; is it supported by direct evaluations of reproducibility? Although there have been few such studies, those that exist all point to a grave situation. Two surveys of preclinical studies by the pharmaceutical companies Bayer (2) and Amgen (3) reported dismal robustness of “landmark” studies they had hoped to develop. Similarly, the psychology reproducibility project could only reproduce a minority of studies and revealed a generalized reduction in effect sizes (4). We are unaware of any reproducibility studies reporting substantially higher success rates.

Unreliable research, a problem recently acknowledged by Francis Collins (5), the current NIH director, could have an enormous economic cost. Consider that the annual budget of the NIH is $30bn. Extrapolation of the alarming reproducibility rates above would lead to the conclusion that more than half of that money—taxpayers’ money—is funding unreliable research: upwards of $15bn per year. Today’s research builds on yesterday’s results, so anybody basing their research on unreliable publications is likely to be wasting their time and further resources. In the high-pressure environment of research, such an unwitting error can easily spell the end of a young scientist’s career.

But it gets worse: the negative consequences of unreliable research extend far beyond future research. Many aspects of public policy are based upon the scientific record. Obvious examples are medical guidelines and environmental policy, and the trend is towards ever more evidence-based policy. Unreliable research can lead to mistaken policy with both economic and human costs.

A dramatic example of the potential human cost is afforded by the case of Don Poldermans, a prominent cardiologist who was responsible for a series of clinical trials that drove the adoption of guidelines recommending the widespread use of perioperative treatment with beta-blockers to protect against myocardial infarctions. Several of Poldermans’ studies were subsequently shown to have serious integrity problems and he was fired for misconduct. A meta-analysis of the field excluding Poldermans’ discredited research estimated that the beta-blockers increased perioperative mortality by 27% (6). In other words, mistaken guidelines based upon unreliable research (in this case involving misconduct) may have caused preventable deaths. Because the procedures were common and the guidelines widely applied, the number of potential victims almost defies comprehension (as reported by Forbes).

In this context, we believe it is imperative that all possible users of published research be made aware of potential problems as fully and as quickly as possible. Any other course of action will cost money, careers and maybe lives. The central mission of PubPeer is to facilitate this exchange of information. We therefore provide a web platform that can make comments instantly available to every interested reader in the world and aim to remove barriers and discouragements to commenting. As shown in the graph of historical comment traffic, commenting on PubPeer was greatly stimulated after we enabled user-controlled anonymity, which is the only certain defense against legal attack or a breach of site security. The “unregistered” comments, which represent the majority, are not of inferior quality to those of registered users.




In contrast to our desire to disseminate information, Blatt is mostly concerned about the psychological effect on researchers of public and anonymous discussion of their work. From this point of view, his suggestion to “draw the author aside” for a quiet chat “after a seminar” is certainly a good way to minimize hurt feelings, but it is totally ineffective as a strategy for disseminating information. We believe that making any relevant information rapidly available to readers should trump concerns about the authors’ feelings, especially given that they freely chose to publish in the first place. Frankly, a few ruffled academic feathers pale into insignificance when patients’ lives, taxpayer billions and young researchers’ careers are at stake. We also suspect that the researchers’ employers—those same taxpayers and patients—would share this point of view.

There would be less need for PubPeer and anonymous commenting if self-correction and policing of research worked as they should, but we believe they do not and probably cannot, as detailed in a previous blog post. Although we do not doubt Blatt’s personal probity or question the efforts made by the Rockefeller University Press, unreliable research seems to be endemic in the current system, while authors, journals and institutions are all unquestionably exposed to conflicts of interest when it comes to correcting problems. PubPeer hosts comments on thousands of papers in which image manipulation is manifest, yet visible action is taken in only a few percent of cases. Where are the research police? They are too inefficient, slow, unreliable and, crucially, opaque to be fit for purpose. The arsenic life paper highlighted by Blatt as an example of acceptable post-publication peer review was, ironically, never retracted by Science. The New England Journal of Medicine has not retracted a key Poldermans study, despite serious doubt being cast upon its integrity. Scientists clearly cannot rely on the traditional avenues for correcting problems in the literature; PubPeer offers a way to bypass this logjam of conflicts of interest.

We now address Blatt’s other complaints about anonymous commenting, although we believe they are secondary to the issues outlined above.

No system is perfect and the possibility for abuse of anonymous commenting on PubPeer does exist. However, as seems often to be the case with critics of anonymity on PubPeer, Blatt doesn’t offer concrete examples of abuse on our site. From our accumulated experience of moderating the 37,000+ comments on PubPeer, we consider worries about abuse to be overblown. The factual focus of comments is one very important protection; despite Blatt’s carefully worded insinuations, PubPeer does not allow “hearsay” or “allegations”, nor does it invite “innuendo” (and we aim to act on all reports of such comments). We also observe that conflicts of interest are much less of an issue with anonymous commenters because they have no way to abuse any power or authority they may possess; they must convince by strength of argument alone. Scientists are simply not convinced by anonymous assertions without factual support, even if such comments were to find their way past our moderation systems. It is probable that as PubPeer grows from its informal beginnings, some form of dedicated editorial or appeals board will be instituted, but our current experience from moderating comments and reports of abuse does not indicate this to be an urgent need.

The argument that researchers find it difficult to reply to a factual or scientific question without knowing who asked it is barely credible. We are with Paul Brookes on this one: scientists should be able to explain and defend the work they have chosen to publish. And in reality no competent scientist would experience the slightest difficulty in defending their work, if it is defensible. In particular, the overwhelming majority of questions on PubPeer would be resolved instantly by showing the original data that the authors describe in their publication. A growing number of enlightened journals require full data-sharing, so there can be no argument that data should be kept secret from ordinary readers, yet most authors (and editors) still succumb to the reflex of secrecy.

Bizarrely, given his apocalyptic warnings about PubPeer, Blatt states that the bulk of PubPeer comments “relate to small errors and oversights” in image data that are secondary to the “ideas in themselves” of the papers. We encourage the use of PubPeer for all types of factual discussion, be it positive, negative, major or minor. Comments about details should be treated as such, although attention to detail is often important in science. People who rush to judgment on the sole basis that a comment about some minor detail exists on PubPeer have only themselves to blame. If they are scientists they should definitely know better, and we actively advise readers to form their own opinion of comments. PubPeer should be treated as a source of potentially useful information, not a definitive judgment. Note that PubPeer does not aspire to provide in-depth scientific review of comments.

Blatt appears to include in this class of comments about “small errors” the many that highlight signs of image manipulation—not such small errors after all. We do not agree that a lack of scientific discussion accompanying such comments is problematic: if the data can’t be trusted, that is vitally important information and there is little point in discussing the science. It is of course the comments highlighting obvious manipulations or serious errors that authors find so difficult to counter (and may cause them to reach for their lawyers), not the comments about genuinely small errors and oversights. Affected authors often seek to distract from their predicament by complaining about the anonymity that prevents the deployment of ad hominem defenses.

Blatt also bemoans the negativity of most PubPeer comments. We consider this unsurprising and even inevitable, since most authors have been forced by the system to put the most positive spin possible on their results. In addition, science proceeds by the falsification and refinement of hypotheses. Thus, for most papers, the only way is down.

In conclusion, we choose to allow anonymity on PubPeer as a necessary compromise. The arguments in favor are the overwhelming importance of rapidly informing readers about potential issues in publications and the fact that strong anonymity on PubPeer has greatly encouraged commenting. Conversely, our direct experience of running the site leads us to believe that the argument against anonymity, namely the risk of malicious or unjustified damage to researchers’ reputations, has been overstated. A time may come when open criticism is no longer considered a risk and anonymity becomes unnecessary to facilitate commenting, but for now PubPeer users clearly prefer to control their anonymity. We believe the balance of benefits currently strongly favors the continuation of anonymity on PubPeer.

[UPDATE -- March 5th 2016]

Michael Blatt has written a follow-up editorial. We respond to a few of the issues he raises.

Most importantly, Blatt simply ignores the central point in our blog above, which is that rapidly sharing information about possible problems in a publication is more important than the niceties of academic etiquette. We also provided evidence that strong anonymity encourages this information sharing. The argument for allowing anonymous commenting is therefore that it maximizes a beneficial activity. Blatt, as an editor, may believe that authors, rather than the readers and users of publications, are his most important customers, but he provides no explicit argument for this. Instead of addressing our utilitarian argument, Blatt simply lists a series of rather intangible appeals to the scientific process.

In his original editorial, Blatt glaringly conflated potential misconduct and “scientific” errors. He now distinguishes misconduct from other issues and implicitly acknowledges that unprotected discussion is insufficient to deal with this problem. This represents serious backtracking on his part and implicitly validates much of the commentary on PubPeer, which does concern potential misconduct. Vague mention is made of an initiative to develop a more effective whistleblowing system. We look forward to a robust and effective procedure for dealing with allegations of misconduct, but we won’t be holding our breath.

Following our challenge to provide examples where anonymous comments on PubPeer have been used to denigrate researchers unjustly, Blatt provides examples concerning his journal that are by his own admission innocuous—we seriously doubt that his reputation was harmed by those comments. Some speculation by Leonid Schneider is also cited. In short, the examples are not convincing. Although we would take even a single example very seriously and admit there may be some, we note that the PubPeer database is rapidly approaching 50,000 comments, so it would not be unreasonable to consider this denominator in evaluating the prevalence of any perceived abuse. We continue to believe that the factual basis of comments, our moderation policies and, above all, the diligence of our users mean that the overall accuracy of PubPeer commentary is excellent. In this context it may be worth pointing out that one of Blatt’s own papers has been commented on PubPeer. Was that an unjustified denigration of his reputation by cowards acting with anonymous impunity? Apparently not, as the authors have issued a mega-correction. Judge for yourselves:

Finally, Blatt makes an argument that unreliable research advances science. We believe this to be dangerous and disingenuous relativism. We feel there is a clear distinction between work that authors, referees and editors should have known was unreliable or wrong at the time of publication, on the one hand, and the difficulties of interpreting experiments at the frontiers of knowledge on the other. Moreover, it is precisely the ability to make this distinction that “quality” journals such as Plant Physiology provide as justification for their existence, at least until they need an excuse for having published low-quality work. Similarly, although some papers may stimulate subsequent advances even if their central claims are invalidated, quality journals still aspire to certifying those claims.

Some of the examples provided by Blatt to glorify unreliable science are bizarre. Everybody agrees that the arsenic life paper falls into the category of vastly overblown claims where the authors, referees and editors should have known better. It was a catastrophic failure of quality control by the journal. In contrast, the failure of Cole and Curtis (1938, 1939) to measure the overshoot of the action potential was entirely due to the fact that their equipment did not give them access to the membrane potential; they could only measure membrane impedance. Their work was nevertheless a huge breakthrough and is a perfect example of an entirely acceptable state-of-the-art interpretation. Nobody at the time could or should have known better. To suggest otherwise is like criticizing Rutherford for not predicting the existence of the Higgs boson. We certainly wouldn’t consider the research of Cole and Curtis to be unreliable.

The example involving the influence of the sodium pump on the membrane potential is even more confused. By maintaining the transmembrane sodium and potassium ion gradients, the sodium pump is indirectly essential for the maintenance of the resting membrane potential, but this had been known for a long time. The pump is also electrogenic, as observed in Gadsby’s work, but in animal cells its direct contribution to the membrane potential, which is dominated by passive potassium conductances, is minor, contrary to Blatt’s suggestion. Moreover, the De Weer and Gadsby (1988) reference that Blatt actually cites describes studies of the voltage-dependence of the sodium pump—the effect of the membrane potential on the pump, not of the pump on the membrane potential. For completeness, we note that in plant cells an electrogenic proton-pump (not a sodium pump) does make a direct and significant contribution to the membrane potential.

We feel these misunderstandings illustrate the confused nature of the arguments in Blatt’s editorials.



1. Ioannidis, J. P. A. Why Most Published Research Findings Are False. PLoS Med 2, e124 (2005).

2. Prinz, F., Schlange, T. & Asadullah, K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov 10, 712 (2011).

3. Begley, C. G. & Ellis, L. M. Drug development: Raise standards for preclinical cancer research. Nature 483, 531–533 (2012).

4. Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349, aac4716 (2015).

5. Collins, F. S. & Tabak, L. A. Policy: NIH plans to enhance reproducibility. Nature 505, 612–613 (2014).

6. Bouri, S., Shun-Shin, M. J., Cole, G. D., Mayet, J. & Francis, D. P. Meta-analysis of secure randomised controlled trials of β-blockade to prevent perioperative death in non-cardiac surgery. Heart 100, 456–464 (2014).

The PubPeer Foundation

We are pleased to announce the creation of The PubPeer Foundation, a California-registered nonprofit public benefit corporation in the process of obtaining 501(c)(3) nonprofit status in the United States. The overarching goal of the Foundation is to help improve the quality of scientific research by enabling innovative approaches for community interaction. Our initial focus will be on maintaining and developing the PubPeer online platform for post-publication peer review.

The bylaws of the newly created Foundation aim to establish PubPeer as a service run for the benefit of its readers and commenters, who create all of its content. We feel that a nonprofit organization constitutes the ideal framework through which to pursue these goals. We are also taking this opportunity to formalize the responsibilities of directors, officers, agents, and subcontractors of the Foundation. First and foremost, they should always act to preserve and defend the anonymity of users of Foundation sites. In addition, they must not comment on Foundation sites except through official channels (such as the blog, the Twitter account or as moderators), and they must avoid real and apparent conflicts of interest.

The inaugural Board of the Foundation consists of the three founders of PubPeer and two associates, respectively: Brandon Stell (President), George Smith, Richard Smith, Boris Barbour (Treasurer) and Gabor Brasnjo (Secretary).

We wish to thank all of the expert commenters of PubPeer, who are responsible for the success of the site. We also thank and are extremely grateful to our pro bono legal representatives (Nicholas Jollymore of Jollymore Law and Alex Abdo, Daniel Korobkin and Samia Hossain of the ACLU) for defending our site and the community’s right to comment freely under the law.

For anything related to the Foundation or PubPeer, please continue to contact us through the site and not via any professional or personal channels you may discover.



A crisis of trust

When we created PubPeer, we expected to facilitate public, on-the-record discussions about the finer points of experimental design and interpretation, similar to the conversations we all have in our journal clubs. As PubPeer developed, and especially once we enabled anonymous posting, we were shocked at the number of comments pointing out much more fundamental problems in papers, involving very questionable research practices and rather obvious misconduct. We link to a few examples of comments raising apparently serious issues and where the articles were subsequently withdrawn or retracted (for which the reasons were not always given):

The choice of retracted/withdrawn articles was made for legal reasons, but that is all that makes them special. There are many, many similar comments regarding other papers.

Many critical comments have involved papers by highly successful researchers, and all the very best journals and institutions are represented. So it is hard to argue that these problems only represent a few bad apples that nobody knows or cares about. We have come to believe that these comments are symptomatic of a deep malaise: modern science operates in an environment where questionable practices and misconduct can be winning strategies.

Although we were not anticipating so many comments indicative of misconduct on PubPeer, maybe we should not have been so surprised. The incentives to fabricate data are strong: it is so much easier to publish quickly and to obtain high-profile results if you cheat. Given the unceasing pressure to publish (or perish), this can easily represent the difference between success and failure. At the same time, ever fewer researchers can afford the time to read, consider or check published work. There are also intense pressures discouraging researchers from replicating others’ work: replications are difficult to fund or publish well, because they are considered unoriginal and aggressive; replications are often held to much higher standards than the original work; publishing contradictory findings can lead to reprisals when grants, papers, jobs or promotions are considered; failures to replicate are brushed off as the replicator’s poor experimental technique. So the pressures in science today may push researchers to cheat and simultaneously discourage checks that might detect cheating.

As followers of ‘research social media’ like Retraction Watch and the now-shuttered Science Fraud have already realized, the climate of distorted incentives has been exploited by some scientists to build very successful careers upon fabricated data, landing great jobs, publishing apparently high-impact research in top journals and obtaining extensive funding.

This has numerous direct and indirect negative consequences for science. Honest scientists struggle to compete with cheats in terms of publications, employment and funding. Cheats pollute the literature, and work trying to build upon their fraudulent research is wasted. Worse, given the pressure to study clinically relevant subjects, it is only to be expected that clinical trials have been based upon fraudulent data, unethically exposing patients to needless risk. Cheats are also terrible mentors, compromising junior scientists and selecting for sloppy or dishonest researchers. Less tangible but also damaging, cheats spread cynicism and unrealistic expectations.

One reason we find ourselves in this situation is that the organizations supposed to police science have failed. Most misconduct investigations are subject to clear conflicts of interest. Journals are reluctant to commit manpower to criticizing their own publications. Host institutions are naturally inclined to defend their own staff and to suppress information that would create bad publicity. Moreover, both institutional administrators and professional editors often lack scientific expertise. It is little wonder therefore that so many apparently damaging comments on PubPeer seem to elicit no action whatsoever from journals or institutions (although we know from monitoring user-driven email alerts that the journals and institutions are often informed of comments). Adding to the problem of conflicts of interest, most investigations lack transparency, giving no assurance that they have been carried out diligently or expertly. Paul Brookes recounts a sadly typical tale of the frustrations involved in dealing with journals and institutions. How difficult would it have been to show Brookes the original data or, even better, to post it publicly? Why treat it as a dangerous secret?

It is hard to avoid the conclusion that the foxes have been set to guard the hen house (of course the institutions are important because they have access to the data, a point to which we return below). An external investigator would seem like a good idea. And one exists, at least in the US: the Office of Research Integrity (ORI). However, as Adam Marcus and Ivan Oransky of Retraction Watch explain in a recent New York Times article, the ORI has been rendered toothless by underfunding and an inability to issue administrative subpoenas, so it remains dependent on the conflicted institutions for information. Moreover, other countries may not even have such an organisation.

As also detailed by Marcus and Oransky, even on the rare occasions when blatant frauds are established, the typical punishments are no deterrent. Journals often prefer to save face by publishing ‘corrections’ of only the most egregious errors, even when all confidence in the findings has been lost. Funding agencies usually hand down ludicrously lenient punishments, such as a few years of being mentored or not being allowed to sit on a grant committee, even when millions in federal funding have been embezzled. Most researchers ‘convicted’ of fraud seem able to carry on as if nothing much had happened.

What can be done?

We first eliminate a non-solution. We would be very wary about prescribing increased formalized oversight of experiments, data management, analysis and reporting, a suggestion made by the RIKEN investigation into the stem cell affair. The problem is: who would do the oversight? Administrators don’t understand science, while scientists would waste a lot of time doing any overseeing. If you think you do a lot of paperwork now, imagine a world where every step of a project has to be justified in some report. The little remaining enjoyment of science would surely be sucked dry. (This viewpoint should not, however, be taken as absolving senior authors of their clear responsibility to verify what goes into the manuscripts they sign.)

A measure often suggested is for journals to extend their screening of manuscripts for plagiarism and image manipulation. This is happening, but it has the serious disadvantage of remaining mostly out of sight. An author who is caught can simply publish elsewhere, maybe having improved his image manipulation if he is less lazy than most cheats. Amusingly, the recent stem cell debacle at Nature provides a perfect illustration of this problem. It has been suggested that one of the image manipulations that ultimately led to the retractions was spotted by a referee at Science, contributing to the paper’s rejection from that journal (see here). Presumably Nature now wish they had known about those concerns when they reviewed the articles. Information about the results of such checking should therefore be centralized and ideally made available to the most important audience: other researchers. We understand that this might be complicated for legal reasons, but all possible avenues, even for restricted dissemination, for instance to referees within the same publishing conglomerate, should be explored.

Another suggestion is to introduce more severe punishments in cases of misconduct. These could be administrative (recovery of grants, job loss, funding or publication bans) or even involve criminal prosecution. We believe that science and the law mix poorly and foresee the potential for some incredibly technical, expensive and inconclusive court cases. Indeed, according to Marcus and Oransky, the difficulties of the Baltimore/Imanishi-Kari case contributed to the current weakness of the ORI. We note also that all formal investigations are incredibly time-consuming. Any researchers co-opted into such investigations will waste a lot of time for little credit. Nevertheless, we contend that more severe punishments, even in just a few clear-cut cases, would send a strong message, help convince the weak-willed and strengthen the hand of vulnerable junior researchers pressured into misconduct by unscrupulous lab heads. Certainly, funding agencies should reconsider their ludicrously lax penalties.

Policing research is always likely to be burdensome and haphazard if it is carried out by organizations subject to conflicts of interest or administered by people with little understanding of science. But that is unfortunately exactly the situation today and we think it must be changed. A more effective approach would be to leverage the motivation and expertise of the researchers most interested in the subject. How much better if they were the policemen, rather than uninterested, conflicted and bureaucratic organizations. This could be done if together we invert the burden of proof. It should be your responsibility as a researcher to convince your peers, not theirs to prove you wrong. If you cannot convince your peers, that should be a problem for you, not a problem for them. Simply managing to publish a conclusion with some incomplete data should not be enough. Although this may sound Utopian, we argue next that there are now mechanisms in place that could realistically create this sea change in attitude.

The key trend is towards greater data access. Traditional publication requires readers to trust the authors who write the paper, as well as the institutions and journals that carry out any investigations. As we have argued above, in a growing number of cases that trust is breaking down. Yet the internet and advances in information technology mean that it is no longer necessary to trust; one can also verify. All methods, data, materials and analysis can and should be made available to the public without precondition. This will automatically make it harder to cheat and easier to do the right thing, because it is a lot more difficult to fabricate a whole data set convincingly than it is to photoshop the odd image of bands on a gel. Moreover, our personal experience suggests that requiring authors to package their data and analysis in reproducible form will introduce unaccustomed and beneficial rigor into lab work flows. Open data is therefore a policy of prevention being better than cure. Finally, replications and more formal investigations will be greatly facilitated by having all the original data immediately available, eliminating a significant bottleneck in investigations today.

On the issue of data sharing, PLoS is leading the way: following a recent policy change, everything must be easily accessible as a precondition of publication. Moreover, the policy explicitly states that ‘… it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access’, so it should not be necessary to request the data from the authors. We applaud this revolutionary initiative wholeheartedly and we strongly encourage other journals to follow this lead. Nature group have also made progress, but under their policy it is often still necessary to request the data from the authors, who are not all equally responsive (whether through poor organization or bad faith), despite their undertakings to the journal. Furthermore, such data requests to the authors still expose researchers to possible reprisals.

A less dramatic but necessary and complementary step would be for journals and referees to insist on complete descriptions of methods and results. If space is constrained, online supplementary information could be used, although we feel this confuses article structure. We believe the trend of hiding the methods section has been a big mistake. As scientists, we were disheartened to hear people opine during the STAP stem cell affair that it was ‘normal’ for published methods to be inadequate to reproduce the findings. We strongly disagree: all failures to replicate should be treated as serious problems. It is the authors’ and journals’ responsibility to help resolve these failures and to avoid them in the first place. As an aside, could journals PLEASE provide a way to download a single file combining the main article and any supplementary information? This hardly requires an ace web programmer, yet it seems that still only PNAS has managed to get this right. It shows that most publishers never read papers.

The next question is: how to make use of data access and detailed methods? This is where we see a role for PubPeer and other forms of post-publication peer review. The main aim of PubPeer is to centralize all discussion about papers durably. Because that discussion is stored in one place, it has become a simple matter to check on the track record of a specific researcher. Searching PubPeer will show any signs of possible misconduct (as well as less serious issues or even positive commentary). It will also show how the authors have responded to those issues being raised. Currently, most authors keep their heads firmly in the sand, maybe because they have no real answer to the criticisms posted. Nevertheless, a minority of authors do respond convincingly, showing that they can support their published conclusions (see for example). Of course, there are also genuine scientific discussions on PubPeer (e.g. here) as well as a few oddball comments, so it remains important to read the comments and make up your own mind as to their meaning and significance.

By exploiting this centralized information, the high-pressure environment that cheats have navigated so successfully can now become their downfall. Referees and members of committees for recruitment, promotion or funding can now give careful consideration to the scientific community’s opinions about the quality and reliability of applicants’ research. Researchers whose work displays unresolved issues are likely to find that their advancement encounters some well-deserved friction. As we all know, it only takes the slightest friction in a grant committee for an application not to be funded. Similarly, prospective students, post-docs and collaborators now have an additional data source to evaluate before entrusting their future careers to a group. In this way, platforms like PubPeer can help ensure that cheating, once discovered, has lasting consequences, tilting the balance of benefits towards honest, high-quality research. Scientists will also have much stronger incentives to resolve issues in their work.

We are therefore hopeful for the future. The growing use of post-publication peer review, the movement towards full data access and, hopefully, some improvement in the policies of research organizations and publishers, should usher in a new era of quality in science. Scientists can make use of services like PubPeer and leverage the high pressure under which we all work to insist upon high standards and to unmask cheats. Together, let’s retake control of our profession.


PubChase is a new site that monitors discussion of your favorite articles (and sends you email alerts) as well as providing recommendations of articles. It’s very slick. Enter articles into your PubChase library and it will recommend related articles and keep you up to date on any discussion of them. You’ve probably already heard about it but if not check it out:

You’ll receive alerts for PubPeer comments on any article in your library.


*We have no financial interest.  We just think it’s cool and give them access to our API.

[UPDATED] PubPeer comments now on journal websites!

[UPDATE 2] We added the extension for Safari (see below).

Using potential litigation as an excuse, journals have been hesitant to show (or even link to) PubPeer comments.

So we’re doing it for them…

As of today, the PubPeer browser extensions add links to PubPeer comments directly on journal websites and PubMed:

[Screenshots: PubPeer comment links shown on article pages at Nature, Cell, PLoS Biology, Science and Nature Neuroscience, and on PubMed results]

Install the extensions for Firefox, Chrome and Safari and never miss a PubPeer comment. (You could install them on every computer in your lab…)

[UPDATE 1] There is a bug in Firefox 29.0 that breaks the plugin.  Updating Firefox resolves the issue.

Browser Extensions for PubMed

Until we convince PubMed that it’s a good idea to link out to PubPeer comments, we have decided to do it ourselves and have updated our Chrome extension and created extensions for the other major browsers. These extensions are easy to install (just click on a link below), extremely lightweight, and don’t do anything unless you’re browsing a page on PubMed with PubPeer comments. Install them, spread the word, and together we can make it more difficult to ignore post-publication review.
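For the curious, here is a minimal sketch of how a content script can stay that lightweight (illustrative only, not our actual extension code; the runtime check shown here is an assumption about one reasonable design):

```typescript
// Minimal sketch of the "do nothing unless relevant" behavior (illustrative
// only; not the actual extension code). The script is injected solely on
// PubMed pages via the extension's manifest match patterns, and this runtime
// guard makes it exit immediately anywhere else.

function isPubMedPage(): boolean {
  const { hostname, pathname } = window.location;
  return hostname === "www.ncbi.nlm.nih.gov" && pathname.startsWith("/pubmed");
}

if (isPubMedPage()) {
  // Only now do we pay any cost: scan the page for PMIDs and ask a
  // (hypothetical) PubPeer endpoint whether comments exist for them.
  console.log("PubMed page detected; checking for PubPeer comments…");
}
```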

You can download any or all of them using the links below. (We have tried to make them work with your favorite institutional proxies, but if we missed one just send us an email and we’ll add it.)

Chrome                  Firefox                  Safari




Science self-corrects – instantly

Publishing a paper is still considered a definitive event. And what could be more definitive than publishing two Nature papers back to back on the same subject? Clearly a great step forward must have occurred. Just such a seismic event happened on the 29th of January, when Haruko Obokata and colleagues described a revolutionarily simple technique for producing pluripotent cells. A short dunk in an acid bath or brief exposure to any one of a number of stressors sufficed to produce STAP (Stimulus-Triggered Acquisition of Pluripotency) cells, offering an enormous simplification in stem cell research and opening new therapeutic avenues.

Nature were presumably conscious of a potential problem when processing the papers. The field of stem cell research had seen the huge Woo-Suk Hwang scandal. It was also shaken by the hasty three-day review by Cell of a paper from the Mitalipov group that contained several very careless errors (and whatever happened to those promised external verifications of its central claims?). Obviously, it wouldn’t do to get caught out like Science and Cell.

So Nature and its referees must have known that these papers would have a bullseye painted on their backs. And, knowing this, one suspects that they would have been extra careful. Their most trusted referees would take their time to ensure that the authors had dotted all the ‘i’s and crossed all the ‘t’s. The papers indeed underwent a respectable nine-month gestation from submission to acceptance; the authors even complained about the strictness of the referees. So how would Nature’s high(est) quality output get on in the big bad world? One would maybe have expected a few quibbles about finer points of interpretation, but surely the critics of pre-publication review would find little ammunition in such carefully prepared papers.

The papers came out to blaring publicity. The mainstream media were all over the story. Nature had another breakthrough; Riken and Harvard were revelling in their glory. But the internet allows quite accelerated feedback, from anybody and everybody. Inveterate stem-cell blogger Paul Knoepfler was immediately on the case, sharing his field’s bewilderment, a mixture of skepticism and hope. Our site received some chatter, mostly about the predicted consequences of the stated T-cell origin of the stem cells.

Then, on February 4th, less than a week after publication, an anonymous comment on PubPeer pointed out that a gel showed signs of having a lane spliced in. Although some people don’t consider this practice very serious, most ‘Peers’ take quite a dim view of it. In any case, an unannounced splice was potentially deceptive and probably caused people to examine the papers with a more critical eye. Over the following weeks, quite a number of comments highlighting small (inconsistent scale bars) and potentially serious (possible figure duplications) problems were posted. Numerous comments linked back to Japanese blogs, whose interest was understandable. At the very least, the papers were shown to contain a disconcerting number of unfortunate errors.

In parallel, Knoepfler was delivering the stem cell field’s weather report (turning gloomy), interviewing coauthors Charles Vacanti and Teruhiko Wakayama, and running an innovative crowd-sourced replication page, which soon showed that promises of simplicity and hopes of generality would not easily transfer from the abstract to the lab. We learnt that both Riken and Nature were ‘investigating’. Questions were also raised about papers by Vacanti and Wakayama.

Suspense turned to outright febrility following new comments posted on PubPeer this weekend, which may turn out to be the straws that broke the donkey’s back. They claim that figure panels in one of the Nature papers appear to have been duplicated from different experiments in Obokata’s PhD thesis (unfortunately not online). Knoepfler then ran a hugely entertaining piece speculating that Riken might choose to precipitate a retraction in order to avoid being blamed by Harvard, presumably according to the theory that senior authors are blameless victims, especially if they work at Harvard. We also heard that coauthor Wakayama now says he has lost faith in the work, despite being recently quoted as having replicated Obokata’s technique for himself.

One suspects that the denouement is not far off. But some conclusions can already be drawn. We see that post-publication peer review easily outperformed even the most careful reviewing in the best journal. The papers’ comment threads on PubPeer have attracted some 40,000 viewers. It’s hardly surprising they caught issues that three overworked referees and a couple of editors did not.

Science is now able to self-correct instantly. Post-publication peer review is here to stay.

Post-publication review from around the web (thanks Altmetric!!!)


In the right margin of some PubPeer pages there are now links directing to relevant post-publication peer review outside of the PubPeer database. Our agreement with Altmetric, who have been extremely generous with their data, is that these links are only shown on papers that have been commented on PubPeer.

We hope that this new addition will make PubPeer even more useful for finding, creating, aggregating and interpreting post-publication peer review.

Here are some good examples.





Open letter to Editor-in-chief of Nature

Dear Dr. Philip Campbell, Editor-in-Chief of Nature,

In response to Randy Schekman’s call to boycott top journals such as yours, you issued a press release in which you recognized that we scientists “…tend towards an over-reliance in assessing research by the journal in which it appears, or the impact factor of that journal”. We are very happy to hear that you recognize this problem. We have a suggestion for how the assessment of research could be refocused on its content rather than being evaluated by sometimes unreliable proxies such as the journal brand.

If journals such as yours actively promoted continued review of articles after their publication (post-publication peer review), they would provide scientists with another, more valuable, system with which to evaluate research.  Existing examples of such services include PubPeer and PubMed Commons. Top journals such as yours have the ability to promote this method of evaluation and could help us to replace our reliance on impact factor and other metrics with thorough evaluation of research.  We trust that top journals such as yours will continue to publish high quality research and this system of evaluation will help to make that more evident to everyone.

Please help to remedy this problem by actively promoting post-publication review.

PubPeer response to ACS Nano editors

The editorial board of the journal ACS Nano recently angered chemistry bloggers and many of their readers with an editorial implying that bloggers often make irresponsible accusations of fraud. The editorial went on to suggest that bloggers should refrain from such unfair and damaging behavior, and instead allow journal editors to process misconduct investigations with their usual diligence and deliberation. It has been suggested that the editorial was in fact an indirect response to the intense commenting on PubPeer about papers by one of the board members.

In our view the editorial rather clumsily conflated two separate processes. The first, commenting on published data, including highlighting any inconsistencies, is absolutely legitimate. The second, accusing people of misconduct, should indeed not be done lightly. Lumping the two processes together enabled the editors to imply that bloggers were making accusations of fraud, when in fact they were only commenting on published data.

PubPeer, a possible target of the editorial, illustrates the fault in this logic. Although we encourage anonymous comments on papers, we forbid any accusations of fraud, misconduct, etc. We have made this explicit on a page explaining that comments should be based on easily verifiable information. This policy has been enforced since the inception of the site. While it is true that some of the anomalies highlighted by commenters would require the most fabulous coincidences to have arisen innocently, accusations of misconduct are still not allowed. They are also unnecessary: any scientist worth his/her salt can interpret a screaming inconsistency.

The editorial also suggested that bloggers unfairly deny authors the possibility to defend their work. This is laughable to those following PubPeer. Commenters have repeatedly sought responses from authors, who are automatically notified of comments. Authors can always respond, either anonymously or signing their comments. However, they rarely exercise their right to reply except to clarify a simple misunderstanding.

We therefore believe the editorial cited no examples of bad behavior by “bloggers” for a very good reason: they are few and far between. Instead of loose accusations of fraud and authors being denied the chance to reply, they could only have pointed to long, restrained threads of careful scientific discussion, to which the authors have refused to respond. We would be very interested to hear the opinion of the board members about these threads and their content.

PubMed Commons

We think that although it is off to a rocky start by being overly exclusive, PubMed Commons is a great initiative and a big step in the right direction towards effective post-publication peer review. We shall of course be following the experiment with interest.

The obvious difference with PubPeer is the lack of any anonymity. PubPeer offers three levels of anonymity: registered academics can post signed or anonymous comments, and we also host moderated comments from unregistered contributors, who could be anyone, anywhere.

Our own experience suggests that strong anonymity is the key to encouraging useful comments, as do the failed experiments with journal-run commenting systems. We suspect that PubMed will eventually come to the same conclusion. Thus, a majority of comments on PubPeer are from unregistered contributors, whereas only a tiny minority are signed; registered academics commenting anonymously make up the balance.

Examination of the typical contents of PubPeer comments can easily explain why users choose anonymity. It turns out that the strongest motivation to comment arises when people see a problem with a paper, often a serious one indicative of incompetence, deceit or misconduct. But such critical comments are those most likely to attract reprisals. Most of the comments we receive would not have been made in the absence of the anonymity we provide.

Anonymity does allow low quality and bad faith comments to be made with impunity, but we have found this concerns only a small minority of comments and we feel that it is a necessary price to pay to encourage frank and worthwhile discussion.

Finally, if PubMed does start offering strong anonymity, that is likely to disrupt our plans (and save us money!) but for the moment we are continuing to develop PubPeer to facilitate all formats of scientific discussion.

*Note that PubPeer is open to all sciences, not just biomedical sciences.

Anonymous cowards vs the scientific establishment

Battle lines are being drawn on the internet, between the scientific establishment and volunteer vigilantes trying to impose their own vision of the scientific process through “post-publication peer review”.

On one side is the cream of the scientific aristocracy: a professor with a meteoric career trajectory at Imperial College London, one of the best universities in the world, and the top academic publisher, Nature Publishing Group. On the other side: a few anonymous malcontents carping on an obscure web site called PubPeer (welcome to our site!).

A couple of years ago, the professor’s group published in Nature journals a short series of papers reporting an ultra-sensitive assay, in principle for anything that could be recognized by an antibody. The assay, called “Plasmonic ELISA”, was hailed as a breakthrough in diagnostic medicine and is typical of the kind of sensational “high impact” work encouraged by Nature journals. The story was widely reported in the mainstream press.

However, a few scientists felt that the results were a little too good to be true, or at least that unexpected observations were not well explained and that supporting evidence was missing. One of the critics wrote to the journal in private, detailing those concerns. The journal considered them but, with its referees, decided that there was no substance to the complaints. So the publications were not only initially accepted for publication by the journal and its editors, after a typically rigorous review, but their quality was reconfirmed by a second round of review that specifically addressed the criticisms. A stronger proof of quality is hard to come by: Nature says “yes”, twice.

Scientists can be stubborn, especially when they think they are right and have been told they are wrong. The unhappy critic therefore took advantage of our site, which enables anonymous comments on scientific articles, to air his concerns. Site visitors found them convincing and chimed in with their own remarks and analyses. The flow has been essentially one way, with nearly all commenters agreeing that the publications appear to contain serious problems. The difference of opinion with the professor, Nature Nanotechnology and their referees could not be stronger. Only one side can be right. To date, no substantive rebuttal of the criticisms has been posted, though of course the authors and the referees of the papers would be free to defend their work and judgement, anonymously or otherwise. Have they not responded because dealing with the ignorant internet riff-raff is beneath them, or because they have no answer to the criticisms?

Who do you believe – a prestigious professor publishing in a high-impact Nature journal or the anonymous cowards? You can make up your own mind and join in the discussion here (please be polite and factual):

Example case showing why letters to the editor can be a waste of time.

A lively discussion has developed around some recent high-profile publications reporting some amazing results that were covered in many major media outlets. The comments on one of these papers exemplify perfectly why we created PubPeer and why we feel that post-publication peer review needs to have a formal home. We would like to take a minute to point out these comments and underline why we feel that they are so important.

The traditional routes for scientists to raise questions about a publication are to publish their own paper, write to the authors, or write to the journal. All of these avenues have their disadvantages. A new publication requires a major investment of time (and money), while interactions with the authors often reach an impasse. Both of these first two approaches also identify the critic and expose them to reprisals. The final option, writing to the journal, does mostly preserve anonymity, but at the cost of a loss of transparency. Below is a perfect example of this from one of the articles getting a lot of attention on PubPeer at the moment:

In that thread, Peer 2 recounts writing to Nature Nanotechnology, who, to their credit, did take the time to follow up the issue via correspondence with the authors. Ultimately however, although Peer 2 was not satisfied, the editor was, and closed the case. The authors, who have confirmed to us that they are aware of the comments, seem to feel that they have already addressed the issues with the editor of Nature Nanotechnology and that the scientific community should be content with that. However, at the end of the correspondence, the only people aware of the potential issues were Peer 2, the editor of the journal and the authors.

Peer 2 eventually posted the issues on PubPeer, where others interested in the field saw them and added to them with some very thorough reviews of the paper. We are not experts in that field, and we can’t predict what will eventually happen to the paper, but, through reading PubPeer, many others in the field now know about and can evaluate for themselves the issues raised. We consider this to be the main benefit of post-publication review: the dissemination of follow-up discussion so that it is available to all interested parties. Post-publication review can take many avenues (blogs, tweets, etc.) and we encourage the use of all of them. But we also feel that it is essential that others in the field are able to find these reviews easily. That is why a centralized database is essential. If you prefer one of the other methods of review, please also consider cross-posting your remarks to PubPeer, so that your colleagues are more likely to discover your comments.

PubPeer comments integrated on PubMed

We have received many requests asking us to reach out to NCBI and attempt to integrate PubPeer comments directly on PubMed. We have contacted PubMed, but unfortunately, even if they ever accept our proposal, it will take a very long time. We have therefore developed browser plugins that show PubPeer comments directly on PubMed while you browse. The code for this plugin is written to be very light and will only run when browsing certain pages of PubMed. The idea is that you will never notice that the plugin exists unless you are browsing papers on PubMed that have comments, in which case you will see which papers have comments. Once installed, it is as if PubMed had accepted our proposal and were displaying PubPeer comments directly.
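To give a flavor of what the plugin does, here is a minimal sketch of a content script that annotates PubMed article links (illustrative only, not our actual plugin code; the endpoint URL and the JSON response shape are invented for the example):

```typescript
// Illustrative sketch only; not the real plugin. The endpoint and its JSON
// response shape are hypothetical stand-ins for a "comments by PMID" lookup.
const COMMENTS_ENDPOINT = "https://example.org/pubpeer/comments"; // hypothetical

async function annotatePubMedLinks(): Promise<void> {
  // PubMed article links contain the PMID in their href, e.g. /pubmed/24056875.
  const links = document.querySelectorAll<HTMLAnchorElement>('a[href*="/pubmed/"]');
  for (const link of Array.from(links)) {
    const match = link.href.match(/\/pubmed\/(\d+)/);
    if (!match) continue;
    const pmid = match[1];
    // One request per PMID keeps the sketch simple; a real plugin would batch.
    const res = await fetch(`${COMMENTS_ENDPOINT}?pmid=${pmid}`);
    if (!res.ok) continue;
    const { count, url } = (await res.json()) as { count: number; url: string };
    if (count > 0) {
      const badge = document.createElement("a");
      badge.href = url; // link through to the PubPeer comment thread
      badge.textContent = ` [${count} PubPeer comment${count === 1 ? "" : "s"}]`;
      link.insertAdjacentElement("afterend", badge);
    }
  }
}

annotatePubMedLinks();
```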

We hope that this will help in the discovery of post-publication peer reviews and encourage more participation in creating them. To get the most out of the plugin, you can do three things:

1) Install the plugin.

2) Get everyone you know to install the plugin.

3) If you know of post-publication reviews of an article anywhere on the internet, add a link to it on the corresponding PubPeer page so others looking at PubMed with the plugin will stumble upon these reviews and use them to interpret the corresponding article.

We hope you enjoy the plugin; let us know what you think and how we can improve it. At the moment it exists for Chrome/Chromium, but we will port it to Firefox and Safari in the near future. (If you want to help in making the Firefox/Safari ports, please let us know.)

Here’s the Chrome plugin (it also works on Chromium of course):


This plugin is based on the articlenhancer plugin that PubPeer users are putting together. To get involved in that open source project, please visit their GitHub.

Some of these same users are also putting together a plugin for Zotero.  The idea is that Zotero users will be alerted to comments on the articles in their Zotero libraries.  That project is also open source and is hosted here.

Bloggers, direct PubPeer viewers to your blogs…

To make PubPeer as useful as possible for discovering post-publication peer review where it’s happening, we would like to encourage bloggers to add links to their blog posts on the appropriate PubPeer pages. We have thought a lot about how to make this process more automated for you bloggers, but any use of “trackbacks” would require you to find the appropriate PubPeer URI and paste it into your blog’s trackback plugin. We believe that this would not be any less time consuming than doing the copying and pasting in the opposite direction (the URL of your blog post into a PubPeer comment), but it would be enormously more time consuming for us to write the trackback code to catch the trackbacks. Therefore, we encourage bloggers to paste links to posts about published articles directly on the appropriate PubPeer pages. If you’re not a blogger but you know of a good blog post about an article, also feel free to post the link. It will probably be helpful to readers to add a sentence or two about why you’re posting the link to encourage readers to visit it.
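For context, receiving a trackback ping is itself only a few lines of code; the per-page routing, URL validation and spam moderation around it are where the real cost lies. A minimal sketch of a receiver, in TypeScript for Node (illustrative only; PubPeer has not implemented this):

```typescript
import { createServer } from "node:http";

// The Trackback protocol: a blog POSTs form-encoded fields (url, title,
// excerpt, blog_name) to a per-page endpoint and expects a tiny XML reply.
const server = createServer((req, res) => {
  if (req.method !== "POST") {
    res.writeHead(405);
    res.end();
    return;
  }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const params = new URLSearchParams(body);
    // A real receiver would also map the request path to a publication page,
    // validate the pinged URL and moderate for spam: the costly parts.
    console.log("Trackback ping:", params.get("url"), params.get("title"));
    res.writeHead(200, { "Content-Type": "text/xml" });
    res.end('<?xml version="1.0" encoding="utf-8"?><response><error>0</error></response>');
  });
});

server.listen(8080);
```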

We hope this will bring more readers to the appropriate blog posts and help the community by making it easier to discover post-publication reviews of your favorite articles. We are also working actively to make PubPeer comments appear directly on PubMed, the arXiv, and other article discovery and research engines. We don’t want to give away all of the details just yet, but we will soon unveil a way to see PubPeer comments directly while surfing PubMed.

Finally, if you feel that we’re wrong, and that a trackback system would be much easier for you, please let us know below in the comments or on our contact page.


Anonymity

Quite a few people have expressed disapproval of the anonymity of PubPeer comments (and of PubPeer’s organizers). This criticism is understandable, but we don’t think the problem is as simple to solve as some suggest. We examine the issues below.

Firstly, note that commenters on PubPeer DO have the option of displaying their real names (on a per-publication basis). So those who want the fame and glory associated with criticizing their department chairman’s papers are of course free to claim it in full. It seems that most people using PubPeer don’t choose that option; many, moreover, prefer the ultra-anonymous procedure of “unregistered submissions”, which do not require account creation. Our own experience of commenting on papers is consistent with this impression: people are more inhibited by fear than by any requirement to remain anonymous (which in any case does not exist). Many other factors also come into play, including simply finding the time to study papers with sufficient care to comment properly.

Authors whose papers have been criticized often complain vociferously about the anonymity. One suspects that they don’t always want to enter into a collegial discussion. Legal threats have already been received. We all know powerful scientists who cannot be guaranteed to be reasonable about criticism of their work. Peer review is anonymous for a very good reason.

Many people suggest providing nicknames and possibly a reputation system. We have of course considered both and may provide them in the future on an optional basis. But the big disadvantage we see is that, in the long run, they would destroy anonymity without the commenter wishing it, realizing it or being able to prevent it. This would happen because nicknames allow a commenter profile to be built up. Sooner or later, that profile or a single specific comment will allow somebody to identify the commenter. From that point on, all comments associated with the nickname can be traced back to the true commenter. A similar problem arises with a reputation system, although exploiting the exposed information is marginally more technical: it works by tracking comments whose reputations are correlated in time. Whenever your reputation is updated, that update occurs synchronously on all of your comments, so a commenter profile can be constructed even in the absence of nicknames. The end result would be the same. We therefore feel that nicknames and/or a reputation system would lull users into a false sense of security but ultimately and irreversibly compromise the anonymity they might wish to maintain.
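To make the reputation risk concrete, here is a minimal sketch of the correlation attack described above, under the assumption that an observer can periodically scrape a public score displayed next to every comment. None of this is PubPeer code; the data model is invented for illustration.

```typescript
// Illustrative de-anonymization sketch (assumed data model, not real code).
// If a reputation score is a property of the commenter but displayed on every
// comment, all of a commenter's comments update at the same instant. An
// observer who scrapes scores over time can therefore cluster comments by
// their update timestamps alone.

type Snapshot = { commentId: string; score: number; time: number };

function clusterByUpdateTimes(snapshots: Snapshot[]): Map<string, string[]> {
  // 1. Record the times at which each comment's displayed score changed.
  const lastScore = new Map<string, number>();
  const changeTimes = new Map<string, number[]>();
  for (const s of [...snapshots].sort((a, b) => a.time - b.time)) {
    const prev = lastScore.get(s.commentId);
    if (prev !== undefined && prev !== s.score) {
      changeTimes.set(s.commentId, [...(changeTimes.get(s.commentId) ?? []), s.time]);
    }
    lastScore.set(s.commentId, s.score);
  }
  // 2. Comments sharing an identical change-time signature very likely
  //    belong to the same (otherwise anonymous) commenter.
  const clusters = new Map<string, string[]>();
  for (const [commentId, times] of changeTimes) {
    const signature = times.join(",");
    clusters.set(signature, [...(clusters.get(signature) ?? []), commentId]);
  }
  return clusters;
}
```

Once any single comment in a cluster is attributed, for example from its content, every other comment in the same cluster is attributed too.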

For the reasons given above, we have decided to allow anonymous comments on PubPeer, although the site may miss out on “social” driving forces as a result. The absence of any commenter identification also means that readers simply have to read and understand any comments for themselves; it is not possible to rely on any reputation. Although this is a little disorienting, it does have the advantage of forcing readers to exercise their own judgment.

PubPeer organizers remain anonymous for similar reasons. Imagine comments criticizing papers by an organizer’s PI or department chairman: at the very least, pressure could be applied to censor the unwanted criticism. We simply don’t want to have to deal with that situation. And, just as with anonymous comments, the identity of the PubPeer organizers should have no bearing on the pertinence of any comments submitted.

Introducing PubPeer

The reviewing of published science occurs constantly and is now commonly called post-publication peer review. It happens in many places: on blogs such as this one, in review articles, at conferences around the world, and it has even been encouraged on the websites of some journals. Unfortunately, however, the recording and searching of these comments is inefficient and underused by the larger scientific community. To successfully influence the publication process, this database of knowledge has to accomplish two important tasks. First, it requires participation by a large part of a given scientific community, so that it reflects an average impression rather than an outlier’s. Second, it requires that the collective knowledge be centralized and easy to search, in order to find out what the community collectively thinks about an individual paper or a body of work. A recent initiative, the San Francisco Declaration on Research Assessment (DORA), echoes many of these same concerns.

In an attempt to assemble such a database, a team of scientists has put together a searchable website, PubPeer, that encourages participation by the larger scientific community. With a critical mass of usage, an organized system of post-publication review could improve both the process of scientific publication and the research that underlies those publications.

Those of us involved in the creation of PubPeer believe that, in an ideal world, a scientist’s main goal would be to discover something interesting about the world and simply report it to other scientists to use and build upon. This idealistic view of the scientific process is not matched in reality, however, because, for academic scientists, our publications count for much more than a simple contribution to the scientific record. For example, the majority of candidates are eliminated from consideration for tenure-track positions at major universities based on the names of the journals that have published their recent findings.

Review committees use this method because publications are the best measure of past and potential scientific output, but by potentially overvaluing “high-impact” journal names, these committees and study sections effectively defer to journal editors to help them choose the best candidates for jobs and grants. However, these journals select their articles based on more than just good science: the papers also need to be of “wider interest”, which can skew publications towards “exciting” results over those that are more measured, and perhaps more likely to be correct. The sometimes disproportionate attention given to a high-profile paper also makes it a tempting target for more unscrupulous scientists.

It will never be possible for us to thoroughly read all of the papers of every applicant to a job advertisement, nor all of the papers referenced in grant applications, but we can easily reduce the importance that journal names play in decisions and replace it with something more meaningful and directly in our hands rather than in the hands of publishers. After reading any publication, we all have impressions about whether the reported observations are useful, interesting, elegant, irrelevant, flawed, etc. If the scientific community interested in a given publication were able to compile all of its impressions of that publication, that collective information would be infinitely more useful to search committees and study sections than the name of the journal in which it was published.

Outlined below are a few aspects of PubPeer that differentiate it from current post-publication review systems and that will hopefully make it more widely used.

  • A key issue that we have decided on is the importance of anonymity. One of the reasons we have never commented on articles directly on journal websites is that the colleagues whose publications we are most qualified to comment on are likely to be reviewing our publications and grant proposals. Even the most well-intentioned criticism could irk these potential reviewers. Since publications are so precious to everyone’s future career advancement, there is a huge psychological barrier for early-stage scientists to attach their names to any comments that could be considered critical. Therefore, in order to encourage as much participation in this post-publication review process as possible, PubPeer allows comments to be left anonymously if someone is so inclined. Critics of this feature sometimes email us to point out that anonymity allows for baseless slander, or to proclaim that a commenter’s name is essential for judging the validity of a comment. We strongly disagree with this second point, because good comments are good regardless of whether they come from a senior scientist or a graduate student. We can all judge the content of comments for ourselves, and on PubPeer it is possible to vote the good comments up and the bad comments down into the noise, so that the community as a whole can decide together what is worth paying attention to (a minimal sketch of such vote-based ranking appears after this list). Baseless defamation, rumors, and ad hominem attacks are not tolerated at all and are immediately removed from the site. The people involved with PubPeer are all active scientists, and we are trying to remain anonymous for the time being for several reasons: 1) we can imagine scenarios in which pressure could be put on us to remove or alter comments if our identities were known, and 2) we would like to protect our families and private bank accounts from the more litigious among our readers.
  • A main drawback of the current practice of post-publication peer review is that reviews can be spread across many different blogs and journal websites. If one wants to know what the community thinks of a given body of work (whether a discipline, an author’s output, a university department, etc.), it takes a major time investment to track down the information from all of these different sources. PubPeer provides a centralized and easily searchable database that can hold comments on any published article.
  • PubPeer also provides a system of alerts. In order to be effective, authors and other interested readers should be able to be alerted to comments on their favorite publications or topics. PubPeer automatically notifies corresponding authors of new comments on their articles, and anyone can set up email alerts on articles they find interesting.
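As mentioned in the first point above, here is a minimal sketch of vote-based ranking. The field names and threshold are invented; this illustrates the idea rather than PubPeer’s actual implementation.

```typescript
// Illustrative vote-based ranking; field names and threshold are invented.

type PeerComment = { id: string; text: string; upvotes: number; downvotes: number };

// Order comments by net votes and collapse those the community has voted
// down below the threshold, so weak comments sink "into the noise".
function visibleComments(comments: PeerComment[], hideBelow = -3): PeerComment[] {
  const net = (c: PeerComment) => c.upvotes - c.downvotes;
  return comments
    .filter((c) => net(c) > hideBelow) // hide heavily downvoted comments
    .sort((a, b) => net(b) - net(a));  // best comments first
}
```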

This was reposted from