PLOS Science Wednesday: Hi Reddit, we're Dr. John Ioannidis and Dr. Robert Kaplan, and we're here to discuss the importance of improved reporting in scholarly research for Open Access Week

Abstract

Hi Reddit,

October 19 – 25 is Open Access Week, and this year's theme is "Open for Collaboration". PLOS authors Dr. John Ioannidis and Dr. Robert M. Kaplan are leading this AMA to discuss how more comprehensive reporting in scientific research can lead to more study replication, stronger statistical methods, and improvements to scientific standards, all of which are hallmarks of open access publishing.

In "How to Make More Published Research True", an essay in PLOS Medicine, Dr. John Ioannidis examined the factors behind false or exaggerated results in research and proposed solutions for improving the efficiency and value of scientific investigation.

Dr. Robert M. Kaplan co-authored a PLOS ONE study, "Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time", which found that the proportion of null findings increased after the National Heart, Lung, and Blood Institute adopted more transparent reporting standards.

Here is a little more information about the participants for this week's PLOS Science Wednesday AMA:

Dr. John Ioannidis is one of the two Directors of the Meta-Research Innovation Center at Stanford. He also holds the C.F. Rehnborg Chair in Disease Prevention at Stanford University, where he is Professor of Medicine and of Health Research and Policy, Director of the Stanford Prevention Research Center at the School of Medicine, Director of the PhD program in Epidemiology and Clinical Research, and Professor of Statistics at the School of Humanities and Sciences. Dr. Ioannidis has served on the editorial boards of more than 20 major journals, and he is a member of the PLOS Medicine and PLOS ONE editorial boards. His work has been cited over 85,000 times in the scientific literature, and there are currently more than 1,500 new citations to his work every month. His 2005 PLOS Medicine paper, "Why Most Published Research Findings Are False", has been accessed 1.5 million times.

Dr. Robert M. Kaplan served as Chief Science Officer at the U.S. Agency for Healthcare Research and Quality (AHRQ) and Associate Director of the National Institutes of Health, where he led the behavioral and social sciences programs. He was formerly Distinguished Professor of Health Services and Medicine at the University of California, Los Angeles, where he led the UCLA/RAND AHRQ health services training program and the UCLA/RAND CDC Prevention Research Center, and was Chair of the Department of Health Services from 2004 to 2009. From 1997 to 2004 he was Professor and Chair of the Department of Family and Preventive Medicine at the University of California, San Diego. He is a past president of several organizations, including the American Psychological Association Division of Health Psychology, Section J of the American Association for the Advancement of Science (Pacific), the International Society for Quality of Life Research, the Society of Behavioral Medicine, and the Academy of Behavioral Medicine Research. Kaplan is a former Editor-in-Chief of Health Psychology and of the Annals of Behavioral Medicine. His 18 books and over 500 articles or chapters have been cited more than 28,000 times. Kaplan is a member of the National Academy of Medicine (formerly the Institute of Medicine).

We will be taking your questions at 1 pm ET (10 am PT, 5 pm UTC). Ask Us Anything!

Hi, I am just finishing up a PhD in neuroscience and have worked in other life science fields. After much discussion with colleagues, it seems that the future of scientific reporting has to be in real-time online publication of data (à la the physical sciences' arXiv). As far as I can see, life science research is currently suffering a major crisis. More than ever before, the motivation for doing basic research is to publish in high-profile journals and achieve the commensurate social status. The publish-or-perish work environment is driving most true scientists away and rewarding those who are happy to fudge data and make tenuous interpretations in order to build a 'sexy' narrative for the likes of Nature, Science, and Cell. The people who succeed in this climate are not usually willing to make a stand, or worse, can't see a problem with the current system of reporting.

I have a few questions:

1) To what extent do you think the current business models of traditional scientific publishing houses directly conflict with the interests of rigorous impartial science?

2) And do you think these traditional publishers are dragging their feet in terms of fostering the progression in reporting that basic life science research so desperately needs?

3) Finally, do you think open access publishing as it currently stands is really part of a useful progression towards the elimination of the 'impact factor as de facto currency' model of basic research incentivisation?

Thanks

thinkscout

John Ioannidis: This is a very insightful comment. I think that there are indeed problems with the current incentive system in scientific publishing and in how scientists advance in their careers. Journals and publishing houses are perhaps part of the problem, but they are also part of the solution. For example, many journals have made progress in enhancing transparency, sharing data and protocols, using better standards for methods and statistics, and focusing on reproducible research rather than just big claims. Many journals and their editors are actually at the forefront of these changes. It is an ongoing struggle, and I would like to see journals as allies in this process rather than as competitors or enemies that need to be eliminated.


Hi Drs. Ioannidis and Kaplan,

Increasing rigor in both scholarship and peer-review is always a noble pursuit. However, I'm also curious about your thoughts on the current demands of the tenure and promotion system as a factor involved in increasing academic dishonesty. If there were fewer demands and more time to work on individual projects, could this improve reporting?

Thanks for your time, SW

shadowwork

John Ioannidis: The tenure and promotion system can be a key determinant of what type of science we get. If we reward people for the right reasons, this can be a powerful game-changer. Tenure could be based on a combination of criteria that I called PQRST (productivity, quality, reproducibility, sharing, translation) in a paper I wrote in JAMA with Muin Khoury.


Bob Kaplan: I agree with Dr. Ioannidis and recognize the need for promotion systems to evolve. Sometimes I fear that we have met the enemy and discovered it is us. To a large extent, decisions about promotion and tenure are governed by peers. And, as a scientific community, we have undervalued team science, replication, and evidence integration (as opposed to first- or last-authored original papers). I completely agree with the Ioannidis-Khoury PQRST criteria. We just need more of our colleagues to get on board.


As increased statistical scrutiny is adopted in science, it seems like big data methods are going to be more prominent in all study designs. This necessitates a large increase in computational hours associated with each publication. With the sequester and general reductions in funding and overhead costs, should individual award amounts be increased? Is there any way to get statistically sound data sets at current funding levels (n = 60 per condition is not uncommon for sufficient confidence) when a statistician and/or computational scientist may be a new requirement for all study designs moving forward?

softmatter

Bob Kaplan: Big data creates exciting opportunities, but there may also be some problems. In particular, the opportunity to obtain spurious results because there are multiple outcome variables creates challenges. In some large data sets, investigators have the opportunity to select between literally thousands of variables. Biases associated with sampling problems can get magnified in some studies.
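
To make the multiplicity problem concrete, here is a minimal simulation sketch (all numbers are illustrative and not drawn from any study): test 1,000 outcome variables that are pure noise against a treatment indicator, and roughly 5% come out "significant" by chance alone.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_subjects, n_outcomes = 200, 1000

    treated = rng.integers(0, 2, n_subjects).astype(bool)
    outcomes = rng.normal(size=(n_subjects, n_outcomes))  # pure noise: no true effects

    pvals = np.array([
        stats.ttest_ind(outcomes[treated, j], outcomes[~treated, j])[1]
        for j in range(n_outcomes)
    ])

    print(f"'significant' at p < 0.05: {(pvals < 0.05).sum()} of {n_outcomes}")  # expect ~50
    # A family-wise correction such as Bonferroni removes essentially all of
    # these false hits (at a cost in power):
    print(f"significant after Bonferroni: {(pvals < 0.05 / n_outcomes).sum()}")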


John Ioannidis: This is indeed a major challenge. For most applications of big data, I suspect that we need to support team science and collaborative work among many scientists. Fragmentation may result in many underpowered studies at high risk of both false negatives and false positives. Team science will also enhance the ability to standardize processes, which will hopefully reduce noise and error.
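
As a rough illustration of the power cost of fragmentation (assuming, for the sketch, a modest standardized effect of d = 0.3; the sample sizes are hypothetical), a small fragmented study detects the effect only about one time in five, while a pooled collaborative sample detects it almost surely:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    for n in (30, 60, 200, 600):
        power = analysis.power(effect_size=0.3, nobs1=n, alpha=0.05)
        print(f"n = {n:3d} per group -> power ~ {power:.2f}")
    # Roughly: n=30 -> 0.21, n=60 -> 0.37, n=200 -> 0.85, n=600 -> 0.99.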


  1. How do we prevent poorly designed research methods from being published?

  2. What do you think of open access journals?

  3. Does poor randomization or improper blinding completely invalidate research?

  4. What are some other problems you think are overlooked when we talk about research?

Thanks for doing the AMA!

motorboat7

John Ioannidis: Thank you! All four questions are excellent.

  1. How do we prevent poorly designed research methods from being published? Research methods are an interesting category of papers whose robustness is not as well studied. In most cases, there is a rather small community of scientists who can really evaluate a new proposed method. Also, most new research methods don't get applied in the community, even though they may be excellent and better than those that are widely used. I think we need to do some more thinking about how to improve this.

  2. What do you think of open access journals? In principle I am in favor of open access, but any journal can be good or bad, regardless of its open access status. It depends more on whether the journal has rigorous peer review and whether it adopts standards for high-quality methods and reporting, and less on whether it is open access or not.

  3. Does poor randomization or improper blinding completely invalidate research? They are both very serious problems, and very common as well, and they can lead to biased results. "Completely" is perhaps too extreme, since a study may have a zillion faults that cancel themselves out so that it eventually gets the right answer. But the chances of getting a wrong result are high.

  4. What are some other problems you think are overlooked when we talk about research? I wish I had 20 days to talk about this!


More of a personal question for me - I'm finishing a PhD thesis on genetics in cancer. I'm not happy about writing the final paper that I need to get my PhD. I'm reasonably sure that the original data is okay, but the data were then imputed, and after that we applied statistical tests. After the first run, there was no significant correlation found. Then a supervisor suggested changes to the statistical analysis, and now we do have a significant correlation.

I feel that after all this manipulation of the data, the end result is just not to be trusted in any possible way. In the end, the calculated effect is only small, so it's not a very noteworthy result and will not be published in a high-impact journal anyway, but ...

I could:

  1. work on publishing the article; and I receive my title.

  2. refuse to publish the article; and then not finish my PhD and not get a title (which wouldn't matter a lot for me personally, since I'm no longer working in that field).

My supervisors aren't a lot of help; they want this article published so that I can finish my thesis.

PetraLoseIt

Bob Kaplan: I think your responsibility is to report the results as accurately as you can. It is time to destigmatize null results. In properly powered studies, they are of equal importance to statistically significant findings.


John Ioannidis: Thank you for sharing this situation. The devil can be in the details, so I hope I am not giving wrong advice here, but in principle I am in favor of publishing all work regardless of the results and of being explicit about what analytical options have been pursued and how. So, I think you should publish the work you have done and state that the exact result depends substantially on the way the data are analyzed. Differences in results based on different analyses can often offer valuable insights. I assume that the two analyses are both statistically reasonable; if one of them is wrong, obviously you should not use it, regardless of whether it gives nice results. If both of them are reasonable, then you need to ask why they give different results and in what ways they differ. Also, in the paper you should be transparent about which analysis was pre-specified and which one was exploratory.
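
A small simulation sketch of the general hazard here (the four analyses and all data are hypothetical and pure noise): if one quietly tries several "reasonable" analyses and reports only the best, the effective false-positive rate climbs well above the nominal 5%.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def best_of_four(n=100):
        x = rng.normal(size=n)                        # exposure
        y = rng.normal(size=n)                        # outcome, truly unrelated to x
        keep = np.abs(x) < 2                          # "outlier-trimmed" variant
        pvals = (
            stats.pearsonr(x, y)[1],                  # analysis 1: linear correlation
            stats.spearmanr(x, y)[1],                 # analysis 2: rank correlation
            stats.ttest_ind(y[x > 0], y[x <= 0])[1],  # analysis 3: split at zero
            stats.pearsonr(x[keep], y[keep])[1],      # analysis 4: trimmed correlation
        )
        return min(pvals)

    hits = np.mean([best_of_four() < 0.05 for _ in range(2000)])
    print(f"false-positive rate reporting the best of 4 analyses: {hits:.3f}")
    # Typically well above the nominal 0.05, even though the analyses are correlated.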


In your opinion, do the benefits of open access journals make up for reduced peer review requirements?

UmiNotsuki

John Ioannidis: I think that there is no reason to correlate the strength and rigor of peer review requirements with open access. Peer review is useful; I think we need better and usually more, not less, peer review, or at least meaningful peer review: e.g. checking the methods and the data rather than just making abstract comments about style and rhetoric. Many open access journals, e.g. the PLOS journals, have extremely high-quality peer review. Open access should not be confused with predatory journals that simply publish anything for a fee without any serious review.


Hello! In Nate Silver's The Signal and the Noise, he mentions that up to 2/3 of published research cannot be replicated. I am sure there are some important caveats to that, but I am curious for your thoughts. First, is that a commonly accepted stat? Second, is that percentage declining over time as better research methods become more standard?

Thanks!

esbforever

John Ioannidis: Thank you for mentioning this. For most scientific fields, replication is not done frequently. In fields where empirical replication efforts have been undertaken on a systematic scale, the failure-to-replicate rates are as high as you mention or higher. For example, in psychology, two-thirds of 100 high-profile papers could not be replicated. In preclinical studies (e.g. oncology drug targets), failure-to-replicate rates have varied from 75% to 89%. Candidate gene studies failed to replicate about 99% of the time when we moved to the genome-wide era with more rigorous methods. But there is large diversity across scientific fields, so I am sure that there are fields where almost everything would replicate and others where almost everything would fail to replicate, if we had attempted replications. The second question is very difficult to answer, because the empirical data are limited and interest in replication has been rekindled only relatively recently. Based on theoretical considerations, I would expect that over time the number of "correct" scientific results increases, because we have more scientists, better methods, and more analyses done; but it could be that the proportion of "correct" results is actually declining, because increasingly we are working in more difficult areas of complex research where the yield of true discoveries may be relatively low and the noise is more prominent. But I may well be wrong in this speculation, and I also believe that the answer would differ across different fields.
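
The arithmetic behind such failure rates can be sketched with the positive predictive value framework from Dr. Ioannidis's 2005 paper. In the no-bias case, PPV = (1 - beta) * R / ((1 - beta) * R + alpha), where R is the pre-study odds that a probed relationship is true, 1 - beta is the power, and alpha is the significance threshold. The example values below are illustrative:

    def ppv(R, power, alpha=0.05):
        """Post-study probability that a claimed positive finding is true (no-bias case)."""
        return power * R / (power * R + alpha)

    # Exploratory field probing many candidates (e.g. candidate genes):
    print(f"R = 1/1000, power 0.6 -> PPV = {ppv(0.001, 0.6):.3f}")   # ~0.012
    # Confirmatory trial of a roughly 50:50 hypothesis with good power:
    print(f"R = 1,      power 0.8 -> PPV = {ppv(1.0, 0.8):.3f}")     # ~0.941

With very low pre-study odds, almost 99% of "positive" findings are false even before any bias is considered, which is consistent with the candidate-gene experience mentioned above.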


Bob Kaplan: I do believe that the quality of scientific investigations and reporting is improving. I also have concerns about the "pass-fail" criterion for replication. If you do an experiment and the result is significant at p = 0.049, and I do the same experiment and obtain p = 0.051, it might be concluded that I could not replicate your experiment. In fact, multiple executions of the same experiment will not produce exactly the same results. Perhaps what is more interesting is the distribution of results across similar experiments and the analysis of components of the studies that may have driven the variation in outcomes.
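
A quick simulation makes this vivid (illustrative parameters: a real effect of d = 0.4 with n = 50 per arm): identical, honest replications of the same true effect yield widely varying p-values, with only about half crossing 0.05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    pvals = np.array([
        stats.ttest_ind(rng.normal(0.4, 1, 50), rng.normal(0.0, 1, 50))[1]
        for _ in range(10000)
    ])

    print(f"share of replications with p < 0.05: {(pvals < 0.05).mean():.2f}")  # ~0.5
    print(f"p-value quartiles: {np.percentile(pvals, [25, 50, 75]).round(3)}")
    # Half of these borderline-powered replications "fail" even though the
    # effect is real every single time.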


Thanks for doing this folks -

I have a bit of a philosophical question that overlaps with how business works.

There are paywalls blocking a lot of scientific literature and evidence - and I understand the reasons behind that most of the time.

But my question is, in your opinion - can we ever lay claim to true evidence and trial transparency (forgetting for a moment intentionally buried data and publication bias) when a lot of content is blocked off by a paywall?

And of course, there's a reason for those paywalls. Money needs to come from somewhere. Is striking a better balance between transparency, open access and a viable business model possible?

When I've been researching something, I can't tell you how many times a potentially great line of inquiry has been halted by this happening - despite me having two separate sets of access permissions (Athens & University Alumni).

Collaboration is limited if we can't see each other's work. What do we do here?

Many thanks.

nocaph

John Ioannidis: I am definitely in favor of more widespread open access, and lack of open access poses serious obstacles. However, I suspect that the major problem in most cases is not access to the published paper but access to all the protocols, raw data, materials, and software that are the real scientific work. The published paper is usually more of an advertisement that "this research happened", just the tip of the iceberg.


Bob Kaplan: This is an excellent question and an important challenge. Study registration does help address this problem. In the case of large randomized clinical trials, registration can identify which studies were initiated. The NIH will now require data to be reported in ClinicalTrials.gov independently of journal publication. PCORI will also require reports of study outcomes within 90 days of study completion.


Hi, I am a recently graduated MD with a strong interest in public health research. Some of the organizations trying to improve reproducibility in the sciences are the Center for Open Science, the AllTrials initiative, and the RIAT group. Do you have any other recommended organizations in this vein? As well, do you have any advice for someone looking for training experience in clinical epidemiology, especially as it relates to open access, reproducibility, and improving trial transparency?

bythelake23

Bob Kaplan: The Center for Open Science does an outstanding job. I am also a very big fan of the CONSORT group of guidelines. See http://www.consort-statement.org. In addition, you should look at the methodology guidelines developed by PCORI.


Do you think that offering better access to scholarly work will improve the traditional and nontraditional media reporting in any way?

As a side, I subscribe to a few journals that I skim through, and I read an article occasionally if it seems interesting, but I would really love access to more than just a few journals.

politicize-me

John Ioannidis: Thank you for the interesting comment. As I stated above, I am in favor of open access, but it will not solve all problems. Especially for the public, reading the full text of a paper will not allow most people to see where the problems are. Even top scientists may not be able to see the problems unless they have access to protocols and raw data and are willing to spend a lot of effort probing into the work. This being said, I have seen many press releases and news articles where, if someone could just take a 30-second look at the full paper, it would have been obvious that the paper has very little to do with the publicized hype.


Are there organisations whose role could be to check whether the results of a study are true or false, in order to filter them a little?

I know that the number of publications each day is enormous and that researchers would prefer to be in an open environment, but if the WHO or another organisation could give a "seal of confirmation", would it help?

Also, I don't have great knowledge of how medical research is done, but I would think that replication of an experiment or test should be a mandatory step before publishing a serious paper. Is that not the case?

wowy-lied

John Ioannidis: Thank you for the interesting questions. There is no true-or-false assigning service, and for most papers it would be impossible to give such a stamp based on what they present. In the absence of replication of the specific claims made by a paper, one can only say that for papers with similar strengths and weaknesses, the chances of being correct are 5%, or 50%, or 90%. Thus replication is extremely important, and several fields, e.g. genome epidemiology, have adopted it as a sine qua non: a result is not trusted unless it can be replicated. Other fields are learning that it is important, as in the case of the reproducibility project in psychology published two months ago in Science. In some cases, replication may be very difficult or inefficient; e.g. a $1 billion randomized trial with 10 years of follow-up would be difficult to repeat.


Bob Kaplan: Replication is an important part of the scientific process. However, many of the most important randomized clinical trials in medicine cost tens of millions or even hundreds of millions of dollars. Unfortunately, some of the trials that most influence medical practice are least likely to be replicated.


There is a huge volume of published material in every field these days, with the number of articles increasing exponentially. It is becoming increasingly difficult to find studies relevant to specific topics, let alone evaluate individual articles within the bulk of existing studies. This problem is exacerbated (I think) by less rigorous standards for some open access journals (certainly not PLOS journals).

Do you have any thoughts on how to deal with the existing volume of articles that have already been published with potentially questionable statistics and other problematic methods?

archaeofieldtech

John Ioannidis: Thank you for the interesting question. I agree that there is a vast amount of poor-quality work published out there. When it comes to predatory journals, I think one can safely just ignore this literature. The vast majority of other journals, which are not predatory, will each publish a mix of high-, moderate-, and low-quality work. Most of the time, the data, protocols, and required materials are not available to check the claims in depth, and even when they are available, it would be an enormous effort to do this for each and every paper. Therefore, I think it makes sense to assess the credibility of a paper based on its basic features: aspects such as design, presence of randomization, sample size, blinding, proper reporting, and others, depending on the type of study at hand. One could then get a ballpark estimate that a given study is, say, 10% likely to be correct. For influential studies, e.g. those that lead to clinical applications that may save lives or kill people or affect major outcomes, and those that get highly cited (and thus affect many other papers, lines of investigation, and funds allocation), I think there should be some effort to replicate/reproduce their results independently; otherwise they can do major harm if they are wrong.
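
One way to sketch how independent replication changes that ballpark credibility (assuming, illustratively, a replication attempt with power 0.8 at alpha = 0.05, so a positive result multiplies the odds that the claim is true by (1 - beta) / alpha = 16):

    def updated_credibility(prob_true, power=0.8, alpha=0.05):
        """Bayesian update of a claim's credibility after one positive replication."""
        prior_odds = prob_true / (1 - prob_true)
        posterior_odds = prior_odds * (power / alpha)  # likelihood ratio of a positive result
        return posterior_odds / (1 + posterior_odds)

    p = 0.10                                           # the ~10% ballpark mentioned above
    p = updated_credibility(p)
    print(f"after one successful replication: {p:.2f}")   # ~0.64
    p = updated_credibility(p)
    print(f"after a second:                   {p:.2f}")   # ~0.97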


Bob Kaplan: This is certainly an enormous problem in science today. Increasingly, we have become dependent on systematic reviews of evidence. The emerging development of high-quality meta-analyses has been helpful in coming to understand some areas of investigation.
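
For readers unfamiliar with the mechanics, the core of a fixed-effect meta-analysis is a simple inverse-variance weighted average; the sketch below uses made-up effect estimates and standard errors, not figures from any real review:

    import numpy as np

    effects = np.array([0.30, 0.10, 0.25, -0.05, 0.18])  # per-study effect estimates
    ses     = np.array([0.15, 0.08, 0.20, 0.12, 0.10])   # their standard errors

    w = 1.0 / ses**2                                     # inverse-variance weights
    pooled = (w * effects).sum() / w.sum()
    pooled_se = np.sqrt(1.0 / w.sum())

    print(f"pooled effect = {pooled:.3f} "
          f"(95% CI {pooled - 1.96*pooled_se:.3f} to {pooled + 1.96*pooled_se:.3f})")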


Hi there, I am a big fan of your work. Bias is one of the biggest problems in science today. Do you think it is possible to get a relatively unbiased scientific literature without pre-registration of studies and mandatory publishing of results?

knappis

John Ioannidis: Thank you! I am in favor of pre-registration and of full publication of results. For some types of research these are strong imperatives; e.g. for randomized trials, anything short of this would perpetuate a misleading literature. For exploratory research, I doubt whether pre-registration would work, because the research protocol is not really pre-specified; typically, there simply is no protocol. Asking people to pre-register something as pre-conceived when it is not is asking them to lie, and may create even more problems. I think we should foster openness and transparency and try to make sure that investigators accurately disclose whether they are conducting exploratory, "blue sky" research or have pre-specified plans. Exploratory research may also lead to interesting leads and discoveries, but these need to be thoroughly replicated before anyone can trust them.


Bob Kaplan: Excellent question. I do think that preregistration of studies and systematic reporting guidelines are our best hope for reducing bias.


Do either of you see a value in having both open access and paid non-open access journals? In other words, do non-open access journals benefit scientific inquiry in ways that open access journals do not?

DrDoSoLittle

John Ioannidis: Interesting question! I wish we had a randomized trial to answer this, but even then it would need a lot of thought to decide on the appropriate outcomes and other essential aspects of the design. In principle, I am in favor of open access. I can't think of any obvious reason why letting more people see scientific papers would necessarily do harm (even if many or most of these papers have biased and inflated results!). However, if an obligatory transition to having all journals be open access requires a lot of financial and other resources, the question is whether these resources would be taken away from other aspects of research. In all, the plan to eliminate all paid non-open-access journals seems difficult for me to address; I would like to consult with people who have the financial/business expertise (and I guess they would not have randomized trials either!).


I will take a very specific view of replication. What do you do when replicating experiments if you have no idea about the environmental factors at play? Since my field is plant breeding, we usually estimate broad-sense heritability, which shows how much genetics, environment, or a combination of the two drove the results. How does this get addressed in systems where it is not considered? Could some failed replications be due to environmental interactions that people are unaware of?

jasperjones22

John Ioannidis: Thank you for mentioning this. I have no expertise in plant breeding, so this is a perfect opportunity to make a fool of myself in trying to give an answer. I have some experience in other fields of human genetics, where indeed we also have both environmental and genetic influences and their possible interaction. If one does not account for the environment and its interactions, this will potentially manifest as heterogeneous results in populations/settings where these interacting factors differ and are not accounted for. However, empirical studies suggest that the reason genetic associations failed in the candidate gene era was mostly bias, not lack of consideration of the environment. Also, the successful replication of genetic associations in the current era, where we have corrected these biases, does not seem to be hindered by environmental heterogeneity. Documenting interactions is not easy; it needs even larger sample sizes and very accurate measurements of genes, environment, and phenotypes. I think a useful overview of this topic is: Boffetta P, Winn DM, Ioannidis JP, Thomas DC, Little J, Smith GD, Cogliano VJ, Hecht SS, Seminara D, Vineis P, Khoury MJ. Recommendations and proposed guidelines for assessing the cumulative evidence on joint effects of genes and environments on cancer occurrence in humans. Int J Epidemiol. 2012;41(3):686-704. doi:10.1093/ije/dys010.
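
A toy simulation of the questioner's scenario (hypothetical numbers: a genotype whose 0.5 SD effect exists only under an environmental exposure) shows how two perfectly executed studies at sites with different exposure rates can reach opposite conclusions, even though both are correct:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def site_pvalue(p_exposed, n=500):
        """One site's study: the genotype effect exists only with exposure."""
        g = rng.integers(0, 2, n)                  # genotype (0/1)
        e = rng.random(n) < p_exposed              # environmental exposure at this site
        y = 0.5 * g * e + rng.normal(size=n)       # pure gene-by-environment effect
        return stats.ttest_ind(y[g == 1], y[g == 0])[1]

    print(f"site where 90% are exposed: p = {site_pvalue(0.90):.4f}")  # usually 'significant'
    print(f"site where  5% are exposed: p = {site_pvalue(0.05):.4f}")  # usually 'null'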


Thank you for doing this AMA! I would like to know what is meant by "replication culture"? And how is this supposed to be implemented in the life sciences?

LadySovereign

John Ioannidis: Thank you for mentioning this. I use the term "replication culture" to describe how a field uses replication practices as a key aspect of its way of conducting and publishing research. There are different aspects to this: re-analysis of existing data and existing analyses; replication in a different sample or population/study by the same or by different investigators; and other kinds of reproducibility checks. I think replication should have a central place in the life sciences. When it is cheap and easy, it can be done routinely, as in the case of genetic associations. When it is expensive and requires a lot of time and effort, one has to prioritize research results for replication. As I mentioned above, personally I would prioritize for replication results that are very influential (e.g. are highly cited) and those that may make a difference for major health outcomes. Similarly, I don't think we have the resources to re-analyze every little unimportant study result that gets published, but for highly cited papers and papers that reach a point where decisions will be made that affect humans, e.g. testing in clinical trials, independent re-analysis would be indicated to make sure that we are on the right path.


Dear Drs. Ioannidis and Kaplan,

Thank you for doing this AMA.

How do you feel about pharma sponsored papers? Do you think they can ever truly be considered research articles?

Thanks.

ScaredTuna

Bob Kaplan: Thanks for this question. There has been a lot of suspicion about industry-supported research. In our analysis, we focused on studies that were funded by NIH; however, many of these trials were cosponsored by the pharmaceutical industry. Others had shown that trials funded by industry were less likely to be published within 2 years of study completion and were more likely to report positive outcomes than trials funded by other sources. In our analysis, we did not find evidence that industry cosponsorship was associated with the probability of a positive result: 23 of 25 (92%) of the NHLBI trials published after 2000 had partial industry sponsorship or contribution of medications, yet all but two of these trials obtained null results.

Previous financial relationships between investigators and industry are also an unlikely explanation, although our data did not allow a strong evaluation of this hypothesis. Prior to 2000, these relationships were reported for only 3% of the trials, in contrast to 72% after 2000. Now that disclosures are required, it is apparent that some connection between investigators and industry is almost universal. Our threshold for coding a connection was low: a trial was placed in this category if any of the many investigators had a financial relationship with industry, including speaker bureaus and research sponsorship. If industry exerted influence on study outcomes, we might expect bias in favor of treatment. Yet, despite these relationships, nearly all recent trials report null results. Financial relationships between investigators and industry should have led to more positive trials and do not explain the trend toward null results.


John Ioannidis: Thank you for the question and the provocative statement. I have seen many studies sponsored by pharma that are excellent, and many superb scientists work in pharma or are supported by pharma, so I would feel uneasy dismissing all pharma-sponsored papers as not being research. Even though I personally do not get pharma funding for my work, I need to acknowledge that in the current environment pharma typically has no option but to sponsor the types of studies that they sponsor. They are under regulatory pressure to deliver favorable results for their own products. Not surprisingly, they deliver favorable results for their products and make sure that they pass the requirements. In fact, on checklists of "quality", pharma-sponsored trials currently score higher than academic studies. However, this whole approach is misleading. I would rather the burden of testing drugs were left to public funders and non-conflicted investigators. This would avoid biases that are not captured by the "quality" checklists, e.g. choice of design, outcomes, and comparators. It would also free a lot of funds for pharma to invest in real R&D to find drugs that are then tested by independent public research structures. The current funding system is crazy: NIH funds people to find drugs, but nobody cares whether these drugs work; everyone cares about statistically significant results, which are easy to obtain through manipulation. Then these spurious candidate wonder drugs are taken up by pharma, and pharma funds trials to prove that its drugs are great. Pharma should fund people to find drugs, and NIH (or an equivalent agency that uses research funds to promote the public good and public health) should fund evidence-based researchers to test these drugs rigorously and appraise the evidence for them.


You can tell a lot about a person by their car and how they keep it.

What kind of car do you drive? Is it clean? Is it modded?

triple3s

Bob Kaplan: Toyota Prius. Usually clean, but quite dirty today.


Studies with negative results rarely find their way into journals, which is one of the reasons people falsify results. What do you think about studies with negative results? Should they be given equal importance to those that found positive results?

Specifically, I am interested in physiognomy, and I could only find 2 recent articles, both of which found positive results. So based on the available information, does it seem that there is a consensus that physiognomy has some merit to it?

muchtooblunt

Bob Kaplan: Null or negative findings have important public health implications. A growing collection of trials suggests that promising treatments do not match their potential when systematically tested and transparently reported. Publication of these trials may protect patients from treatments that use resources without enhancing patient outcomes. For example, a recent economic analysis of the Women's Health Initiative clinical trial suggested that publication of the study may have resulted in 126,000 fewer breast cancer deaths and 76,000 fewer deaths from heart disease between 2003 and 2012. The economic return on investment in the study was estimated at about $140 for each dollar invested. Transparent and impartial reporting of clinical trial results will ultimately identify the treatments most likely to maximize benefit and reduce harm.


John Ioannidis: Thank you for mentioning this; negative results are my pet topic, I love negative results. Results should be published regardless of whether they are "positive" or "negative". In many cases, a negative result is more informative than a positive one; e.g. a negative result that soundly "kills" a theory or previous finding may be wonderful, because then people don't have to waste more resources on this path and we won't continue to be misled. In some fields, where the prior odds of finding a genuine positive result are low, I have argued that negative results from well-powered studies should be published immediately, while positive results should be scrutinized with more extensive replication before they are published. As for physiognomy, I really have no clue about this literature, so I can't make any informed call, but if it is only two articles and these studies are small and have obvious problems (the usual situation when I hear about "2 articles"), one would probably have to be cautious.
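
A minimal sketch of the selection effect under discussion (illustrative setup: 5,000 small studies of an effect that is truly zero, with only "significant" results getting published): the published record then consists entirely of false positives, each with a seemingly sizable effect.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    published = []
    for _ in range(5000):
        control = rng.normal(0, 1, 30)
        treated = rng.normal(0, 1, 30)            # the true effect is exactly zero
        if stats.ttest_ind(treated, control)[1] < 0.05:
            published.append(treated.mean() - control.mean())

    print(f"studies 'published': {len(published)} of 5000")            # ~250
    print(f"mean published |effect|: {np.mean(np.abs(published)):.2f} SD")
    # Every published effect here is spurious, yet each looks sizable
    # (roughly 0.5 SD or more), because only the lucky draws survive.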


License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.