Wednesday, January 18, 2017

Which of these proposed solutions for unreliable research do you favor?

Munafo et al.’s recent publication of “A manifesto for reproducible science” represents a turn away from pointing at problems with how we practice science and toward offering some solutions.

I thought it would be interesting to take a poll on a handful of their suggestions.

Of the following, which one do you think should be implemented?

Use the comments in this post to explain why you voted that way.



40 comments:

  1. Many fields require continuing education in order to stay on top of the ever-improving methods available. That kind of formal learning isn't present in the sciences. Our conferences are generally focused on spreading results, not on the technical details of how a method works. Instead of wasting our time and energy fumbling around trying to figure out a new technique on our own (and potentially learning it wrong), we could simply be taught how the method works. Going further, we could introduce method consultants who can be contacted in the event of any problem with a method.

  2. As the manifesto stated, transparency is a scientific ideal, and the word open should be redundant. Open sharing demands accountability from the scientists who provide the data, and the reliability of the lab is on the line. Having protocols and raw data available would be a critical step in combating the reproducibility crisis that exists in science. With this disclosure, scientists can also analyze the raw data and confirm significance themselves rather than relying on the authors for that information. Open sharing is an ideal of science that should be practiced more widely to enforce accountability and to verify published data.

  3. Many of the solutions presented in the manifesto centered on the ideas of transparency and collaboration within and outside of individual fields. To me, the idea of pre-registration made the most sense and seemed to combine many aspects of the other solutions. Pre-registration would create accountability in how experiments are conducted and analyzed, making it much harder to p-hack data or fabricate results. A peer-review process for pre-registered experiments would allow reviewers to comment on areas of the experiment that they think are weak, or to suggest additional data that might be useful to collect. I think the biggest thing that would hinder the pre-registration process is that to some it might feel like a straitjacket. However, it does allow researchers to change their plan, with proper justification, since science is unpredictable.

  4. Part of the reason I believe that data is not reproducible is that there is no requirement for statistical training. Then, while trying to craft a well-informed response to get an A on my comment post, I stumbled on this article: http://www.nytimes.com/2012/07/29/opinion/sunday/is-algebra-necessary.html. The argument of the article is that math plays an important role in getting to where you want to be. If you are failing algebra in high school, how will you pass the SAT and get to Harvard? The answer is you won't, and even worse, your character will be affected. So what does requiring statistics from PhDs look like? I already have to study for a qualifying exam; will I now be subjected to the dooms of standardized testing? Will my grants not be funded because my Emory transcript didn't have "Advanced Statistics for Math Geniuses," or will I not get the dream job in academia because I can't reproduce a Fibonacci sequence? However, I understand that statistics is the key to moral, ethical research. Statistics is necessary for the integrity of science.

    Personally, I find the idea of pre-registration of experiments stifling to the breadth of creativity scientific study has to offer. You are asking scientists to be placed in the proverbial box, fearful of straying from the straight path that they laid out for themselves. As for requiring multi-site research, it is a fantastic idea in theory, but the execution would be complicated. How would you find enough people to agree on doing the same western blot across America? It makes perfect sense for patient trials, because of patient distribution and the elimination of biases, but I am not convinced it could be done for every single JCB paper. It would most likely slow data production.

    My first choice when voting was to require open data sharing. I think this is a great idea; I am just not sure we are ready as a scientific community to embrace public peer review. I think it would be a new generation of scientists' unspoken promise to pursue this avenue. It would be a great time commitment, and I am worried it would be the same scientists editing and providing feedback. In conclusion, we could enforce statistics training by making it more available at universities and requiring these courses for advancement in one's career.

  5. This comment has been removed by the author.

  6. I think there is incredible value in continuing one’s education, no matter the field. Most of the time it seems that it is simple ignorance (as opposed to outright deceit) that prevents us from performing the correct methodology, control(s), or statistical analysis; therefore, having required “continuing education” workshops or courses mandated by the NIH, for example, could help in these areas. However, I do acknowledge that implementing this would be a challenge. What quality control would there be? How can you distinguish the individual who attends purely to meet the requirement (laptop out, working on a manuscript) from the one who is engaged, trying to learn about the latest developments in ‘X’? How would these workshops be organized to meet everyone’s needs (specific enough to be valuable, but not so basic that they feel like a complete waste of time)? As I said, I think continuing education is one of the stronger arguments in the manifesto, but who is going to be the hall monitor?

  7. Though each of these practices should be implemented, I believe that the most critical step is to promote open communication and collaboration. Because we fear being “scooped” and losing funding, we lose sight of our ultimate goal of sharing scientific truths. Not only does this promote bad science, but it also promotes an environment of isolation. If we were encouraged to publish all of our findings (both the glamorous discoveries and the not-so-pretty negative data), we could save precious time that could instead be spent testing novel hypotheses. Through open sharing of published data, we may begin the process of open communication, working together and pooling our discoveries. We can overcome our biases, more accurately assessing data as we allow third parties to evaluate our work. Simultaneously, I believe that we should extend this process to openly sharing methods. By encouraging full, accurate descriptions of cited experiments in each publication, we encourage future scientists not only to reproduce the work of their predecessors, but also to more effectively design experiments that address their own future questions. Open data and methods sharing will be critical if we truly wish to combat the reproducibility crisis. Furthermore, if we promote a more collaborative environment, we can work toward more expediently attaining our ultimate goal of scientific truth.

  8. This comment has been removed by the author.

  9. I agree that implementing any and all of these practices would facilitate a more accurate dissemination of scientific findings; however, I think the most valuable (and most easily accomplished) of these is the practice of open data sharing, which has already begun to gain popularity with a number of scientists. Scientists should be required to publish not just a summary of their research in the form of a journal article, but also a detailed report of all of their methods, statistical analyses, and raw data. In this way other scientists can reproduce the study design, or simply re-analyze the data. Given that all scientists come to the table with implicit biases (which are difficult to do away with entirely), allowing for the interpretation of data from many different viewpoints would ultimately help cancel out those biases. It would also allow individuals who are more specifically trained in methodology or statistical analysis to contribute to our understanding of the data. In this way mistakes, unintentional or otherwise, would begin to be corrected.

  10. Munafo's proposed solutions for improving the reproducibility of science focused primarily on increasing the transparency of research methods and findings. In my opinion, requiring authors to make freely accessible the raw, unadulterated data that formed the basis of their conclusions is critical to the idea of transparency and would greatly increase reproducibility. Having this information available to researchers would also feed the field of meta-science, which in turn helps improve the quality of science. I recently conducted a meta-analysis of nutrition-based clinical trials on linear growth, and having the raw data available would have allowed me to produce a more rigorous piece of work by letting me confirm the published results through my own analyses. Furthermore, open sharing of raw data would provide the information needed to assess the accuracy of the conclusions and the data-analysis techniques employed by the authors. I ultimately look forward to more journals adopting open-data requirements, as I think it will improve the quality of science.

  11. I think that any of the four proposed solutions would enhance the quality of science. However, it is also important to recognize the difficulty of implementing several of them, e.g., required pre-registration. Moreover, I think that requiring research projects to be performed at multiple sites could undermine the ability of small laboratories, with limited funding sources and/or collaborative partners, to conduct research. Over time, large laboratories would come to dominate specific fields of science, if not all of them, and small labs could not compete. For these reasons, I am particularly supportive of requiring open data sharing post-publication. First, a large proportion of researchers already share their data openly through open-access journals or other virtual platforms. Moreover, it would create better transparency about how the science was conducted, ultimately improving reproducibility and enabling replication studies.

  12. While all four proposed methods to improve the reproducibility of science are important, I strongly believe that implementing methodological training is feasible and important for future generations of scientists. Munafo et al.’s “A manifesto for reproducible science” suggests that improving methodological training can address the issue of cognitive bias. Cognitive bias is something scientists easily overlook during their training. In fact, in my own experience cognitive bias was introduced to me only briefly at the beginning of the second year of my PhD training, with no further discussion. This topic was only brought back to my attention in the statistics course I am taking now, which to me is baffling. Thus, through methodological training, cognitive bias can be actively discussed early in a scientist’s training. One way to implement this is to make methodological training a requirement for receiving national funds, similar to how national agencies require trainees to receive ethics training. Methodological training would bring awareness to issues such as cognitive bias early in a young scientist’s training, and it would also provide information on how to avoid and address it.

    Furthermore, the process for analyzing data is ever changing and improving, which means that trainees as well as senior scientists should be actively learning about new techniques. Making this a requirement for receiving national funds would help senior scientists stay updated and encourage active learning of new techniques. While it may be easy to implement a requirement for methodological training early in a scientist’s career, it would be more challenging for junior and senior scientists. One suggestion that Munafo et al. present is the idea of creating an easily accessible and “easy-to-digest” resource. I think this could be implemented by requiring junior and senior faculty to complete an online course on statistical methods every few years, similar to the safety training and chemical-waste training that everyone is required to complete. It could also serve as a reference for information and a place to address concerns and questions along the way. To improve reproducibility and address the issue of cognitive bias, trainees and senior scientists must first be aware of the problem, and that awareness can be established through education and proper training.

  13. Although there are infinite ways the scientific community can tackle the issue of unreliable data, the first step towards a solution would be to require all data related to a paper, whether that data is raw or refined, positive or negative, to be open to the public post-publication.

    We tend to take the information presented to us in a paper at face value, neglecting to reflect on how or from where the authors obtained their cleaned-up data. Even when we do wonder, coming to our own conclusions can prove difficult, as many papers fail to include their protocols, raw data, or the processes used to clean up their results. Compounding this problem, the scientific community tends to favor positive/exciting results over negative/boring results, despite the fact that negative data can provide useful insight into how we should interpret our positive data. This bias towards positive results, together with the lack of information, makes it harder for others to judge the validity of the results, given that they have not been given all the relevant information. Showing all data, whether negative, positive, refined, or unrefined, would make it easier for the rest of the community to establish the validity of the results and avoid wasting time on false data.

    Of course, this method relies on the honesty of experimenters to open all of their data to the public. There is still the potential for data to be hidden for the purpose of casting results in a better light. However, we have to start somewhere, and requiring authors to show every step of the experimental process, from beginning to end, would help begin to tackle the problem of unreliable data.

  14. This comment has been removed by the author.

  15. In my opinion, the value of open data sharing cannot be overstated. Increasing this practice would help preserve transparency among scientists and, in doing so, would allow for more comprehensive evaluation of study design and of data analysis/interpretation by fellow researchers. While I also value the accountability that pre-registration provides, I worry that this practice would place an undue burden on the already strained peer-review process. As Munafo and colleagues point out, registered reports and their associated reporting guidelines are not always followed, and therefore only partly reduce reporting bias. Enforcing these guidelines more fully would make the peer-review process even more bloated and would require more investment from fellow researchers, which is unlikely to happen as these activities are not incentivized. With respect to the requirement that studies be performed at multiple sites, I would argue that this approach is necessary for human studies (which are highly variable), but that it poses an ethical problem for animal studies. Though some sources do argue for repeating animal studies across sites, this suggestion somewhat opposes the ethical obligation to perform studies using the minimum number of animals needed to ensure sufficient power. Is it wasteful to use more animals to duplicate studies across sites? Arguments can be made for either side, but I am not sure there is a simple answer to this question. Last, I believe that there is inherent value in continuing education on statistics, but that the overall approach of open data sharing will be more beneficial to the scientific community in the long run, as it provides opportunities for critique of both methodological and analytical approaches.

  16. This comment has been removed by the author.

  17. While many people claim that reproducibility of results remains a core standard of modern science, researchers continue to report that reproducing published data is unexpectedly hard (Baker 2016). While I believe that all of the above practices would aid in producing more statistically accurate results, I argue that encouraging studies to be performed at multiple sites would carry the most benefit for the biomedical field. Primarily, by including multiple sites, the study inherently builds more variability into itself, forcing any positive result to carry more statistical power and increasing the confidence that can be placed in calling a result "real". Second, multiple sites could increase sample sizes for studies such as primate research, which tend to have low sample sizes due to the cost of maintaining the animal model and the necessary tests for the study. Through multiple sites, investigators would be able to pool resources to obtain more meaningful results. Lastly, although not necessarily focused on improving statistics, studies that include the input of multiple, diverse experts in their design and execution tend, I believe, to be better designed and planned than studies from a single lab. Forcing collaboration by withholding funding would push investigators not only to join resources but potentially to tap into strengths they didn't know they had, making studies more statistically sound and improving the quality of collaborations.
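    As a rough sketch of the pooling argument, here is a quick simulation one could run (Python; the effect size, site count, per-site group sizes, and site-to-site spread are all hypothetical values chosen only for illustration, not taken from the manifesto or any study):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def power(n_sites, n_per_site, d=1.0, site_sd=0.3, reps=5000):
            # Fraction of simulated studies that detect a true effect of size d
            # when n_per_site subjects per arm are pooled across n_sites sites.
            hits = 0
            for _ in range(reps):
                shifts = rng.normal(0, site_sd, n_sites)  # site-to-site variability
                ctrl = np.concatenate([rng.normal(s, 1, n_per_site) for s in shifts])
                trt = np.concatenate([rng.normal(s + d, 1, n_per_site) for s in shifts])
                hits += stats.ttest_ind(trt, ctrl).pvalue < 0.05
            return hits / reps

        # One site of 8 per arm vs. three pooled sites of 8 per arm:
        print(power(1, 8), power(3, 8))  # roughly 0.45 vs. 0.9 in this toy setup

    Even with extra between-site variability baked in, the pooled design detects the simulated true effect far more often, which is the statistical core of the multi-site argument.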

    That is not to say that forcing collaborations by withholding funding has no drawbacks. For example, this solution does not solve the reporting bias in science whereby researchers are encouraged to report only positive findings. But it does provide a method for science to become more collaborative, and it is a step toward a less biased future.

    Baker, M. 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454 (2016).

  18. My vote was placed for “Require continuing education and certification in methods and statistics”. This is not because I think the other options lack merit. Pre-registration of study protocols and experimental plans definitely has its benefits. As the manifesto mentioned, it is appropriate for clinical medicine. Where I think this becomes difficult is in everyday laboratory practice. Often, we as scientists perform preliminary experiments with the tools that we have available to us, and some of these can be completed quite quickly. Requiring the submission of an experimental protocol beforehand may introduce an unnecessary lag between the conception of an experiment and actually conducting it. I believe for this reason this option is largely unappealing and difficult to enforce.

    We have already discussed some of the issues with requiring studies to be conducted at multiple sites. This would also introduce a potential lag in publishing data, as multiple sites would be required to optimize experiments, conduct them, and then organize the data. It could also introduce new problems with authorship, as several people would be contributing the same amount of work to the study. I do think that data sharing post-publication should be promoted. It would be useful to have access to data sets from other labs; this would not only allow peer review of the initial lab’s conclusions, but also allow readers to draw their own conclusions from the data set.

    Despite the value of the three proposed solutions above, I still believe that continued education in this area is the most important solution to focus on. It is important for students and primary investigators alike to understand that there is a problem in the first place. This can only be done by educating people about the current problems we face and how they themselves can act in a less biased way while collecting data. Requiring “refresher” courses throughout a person’s career would also ensure that this is a topic people are forced to think about.

  19. While the decision between “continued education” and “open data sharing” was a difficult one for me, it ultimately boiled down to the idea of accountability and whether it exists as its own entity or needs some type of foundation.
    I imagine the common defense of the “open data sharing” angle is overall accountability: with raw data in hand, fellow colleagues can analyze and interpret your results right along with you. This would make it a lot more difficult for the untrustworthy scientist to fudge data, while also forcing the rushed scientist to slow down and carefully conduct the proper data analysis to ensure the absence of error. This is great!... if you know how to do it CORRECTLY in the first place, that is.
    The reason I chose “continued training” is because you cannot be held accountable for something you are not initially aware of or educated in. Exposing scientists to the correct statistical training and methodological practices, while alerting them to the various types of biases they could encounter during their research careers, will create and maintain a uniform expectation within science. While some might argue that ignorance is not an excuse for negligence, I believe education and awareness will give rise to better reproducibility in science by inciting self-accountability within scientists first, before placing that accountability into the hands of their colleagues. The uniform playing field created by this educational foundation would not only ensure honest science equipped with true and reproducible results, but also set the stage for open data sharing and the accountability that comes along with it.

  20. One of the many issues confounding reproducibility in science is the inherent way in which research is incentivized. For most of scientific history, scientists have been rewarded primarily for publishing their research, and more specifically, for publishing research that contains novel, positive data. While useful for surfacing some of the most exciting data, incentivizing only the best and newest data has caused scientists to publish increasingly more false positives (Smaldino, P. E. & McElreath, R.). This is mainly due to issues in the use of statistics, reproducibility, and dissemination that are currently undervalued in the research process.

    In my opinion, many of the issues affecting the reproducibility of science can be fixed by shifting the incentives of research from quickly publishing novel data to doing research the right way. Researchers must be rewarded for taking the correct steps to obtain their data, even if the data are not positive. More often than not, doing research in a reproducible and thorough manner will lead to positive (or at least correct) data, compared to taking shortcuts such as p-hacking. One incentive discussed in the manifesto is the use of preregistration in research. If researchers are able to publish all the data they have obtained, in one way or another, they will be more likely to go through the steps of obtaining data that may be less flashy in the literature but drives the project in the correct direction. Preregistration can also help with publication bias, as both reviewers and other scientists will be able to see the process that was taken to obtain the data, instead of only the beginning and end points, which often oversimplifies the approach taken.

    While I believe shifting incentives may change the nature of current research, I also believe the root of the problem lies in the way scientists are taught to perform their experiments. If researchers aren't taught the correct way to design and perform experiments, there most likely won't be much change in the current structure of research.

    Smaldino, P. E. & McElreath, R. The natural selection of bad science. R. Soc. Open Sci. 3, 160384 (2016).

  21. While all four of the options presented would help to improve ethical scientific practice, I believe that open data sharing would be the most valuable. It is critical that scientists be transparent about the experimental design and methodology of the studies they perform. If labs make all of this information available, it will be easier for other scientists not only to attempt to reproduce the work, but also to critically evaluate the science that has been performed. Additionally, open data sharing is already being practiced by many scientists who choose to share their study designs, etc., in open-access journals and other open forums. As such, making it a more standardized process would be more easily implemented than some other strategies, and it would greatly enhance the transparency of the science being produced. While multi-site studies and pre-registration are ideal practices in theory, there are many logistical problems with their implementation that might make scientists hesitant to comply with such regulations. Continuing education in statistics and methodology would also be critical to ensuring that scientists are continuously refreshed on what it means to have a well-structured methodology and an ethical statistical analysis of the work they are performing.

    Ultimately, we need to create an environment which continuously encourages scientists to be transparent about the work they are doing, and open-data sharing is the best way to accomplish this.

  22. Obviously a combination of all four of these options would probably be the most effective way to improve reproducibility in scientific research. However, some solutions, such as pre-registration and requiring that studies be done at multiple sites, would be harder to accomplish because they are time consuming, expensive, and not familiar to many researchers. Continuing education in statistics and methods could be very beneficial for improving the statistical skills of researchers at all stages and would be more straightforward to implement. Similarly, open data sharing should be required and is something that scientists do to some extent currently. Making this a requirement, however, would require a cultural change in many areas of science that operate in highly competitive atmospheres.
    Hopefully, as scientists become more aware of the severe repercussions of non-reproducible science, more people will be eager to implement solutions such as these.

  23. Continuing education in methods and statistics would be the most effective solution for making science reliable. Accurate statistics are necessary for data to be reliable and replicable. However, it is very easy to misrepresent statistics, occasionally with intent to deceive, but more often than not due to lack of knowledge of the appropriate statistical tests to run. With the rise in the amount of data that can be collected, more complex programs and methods are coming into existence which require training to be used effectively. This means that everyone needs to engage in continual training, even with prior knowledge of some statistical methods. It is also important to include training in good experimental design, as statistical plans should be determined at the beginning of a study. This would also likely reduce the incidence of post-hoc hypotheses, improving the quality of science overall.

  24. There is a growing body of literature which points to an issue with the reproducibility of science, often involving work published in the best journals in the world. Munafo et al’s recent paper proposed several steps the scientific community should take to address the issue of reproducibility. Of those proposed steps, requiring a continuing education and certification program in statistics is the most appealing.
    Many other professions, e.g. medicine and law, have continuing education requirements that all members must fulfill to maintain licensure. The scientific community doesn’t have a licensure system in place for PhD researchers, but a system of continuing education in statistics and research methods could still be implemented without the burden of creating some sort of licensing or bar association. With respect to training in statistics, I feel that my education is lacking. As an undergrad I had one 3-credit course in statistics as my entire formal training, and in my professional career and as a grad student my practical exposure to statistics has also been limited. I have often wondered whether I need to further my training in statistics, as I feel unsure of how to properly apply statistical methods to my own work. I get the sense that I am not alone among graduate students in feeling this way, and I assume the feeling exists in more experienced researchers as well. A continuing education program would help to address these issues: while most of us are unlikely to seek out more training on our own and interrupt our day, a formal requirement to educate ourselves about statistics may be the incentive we need as a community to improve in this area. Also, for those who are more experienced in stats, as we start to generate large data sets from various -omics studies it would be useful to cover how these large-n data sets need to be handled.

  25. The four choices provided would all improve the quality of the science being published; that said, I found that implementing continuing education and certification would be the most impactful. Many careers that require professional degrees also require continuing education. Continuing education, a method of keeping professionals up to date, is necessary to renew a license. Dentists, physicians, and certain teaching positions (just to name a few) have a certain number of credits that need to be completed per year. The topics covered by the credits are up to the professional, but it is recommended to attend the educational conferences that will support their training and specialty. Research scientists do not have these requirements, but I think they should be implemented. Regardless of your talent or achievement, you cannot remember everything you were taught. Sometimes the greatest challenge is to recollect basic knowledge. For example, many people (even college-level math students) will get the following question wrong: solve x^2 = 4. Many people will simply answer “2,” but the correct answer is 2 and -2. I went out and asked 5 Emory math students this question. Two of them answered it incorrectly. Their response was that they hadn't faced this type of simple question in a long time. One student even mentioned that a “math basics” class would be a useful tool for upper-level students. This is just one example of what continuing education courses can offer: a “basics” or refresher course. They can also introduce topics that are not covered in the typical scientific conference or presentation, including statistics, useful techniques, and research ethics. Reintroducing these topics regularly can ensure that scientists are up to date and well equipped to produce accurate data. Of these topics, statistics may be the most imperative; using statistics incorrectly, or omitting statistical analysis altogether, can result in non-reproducible science. Regardless of the annoyance, keeping tabs on scientists and confirming that they are being constantly challenged through continuing education (e.g., statistics training) is important for the future of science and discovery.
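    Written out in LaTeX notation, the step the students dropped is the absolute value:

        x^2 = 4 \;\Longrightarrow\; |x| = \sqrt{4} = 2 \;\Longrightarrow\; x \in \{-2,\ 2\}

    Squaring discards the sign, so both roots satisfy the original equation, which is exactly the kind of basic detail a refresher course re-surfaces.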

  26. I think each of the 4 options suggested is a reasonable practice that could help fix the problems with reproducibility, and in combination the 4 strategies could have a strong influence on promoting it. I think the hardest part about enforcing these strategies would be the adjustments that would have to be made for them to be properly executed. Of all the suggested solutions, “Require continuing education and certification in methods and statistics” would only be successful if the individuals involved are interested in making improvements. The last two options require an outside source to “proofread”, in the form of data sharing and of studies being performed at multiple sites. I chose to vote for “Required pre-registration of experimental plans” because it has been shown to be effective in clinical trials, where a study plan is standard practice. A pre-registered experimental plan can eliminate bias, since at the time of registration the data do not exist and the outcomes are not yet known. When the results are known first, there can be an intentional selection of the subset of outcomes that shows statistically significant results; obtaining results before planning the statistics usually makes the evidence for the findings look stronger than the true results warrant. Actually enforcing this strategy would require a lot of additional training, especially for researchers NOT working on clinical projects. Mandating study pre-registration should eliminate this issue and ultimately yield more “realistic” data sets. This way, researchers could truly know whether something is non-reproducible.
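    To make the outcome-selection point concrete, here is a minimal simulation sketch (Python; the 10 outcomes, group sizes, and study counts are hypothetical numbers chosen purely for illustration):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_studies, n_outcomes, n = 5000, 10, 20

        hits = 0
        for _ in range(n_studies):
            # No true effect anywhere: both groups come from the same distribution.
            a = rng.normal(size=(n_outcomes, n))
            b = rng.normal(size=(n_outcomes, n))
            p = stats.ttest_ind(a, b, axis=1).pvalue  # one p-value per outcome
            hits += p.min() < 0.05                    # report only the "best" outcome
        print(hits / n_studies)  # ~0.4 false-positive rate, vs. the nominal 0.05
                                 # for a single pre-registered primary outcome

    Picking the best of ten outcomes gives ten chances to hit p < 0.05 by luck alone; pre-registration removes exactly that freedom by fixing the primary outcome before any p-value is seen.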

  27. I believe full transparency in methodology and data would do the most to counter unreliability in research because, of the proposed solutions, it would do the most to hold researchers accountable to their published scientific findings. Open data sharing, along with explanations for the logic guiding the choices made in designing, analyzing and interpreting experiments, would allow others to better evaluate the quality of the science conducted by various research groups. Mistakes or oversights that might have been made could thus be more readily caught and corrected with minimal waste of resources. Furthermore, knowing that all of their detailed methods and findings would eventually be subject to public scrutiny, researchers would likely be more circumspect in their approach to designing and executing experiments from the outset; this would ensure that rigor and reproducibility of results were more consistently made top priorities by researchers.

  28. To aid the dissemination of reliable research, Munafo et al. identify pre-registration as a tool to minimize publication bias and analytical flexibility, including p-hacking. Of the options provided by the survey, I believe detailed pre-registration is the best solution. Pre-registering your study, ideally before conducting it, would encourage experimental forethought and transparency in a manner that could benefit both the scientific community and the experimenter. Great scientific advancement is achieved when research is innovative, hypothesis-driven, and properly tested. If implemented in a “pre-publication review” manner, pre-specification of all experimental details and early critiquing could lead to better studies and less waste of resources (including time). There have been many instances in which I have designed a study and started to collect data, only to get suggestions for better experimental design from collaborators. Most times I’m grateful for the input, but occasionally irritated when thinking of all the time put into a faulty experiment. Requiring important details to be defined AND reviewed before the experiment is conducted is a strategy I wish to adopt in my own research.

    Making publication contingent upon the registered design and analysis would likely receive much pushback, because biological science is rarely exact. Scientists have been groomed to publish positive results, even if it means one must search for them and construct a post-hoc hypothesis that fits the data. The Registered Reports initiative may encourage participation in such a system by guaranteeing publication if the registered and approved methods are followed. In my opinion, this would elicit a positive change in the arena of scientific literature and academia by rewarding pre-publication review and providing an avenue for the publication of negative results.

  29. After reading the article, I don't actually agree with any of the choices given in the poll. I did choose the requirement that studies be performed at multiple sites as a condition for funding, though I don't fully agree with that choice; to me it is the closest thing to strongly encouraged collaboration. I don't think any of the other options will realistically fix what is, according to this article, a serious problem in medical research. I don't believe pre-registration would be of much benefit; sometimes a line of research takes shape on its own. Further instruction in statistics and methods would help educate us about bias toward our own data, but this is only effective if actually implemented. Data sharing could be effective, but it would not prevent those who falsify with intent from doing so. As mentioned in the article, there is already a big push toward fully publishing large data sets for meta-analysis; perhaps expanding on this would improve the situation. Ultimately, as a community, scientists keep each other in check. Either by mandate or by incentive, I believe that strong collaboration between researchers would be most effective at combating bias. I think we can all agree that the strongest science comes when a hypothesis is tested by more than one method, typically of varying strength and sensitivity.

  30. As scientists, continuing to educate ourselves on the cutting-edge research and methods in our field and related fields is paramount to doing solid science. In this generation, more than in any previous generation, our breadth of knowledge about how the world works is expanding rapidly. We have the technology and the resources to accomplish more than ever, and in less time. But beyond expanding, our knowledge base is also constantly evolving. Information we took as gospel ten years ago may be obsolete now, as scientists improve their methods and design more robust experiments. Professors and researchers who earned their doctorates ten, twenty, thirty years ago or more could potentially have a working knowledge of information that has since been debunked. It is of the utmost importance that everyone in the scientific community remain cognizant of changes and improvements in their fields. For example, knowing that a more efficient method has been developed for the experiment you are trying to run will save you much time and effort in the long run. As one way to implement the idea of continuing education, many labs have a journal club associated with their weekly lab meetings. This is an excellent opportunity to study the most recent publications in your field and to learn about what advances have been made. Labs that incorporate an educational aspect into their work environment will create scientists who are more globally aware of which facets of science are thriving and which are deteriorating.

  31. To improve scientific rigor and reproducibility, I believe all students involved in research should be required to complete continuing education and certification in methods and statistics. Within the study of statistics there is transparency and understanding of the experimental procedure, analysis, and results. With a well-documented statistical method and analysis, there is little room for lies or misunderstandings. This would ensure rigorous experimental design and transparency in reporting the specifics of how experiments were performed and how data were collected and analyzed. In theory, this should solve the problem of non-reproducibility: every scientist would have the tools necessary to publish their discoveries as transparently as possible, since they have been trained on how to do it correctly.

  32. While I do agree with the manifesto that most statistical analysis knowledge flows from the mentor to the student, I believe the student has his or her own responsibility to question the statistical analyses conducted by the mentor and lab. If the student has done his or her homework, taken multiple statistics courses, researched different methods and proper analytical approaches, and understands a different way to analyze the data, there is nothing wrong with questioning the mentor’s or lab’s methods. I believe this same principle applies to working with statisticians. To support my vote for instituting continuing education in statistics, I recall an issue that occurred with a graduate student in my lab. She generated a substantial amount of data that was given to a statistician to analyze. The graduate student analyzed the data independently and compared her results to the statistician’s. The results were different, and the graduate student questioned the statistician's results based on her knowledge from multiple statistics courses. The statistician chalked up the differences to the use of different analytical software, noting that he had analyzed the data similarly. However, months later the statistician realized his mistake and admitted the graduate student's analytical methods were correct, which negated the differences she had thought were significant. This could have been avoided had the statistician been required to attend continuing education courses in statistics, taken courses in other analytical software interfaces, and been more open-minded about the graduate student’s questions. The fact that the graduate student did not settle for taking one statistics course allowed her to feel confident enough in her ability to analyze the data. Thus, the institution of continuing statistics education is important for young and old members of the field alike. Perhaps making it a requirement for NIH funding, or other funding sources, would make it a higher priority for all investigators.

  33. Choosing only one of these four as the most important measure to counteract scientific bias, statistical bias, and scientific irreproducibility excludes many tactics that could synergistically achieve solutions to the problems discussed in the manifesto. However, of the four, pre-registering scientific studies and statistical analyses seemed to provide the most confidence for increasing scientific accountability.
    Continuing education in statistics and open data sharing are extremely important; however, these methods still rely heavily on the integrity of the scientist. Even with a greater knowledge of statistical methods, a scientist could still “p-hack” their data until it became significant, misidentify confounders, and misuse statistical methods. The same argument applies to open sharing: data can always be manipulated, regardless of how transparent or widely distributed the information is. Pre-registration of experimental protocols and statistical analyses with a large committee of scientific experts and biostatisticians would increase the accountability of the scientist to perform accurate studies, essentially binding the scientist to a “contract” of accountability.
    One huge complaint about the pre-registration method would be the rigidity of scientific protocols, especially when changes to the experimental design are required. A swift, efficient amendment system could be implemented to allow fair and justified changes to pre-registered experiments before commencement of the study. Although it would seem an enormous burden to document every experiment and analysis, pre-registration of experimental design could prove an invaluable resource in the movement towards scientific accuracy, reproducibility, and accountability.

  34. I had a pretty difficult time deciding between these options. In principle they all seem like decent ideas for decreasing biases and methodological screw-ups. Eventually (after reading the manifesto, contemplating, and rolling a four-sided die) I chose pre-registration of experimental plans. To be clear, I detest planning, organization, and bureaucracy. However, in its ideal form pre-registration appears to combat the most pernicious threats to reproducibility discussed. Bias in interpretation is removed by planning the analysis before obtaining any results. Errors in methodology can be corrected by submitting the methodology for review prior to running experiments. Publication bias is averted by making the results open regardless of publication. The main argument against pre-registration is perhaps the difficulty of overcoming the inertia associated with our traditional ways of designing and reporting studies. The other options suffer from flaws of their own. Requiring continuing education and certification in methods and statistics is a great idea; however, these training sessions could turn into venues for science tech companies to promote their new products, or just be used as easy (if somewhat lame) vacations for scientists. Requiring that studies be performed at multiple sites to obtain funding is very reasonable for many large-scale studies and is likely where we are heading in certain areas of research, but not all areas of research are amenable to, or well-funded enough for, multi-site collaborations. Open sharing is another good idea, but it will likely be far better at identifying research that was poorly performed or interpreted after the fact than at stopping the expenditure of money on poor research.

  35. Wow! I find it very interesting that others agreed with me about continuing education in statistical and methodological methods. That makes me wonder about the root cause of so many of us realizing the importance of continued education. Perhaps it's due to our own perceived inadequacies, spurred on by all these papers about how terrible scientists are at statistical methods. Perhaps it's the inconsistencies we've found in reading scientific papers, or maybe it's because we've been bitten in our own butts by advisors treating statistics as an afterthought: not as something that can help us make our point (so long as all the other steps in the process aren't manipulated for the sake of making the point we'd prefer), but as something that, when used correctly, can help inform and bolster our claims about some biological or physical reality. Only when we better understand the methods we're using, both statistically and computationally, can we do our best work. We owe it to ourselves and to all those who come after us to do our best science! Any immediate benefits we might gain from setting aside rigorous statistical analysis will ultimately result in wasting our own time or someone else's down the road. I don't think that's how we want to be remembered as scientists.

  36. Several of the mechanisms proposed by Munafo et al. for increasing the integrity of research findings, such as continuing education and acknowledgement of experimental plans, are already implemented in many fields. So I am biased in the direction of data transparency, because there is more to be gained even from poorly conducted science than from putting groups of people across fields in charge of regulating each other's work any more than the biomedical research community already does in the form of IACUC, grant applications, etc. With more transparency, already existing teams of data scientists and meta-scientists could play a larger role in research reliability. I do worry about a possible downside that presses scientists either to (1) generate faulty data or (2) be less likely to seek out expertise prior to data analysis, if indeed more experts can fact-check later. Still, I believe this route would support more reliability in published research and lead to more confident discoveries than any other proposed change, as it would ultimately leave the topics worth discussing and interpreting to the people best able to interpret them.

  37. While it would be incredibly important to require and provide educational resources and certification to ensure researchers are continuously up to date in methods and statistics, I believe it would be for naught should the studies not be performed at multiple sites. Just as pre-registration failed to garner compliance in the case of the Time-sharing Experiments for the Social Sciences project mentioned in the manifesto, it is highly plausible that pre-registration and continuing education would be disregarded in favor of being able to publish cleaner, more convincing data. It is quite easy for a researcher to deviate from the study details and outcomes delineated in their pre-registration, or to not take seriously the educational resources required for continuing education. Additionally, it would be difficult to monitor all of this. Peer review is time-consuming enough already; who would be in charge of evaluating studies submitted for publication for consistency with their pre-registration information, or for methodological and statistical soundness according to the educational resources provided? As for the last option: even if there were open data, it would be difficult to ensure that sufficient details of the methodology and statistical analysis were always provided alongside the data so that other researchers could go through the same process of analyzing it. Even if this information were provided, there is still the question of whether other researchers would dedicate the time to do such evaluations and report irreproducibility.

    As such, I believe the most efficient solution is requiring that studies be carried out at multiple sites in order to receive funding. While this could delay funding approval, because researchers would need to search for willing collaborators with the right resources and the same interests, it would make the impact of any one scientist’s personal bias much less significant. By requiring multi-site implementation (along with study pre-registration), it is built in that there will be multiple researchers conducting experiments and multiple statistical analysts processing the data. For all of these researchers and analysts to bias the results in the exact same direction, using the same means to make the data appear to support an outcome, would be unlikely. Unless they took great pains to coordinate their biases, any bias present would be detrimental to the study results and to the overall convincingness of the data. In this way, it would be possible to strongly discourage bias and any experimental manipulation due to it (whether conscious or not), and to strongly encourage researchers at each site to adhere tightly to what was submitted at the pre-registration stage so that their data are consistent between sites.

  38. As many have stated, all four of these options for reducing scientific bias are excellent, and solving this problem is not a one-sided approach. That being said, I believe pre-registration of scientific experiments is a particularly good approach. As noted in the article, this practice is currently used when designing and planning clinical trials, which may help explain why so much negative data is reported in clinical trials: negative data is a huge part of scientific work. Additionally, pre-registration of scientific plans would lower the likelihood of researchers unknowingly performing the same experiment, reducing the chance that somebody gets "scooped."

  39. This comment has been removed by the author.

  40. Although I think all of these are good options, the solution of pre-registration seemed the most appealing. Before I got loosely intimate with science (let's say, as a freshman in college), I thought the field was this omnipresent cooperative force that had the power of 'curing' the world while being much too technical for my liking. I have since discovered that science is my truest passion, but the key word I used was cooperative. When you say it out loud, it almost seems funny that one person, or even one lab, would have the brainpower to figure out exactly how and why hippocampal neurons degenerate in AD pathogenesis. The field is composed of experts in one to several workings of subcellular biology; therefore, we need each other's expertise to piece together a cohesive story. I could also see this solution running up against the stark competition that's built into maintaining a scientific career.

    I don't foresee one solution fixing all of our problems. And I would hope that currently established academics have not lost their love of learning and will voluntarily participate in continuing education courses before we have to make them mandatory.
