Tuesday, May 3, 2016

What can we learn from the Duke University Scientific Fraud Scandal?





I would like to bring to your attention one of the most notorious cases of scientific misconduct in recent memory. The case involved a prominent cancer researcher at Duke University, Dr. Anil Potti. His research claimed that genomic technology could generate a “fingerprint unique to the individual patient” and use it to predict with up to “90 percent accuracy” which early-stage lung cancer patients were likely to have a recurrence and benefit from chemotherapy, and, more importantly, which chemotherapy drugs would work best for that particular patient. Scientists called it the “holy grail” of individualized cancer therapy. Dr. Potti’s findings were published in a number of prestigious journals, including the New England Journal of Medicine and Nature Medicine. Companies were formed and clinical trials were started on the basis of this research. It was heralded as a discovery that would change the way we treat lung cancer, and it would have, had the research been real. It was not.
            Dr. John Minna, a lung cancer researcher at the University of Texas Southwestern Medical Center, wanted to use the approach outlined by Dr. Potti to treat his own patients, and he collaborated with two statisticians at M. D. Anderson, Keith Baggerly and Kevin Coombes, to understand how to conduct the genomic tests needed to determine which chemotherapy drug to give patients. However, when Dr. Baggerly and Dr. Coombes began analyzing Dr. Potti’s data, they found a number of mistakes, ranging from apparent carelessness to “inexplicable” errors. Around the same time, Duke University started three clinical trials based on the work. Dr. Coombes and Dr. Baggerly tried to warn the scientific community about the validity of Dr. Potti’s data; nevertheless, Duke University continued its trials. It was not until The Cancer Letter, a trade publication, revealed that Dr. Potti had “falsified parts of his resume,” falsely claiming on a number of grant applications to have been a Rhodes Scholar, that the scientific community finally scrutinized Dr. Potti’s data and realized that the results were fabricated. By then, patients in the clinical trials had been given false hope that this technology could improve their clinical outcomes, and some died during the trials.

            There are a number of things to be frustrated about in this case. At the top of that extensive list is the fact that a researcher knowingly published false data with real consequences for the lives of cancer patients, many of whom agreed to participate in the trials because they had run out of options. Second, the scientific community did nothing about the findings presented by Dr. Coombes and Dr. Baggerly; it took a forged resume, not flawed data, to open its eyes. I am a strong believer that scientific research is self-correcting. Given enough time, especially in the case of potentially groundbreaking research, scientists can separate the false data from the real. However, the case at Duke University shows that sometimes it simply takes too long to unmask the fraud, and cancer patients do not have the luxury of time. There need to be stronger safeguards in place to detect instances of fraud. Maybe that is easier said than done: prestigious scientific journals favor the once-in-a-lifetime finding, especially if it contradicts an established theory. Perhaps we need to think seriously about what changes can be made to improve the peer-review system and make sure such heinous cases as the Duke University scandal do not happen again.

3 comments:

  1. Thank you for sharing; I had not heard of this specific incident. I am truly appalled that it took such a long time to expose his research. The worst part is that people suffered from this falsification of data. What would you propose to prevent these types of papers from getting through and harming people?

  2. This is unfortunate to read. It's a shame that scientific reputation and the pressure to publish and maintain competitive standing could spiral into such a disaster. The field should seriously reconsider the amount of credit and expectation attached to reputation in science. I'm sure the lab was under a lot of pressure to publish something, but it is unacceptable and completely unethical to give false hope to sick patients using fraudulent data. If science required more statistical rigor, including demonstrated evidence of sample randomization, power analysis, etc., irreproducibility would be much less of an issue, leading to more efficient experimentation and rock-solid results. Statistics is often treated as an afterthought in biology, but I have a strong feeling that a lot of fraud could be avoided if we changed our relaxed view of statistics.
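
    For concreteness, here is a minimal sketch of what that kind of up-front rigor could look like, assuming Python with numpy and statsmodels as the tools (the effect size, seed, and patient counts below are purely illustrative, not anything from the Duke trials):

        # Pre-registered power analysis and documented randomization: a sketch.
        # Assumes numpy and statsmodels; all numbers here are illustrative.
        import numpy as np
        from statsmodels.stats.power import TTestIndPower

        # Power analysis: how many patients per arm are needed to detect a
        # medium effect (Cohen's d = 0.5) at alpha = 0.05 with 80% power?
        n_per_arm = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
        n_per_arm = int(np.ceil(n_per_arm))
        print(f"Required sample size per arm: {n_per_arm}")  # about 64

        # Randomize patients to arms with a fixed seed, so the assignment
        # itself is reproducible and auditable by outside reviewers.
        rng = np.random.default_rng(seed=2016)
        patient_ids = np.arange(2 * n_per_arm)   # hypothetical patient IDs
        shuffled = rng.permutation(patient_ids)
        treatment, control = shuffled[:n_per_arm], shuffled[n_per_arm:]
        print(f"Treatment: {len(treatment)} patients, control: {len(control)}")

    Reported up front, numbers like these would let reviewers check that a study was adequately powered and that its randomization can be reproduced, rather than taking both on faith.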

  3. Surely, this is a terrible case of scientific misconduct. Many things came to mind while I was reading your blog post, Alex. Let's see if I can arrange my thoughts into an outline that's a bit more manageable to digest, and perhaps, since we are a stats class, propose some sort of stats-based solution to scientific misconduct (because this definitely isn't the only case).

    Your blog elicited many thoughts from me, from the way we organize our society to human motivation. Clearly, for Dr. Potti, the falsification of his résumé and of his scientific work may have represented some sort of pathological disorder, or perhaps just a drive to lie out of a feeling of inferiority. Either way, I think there are preventive measures we can take to stall these types of fraudulent cases, and they actually come at the price of eliminating statistical machines that we humans have made.

    I'm talking about rankings. Our classmate, Allyson, has written a post about rankings and shows that there is some murkiness in how they are calculated. Beyond that, our human urge to classify things and use data and statistics to justify those classifications needs to have limits. Although we can rank things and assign some sort of value to those rankings, this is, in my opinion, a destructive use of statistics, or whatever you want to call it. It creates a hierarchy of value that's deemed objective, when, in reality, we can all look at USNews and Forbes and see clearly that the wealthiest schools sit at the top of the rankings. So be it for whatever is going on behind those ivy-lined gates! All that is to say, this hierarchy of rankings creates value that reflects on the human psyche: if we go to one of these schools, we must have similar value by association. It's like saying "correlation means causation." It simply isn't true. But it hurts those outside of these "good institutions." They feel they need to "catch up" when they may have learned more or less the same things in the same manner. This may have been the case for Dr. Potti, who I believe went to an Indian college and then to medical school in North Dakota. Veritable "nowheres," if you ask stuffy academics. He probably started lying to get somewhere, and that's a terrible situation in and of itself. What's even worse, as the WaPost article you linked notes, is that even Duke as an institution ignored the warnings of a medical student just so it could ride the wave of fame these lies created. I'm telling you, these "statistical rankings" we've made are just bad all around.

    Nonetheless, statistics ain't all that bad! Hell, it was statisticians who caught the guy. That right there, the process of scientific reproducibility and its "checks and balances" system, is what makes science great. However, I think it comes too late: the checks usually kick in only after a paper is published. What if universities kept a staff of statisticians to come in before each major publication and verify the results? These instances cost lives (as your link to the NYT article so painfully pointed out). To me, it proves that statisticians aren't mere number crunchers to be sequestered in a cubicle at some "big data" firm, but quality-control experts who should hold permanent positions at universities to ensure researchers are putting out good stuff. Imagine that type of stats for good, for all!
