Monday, April 11, 2016

Statistics helps explain challenges in cancer prevention research

Last Friday in the Winship Cancer Institute auditorium, Dr. Yang gave a talk on tea in cancer prevention. Below is one of his slides, which illustrates that tea might help prevent skin, oral, esophageal, lung, and other cancers.
It could be very wrong to say that tea definitely prevents these cancers in "potential" human patients, based on what has been taught in biostats class.  Thankfully, Dr. Yang did not make that statement in his talk; instead, he listed findings from research done across the globe testing ingredients in tea that might help prevent cancers in lab settings or clinical trials.  Even at the very end of his talk, Professor Yang did not jump to conclusions about the cancer prevention effects of tea.  This might sound frustrating to young researchers who have the ambition to prevent cancers someday.  But to me it was a good talk, once I combined biostats concepts with what Dr. Yang discussed and with what others in this field have been pursuing for many years.

The first thing I learnt was to keep both the scientific hypothesis and the statistical hypothesis in mind before you actually perform an experiment or start a research project.  It's easy to get lost as you gradually learn more about your research subject without a clear scientific question to answer.  For example, in cancer prevention research in a lab setting, if you want to test whether ingredient A lessens prostate tumor burden in a mouse model, you should stick to that question even if later in the research you find that ingredient A somehow magically decreased the mice's weight and may have effects in preventing obesity.  Suddenly changing your hypothesis in order to get your research published as soon as possible is the so-called HARKing (hypothesizing after results are known).  Done this way, there is a greater chance that you are biased toward false positive results, because the statistical decisions were not made carefully beforehand.
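The cost of HARKing can be made concrete with a quick calculation. The sketch below is a hypothetical setup of my own, not from the talk: it treats each additional outcome you peek at (tumor burden, body weight, and so on) as an independent test at a 0.05 significance level, and shows how fast the chance of at least one false positive grows when the hypothesis is chosen after looking at 20 outcomes.

```python
import random

random.seed(42)
ALPHA = 0.05
N_OUTCOMES = 20       # hypothetical: tumor burden, body weight, glucose, ...
N_EXPERIMENTS = 10_000

# Analytic family-wise error rate: with 20 independent null outcomes,
# P(at least one p < 0.05) = 1 - 0.95^20.
fwer = 1 - (1 - ALPHA) ** N_OUTCOMES
print(f"Analytic FWER : {fwer:.2f}")   # prints 0.64

# Monte Carlo check: under the null hypothesis, p-values are uniform
# on [0, 1], so a "significant" outcome is just random() < ALPHA.
false_hits = sum(
    any(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(N_EXPERIMENTS)
)
print(f"Simulated FWER: {false_hits / N_EXPERIMENTS:.2f}")
```

So even with no real effect anywhere, roughly two experiments in three will hand you at least one "publishable" p-value if you let the data pick the hypothesis.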

It's also critical to keep effect size in mind, rather than relying on the p-value alone, when you want to extrapolate a lab finding to clinical trials.  Nowadays we've seen many failures in clinical trials where the drug had been shown to have "significant" effects in lab research.  Part of the reason is that we treat the p-value as having magic power to draw conclusions while ignoring the effect size, especially in the case of cancer prevention.  Suppose 8 out of 10 mice in our lab setting responded to the drug and showed a delay in tumor formation (the treated group took five more days to develop a tumor of the same size).  A statistical test was run and gave a low p-value.  However, setting aside the fact that mice are different from humans, would the clinical data capture a difference corresponding to a five-day delay in mice?  Not quite.  Confounders in clinical trials are also difficult to control, which makes a prevention study even more challenging.
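To see why a low p-value alone doesn't guarantee a meaningful effect, here is a minimal sketch with made-up numbers (assuming the five-day delay above and a hypothetical 10-day standard deviation, using a normal approximation for the two-sample test). The standardized effect size (Cohen's d) stays fixed, while the p-value can be driven arbitrarily low just by increasing the sample size.

```python
import math

def two_sided_p_from_z(z):
    # Normal approximation: two-sided p-value for a z statistic.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical numbers: a 5-day delay in tumor onset, SD of 10 days.
effect_days, sd_days = 5.0, 10.0
cohens_d = effect_days / sd_days              # 0.5 -- fixed, whatever n is

for n_per_group in (10, 50, 500):
    se = sd_days * math.sqrt(2 / n_per_group)  # SE of the mean difference
    z = effect_days / se
    p = two_sided_p_from_z(z)
    print(f"n={n_per_group:4d}  d={cohens_d:.2f}  p={p:.4g}")
```

With 10 mice per group the same half-standard-deviation effect is not even "significant"; with 500 subjects per group it is overwhelmingly so. The p-value is answering "is there any difference at all?", not "is the difference big enough to matter in patients?".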

It's still questionable to persuade individuals to take a certain supplement or drink tea even if you've seen significant benefits in clinical trials.  A study in a population may not apply to every individual.  In one study presented by Dr. Yang, the clinical trials showed that the benefits could only be seen in females, not males, and the authors proposed that smoking might be the confounder.  If smoking really is the problem here, then drinking tea might not do any good for an individual who smokes.  We need to treat clinical data with caution before we draw any conclusions or approve any supplement that's said to prevent cancer.

It takes both statistics and sound judgement to do good science, especially in cancer research.
