While meeting with a visiting speaker last week, the subject came up of studying biological pathways that account for less than 1% of cases of a given disease. For example, schizophrenia is a very heterogeneous disease with multiple proposed risk factors, and genome-wide association studies have been conducted to identify specific ones. Although these studies have succeeded in identifying candidate molecules, such as microRNA-137 and the complement gene C4, how helpful can they be if each contributes to only a small percentage of schizophrenia cases? This made me think of a question that keeps coming up in this course: when is clinical significance more important than statistical significance? Though the studies have statistically significant results, how significant are they to the health field if they cannot help patients with schizophrenia?
I believe this problem originates, at least in part, from the drive to secure NIH funding. When applying for grants, your project becomes much more appealing when a disease is attached to it. Thus, a protein that most of the scientific community ignored before becomes much more interesting when it is tied to some disease, no matter how small its contribution to risk. As soon as you have a study showing a statistically significant result for a pathway implicated in a disease, your research gets attention. But how much does that matter in the scheme of the actual disease when the mutation you are observing accounts for less than 5% of cases?
While this may not be considered a form of "bias," I believe that the pressure to put a health-related spin on research is seriously affecting the way we conduct it. Overreaching conclusions can mislead readers and misdirect future research projects. These studies may have the right numbers and statistics, but are they asking the right scientific questions?