Tuesday, January 19, 2016

Taking Science Public

            As I was reading through the posted articles, two in particular caught my attention because their topics are near and dear to my interests. I am interested in how the general population interacts with science and scientists, and how this perception can be shaped or warped. The articles posted on Vox and written by Julia Belluz took different angles on examining, or negotiating, the relationship between the lay and the lab. While I absolutely don’t deny the prevalence of irreproducibility in “science,” I’m not ready to hop on the hype train like some others who seem happy to disregard the myriad advances science has made despite it apparently being broken. One thing I liked from the articles was the frankness of the interviewee in the “Why you can't always believe what you read in scientific journals” piece. He spoke candidly about the politics of science, particularly in his last comment, which was only tangentially related to irreproducibility or “bad science” but actually highlighted one of the real problems with science: it’s done by humans. This seems like a far more foundational issue, and a more interesting conversation to have. The problem isn’t “peer review” or “good stats” (as someone who is good in lab and terrible at stats, I’ll assume that someone as good at stats as I am in lab would find it just as easy to “massage” the data and obtain the answer they desire); it’s that we are prideful, ambitious, and defensive at perfectly normal, human levels.

            I honestly have no segue into my next thoughts, but I didn’t want to go on for too long about something that wasn’t really related to the dialogue we’re trying to have. One thing I will note about Ms. Belluz (herself a decorated science journalist) is that she seems a little quick to redirect attention away from the individuals who disseminate the majority of these falsehoods or exaggerated claims: the journalists themselves. While I don’t believe it is ethical to continue portraying science to the public as an infallible discipline whose participants are entirely unfazed by their own human emotions, I also think it is important to consider how we phrase and shape arguments out of whatever statistics or data we have. The example I alluded to earlier is a clear instance of presentation dictating the takeaway. Belluz doesn’t say something like: “It is true that in about 30% of the cases where these specific phrases were used, it was doctors using the terms, sometimes unjustifiably. However, dwarfing that 30%, journalists accounted for 55% of the cases,” a short paragraph I could write in a manner that seems a little biased in favor of scientists. Instead, she begins the paragraph by getting that 55% out in the open, then spends the rest of the paragraph elaborating on the smaller share of cases perpetrated by doctors, leaving readers with the image of doctors “medically overhyping,” not journalists. This is then, of course, laid to rest with a warning statement addressing the grave danger that is medical overhype. The order in which we present data, the careful phrasing we use, and the overall presentation of specific data all have a significant effect on the takeaway message a reader gets.
One question I struggle with after reading some of the articles is: how can we have an honest discussion about the realities of science and how reliable or unreliable studies are, without creating a million tiny Jenny McCarthys? Is the nature of public debate and discussion nuanced enough to handle the realities of scientific research, many of which have been true for centuries? How can the largest proportion of cases, perpetrated by journalists, be policed? Should they be?


  1. So what about the biases that come with the fame of a name? Take Jenny McCarthy: she dropped out of college to work for Playboy and lacks even a bachelor's degree, yet she appears confident enough in her understanding of immunology, autism, and vaccinology to argue against scientific experts and vaccine measures that have been successful for decades. Somehow, she has been able to convince a portion of the public to support her anti-vaccine rhetoric. How? Because of her fame; her stance carries weight simply because most people have heard of Jenny McCarthy but not of Edward Jenner, Louis Pasteur, or Maurice Hilleman. When "important" people outside the scientific domain try to interpret the results obtained by professionals, the public is confused about whom to believe. And let's not kid ourselves: Americans love a good conspiracy theory. With the healthcare system nickel-and-diming us, it is easy to think of it as corrupt, so the public naturally has a bias against it. The root of the anti-vaccine debate isn't the healthcare system, though; it's laboratory science. Anti-vaccinators target vaccines because they want to blame the healthcare industry, but they're hurting the wrong enemy: laboratory science. Maybe what we need is a better liaison between the jargon of scientific journals and the simplicity required for public understanding, perhaps more people like Julia Belluz. Perhaps researchers also need better training in conveying their ideas in lay terminology to better spread scientific knowledge.

  2. Your post reminds me of some of the "medical overhype" I encountered when looking for a paper for our BadStats assignment. The paper I chose had absolutely horrific methods that rendered its results uninterpretable, and yet it had been cited by over two dozen news outlets in the past week and a half. It astounded me that so many people will blindly publish a news article on a scientific paper without actually knowing about the methods (good or bad) that went into it. As another example, someone else in our small group wrote about an anti-vaccine paper with HORRIBLE methods that absolutely twisted the preliminary data, claiming negative effects of vaccines that clearly didn't exist in the data set under proper analysis. I think we need some way to better educate journalists on how to read basic science articles and to look for weak methods and bias in the research. Even a basic understanding of how to read and interpret a scientific article would help journalism so much. Alternatively, maybe we need more scientists to go into popular science writing. I don't know how much the general public would care about the credentials of someone commenting on popular science, but I know I sure would. There's definitely no easy answer for how to stop medical sensationalism; we can't risk the public thinking that good science is bad and bad science is good. But a conversation across the lines of journalism and science may be a good place to start.

  3. I agree that in many ways the mass media plays a pivotal role in promoting "medical overhype"; however, I do not agree that the blame rests solely with journalists. Doctors can play a role as well, since they may be biased toward believing their treatment is the best method for fighting a disease. In the early days of treating breast cancer, the radical mastectomy was considered the only way to combat the disease, and as the name implies, the procedure was extremely invasive. Yet doctors continued to believe it was the best option even when presented with a superior alternative. There is a lot of name recognition and prestige in designing a treatment for a disease, so doctors will laud their treatments to get many people to start using them. Journalists hear a doctor talk about how great a cure is and believe it, since it comes from a doctor, and report that science has discovered the new cure for cancer. Even in this instance, though, it is not just the doctor who is at fault. The journalist is also expected to follow good journalistic practice and not rely on a single source. Just as scientists should not rely on one methodology to test a hypothesis, journalists should take multiple approaches to scrutinize the authenticity of the data presented; they need independent researchers to verify these claims and analyze the data itself. Many journalists do not have the scientific wherewithal to properly analyze scientific papers, and because of this, "medical overhype" is still being reported. Given the time constraints and turnover of news headlines, I'm not convinced that this problem will dissipate.