
How do we arrive at scientific "fact" - what are our responsibilities communicating with society?

  • Lewis SURE Program
  • Jul 31, 2019
  • 5 min read

-Dr. Sarah Powers, Biology


Earlier this month, the New York Times published an article, “The 5G Health Hazard That Isn’t,” which caused me to stop and think about the responsibility attributed to me because my education gives me the credential of “scientist.” I will let you read the article on your own, as my intention is not to get on a soapbox to weigh in on whether or not people should have health concerns about the implementation of 5G networks. I did, however, find myself contemplating the concept of authority – ultimately, we often look to others to tell us how we should assess things we don’t know a lot about. Knowing that this is how humans function, when members of society rely on “someone who knows” to distill information, how do we make sure that the conveyed message is really representative of “fact,” and what credentials are necessary to define someone as an “expert”? Perhaps more pertinent to each of us: how careful are we to invoke our authority as “scientist” only when we can truly speak as content experts, while avoiding having our opinions become “fact”?


If you think back to your early days of science education, before you learned the process of practicing the scientific method – asking a question, gathering evidence to better understand that thing in question, and then interpreting meaning from your observations – you likely relied on textbooks and your teachers to present you with knowledge. There was a certain level of assumption built into those lessons; probably without even questioning it, you (and your classmates) had faith that the information presented to you was true.


We learned that if you hold a baseball in your hand at shoulder height and then let go, it will fall to the ground, rather than float up into the sky. Did you learn this concept in a context where your teacher, instead of specifying a baseball, claimed that if you held any object in your hand and let go, it would fall to the ground? Perhaps you were content to take this lesson at face value, or maybe you even had the opportunity to test it out with some different objects – a baseball, a basketball, a paperclip, a feather – and the observations you made helped solidify this concept as truth. But what would have happened if you were handed a helium-filled balloon and told to let go? If you had never encountered one before, how would this observation have impacted your ability to trust the statement that all objects will fall to the ground?


Fast forward to your current research experience. Hopefully, your college coursework has helped you to realize that those “facts” you learned back in grammar school represent conclusions made after gathering a lot of evidence. Rather than getting bogged down in complex calculations or overwhelming sets of data, young students are often taught about key concepts as fundamental truths we can trust because they were discovered by scientists, and subsequently supported by additional scientific research. It is only later, once we’ve learned additional vocabulary or analytical abilities, that we tend to return to the data supporting those key concepts.


In your discipline of choice, you’re now working on adding to that narrative, but you’re not starting from scratch. At the beginning of the summer, we spent some time discussing literature searches and the importance of gleaning information from reputable sources, so that you can be sure you are building on a strong foundation. Here, too, we are making big assumptions. We put faith in authors submitting truthful observations in the work they are attempting to publish, and we rely on the peer review process to critically tease apart quality reports from the manuscripts that are incomplete, contain misinterpretations of data, or are otherwise problematic. For the unfortunate instances when fraudulent work has made it through to publication, we rely on continued research in that discipline to catch and report concerns, and on the publishing entity to oversee retractions and make known that integrity has been compromised. As a scientist, you are learning that what we know about your topic of choice is fluid, and that we need to make adjustments to what we accept as “fact” as more information becomes available.


Unfortunately, we can’t all be subject experts in everything. I can appreciate that it is pretty cool that the Event Horizon Telescope allowed scientists to image a black hole, but I don’t have the training or expertise to even start evaluating the science that made this possible. I can’t sit down and read the technical publication behind this work – I am making an assumption that the reports generated for consumption by the general public are a good summary and representation of the scientific content. What if it later comes to light that there was a fundamental flaw in the process that led to this story being the big media hit that it was? If I use a Google search to try to return to this topic later, will I be alerted to the scientific revision, or will the most prominent search results always take me back to the initial release information? Here we get to the disconnect between how “fact” is known within the scientific community, versus “fact” that is known by society in general.


If I do put on my “authority-as-a-scientist” hat (given my training as an Immunologist, I believe it is fair to claim some expertise on the subject of vaccines), I can point to an instance of misrepresentation that has led to tangible consequences. In 1998, a publication in the journal Lancet made an association between the measles, mumps, and rubella (MMR) vaccine and autism. The paper has since been fully retracted as fraudulent (the post-publication evaluation of the article is summarized here), but the original study remains foundational evidence for the modern anti-vaccine movement. What is the consequence? Within the United States, we know that the rate of vaccination with MMR has declined, and there has been a significant increase in the number of measles diagnoses – measles was declared an eliminated disease in the United States in 2000, but so far this year (January 1 through July 18, 2019) there have been 1,148 confirmed cases across 30 states.


So, back to the original concern. How do I make sure that I am a good steward of scientific information and avoid projecting ideas as “fact” when they really aren’t? Well, part of that is easy – given that I don’t know anything about black holes, for instance, I won’t be calling a press conference to talk about that subject any time soon. But, when the topic is more closely related to something that I do identify with as “expert,” are there things I might share as an opinion that others may interpret as fact? For instance, if I casually mention to a friend that I didn’t get the flu vaccine this year, does that convey information I didn’t intend? The reality may be that I didn’t get vaccinated because I’m lazy and just didn’t take the time to go and get the shot – but might my statement be interpreted as “there must be some reason for not getting the flu vaccine, so I won’t either”? I can’t possibly forecast how everyone will always interpret things I say, but I can make an attempt to consider how I share information, and be intentional about when I speak with a voice of authority. Have you considered how you have used authority to convey information? How can you make sure to distinguish between fact and opinion?


©2019 by Lewis University.
