Popular science articles and books commonly misrepresent science, and the problem isn’t limited to popular publications: textbooks, peer-reviewed publications, and college courses sometimes promote misinformation. To avoid being bamboozled, think for yourself, or go to the source and evaluate the evidence for yourself. Science is hard; the methods and statistics used within and between scientific domains vary greatly. When evaluating studies, reviews, or research reports, people often glance only at the abstract. Sometimes this is enough to get a general overview, or at least to gather the information one is looking for; other times a thorough read and investigation of the paper is appropriate. Evaluating a paper, and determining its reliability and its types and levels of validity, is cognitively demanding. With a little education, including the appropriate mindware, a general understanding of both popular and scholarly science is attainable.
Mindware (a term coined by cognitive scientist David Perkins) is defined as rules, procedures, and other forms of knowledge that are stored in the brain and can be retrieved to make decisions and solve problems (Stanovich 2009). Scientific mindware involves knowledge structures that can be retrieved from memory when making decisions or judgments about scientific information (or information promoted or portrayed as scientific). It has two primary components: scientific literacy and scientific cognition (Hale 2018). In the context of my research, scientific literacy is synonymous with general scientific knowledge. This form of literacy is sometimes referred to as a type of derived scientific literacy (Norris and Phillips 2003). Various forms of scientific literacy are important, as are other science-related concepts. Scientific cognition involves multiple components and subcomponents (Feist 2006): philosophy of science, scientific methodology, quantitative/probabilistic reasoning, and elements of logic.
Quantitative research uses statistics, so a basic understanding of statistics is important when evaluating research. Discussions of research statistics generally involve two camps: frequentist and Bayesian. If you took a college statistics course, it was most likely based on frequentist methods. The differences between the two are beyond the scope of this article; for more, see Hale 2019. Consumers of science don’t have to be experts in statistics to evaluate research, but a basic understanding, including an understanding of limitations, is important. Statistics help organize results and reveal patterns in them. One of the key problems people have with interpreting statistics is called “person-who statistics” (Stanovich 2007).
Person-who statistics are situations in which well-established statistical trends are questioned because someone knows a “person who” went against the trend (Stanovich 2007). For example, a person might say, “Look at my grandpa. He is ninety years old, has been smoking since he was thirteen, and is still healthy,” implying on the basis of that anecdote that smoking is not bad for health. Such statements demonstrate a misunderstanding of what statistics imply. Statistics are taken from samples and describe averages or scores for a group. The statistics reported in the results sections of articles do not reveal individual scores, and individual scores will vary. Knowing someone who goes against a statistical trend does not mean the trend is invalid. When explaining this concept, I often refer to income averages. People understand that not everyone has an income equal to the average, and an individual whose income differs from the average does not make the reported average invalid. Researchers should do a much better job of explaining what the statistics in a study represent; without a clear description, there is considerable room for misinterpretation. When science describes, predicts, or explains something, it is understood that the conclusion is tentative and doesn’t hold true for everyone. How well the findings generalize to other contexts should always be considered.
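The income example can be sketched in a few lines of Python (the income figures below are hypothetical, chosen only to illustrate the point):

```python
# Hypothetical sample of incomes; the large outlier mimics real income data.
incomes = [32_000, 41_000, 38_000, 55_000, 47_000, 250_000]

# The reported statistic is a property of the group, not of any individual.
mean_income = sum(incomes) / len(incomes)

# No individual in the sample earns exactly the average, yet the average
# is still a valid summary of the group.
exact_matches = sum(1 for x in incomes if x == mean_income)

print(f"group mean: {mean_income:,.2f}")
print(f"individuals earning exactly the mean: {exact_matches}")
```

Every person in this sample "goes against" the average in the sense of not matching it, which is precisely why a single counterexample ("I know a person who...") cannot invalidate a group-level statistic.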
The Scientific Mind
Building a scientific mind requires attaining the right mindware. The benefits of building a scientific mind are broad. Being a scientific thinker allows one to read and understand scientific journal articles, helps distinguish science from pseudoscience, protects against charlatans, involves better general thinking skills, is essential to rational/critical thinking, and helps keep one informed on matters of policy relevant to science. Below are tips for acquiring the necessary mindware.
Learn the appropriate terminology. It is important to spend enough quality time learning relevant terminology. You won’t master terms from all areas of science, so emphasize deep learning of terms relevant to fields you’re most interested in. You can’t comprehend the material if you don’t understand the terms. My students often call me a semantic stickler—a label I embrace. It is important to use terms in a consistent way and consider their contextual nature.
Consider the source. Limit your science reading to reliable sources, but don’t make the mistake of assuming that anything from a reputable source is high quality. Even reliable sources sometimes report incorrect information, so evaluate each article, report, or commentary on its own merits. Even top-level journals sometimes publish articles that are later retracted due to methodological flaws, statistical errors, or other problems.
Use the expert-expert heuristic. A few years ago, when I was speaking with cognitive scientist Keith Stanovich (codesigner of The Rationality Quotient, author of What Intelligence Tests Miss), he suggested that when learning a new subject I should refer to the work of the field’s elite. There is a lot of information in most areas of science; instead of trying to read it all, go straight to the top and read the work of the field’s leading figures, and look at the references they provide. A little investigation will usually reveal who those people are: they are generally cited by other leading authorities, and their expertise becomes obvious after you read a few of their works. Read the work that experts in the field recommend.
Avoid the peer review myth. Peer review is important in the process of scientific publication; however, peer-reviewed publications are not always quality publications, and some peer-reviewed papers are retracted. The peer review myth occurs when an article is assumed to be high quality based solely on its appearing in a peer-reviewed publication. When evaluating scientific work, consider not only whether it is published in a peer-reviewed journal but also funding sources, study design, sample size, sampling error, study replication, conflicts of interest, measures of reliability and validity, reported limitations, and other possible criticisms of the study. Some good studies never appear in peer-reviewed publications, and some low-quality studies do.
Weigh the evidence. Consider both studies that support a claim and studies that do not. One study supporting a claim isn’t enough. When considering the preponderance of evidence, relevant studies and their value need to be weighed. This often requires an extensive literature search, and many people, including academics, fall short when it comes to weighing evidence. It isn’t reasonable to expect everyone to engage in this painstaking activity, but if you are writing, teaching, or lecturing on a topic, a wide literature search is necessary. Take your time; don’t rush it. I suggest looking first at systematic reviews and meta-analyses and then, if further investigation is needed, following some of their references.
Science is hard but learnable for most people. A comprehensive science education should involve general scientific knowledge and the thinking strategies needed to demonstrate scientific cognition. The cognitive skills underpinning scientific thinking are important and are needed to reach the apex of human thought and rational/critical thinking.
Feist, G.J. 2006. The Psychology of Science and the Origins of the Scientific Mind. New Haven, CT: Yale University Press.
Hale, J.P. 2018. Scientific cognition and scientific literacy. Kentucky Academy of Science Newsletter (Fall).
———. 2019. Research Statistics: Frequentist & Bayesian. Available online at http://jamiehalesblog.blogspot.com/2020/01/research-statistics-frequentist-bayesian.html.
Norris, S.P., and L.M. Phillips. 2003. How literacy in its fundamental sense is central to scientific literacy. Science Education 87(2): 224–240.
Stanovich, K. 2007. How to Think Straight about Psychology, 8th ed. New York, NY: Pearson.
———. 2009. What Intelligence Tests Miss: The Psychology of Rational Thought. London: Yale University Press.