Unfollow The Science
Scientists have always lamented the state of scientific communication.
I was browsing old issues of Science magazine, as one does, and came across a cluster of special correspondences in 1957 declaring trust in science at an all-time low. It’s funny to read these complaints today. The post-war period was a time of unprecedented scientific innovation. Fueled by a grand vision of Western soft power and competition with communism, academic science and engineering flourished. By 1957, computers were entering mass production, air travel had expanded to the general public, and there were countless new therapies for debilitating diseases like cancer. We were making all sorts of fundamental breakthroughs, including discovering quantum electrodynamics and the structure of DNA. Modern researchers often point to these glory days as a time when science had nothing but low-hanging fruit and the public trusted its institutions. And yet, reading these articles, you’d think the 1950s were a scientific dark age.
Dael Wolfle penned “Science and Public Understanding.” Wolfle was an executive officer at the American Association for the Advancement of Science, and his editorial has all of the lofty yet empty rhetoric of the bureaucrat. He doesn’t list any specific complaints but articulates a timeless conventional wisdom. First, he laments that the common people don’t appreciate the age of miracles in which they live. They fail to bask in the glory of “shirts that need no ironing, socks that do not wear out, a stove that lights its own fire and extinguishes that fire when the meal is cooked, a shiny box for the kitchen in which food can be kept fresh for long periods, spectacles with a hearing aid concealed in the temple.” Wolfle worries that this disconnect between science and the public arises because science is now too complicated for any individual to understand. He thinks we can remedy this by improving scientific communication by both scientists and journalists. He advocates for more science education in schools and against specialized degrees where scientists can avoid the humanities and humanists can avoid the sciences.
Fred Decker, a professor of physics at Oregon State University in Corvallis, has a more pointed critique in “Scientific Communications Should Be Improved.” I mean, how can you argue with that title? Decker recalls the golden age of scientific communication: “Authentic, complete, prompt, and understandable reports of scientific developments have always been needed, and in the past they have always inestimably aided scientific achievement.” He doesn’t give any examples of the past glory, of course. But he does point to some of the big scientific scandals of his day. “Controversy has raged over such subjects as battery additives, lung cancer, fluoridation, cloud seeding, the Salk vaccine, and radiation danger.”
Decker, whose research focused on the physics of weather, was particularly irked by claims about cloud seeding. Weather modification was a hot topic in the 1950s. Automatic control was finding all sorts of unexpected fruitful applications, and people thought that weather and climate might also be controllable at the planetary scale. Combining improved weather prediction by computer with improved weather control by cloud seeding could lead to a global HVAC system with improved climate comfort and better conditions for agriculture. This sounds like science fiction today, but it wasn’t a niche idea in the 1950s. Many prominent scientists, including John von Neumann, were proponents of this weather control program. But weather modification technology didn’t pan out. Decker, writing in the middle of this controversy (a “replication crisis,” if you will), calls out what he sees as the main problems causing over-promising in weather modification.
His main argument is for more rigor. More rigor and care in experiment design. More rigor in establishing benchmarks and metrics of progress. More rigor in scientific publication. He wants all data to be out in the open so even unpromising results can be publicly assessed. He wants a moratorium on the term “statistical significance.” He wants clearer articulation of the uncertainty in experimental outcomes and less media focus on “breakthroughs.” Like Wolfle’s calls for better education, these are arguments we’re still having: I can find you dozens of articles from the last week pleading for higher standards of scientific rigor.
My favorite passage in Decker’s piece is his timeless defense of the critic:
“Every scientist should carry the message that there are in reality the following three categories of individuals in any scientific controversy over conclusions (i) the enthusiastic innovators; (ii) those who conclude that the innovators are wrong; and (iii) those who insist on obtaining more definitive data before they join either of the first two groups. A scientific tragedy today is that often the third group is ignored and carelessly classed with the second group.”
The more things change…
Perhaps the most worrying of the three editorials is “Safety Testing of Poliomyelitis Vaccine” by Paul Meier. Meier, a biostatistician at Johns Hopkins, unpacks what went wrong between the field trials of the Salk vaccine and the subsequent widespread delivery of the vaccine. Notably, many children contracted polio because defective lots of vaccine manufactured by Cutter Laboratories contained live polio virus. Meier notes that while the trials of the vaccine were a success, they ended with uncertainty about the proper manufacturing process and required dosage levels for the Salk vaccine. Yet the public communication only asserted that the vaccine was an unmitigated success. He notes that information packets from the National Foundation for Infantile Paralysis that were sent to pediatricians and family care doctors remarkably lacked nuance:
“All doubts about the safety of the vaccine are dismissed. It is said to be ‘as safe as any biologic product can possibly be,’ and it is stated that the safety of the vaccine ‘has become a question for historians rather than clinicians.’ To the query, ‘What is the estimated calculated risk of inducing poliomyelitis infection by the inoculation of vaccine under present safety standards?’ the foundation reply is, ‘None. No risk.’ Despite the unresolved questions on the safety of the vaccine and the very small risk to any individual of acquiring paralytic polio in a single season, the foundation described the need for vaccinating as many children as possible before the 1956 polio season as ‘akin to a medical emergency.’”
This lack of transparency enabled sloppy manufacturing and harmed many children. In turn, it severely damaged public trust in vaccination. Meier concludes that “failure adequately to inform the public, more particularly the physicians who must largely accept the responsibility for advising the rest of the public, seems likely to lead to the deterioration of the confidence and respect which scientists should enjoy.”
He’s not wrong, of course. These 70-year-old criticisms ring true today. We’ve never been good at communicating scientific nuance, nor at establishing systems of scientific rigor. We could write the same thing about behavioral economics or superconductivity or Alzheimer’s research or, of course, anything related to COVID-19. It’s funny how we can mouth the same platitudes for a century and still think the same solutions are what’s needed.
But perhaps these booms and busts in confidence are part of science. Overpromising and underdelivering killed cloud seeding. The Cutter Incident transformed vaccine safety standards and led to a more effective polio vaccine. Perhaps part of the scientific method is getting way out over our skis only to be humbled back to the drawing board. And perhaps part of the scientific method is complaining about how bad we are at explaining our scientific results.