This is fantastic.
btw have you read "Radical Uncertainty" by Kay and King? I haven't read it myself yet (it's on the list...) but I got the recommendation from someone in the context of similar/related discussions. If I got his summary right, one of the main claims in that book is against the idea that "quantifying" uncertainty about policy questions in some scientific/mathematical way is always the way to go.
Will check it out!
Ben,
You might read Federalist 37 on this. In short, Madison: expecting uniform decisions from people with diverse interests concerning difficult and complex issues is usually folly. Falling back on "our values" is highly unlikely to lead to anything approaching a disinterested decision.
Madison is right about this. Our "common sense" is almost never common or analytically sophisticated enough to be useful. (For the following, I'm doing a straight lift from Deborah Mayo.) That's why we find ourselves falling back on science. It has the only real possibility of reaching useful decisions about complex matters. Sure, there are human interests involved and controversy is central to the entire endeavor. But … we learn from the controversy; indeed, that's what science is about generally. Further, it is only by exploring unanswerable questions that we gain the capability to produce the measures and procedures we need to make the questions answerable.
Ok, Mayo off. But I think she's right and Meehl's wrong. On the whole idea of publishing too much and using techniques incorrectly (significance testing in particular), we're both with you.
I dunno, this seems overly defeatist to me. Are you and Meehl and Sarewitz honestly going with a strong argument like "Once any question is political, it's automatically unanswerable?" Weaker versions seem obviously true, but the full version seems a lot to swallow.
Isn't every possible safety issue that might be regulated a political question? Smoking causes cancer? Seatbelts and airbags save lives?
I'd be more inclined to say that a lot of our political questions are not answerable, and making them political doesn't make them any more answerable --- it makes them somewhat less answerable. But there are still some questions that are answerable even when they're politicized?
The weaker version I subscribe to is: "Once any question is political, it's automatically unanswerable by science." Public health questions are actually fraught with complicated evidence bases. Public health strategies are always value- and ought-laden.
People have written excellent books about why the arguments about the evidence for "smoking causes cancer" were so heated and politically fraught. (Like this one: https://www.merchantsofdoubt.org/)
I'm confused why you believe this, rather than a much weaker statement like "Once any question is political, it's harder to address with science than it would be otherwise?"
It seems like today, the scientific community almost universally agrees that smoking causes lung cancer. The question was politicized, so it maybe took much longer to answer than if it hadn't been, but we got there. This seems like an obvious counterexample to your strong statement?
I think I'm agreeing with the overall message of this series --- the smoking question was ultimately answerable only because both the relevant population size and the effect size were huge --- it didn't take fancy statistics or particularly delicate experimental designs to determine smoking caused lung cancer, right?
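To put rough numbers on that intuition --- all figures below are invented for illustration, not from any actual study --- here's a minimal sketch in Python of why an effect that large needs no fancy statistics: the crudest two-proportion test is already decisive.

```python
# Minimal sketch with invented, order-of-magnitude numbers (NOT real study
# data): when the effect is this large, a simple two-by-two comparison
# settles the question without any delicate design.
from scipy import stats

smokers_cases, smokers_n = 200, 10_000       # hypothetical lung-cancer cases
nonsmokers_cases, nonsmokers_n = 10, 10_000  # hypothetical comparison group

table = [[smokers_cases, smokers_n - smokers_cases],
         [nonsmokers_cases, nonsmokers_n - nonsmokers_cases]]

risk_ratio = (smokers_cases / smokers_n) / (nonsmokers_cases / nonsmokers_n)
chi2, p, dof, _ = stats.chi2_contingency(table)

print(f"risk ratio ~ {risk_ratio:.0f}x, chi2 = {chi2:.1f}, p = {p:.2e}")
# With a ~20x risk ratio and thousands of subjects, even the crudest test is
# unambiguous: no borderline p-values, no modeling assumptions to argue over.
```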
I think the point Ben is making (different from the one Meehl makes) is that there is a "nonpolitical" question:
"Does smoking increase the risk of lung cancer?"
And a "political" question:
"What restrictions should we put on smoking?"
Where the first is answerable by science, and the second is a values / ethics question.
I disagree with this characterization for the following reason: the distinction Ben is making is not between political and nonpolitical, but between descriptive and prescriptive. Ben believes that public health questions automatically become prescriptive, but that is not generally true: the health risks of alcohol are widely known and there is little appetite for further regulation.
I believe Ben (and others who believe this) would respond that public health questions are primarily researched to influence policy decisions, and the rot of bias climbs up the tree. This motte to the prior bailey (something like "once any question is political, value biases inevitably affect judgement") is also, I think, indefensible: plenty of researchers are biased in favor of their work, and the goal of scientific processes is to weed that out. These are problems with existing institutions, not fundamental obstacles to answering those questions.
What Meehl is saying is a much more specific point: in any field, you are limited by the tools you are given, and this applies as much to the soft sciences as to the hard. One of the big themes of the course is that null hypothesis testing is a tool of much more limited utility than psychology believed at the time, and that many of the questions pursued by psychology then were searching for effects that were only detectable with generous assumptions for, and questionable usage of, null hypothesis testing.
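To make that concrete, here's a toy simulation (arbitrary parameters and simulated data, just a sketch of the regime Meehl describes, not anything from his papers): with a small true effect and a modest sample, a standard t-test is badly underpowered, and the runs that do cross p < .05 systematically exaggerate the effect.

```python
# Hedged illustration (simulated data, arbitrary parameters): small true
# effect + modest sample = low power, and the "significant" subset of runs
# overestimates the true effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, runs = 0.2, 30, 5_000   # small standardized effect, n per group

sig_effects, hits = [], 0
for _ in range(runs):
    a = rng.normal(true_d, 1, n)   # "treatment" group
    b = rng.normal(0.0, 1, n)      # "control" group
    t, p = stats.ttest_ind(a, b)
    if p < 0.05:
        hits += 1
        sig_effects.append(a.mean() - b.mean())

print(f"power ~ {hits / runs:.0%}")   # roughly 10-15% in this regime
print(f"mean 'significant' effect ~ {np.mean(sig_effects):.2f} vs true {true_d}")
# The significant subset exaggerates the true effect: detection here depends
# on generous assumptions, which is exactly the limitation of the tool.
```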
I'm curious if any of you have read the book by Mark Shapiro called Exposed? It's an interesting one where he talks about how the European Union actually put together a regulatory process that documented toxic substances and then actually avoided them because (and this is the crux of his book) most of the European Union has single-payer healthcare. As the payer, the governments are suddenly concerned about how much it will cost to keep people healthy. The most important part, I think, is that it shows America's business-first approach has left its citizens exposed. When our regulatory system for toxics was put in place, a huge portion of the existing chemicals were simply grandfathered in, even though they were toxic. What a regulatory system! I don't think the European Union's approach was hyper-political. I think it was just about dollars and cents. But it should really be about human and environmental health. Boy, what a concept.
I think perhaps you're right that it's harder to address things with science when they become political, but I still think it's important.
I can think of one other example. A scientist at UC Berkeley did a statistical analysis of the various studies done on the health impacts of cell phones. Looking at all the research together, there was nothing obvious, but when he divided the research between industry-supported and independent studies, he saw a significant problem. Then he looked into how the industry research was conducted and found flaws in subject selection and approach in many of the industry studies.
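The basic move there is simple enough to sketch (the numbers below are hypothetical, not the actual Berkeley analysis): pool the per-study effect estimates separately by funding source and compare the subgroup averages.

```python
# Hypothetical sketch (invented per-study effect estimates): splitting a
# combined "nothing obvious" evidence base by funding source can reveal
# two cleanly separated subgroups.
import numpy as np

# made-up per-study effect estimates (e.g., log odds ratios)
industry    = np.array([-0.05, 0.02, -0.10, 0.00, -0.08])
independent = np.array([ 0.30, 0.45,  0.20, 0.38,  0.25])

for label, effects in [("industry", industry), ("independent", independent)]:
    mean = effects.mean()
    se = effects.std(ddof=1) / np.sqrt(len(effects))
    print(f"{label:>11}: pooled effect {mean:+.2f} +/- {1.96 * se:.2f}")
# If the two pooled estimates barely overlap, funding source is doing real
# work in the combined picture --- the pattern described above.
```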
Peer review is crucial, but regulation is also important. The problem is that even the regulation can get lobbied and manipulated. You're right, it's difficult, but I still think it's crucial.
You just described what Herbert Simon described in 1969! Design is the science of the artificial. While natural science is the science of what the world is, design is the science of what the world ought to be. Great article as usual, Ben.
There was a policymaker, Vannevar Bush, who in 1945 wrote the text "As We May Think". He imagined a future world with a technological device that would make it easier to transmit information from one domain to another. A lot of the technologies he imagined are eerily similar to, and predictive of, what we have now. I'd argue that this wasn't a coincidence. Bush was not only a scientist but also an administrator working in the White House with a significant amount of influence for a certain period. He made policy decisions based on his own vision... which ended up designing some of the foundation of the US scientific infrastructure we still live in today (albeit much worse funded than in the past). In that, he as a scientist decided what the world ought to be based on his own ethics and vision. If you read the text, there weren't many citations or "factual claims" in there! It was purely speculative.
I mean, Bush is the reason there's an NSF. Probably the most influential person in setting up government funding of science in the US. This is his manifesto: https://www.nsf.gov/od/lpa/nsf50/vbush1945.htm
I was an active scientist in my early life and had some wonderful exposure to some excellent researchers and teachers.
While in school I got a job working as a tech assistant at Scripps Institution of Oceanography for a graduate student, and he was quite influential to my scientific development. He was following in the footsteps of his advisor, who believed that the best way to understand is to observe, form an idea, and then perturb the system and see what happens. What follows is a process of exploring, and I watched my associate learn about, in this case, the benthic community off the coast of La Jolla. He discovered an interesting relationship between arthropods and bottom-feeding fish. While he was working on his thesis, he got a job with Shell Oil Company to look at the organisms beneath an ocean oil platform. Because anything like that in the ocean is essentially an artificial reef, there is an abundance of life, and that's all they wanted to know. But my friend, who is a curious man, wanted to understand what was going on with that community beyond the oil well. They didn't want to do that. They didn't want to know if there was something going on that could be detrimental. My friend was simply interested in understanding the system.
As a young student, I heard lots of interesting ideas that later I discovered were not well thought out. This was the time of expanded molecular biology research.
Frankly, the most predominant thing that sticks in my mind from that time is that scientists should have training in ethics. They should have to understand that they may develop something that carries great danger: for example, the plastics that were such wonderful things at the time and are now ominously everywhere, or the ever-present chemicals used for nonstick pans that were known to be in people's blood early on, and yet the company and the scientists continued.
Some of this stuff is obvious. Industry scientists know when they are walking on dangerous ground.
Your article talks about this directly in terms of theories, which I would call hypotheses. Often scientists have a hypothesis and go looking for organisms and a system to test it in. This is a backwards approach.
I have to say your article was very stimulating and I’m looking forward to following. Thank you.
Wouldn't we go with the theory that is most likely to be true, not necessarily the one we want to be true? Surely we can never be scientifically dispositive, but we can nevertheless reason that some theories are more likely to be true than others. These are the ones we'd operationalize.
The thing about theories is they rely on axioms, and these axioms are asserted. If you question the foundation, then the whole theory can be doubted. But axioms, while not true or false, can be reasonable or unreasonable, and we can use this basis to select the most reasonable theory.
So how I see it (and I am not Ben, but I love these types of discussions) is that theory informs what the world is.
At the end of the day, we make "scientific decisions" that impact other people's lives not only based on theory but also based on ethics, values, and our own philosophy of what the world should be. Theory can't predict what the world will be because we haven't lived in that world yet. History may inform what could happen if we choose to make a decision (and the theory may be informed by history as well), but ultimately, we can twist history and theory to mold around our own philosophy and political beliefs about what the world should be.
My personal way of thinking about this is that performativity is indeed very powerful but also has its limits. We're in a perpetual push-and-pull between our ability to reshape the world and the world's materiality pushing back on us. Sometimes we can have a good idea of how the world will respond using scientific methods, but there is always the possibility (or probability) of surprise. Where this balance lies depends a lot on the specific arena under discussion, and lends itself more to case-by-case nuance.
Altogether I think it's important both to claim our own agency and to be respectful of the rest of the world's agency too, and that we don't have unfettered power to shape it in our desired image (which would also be a very frightening/authoritarian thing in its own way). (To be clear I'm not saying that's what Kevin was saying, but I think this is one of those things that can cause people to talk past each other.) Our scientific theories/arguments always have built-in "oughts" AND the world won't always listen to them...
Yes, that’s exactly it. I wrote something along these lines in my very first post on substack: https://realizable.substack.com/p/engineering
Love the point that this motivates the need for humanities training as part of engineering! One of the lines reminded me of the great Latour quote: "reality, as the Latin word res indicates, is what resists."