Interesting piece!
I'm a little unclear why we'd call this a polling failure. Nate Silver did in fact suggest that the single most likely outcome was Trump winning every swing state (https://www.newsweek.com/donald-trump-kamala-harris-polls-swing-states-1974158). You seem to be arguing that because Silver and other pollsters predicted the election as very close, one party winning all the swing states indicates a "failure", but Silver at least clearly understood that the biggest source of randomness was the latent bias the polls had no way of capturing, and expected that bias to be strongly correlated across swing states.
I'd also object to the sentence "The ambitious program to roll back the administrative state got a democratic mandate from the most diverse coalition a modern Republican has ever amassed." The popular vote margin here was about 1.6%, which makes it the tightest popular vote margin since 2000 and the second tightest since 1970 (https://en.wikipedia.org/wiki/List_of_United_States_presidential_elections_by_popular_vote_margin; the 1960s were tighter). The Republicans are of course turning this into a "broad mandate" claim, but it's nothing like (say) 1984, when Reagan won the popular vote for his second term by 18%.
There's something important and correct about what you're saying, which is broadly that these polls aren't very useful and we've turned them into a big stupid circus. Ultimately, noisy measurements of a number (the percentage of Republican votes) are pretty low value when all we care about is one bit of information, whether that number is above a threshold (50%), and the measurement noise is large compared to the distance to that threshold; that's the situation we were in for this race. And the right answer is to pay less attention to them, not to try to read the tea leaves. I certainly agree this isn't supporting substantive politics or an informed electorate.
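To make that concrete with a toy example (my own made-up numbers, nothing from the post or from any forecaster): if the true Republican share were 51% and total polling error were around 3 points, a single poll would barely beat a coin flip on the one bit we actually care about.

```python
# Minimal sketch of the signal-to-noise point above, with assumed numbers.
import numpy as np

rng = np.random.default_rng(0)
true_share = 0.51        # hypothetical true two-party share (assumption)
total_error_sd = 0.03    # sampling + nonresponse + weighting error (assumption)

# Simulate many polls of a race that is truly 51/49.
polls = true_share + rng.normal(0, total_error_sd, size=100_000)
correct_side = (polls > 0.5).mean()
print(f"fraction of simulated polls on the correct side of 50%: {correct_side:.2f}")
# Comes out around 0.63: under these assumptions, a poll answers the
# above-or-below-50% question correctly only about 63% of the time.
```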
I'm ultimately agnostic on your claim that weighting renders polling useless. Again, at the object level, Silver suggested the actual outcome as most likely, exactly through his fancypants weighting process. (I don't think he or Gelman would actually make a claim of "neutral objectivity"; I think they'd agree they're engaging in an at-best semi-scientific guessing process?) In this case, the weighting doesn't get you a clear answer, but it's not obviously not working; it seems plausible to me that if the election were a little less close some of these methods might "amplify signal"? But we of course have no way of knowing.
Certainly I agree that everyone is paying way too much attention to the polls.
Two clarifications:
- all we meant by "democratic mandate" is "the republicans won the election, president, house, and senate." I'm bad at predictions, but I would not at all be shocked if the Democratic party wins the next one.
- the weighting we discuss here is done by the pollsters, not the forecasters. it's post-stratification based on pollster priors on electorate composition. every poll does it. though our caricature is extreme, this new york times article shows you can get widely varying estimates by applying different pollsters' weightings:
https://www.nytimes.com/interactive/2016/09/20/upshot/the-error-the-polling-world-rarely-talks-about.html
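To illustrate the effect that article demonstrates, here's a toy sketch with entirely made-up cell means and turnout priors (not the Times's data, and sticking with the party-ID caricature for simplicity; real pollsters weight on finer demographic cells): the same raw responses yield different toplines depending on which electorate composition the pollster reweights to.

```python
# Toy post-stratification sketch: identical raw data, different assumed
# electorates, different toplines. All numbers are made up for illustration.

# Support for candidate R within each party-ID cell of the raw sample.
cell_means = {"R": 0.92, "D": 0.05, "I": 0.48}
# Composition of the people who actually answered the survey.
raw_composition = {"R": 0.30, "D": 0.38, "I": 0.32}

def poststratify(means, assumed_electorate):
    """Reweight cell means to an assumed electorate composition."""
    return sum(means[c] * w for c, w in assumed_electorate.items())

pollster_A = {"R": 0.33, "D": 0.34, "I": 0.33}  # one prior on who turns out
pollster_B = {"R": 0.36, "D": 0.31, "I": 0.33}  # a slightly different prior

print("unweighted:", round(poststratify(cell_means, raw_composition), 3))
print("pollster A:", round(poststratify(cell_means, pollster_A), 3))
print("pollster B:", round(poststratify(cell_means, pollster_B), 3))
# Prints roughly 0.449, 0.479, and 0.505: the two pollster priors disagree by
# about 2.5 points on the topline, and both differ from the raw sample mean.
```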
As to your point about forecasting accuracy, I respectfully disagree. I'm against looking at histograms to pull out ways that data infotainment was retrospectively right. This includes looking back and declaring that Nate Silver was talking about the mode the entire time. This is what fortune tellers do.
Can you say a little more on Nate Silver? The article I linked where he claimed that Trump winning all swing states was the most likely outcome is from two weeks *before* the election; it's not a retrospective claim. Are you saying Nate Silver was basically doing something equivalent to astrology? How could we even tell?
My take was that Silver was basically saying, consistently, before the election, "Hey, this is close, I can't really tell who's going to win, the outcome is dominated by randomness I don't have a handle on, and I do think it's quite likely that the bias is highly correlated across swing states." To me this seems like a moderately epistemically humble stance, and the actual outcome was moderate evidence in favor of its reasonableness. If Silver had said "there's a 75% chance that Harris is going to get 55% or more of the popular vote" and we got this outcome, I'd say Silver was full of it; if he had given these predictions and Harris had gotten 55% of the popular vote, I'd say Silver was full of it. We're not in either of those situations. I think Silver's predictions are basically reasonable, and the problem is that people want accuracy and certainty where no accuracy or certainty is to be had. But at this point I'm not sure what you think.
If you want me to be nice to Silver, I will say that this time, on the morning of election day, he said he didn't know what was going to happen.
But you didn't need hundreds of polls, daily updates, and millions of simulations to come to the conclusion that the electoral college was going to depend on 10 states, and that we wouldn't know how those states would vote until after the election. In fact, assigning fake numbers and selling substack subscriptions for this completely obvious observation is the opposite of epistemic humility.
Also, it wasn't clear to me from the Newsweek article you linked whether the author had read that off Silver's data or Silver had written it in a paywalled newsletter. That difference matters. The piece I saw from Silver was gloating about the mode after the election had happened.
Even if we grant the prospective guessing, then we get into this business of what these probabilistic predictions mean. We don't know what his code is doing. So what are we supposed to do when he gives us an empirical distribution of EC votes from 10,000 simulations in his closed-source Stata code? What does it mean that in this histogram one outcome has probability 25% and another 16%? If the event which was assigned 16% probability happened, then what? You could say "events with 1 in 6 probability happen all the time." This infinite plausible deniability wrapped up in three digits of precision is astrology for nerdy men.
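Presumably the machinery is generically something like the sketch below (his model isn't public, so these error numbers are pure assumptions on my part): simulate correlated polling errors across the seven swing states and histogram the outcomes. The point is that which bucket gets 25% and which gets 16% is driven almost entirely by error parameters you assume and cannot check.

```python
# Generic (not Silver's) swing-state simulation with a shared bias term.
# All standard deviations are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_states = 10_000, 7
poll_margin = np.zeros(n_states)              # pretend all seven states poll dead even
shared_bias_sd, state_noise_sd = 0.02, 0.01   # assumed error components

shared = rng.normal(0, shared_bias_sd, size=(n_sims, 1))        # bias common to all states
idio = rng.normal(0, state_noise_sd, size=(n_sims, n_states))   # state-specific noise
simulated_margins = poll_margin + shared + idio

states_won = (simulated_margins > 0).sum(axis=1)
histogram = np.bincount(states_won, minlength=n_states + 1) / n_sims
for k, p in enumerate(histogram):
    print(f"win {k} of 7 swing states: {p:.2f}")
# With these made-up numbers, the 0-of-7 and 7-of-7 sweeps each get roughly a
# quarter of the mass and the middle buckets far less. Change the assumed
# error SDs and those headline percentages move around.
```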
Thanks. I don't personally care if you're nice to Silver or not.
I didn't subscribe to Silver's paid substack, but I read his free content and my recollection was that his claim that Trump winning all seven states was the single most likely outcome was in the free content. (I googled for it and the newsweek article came up first and I was too lazy to google any further.)
I'm less opinionated than you on selling substack subscriptions. I think we could have easily lived in a world where the trends were a little clearer and this kind of work could have added value.
But iterating using my mental model of you, perhaps you'd say that the set of worlds where that's useful is either a very small set or an empty set, because if one party is going to win by a lot, any dumb simple easy polling method will tell you that and you don't need this, and if there's a gap smaller than some threshold, this stuff isn't going to help anyways (as it didn't this time around). This kind of approach is only "useful" in some narrow world where both (a) there is a useful but small amount of "signal", and (b) the methods have some way of amplifying the signal. I'd guess we agree that (a) is possible in principle but probably unlikely, and for (b) I'd say "I dunno maybe I *want* to believe" and you'd say "Come on man!" If this is your argument I guess I'm actually pretty sympathetic to it.
I still don't care that much about substack subscriptions, I guess nerdy men need their astrology too.
Annoying that the NYT article implies that the point of election forecasting is to get a probability of winning: "forecasts go a step further, analyzing the polling and other data to make a prediction about who is most likely to win, and how likely."
I doubt many forecasters would agree that this is their goal. It's more about predicting vote margins, trying to get insight into how close things might be across different locations. The forecasters I know are hesitant to even provide probability-of-win information because it's so easy for people to lose sight of the uncertainty. E.g., a vote share prediction that's off by half a percentage point can change the probability of winning by six percentage points, which seems like a lot! But half a percentage point would not at all be surprising given what we know about sources of uncertainty. (Example from here: http://www.stat.columbia.edu/~gelman/research/published/jdm200907b.pdf)
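A back-of-the-envelope version of that sensitivity, assuming a normal predictive distribution for two-party vote share with about a 3-point standard deviation (my assumption, not necessarily the paper's exact numbers):

```python
# How much a half-point shift in predicted vote share moves the win probability,
# under an assumed normal predictive distribution (SD of 3 percentage points).
from scipy.stats import norm

sd = 3.0  # assumed predictive SD on two-party vote share, in points

p_even = norm.cdf((50.0 - 50.0) / sd)   # P(share > 50) when the forecast mean is 50.0
p_shift = norm.cdf((50.5 - 50.0) / sd)  # P(share > 50) when the forecast mean is 50.5
print(f"P(win): {p_even:.2f} -> {p_shift:.2f}, a swing of {100 * (p_shift - p_even):.1f} points")
# A half-point nudge on the vote-share scale, well within typical polling error,
# moves the win probability by around six or seven points on the scale people fixate on.
```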
I find election forecasts interesting because the demand is so out of whack with what they could actually provide. I think it is possible to learn a lot from them if you take them less as oracles and more as tools for thinking. They offer a way to explore different possible outcomes under different assumptions about sources of error, giving some people a way to engage more than they would with politics. But, that's just not what most people want from them, and I doubt it ever will be. So I agree they are kind of doomed to failure, but it's not necessarily because they have no potential to be useful.
The forecasters want it both ways. They want to be seen as dispassionate political data gurus and also want to make money by fueling interest in daily horse race reporting. All of the big dog forecasters (Silver, 538, etc.) put out win probabilities. They all know they can make money off turning politics into yet-another-avenue for gambling. But when they profit, the rest of us lose.
Fair enough, many do seem to give in to the pressure. I've seen small improvements to how the top-line displays express uncertainty over the last few elections (e.g., switching to frequency framing) but the fact that most still give some notion of predicted winner at the top is unfortunate.
i genuinely do not understand what it would mean for them to be "correct" on this issue. if you have a forecast model, what is the information that it provides that is *not* a prediction and exceeds what post-election analysis of actual voting provides us with? why do this? the uncertainty in these cases is mathematical but does not correspond ipso facto to real possible outcomes, which would only be theoretically possible if we believe that these models robustly capture something lawful about voter behavior that we can then analyze out of them. that idea strikes me as almost trivially absurd - do forecasters actually believe that?
Not sure I understand. Correct on a probability? You can't evaluate a single probability.
What you can look at are things like how predictable vote share tends to be from various sources of information available before an election. And there, at least for much of US history, it has been possible to predict outcomes within some seemingly reasonable margin (a few percentage points). See the link I shared above for more on what goes into these forecasts and what indicators have historically been predictive.
My understanding is that recent elections have been particularly hard to predict because they are so close. Maybe we are past the era where we can hope to get within a few percentage points on vote share. But concluding that they are pointless because of the difficulty of predicting one binary outcome every four years mischaracterizes the available information. More on all this here: https://statmodeling.stat.columbia.edu/2024/08/30/why-are-we-making-probabilistic-election-forecasts-and-why-dont-we-put-much-total-effort-into-them/
i guess i should say that i'm pretty skeptical that all the scientific-sounding, epistemologically virtuous stuff that people say about these systems is really related to their social uses. reading that post reinforces that for me; the technical aspects seem hopelessly far away from basic facts about capitalism, history, and collective neurosis that drive the culture, to my mind. i don't mean to shut down discussion of the technical points, but i find often that we sort of say "ah, but what if they were implemented in a better system/way," and i don't really see what the point of that hypothetical is
sorry, i meant what would the correct behavior of these forecasters look like? i understand what their overall accuracy has been, but what information does one get from doing this that one couldn't instead get from analyzing electoral data after elections? my point is, if it's not a prediction, why do it at all?
Fundamentals are a component of presidential election forecasts. I would suggest looking into what information goes into such a forecast if you are not familiar with them.
i understand what goes in, i'm not convinced that any added value comes out
Btw, I'm not necessarily defending election forecasts as currently promoted. Given that the demand is mostly about who will win, I don't think they are very helpful. However, there's a difference between acknowledging that the demand is badly aligned with what info they can provide (e.g., relative uncertainty) versus concluding that nothing about presidential elections is predictable, which is sometimes what critics seem to want to imply.
i don't know if i speak for Ben on this one, but i'm not sure i think it matters if something is predictable, so much as whether some aspect of the act of predicting it produces knowledge we otherwise would not have. i guess again, i would say that the phrase "nothing about presidential elections is predictable" either means something technical (in which case i just want to know how that allegedly translates into qualitative, useful knowledge) or it means something colloquial, which i guess you don't mean but which is pretty clearly the reason they're so influential, as i think we agree. if you took away all the natural-language terms that make people think that they're getting that instead of the technical stuff, would anyone commission them?
One other comment - the description of poststratification is a bit overly caricatured for me. Adjustments are typically done at a finer level than party affiliation, using what is known about the population in the place where the poll was done. Describing it as reweighting to 50/50 by party makes it sound very naive. But polling without poststratification would be naive.
You and I disagree about this more broadly, though I know I'm the outlier amongst folks who have written statistics papers. Regression correction is always putting a statistical thumb on the scale and is never justified. If you can't randomly sample, you shouldn't survey.
I think you are on the right track here but I would slightly reframe it. If you can't sample randomly, you shouldn't model. But you definitely should survey! It just needs to be accepted as the qualitative data that it is.
I agree with this. The main pushback I get when I write about polling is that "pollsters and forecasters always emphasize that the numbers are uncertain." But this framing as "uncertainty quantification" still implies something quantitative under the hood. Your characterization suggests there needs to be "uncertainty qualification" instead. This would certainly be more honest.
i feel like this would amount to extremely extensive and poorly executed anthropology. i don't think anyone collects data so that people can gaze meaningfully at the individual data points (or small clusters). the implicit theories of mind and conversation that surveys contain are also deeply suspect imo
Why couldn't Joe's suggestion be "let's do well executed anthropology?"
I agree that corrections involve hard-to-quantify uncertainty, like any modeling choice, but it's hard for me to view poststratification as part of the problem. The alternative would seem to be not attempting any inference in many settings where we currently rely on estimates to inform policy decisions. True random sampling is very rare except in the most artificial settings.
If by "inference" you mean the technical definition used by statisticians, then I agree we should never attempt this if we didn't randomized.
If by "inference" you mean the natural language definition, of considering evidence, suggesting explanations, and deliberating, then I don't see why numerical post-stratification is a precondition.
love it -- and I think the logic is exactly the same to explain the "vibecession," the divergence between subjective economic perceptions and "objective" economic indicators.
We've had decades of treating the indicators as *the economy itself*. Politicians and the media have reified this. Any metric that's optimized for, obviously, becomes a bad metric.
And the other mechanism, which applies to political polling as well, is that there has been...quite a bit of technological development for communicating subjective beliefs. I don't know the relative importance of these two mechanisms, but it does seem useful to combine the narratives about the decline of the value of 20th-century "objective" economic and political indicators.
a useful follow-up: https://statmodeling.stat.columbia.edu/2025/03/05/no-an-election-forecast-thats-50-50-is-not-giving-up-no-the-election-forecasters-in-2024-did-not-say-whatever-happened-it-was-supposed-to-be-razor-thin/
“Whatever happened, it was supposed to be razor thin.” That is DEFINITELY not what they were saying; 538 had a whole article about why it probably wouldn’t be a razor thin margin.
Apparently internal polling was much more accurate than whatever polling method Nate and the NYTimes used. In fact, the Democratic Michigan senator (who just won Michigan) warned Harris that she was underwater and in danger of losing the state based on internal polls. I imagine their polling has to involve some sort of house-to-house canvassing, which I presume is much more accurate than telephone polling. They must have had some internal data on what the grievances against Harris were, but the message to address those grievances must not have registered with either Harris or the voters (or both), since the Democratic senator won in the same statewide election that Harris lost.
This just leads me to the next question. Is the Democratic party too technocratic and numbers-driven? Or is it just that this particular Democratic presidential candidate ran a particularly tone-deaf campaign that caused her to lose the working-class vote? The US Senate ended up being less of a blowout against the Democrats than would be expected given how badly Harris lost, and the House is shaping up to have a Republican majority by very thin margins.