> Biden [...] underperformed his polls by 5 to 10 points across the country
While the votes are still being counted and this may change a little, he underperformed the last 538 forecast by 2.7 points at the current vote count (not 5 to 10 points), just on the edge of the 80% confidence interval (noted because that's the uncertainty measure 538 publishes). Note that in social science the standard cited uncertainty window is the 95% confidence interval, which would be significantly wider.
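To put a rough number on "significantly wider": assuming the forecast error is approximately normally distributed (an assumption, not something 538 states), the ratio of the two-sided 95% and 80% critical values tells you how much the interval stretches. A quick stdlib sketch:

```python
from statistics import NormalDist

# Two-sided 80% interval uses the 90th percentile; 95% uses the 97.5th.
z80 = NormalDist().inv_cdf(0.90)   # ~1.28
z95 = NormalDist().inv_cdf(0.975)  # ~1.96

# Under normality, a 95% CI is about 1.53x as wide as an 80% CI.
print(round(z95 / z80, 2))  # 1.53
```

So a result "just on the edge" of the 80% interval sits comfortably inside a 95% interval of the same forecast.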
Now, you might point out that the results were outside the 95% CI of the polls, which is true, but polls don't predict voting behavior; they measure sentiment. The cited confidence interval of a poll addresses only sampling error, and even if a poll has no nonsampling error as a sentiment measure, it has additional known sources of error as a measure of voting behavior beyond its sampling error. Notably, polls typically sample either registered voters (a very different population from "people who will vote") or likely voters (still a different population from the people who actually do vote, selected via the pollster's model of who will probably vote). The reason poll-based forecasts like 538 exist is that polls, while useful inputs for predictions, are not themselves predictions.
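For concreteness, the "sampling error only" part of a poll's stated margin is just the standard formula for a proportion under simple random sampling; everything else (frame error, turnout modeling, nonresponse) is on top of this. A sketch, with hypothetical numbers:

```python
import math
from statistics import NormalDist

def sampling_moe(p: float, n: int, conf: float = 0.95) -> float:
    """Margin of error from sampling alone, assuming simple random sampling.

    p: observed proportion; n: sample size; conf: confidence level.
    """
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 52% support among n = 1000 respondents.
print(round(100 * sampling_moe(0.52, 1000), 1))  # 3.1 (points)
```

That ~3-point figure is what pollsters report; it says nothing about whether the sampled population matches the people who actually vote.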
On the Senate and House, the results are similarly non-extreme outliers compared to the forecasts. The 80% CI for the Senate forecast ranged from 55 D to 52 R. The potential outcomes have now narrowed to the range of 50/50 to 52 R. The 80% CI for the House forecast was 225 D to 254 D. Current results are 218 D with 16 seats uncalled. It's expected that deviations from the center of the predicted range on these will be correlated, and it looks like they are all hitting at or near the edge of the 80% CI.
Yes, Democrats underperformed the midpoint of the predicted range. But not by enough (and it didn't happen at all in the 2018 midterms) to conclude that the models are radically wrong.
D'oh. My "5 to 10 points" was off the top of my head based on some numbers I saw being thrown around last week but I didn't bother to double-check. Thanks for keeping me factual.
I wouldn't say "the models are radically wrong" but I think it's still fair to say that last week was disappointing in several ways for the Democratic Party, even with Trump losing.