Hacker News

> there is no correlation between when a Dem condemned violent protest and whether they won their election

You got a source on that? It's clear that Democrats lost ground, despite the Republican party's abject failure to control the coronavirus. This should have been the easiest win in world history, and somehow the Democrats barely escaped with the presidency. There is pretty clearly something to explain here.



Every single demographic shifted further red than in 2016 except white men.

There’s a reason for that.


You’ve got data to back that up?



This is based on exit polls.

Exit polls only interview people who vote in person, which may generally be useful, but in this election that is an enormously, systematically biased slice of the electorate.

You really can't base any generalizations beyond the in-person voting segment on them.


At least with the NYT exit polls, they included phone interviews to account for mail in voters. Here is a more systematic post-election analysis, based on a huge survey (100,000+ respondents), from Fox/Associated Press/U Chicago: https://www.foxnews.com/elections/2020/general-results/voter...

This one shows significantly lower Black vote for Trump than the Essence article (12% of men, 6% of women) but an even higher Latino vote (39% of men, 32% of women). Both sets of data show Trump losing support from white men slightly. (Which is consistent with pre-election polling.)


> This is based on exit polls.

Are there some other measurements based on something else, or is it something we now can never know as a result of mail in ballots having no polling and an expected different statistical distribution?


Exits aren't great in the best circumstances, but early unweighted exits are widely believed to be almost completely useless.


I suggest looking at states like Oregon, Washington, Colorado, Utah, etc. that only do mail in voting and see how they measure what used to be measured by exit polls.


> Are there some other measurements based on something else

For 2016 and 2018, Pew had validated voter datasets, based on people in their American Trends Panel who were matched against voter databases and thus known to have voted. Assuming they do the same thing (or some other reputable firm does something similar, or ideally both) for 2020, that would probably be the best data source.


The votes haven't even been fully counted (NY has barely started with their mail-ins), so the exits aren't weighted yet. There are, so far as I know, no valid takes to be made from current exit data.


I posted this elsewhere in the thread but even without exit polls we can make some reasonably solid conclusions from geographical voting data: https://sweep.thedispatch.com/p/the-sweep-dont-trust-the-exi...

At this point it seems pretty clear that Trump made big gains among Latinos compared to 2016; with other demographics the jury is still out (I think.)


You don't need exits to know that; there were sharp swings towards Trump in majority-Latino south Texas counties.


Um... that's exactly what the post you're replying to says?


It's what the exit polls say, although the main lesson we've learned this year is not to trust polls.

What's clearer is that Trump made big gains in areas of the country with high Latino and Hispanic populations - e.g. Santa Ana, CA, a city that is 77% Hispanic, saw a 13-point swing to Trump, and there are many similar data points from across the country. It's pretty clear Trump carried Florida on the strength of the Latino vote in that state. There's also similar evidence that Trump made big gains this year among Asians.

https://twitter.com/RyanGirdusky/status/1326190869163225090

https://sweep.thedispatch.com/p/the-sweep-dont-trust-the-exi...

So to summarise: Trump definitely made gains among Hispanics and probably made gains among Asians according to voting data; he made gains with every demographic except white men according to exit polls; the former data is more reliable than the latter.


Preelection polling was also showing Trump gains among Asians: https://fivethirtyeight.com/features/how-asian-americans-are...


I'm aware of that. But I don't see what that has to do with my point.


I was more reinforcing your point to the person you responded to.


306 electoral votes is not "barely escaping"; that's the exact same margin as Trump's in 2016, minus two faithless electors. That's pretty much the definition of a rebuke.


Biden won by razor-thin margins in the swing states, underperformed his polls by 5 to 10 points across the country, and got crushed in several states which pollsters had said were competitive (e.g. Texas). Meanwhile the Democrats lost seats in the House and (probably) failed to take the Senate despite favourable polls and optimistic predictions of a "blue wave".

Happiness = Reality - Expectations. Expectations were very high and the Democrats have a lot of reasons to be unhappy.


> Biden [...] underperformed his polls by 5 to 10 points across the country

While the votes are still being counted and this may change a little bit, he underperformed the last 538 forecast by 2.7 points at the current vote count (not 5 to 10 points), just on the edge of the 80% confidence interval (noted because that's what 538 publishes as its uncertainty measure). Note that in social science, the standard cited uncertainty window is the 95% confidence interval, which would be significantly wider.
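To make the 80% vs. 95% point concrete, here's a minimal sketch (hypothetical sample size and vote share, simple normal approximation for a single poll proportion) of how much wider a 95% interval is than an 80% one:

```python
import math
from statistics import NormalDist

n = 1000   # hypothetical poll sample size
p = 0.52   # hypothetical observed vote share

# Standard error of a sample proportion under the normal approximation.
se = math.sqrt(p * (1 - p) / n)

for level in (0.80, 0.95):
    # Two-sided critical value: e.g. 1.28 for 80%, 1.96 for 95%.
    z = NormalDist().inv_cdf((1 + level) / 2)
    print(f"{level:.0%} CI: {p:.3f} +/- {z * se:.3f}")
```

For these numbers the 95% half-width comes out roughly 1.5 times the 80% half-width, which is why a result "just outside the 80% interval" is a much weaker claim of model failure than it might sound.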

Now, you might point out that the results were outside of the 95% CI of the polls, which is true, but polls don't predict voting behavior; they measure sentiment. The cited confidence interval of a poll addresses only sampling error, but even if a poll has no nonsampling error as a sentiment measure, it has additional known sources of error as a measure of voting behavior beyond its sampling error. Notably, polls typically sample either registered voters (a very different population from "people who will vote") or likely voters (still a different population from people who actually will vote, based on the pollster's model of who will probably vote). The reason poll-based forecasts like 538 exist is that polls, while useful inputs for predictions, are not themselves predictions.

On the Senate and House, the results are similarly non-extreme outliers compared to the forecasts. The 80% CI for the Senate forecast ranged from 55 D to 52 R. The potential outcomes have now narrowed to the range of 50/50 to 52 R. The 80% CI for the House forecast was 225 D to 254 D. Current results are 218 D with 16 seats uncalled. It's expected that deviations from the center of the predicted range on these will be correlated, and it looks like they are all hitting at or near the edge of the 80% CI.

Yes, Democrats underperformed the midpoint of the predicted range. But not by enough (and it didn't happen at all in the 2018 midterm) to think that the models are radically wrong.


D'oh. My "5 to 10 points" was off the top of my head based on some numbers I saw being thrown around last week but I didn't bother to double-check. Thanks for keeping me factual.

I wouldn't say "the models are radically wrong" but I think it's still fair to say that last week was disappointing in several ways for the Democratic Party, even with Trump losing.


> the Democrats barely escaped with the presidency

not yet



