Sadly typical of HN today that a topic that is interesting technically and politically neutral (the challenge of accurate polling) is derailed up top with an inaccurate culture war comment (there is no correlation between when a Dem condemned violent protest and whether they won their election).
FWIW, Shor’s theory about poll respondents is shared by Robert Cahaly at Trafalgar, but Trafalgar’s polls also missed badly in 2020, just in the opposite direction. So either that’s not actually the critical factor, or it’s not possible to control for this factor in any reliable way. I find that far more interesting than Shor’s personal HR problems.
>there is no correlation between when a Dem condemned violent protest and whether they won their election
Even assuming this unsupported claim is true, it's not necessarily relevant, because correlation isn't the same as causation. For example, it could be that Dems are pretty good at reading their local electorate, and left-leaning areas are more tolerant of violent protest in the name of left causes. So Dems in lefty areas did not condemn violence, whereas Dems in swing areas did. If this were true, you would expect Dems who condemned violence to be more likely to lose, simply because they were running more competitive races.
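If you want to see that selection effect in numbers, here's a toy simulation (all parameters invented, not fit to any real data) where condemning violence has zero causal effect on winning, yet still correlates with losing, because only swing- and red-district Dems condemn:

    import random

    random.seed(0)
    races = []
    for _ in range(10_000):
        lean = random.gauss(0, 10)        # district partisan lean, D+ is positive
        condemns = lean < 5               # hypothetical: only swing/red-district Dems condemn
        wins = random.gauss(lean, 5) > 0  # outcome driven by lean plus noise; condemning has no effect
        races.append((condemns, wins))

    for flag in (True, False):
        results = [w for c, w in races if c == flag]
        print(f"condemned={flag}: win rate {sum(results) / len(results):.0%}")

The condemners lose far more often even though condemning does nothing in the model; the confounder (district lean) does all the work.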
BTW, I think people are unfortunately quite tribal nowadays, and tend to assume that extremists have control of the other side even if that's not the case. And disavowing extremists is only partially effective at changing that perception. Think about it--how much have Trump's many disavowals of the far right shifted your perception that he's in bed with them? https://www.youtube.com/watch?v=Bd0cMmBvqWc Probably not much.
>So either that’s not actually the critical factor, or it’s not possible to control for this factor in any sort of reliable way.
One failed attempt doesn't mean that something can't be done.
> there is no correlation between when a Dem condemned violent protest and whether they won their election
You got a source on that? It's clear that democrats lost ground, despite the Republican party's abject failure to control the coronavirus. This should have been the easiest win in world history, and somehow, the Democrats barely escaped with the presidency. There is pretty clearly something to explain here.
Exit polls only interview people who vote in person, which may generally be useful, but in this election in-person voters are an enormously, systematically biased slice of the electorate.
You really can't draw any generalizations beyond the in-person voting segment from them.
At least with the NYT exit polls, they included phone interviews to account for mail in voters. Here is a more systematic post-election analysis, based on a huge survey (100,000+ respondents), from Fox/Associated Press/U Chicago: https://www.foxnews.com/elections/2020/general-results/voter...
This one shows significantly lower Black vote for Trump than the Essence article (12% of men, 6% of women) but an even higher Latino vote (39% of men, 32% of women). Both sets of data show Trump losing support from white men slightly. (Which is consistent with pre-election polling.)
Are there some other measurements based on something else, or is this something we can now never know, given that mail-in ballots have no exit polling and presumably a different statistical distribution?
I suggest looking at states like Oregon, Washington, Colorado, Utah, etc. that only do mail-in voting, and seeing how they measure what used to be measured by exit polls.
> Are there some other measurements based on something else
For 2016 and 2018, Pew had validated voter datasets, based on people in their American Trends Panel who were matched against voter databases and thus known to have voted. Assuming they do the same thing (or some other reputable firm does something similar, or ideally both) for 2020, that would probably be the best data source.
The votes haven't even been fully counted (NY has barely started with their mail-ins), so the exits aren't weighted yet. There are, so far as I know, no valid takes to be made from current exit data.
It's what the exit polls say, although the main lesson we've learned this year is not to trust polls.
What's clearer is that Trump made big gains in areas of the country with a high Latino and Hispanic population - e.g. Santa Ana, CA, a city which is 77% Hispanic, saw a 13-point swing to Trump, and there are many similar datapoints from across the country. It's pretty clear Trump carried Florida on the strength of the Latino vote in that state. There's also similar evidence that Trump made big gains this year among Asians.
So to summarise: Trump definitely made gains among Hispanics and probably made gains among Asians according to voting data; he made gains with every demographic except white men according to exit polls; the former data is more reliable than the latter.
306 electoral votes is not "barely getting by"; that's the exact same margin Trump won by in 2016 (before two faithless electors defected). That's pretty much the definition of a rebuke.
Biden won by razor-thin margins in the swing states, underperformed his polls by 5 to 10 points across the country, and got crushed in several states which pollsters had said were competitive (e.g. Texas). Meanwhile the Democrats lost seats in the House and (probably) failed to take the Senate despite favourable polls and optimistic predictions of a "blue wave".
Happiness = Reality - Expectations. Expectations were very high and the Democrats have a lot of reasons to be unhappy.
> Biden [...] underperformed his polls by 5 to 10 points across the country
While the votes are still being counted and this may change a little bit, he underperformed the last 538 forecast by 2.7 points at the current vote count (not 5 to 10 points), just on the edge of the 80% confidence interval (noted because that's what 538 publishes as its uncertainty measure). Note that in social science, the standard cited uncertainty window is the 95% confidence interval, which would be significantly wider.
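For a sense of how much wider, a quick normal-approximation sketch (assuming the forecast error is roughly Gaussian, which 538's actual error distribution isn't exactly):

    from statistics import NormalDist

    z80 = NormalDist().inv_cdf(0.90)   # two-sided 80% interval uses the 90th percentile, ~1.28
    z95 = NormalDist().inv_cdf(0.975)  # two-sided 95% interval uses the 97.5th percentile, ~1.96
    print(z95 / z80)                   # ~1.53: the 95% interval is roughly 50% wider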
Now, you might point out that the results were outside of the 95% CI of the polls, which is true, but polls don't predict voting behavior, they measure sentiment. The cited confidence interval of a poll addresses only sampling error, but even if a poll has no nonsampling error as a sentiment measure, it has additional known sources of error as a measure of voting behavior beyond its sampling error. Notably, polls typically survey either registered voters (a very different population from "people who will vote") or likely voters (still a different population from the people who actually will vote, selected by the pollster's model of who will probably vote). The reason poll-based forecasts like 538 exist is that polls, while useful inputs for predictions, are not themselves predictions.
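As a minimal illustration of why the respondent population and the voting population diverge, here's a sketch of turnout weighting with invented numbers (real likely-voter models are far more involved):

    # Each tuple: (supports candidate A, estimated probability of voting)
    respondents = [(True, 0.9), (False, 0.4), (True, 0.6), (False, 0.95)]

    raw = sum(a for a, _ in respondents) / len(respondents)
    weighted = sum(p for a, p in respondents if a) / sum(p for _, p in respondents)
    print(f"raw support: {raw:.0%}, turnout-weighted: {weighted:.0%}")

Same respondents, different answer, and the turnout weights are themselves just another model that can be wrong.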
On the Senate and House, the results are similarly non-extreme outliers compared to the forecasts. The 80% CI for the Senate forecast ranged from 55 D to 52 R. The potential outcomes have now narrowed to the range of 50/50 to 52 R. The 80% CI for the House forecast was 225 D to 254 D. Current results are 218 D with 16 seats uncalled. It's expected that deviations from the center of the predicted range on these will be correlated, and it looks like they are all hitting at or near the edge of the 80% CI.
Yes, Democrats underperformed the midpoint of the predicted range. But not by enough (and it didn't happen at all in the 2018 midterm) to think that the models are radically wrong.
D'oh. My "5 to 10 points" was off the top of my head based on some numbers I saw being thrown around last week but I didn't bother to double-check. Thanks for keeping me factual.
I wouldn't say "the models are radically wrong" but I think it's still fair to say that last week was disappointing in several ways for the Democratic Party, even with Trump losing.
Neither that tweet, nor the paper it referred to, attempted to correlate the exact date at which each Democratic candidate condemned violence with their respective election results.
That is such an impossible ask. We can only look at these things in aggregate. Contrary to popular belief, a whole country doesn't turn on a dime. Also, even if that is your standard, this finding is still relevant.
I think you missed the point: if people feel like their opinions will be held against them, they will not share their true opinions, and polling will fail.
The comment seems on-topic to me regardless of whether the culture war represents reality or not.
These HR problems have a negative influence on the quality of the work.
To me it looks like "killing the messenger". Listeners want the most accurate data, irrespective of whether some unknowns can be controlled for or not.
Furthermore, to respond to the culture war topic that you brought up: I think Shor's theory fits my personal experience that some people have stopped stating their political opinions in the open. So it is on topic.
That may very well be due to "HR" being a problem, or to those who play HR on public platforms. The explanation might be correct or wrong; it might resonate with people or not.
Vox euphemized Shor’s departure in a way that was highly relevant to the subject of the article. The issue isn’t that pollsters couldn’t quantify the problem, it’s that they didn’t even realize the problem existed: https://www.washingtonpost.com/outlook/2020/11/02/shy-trump-...
> Trump voters are not liars. As a Democratic pollster and campaign strategist, I feel strange typing these words. But it is required to debunk the “shy Trump voter” myth.
This was written days before the election.
My post isn’t about “Shor’s personal HR problem.” It’s about a culture that caused very smart people to misperceive the facts on the ground, and misallocate funding as a result, in the middle of a high stakes situation.
People in Silicon Valley should understand better than anyone that culture matters and bears on organizational and institutional effectiveness. Often these are pretty theoretical and abstract debates. But here is a case study that elucidates culture that works versus culture that doesn’t work.
I’m not weighing in on the larger cultural issue. As I caveated, this is context sensitive. Political polling, news, academia, and the like are areas where free discourse with respect to sensitive issues is particularly important for the effectiveness of the institution. This is less true if you’re running a food delivery business.
I have a hard time believing your musings on culture are relevant to the topic of accurate polling when GOP-aligned pollsters, who firmly believed that Trump voters are less likely to answer polls and built that into their models, also missed badly.
What? Trafalgar was one of the only pollsters calling a close race. In Trump's favour, but close nonetheless.
IIRC they were the only ones calling Florida going red. You’re being intentionally disingenuous saying they were equally as bad as the rest of the field predicting a blue wave.
Trafalgar almost certainly got more states wrong (counts aren't all finished, but we do know the likely winners in all states). Other sites like 538 predicted all states correctly except NC and FL (50.5%/48.8% and 50.9%/48.4% respectively, both predictions wrongly favoring Biden [1]). Meanwhile, Trafalgar's polls put Trump up in NV, GA, PA, AZ, and MI (from their website, all late Oct / early Nov polls [2]). Once counts are final, we can compare margins and errors more precisely, but Trafalgar does not have a good record for 2020. Comparing models to a single pollster isn't completely even, but it's important to note that the Trafalgar polls were mostly outliers, disagreeing with the consensus in many states [3].
Number of states that you get wrong isn't a great measure though, right? If I call it for Trump at 51-49 and you call it for Biden at 75-25, and Biden wins 51-49, I was more correct than you were. Counting "states called" is the wrong way to evaluate analysis, in the same way that FPTP is unrepresentative of the popular vote, ironically.
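To make that concrete, a toy comparison of the two scoring rules with made-up numbers:

    # (predicted margin, actual margin); positive = Biden, all numbers invented
    calls = {
        "cautious_pollster": (-2, 2),  # called Trump +2, actual Biden +2: wrong call, error 4
        "blowout_pollster": (50, 2),   # called Biden +50, actual Biden +2: right call, error 48
    }
    for name, (pred, actual) in calls.items():
        right_call = (pred > 0) == (actual > 0)
        print(f"{name}: correct call={right_call}, margin error={abs(pred - actual)}")

By "states called" the blowout pollster wins; by margin error it is far worse.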
I agree with you, and that's why I made an effort to differentiate between margins vs. winners. In my opinion, 2020's polls were a lot worse than 2016's, but because of Biden's much bigger lead (in the polls and in key states), the polling errors did not cross the threshold to be "wrong" in the winner-take-all system. Finally, there is an argument that even a Biden 75-25 call gives the correct outcome, because each state is a winner-take-all system (with the exceptions of NE and ME, iirc). From a stats standpoint, the delta in that example makes it very inaccurate, but from a model/communication standpoint, getting each state "right" (in binary terms) is very valuable.
Trafalgar had completely different results than the rest of the consensus, and once counts are more finalized in non-swing states (places like CA and NY take forever, but we don't pay attention because they are always blue), we can accurately compare the polling and real results between different pollsters and polling methodologies. Some experts have cautioned against using exit polls for this purpose (what is usually done for a quick read of polling accuracy), because exit polls only measure election-day in-person votes, and thus trend really red this year. Trafalgar had unconventional methodologies like a "shy Trump voter" bias, where they arbitrarily shifted their numbers to the right, with weirdly consistent results of Trump +3 in a bunch of close/blue states. Perhaps there is a more complicated justification of these numbers in the background, but I'm concerned that even if their deltas end up better than the "normal" pollsters', they are generating an inferior product with overbaked data.
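If the adjustment really were just a constant rightward shift (which is this commenter's characterization, not Trafalgar's documented methodology), it would amount to something as crude as:

    def adjust_margin(biden_minus_trump: float, shy_voter_offset: float = 3.0) -> float:
        """Hypothetical constant-offset adjustment: subtract a fixed number of
        points from the Democratic margin to account for assumed nonresponse bias."""
        return biden_minus_trump - shy_voter_offset

    print(adjust_margin(5.0))  # a raw Biden +5 poll becomes Biden +2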
They tried to come up with approaches to get around the very effect the article talks about: Trump voters don't answer survey questions. The main differences in methodology were making fewer assumptions about Republican turnout, larger sample sizes, and different survey techniques.
Trafalgar clearly missed some things: traditionally Republican collar counties breaking hard for Biden. (My county didn't vote for Obama either time, but voted for Biden by 12, after voting for the Republican governor a couple of years ago by 38.) But 538 had some insane misses this year in critical states like Wisconsin (off by 7.7), Ohio (off by 7.3), and Florida (off by 5.9). Finalizing counts in NY and CA isn't going to change those numbers--and Trafalgar wasn't analyzing them anyway.
Like I said in an earlier comment, comparisons between polls and a model aren't completely fair. Part of the model's calculation is that 8.8 points or whatever requires an enormous polling error to swing for Trump. I do agree that other polls should get criticism and improve their methodologies to avoid the ~5-point margin in FL, or the 7.7 margin you listed. And I even think that they may not get as much criticism as their errors warrant, because many of these states were off by 6+ points but still went for Biden. Still, Trafalgar has a unique methodology that should be understood better before they are extolled as the "best pollster". And Trafalgar is not immune to similar polling errors, just in the other direction, and with a wrong result. The delta may be more important from a statistical methodology standpoint, but these polls are measuring winner-take-all states, and "The Trafalgar Group’s Robert Cahaly is an outlier among pollsters in that he thinks President Trump will carry Michigan, Pennsylvania, or both, and hence be reelected with roughly 280 electoral votes" is a pretty poor prediction based on their data. Does this mean that their data is necessarily poor? No, but it isn't a good sign.
We can't measure the deltas with as much confidence, because counting is far from finalized in many states (especially non-swing states). But if Trafalgar is the only group putting out polls where Trump is ahead in several states, and then those states are lost, isn't that meaningful? Trafalgar was putting a "shy Trump voter" bias into their polls, basically shoving the polls arbitrarily right. Some of that may have helped with the polling errors that hurt other pollsters, but it's clear they went too far, especially with many midwest states that ended up going blue.
Trafalgar got the margin much better than 538 in key states. Your list excludes several states: Ohio and Texas, which 538 thought would be close but weren't, and Wisconsin, which 538 thought would be a blowout but ended up close. Putting those states in, you get Trafalgar erring in favor of Trump by 1.76 points, while 538 erred in favor of Biden by 4.37 points: https://docs.google.com/spreadsheets/d/e/2PACX-1vSulsnjJ96c2...
The final count will change these numbers a little bit, but I think Trafalgar will have gotten closer to the final result in these key states by 2.5 points on average.
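For anyone who wants to redo this once counts are final, the spreadsheet's comparison boils down to a mean signed error per pollster. A sketch with placeholder margins (illustrative values, not final certified results):

    # {state: {source: Biden-minus-Trump margin}}; placeholder values for illustration
    states = {
        "WI": {"538": 8.3, "Trafalgar": -1.0, "actual": 0.6},
        "FL": {"538": 2.5, "Trafalgar": -2.0, "actual": -3.4},
    }
    for source in ("538", "Trafalgar"):
        errors = [s[source] - s["actual"] for s in states.values()]
        mean = sum(errors) / len(errors)
        print(f"{source}: mean signed error {mean:+.2f} (positive = erred toward Biden)")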
I'll be interested to see the final count and compare on all states, but Trafalgar's issues are deeper than just deltas, in my opinion. They had an arbitrary "shy Trump voter" factor, which may be justifiable, but any "shy voter" effect is tautologically immeasurable (and thus just a guess on Trafalgar's part). They had weirdly consistent results where Trump was up by 3 points in lots of purple/blue states, which doesn't line up with the differences we saw in other polls and in the results (so far). I agree that the polling errors are significant in many of the mainstream models, even in the states that were called correctly. These deltas so far are worse than 2016's, and should be looked into. But _if_ Trafalgar ends up with a lower delta over all 50 states when this is all finished, they still gave the wrong result in most of the key states that mattered for the election. That's significant, because the polls are measuring 50 winner-take-all systems, and 538 gave us a 90% chance of a Biden win, while Trafalgar told us that Trump was favored in almost every state that was worth watching this year. That was wrong, and Trafalgar deserves criticism for it.
They didn't have an "arbitrary shy Trump voter" factor. That's not how they did it. They observed that Trump voters were less likely to answer polls and tried to get at the data other ways.
Also, who cares about 50-state results? One of the key points Trafalgar's founder has made is that polling is a state-by-state exercise. All pollsters use data to estimate e.g. turnout among different demographics. National pollsters use things like exit polls, but Trafalgar digs into state-level voter registration data.
There has never been such a thing as a politically neutral HN. It's just that there wasn't until recently a userbase to counteract the liberal tech groupthink.
That's more recent, to be honest. When I started reading HN, it was very very libertarian (but mostly non-political).
As it's grown bigger, it's started to reflect the demographics of software/data/professional people, with a bias towards the West Coast of the US.
That's presumably what you mean by the liberal tech groupthink (note that disliking Trump does not make one a liberal, which is a mistake a lot of conservatives have made over the past four years).
How long ago was the dominant libertarian scene? I've been reading for about 7 years, and it's definitely been heavily liberal, with the occasional libertarian belief, even back then. Since 2016 it's been full-throated liberalism though.
2011 is when I started lurking here. I originally had disgruntledphd, but lost my password and there was no way to recover it (apparently you could contact Paul Graham, but I didn't know that), so I became disgruntledphd2.
Yeah, exactly. People who want to scream about how Shor got cancelled have zero interest in actually discussing anything Shor writes.
That said, and to be perfectly clear: Civis should not have fired Shor. It was very bad. (To be fair in the other direction: the guy is a known genius in leftie political circles and my understanding is that he's done very well for himself since termination as an independent consultant.) But really it's the only good example of the "Cancel Culture" paranoia, so right wingers feel the need to trot out the argument every time his name comes up.
Oh wow, I'm so happy I'm not the only one thinking this. I noticed this transition over the past few years on these forums and I even commented about it in the past.
It is the path of things on the internet that are interesting and then become popular. Same for Usenet :) Kuro5hin escaped by becoming uninteresting. Our descendants may find a solution.
Things change over time. Also note that HN used to be overrun by libertarians who would make Ayn Rand blush. Kuro5hin escaped by dying. There is lobsters, but it has almost zero discussion and focuses overly on "the rules".
I think norms are better and more powerful. Be the norm you would like to see in the world.
I should qualify my statement as I get voted down into oblivion. HN still has higher quality comments than I ever saw on Slashdot. However, those high-value comments usually come on particular subjects that lie within the HN community's zone of expertise. For political topics or science articles, HN comments usually lack expertise or neutrality. It all falls into whether HN thinks of itself more as a repository of expertise, or as a forum for individuals who have expertise in some topic (i.e. professionals, which by and large is what the community is composed of) to discuss anything. Please excuse any strange typos; I am voice typing this as I walk.