Hacker News

To try and explain Taleb's argument as simply as possible: If someone tells you today that an event is 100% certain, and tomorrow tells you that it is impossible, then you intuitively will not trust their forecasting. If it's certain, then it's certain not to be impossible tomorrow.

Now if someone tells you something is a 90% chance today, and a 10% chance tomorrow, again you don't trust their predictions.

There is a probability associated with "changes in probability". The probability of going from 100 to 0 should intuitively be 0%. The probability of going from 90 to 10 intuitively must be something small, like 20% (roughly: 80 of the 90 points have to "not happen", an outcome you yourself rated as unlikely).

A key point is that you can add this up every day, if something goes from 90->10->90 then it's even more unlikely than going 90->10.

So if you extend that intuition and put some real maths behind it, you can tell whether something is "likely to be a real probability" by the rate at which it changes. And if you have lots and lots of repeated predictions, you can be confident that something isn't a good probability (every individual timeseries has maybe a <10% chance of following exactly the path it did, so the combined probability that they're all honest probabilities is essentially 0).
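To make that intuition concrete, here's a toy sketch (my own construction, not Taleb's actual derivation): a coin has an unknown bias, and each day a Bayesian updates the probability that the bias is high. That forecast is calibrated by construction, so it behaves like a martingale (on average, tomorrow's number equals today's), and big 90-to-10 reversals are correspondingly rare. All the numbers here are made up for illustration:

```python
import random

# Toy setup (purely illustrative): a coin's bias is either 0.6 or 0.4
# with equal prior probability. Each day we see one flip and do a
# Bayesian update on P(bias = 0.6). The posterior is a calibrated
# forecast, so it is a martingale: its expected value never drifts.

def posterior_path(n_days=50, seed=None):
    rng = random.Random(seed)
    bias = 0.6 if rng.random() < 0.5 else 0.4
    p = 0.5  # prior P(bias = 0.6)
    path = [p]
    for _ in range(n_days):
        heads = rng.random() < bias
        like_hi = 0.6 if heads else 0.4   # likelihood under bias = 0.6
        like_lo = 1.0 - like_hi           # likelihood under bias = 0.4
        p = p * like_hi / (p * like_hi + (1 - p) * like_lo)
        path.append(p)
    return path

paths = [posterior_path(seed=i) for i in range(10000)]

# Martingale check: the average final forecast stays near the initial 0.5.
avg_final = sum(p[-1] for p in paths) / len(paths)
print(avg_final)

# A 90 -> 10 style reversal (at least 0.9 at some point, at most 0.1
# later) is rare for an honest forecast.
def big_swing(path):
    seen_high = False
    for p in path:
        seen_high = seen_high or p >= 0.9
        if seen_high and p <= 0.1:
            return True
    return False

print(sum(map(big_swing, paths)) / len(paths))
```

If a published forecast swings more violently than this kind of Bayesian path allows, that's evidence the numbers weren't honest probabilities in the first place, which is essentially Taleb's complaint.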

Now Nate Silver accepts this, but says that he's not actually putting a probability on the event in the future, but some non-existent "probability on that event in the future if the future was now". But that doesn't correspond to anything useful or measurable (it's untestable!!), and most people will assume it follows the normal meaning of probability, and it's honestly just silly.



I've always treated 538's forecasts as meaning nothing more or less than "We have a model, we fed the current data into it and ran N simulations of the event, and in X% of those simulations, this was the result". Which, as I understand it, is literally how they're generated.
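For what it's worth, the mechanics of that kind of simulation fit in a few lines. This is only a sketch of the general idea, not 538's actual model; the states, electoral votes, polled margins, and error sizes are all invented for illustration:

```python
import random

# Hypothetical mini-map: name -> (electoral votes, polled Dem margin in
# points). All numbers are made up.
STATES = {
    "A": (29, 2.0), "B": (20, -1.0), "C": (16, 0.5),
    "D": (38, -6.0), "E": (55, 12.0), "F": (10, -0.5),
}

def simulate(n=20000, national_sd=2.5, state_sd=3.0, seed=0):
    """Run n simulated elections; return the fraction the Dem wins."""
    rng = random.Random(seed)
    total_ev = sum(ev for ev, _ in STATES.values())
    wins = 0
    for _ in range(n):
        # One shared national error makes state outcomes correlated,
        # which is what lets "tiny variations" flip many states at once.
        national_err = rng.gauss(0, national_sd)
        ev = sum(e for e, margin in STATES.values()
                 if margin + national_err + rng.gauss(0, state_sd) > 0)
        if ev > total_ev / 2:
            wins += 1
    return wins / n

print(f"Dem win probability: {simulate():.0%}")
```

The headline number is then read exactly as described above: "in X% of the simulations, this candidate won."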

Any discussion of the accuracy of the model (which should be the only thing that admits debate) has to be retrospective. Just saying "well the forecasts changed" doesn't inherently discredit the model, especially because things like elections turn out to be highly sensitive to tiny variations (even something as simple as a rainy election day in a few key precincts can completely flip an outcome).


Yes but that's not how the proles understand the blog. They take it as "x has a y % chance of happening".

Which is Taleb's beef with it. The commoner thinks it's Silver putting his money on something, but when it doesn't happen he says "well I didn't say _that_".

And in this day and age of more robust machine learning models, the fact that the 538 models are so volatile is a pretty weak excuse.


Humans are volatile. Politics even more so.

Our options aren’t “high confidence converging predictions, or 538”.

The options in the current era for understanding of the electorate’s mood are “overly confident individual polls, poorly analyzed with completely inadequate models”, vs “538-style epistemically humble models accompanied by discussions of their confidence, which can be scored in aggregate after each election”.

I’ll take 538 any day.


> The options in the current era for understanding of the electorate’s mood are “overly confident individual polls, poorly analyzed with completely inadequate models”, vs “538-style epistemically humble models accompanied by discussions of their confidence, which can be scored in aggregate after each election”.

Also, “politically motivated actors selling narratives that reinforce their preferred outcome largely without data or with cherry-picked data.” Don't forget that option


But it's not about you. Just being on this forum puts you in the critical thinking 1%.

538 fails in that most people think that the daily stats are Silver's betting positions. He "predicted the 2008 election."

We understand the difference, but the crying campaign staffers last November did not.


> We understand the difference, but the crying campaign staffers last November did not.

Campaigns aren't relying on Silver’s model.


I came here to post this exact comment — except less eloquently. Hi, ubernostrum! :-)


I think that goes to show that probability isn't really useful to most people when we're talking about elections.


Could you use the same reasoning to discount weather predictions?

September 10, 2018 - Hurricane Florence predicted to be a Category 4 hurricane at landfall, 80%
September 12, 2018 - Hurricane Florence predicted to be a Category 4 hurricane at landfall, 20%
September 14, 2018 - Hurricane Florence makes landfall as a Category 1 hurricane

(the above numbers are demonstrative, loosely based on my memory of how events actually happened with Florence)

There is something to be said for predictions about the future based on today's environment, while still allowing for the reality that the environment could change. Predicting a single baseball game right before it happens is just a different kind of prediction than simulating a model that has noisy cross-interacting inputs.

Readers/listeners of 538 need to understand (and Nate spends a lot of time educating about this) exactly what the model is calculating and what it isn't. Nate calls out all the time that the model can only be as good as the polling that provides the inputs. And polls can swing for all sorts of reasons: there aren't many of them for a district, only highly biased ones are available, or people's actual voting intentions change from week to week.

Am I missing the point of what you're trying to say?


I think a more appropriate interpretation would be to use the same reasoning to acknowledge how freaking hard it is to predict the weather. Especially extreme weather.


> Now Nate Silver accepts this, but says that he's not actually putting a probability on the event in the future, but some non-existent "probability on that event in the future if the future was now".

Could you link to where Silver discusses this? I'm interested in seeing his description of exactly what his numbers mean.


https://fivethirtyeight.com/features/a-users-guide-to-fiveth...

What I'm referring to is the "now-cast", but his other two definitions both seem to shy away from saying "this is flat-out the probability we give the election".

The point is, you can redefine or choose a definition of probability if you want, but if it's less useful than the normal definition (and confusing to people!) then people are free to criticize your work on that basis.

And there's a very useful, testable, mathematical definition of probability that allows us to equally assess everyone's predicting ability, and Nate Silver is dodging it.
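For the curious, the usual testable yardstick here is a proper scoring rule such as the Brier score: the mean squared error between stated probabilities and 0/1 outcomes. A minimal sketch with made-up forecasts:

```python
def brier(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten events, seven of which happened. A calibrated 70% forecaster beats
# an overconfident 95% forecaster on the same record.
print(round(brier([0.7] * 10, [1] * 7 + [0] * 3), 4))   # 0.21
print(round(brier([0.95] * 10, [1] * 7 + [0] * 3), 4))  # 0.2725
```

Scoring every published forecast this way in aggregate is the kind of accountability Tetlock's forecasting tournaments use.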

If you're interested in this subject, there's a non-mathematical discussion somewhere in Tetlock's book Superforecasting which is interesting in general.


Oh okay. I think it's fair to just take his polls-plus model as his prediction and ignore the now-cast. But I wouldn't say that showing the now-cast is somehow being sneaky. He's just providing extra information.


Thanks for this! Hadn't thought of this before.



