Showing posts with label probability. Show all posts

2022/01/22

Doomsday argument

And now for something completely different: a fun little probability puzzle. 

What's the probability that the human race will end some time in the next 100 years? Surprisingly this question has a logical answer. And not because we have some magic crystal ball.  In fact, our puzzle  specifically assumes we have no information at all about the future.  

Here's how it goes. Let's step out of time for a second, and consider the total number of humans who will ever exist. Let's say that number is N. If you are of the Abrahamic faiths, you can call the first one Adam. But we're just having fun, so we'll just number them from first to last: 1, 2, 3, ..., N. Now let n be your number. So 1 ≤ n ≤ N: you are somewhere between the first and the last person ever. Since we have no information about the future, we have no clue whether you are near the end, near the beginning, or somewhere in the middle. You just happened to land at some random position in the long line of humans. So we have to assume that any position is equally likely, or technically that n is uniformly distributed between 1 and N. The chance that you are in a particular interval is equal to how big that interval is relative to the whole sequence: there's a 50% chance that you are in the first half and a 50% chance that you are in the second half, a 95% chance that you are in the first 95% and a 5% chance that you are in the last 5% of people, etc. So P(n < f*N) = f and P(n > f*N) = 1 - f, for any fraction f between 0 and 1.

We don't know N, but we can estimate n, because we can approximately calculate the cumulative population to date. This is more accurate than you might think, because the distant past, where our estimates are poor, is also when there were very few people. The left tail is long but thin. Current estimates are around n = 117 billion.

From the above, the distribution of N is P(N<n/f) = 1-f. That means there's a 5% chance that N < 123B i.e. that there are only 6 billion babies to go before the last one. If we translate that into time, using the current rate of 140M births per year,  it means there's a 5% chance that we have less than 43 years left! And a 50-50 chance that we'll be around for another 800 years. At the other end, a 5% chance that we have more than 16,000 years left, and so on.
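The percentile arithmetic above is easy to check. A minimal sketch, using the post's figures (117 billion humans born so far, 140 million births per year):

```python
# Doomsday-argument arithmetic from the post above.
n = 117e9                # cumulative humans born to date (estimate)
births_per_year = 140e6  # current birth rate (estimate)

def years_left(f):
    """With probability 1 - f, the total N is below n/f,
    i.e. fewer than n/f - n births remain."""
    remaining_births = n / f - n
    return remaining_births / births_per_year

print(years_left(0.95))  # 5% chance fewer than ~44 years remain
print(years_left(0.50))  # 50-50 chance of less than ~836 years
print(years_left(0.05))  # 5% chance of more than ~15,900 years
```

Note how insensitive the conclusion is to the exact birth rate: doubling it just halves every horizon, which doesn't change the flavor of the argument.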

I heard about this puzzle known as the "doomsday argument" about a year ago. Of course you can debate about whether this is a realistic model, but it's a cute way to provoke thought about all the minor risks we collectively worry about and the big ones we don't consider rationally. 

Reminds me of a few scenarios discussed in this blog a long time ago: ineffective posturing on climate change, the asteroid lottery, political pandering in a pandemic... Ouch ouch ouch! Sadly humanity doesn't seem to have gotten wiser in the decade (!) since those posts... 43 years seems like an awfully short time. At least math is eternal!

2014/10/18

Random matrix and phase shifts

I just stumbled across this great article on the Tracy-Widom distribution. It talks about random matrices and phase shifts. This reminded me of some work I did on resource allocation in network interconnection. We derived routing matrix conditions for "peering" and "dis-peering" (the latter a new term) to be equilibria in the decentralized resource allocation game. I wonder what a probabilistic approach, with the routing matrix randomized, would add to the game-theoretic results. Large-scale self-organizing interconnections (or the failure thereof).

2013/06/25

Optimism

A few months ago, I stumbled across a business card left behind on a table. Under the name of the company, it had three words, each followed by a period, representing, I guess, the three pillars of their "corporate values". One word struck me: Optimism. I must have snickered, because someone asked what was up. I instinctively thought "optimism" was a silly value, but it dawned on me that I had never thought about it explicitly.

Is optimism good?

The question sounds strange because Optimism, today, in American culture, is automatically assumed to be A Good Thing. Like "pro-active". People use that word as if it's synonymous with "good". E.g. Person A: "Don't do this bad thing." Person B: "It's not bad, it's pro-active!" Noooo.... Just as being pro-active is sometimes evil, being optimistic is not automatically good.

Let's define optimism as follows: Having high expectations for a positive outcome. That is to say, compared to most "normal" people's probability distribution of outcomes, yours has more weight on the positive side. Say we both bet on the same horse, and one of us thinks we'll lose and one thinks we'll win.  So when is it good to be the optimist?  I would slice it on three levels:
  1. Of course if you turn out to be right, then great... But that just means you got lucky.
  2. What if you had to make the same choice over and over again, and on average the optimistic view is more accurate? Great, but that's not really optimism, it's having a better probabilistic model, better foresight.
  3. Now what if you believe the same probabilities as everyone else, but you are more willing to take the risky choices and eventually you're better off? You are good at taking the right amount of risk for reward, and if in the long run you are better off (technically i.e. if you are on the efficiency frontier in the risk, reward plane),  then ... well that's good judgement. 
But in all these forms, the optimism is situational! There are plenty of situations where the wise person would take the "pessimistic" position. Thus, as a fundamental value to live by, "optimism" is actually orthogonal to the things we consider good, truthful, etc.

People (including Corporations!) of the world, listen to me: Value luck, foresight or judgement.  Not optimism.  That's just silly.

 

2010/03/31

Known unknowns and unknown unknowns



A lot of people made fun of poor Donald Rumsfeld for his infamous quote:

"Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."

But actually, he had a very good point. There are things that have uncertainty, but we know the shape of the uncertainty. For example, I don't know who will win the lottery tomorrow, but we do know the probability of any one person winning, and the distribution of winning amounts. And by the way, from that we do know that playing the lottery is one of the dumbest activities known to man, though it is a kind of stupidity that we can harness for good perhaps... but I digress. My point is the outcome of the lottery is a known unknown. That's a different type of ignorance than, say, not knowing if God exists. There you don't even have a probability space to support a distribution. Someone once asked me: What's the probability that God exists? Obviously it was a rhetorical question, and it assumed the answer is "very low" (the question came from an atheist). But then I was like: it could be 0.01% or 99.99% or anything. If you have to choose, it might as well be 42.

2009/08/16

Do you feel lucky... punk?

Here are two reasons why humanity might soon go extinct, and why it wouldn't be such a big loss. As you can see, I am in a cheerful mood today. 

Big rock from outer space 

Last year, using the example of the asteroid Apophis that might destroy the world in 27 years, I made the point that human beings are sometimes astonishingly stupid when it comes to making decisions that involve low probability events. If we were rational mathematical creatures, humanity as a whole should be willing to spend billions of dollars to insure against that 0.0023% chance that we will all be wiped out. If you don't like my argument based on the present value of future GDP, here's another way of arriving at the same point. If you are willing to spend a trillion dollars say on nuclear weapons to defend against other humans, and say there's a 1 in 50 chance that you actually need them, logically, you should be willing to spend a billion dollars on threats that have a 1/50,000 chance of happening. (I am using conservative orders of magnitude here, obviously a nuclear war has less than 1/50 chance of happening, so that makes my point even stronger). Today, in this article from Ars Technica, I found out just how stupid we are.
Congress awarded NASA a $1.6 million grant in 1999 to put towards the NEO discovery program. Unfortunately, this was the only funding Congress gave to NASA to pursue this goal.
Yup, the US government allocated $1.6 million dollars to save all of human life from extinction... Total! And just in case you are inclined to blame "the Americans" for being so short sighted, consider that all the other countries in the world are allocating.. ZERO! (Ok maybe they have a couple of telescopes pointing at the sky but we need giant laser beams or something...) At this point, I am almost rooting for the asteroid to kick human ass. We deserve it. 
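The spending argument above (a trillion dollars against a 1-in-50 threat implies a billion against a 1-in-50,000 threat) is just linear scaling by probability. A sketch with the post's illustrative numbers:

```python
# Proportional-spending argument: budget should scale with the
# probability of the threat (numbers are the post's illustrative ones).
nuclear_budget = 1e12                 # dollars against a ~1/50 threat
nuclear_p, asteroid_p = 1 / 50, 1 / 50_000

asteroid_budget = nuclear_budget * asteroid_p / nuclear_p
print(f"${asteroid_budget:,.0f}")     # $1,000,000,000 -- vs. the $1.6M actually granted
```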

Small germs from inner space 

And of course, a big stone falling from the sky is not the only threat we face. Tiny germs are threatening us too. Let's take the H1N1 virus -- the swine flu of recent fame. You'd think that at least when it comes to human health, humanity can be rational, right? Not so quick. Let's see how our favorite mammal is dealing with this problem. Consider the following article from the Guardian (great newspaper btw): "Experts warned dispersal of Tamiflu would do more harm than good", about the debate on antiviral treatments for H1N1. Here's the scientific view, summarized by one expert quoted in the article:
"Some people wanted to take a long-term view of the risk of resistance developing and to seek to preserve the effectiveness of antivirals for the next pandemic, which may be more severe."
"If you get a resistant strain that becomes dominant in the autumn, Tamiflu will then be useless."
And here's another scientist:
"I am concerned about the vast amount of Tamiflu that is going out almost unregulated," he told the Guardian. "We are increasing the possibility that the flu will become resistant sooner or later. At the moment there is no desperate need for Tamiflu. We should be reconsidering its issue, rather than encouraging its use. "I think we should stop the national pandemic flu service. It was put there for an outbreak of far higher mortality than we have. If you get a resistant strain that becomes dominant in the autumn, Tamiflu will then be useless."
Ok, thank God for all these smart scientists who have thought it through! The politicians should logically follow their advice, right? Well, actually:
"It was felt ... it would simply be unacceptable to the UK population to tell them we had a huge stockpile of drugs but they were not going to be made available"
So they just decided to go ahead and do the wrong thing! It's like a parent saying: "If I told my 5 year old not to play with this loaded gun, he would have been upset, so I decided to let him play with it." Mind you we're not talking about some distant threat here. The next mutation of the virus could be this autumn. Granted there's a low probability that it will mutate into a real killer, but that's my whole point. It's a low probability but high impact threat. And faced with that, the British government is willingly increasing the probability of a pandemic that could kill hundreds of millions of people, because they are afraid of being unpopular for the next two months! Seriously! If this was a movie, whose side would you be on? I would be like: Humans suck! Go H1, Go N1, it's your birthday! 

No rare events in the savanna 

None of this is original of course. Evolutionary biologists will say it's because our brain evolved in an environment where we just never had to consider small probabilities. We have no problem dealing with quantities like "if I go left, I get 1 potato, if I turn right I get 12 eggs"... Our brain can compute those things even as a toddler. But things like "1 in 50,000 chance" just don't compute in ye olde wetware. It's only after years of formal schooling, e.g. by the high-school level, that we start to get intuition on really small numbers. Because until the modern age, we didn't need to! Sure, there were rare things like being hit by lightning, or having an earthquake, but since there wasn't anything we could do about them, there was no evolutionary advantage to actually being able to reason logically about really small probabilities. Good old superstition would work just as well. You could say "I got hit by lightning because Zeus is angry at me because I didn't offer animal sacrifice". If you are a hunter-gatherer living in the bush, that explanation is, practically speaking, just as good as the scientific one. But now, by our own hands, we have a world where we do need to reason about small probabilities... Problem is, the brain hasn't caught up! Global warming is another example. Twenty years ago, it was a low probability but high impact threat, just like our two examples above. Scientists were running around screaming "There's a 1 in 100 chance that the polar ice caps will melt! That's huge!" But humanity just couldn't deal with it. People were like: "One in a hundred chance of extinction? Pffft. I'm feeling lucky. Let me go buy a lottery ticket."
   
Well now global warming is in the same range of probability as 1 potato and 12 eggs, so people are dealing with it, but it may be too late. Is this the end-game of evolution? Is this what the epitaph will say:
Here lies humanity. They became really good at reproduction -- 6 billion individuals! But not quite good enough at probability.
Maybe it's all part of a master plan. A conspiracy! Apophis contains some organic molecules which are distant relatives of the H1N1 virus. Together the asteroid and the swine flu are collaborating to take us out, and recolonize the planet with a new dominant species that they like better. After all, that could be how we got here too!

2008/10/18

Manipulation of prediction markets

Nice post on manipulation of prediction markets: ...manipulation can improve (!) prediction markets - the reason is that manipulation offers informed investors a free lunch.

Nice also to see our old friend Hanson.

2008/07/13

Electoral markets

In my last entry about prediction markets vs polls, I quoted and linked to some aggregate numbers from Intrade for the US presidential election. Now there's Electoral Markets, a website that displays the prediction markets at the state level. Cool stuff! I came across it in a blog entry that sums it up quite well.

2008/06/22

Apophis & carpe diem & how to save the world with a billion keychains

There's a 0.0023% chance that asteroid Apophis will impact earth in 28 years. Who cares about a 1 in 45,000 chance? Believe it or not, it is useful information.

For example, you could use this information when negotiating a 30-year loan -- structure it so payments are more heavily weighted to the last two years! At what cost? Of course the world won't end, but if there's a chance... you can precisely calibrate your degree of carpe diem. You should be willing to pay up to $1 for every extra $45,000 (plus additional interest) that is deferred to the last two years. Many people spend $1 on a lottery ticket where you have a 1 in 689,065 chance of winning $10,000. And of course a big loan with a small chance you won't have to pay it back is the same as a lottery ticket -- in fact even better since a) this one pays upfront and b) the normal lottery ticket is overpriced by an order of magnitude.
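The pricing in the paragraph above can be checked in a few lines. A sketch using the post's numbers (0.0023% impact chance, $45,000 of deferred principal, and the 1-in-689,065 lottery):

```python
# Fair value of deferring debt past the potential impact date.
p_impact = 0.000023        # 0.0023% chance Apophis hits (1 in ~45,000)
deferred = 45_000          # dollars pushed past the impact date

fair_premium = p_impact * deferred
print(fair_premium)        # ~$1: the most a rational borrower should pay

# Compare with the lottery ticket mentioned in the text:
p_win, prize = 1 / 689_065, 10_000
lottery_ev = p_win * prize
print(lottery_ev)          # expected value per $1 ticket -- roughly a penny
```

So a $1 ticket on the Apophis "lottery" is actuarially fair, while the regular lottery ticket returns pennies on the dollar, which is the post's point about overpricing.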

Back to Apophis. Here's another, less selfish, example of how this information can be useful. We want to know how much money it makes sense for us (earth, one world united!) to spend defending against Apophis. The answer is 0.0023% times the present value of world GDP, cumulated from 2036 forward. Oops, that's infinity... Wait, not necessarily. If we assume GDP stops growing at some point (e.g. the point where all material needs of humanity would be easily met), and we assume a discount rate strictly greater than zero, the present value of all future GDP is a finite number. So we should multiply that number by 0.0023% and invest it in a laser beam.
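To make the budget concrete, here's a back-of-the-envelope sketch. The steady-state GDP figure and discount rate below are my own made-up assumptions, not the post's; only the 0.0023% probability and the 28-year horizon come from the text:

```python
# Present value of a perpetuity of G dollars/year starting in `start` years,
# discounted at rate r: (G / r) * (1 + r)**-start
G = 60e12        # assumed steady-state world GDP, $/year (illustrative)
r = 0.03         # assumed discount rate, strictly > 0 so the sum converges
start = 28       # years until the potential impact

pv_future_gdp = (G / r) / (1 + r) ** start
budget = 0.000023 * pv_future_gdp
print(f"${budget / 1e9:.1f}B")   # ~$20B under these made-up inputs
```

Even with very conservative inputs, the rational budget comes out several orders of magnitude above the $1.6M mentioned in the 2009 post above.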

Laser beam schematic:



Take that, make it a billion times more powerful, with a nuclear battery, put it in a satellite with some stuff for aiming and we should be ok! Seriously though, we do have 28 years to work on the technology, so no biggie. But how do we create the political will to spend money on it now?

Societies seem to have a hard time making really long-term investments, whether they are democracies or whatever. But they all love lotteries! Let's revisit the deferment of debt idea. The borrower will be willing to pay a premium to take some debt and push it back past the 28th year. The lender is neutral since they get extra fees to compensate for the risk. Thus we have an efficient transaction between a rational borrower and a rational lender. OK and what has that got to do with asteroids? Recall this is essentially a lottery, one that's better than the usual ones, and we know people are willing to pay 10 times the rational price for lottery tickets. Therefore it should be possible to satisfy the lender with just 1/10th of the fee collected from the buyer! And the remaining 9/10th can be used to build a giant laser beam!!! Everyone's happy. In fact, since the beam also works to eliminate or reduce that very risk, the rational lender might even be willing to contribute part of their one tenth. And then everyone's even more happy. Let's call this the GAALBMF: global anti-asteroid laser beam mortgage fund.
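The GAALBMF split above is simple enough to write down. A sketch, assuming (as the post does) that borrowers will pay roughly 10x the actuarially fair premium, the way lottery players do:

```python
# GAALBMF cashflow split: global anti-asteroid laser beam mortgage fund.
# Assumption from the post: people pay ~10x the fair price for lottery-like bets.
fair_premium = 1.0            # what the deferral risk is actually worth to the lender
markup = 10                   # lottery-style willingness to overpay

collected = fair_premium * markup
to_lender = collected / 10    # covers the lender's risk exactly
to_laser = collected - to_lender

print(to_lender, to_laser)    # 1.0 9.0 -- nine-tenths of every premium funds the beam
```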

2008/05/11

Predictions, elections, polls, fractals, reflexivity & the kitchen sink

People argue about whether prediction markets do a better job of forecasting elections than polls, or whether it's an illusion due to timing.

Initially, I am inclined to believe this is one area where the market works better. This follows from their most basic properties. Let's assume both are mostly mediocre: that is, there are many polls and prediction markets available, but they're just no good in general.

Now consider polls. If there were fewer of them, and they were well communicated, we could count on the fact that experts from all sides would scrutinize them, and that they would thus be held to the highest standards. Or of course, if you average a lot of polls, you should get a more accurate poll of polls, as errors cancel out. In both cases, centralization increases the accuracy of polls. Conversely, when looking at any one poll alone, chances are the one you're looking at is a bad/biased one.

For markets on the other hand, even if you are looking at one market alone, if it was biased, all it would take is one person who has seen the other markets to arbitrage the bias away, in effect linking the two markets and making them two views of one more accurate underlying market. Two polls cannot get organically linked and become more accurate than each by itself. You have to add them up yourself. But two markets can! Thus any one market you stumble upon is more likely to be accurate than a poll you stumble across.
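The error-cancellation point about a poll of polls is easy to demonstrate with a toy simulation (all numbers are made up for illustration: the true preference and the poll noise level are arbitrary assumptions):

```python
import random

# Toy model: each poll measures a true preference plus independent Gaussian noise.
random.seed(0)
true_support = 0.52          # assumed "true" underlying preference

def one_poll(sigma=0.03):
    """One poll's reported number: truth plus independent noise."""
    return true_support + random.gauss(0, sigma)

polls = [one_poll() for _ in range(100)]

single_error = abs(polls[0] - true_support)                   # one poll you stumble upon
average_error = abs(sum(polls) / len(polls) - true_support)   # the poll of polls

print(single_error, average_error)  # the average is typically far more accurate
```

The catch, per the argument above, is that no one does this averaging organically for polls, whereas a single arbitrageur does it automatically for markets.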

This argument seems particularly apt for the US presidential elections, since there's so much slicing and dicing... The polls are all complicated what-if scenarios. So anyway, according to Intrade, which I've written about before, here are the current probabilities for the next US President (taking bid prices, to get lower bounds):
And the Iowa Electronic Markets seem to agree. Thus, the above, in my humble opinion, is as close as you're gonna get to a prediction out there today.

But is it any good?

Getting back to the philosophical argument again... polls are trying to measure current feelings, i.e. they assume there's an underlying "true" preference of the public, and that they are an objective mechanism to reveal it, within a certain Gaussian error. But it could of course be that the error is much larger than we think possible, because the models are completely wrong. As Nassim Nicholas Taleb, the author of Black Swan, whom I mentioned here a couple of posts ago, has argued, a lot of mistakes are due to imposing Gaussian models on a reality that has fractal or power-law or heavy-tailed scaling. For polls, if you think there is a true current preference, then I guess the error should be Gaussian (in other words, using Taleb's lingo, the preference is "in mediocristan"). But looking at it over time, as you must for a prediction, maybe a single poll of a few thousand people, even if it's not representing a wider reality, can have a fractal effect, replicating its belief patterns at larger scales, through media. If that's the case, most polls will be meaningless, and some will be virally important. And prediction markets won't work well either. True, the number of participants can scale, so maybe they can make fractal bets, but no matter how many expert bets the market brings in, it won't improve the information about a black-swan type event, which is what a fractally scaling popularity would be.

Some people, like George Soros in his recent book (which I just picked up this weekend), argue that, when it comes to human/social phenomena, the underlying reality doesn't exist separately; it is entangled with human attempts to understand it and manipulate it (he calls this reflexivity). So taking his ideas to polls: are they measuring something that fundamentally may not actually exist? Probably there's no objective public opinion that exists independently, waiting to be measured. But it exists reflexively (this is my interpretation/application of Soros's idea here, so sorry if it's wrong). The polls, even if totally arbitrary to start with, by being communicated, may induce the reality they purport to measure. People listen to the news, and the polls, and may then come to act in the way the polls suggest for people like them. It may or may not be controlled in a concentrated way, but if we apply this theory, then polls are as much instruments of action as measurements; in robotic terms, as much actuators as sensors.

2008/01/25