About probabilities

Every time some scientist starts talking about probability I get pissed off, and here’s why.

Let’s say they are talking about the chances of Earth getting hit by an asteroid, or a supervolcano erupting, or a near-enough star going supernova, or some other potentially cataclysmic event; their argument is always “events such as this happen every x million years, so the probability of it happening in any given year is on the order of one in x million”.

Oh, really?

Let’s see how a Yellowstone supervolcano works, and then you’ll see why I have a problem with probabilistics. You have a mantle plume that rises to the crust. A reservoir of magma under pressure forms, and when this pressure exceeds the strength of the rock layer above, there is an explosive eruption which relieves the pressure. The dome collapses and you get an open lake of lava. After a while, the lava cools and forms a new dome. The magma chamber has relieved its pressure and will take a long time to refill, and even longer to build pressure to the point where it can mechanically compromise the hard layer of basaltic rock above. You basically have a period of several hundred thousand years after an eruption where the probability of another eruption is literally zero, because the physics that would support it just isn’t there.

It’s only in the last few percent of the supereruption cycle that there is any place for uncertainty, because you don’t know the pressure at which the basaltic rock will crack; the thickness, hardness and elasticity of the basaltic dome can vary between eruptions, so you don’t really know the pressure at which it will pop, and you also don’t know the level of mechanical deformation it can sustain before it pops. So, if an eruption cycle is 650,000 years, let’s say there’s a place for probabilistics in the last 20% of that time; basically, the cycle is 650,000 years with an error margin of 20%, meaning it can pop up to 130,000 years sooner or later. That’s the scientific approach to things. However, when they employ mathematicians to make press releases, and those say the probability of it going off is one in 650 thousand for every year, that’s where I start whistling like an overheated boiler. It’s actually never one in 650K, and if someone quotes that number you know you’re dealing with a non-scientist who was educated way beyond their intelligence.
The probability of it going off is basically zero outside the uncertainty margin that covers the last 20% of the time frame. As you get further in time, the probability of an eruption grows, but you can hardly state it in numeric terms; you can say that you are currently within the error margin of the original prediction, and you can refine your prediction based on, for instance, using seismic waves to measure the conditions within the magma chamber: how viscous it is, how unified or honeycombed it is, whether there were perceivable deformations in the lava dome, whether there were new hydrothermal events that can be attributed to increased underground pressure. Was there new seismic activity combined with dome uplift and hydrothermal events? That kind of thing can narrow your margins of error and increase confidence, but you never say it’s now x to one. That’s not how a physicist thinks, because you’re not dealing with a random event in a Monte Carlo situation, where you basically generate random numbers within a range and the probability of a hit is one in the size of the number pool for each draw. A volcano eruption is not a random event. It’s a pressure cooker. If it’s cold, the probability of an explosion is zero. If the release valves are working, the probability of an explosion is zero.
Only if the release valves are all closed, the structural strength of the vessel is uniform, the heat is on, there’s enough water inside, and the pressure is allowed to build to the point of exceeding the structural strength of the vessel, can there be any talk of an explosion at all; and only in the very last minutes of the process, when the uncertainties about the pressure overlap with the uncertainties about the structural strength of the vessel, is there any place for probabilistics. Even then it’s not Monte Carlo probabilistics, because as time goes on the probability goes up exponentially, since ever more pressure is working against that structural strength. As you approach the outer extent of your initial margin of error, the probability of the event approaches 1.
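The difference between the two ways of thinking can be put in a few lines of code. This is a toy sketch, not a volcanology model: the quadratic hazard shape, and the assumption that the uncertainty window is exactly the last 20% of a 650,000-year cycle, are illustrative choices of mine, not measured quantities.

```python
CYCLE = 650_000              # assumed full eruption cycle, in years
WINDOW = CYCLE // 5          # last 20% of the cycle, where the uncertainty lives

def naive_annual_probability(years_since_eruption):
    """The press-release model: a constant hazard of one in 650,000,
    regardless of where the volcano is in its cycle."""
    return 1.0 / CYCLE

def cycle_annual_probability(years_since_eruption):
    """The pressure-cooker model: zero hazard until the uncertainty
    window opens, then a hazard that climbs toward certainty as the
    pressure approaches the dome's breaking strength.
    The quadratic growth here is purely illustrative."""
    start = CYCLE - WINDOW
    if years_since_eruption < start:
        return 0.0
    f = (years_since_eruption - start) / WINDOW   # fraction of the window elapsed
    return min(f * f, 1.0)

for t in (100_000, 400_000, 550_000, 640_000, 650_000):
    print(t, naive_annual_probability(t), cycle_annual_probability(t))
```

The point the numbers make: for the first 520,000 years the physical model outputs exactly zero while the naive model keeps quoting one in 650,000, and in the last stretch the physical model races past the naive figure toward 1.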

You can already see that most other things work in similar ways. If there are no asteroids of sufficient size on paths that can result in a collision with Earth, what is the probability of an extinction-level event caused by an asteroid impact? In the early stages of the solar system’s formation the probability of such events was much higher, but by this point everything that had intersecting orbits has had time to collide, and things have cleared up significantly. You can always have a completely random, unpredictable event, such as a black hole or something equally bad suddenly crossing the solar system at high velocity, completely disrupting the orbits of everything or even destabilizing the Sun, but unless you can see how often that happens to other solar systems in the Universe, you can’t develop a meaningful probabilistic analysis.

Also, how probable is a damaging supernova explosion in our stellar neighbourhood? If you are completely ignorant, you can take a certain radius from the Sun within which you’re in danger, count all the stars that can go supernova within that sphere of space, say that the probability of any one star going supernova is, let’s say, one in four billion for every year, and multiply that by the number of stars on your shortlist. If you did that, then congratulations, you’re an idiot, and you are educated far beyond your intelligence, because stars don’t just go supernova at random. There are conditions that have to be met. Either it’s a white dwarf that gradually leeches mass from another star, exceeds the Chandrasekhar limit and goes boom, or a very old star leaves the main sequence on the Hertzsprung–Russell diagram, so you have a very unstable giant star that starts acting funny, sort of like what Betelgeuse is doing now, and even then you get hundreds or even thousands of years of uncertainty margin before it goes. You also have the possibility of stellar collisions, either at random (which are incredibly rare), or in a pair of stars that get closer with every orbit, leeching mass from each other until the conditions are met for their cores to deform, extrude and join, making for a very big boom. Essentially, what this gives you is a way to narrow down your margins of uncertainty from billions of years to potentially hundreds of years, if you notice a star approaching the conditions necessary for going supernova, which should not be that difficult where it actually matters, because if it’s too far away to measure it isn’t dangerous, and the closer it is the more you tend to know about it. So, the less you know, the bigger the margin of uncertainty represented by your assessments of probability, and the greatest probability of getting the most useless assessment possible is what you get by hiring a mathematician to do it.
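The counting approach mocked above fits in a few lines, which is exactly the problem. Everything here is a made-up illustration: the flat per-star rate, the star list, and the end-state flags are my assumptions, not survey data.

```python
# The "ignorant" estimate: count stars in the danger radius, multiply by a
# flat per-star annual rate. All numbers and star states are hypothetical.
FLAT_RATE = 1 / 4_000_000_000   # assumed flat supernova rate per star per year

stars = [
    {"name": "star A", "near_end_state": False},  # quietly on the main sequence
    {"name": "star B", "near_end_state": False},  # ditto
    {"name": "star C", "near_end_state": True},   # unstable giant acting funny
]

# Naive estimate: every star contributes the same flat rate, regardless of state.
naive_annual_risk = len(stars) * FLAT_RATE

# Physical shortlist: only stars that actually meet the preconditions matter,
# and for those the uncertainty window shrinks from billions of years to centuries.
candidates = [s for s in stars if s["near_end_state"]]

print(naive_annual_risk, [s["name"] for s in candidates])
```

The naive number treats a quiet main-sequence dwarf and a convulsing giant as interchangeable; the shortlist is where the actual information is.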