Principles

I am quite amazed that the stuff I write is almost universally seen as controversial, because the way I see it, it’s the common-sense interpretation of the available fact pool. I’m not making stuff up, and I’m not making low-probability leaps of faith and logic. The fact that I’m seen as controversial means you’re all smoking the wrong shit, I would say.

If anything, I would say that my ability to see what the facts actually are, and what follows from them, is controversial only in the sense that everybody else lags behind because they have some kind of emotional or intellectual resistance to accepting the available facts; it’s not that I’m making wild guesses or working with fringe theories. I’m working with the same dataset everybody else can access. I’m not even that smart; if you IQ-tested a statistical sample of some demanding science-based university, a percentage of students would match my raw cognitive power, or exceed it. So, if I’m not using an alternative dataset, and I don’t possess alien brainpower, how is it that I’m routinely ahead of the “mainstream”, to the point where people look at me as if I have two heads when I make a statement, and then months, years or decades later it’s “fuck, how did he know that”?

It’s actually very simple. I don’t care what people think. I don’t care what they believe. I don’t care what they consider to be “mainstream”. I don’t care if something will be accepted as true by others. I basically don’t care; I just take in the widest available pool of data, I make several attempts at normalizing the dataset (for instance, when thinking about bone shapes of hominid fossils, I ignore those obviously suffering from arthritis; when analysing the political picture, I eliminate obvious wishful thinking), and I basically let the data speak to me with as little coloration as I can manage. I’m letting the raw data speak to me, so to say, and tell me about the world it lives in. Then I try to imagine the world the data lives in, and I try to predict stuff, and as I gather more data, I check whether it confirms or rejects my predictions, and so I iteratively refine my simulation until it fits all the available facts. I allow for paradoxes; things don’t have to be neatly arranged and they don’t have to make sense. I don’t reject a platypus just because it appears to make no sense.
I don’t reject evidence of an extinction event just because it’s a one-off thing. I don’t reject the possibility of rare events just because they haven’t happened in anyone’s living memory; that is the kind of recency bias that allows people to build cities and farms in close proximity to active volcanos that just happened not to erupt in recent memory, or to participate in economic bubble hysterias just because the last bubble-popping disaster was decades ago.

Basically, I’m doing science the way people used to do it, before it got so formal and rigid. I’m gathering data, using it to make a model, and then I test the predictive ability of the model with new data, and I either revise it or abandon it, depending on whether it works or not. And I don’t care whether it works or not; I don’t care about my reputation, or other people’s opinion, or about sunk investment in the existing model. I just want to find out how stuff works, and as far as my understanding improves, I don’t care whether I was right or wrong, as long as I get better understanding in the next iteration.

Let me demonstrate this with a few examples.

There used to be a controversy about whether the Neanderthals could speak, because in order to conclude that they could speak there would have to be a fossil finding that confirms both sufficient tongue mobility (hyoid bone) and brain capability (Broca’s region). My logic was that the common ancestor (Turkana boy) of both modern man and Neanderthal man had a developed Broca’s region, and this had to be present in the evolutionary successors, basically assuming that stuff that didn’t change much across the evolutionary tree was developed early on and then inherited in the perfected form. What follows from this logic is that speech was developed quite early in the hominid evolutionary tree and is actually the driving force behind the later brain development, because once you have speech, you can communicate complex ideas, and therefore more complex ideas can actually be an evolutionary advantage, and lack of complex ideas can be an evolutionary pressure. Basically, you can communicate complex things, such as storing food for the winter season, or migrating to where the salmon will be, or ambushing a herd of bison in order to drive them off a cliff, or telling an educational story to the next generation in order to expand the level of inherited knowledge compared to the baseline of personal experience. Basically, my model of hominid brain development assumes that speech was developed quite early on, and that it created a positive feedback loop that both motivated and rewarded brain development.

Another example is the relationship between climate and the K-T extinction. I started by looking at a graph of all known great extinctions in the history of life on Earth:

The shape of the graph surprised me, because I expected the mass extinctions to be independent, Poisson-distributed events, something akin to background radiation. What I found useful is the ability to look at the graph and ignore all the scientific labelling that concentrates only on a few spikes, naming them the P-T extinction, K-T extinction and so on. In fact, the extinctions follow a pattern of an elevated baseline of extinctions, followed by a spike, which means that some evolutionary pressure was present in the environment for quite a bit of time, usually millions of years, and then either the pressures exceeded the survivability threshold for a huge number of species and caused a supermassive extinction all at once, or a discrete event aggravated the situation to the same effect. I also had to ignore human ways of perceiving time, because humans are a very short-lived species of even more short-lived beings, and our perception of time and change is inherently flawed. If something doesn’t change for 10 KY, we think it’s forever. If something stays the same for longer than our species has been around, we think it was designed in this perfect and static form by God. This is the motivation behind thinking of great extinctions in terms of discrete events – a supernova explosion, a giant asteroid strike, a supervolcanic eruption and so on. We don’t think in terms of continental drift that takes hundreds of millions of years to change the configuration of continents relative to sea currents, and when a radically fatal configuration is established, it takes 60-70 MY for the effect to manifest itself fully. We also don’t intuitively connect events across such vast chasms of time, observing the long-term trends and ignoring the very visible spikes, but that’s exactly what I did with the data.
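To make the distinction concrete, here is a minimal sketch, with entirely made-up rates, contrasting the memoryless, Poisson-style model I expected with the buildup-then-spike pattern the graph actually shows:

```python
import random

random.seed(1)

BINS = 500  # simulated time, 1 MY per bin; all rates below are illustrative

def poisson_spikes(rate=0.02):
    """Memoryless model: every bin has the same small chance of a spike,
    regardless of history ("background radiation")."""
    return [random.random() < rate for _ in range(BINS)]

def buildup_spikes(threshold=25.0):
    """Buildup model: environmental stress accumulates, a spike fires when
    the stress crosses a threshold, then resets; elevated stress always
    precedes the spike."""
    spikes, pressure = [], 0.0
    for _ in range(BINS):
        pressure += random.uniform(0.0, 2.0)  # slow, cumulative pressure
        fired = pressure > threshold
        if fired:
            pressure = 0.0
        spikes.append(fired)
    return spikes

background = poisson_spikes()
buildup = buildup_spikes()
```

In the memoryless model a spike can land in any bin, even right after the previous one; in the buildup model the gap between spikes can never be shorter than the time needed to rebuild the pressure, which is exactly the elevated-baseline-then-spike signature described above.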
I made an assumption that is opposite to every other analysis I’ve seen, and asked “what if the spikes don’t actually matter?”, because the dinosaurs were already in a process of mass extinction due to the slow reduction of global temperature, increased aridity and increased seasonal climate variance. By “normalizing the data” I mean ignoring the biggest elephants in the room in order to see whatever is left when the distractions are removed. Then I saw that the climate has been cooling for more than 65 MY, and a few MY ago it reached a point so extreme it started throwing the planet into ice ages, alternating between glacial and interglacial cycles. The interesting fact is that this conforms to the Milankovitch cycles, but only within the Pleistocene, only after something cooled down so much it started throwing the climate off balance, and I decided that the amount of buffers in the atmosphere must have gone below the critical level that allows for such extremes; most likely, the atmospheric CO2 was extracted into the oceans due to the greater solubility of the gas in cold water, which put the climate into its death-throes, with the anticipated stable condition being a global glaciation that might last until continental drift gradually moves the continents, relative to the sea currents, away from the current configuration that promotes cooling. My analysis is that the anthropogenic increase in CO2 emissions actually helped stabilize the situation a bit, increasing the buffer levels to a more sustainable long-term value, but the long-term prognosis is unchanged.
The problem with human thinking is that, due to our short life span, we assume that the Earth was perfect “the way we found it”, while in fact it was in a configuration that is fatal for life in the long term, because of the cooling trend, and we are in the last, terminal phase of this transition; this terminal phase is called “Pleistocene”, the phase in which even extremely small variances in orbital parameters can introduce an ice age, or pull the planet out of it. The next phase, which I could call Cryocene (in order not to repeat the “Cryogenian” label), would take place when the buffer levels in the atmosphere fall below the amount necessary for the orbital variances to thaw the planet out of the glacial phase, instead allowing for a progressive increase in glaciation until it reaches the “snowball Earth” phase again. How long until then? It’s hard to tell, but my intuitive interpretation of the graph says that an error of 5 MY is acceptable. Translated to human language, the next ice age might be the one we never get out of, or we might have 5 MY until that point, because the industrial CO2 emissions introduced so much unexpected buffer it’s hard to anticipate the consequences, to the point where it might delay the onset of the new ice age by several MY, or it might actually destabilize the system, create an unexpected Dansgaard-Oeschger event and pull us into an ice age sooner. The margin of “I don’t know” is the size of 5 MY, which is double the length of the Pleistocene. One of the instability modes that my model predicts is that plants are normally restricted by scarcity factors, such as CO2 or phosphorus in the environment, and when you remove the restrictions, their growth suddenly expands exponentially to the point where they suck up and “bury” all those factors from the environment, basically turning atmospheric CO2 into coal deposits.
This means that the human-induced CO2 spike can produce a plant-induced CO2 drop which can, in some kind of perfect storm of conditions, trigger a glaciation. However, the number of unknowns is so vast that my simulation has no predictive ability within the stated margin of uncertainty. What is quite certain is that my model of a long-term cooling trend is valid and predictive in the long term: the trend is driven by a continental distribution that allows for a Coriolis-powered circum-Antarctic sea current, essentially “liquid cooling” the planet more efficiently than the Sun can warm it up, and promoting the gradual buffer-extraction that destabilizes the global climate. The “problem” is that the process started more than 65 MY ago, and that the Chicxulub asteroid produced a very visible extinction spike that masked the actual problem. Or, we could say that the human psychological attraction to discrete spikes is the actual problem. I think it has something to do with predatory genetics, where a lion or some other animal is perceived as a significant event, and grass growing is perceived as background noise to be ignored. Well, in my attempt to become less blinded by human biases, I started ignoring the lions and zebras and paying attention to the grass. This is why my analyses start by ignoring the things “everybody knows”, going back to the raw data, normalizing it against distractions, and letting it tell its own story.
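The overshoot dynamic I’m describing can be sketched in a few lines. Every parameter below is an arbitrary illustration chosen only to show the shape of the feedback, not a model of the real carbon cycle:

```python
# Toy feedback: a CO2 spike feeds plant growth, and the grown carbon gets
# "buried", pulling CO2 down below where it started.

co2, biomass = 2.0, 1.0   # start from a spike; call 1.0 the old baseline
history = []
for _ in range(200):
    growth = 0.05 * biomass * max(co2 - 0.5, 0.0)  # growth throttled by available CO2
    biomass += growth - 0.02 * biomass             # a fraction of biomass dies off
    co2 += 0.01 - 0.8 * growth                     # slow replenishment minus burial
    history.append(co2)

# The spike decays, and CO2 undershoots its pre-spike baseline before settling.
```

The point of the sketch is that the drawdown is driven by the spike itself: the extra CO2 grows the very biomass that later buries it, which is why a spike can end in a level lower than where it began.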

This article is too long already so I’ll stop here, although I could cite a dozen or so additional examples. In any case, you can see the outlines of my method – absorb the raw data, ignore biases and distractions, trust the known-to-be-valid mechanisms, such as thermodynamics, inertia and so on.

But that’s also how I model politics – it’s not that much different. See who has a better debt-to-GDP ratio, who has a foreign trade surplus, who has cheaper energy and more of it, who has better access to the basic natural resources, who is less sensitive to isolation from the global economic and political systems, who has more robust and reliable basic technological systems, and whose population has a healthier attitude towards reality, and then model interactions and time-graphs. When you do that, not only do my assessments no longer look like some fringe conspiracy theory, but you start asking yourself: why is nobody else following such common-sense principles?

Good question, I guess.
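As a minimal sketch of the kind of comparison I mean – every country name, metric value and weight below is a hypothetical placeholder, not real data:

```python
# Rank hypothetical countries on fundamentals: reward a trade surplus and
# resource access, penalize debt and expensive energy. The values and
# weights are invented purely for illustration.

COUNTRIES = {
    "A": {"debt_to_gdp": 1.3, "trade_balance": -0.04, "energy_cost": 0.9, "resource_access": 0.4},
    "B": {"debt_to_gdp": 0.2, "trade_balance": 0.06, "energy_cost": 0.3, "resource_access": 0.9},
}

def fundamentals_score(m):
    """Higher is better; the weights are arbitrary judgment calls."""
    return 10 * m["trade_balance"] + m["resource_access"] - m["debt_to_gdp"] - m["energy_cost"]

ranking = sorted(COUNTRIES, key=lambda c: fundamentals_score(COUNTRIES[c]), reverse=True)
# "B" (low debt, surplus, cheap energy) outranks "A"
```

The real work is in choosing the metrics and weights honestly, and in modeling interactions over time rather than a single snapshot; the sketch only shows that the input is fundamentals, not opinions.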

Medicine, witch hunts and conspiracies

There’s been quite a witch hunt in the Croatian newspapers in the last few days; a Hare Krishna family had a son die from untreated pneumonia and diabetes, and although his sickness wasn’t sudden and he asked them for help, they “treated” it with yarrow tea and prayer instead of taking him to a hospital. They called the ambulance only after he stopped breathing, and even then they didn’t attempt resuscitation.

There are several issues involved here. First, the Hare Krishnas aren’t known for their faith in either science or anything to do with the Western world, to put it mildly. More accurately, if something isn’t in Srimad Bhagavatam and Prabhupada didn’t recommend it, they will most likely feel that it’s either not important or that it’s actively harmful. Also, there were numerous incidents in their movement throughout its history, from pedophilia to murder and other forms of violence, that are directly opposite to what their teaching is supposed to be, so I wouldn’t put much past them. However, they are not opposed to Western medicine either as a matter of teaching, or from precedents set by the founder, who sought medical assistance several times in his life, for various issues, from minor (difficulties urinating) to life-threatening (stroke and diabetes). So, if there’s a “religious” reason why those parents didn’t bring their seriously ill son to the doctor, it’s not because their religion forbids it, or because its founder set a negative precedent. More likely, it’s a result of their “original thinking”.

The second issue is that the incident was reported in a very particular way, with a very clear message, in the context of the COVID-19 crisis, where at least half the population is seriously sceptical towards the official reporting of the facts and towards the official medical recommendations. The incident was framed as a message that says, basically: if you don’t buy all the garbage we’re feeding you from the official sources, then you are no better than this family of cultists that “refused science and medicine”, which is now to be officially prosecuted and have its other children taken away. The message is quite clear – obey unquestioningly or we’ll crush you like cockroaches.

The third issue is that I’m apparently an evil cult leader and I’m invariably accused of all the evils done by every idiotic cult or sect in recent history, so let me tell you what I did when my children were sick, at the beginning of 2020. They both had a 40°C fever and obvious symptoms of a viral pneumonia. Romana took them to the doctor, and they were both misdiagnosed – one with “some virus that’s always going around in the spring” and the other with spring allergies. I was suspicious, but I became certain it was COVID-19 when I got it myself. They had a nasty enough version, Romana as well, but I really got wiped out by it and made it out alive with the narrowest possible margin. All the while, the doctors were useless – they either didn’t know anything or actively lied – the politicians lied that there were only 7 known cases of COVID in the country and that they knew for certain it was all contained, and all the while I had kids who survived it and had to go back to school, because the super-authoritative medicine we are supposed to trust unquestioningly diagnosed them with “some virus that’s on the way out” and “allergies”. In the meantime, I, the evil cult leader, told my younger son, who was coughing his lungs out during the recovery phase, to wear a medical mask in school in order not to spread it to the other kids. Mind you, nobody wore medical masks at that point, so everybody was looking at him funny; “there was no COVID” in Croatia, and what his entire school, including the teachers, came down with were “spring allergies”. So, what’s the difference between the recommendations given to my kids by the official medicine, and the recommendations given to the kids of the Hare Krishnas by their parents?
Both were useless and irrelevant, but I can’t see the doctors who misdiagnosed my kids facing any consequences – and, arguably, they really didn’t know enough to do any better – yet all of a sudden this useless, hapless medicine is portrayed as something only a fringe lunatic would go against. No, the problem is that the official medicine is useless or actively harmful often enough to warrant serious scepticism, and people need to make educated guesses whether some ailment falls into the group that medicine treats well, or into the group where medicine will do more harm than good – for instance, getting you addicted to drugs promoted by the pharma companies that routinely bribe doctors. In this particular instance, the Hare Krishna parents had insufficient knowledge to make a proper call, they misjudged the situation completely, and as a result their child died. However, it can be said that I also misjudged the situation when I sent my kids to the doctor when they had high fever and pneumonia, because the doctors misdiagnosed them, recommended worthless treatments and sent them home to get better on their own; even worse, they sent them back to school while they were most likely contagious with COVID.

Completely rejecting medicine is obviously not the correct answer, because if you break a bone or have a stroke, medicine is going to really help you; but trusting it unquestioningly is barely any better than rejecting it unquestioningly, because medicine has been so corrupted by industrial bribery and connections to politics that there are areas where it is actively harmful, and one would be much better off avoiding it entirely.

The problem with COVID, in particular, is that the current treatment is actually not universally supported by medical professionals; in fact, they seem to be the greatest opposition to the official stance, which is advocated by something that resembles a conspiring cabal or a cult of “elites”, consisting of “doctors” who actually produce bioweapons, politicians with anti-capitalist agendas, and who knows what else. Since I know for a fact that they lied when I could personally verify their statements, I am certainly not going to trust them on anything they say that I can’t personally verify.

COVID-19 science

I am currently reading the Spartacus letter and, from what I can tell so far, it is an expertly written medical analysis that needs to be made available to the widest possible audience, to counter all the lies and intentional disinformation from the “official” sources. The original can be downloaded here. Download local copies in case the original “disappears”.

About probabilities

Every time some scientist starts talking about probability I get pissed off, and here’s why.

Let’s say they are talking about the chances of Earth getting hit by an asteroid, or a supervolcano erupting, or a near-enough star going supernova, or whatever potentially cataclysmic event; their argument is always “events such as this happen every x million years, so the probability of it happening in any given year is on the order of one in x million”.

Oh, really?

Let’s see how the Yellowstone supervolcano works, and then you’ll see why I have a problem with probabilistics. You have a mantle plume that comes up to the crust. A reservoir of magma under pressure forms, and when this pressure exceeds the pressure resistance of the rock layer above, there is an explosive eruption which relieves the pressure. The dome collapses and you get an open lake of lava. After a while, the lava cools and forms a new dome. The magma chamber has relieved its pressure and will take a long time to fill, and even longer to build pressure to the point where it can mechanically compromise the hard layer of basaltic rock above. You basically have a period of several hundred thousand years after an eruption where the probability of another eruption is literally zero, because the physics that would support it just isn’t there. It’s only in the last few percent of the supereruption cycle that you have any place for uncertainty, because you don’t know the pressure at which the basaltic rock will crack; the thickness, hardness and elasticity of the basaltic dome can vary between eruptions, so you don’t really know the pressure at which it will pop, and you also don’t know the level of mechanical deformation it can manifest before it pops. So, if an eruption cycle is 650,000 years, let’s say there’s a place for probabilistics in the last 20% of that time – basically saying the cycle is 650,000 years with an error margin of 20%, meaning it can pop 130,000 years sooner or later. That’s the scientific approach to things. However, when they employ mathematicians to make press releases, and they say that the probability of it going off is 650 thousand to one for every year, that’s where I start whistling like an overheated boiler. It’s actually never 650K to one, and if someone states that number you know you’re dealing with a non-scientist who was educated way beyond their intelligence.
The probability of it going off is basically zero outside the uncertainty margin that covers the last 20% of the time frame. As you get further in time, the probability of an eruption grows, but you can hardly state it in numeric terms; you can say that you are currently within the error margin of the original prediction, and you can refine your prediction based on, for instance, using seismic waves to measure the conditions within the magma chamber: how viscous it is, how unified or honeycombed it is, whether there were perceivable deformations in the lava dome, whether there were new hydrothermal events that can be attributed to increased underground pressure. Was there new seismic activity combined with dome uplift and hydrothermal events? That kind of thing can narrow your margins of error and increase confidence, but you never say it’s now x to one. That’s not how a physicist thinks, because you’re not dealing with a random event in a Monte Carlo situation, where you basically generate random numbers within a range and the probability of a hit is one in the size of the number pool for each random number generated. A volcano eruption is not a random event. It’s a pressure cooker. If it’s cold, the probability of an explosion is zero. If the release valves are working, the probability of an explosion is zero.
Only if the release valves are all closed, the structural strength of the vessel is uniform, the heat is on, there’s enough water inside, and the pressure is allowed to build to the point of exceeding the structural strength of the vessel, can there be any talk of an explosion at all, and only in the very last minutes of the process, when the uncertainties about the pressure overlap with the uncertainties about the structural strength of the vessel, is there any place for probabilistics – and even then it’s not Monte Carlo probabilistics, because as time goes on the probability rises steeply, since you get ever more pressure working against that structural strength. As you get closer to the outer extent of your initial margin of error, the probability of the event approaches the limit of 1.
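The pressure-cooker argument can be written down as a toy hazard function. The 650,000-year cycle and the 20% window come from the discussion above; the shape of the ramp inside the window is my arbitrary assumption:

```python
CYCLE = 650_000           # nominal eruption cycle, years
WINDOW = 0.2              # uncertainty lives in the last 20% of the cycle

def eruption_hazard(t):
    """Per-year eruption hazard at t years after the last eruption.
    Zero while the chamber refills; ramps up inside the uncertainty window."""
    start = CYCLE * (1 - WINDOW)          # 520,000 years
    if t < start:
        return 0.0                        # the physics for an eruption just isn't there
    x = (t - start) / (CYCLE * WINDOW)    # position inside the window, 0..1
    return min(1.0, x ** 2)               # assumed accelerating ramp, capped at 1

# Contrast with the flat press-release number, the same for every year:
flat = 1 / CYCLE
```

The contrast is the whole point: the flat number spreads the risk evenly over the cycle, while the hazard function concentrates all of it where the physics actually allows an eruption, and approaches 1 at the edge of the error margin.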

You can already see that most other things work in similar ways, because if there are no asteroids of sufficient size on paths that can result in a collision with Earth, what is the probability of an extinction-level event caused by an asteroid impact? In the early stages of the solar system’s formation the probabilities of such events were much higher, but by this point everything that had intersecting orbits has already had the time to collide, and things have cleared up significantly. You can always have a completely random, unpredictable event, such as a black hole or something equally bad suddenly intersecting the solar system at high velocity and completely disrupting the orbits of everything, or even destabilizing the Sun, but unless you can see how often that happens to other solar systems in the Universe, you can’t develop a meaningful probabilistic analysis.

Also, how probable is a damaging supernova explosion in our stellar neighbourhood? If you are completely ignorant, you can take a certain radius from the Sun within which you’re in danger, count all the stars that can go supernova within that sphere of space, assume that the probability of a star going supernova is, say, one in four billion for every year, and multiply that by the number of stars on your shortlist. If you did that, then congratulations, you’re an idiot, and you are educated far beyond your intelligence, because stars don’t just go supernova at random. There are conditions that have to be met. Either it’s a white dwarf that gradually leeches mass from another star, exceeds the Chandrasekhar limit and goes boom, or a very old star leaves the main sequence on the Hertzsprung-Russell diagram, so you have a very unstable giant star that starts acting funny, sort of like what Betelgeuse is doing now, and even then you get hundreds or even thousands of years of uncertainty margin before it goes. You also have the possibility of stellar collisions, either at random (which are incredibly rare), or with a pair of stars that get closer with every orbit, leeching mass from each other, until eventually the conditions are met for their cores to deform, extrude and join, making for a very big boom. Essentially, what this does is give you a way to narrow down your margins of uncertainty from billions of years to potentially hundreds of years, if you notice a star approaching the conditions necessary for it to go supernova – which should not be that difficult where it actually matters, because if a star is too far away to measure it isn’t dangerous, and the closer it is, the more you tend to know about it. So, the less you know, the bigger the margin of uncertainty represented by your assessments of probability, and the most useless assessment possible is what you get by hiring a mathematician to do it.
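To put numbers on the difference between the two approaches – every figure below is an assumption for illustration, not catalogue data:

```python
# Naive estimate: treat every star as equally likely to go off this year.
PER_STAR_RATE = 1 / 4e9      # assumed blanket "one in four billion per year"
N_NEARBY = 100               # hypothetical count of candidate stars in the danger radius
naive_yearly = N_NEARBY * PER_STAR_RATE      # ~2.5e-8, the same for every year, forever

# Conditioned estimate: only stars already in the final unstable phase matter,
# and for those the uncertainty shrinks to their remaining-lifetime window.
N_RIPE = 1                   # e.g. a single Betelgeuse-like star off the main sequence
WINDOW_YEARS = 100_000       # assumed uncertainty on when such a star pops
conditioned_yearly = N_RIPE / WINDOW_YEARS   # 1e-5 inside the window, zero elsewhere
```

The two numbers answer different questions: the naive one smears the risk over all stars and all years, while the conditioned one is zero for almost every star and concentrated entirely on the few that physics says are actually ripe.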