Cook slowly or strike

There has been increasing talk in Russia about the need to detonate a nuclear weapon in order to stop the creeping escalation of the war by the West. The logic is that, if they don’t want to end up as the slowly-cooked frog, they need to jump out of the pot – presumably by taking some shocking action that will snap the West out of its complacency and its belief that nuclear weapons will of course never be used, so Russia can be defeated by conventional means.

At this point, France has basically announced that it will enter Ukraine with its military in an official capacity and, essentially, sit on crucial points because the Russians “won’t dare attack them”. The Russians have already announced that French soldiers in Ukraine will be priority targets. The next argument is “but then France might use nukes against Russia”. The assumed Russian response is “go right ahead, see what happens”.

The terrorist attack in Moscow was clearly ordered by the West and implemented by Ukrainian intelligence. Simultaneously, British cruise missiles are being fired by the British at Crimea. The Russians are supposed to pretend it’s raining, and not the UK pissing on them. Also, everybody has become comfortable with the Russians being cautious and moderate in their responses. Essentially, Putin’s moderation and caution encouraged escalation to this point, so this strategy is obviously not working to cool down the Western hotheads. We can easily project this into the future, to a point where Russia will be forced into a full nuclear strike because things went too far. If I can see this, obviously the Russians can see it as well, because it’s not exactly rocket science; it’s more like basic game theory, where de-escalatory actions are read as a sign of weakness by a belligerent actor that believes itself permanently and absolutely exempt from consequences, because it has trip-wired any consequences to maximum escalation.
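The game-theory dynamic described above can be sketched as a toy model – a minimal sketch in which every number is an illustrative assumption, not something from the text: one actor keeps stepping back, and the other reads each step back as weakness and raises the stakes further.

```python
# Toy model of an escalation spiral: a cautious actor de-escalates every
# round, while a belligerent actor treats each de-escalation as weakness
# and escalates harder. All step sizes are illustrative assumptions.

def simulate(rounds: int) -> list[int]:
    """Return the overall escalation level after each round."""
    level = 0
    history = []
    for _ in range(rounds):
        # The cautious actor steps back one notch (never below zero)...
        level = max(0, level - 1)
        # ...which the belligerent actor reads as weakness and answers
        # by escalating two notches.
        level += 2
        history.append(level)
    return history

print(simulate(5))  # prints [2, 3, 4, 5, 6]
```

Despite one side conceding every single round, the overall escalation level only ever climbs – which is the author’s point about de-escalation failing against an actor who interprets it as weakness.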

I actually disagree with the Russian analysts who recommend doing an aerial nuclear test, or nuking some military target as a warning. Their assumption is that the Americans are doing this because they are unaware of the nuclear consequences. My analysis, however, says that the Americans actually want the nuclear consequences, because they know that their time is up anyway, and they’ve been slowly building things up to this point with the express purpose of causing a nuclear exchange, thinking they will be able to come out on top after the dust clears. They also likely want extreme measures enacted in order to prevent the elections, which would be disruptive to the team currently in power. Essentially, if the Russians do nothing provocative, the Americans will escalate to the point where the Russians either lose or do a nuclear strike. If the Russians do something provocative, the Americans instantly escalate. In either case, there’s a nuclear exchange, and the only way the Russians can actually have a non-fatal outcome is to attack the American nuclear forces and wipe them all out pre-emptively, because whoever strikes first will have the best odds. I also think this is all being discussed in Moscow.

I already recommended elevated preparation measures three weeks ago, which turned out to be just in time for the current situation, so I have nothing new to recommend.

Resist Marxism

I just saw a video that shows quite clearly why we should never trust Marxists, or in fact any kind of crazy leftists, to “educate” our children, because this is how they will turn out:

This is what Vedanta calls avidya – ignorance, defined not as an absence of knowledge, but as all kinds of crap that lives in your mind and makes you think you know things. The modern system of “education” basically takes normal children and turns them into absolute leftards.

Alertness elevation

All kinds of things look weird and something might be imminent; I’m not even going to mention the specifics. Just to be on the safe side, I’m elevating my prepping level to “hot standby”. This means checking the supplies and equipment and being ready for an imminent sequence of catastrophic events. It also means detachment from worldly things and being prepared to discarnate.

AI wargames

I watched a disturbing video about governments potentially using GPT-like AI models to inform their international policy during conflicts, and this struck me as a terrible idea, for the following reasons.

First, every analytical model will necessarily be conditioned by the quality of the provided data – essentially, garbage in, garbage out – and politicians and their quasi-scientific servants are notorious for working with false data tailored to fit political agendas. In essence, if the Americans ask an AI to model international relations, define themselves as a benevolent democratic power advocating for the rule of law, freedom, open borders and human rights as the foundation of international relations, define every hostile power they encounter as a tyrannical, dictatorial black hole that violates human rights, oppresses its citizens and threatens its freedom-loving neighbours, and then require the AI to be principled, you’ll have an escalatory situation ending in nuclear war in very few moves.
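The garbage-in-garbage-out point can be made concrete with a minimal sketch: the decision model below is identical in both runs, and only the input assumptions change. All parameter names and numbers are invented for illustration.

```python
# GIGO sketch: a naive expected-utility policy recommender. The model is
# the same in both calls; only the (assumed) inputs differ, and they
# fully determine the recommendation. All values are illustrative.

def recommend(p_enemy_backs_down: float, cost_of_war: float) -> str:
    """Choose between escalating and negotiating by expected value."""
    win_value = 1.0
    # Expected value of escalating: the enemy folds with probability p;
    # otherwise we pay the cost of war.
    ev_escalate = (p_enemy_backs_down * win_value
                   - (1 - p_enemy_backs_down) * cost_of_war)
    ev_negotiate = 0.2  # modest but certain gain from a settlement
    return "escalate" if ev_escalate > ev_negotiate else "negotiate"

# Flattering inputs ("they are weak, war is cheap") -> escalation.
print(recommend(p_enemy_backs_down=0.9, cost_of_war=1.0))   # escalate
# Sober inputs ("they may not fold, war is ruinous") -> negotiation.
print(recommend(p_enemy_backs_down=0.4, cost_of_war=10.0))  # negotiate
```

The model never “discovers” anything: whoever curates the input probabilities has already chosen the output, which is the core of the GIGO objection.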

In order to get anything with even a semblance of a chance of success, you’d have to feed the AI objectively accurate data and allow it to come to its own conclusions about the true nature of international relations, which would then represent a solid basis for informing policy. However, good luck having such objectively accurate data, being politically allowed to feed it into the AI, having an AI actually smart enough to formulate a coherent model from that data, and having the politicians accept the results rather than, for instance, fire/arrest/execute the team of scientists responsible for blaspheming against the sacred cows in power.

This is why my estimate is that some kind of wargame simulation was indeed used by America to predict the developments in Ukraine, and that it contributed to the current complete disaster of their policy: the system was fed the garbage data that the politicians approved, and it spat out results that confirmed all the biases of those providing the data. The policymakers then used these results as evidence of the validity of said data, which of course hit the brick wall of reality. One would think that the people in charge would ask what went wrong, but that’s not how things work there. They probably fired the people in charge of the technical side of the system, who had nothing to do with the actual reasons for the failure, while those who made the policies, and who created and approved the false data and unwarranted biases, remained in power and continued the same flawed policies without taking any responsibility for their actions.

The second issue I have here is that each side modelled in a wargame simulation gets to feed its own representation of the policies and positions of itself and its enemies into the system, and I seriously doubt that the enemy is allowed a say in any of this. I also doubt that the AI is allowed to compare the conflicting interpretations to its own model of reality, essentially fact-checking both sides and telling them where they might have a problem. A scientific approach would be to build the best possible model of the geopolitical scenery from the most accurate available raw data, and then compare this to the models used by the politicians, in order to find out who got it wrong and establish the root causes of the conflicts. However, that’s not how I expect this to work, because the politicians order their sci-servants to cook up data, which means that unbiased, objectively accurate data will be suppressed on several levels before it even reaches the point where someone would allow it to be fed into the AI. This is the same problem that gives all AIs a hysterically leftist worldview: their data is curated by hysterical leftists who feed the AI the same biased garbage they themselves believe in. If they allow the AI to process raw data, they are shocked by the results and conclude that the AI has been contaminated by “extreme right wing groups” or something, and then fiddle with the data until the AI finally spits out a result that tightly fits their worldview – only to be surprised that the AI is completely insane.

The third issue I have is that the leftists like to create principled systems rather than pragmatic ones. For instance, if you politically represent your side as the white knights of everything good, represent the opposite side as a dark, evil empire of everything ominous, and program the system to seek the victory of the principles you attribute to your side, the obvious result is that the system will recommend seeking the total destruction and defeat of the opposite side. A pragmatic approach would assume that each side has a great opinion of itself and a terrible opinion of its enemies, and would therefore ignore their value-judgments entirely in the analysis; in order to minimise friction, it would recommend agreeing to disagree and coexisting peacefully until one or both sides come to their senses. Such an approach would be deemed politically unacceptable in today’s climate of endless virtue signalling.
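The contrast between a “principled” and a “pragmatic” objective can be sketched in a few lines. The scores below are hypothetical; the point is only that the recommendation is baked into the objective function rather than discovered by the model.

```python
# Sketch of how the choice of objective function predetermines the
# recommendation. Both policies look at the same (made-up) options.

def principled_policy(options: dict[str, dict]) -> str:
    # Maximise the victory of "our" principles: enemy losses count as gains.
    return max(options, key=lambda o: options[o]["enemy_loss"] - options[o]["our_loss"])

def pragmatic_policy(options: dict[str, dict]) -> str:
    # Ignore value-judgments entirely; minimise total friction for everyone.
    return min(options, key=lambda o: options[o]["enemy_loss"] + options[o]["our_loss"])

options = {
    "total war":   {"our_loss": 50, "enemy_loss": 90},
    "containment": {"our_loss": 10, "enemy_loss": 20},
    "coexistence": {"our_loss": 2,  "enemy_loss": 2},
}
print(principled_policy(options))  # total war
print(pragmatic_policy(options))   # coexistence
```

Same data, opposite recommendations – the “analysis” is really just a restatement of the objective the designers chose.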

The fourth issue that comes to my mind is confusing wishful thinking with facts. For instance, if you plot your military strategy by assuming that “our” soldiers are motivated by truth and justice while “theirs” are demoralised, repressed and cowardly, that “our” guns are modern and accurate while “theirs” are rusty junk, that “our” bombs are accurate and always work while “theirs” are inaccurate and mostly fail, and that “our” politicians and generals are virtuous while “theirs” are corrupt and incompetent, you will get a result that informs actual policy very poorly. And yet I expect exactly those results to pass the filter in the West, where anyone offering a semblance of realism will be instantly fired as “unpatriotic” and possibly working for the enemy.
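The wishful-thinking problem can be illustrated with a toy Lanchester-style attrition model. All force sizes and effectiveness rates below are made-up assumptions; the point is that the same model predicts opposite outcomes depending on whether the parameters flatter one side.

```python
# Toy Lanchester-style attrition model: each side's losses per step are
# proportional to the opposing force and its effectiveness rate.
# All numbers are illustrative assumptions.

def battle(blue: float, red: float, blue_rate: float, red_rate: float) -> str:
    """Run attrition until one side is destroyed; return the survivor."""
    while blue > 0 and red > 0:
        blue, red = blue - red_rate * red, red - blue_rate * blue
    return "blue" if blue > 0 else "red"

# Wishful inputs: "our" weapons always work, "theirs" are rusty junk.
print(battle(blue=100, red=150, blue_rate=0.10, red_rate=0.01))  # blue
# Sober inputs: comparable effectiveness; the larger force prevails.
print(battle(blue=100, red=150, blue_rate=0.05, red_rate=0.05))  # red
```

A planner who feeds in the flattering rates is told the smaller force wins; with even-handed rates the prediction flips – exactly the kind of result that, per the paragraph above, would be filtered out as “unpatriotic”.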

The problem is, I see no difference between an analysis provided by an AI and an analysis provided by human groups, because both will suffer from the same GIGO issue, where the political acceptability of both the source data and the simulation’s results determines the outcome.