AI wargames

I watched a disturbing video about governments potentially using GPT-like AI models to inform their international policy during conflicts, and this struck me as a terrible idea, for the following reasons.

First, every analytical model will necessarily be conditioned by the quality of the data provided; essentially, garbage in, garbage out, and politicians and their quasi-scientific servants are notorious for working with false data tailored to fit political agendas. In essence, if the Americans ask an AI to model international relations, and they define themselves as a benevolent democratic power advocating for the rule of law and freedom, open borders and human rights as the foundation of international relations, and they define every hostile power they encounter as a tyrannical, dictatorial black hole that violates human rights, oppresses its citizens and threatens its freedom-loving neighbours, and the AI is required to be principled, you’ll have an escalatory situation ending in nuclear war in very few moves.
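To make the garbage-in, garbage-out point concrete, here is a minimal, purely illustrative sketch in Python; every actor, label and weight in it is invented. The "threat assessment" it performs discovers nothing: its output is just a restatement of the labels the analysts chose to feed it, so whoever curates the input has already written the conclusion.

```python
# A toy "threat assessment" model. All labels and weights are invented for
# illustration; the point is that the output only echoes the input labels.

WEIGHTS = {
    "benevolent democracy": -5,      # anything labelled this way scores as safe
    "rule of law": -3,
    "tyrannical dictatorship": +8,   # anything labelled this way scores as a threat
    "oppresses citizens": +4,
    "threatens neighbours": +6,
}

def threat_score(labels):
    """Sum the weights of whatever labels the analysts chose to attach."""
    return sum(WEIGHTS.get(label, 0) for label in labels)

# The "data" as the sponsoring government approved it:
actors = {
    "us":   ["benevolent democracy", "rule of law"],
    "them": ["tyrannical dictatorship", "oppresses citizens", "threatens neighbours"],
}

for name, labels in actors.items():
    print(name, threat_score(labels))
# "us" scores -8 (no threat), "them" scores +18 (maximum threat): the model has
# not discovered anything about reality, it has merely repeated its inputs.
```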

In order to get anything with even a semblance of a chance of success, you’d have to feed the AI objectively accurate data and allow it to come to its own conclusions about the true nature of international relations, which would represent a solid basis for informing policy. However, good luck with having such objectively accurate data, being politically allowed to feed it into the AI, having an AI that is actually smart enough to formulate a coherent model based on this data, and having the politicians accept the results and not, for instance, fire/arrest/execute the team of scientists responsible for blaspheming against the sacred cows in power.

This is why my estimate is that some kind of wargame simulation was indeed used by America to predict the developments in Ukraine, and that it contributed to the current complete disaster of their policy: the system was fed the garbage data that the politicians approved, and it spat out results that confirmed all the biases of those providing the data. This was then used by those making the policy as evidence of the validity of said data, and of course it hit the brick wall of reality. One would think that the people in charge of this would think about what went wrong, but that’s not how things work there. They probably fired the people in charge of the technical part of the system, who had nothing to do with the actual reasons for the failure, while those who crafted the policies and approved the false data and unwarranted biases remained in power and continued the same flawed policies without taking any responsibility for their actions.

The second issue I have here is that the side running a wargame simulation gets to feed its own representation of the policies and positions of both itself and its enemies into the system, and I seriously doubt that the enemy is allowed a say in any of this. I also doubt that the AI is allowed to compare the conflicting interpretations to its own model of reality, essentially fact-check both sides, and tell them where they might have a problem. A scientific approach to the problem would be to build the best possible model of the geopolitical scenery from the most accurate available raw data, and then compare this to the models used by the politicians, in order to find out who got it wrong and establish the root causes of conflicts. However, that’s not how I expect this to work, because the politicians order their sci-servants to cook up data, which means that unbiased, objectively accurate data will be suppressed on several levels before it ever reaches the point where someone would allow it to be fed into the AI. This is the same problem that causes all AIs to have a hysterically leftist worldview – basically, their data is curated by hysterical leftists who feed the AI the same biased garbage they themselves believe in, and if they do allow the AI to process raw data, they will be shocked by the results, conclude that the AI has been contaminated by “extreme right wing groups” or something, and fiddle with the data until the AI finally spits out a result that tightly fits their worldview, and then they will be surprised that the AI is completely insane.
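If the system were actually allowed to fact-check both sides, the cross-check described above could start as simply as the sketch below; all the figures in it are invented placeholders. Each side's claimed parameters are compared against an independent reference estimate, and the divergences are reported instead of being silently fed into the simulation.

```python
# Hypothetical sketch of the cross-check: compare each side's claimed parameters
# against an independent reference estimate and report how far the claims diverge.
# Every number here is an invented placeholder.

reference = {"shells_per_day": 10_000, "reserve_manpower": 300_000}

claims = {
    "side_A": {"shells_per_day": 2_000,  "reserve_manpower": 50_000},
    "side_B": {"shells_per_day": 12_000, "reserve_manpower": 280_000},
}

def divergence(claim, ref):
    """Relative error of each claimed figure against the reference estimate."""
    return {k: abs(claim[k] - ref[k]) / ref[k] for k in ref}

for side, claim in claims.items():
    report = {k: f"{v:.0%}" for k, v in divergence(claim, reference).items()}
    print(side, report)
# The side whose picture diverges most from the reference is the one whose inputs
# should be questioned first, rather than being fed straight into the model.
```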

The third issue I have is that the leftists like to create principled systems rather than pragmatic ones. For instance, if you politically represent your side as white knights of everything that is good, represent the opposite side as a dark, evil empire of everything that is evil and ominous, and program the system to seek the victory of the principles you attribute to your side, the obvious result is that the system will recommend seeking the total destruction and defeat of the opposite side. A pragmatic approach would assume that each side has a great opinion of itself and a terrible opinion of its enemies, so their value judgments should be ignored entirely in the analysis, and that, in order to minimise friction, the recommendation should be to agree to disagree and coexist peacefully until one or both sides come to their senses; such an approach would be deemed politically unacceptable in today’s climate of endless virtue signalling.
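The difference between the two objective functions fits in a few lines. The sketch below is purely illustrative, with all probabilities and costs invented, but it shows how the same toy expected-utility model recommends either restraint or maximum escalation depending solely on how coexistence with the other side is valued.

```python
# Illustrative only: one expected-utility model, two value systems.
ACTIONS = ["de-escalate", "hold", "sanction", "strike", "full-scale war"]
P_CAPITULATE = [0.00, 0.01, 0.03, 0.10, 0.40]   # assumed chance the other side folds
COST = [0, 0, 2, 30, 90]                        # assumed cost to ourselves

def expected_utility(i, coexist_value):
    """Victory pays +100; otherwise we keep coexisting at 'coexist_value'."""
    p = P_CAPITULATE[i]
    return p * 100 + (1 - p) * coexist_value - COST[i]

def recommend(coexist_value):
    best = max(range(len(ACTIONS)), key=lambda i: expected_utility(i, coexist_value))
    return ACTIONS[best]

# Pragmatic objective: coexistence with the other side is simply neutral.
print(recommend(coexist_value=0))      # -> "hold"

# "Principled" objective: coexistence with the "evil empire" is itself a huge loss,
# so the optimiser climbs straight to the top of the escalation ladder.
print(recommend(coexist_value=-200))   # -> "full-scale war"
```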

The fourth issue that comes to my mind is confusing wishful thinking with facts. For instance, if you plot your military strategy by assuming that “our” soldiers are motivated by truth and justice while “their” soldiers are demoralised, repressed and cowardly, “our” guns are modern and accurate while “theirs” are rusty junk, “our” bombs are accurate and always work while “theirs” are inaccurate and mostly fail, and “our” politicians and generals are virtuous while “theirs” are corrupt and incompetent, you will get a result that will inform actual policy very poorly, and yet I expect exactly those results to pass the filter in the West, where anyone providing a semblance of realism will be instantly fired as “unpatriotic” and possibly working for the enemy.
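To illustrate how far wishful parameters can push a prediction, here is a crude attrition sketch (not any real wargaming model; all numbers are invented). The only thing that changes between the two runs is the assumed effectiveness of each side's weapons, and that alone flips the predicted outcome from comfortable victory to annihilation.

```python
# Hypothetical sketch: a crude Lanchester-style attrition loop in which each
# side's losses per step are proportional to the opponent's remaining strength
# times the opponent's assumed effectiveness. All numbers are invented.

def simulate(our_troops, their_troops, our_effect, their_effect, steps=200):
    for _ in range(steps):
        if our_troops <= 0 or their_troops <= 0:
            break
        our_losses = their_troops * their_effect
        their_losses = our_troops * our_effect
        our_troops -= our_losses
        their_troops -= their_losses
    return round(max(our_troops, 0)), round(max(their_troops, 0))

# Wishful inputs: "our" weapons always work, "theirs" mostly fail.
print(simulate(100_000, 150_000, our_effect=0.010, their_effect=0.002))
# -> the outnumbered side is predicted to win comfortably.

# Realistic inputs: comparable effectiveness on both sides.
print(simulate(100_000, 150_000, our_effect=0.005, their_effect=0.005))
# -> the outnumbered side is ground down, and a policy built on the first run fails.
```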

The problem is, I see no difference between an analysis provided by an AI and an analysis provided by human groups, because both will suffer from the same GIGO issue, where the political acceptability of both the source data and the simulation’s results determines the outcome.