Dangers of AI

There’s been quite a bit of talk recently about the dangers of AI technology – from human jobs being replaced, to Terminator-like robots killing all humans.

My take on this, after having seen some of the AI achievements, is that the name “artificial intelligence” is a misnomer – “artificial stupidity” would be more appropriate. Those things are essentially stupid as fuck, and have some extreme limitations, but they do have the ability to quickly iterate across datasets in order to find a solution, if there is a clear way of punishing failure and rewarding success. That’s basically all they do.

I’ve seen neural networks being trained to win computer games, and the end result is amazing and exceeds human ability, simply because it’s a scenario where there are clear win/loss events that enable the neural networks to be trained.
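The mechanism described above – blindly iterating while rewarding success and punishing failure until a competent policy emerges – can be sketched with tabular Q-learning on a toy “game” (the 1-D track, the reward values and all the parameters here are hypothetical illustrations, not any specific system):

```python
import random

# Toy "game": agent on a 1-D track of 6 cells, starts at 0, wins at cell 5.
# The only feedback is the reward signal: +1 for reaching the goal, 0 otherwise.
N_STATES = 6
ACTIONS = [-1, +1]                  # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]

def step(state, action):
    """Apply an action; the environment rewards only the win event."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

random.seed(0)
for episode in range(500):          # iterate: try, fail, adjust, repeat
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what worked, occasionally explore
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        nxt, r, done = step(s, ACTIONS[a])
        # nudge the value estimate toward reward + discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
        s = nxt

# After training, the greedy policy marches straight toward the goal.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)  # all 1s: "step right" in every cell
```

The point is that no understanding is involved anywhere: the program starts out flailing randomly, and the win/loss signal alone sculpts the behaviour – which is exactly why such systems excel at games and fail wherever success cannot be cleanly scored.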

In essence, yes, those things can replace a significant number of human jobs; everything that has to do with data mining, pattern recognition and analysis, trivial but seemingly complex work such as programming that consists of finding and adapting code snippets from the web, or iterative “art” that consists of modifying and combining generic tropes – that’s all going to be done with AI. Engineering work that would require too many calculations for a human, such as fluid mechanics solutions – turbines, rocket engines and so on – is another excellent case for neural networks.

Unfortunately, military use is among those cases, where it is quite easy to create loitering munitions – basically, drones that hover in the air – that can be sent to scan enemy territory for everything that moves, then recognise targets, identify the priority ones, and crash into them. Ground weapons that recognise human targets and take them down with some kind of a weapon also fit this category, as well as underwater drones that use passive sonar to scan for exactly the kind of ship they want to sink, and then rise from the sea floor and hit it from beneath. This is all trivially easy to do with pattern recognition of the kind that exists today, combined with the kind of hardware that exists today. Imagining killer drones as humanoid Terminators is silly, because such a form would not be efficient. Instead, imagine a quadcopter drone hovering above in scan mode, seeking targets, and then using some kind of a weapon to take them down – a needle with some kind of venom would do. It’s all technically feasible.

The more dangerous thing is a combination of neural networks and totalitarian-minded humans, and by that I mean all kinds of leftists in the West. An AI can data-mine the information sources in order to tag “undesirable” humans, and then this tag would be acted upon by the banks, governments, corporations and so on, basically making it impossible for one to send or receive money if not compliant with the current ideological requirements. This already exists, and it’s why we must look for all the things the governments attack as “money laundering friendly” and adopt them as means of doing financial transactions, because if it’s “money laundering friendly”, it means the government can’t completely control it, and if the government can’t control it, it’s the only way for us to survive totalitarian governments aided by neural networks. Bear in mind that the governments talk about controlling all kinds of criminals and perverts, but what they really mean is you. Targeting universally hated groups is merely a way to get public approval for totalitarian measures that will then be applied universally. What we will probably all end up doing in order to evade fascist governments is transact in crypto tokens, and settle in gold and silver, in some kind of a distributed, encrypted network that will be incredibly difficult to infiltrate or crack.

Basically, the payment and financial systems have been modified to accommodate totalitarian intent for decades already, to the point where now even the common folk understand that something is not right, but they cannot even imagine the danger. If someone restricts your ability to conduct business and purchase goods and services, and connects that to your political attitudes, you can kiss every idea of freedom and democracy goodbye, and that’s exactly what the American “democratic” overlords have been quietly doing, both at home and in their vassal states. Unfortunately, Russia and China are no better, because government power over the populace is just too tempting for any government bastard to resist.

So, basically, I’m not really afraid of AI. I’m afraid of AI being used by evil humans to create a prison for our bodies and minds, and only God can save us from this hell, which is why I think a nuclear war that would decapitate all the governments and destroy the technosphere that gives them infinite power is a lesser evil. The alternative, unfortunately, is much, much worse, because the logical continuation of “business as usual” is being completely controlled by madmen who will cull the population every now and then to “save the planet” or whatever makes them feel good about themselves, and control us to the point where even saying the word “freedom” would put you on some list you don’t want to be on.