I’ve been wondering why LLM fake-AI systems manage to be so effective, since they are essentially glorified autocomplete; basically, a neural-network-based probability engine that determines the most likely next word in a sentence.
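The “most likely next word” mechanism can be sketched with a toy bigram counter – a drastically simplified stand-in for the neural network, using a made-up ten-word corpus, but the principle is the same:

```python
from collections import Counter

# Toy corpus standing in for the training data of a real LLM.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows another.
bigrams = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    # Return the most probable continuation of `word`, by raw frequency.
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

print(next_word("the"))  # "cat" follows "the" twice, beating "mat" and "fish"
```

A real model replaces the frequency table with learned weights and conditions on far more context, but it is still, at bottom, ranking continuations by probability.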
Maybe because most humans are also neural-network-based probability engines, very good at figuring out the next thing they’re expected to say. And then this crossed my mind:
“FEJKA artificial potted plants don’t require a green thumb. Perfect when you have better things to do than water plants and pick up dead leaves. You’ll have everyone fooled because they look so lifelike.”
That’s what most people are. They are fejka. LLM systems merely stumbled upon this fact by accident. A fake artificial intelligence can learn to finish sentences, paragraphs and entire articles in passable ways because that’s what fake human intelligence does – finish sentences in the “correct” way in order to avoid ridicule and punishment. Everybody knows what to say, and all their conversations are formulaic and predictable, to the point where someone figured out how to make a computer system that does the same thing.
It’s not just text. People learn how to take photographs in a formulaic way that gets them acclaim and avoids ridicule. They learn how to have spiritual experiences that will get them acclaim and avoid ridicule, because those experiences are of the exact same kind as everybody else’s – which is what created the idea that all religions have the same origin and goal, that it’s all one thing. It’s because everybody has been copying homework from everybody else. They are all fejka plastic potted plants. It looks like the real thing, but even better, because you don’t have to water it.
Now that I think about it more, the human brain seems to be very good at doing the human-autocomplete thing on autopilot when there’s no soul in the driver’s seat. The corollary is that spiritual awakening is the point where a soul wakes up in the body and actually starts perceiving things, paying attention and controlling actions – “oh fuck, I’m driving a car”. That’s why actual souls can come across as weird compared to a fejka NPC: a fejka knows what it has to say next. An actual soul has to figure it out, and is likely to say the “wrong thing”.
