Intel SNAFU

Regarding the Intel CPU issues, I must say I expected that; I couldn’t tell which manufacturer would have the issue first, but with the arms race of who’ll be the first to make the 0 nm node die that draws a megawatt of power and needs to be cooled with liquid helium, it’s a perfectly logical outcome. If I had to guess, I’d say they made transistors out of too little material at some critical part of the die, and with thermal migration within the material at operational temperatures the transistors basically stopped working properly.

So, basically, the real question is who actually needs a CPU of that power on a single-CPU client machine? We’re talking 24 cores, 32 threads, 5.8 GHz boost clock, 219W max. power at full turbo boost. This is sheer insanity, and it’s obvious that my ridiculous exaggeration about megawatts of power isn’t even that much more ridiculous than the actual specs of that thing. So, who even needs that? I certainly don’t. Gamers? They probably think they do, and they are likely the ones buying it. Developers? Video renderers? Graphics designers? I don’t know. But putting that many cores on one CPU, and clocking them this high, is sheer madness reminiscent of the Pentium IV era, where they clocked them so high, and with such a die layout, that a piece of malware appeared that deliberately iterated along the central path until it overheated so much the die cracked, destroying the CPU.

I’m writing this on a Mac Studio M2 Max, which has more power than I realistically need, even for Lightroom, which is extremely demanding, and it’s idling at 38°C during the summer. It never makes a sound, it never overheats, and it’s absolutely and perfectly stable. I actually had the option of getting the model with twice the power for twice the money, which is an excellent deal, and I didn’t bother, because why would I? At some point, more power becomes a pointless exercise in dick-measuring. Sure, bigger is better, but is it, when it’s already half a meter long and as thick as a fire extinguisher? So this is the obvious consequence – Intel tried to win a dick measuring contest with AMD and made itself a meter long appendage, because bigger is better, and now it turned out it’s not fit for purpose? How surprising.

 

Thoughts about computers

I’ve been thinking about computers lately, for multiple reasons, so I’ll share a few thoughts.

It’s interesting how people tend to have weird prejudice about things based on the label. For instance, “gaming” computers are supposedly not “serious”, you shouldn’t buy that stuff if you’re doing serious work with your computer. You should get a “business” or a “workstation” machine.

That’s such incredible nonsense, because what does “gaming” even mean at this point? It’s basically “workstation” with RGB lights. If a PC or a laptop is “gaming”, it usually means a powerful graphics card, an overbuilt power supply, a high-performance cooling system, and generally high-end components designed to run at 100% load indefinitely. What’s a workstation PC? Well, it has a powerful GPU, an overbuilt power supply, high-performance cooling, and high-spec components designed to run at 100% load indefinitely, only the graphics card has drivers certified for CAD, which means it supports double-precision floating point arithmetic, and it’s designed more “seriously”, which means no RGB and the box looks normal, not like an alien spaceship with glowing vents. It’s also more expensive, in order to milk the “serious” customers for money, which means that if you want the greatest performance for your money, get gaming components, get a normal-looking case, turn off the RGB nonsense, and there you go. The only situation where you would actually want a real workstation machine is if you’re running server workloads and actually need a server CPU. Otherwise, “gaming” translates into “great for photo and video editing and programming”.

It’s interesting that everything that tends to be labelled as “business” tends to be shit. That’s because a computer for “business” is the cheapest possible piece of crap that will still work, so the penny-pinchers who buy equipment for general staff who slave their lives away in cubicles for a meager wage can feel good knowing they spent the least possible amount of money on the “workforce”. That’s never the computer the boss is getting for himself. He’s getting a 16” Macbook Pro. He’s certainly not getting the “business” model. You don’t get a business-grade computer if you’re doing business; you get it if you are considered to be equipment required for doing business, and you need to be cheap.

There’s also the question whether to buy a Mac or a “PC”, as if somehow a Mac is not a PC. Let me write down my own experience. I used to write books on IBM T41 laptops running Linux. The keyboards were great, and the screen had lots of vertical space, being 4:3 ratio, so that worked fine, but they tended to die on me, because I used them on my lap, and I didn’t have air conditioning in the room, so they overheated during the summer; the motherboards would die, so I would just get another used T41 (they were several generations obsolete and dirt cheap at that point), put in my hard drive and continue writing. At some point, after I went through two IBMs in two years, I decided I’d had enough of that shit and got a 13” Macbook Air. Now, that thing was indestructible, a proverbial cockroach that would survive a nuclear war. I had to retire it because it had only 2GB of RAM and became unbearably slow after five or so years of use, and I traded it in for some new piece of hardware, but there’s obviously a reason why so many people buy Macbooks, and it’s not because they are stupid and buy “overpriced junk”. If anything, the old Thinkpads were junk compared to the Macbook. I replaced the Air with a mid-2015 15” Pro, which is also a cockroach – I’m using it to write this, it’s 8 years old, I more or less retired it a few years ago and it still works just fine. The screen, touchpad and keyboard are still great, but it’s significantly slower than the modern machines, so I wouldn’t do serious heavy lifting on it; for all the normal tasks it’s just fine. The only interventions I did on it were to replace a bloated battery when it was 5 years old, and to replace the 256GB SSD with a 1TB Samsung. So, my answer to “why would you buy a Mac” is “because I want it to work reliably and well until it’s a million generations obsolete and I want to replace it anyway”. It doesn’t just die, the user interface is great, and it’s usually among the fastest machines you can get, and considering how well it works and how long it lasts, it’s dirt cheap. The only exceptions are the generations with the touchbar and butterfly keyboard. Those were shit, and everybody who got one regrets the decision.

It’s not that I have some general recommendation, such as “just get a Mac” or “just get a gaming machine”. In fact, it is my experience that computers today are so good you really have many good options, but only if you avoid the “economy” and “business” stuff, which is what junk made of obsolete components, sold to businesses at clearance prices, is called.

Democratic technology

I have a weak spot for “democratic technology” – meaning that you can be a kid with very little or no money, and still be able to buy it and use it to learn and start getting your initial experience making money. As a teenager, I had posters on my bedroom wall of the HP Integral PC, the Compaq Portable 386, the IBM PS/2 and an HP 28c calculator, when other boys had posters of cars, girls and football stars; you can see where my priorities were. 🙂

I still have a weak spot for good quality pencils, calculators and computers, into which I projected almost magical qualities of compensating for my limitations. The irony is, I ended up in a place where I do almost all the heavy lifting in my head, and use computers as glorified typewriters, but I digress. 🙂

So, what’s the “democratic technology”? It’s basically the stuff you can actually buy and do all the work you would otherwise do on the hardware you dream of, but can’t afford.

Today, democratic technology is a cheap Xiaomi smartphone, a desktop computer you built from the cheapest new or used components, running unlicensed Windows or Linux, or a laptop along similar lines, all bought with pocket money, allowing you to access stuff on the Internet that allows you to learn. Interestingly, it’s very rarely the stuff that’s designed and marketed as “democratic”, such as a Raspberry Pi. I would actually not recommend that as a computer for kids, because it’s seriously underpowered and not inexpensive enough to be worth the effort. You can actually get a used i5 laptop for on the order of 100 EUR, which would be greatly preferable. This would be something along the lines of a ThinkPad X240 with an i5-4300u, which would run either Linux or Windows, and you can install an SSD and add more RAM if required. Such a machine could be used to surf the web, learn Python, PHP or C, and basically get you started in a position where you are very low on money, and very high on motivation. Interestingly, laptops seem to be a cheaper solution than desktops, when you add everything up.

Similar examples can be found in other areas as well; photography, for instance. You can buy a ten-year-old digital SLR with a lens or two, get cheap macro extension tubes from eBay, use some free raw converter such as RawTherapee, and that will get you started. Heck, you can use a smartphone to learn composition if you can’t afford a proper camera, but I’ve seen things such as a Canon 30D with a kit lens for on the order of 50 EUR, and that would be a very good way to get started. What can you do with an 8MP camera and a kit lens? If you can add a cheap tripod, you can do this:

With only a smartphone, you can do this:

Sure, I wouldn’t attempt large magnifications from phone images, but we’re talking about learning here; in that phase, you could take excellent equipment and produce shit, because you don’t yet know how to pick light, don’t understand dynamic range, don’t know how to compose, or even how to hold the camera still. A phone will do for composition, colour and dynamic range; an old dSLR with a tripod will allow you to learn everything else.

It’s not my field of expertise, but with a piece of paper and colour pencils you can learn how to draw, and then use a cheap flatbed scanner to digitise your drawings and use them as illustrations on websites you design, to give your work a unique look. Or you can learn how to draw in some digital tool, such as Inkscape.

Sure, you need to compensate for technical disadvantages with skill and talent, but the “democratic” part of my point is that you don’t “pay to win”; people usually get the fancy gear only after they got rich using the basic stuff everybody has, or can get. If you have something meaningful to say, you don’t need a Macbook Air to write it down; any computer will do. Heck, a smartphone with a bluetooth keyboard will allow you to write books and articles if you don’t have anything else, although I wouldn’t recommend it if you have options. However, once you’ve been doing that for long enough, you’ll probably start healing your frustrations caused by inadequate gear the way I’m doing. 🙂 Sure, I could do it on a 386. Been there, done that, didn’t really get a t-shirt, but I did get trauma induced by having to delete the unnecessary multimedia files such as moricons.dll and *.wav from a Win3.11 installation in order to be able to fit my code builds on an 85MB HDD, and edit rich text files of the Ventura Publisher in a DOS text editor because the machine simply didn’t have enough RAM or CPU to edit the tables in the GEM GUI. Sure, it can be done, and you can get started and dig yourself out of the pit with very few resources, compensating for the drawbacks of your tools with some ingenuity. However, fuck me if I’ll do it anymore, now that I have the money. 🙂

The Mac Pro problem

Apple seems to have painted themselves into a corner by being perfectly reasonable. You see, they recently updated the Mac Pro tower, and it turned out to be the Mac Studio Ultra in a bigger box. The PCIe slots inside are barely of any use since GPUs are not supported, and the machine is inherently un-expandable; you can’t add more RAM, for instance. So, while this machine is extremely powerful, it’s by no means an open-ended system you can expand to fit your extreme workflow – trying to do fluid dynamics simulations, for instance. So, basically, the biggest Mac is great at doing normal Mac things, such as movies and audio, but it doesn’t extend into high-end scientific or engineering workflows. Is that a problem? I don’t think so, and let me explain why.

What Apple did right was design a range of machines that covers their professional and prosumer user base; people who make videos in Final Cut, make music in Logic Pro, or edit photos in Lightroom and Photoshop. They made an excellent laptop range that covers everything from writing articles and checking Twitter in Starbucks, to the most complex software design, or audio, video and photography work. Then they made a desktop range that covers all of that and extends deeply into professional studio use, and I don’t think they left even a fraction of their traditional or potential user base uncovered. Heck, even I got a desktop Mac, which I had resisted so far because everything they had either came built into a display, overheated under load, or cost extreme money while offering no benefits over what I had. The drawback of the current Mac lineup is that it doesn’t extend endlessly, and some, like Linus, will whine over that, while I might offer a more constructive approach.

You see, there’s only so much you can expect a desktop machine to do, and the M2 Ultra actually exceeds this expectation by far. In order to exceed this ceiling, Apple would have to completely redesign a system that’s perfectly good for 99.9999% of their user base, in order to cater to the affectations of those who actually need a supercomputing cluster but aren’t technologically savvy enough to realize it. For Apple to try to artificially meet this almost non-existent demand, they would have to create a completely new architecture centered around a passive-bus motherboard and Max/Ultra daughterboard cards that connect over this bus the way two Max chips are edge-connected to form the Ultra; in essence, combine the UltraFusion interconnect and PCIe technologies and make it work. The engineering overhead would be immense, and they would only sell a handful of those machines, because demand at such a high end of computational needs is very slim.

Alternatively, those who actually want to perform complex computations should make their software cluster-friendly (basically, dump workload packages onto a NAS stack, and have the cluster nodes run workload processors that pop packages from the workload stack, process them and push them onto a result stack, and when that is done, have some script go through the results, check their integrity and assemble it all into the end product). Then you could have multiple Mac Mini or Studio devices connected to the LAN, processing the work you give them, and you could extend this infinitely, and the demands on an individual node would be such that you could optimize it for cost-effectiveness by buying the base Mac Mini or Mini Pro devices by the truckload. This kind of work is usually done on Linux nodes, but a modern Mac is so power-efficient that it has genuine advantages for cluster applications.
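The pop/process/push loop I just described is simple enough to sketch in a few lines of Python. This is a minimal sketch under stated assumptions: the shared directories stand in for the NAS work and result stacks, and `process_package` is a hypothetical placeholder for the real per-node computation; a production setup would add integrity checks, retries and the final assembly script.

```python
import json
from pathlib import Path

def process_package(package: dict) -> dict:
    # Hypothetical placeholder for the real per-node computation.
    return {"id": package["id"], "result": sum(package["data"])}

def worker_loop(work_dir: Path, result_dir: Path) -> int:
    """Pop job packages from the shared work directory, process them,
    and push the results; returns how many packages this node handled."""
    done = 0
    for job_file in sorted(work_dir.glob("*.json")):
        claimed = job_file.with_suffix(".claimed")
        try:
            # Claim the package by renaming it; if another node
            # claimed it first, the rename fails and we move on.
            job_file.rename(claimed)
        except OSError:
            continue
        package = json.loads(claimed.read_text())
        result = process_package(package)
        (result_dir / f"{package['id']}.json").write_text(json.dumps(result))
        claimed.unlink()
        done += 1
    return done
```

Each node on the LAN just runs `worker_loop` against the NAS mount in a loop; the assembly script then walks the result directory, verifies the pieces and merges them into the end product. Adding capacity means plugging in another Mac Mini, nothing more.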

So, basically, the Mac lineup is truly powerful enough, and it allows for open-ended design for those who need endless computational power; it’s just not open-ended inside a single chassis, which is only a problem if you have unrealistic expectations, or a workflow that is poorly designed, by not breaking down the job into manageable components. Honestly, I could write such a cluster setup myself in less time than it would take me to start whining about lack of power and tasks taking too long.

Why Apple and not Linux?

I am aware of all the problems with America weaponizing IT by spying, withholding access to technology to those who defy them, and so on. So, why did I not migrate away from American technology, for instance by migrating everything to Linux?

I considered it, but eventually concluded that it wouldn’t solve anything, and would just make things difficult for me in the time before everything collapses anyway.

Let me show you my line of thinking.

I live in Croatia, which is an American vassal state. The government, police, military, courts and news/media are tightly controlled by America. In a best case scenario, I could have a computer that is not controlled by America, but the computer is the least of my problems if they really want to deal with me, as the example of Andrew Tate and his brother Tristan illustrates. They, too, lived in an American vassal state and thought they could be independent, sovereign individuals, and they were proven wrong quite easily. I am perfectly aware that the Americans can at any time pay or pressure any number of local corrupt bastards to plant fake evidence, purchase fake witnesses and lock me up indefinitely on fake charges, if they really considered me a threat. They don’t, because I don’t have a significant audience, but they could. Migrating my computers and phone to Linux would not solve any of those issues. If doing it would actually contribute to my safety from any kind of oppression, I would have done it already.

The second argument is that having a technological setup that depends on being able to log in to a cloud service in America might make it all defunct if something happened to either America or the Internet. If that happened, I guess I would have bigger things to deal with than the computer, but I do, in fact, always have alternatives. I have an old laptop with Linux/Windows dual boot, everything on it is up to date, and I can open it at any time, boot into Linux and, in case Microsoft and Apple do some kind of a lock-out, I can still go online and access everything. It’s not like I would have to learn Linux from scratch or anything. I also have a spare Xiaomi smartphone with a spare SIM card, in case my primary phone ceases to function. My main worry in case of a major Apple/Microsoft denial of service is inability to contact people or perform basic tasks, such as the access to Internet banking, mail, ssh, web and so on.

The third argument is that the operating system is actually the least of my concerns, as all hardware seems to be designed with back doors in mind, such as the Intel Management Engine (ME), a separate embedded processor (originally an ARC core, x86-based in newer generations) present in every modern Intel chipset, which listens on the Ethernet port and seems to be designed to wake the computer up on remote command, and perform tasks below anything the user can see or control. Above that is the UEFI/BIOS, which is also proprietary technology that does who knows what, and only then do we get to the operating system. The entire palimpsest of technology is so complicated and convoluted that I freely admit inability to secure my computer infrastructure in any meaningful way, because the American back doors are installed in every aspect of the infrastructure. It’s as if the primary purpose of it all was to extend American power and influence, and everything else, such as utility, was a secondary concern, or merely a way to market it. The way they went nuts when Huawei started to out-compete them in selling infrastructure was telling; basically, they couldn’t order the Chinese to install all the back doors and spying tools they install through the American/Western/vassal companies, and every Huawei infrastructural device meant a loss of control for America.

So, I could dedicate quite a bit of my personal time and effort to attempts at securing my personal IT sphere, and it would all probably be for naught, so I shrugged and decided not to even try – instead, I decided to secure my money by keeping it almost completely out of the state/bank system, keeping connected just enough to make it easy for me to pay the bills and purchase goods and services in the system before it all fails.

You see, there are several levels of true sovereignty. The first is the level of physical power and invulnerability. The second level is money and influence. The third level is all the unimportant stuff people fuss about. I don’t have anything to protect me against the first-level threats. I have enough money on the second level to make me a hard target; close my bank account and I’ll laugh. Try to cancel my credit cards and you’ll find out I don’t have any. Try to make me default on my loans and you’ll see I don’t have loans; I do everything in cash. I do have debit cards because I have to pay for the online stuff, and losing those would be a problem. I do use communal infrastructure, like water and electricity, and I’m sensitive to interruptions there. I also buy food, so I’m sensitive to interruptions in supply.

As you can see, I thought things through and decided that computers are a luxury that works in the present-day environment, which is quite fragile, and it might all blow up at some point, in which case I am prepared to deal with all kinds of contingencies. But migrating to Linux? Yeah, if, in some unlikely case, I need computers in a world without the Internet and American services, I am sure I will be able to patch something together well enough to serve the purpose, but I am more concerned with water, electricity, food, antibiotics and so on. Essentially, it’s a non-issue.