Some technical stuff

I’ve been doing some infrastructure work on the servers since yesterday: essentially a “traffic light” that reports the online status of my services, plus the infrastructure for a simultaneous graceful shutdown of the home servers attached to the UPS.
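The mechanics of the traffic light are nothing fancy: whether the home side pushes a heartbeat or the public side pulls diagnostics, the danijel.org server ends up with a timestamp of the last successful contact, and a small script turns the age of that timestamp into a colour. I won’t post the real thing, but it amounts to something like this sketch, where the script name and paths are made up for illustration:

    #!/bin/sh
    # update-status.sh (hypothetical name): green if the home box has been
    # heard from within the last 15 minutes, red otherwise
    hb=/var/www/status/home.heartbeat
    age=$(( $(date +%s) - $(stat -c %Y "$hb" 2>/dev/null || echo 0) ))
    if [ "$age" -lt 900 ]; then
        echo green
    else
        echo red
    fi > /var/www/status/home.colour

The website then just reads the colour file and draws the corresponding light.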

This is what it looks like on the danijel.org site when the home copy is down due to a simulated power outage (unplugging the UPS from the grid). When I power it back up, it takes 10-15 minutes for all the services to refresh and come back online. It’s not instantaneous, because I had to compromise between that and wasting resources on crontab jobs that run far more frequently than normal daily use requires. Essentially, on power-up the servers are up within half a minute, the ADSL router takes a few minutes to get online, and then every ten minutes the dynamic DNS IP is refreshed, which is the key piece that makes the local server visible on the Internet. Then it’s another five minutes before the danijel.org server refreshes the diagnostic data and reports the updated status.

Detection of a power outage isn’t instantaneous either: on power loss, the UPS waits five minutes for the power to come back, and then sends a broadcast. Within two minutes everything is powered down, and within another five minutes the online server refreshes the status. So that’s around 15 minutes as well.
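For the curious, the cadence described above boils down to a couple of crontab entries; the script names below are placeholders, the timing is the part that matters:

    # home server: refresh the dynamic DNS record every ten minutes
    */10 * * * * /usr/local/bin/ddns-update.sh

    # danijel.org: pull fresh diagnostics and re-evaluate the traffic light
    # every five minutes
    */5 * * * * /usr/local/bin/refresh-diagnostics.sh

Running these more often would shave a few minutes off the recovery time, but at the cost of the servers constantly doing pointless work, which is the compromise mentioned above.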

Do I have some particular emergency in mind? Not really. It’s just that electricity where I live is less than reliable, and every now and then there’s a power failure that used to force me to shut the servers down manually, to protect the SSDs from a potentially fatal sudden power loss during a write. Only one machine can be connected to the UPS via USB, and that one shuts down automatically while the others are left in a pickle. So I eventually got around to configuring everything to happen automatically, even while I’m asleep, and while I was at it I wrote a monitoring system for the website. It was showing all kinds of fake outages during the testing phase (no, I wasn’t having some kind of massive failure), but I’m happy with how it runs now, so I’ll consider it done. The monitoring is partly for me, so I can see that the power is down when I’m not home, and partly to let you know if I’m having a power outage that inhibits communication.
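For anyone who wants to replicate the shutdown part: I won’t claim this is exactly my setup, but the standard way to get multiple machines to react to one UPS is Network UPS Tools (NUT). The box with the USB cable runs the NUT server and publishes the battery state over the LAN, and every other machine runs upsmon as a network client that shuts itself down when the UPS is on battery and the charge runs low. Roughly, with the UPS name, address and credentials as placeholders:

    # /etc/nut/nut.conf on each client machine
    MODE=netclient

    # /etc/nut/upsmon.conf on each client machine
    MONITOR myups@192.168.1.10 1 monuser secretpass secondary
    SHUTDOWNCMD "/sbin/shutdown -h +0"

apcupsd can do the same thing in its network mode, if the UPS happens to be an APC.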

The danijel.homeip.net website is a copy of the main site, updated hourly. It’s designed so that I can stop the hourly updates in an emergency, at which point it instantly becomes the main website, where both I and the forum members can post. Essentially, it’s a BBS hosted at my home, with the purpose of maintaining communications in case the main site dies. Since I can’t imagine many scenarios where the main site dies and the DDNS service keeps working, it’s probably a silly idea, but I like having backups to the point where my backups have backups.
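The switchover itself can be as low-tech as it sounds. One way to get the behaviour described above (not necessarily the way I did it) is an hourly sync job that refuses to run while a “frozen” flag file exists, so promoting the copy to the primary role is just a matter of creating that file:

    # crontab on the home server; the flag path and script name are made up
    0 * * * * [ ! -f /etc/mirror-frozen ] && /usr/local/bin/sync-from-main.sh

    # sync-from-main.sh would be little more than
    rsync -a --delete danijel.org:/var/www/forum/ /var/www/forum/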

Also, I’m under all sorts of pressure that make it impossible for me to do anything really sophisticated, so I might as well at least keep my UNIX/coding skills sharp. 🙂

Linux again

I recently did some experiments with “old” hardware: a Skylake i5-6500T mini PC running Debian 12 with KDE Plasma, configured so that I can use it either as a stand-in replacement for my home server, or as a fully set up Linux desktop for myself in case I ever need one; say, if both Microsoft and Apple manage to make their operating systems non-functional at the same time for some reason. I intentionally left the machine with 8GB of RAM just to see if that’s enough, and it seems to be more than enough for the server role, where it uses 1.4GB, and barely sufficient for the desktop, where it uses up almost all the RAM when I run everything I normally run. It’s all quite snappy, but I did notice one thing: when I play YouTube videos in full screen, or in one of the high-bandwidth modes such as 1080p@60, it drops frames like crazy and is about as smooth as a country road in Siberia during the melt season.

My first guess was that the Skylake iGPU doesn’t support the modern codecs YouTube uses for those high-bandwidth modes, but the more I thought about it, the more it looked like a Linux issue. I didn’t feel like installing Windows on that machine just to test the hypothesis, so I took out the second device I recently got on eBay, a ThinkPad T14 with an i5-10310U, a Comet Lake CPU with support for all the modern codecs. I played the same 4K test video on Win11, with perfect results, zero dropped frames. Then I rebooted into Ubuntu 24.04, ran the same test, and it drops frames almost as badly as the Skylake machine.

I did all the recommended things on Linux: tried different browsers, toggled GPU acceleration on and off, and the only thing I managed to do was make it behave worse, not better.
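If you want to check the same thing on your own machine, the quick way to see whether hardware decoding is available at all, and whether the browser is actually using it, is something like this (assuming an Intel iGPU):

    # what the iGPU can decode in hardware (package: vainfo / libva-utils)
    vainfo | grep -iE 'vp9|av1|hevc|h264'

    # watch the GPU while the video plays; if the Video engine sits at 0%,
    # the browser is decoding in software (package: intel-gpu-tools)
    sudo intel_gpu_top

    # Firefox: about:config -> media.ffmpeg.vaapi.enabled, then verify under
    # about:support, Media section; Chromium needs its own VA-API flags,
    # which keep changing between versions.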

Switch to Linux, they say; you’ll solve all your Microsoft problems, they say. Well, it’s true: you’ll solve your Microsoft problems, and instead of Microsoft you’ll have problems caused by thousands of pimply masturbators with attitude issues who can’t agree on the colour of shit, which is why there are hundreds of Linux distros and they all have the same issues, because polishing the GPU drivers and the window manager is hard. But the important thing is that the Linux community is getting rid of “nazis” who think there are only two genders, and at the same time getting rid of Russian developers because “stand with Ukraine”.

Yeah. The frustrating thing about Linux is that so many things work well, and then you run into something important like this. Maybe Huawei will rework Linux into something that actually works well, and give it wider hardware support and localisation, doing for the Linux desktop what Android did for mobile. Maybe. But then America will block it, because they won’t be able to install their spyware.

PC lineage

I was watching some YouTube videos about old computers and thinking: which ones are the predecessors of our current machines, and which are merely extinct technology, blind alleys that led nowhere?

It’s an interesting question. I used to assume that the home computers of the 1980s were the predecessors of current machines, but then I saw someone work on an old minicomputer running UNIX, a PDP-11 or something, and that thing felt instantly familiar, unlike the ZX Spectrum, Apple II or Commodore 64, which feel nothing like what we have today. Is it possible that I had it backwards? When I look at the first Macintosh, it feels very much like the user interface we use today, but the Macintosh was a technological demonstration that didn’t actually do anything useful yet, because the hardware was too weak. And where did the Macintosh come from? Lisa, of course. And Lisa was the attempt to make the Xerox Alto streamlined and commercially viable. All three were failures; the idea was good, but the technology wasn’t there yet. The first computers that feel exactly like what we are using now were the graphical workstations from Silicon Graphics and Sun, because they were basically minicomputers with a graphical console and a 3D rendering engine.

It’s almost as if home computers were a parallel branch of technology, related more to Atari arcade machines than to the minicomputers and mainframes of the day: attempts to do something useful with inferior but cheap technology. That line evolved from the Altair to the Apple II to the IBM PC, and the PC evolved from the 8088 to the 80286 to the 80386, at which point the technology became viable enough for Microsoft to copy the Macintosh interface and make it into a mass-market OS; Windows then evolved from 3.0 to 95 to 98… and then this entire technological blind alley went extinct, because the technology became advanced enough to erase the difference between UNIX graphical workstations and personal computers. Microsoft started running a kernel with minicomputer lineage on the PC, called NT, designed by ex-DEC people; at version 4 it became viable competition for Windows 95, Windows 2000 ran the NT kernel, and the 95/98/ME line was retired completely, ending the playground phase of PC technology and turning everything into a graphical workstation. In parallel, Steve Jobs, exiled from Apple, was tinkering with his NeXT graphical workstation project, which became quite good but didn’t sell, and when Apple begged him to come back and save them from themselves, he brought the NeXTSTEP OS along, and it became OS X on the new generation of Macintosh computers. So, basically, the PC architecture was in its infancy, playing with cheap but inferior hardware, until hardware prices came down so far that the stuff once reserved for expensive graphical workstations became affordable; the graphical workstation stopped being a niche thing, went mainstream, and drove the personal computers as they used to be into extinction.

Just think about it: today’s computer has a 2D/3D graphics accelerator, either integrated on the CPU or dedicated; it runs UNIX (macOS and Linux) or something very similar (Windows, with its NT kernel); it’s a multi-user, seamlessly multitasking system; and it all runs on hardware that’s been integrated so tightly it fits in a phone.

So, the actual evolution of personal computers goes from an IBM mainframe to a DEC minicomputer to a UNIX graphical workstation to Windows NT 4 and Mac OS X, to iPhone and Android.

The home computer evolution goes from the Altair 8800 to the Apple I and II to the IBM PC, then from MS-DOS to Windows 3.0, 95, 98, ME… and goes extinct. The attempt to make a personal computer with a graphical user interface goes from the Xerox Alto to the Apple Lisa to the Macintosh, then to the Macintosh II, with the OS upgraded through version 9… and it too goes extinct, replaced by a successor to NeXT repackaged as the new generation of Macintosh, with an OS built around UNIX. Then at some point the technology got so miniaturised that we now have phones running UNIX, a mainframe/minicomputer OS, directly descended from the graphical workstations.

Which is why you could take an SGI Indigo2 workstation today and recognise it as a normal computer, slow but functional, while the first IBM PC or Apple II would feel like absolutely nothing you are accustomed to. That’s because your PC isn’t descended from the IBM PC; it’s descended from a mainframe that mimicked the general look of a PC and learned to be backwards compatible with one.

Tariffs

Trump introduced those super-high tariffs on every country America runs a trade deficit with, which essentially means every country.

As a result, those countries are going to introduce reciprocal tariffs on America, which translates into a trade war.

What’s going to happen next is a global disentanglement of supply chains based on perceived toxicity. More immediately, some things are going to get more expensive. Expecting that, people will buy up the existing stock quickly, and manufacturers will halt supply until the pricing is figured out. That means both scarcity and high prices.

So, obviously, over the weekend I looked at the things I’ll have to buy in the next six months to a year, overlapped that with the things likely to get more expensive within that timeframe, and as a result bought two Apple laptops. Biljana needs a replacement for her Intel 16″ MacBook Pro from 2019, so she’s getting a 16″ M4 Max MacBook Pro. My 13″ M1 MacBook Air is also due for replacement, because I broke the Tab key and my eyes aren’t what they used to be, so I got myself the 15″ M4 Air, this time with 24GB of RAM, because 8GB was limiting. Essentially, I just flushed my shopping list for the year, because I see no benefit in waiting.

So, yeah, my prognosis for this is that the economy will go down, prices will go up, availability of things will go down, and the general standard of living will be degraded across the West. Also, I expect wars to get much worse, and quickly. Buying laptops is not what you would normally do in those conditions, but I’d rather replace failing hardware now when it’s merely preventative maintenance, than later when it might be a serious problem.

Linux (in)security

This just came out: a report of a remote code execution vulnerability affecting Linux systems, rated 9.9/10 in severity.

Basically, a 9.9/10 severity rating is a nightmare. RCE means people can execute code on your machine remotely, and 9.9/10 probably means with root permissions. This is as bad as it gets. Even worse, the security analyst reporting it says the developers were not interested in fixing it and instead spent their time explaining why their code is great and he’s stupid, which is absolutely typical for Linux people.
Canonical and Red Hat confirm the vulnerability and its severity rating.
So, when Linux people tell you Linux is better than Windows and Mac and everybody should switch to it, just keep in mind that an open source project was just caught with its pants down, having shipped a 9.9/10 severity remote code execution bug FOR A DECADE without anyone noticing until now.

Edit: it turned out not to be super terrible. The vulnerability is in CUPS (specifically in cups-browsed), and the machine needs to be exposed to the Internet without a firewall for the attack to work, which is not a normal condition; however, the CUPS code has more holes than Emmentaler cheese, and uninstalling cups-browsed is recommended.
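If you want to check your own machine, something along these lines will tell you whether cups-browsed is running and get rid of it (Debian/Ubuntu commands; other distros have their equivalents):

    # is cups-browsed running, and is anything listening on UDP port 631?
    systemctl status cups-browsed
    sudo ss -ulpn | grep 631

    # if you don't need automatic discovery of network printers, disable and remove it
    sudo systemctl disable --now cups-browsed
    sudo apt remove cups-browsed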