Linux desktop

I test the Linux desktop fairly regularly because I want to retain a level of proficiency with it, which would certainly degrade if I only worked with my server installations. My usual approach, installing a Linux virtual machine, isn’t all that useful for this, because then I only get to see how Linux handles virtual hardware, not the actual iron, so I put Ubuntu in a dual-boot configuration on my Thinkpad T14 gen1 (i5-10310U). Honestly, I initially didn’t feel like doing it, because Win11 worked great on that machine, but the problem with Win11 is that it always works great, until they force something unacceptable down your throat, like AI that runs in the background, recording everything you do and analyzing it against patterns provided by the American intelligence agencies, pretending it’s looking for child porn while it’s in fact mapping a possible insurgency to be eliminated once the people in charge decide to dispense with the fig leaf of democracy. A Win11 machine constantly runs too hot, meaning it’s running background processes I know nothing about, though I suspect it’s “indexing files”. I don’t know what back doors and such they put into Linux, but I suspect they can’t be very extensive, and can’t be as nefarious as what they can do on a closed system where nobody is really able to look into the code; background tasks that cook the CPU like that would surely draw immediate attention on Linux.

So, I keep Linux as an option in case Windows and Mac OS become something I can no longer tolerate, and for it to be an option I have to occasionally work with it in earnest, which in this case meant putting it on a laptop and using it regularly for weeks; not as my main computer, of course, but regularly enough to see what works and what’s broken, and something is always broken in Linux.

For instance, the first thing I noticed when I installed Ubuntu LTS (noble) on the Thinkpad was that touchpad gestures don’t work. They used to work on previous versions, but they were disabled on the LTS, seemingly because the feature is unreliable. I eventually upgraded to the current non-LTS version, plucky, and found out that this is indeed the case: yes, gestures work, and yes, they are unreliable. The whole thing is less reliable and smooth than the Win11 touchpad support, but the worst part is that gestures stop working after the laptop wakes from a long sleep. I found a workaround that seems to solve the problem: create /lib/systemd/system-sleep/touchpad and put this in it:

#!/bin/sh
# systemd runs everything in /lib/systemd/system-sleep/ with $1 set to "pre" or "post"
# around suspend; on "post" (resume), reload the touchpad drivers.
case "$1" in
  post)
    /sbin/rmmod i2c_hid && /sbin/modprobe i2c_hid
    /sbin/rmmod psmouse && /sbin/modprobe psmouse
    ;;
esac

chmod +x the script, and voilà, you can tap to click after the machine wakes from suspend.
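For reference, putting the hook in place boils down to something like this (any editor works instead of tee, which is just one way to write a root-owned file; the last two lines are the same fix applied by hand, for when gestures stop working mid-session):

sudo tee /lib/systemd/system-sleep/touchpad       # paste the script above, finish with Ctrl-D
sudo chmod +x /lib/systemd/system-sleep/touchpad
sudo rmmod i2c_hid && sudo modprobe i2c_hid       # manual version of the "post" branch
sudo rmmod psmouse && sudo modprobe psmouse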

Eat your heart out Windows and Mac sheeple, you wish your touchpad gestures worked after waking from suspend. Oh wait…

Other than this stupid bullshit, the OS is fine. I’m running my usual desktop applications, apart from msecure, which stubbornly refuses to support Linux, probably because there’s no money in it, and I don’t blame them, because Linux users make it a point of ideology not to pay for software. Chrome for browsing, Thunderbird for mail, Telegram and Element for chat, LibreOffice for documents and spreadsheets, KeePassXC for managing passwords, and KRDC for remote desktop connections to my Win10 home server, because Remmina, the default and recommended application, was so incredibly broken it didn’t do anything at all. I managed to set up everything I normally use for non-photographic purposes, and apart from one instance where applications kept crashing for no apparent reason, which a reboot resolved, I’ve been using Linux on this machine for about two weeks and it’s fine. No, it’s not “faster than Windows”; it doesn’t seem to be any faster than Win11, which is incredibly smooth on this hardware, probably because I put 32GB of RAM in the machine for shits and giggles.

Why am I using Ubuntu? First of all, I always use Debian-based distros, simply because I know where everything is and I don’t feel like wasting time learning equivalent but different file/folder placements, daemon restart methods, and package manager parameters and quirks. Second, Ubuntu usually has better hardware support and is more polished than Debian, which doesn’t matter on a desktop machine, but a laptop has all sorts of integrated hardware which just works on Ubuntu, and which kind of doesn’t on Debian, or needs setup time I’d rather not waste. I know, Linux people hate Ubuntu for all sorts of reasons, but that’s because they would hate anything that became mainstream enough; they want to think of themselves as edgy or some other bullshit, and they are too socially inept to actually do something worthwhile, so they install Arch, Slackware or BSD and pretend they are different, special and not NPCs. It’s like Android phone users who think they are advanced because they can tweak their phone, not realizing that people with actual lives don’t have time for this crap.

Another reason I use Debian-based distros is that I tried a dozen other distros earlier and they were basically all the same: boot manager, kernel, standard infrastructure, window manager, eye candy. If something doesn’t work, it usually doesn’t work on any of them. A distro won’t magically make Lightroom work on Linux. If some driver is shit, it will be shit everywhere. So I stick with the most mainstream distros, where work hours are actually invested in making the experience polished enough for actual use, and that’s that. If a distro is intentionally hard to install and use at a time when other distros are easy to install and use, and it offers no actual advantages, I dismiss it instantly. I don’t need computers to make things unnecessarily hard and challenging just to create an artificial sense of achievement. If I wanted that, I’d go out and mow the lawn at 35°C and get heat stroke. I want Linux to be efficient and elegant, the way my Mac is efficient and elegant. I don’t want to deal with some stupid clusterfuck that’s there just because someone wants to be different.

Yes, my annoyance with Linux bullshit is obvious. However, I’ve been running it on multiple machines constantly over the years, decades in fact, and I would find Windows unusable without either WSL or a Linux VM of some other kind; and if Mac OS becomes too locked down, and Windows becomes too dangerous to use due to all the privacy intrusions, I need a plan B. Well, other than the touchpad support (now apparently resolved), the lack of commercial software and some instability, I find it quite usable on my Thinkpad.

PS. There seems to be another variable in this suspend thing: in the UEFI/BIOS on the Thinkpad there’s an option to choose a “Windows 10” or “Linux” sleep mode, so that’s apparently a thing, and it was set to Windows 10. I’ve set it to Linux now to see if it helps, because apparently short sleep isn’t the problem, but long sleep is. The BIOS doesn’t spell out what the modes actually mean; presumably “Windows 10” is the modern-standby (s2idle) kind of sleep and “Linux” is the classic S3 suspend that powers down far more of the hardware, but since it’s not specific I can only guess, and run that script manually when gestures stop working. But yeah, that’s one of the things I hate about Linux. Things that work great everywhere else either don’t work, or just randomly glitch, and then you have to get into it far more than you ever intended. If the touchpad on a MacBook glitched like that, it would be a major international scandal. On Linux, running on a Thinkpad, which is usually the best supported platform because the Linux people love it, it’s just something they turn off in the LTS version because it’s unstable and nobody really gives enough of a shit to fix it.
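For what it’s worth, there’s a generic way to see which suspend variant the kernel will actually use, instead of guessing from the BIOS wording (nothing Thinkpad-specific here; the second line only makes sense if the firmware offers both modes):

cat /sys/power/mem_sleep                     # the bracketed entry is the active one, e.g. [s2idle] deep
echo deep | sudo tee /sys/power/mem_sleep    # force classic S3 ("deep") until the next reboot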

Simple solutions

I had a weird IT problem that took me a long time to figure out, because it was so elusive and hard to reproduce. The NUC that used to upload the radiation data was acting up; it would just freeze for some reason. The first thing I did was reinstall Windows 11. Then I installed Windows 10. Then recently it actually got worse; it would stop refreshing data and I would come down to find it stuck at max fan speed and hot, probably at 100% CPU for some reason, showing no image on the screen and not reacting to the keyboard. I concluded it was probably fucked on a hardware level and put an HP mini PC there, with Windows 10. That didn’t fix anything, because it would stop refreshing data and I would come down to find the Radiascan software stuck.

It turned out it wasn’t the software; the device itself was getting disconnected for some reason. I first suspected some power saving feature and went through everything in both Windows and UEFI; after each modification I had to leave it running to see whether it would hang, and it invariably did, at intervals ranging from almost immediately to almost a day. As you can imagine, testing that takes a lot of time; a day per tweak, basically. Eventually I guessed the device was drawing too much power from the USB port while charging its batteries, which overheats the USB controller or something and triggers a disconnect, so I tried putting a powered USB hub between the computer and the device. That didn’t do anything, but I felt I was on to something, and then I remembered seeing that the USB cable connecting the device was frayed to the point where I could see the wires inside, and thought it can’t be that, because it connects and reads data, right…? Right? I managed to find another mini USB cable somewhere, changed it, and it solved the problem completely.

Sometimes the solution to a complex looking problem can be remarkably simple.

Some technical stuff

I’ve been doing some infrastructure work on the servers since yesterday, essentially creating a “traffic light” for reporting the online status of services, as well as the infrastructure for a simultaneous graceful shutdown of the home servers attached to the UPS.

This is what it looks like on the danijel.org site when the home copy is down due to a simulated power outage (unplugging the UPS from the grid). When I power it back up, it takes 10–15 minutes for all the services to refresh and get back online. It’s not instantaneous, because I had to compromise between that and wasting resources on crontab jobs that run more frequently than normal daily needs require. Essentially, on powerup the servers are up within half a minute, the ADSL router takes a few minutes to get online, and then every ten minutes the dynamic DNS IP is refreshed, which is the key piece that makes the local server visible on the Internet. Then it’s another five minutes for the danijel.org server to refresh the diagnostic data and report the updated status. Detection of a power outage is also not instantaneous; in case of a power loss, the UPS will wait five minutes for the power to come back, and then send a broadcast. Within two minutes everything is powered down, and within another five minutes the online server refreshes the status. So, around 15 minutes in that direction as well.
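There’s nothing exotic behind the traffic light itself; a minimal sketch of the kind of check the public server can run from cron every five minutes would look something like this (the heartbeat file, paths and threshold here are illustrative, not my actual configuration):

#!/bin/sh
# cron job on the public server (illustrative paths); the home server is assumed
# to refresh the heartbeat file every ten minutes while it is alive
HEARTBEAT=/var/www/status/home-heartbeat
LIGHT=/var/www/status/light           # read by the website to draw the traffic light
LIMIT=1200                            # seconds of silence before we call it "down"
now=$(date +%s)
last=$(stat -c %Y "$HEARTBEAT" 2>/dev/null || echo 0)
if [ $((now - last)) -gt "$LIMIT" ]; then
    echo red > "$LIGHT"
else
    echo green > "$LIGHT"
fi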

Do I have some particular emergency in mind? Not really. It’s just that electricity where I live is less than reliable, and every now and then there’s a power failure that used to force me to shut the servers down manually to protect the SSDs from a potentially fatal sudden power loss during a write. Only one machine can be connected to the UPS via USB, and that one shuts down automatically, while the others are in a pickle. So, I eventually got around to configuring everything to run automatically while I sleep, and while I was at it, I wrote a monitoring system for the website. It was showing all kinds of fake outages during the testing phase – no, I wasn’t having some kind of massive failure – but I’m happy with how it runs now, so I’ll consider it done. The monitoring system is partially for me when I’m not home, so I can see that the power is down, and partially to let you know if I’m having a power outage that inhibits communication.
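The “broadcast” part is the only mildly interesting bit: whatever UPS daemon watches the USB link (apcupsd, NUT and the like can all call a script on a power event) just has to tell the other machines to shut down before taking its own host down. A minimal sketch, assuming key-based ssh and placeholder hostnames rather than my actual machines:

#!/bin/sh
# called by the UPS daemon on the machine with the USB link to the UPS,
# once the power has been out long enough; hostnames are placeholders
for host in server1 server2; do
    ssh -o ConnectTimeout=5 root@"$host" "shutdown -h +1 'UPS: power failure'" &
done
wait                                  # let the remote shutdown commands get issued
shutdown -h +1 'UPS: power failure'   # then take this machine down as well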

The danijel.homeip.net website is a copy of the main site that gets updated hourly. It’s designed so that I can stop the hourly updates in an emergency, and it then instantly becomes the main website, where both I and the forum members can post. Essentially, it’s a BBS hosted at my home, with the purpose of maintaining communications in case the main site dies. Since I can’t imagine many scenarios where the main site dies and the DDNS service keeps working, it’s probably a silly idea, but I like having backups to the point where my backups have backups.
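The “stop the hourly updates” switch doesn’t need to be anything fancier than a flag file checked by the cron job that pulls the copy; a sketch under assumed paths (not my actual layout):

#!/bin/sh
# hourly cron job on the home server; paths and the flag file name are illustrative
FLAG=/etc/site-standalone
[ -f "$FLAG" ] && exit 0    # emergency mode: stop pulling, the local copy is now the live site
rsync -a --delete danijel.org:/var/www/site/ /var/www/site/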

Also, I am under all sorts of pressure which makes it impossible for me to do anything really sophisticated, so I might at least keep my UNIX/coding skills sharp. 🙂

Linux again

I recently did some experiments with “old” hardware: a Skylake i5-6500T mini PC running Debian 12 with KDE Plasma, configured so that I can use it either as a stand-in replacement for my home server, or as a fully set up Linux desktop for myself in case I need it for something; I don’t know, if both Microsoft and Apple make their operating systems non-functional at the same time for some reason. I intentionally left the machine with 8GB of RAM just to see if it’s enough, and it seems to be more than enough for the server role, where it uses 1.4GB, and barely sufficient for the desktop, where it uses up almost all the RAM when I run everything I normally do. It’s all quite snappy, but I did notice one thing: when I play YouTube videos full screen, or even in one of the high-bandwidth modes such as 1080p@60, it drops frames like crazy and is about as smooth as a country road in Siberia during the melt season. My first guess was that the Skylake iGPU doesn’t support the modern codecs YouTube uses for those high-bandwidth modes, but then I thought about it some more and decided it might be a Linux issue. I didn’t feel like installing Windows on that machine just to test the hypothesis, so I took out the second device I recently got on ebay, the Thinkpad T14 with the i5-10310U, a Comet Lake CPU with support for all the modern codecs. I played that same 4k video test on Win11, with perfect results and zero dropped frames. Then I rebooted into Ubuntu 24.04, ran the same test, and it drops frames almost as badly as the Skylake machine.

I did all the recommended stuff on Linux: tried different browsers, toggled GPU acceleration on and off, and the only thing I managed to do was make it behave worse, not better.
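For anyone who wants to check whether the browser is actually using the iGPU for decoding instead of burning a CPU core per video, a few generic checks (assuming an Intel iGPU, with vainfo from libva-utils and intel-gpu-tools installed):

vainfo | grep -iE 'vp9|av1|hevc|h264'   # codec profiles the iGPU can decode in hardware
sudo intel_gpu_top                      # if the Video engine stays at zero during playback, decoding is in software
# YouTube's own "Stats for nerds" overlay shows which codec is in use and how many frames are dropped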

Switch to Linux they say, you’ll solve all your Microsoft problems they say. Well, it’s true, you’ll solve your Microsoft problems, and instead of Microsoft you’ll have problems caused by thousands of pimply masturbators with attitude issues who can’t agree on the colour of shit, which is why there are hundreds of Linux distros that all have the same issues, because polishing the GPU drivers and the window manager is hard. But the important thing is that the Linux community is getting rid of “nazis” who think there are only two genders, while at the same time getting rid of Russian developers because “stand with Ukraine”.

Yeah. The frustrating thing about Linux is that so many things work well, and then you run into something important like this. Maybe Huawei will rework Linux into something that actually works well, and give it wider hardware support and localisation, doing for the Linux desktop what Android did for mobile. Maybe. Then again, America will probably block it, because they won’t be able to install their spyware.

PC lineage

I was watching some YouTube videos about old computers, thinking: which ones are predecessors of our current machines, and which ones are merely extinct technology, blind alleys that led nowhere?

It’s an interesting question. I used to assume that the home computers of the 1980s were the predecessors of our current machines, but then I saw someone work on an old minicomputer running UNIX, a PDP-10 or something, and that thing felt instantly familiar, unlike the ZX Spectrum, Apple II or Commodore 64, which feel nothing like what we have today. Is it possible that I had it wrong? When I look at the first Macintosh, it feels very much like the user interface we use today, but the Macintosh was a technological demonstration that didn’t actually do anything useful yet, because the hardware was too weak. But where did the Macintosh come from? Lisa, of course. And Lisa was an attempt to make the Xerox Alto streamlined and commercially viable. All three were failures; the idea was good, but the technology wasn’t there yet. The first computers that feel exactly like what we are using now were the graphical workstations from Silicon Graphics and Sun, because they were basically minicomputers with a graphical console and a 3D rendering engine.

It’s almost as if home computers were a parallel branch of technology, related more to Atari arcade machines than to the minicomputers and mainframes of the day: attempts to work with inferior but cheap technology. That line evolved from the Altair to the Apple II to the IBM PC, which evolved from the 8088 to the 80286 to the 80386, at which point Microsoft copied the Macintosh interface and made it into a mass market OS as the technology became viable, and Windows evolved from 3.0 to 95 to 98… and then this entire technological blind alley went extinct, because the technology became advanced enough to erase the difference between the UNIX graphical workstations and personal computers. Microsoft started running a kernel from the mainframe/minicomputer tradition on a PC; it was called NT, and it was designed by a team that came over from DEC’s VMS. At version 4 it became viable competition for Windows 95, Windows 2000 ran the NT kernel, and the 95/98/ME kernel was retired completely, ending the playground phase of PC technology and making everything a graphical workstation. Parallel to that, Steve Jobs, exiled from Apple, was tinkering with his NeXT graphical workstation project, which became quite good but didn’t sell, and when Apple begged him to come back and save them from themselves, he brought the NeXTSTEP OS along and it became OS X on the new generation of Macintosh computers. So, basically, the PC architecture spent its infancy playing with cheap but inferior hardware until hardware prices came down so much that the stuff that used to be reserved for high-cost graphical workstations became inexpensive, the graphical workstations stopped being a niche thing and went mainstream, and that drove the personal computers as they used to be into extinction.

Just think about it: today’s computer has a 2D/3D graphics accelerator, integrated on the CPU or dedicated; it runs UNIX or something very close to it (Mac OS and Linux), or something derived from the NT kernel and its mainframe/minicomputer heritage (Windows); it’s a multi-user, seamlessly multitasking system; and it all runs on hardware that’s been integrated so tightly it fits in a phone.

So, the actual evolution of personal computers goes from an IBM mainframe to a DEC minicomputer to a UNIX graphical workstation to Windows NT 4 and Mac OS X, to iPhone and Android.

The home computer line goes from the Altair 8800 to the Apple I and II to the IBM PC, then from MS DOS to Windows 3.0, 95, 98, ME… and goes extinct. The attempt to make a personal computer with a graphical user interface goes from the Xerox Alto to the Apple Lisa to the Macintosh, then to the Macintosh II, with the OS upgraded through version 9… and that goes extinct too, replaced by a successor to NeXT repackaged as the new generation of Macintosh, with an OS built around UNIX. Then at some point the tech got so miniaturised that we now have phones running UNIX, a mainframe/minicomputer OS, directly descended from the graphical workstations.

Which is why you could take an SGI Indigo2 workstation today and recognise it as a normal computer, slow but functional, while the first IBM PC or an Apple II would feel like absolutely nothing you are accustomed to. That’s because your PC isn’t descended from the IBM PC; it’s descended from a mainframe that mimicked the general look of a PC and learned to be backwards compatible with one.