Linux: what it intended, and what it did

There’s been lots of talk about the recent development where the SJW cult apparently took over the Linux kernel development team, forcing Linus Torvalds to sign some LBGTASDFGHJKL manifesto, where meritocracy is decried as a great evil, equality of outcome is praised and white heterosexual men need to be removed in order for the world to be awesome.

To this, my answer is that communism, as usual, is eating its children, and this is nothing new. Linux was originally a communist project and a leftist cesspool, and since the SJW faction already took over the modern communist movement elsewhere, it would not have been realistic to expect Linux to remain separate from this trend.

To this, I got a reply that Linux did some good things and is not a failure: it powers the server side and most of the mobile platforms, and there are great companies making money with Linux and supporting its development. To this, I wrote the answer I'm quoting below:

Yes, there are companies that made a huge fortune using Linux – mostly those that just sell services implemented on top of Linux, like Google with Android, but also some involved with Linux itself. If you look at it this way, Linux created both jobs and money. However, there's an alternative perspective: Linux, by being good enough and free, destroyed the competition. SCO, Solaris, AIX and HP-UX went the way of the dodo. The people working on those were presumably fired, and because the competition is Linux, there were no alternative paying jobs waiting for them. Android destroyed the possibility of anyone developing a commercially sold OS for a mobile platform, other than Apple, whose position seems to be safe for now. If Android competed fairly, and the cost of development were actually charged to the customer instead of being absorbed by Google and the open source community with the goal of turning the devices into data-gathering and ad-delivery platforms, competition could actually enter the marketplace and interesting things could happen; this way, the only market pressure is on Apple, the only player who actually plays fairly by charging money for things that cost money.
When Linux geekboys spout their hate fountains towards Microsoft and Bill Gates, and I've been watching that for actual decades, their complaint is that Windows costs money, and that its users are stupid because Windows is easy to use. The argument against Apple today is the same recycled thing: the devices are expensive, so the buyers are idiots and the company is greedy; the devices are simple to use, so the users must be idiots. This looks like all the bad shades of jealousy, hatred, spite and malice blended into a very nasty combination of mortal sins: essentially, they want to destroy financially successful companies by sacrificing their own time and effort to provide a decent but completely free product that pushes the commercial products out of the market, because they hate that someone is rich and feel that something needs to be done about it.
Basically, Linux is a cancer that destroys potentially profitable things by competing unfairly on the market, because it pays its developers in ego trips, hatred and envy instead of money, and its goal is essentially to make everything it touches inherently unprofitable. True, some managed to profit off of that, like Google, which used a modified Linux to power its ad-delivery platform as well as its server farms, but that was done by taking power away from the customer – you're not really the customer if you're getting a heavily subsidised product – and by turning the former customers into a product that is sold to the real customers: those who buy ads.
So, essentially, what Linux did was provide leverage that pumps wealth away from software developers and into the pockets of ad sellers, making the customers less influential and less empowered in the process.
Also, what needs to be looked into is how much of the cloud computing boom is due to Linux, because it's easy to have a supercluster if your OS is free; try paying Oracle per CPU for a Google or Facebook farm and you'll get a head-spinning number that would probably make the entire thing financially unfeasible. This way, it's another lever for centralising power over the Internet and over the end users, essentially replacing the distributed nature of the Internet itself with large corporations that, for most people, effectively are the Internet, and which, of course, are now starting to assert political and societal influence and to control what people are allowed to think and say.
And in the meantime, the Linux crowd still hates Microsoft and dreams of a world where they’ll finally show it to Bill Gates who dared to charge money for Windows.

My desktop computer

Since I already started talking about computers, I’ll tell you what I’m using.

This is my desktop PC:

I built it myself, as I always do, and I optimized it for silence first and power second. Silence-wise, it's built in a Fractal Define C case with a Seasonic FX 850 Gold PSU in hybrid mode (which means the fan stays off until it's really needed), and there's a huge CoolerMaster 612 v2 CPU cooler, massive enough that the fan doesn't need to spin fast unless I'm pushing it. The GPU is an Asus ROG Strix 1080 Ti, which is silence-optimized, so the fans don't spin at all in normal use, and even under full load all you hear is a whisper.

The CPU is an i7-6700K, with 32GB of RAM, SSD drives and a HDD. In normal use, the HDD's whisper is all I hear; the fans are tuned to stay below the audible threshold. Under full load, the fans are set up to get rid of heat as quickly as possible, silence be damned, and the top of the case is a dust filter, so hot air can also rise out by convection; since this is effective, the fans never really get that loud.

This is my desk. The monitor is an LG 43UD79-B, the 108cm 4K IPS unit, which is the reason I had to upgrade the GPU; Lightroom was rendering previews very slowly at this resolution, and since that operation is GPU-driven, I got the overkill GPU – and once I did that, I said what the hell and got the Logitech steering wheel so I can use the machine as a racing sim. The keyboard is a Roccat Suora FX mechanical RGB, the mouse is a Logitech G602, and the microphone is a Rode NT-USB, which I use for Skype. You can see the 15″ MacBook Pro on the left, and misc gadgets and remotes on the right.

The machine runs Windows 10 as the host, plus several virtual machines with different configurations; the main one is Ubuntu Trusty with MATE, which I use for writing scripts and all the Unix work. The main reason I got such a big monitor is so that I can always keep one eye on the work-related chat on the right while I do other things on the left. Also, I like the way my photos look on a really big screen, which approximates a print size of a meter in diagonal. The entire rig is hooked to a UPS, so I don't have to worry about losing work to power outages or spikes, which, fortunately, happen only once or twice a year on average.

Essentially, this is a rig that “just works”, and it’s where I spend most of the day.

The era of a super-desktop PC

I read something interesting in a computer magazine, I don't know exactly when, late 1980s or early 1990s perhaps: that the concept of a "home computer" was going to become obsolete, not because there wouldn't be any home computers, but because there would be too many for the term to make any sense – which one, the one in the microwave, in the TV, in the HVAC thermostat, in the networking router? It actually went further than that: now we have not only computerized appliances, but also computers in many shapes and user-interface paradigms – voice-controlled watches, phones, tablets, tablet-laptop hybrids, laptops, all-in-one desktops and conventional desktops, gaming consoles, and also the super-desktops, known as either workstations or gaming PCs.

The super-desktop is an interesting category, because it's usually called just a "PC", the same as an ordinary unit found in businesses, the Word/Excel machine, but it's a wholly different beast – the kind that was known in the past as a supercomputer, or as a desktop minicomputer, also called a graphical workstation. You see, when something can drive several TV-sized 4K displays, run multiple virtual machines at once with no lag, render movies, or process terabytes of other kinds of data, it's no longer in the same category as a machine that is of nominally the same shape and runs the same OS, but is weaker than one of its virtual machines.

So, what is a super-desktop, or a "gaming PC", as they are euphemistically called? What is a machine that can drive an Oculus Rift VR system? The most honest description is that it is an alternative-reality creation device. It creates simulated universes you can interact with and join. If you run a car racing simulation wearing Oculus VR goggles, especially with one of those seats that re-create mechanical shocks, you are essentially joining an alternate reality where you participate in a very convincing physical activity, much more so than in a dream, for instance.

So, what is the main difference between this and an ordinary computer that can play immersive games? Only quantity – but the thing is, if you increase quantity far enough, it becomes a quality of its own. If you increase the mass of an asteroid enough, it becomes a planet. If you increase the mass of a planet enough, it becomes a star. If you increase the mass of a star enough, it becomes a black hole. It's the same thing with the human brain – add more neurons and suddenly completely new phenomena start taking place. Have only a few and you have a worm. Add more, you have a fish. Add more, you have a frog. Add more, you have a lizard. Add more, you have a rat. Add more, you have a monkey. Add more, and you get a man, and suddenly it's no longer just the mass-equivalent of many worm ganglia put together; it's the phenomenon that can land robots on Mars, fly cameras past Pluto, observe the beginnings of the Universe, break matter in ways only supernovae do, and even know God.
A super-desktop computer is not just a PC, and a PC is not just a glorified Commodore 64. It's a machine of such power that it can add another dimension to human experience. It can immerse you in a realistic alternate reality where you drive supercars on race tracks, fly fighter jets, or fight dragons. It can literally provide you with dynamically generated, interactive sensory input, which is a definition of an alternative reality. But there is a danger in that: alternative reality is another name for illusion, and having such powerful illusion-creating devices at your disposal can allow you to add another layer of indirection between your consciousness and reality.

If it allows you to escape from issues that you are supposed to face and solve, it can also allow you to waste your life. Until now, there has been only one tool at our disposal that could do that, and it's called drugs. Drugs allow you to escape real issues and bury yourself in a world where there is reward without the necessity of achievement. Powerful computers can become a drug-equivalent, a wish-fulfillment tool that removes the necessity of achievement from the equation. Like all powerful tools, they can really fuck your life up. Also, like all powerful tools, they can allow you to do more and better things.

Vacation, Sony FE 90mm G Macro and misc photo stuff

I was on Hvar for the last ten days, mostly to try to soak up the last warm and sunny days of the year, and also take pictures. This time I had a new lens to work with, the Sony FE 90mm G Macro:

So, what's so cool about this one, and what does it do that can't be done with the equipment I already have? tl;dr: it's the best macro lens in the world.

It has the least chromatic aberration wide open, the greatest sharpness, wonderful front and rear bokeh, image stabilization, autofocus and weather resistance. If you want to work in the closeup and macro range, which I do a lot, it's the best lens you can get, with the possible exceptions of the Zeiss Makro-Planar 100mm f/2 and the Olympus M.Zuiko 60mm f/2.8 Macro. As a portrait lens, the Sony is so good that they compare it to the Zeiss Batis 85mm f/1.8, which is one of the best portrait lenses out there. So, considering what you're getting, it's actually a bargain, regardless of the apparently high price; the price only seems high as long as you don't look at what it does and what you'd have to buy to match it. So, why is it better than what I used so far, a Canon EF 85mm f/1.8 on macro extension tubes? First of all, the Canon creates completely different-looking images, so it's not a direct replacement; it's a different tool in the toolbox, like a hammer and pliers. In the same way, a Minolta MD 50mm f/1.7 on macro extenders makes completely different images, and I would prefer it for some things. What the Sony 90mm G Macro does is allow me to take this:

… and in the next moment, without changing lenses or removing macro extenders, it allows me to take this:


Essentially, it’s a wonderfully versatile walkaround lens for my kind of photography, and the only thing I need to complement it is a good wideangle.

Talking about wideangles, I was kinda worried about the problems some photographers had with Canon lenses adapted to Sony FE bodies, where sharpness drops off towards the edge of the frame. The problem is supposedly caused either by a focusing error, or by interference with parts of the adapter, or with the FE mount itself, which is narrow for a 35mm format. I couldn't test the issue with my EF 17-40mm f/4L, because it's always unsharp in the corners due to its inferior optical design, but I did test it with the EF 15mm f/2.8 Fisheye, and the problem doesn't exist with the Viltrox III adapter:

The edges and corners are completely sharp, and the only limitation is the depth of field (as visible in the bottom corners of the above image). Maybe my adapter is just that good, but I do think that if the problem existed, it would show itself with the widest-angle lens there is. I would not hesitate to use Canon EF wideangles on a Sony FE body with this adapter, even when edge and corner sharpness is critical.

There's also controversy regarding the Sony FE 28-70mm f/3.5-5.6 OSS kit lens and its usability. In my experience, the lens is excellent. It's very sharp even wide open, and it doesn't create distortions, chromatic aberrations or flare; vignetting is visible wide open but not when stopped down, and if used as a landscape lens from a tripod with meticulous technique, it creates stunningly good images and has no flaws whatsoever. Its problems are of a different kind: it has poor close focus, so it's useless for closeup/macro shots, and the aperture is slow, which makes it difficult to isolate the subject from the background. Those two things combined make it useless as a walkaround lens for me, and considering how well the aperture blades are designed and how good the bokeh could be if only it focused closer and had a bigger aperture, it's a shame. However, as a moderate-wideangle to light-telephoto landscape lens, it's excellent:


People have been maligning the Sony Vario-Tessar T* FE 24-70mm f/4 ZA OSS because it's expensive and it isn't sharper than the "kit lens". The thing is, if it's as sharp as the kit lens, it's plenty sharp, thank you very much; it would be really difficult to get it sharper than completely sharp. As for it being expensive, I agree, but it also has stronger contrast and color saturation than the 28-70mm, a fixed aperture, and some dust and moisture sealing, which might make it attractive for some people. For me, the 24-70mm f/4 doesn't add any real versatility that would make it useful for closeup photography, and I prefer the milder contrast and color rendition of the 28-70mm kit lens.

Another thing I got was the Meike battery grip for Sony A7II.

Essentially, it's a cheap copy of the Sony battery grip, and it's just as good. It addresses the poor camera ergonomics, and also the mediocre battery life, at the cost of making the camera bulkier and heavier. I'm not sure the result is as comfortable as a Canon 5D body, but it's significantly less awkward and tendon-pain-inducing than the Sony A7II body alone with a large, heavy lens attached when you go for long photographic walks. I recommend at least trying it; it might not be the solution to everyone's problems, though.

As for the camera I used, the Sony A7II, I'm in love with the colors, the resolution and the depth of information in deep shadows during long exposures. I would like it to be less noisy during long exposures, at higher ISO and in deep shadows, but regardless, the image quality is fantastic. The only problem I've had with Sony so far is that the first copy of the FE 90mm G Macro arrived with dead electronics – it was completely fubared: no aperture, no focus, no nothing. Some flat cable probably had a flimsy connection, or the lens was subjected to a mechanical shock in transit, but I returned it, received a functioning replacement, and my experience with the lens since has been superlative, except that it's a heavy brick. There are several other lenses I'm considering: one is a wideangle with better geometry and field curvature than my EF 17-40mm f/4L, and another is a telephoto, which is something I never bought because the good ones are very expensive and very heavy and I would probably end up not using it much – but I still miss one, considering how much I liked the ones I had for review years ago. But yeah, that's about it, rambling over. 🙂


About computer security

Regarding this latest ransomware attack, I've seen various responses online. Here are my thoughts.

First, the origin of the problem is the NSA-discovered vulnerability in Windows, apparently present in versions ranging from XP to 10, which is weird in itself considering the differences introduced first in Vista and then in 8. This makes it unlikely that Microsoft didn't know about it; it looks like something that was deliberately left open as a standard back door for the NSA. Either that, or it means they managed not to find a glaring vulnerability since 2001, which makes them incompetent. Bearing in mind that other platforms had similar issues, that wouldn't be unheard of, but I will make my skepticism obvious – long-undiscovered glaring flaws indicate either intent or incredible levels of negligence.

The immediate manifestation of the problem, the WannaCry ransomware worm, is a sophisticated product of the most dangerous kind: the kind that apparently doesn't require you to click on stupid shit in order to be infected. The malware scans your IP address, detects vulnerabilities and, if it finds any, executes code on your machine. All it takes to be infected is a poorly configured firewall, or an infected machine behind your firewall, combined with the existence of vulnerable systems. The malware encrypts the victim's files, sends the decryption key to the attackers, deletes it from the local machine, and posts a ransom notice demanding Bitcoin payment on the afflicted machine.

It is my opinion that the obvious explanation – a money-motivated hacker attack – is implausible, because the probability of actually collecting any money is low, given the type of attack. A more probable explanation is that this is a test by a nation-state actor, checking out the NSA exploit that had been leaked by the Shadow Brokers. The likely purpose of such a test is to force the vulnerable machines out into the open so that they can be patched and the vulnerability permanently removed, or, alternatively, to assess the impact and response in case of a real attack. It is also a good way of removing the NSA-installed malware from circulation, by permanently disabling the vulnerable machines: encrypting their filesystems forces a hard-drive format. Essentially, it sterilizes the world against all NSA-installed malware using this exploit, and it is much more effective than trying to advertise patches and antivirus software, since the people who are vulnerable are basically too lazy to upgrade from Windows XP, let alone install patches.

As for the future, an obvious conclusion is that this is not the only vulnerability in existence, and that our systems remain vulnerable to other, undiscovered attack vectors. What are the solutions? Some recommend installing Linux or buying a Mac, forgetting the Heartbleed bug in OpenSSL, which was as bad if not worse; all Linux and Mac machines were vulnerable. Considering how long it took Apple to do anything about it, and how long the bug remained undetected, I remain skeptical regarding the security of either platform. They are less common than Windows, which makes them a less tempting target, but since that is exactly why potential targets of state-actor surveillance would use them, it actually makes them more of a target – not for individual hackers, but for potentially much more dangerous people. Because hacker attacks on Linux and Mac OS are not taken seriously, the protective measures are usually weak and rely on the assumed inherent security of UNIX-based operating systems. When reality doesn't match the assumptions, as in the case of the Heartbleed bug, there are usually no additional layers of protection to catch the exceptions. Furthermore, one cannot exclude a low-level vulnerability installed in the device's firmware, since firmware is proprietary and even less open to inspection than the operating systems themselves.

My recommendation, therefore, is to assume that your system is at any point vulnerable to unauthorized access by state actors, regardless of your device type or protective measures. Against non-state actors, it is useful to implement a layered defense: a hardware firewall on the router and a software firewall on the device; share as little as possible on the network, close all open ports except those you actively need, and protect those as if they were a commercial payment system – for instance, don't allow password authentication on SSH, use RSA keys instead. Use encryption on all network communications. Always run the newest OS version with all the updates installed. Use an antivirus to check everything that arrives on your computer, but assume that the antivirus won't catch zero-day exploits, which are the really dangerous stuff.

Don't click on stupid shit, and don't visit sites with hacking or porn-related content unless you're doing it from a specially protected device or a virtual machine. Have a Linux virtual machine as a sandbox for testing potentially harmful stuff, so that it can't damage your main device. Don't do stupid shit from a device that's connected to your private network, so that an attack can't spread to other connected devices. And don't assume you're safe because you use an obscure operating system: obscure operating systems can use very widespread components, such as OpenSSL, and if those are vulnerable, your obscurity is worth far less than you assume.

However, a combination of several layers might be a sufficient shield. If your router shields you from one attack vector, the firewall and antivirus on your Windows host machine shield you from another (for instance, the UNIX-related exploits), the Linux architecture of your virtual machine shields you from the rest (the Windows-related exploits), and your common sense does the rest, you are highly unlikely to be a victim of a conventional hacker attack. But don't delude yourself: state actors, especially the NSA, have access to your system on a far deeper level, and you must assume that any system connected to the network is vulnerable. If you want a really secure machine, get a generic laptop, install Linux on it from a CD, never connect it to the network, and store everything important on an encrypted memory card. Bear in mind, though, that the more security measures you employ, the more attention your security is likely to receive, since where such measures are employed, there must be something worth looking at. And eventually, if you really do stupid shit, you will be vulnerable to the rubber-hose method of cryptanalysis, which works every time. If you don't believe me, ask the guys in Guantanamo.
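To make the "close all open ports except those you actively need" part a bit more concrete, here is a minimal sketch in Python 3 of how one might check a machine for listening TCP ports. The host address and the port range are arbitrary choices for illustration, not anything prescribed above, and this is a rough sketch rather than a proper security tool.

    # Minimal port-audit sketch (illustrative only, not a proper security tool).
    # It attempts a TCP connection to each well-known port on localhost and
    # reports the ones that accept a connection, i.e. the ones something is
    # listening on and which you may want to close or firewall off.
    import socket

    HOST = "127.0.0.1"               # assumption: audit the local machine only
    PORTS_TO_CHECK = range(1, 1025)  # assumption: well-known ports; widen as needed

    open_ports = []
    for port in PORTS_TO_CHECK:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.2)                    # short timeout keeps the scan quick
            if sock.connect_ex((HOST, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)

    print("TCP ports accepting connections:", open_ports)

If port 22 shows up in the output, for instance, that would be the cue to verify that SSH really is configured for key-based authentication only.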