Free Republic

Make Linux safer… or die trying
The Register ^ | 14 February 2023 | Liam Proven

Posted on 02/15/2023 10:57:59 AM PST by ShadowAce

Part 1: Some Linux veterans are irritated by some of the new tech: Snap, Flatpak, Btrfs, ZFS, and so forth. Doesn't the old stuff work? Well, yes, it does – but not well enough.

Why is Canonical pushing Snap so hard? Does Red Hat really need all these different versions of Fedora? Why are some distros experimenting with ZFS if its licence is incompatible with the GPL? Is the already bewildering array of packaging tools and file systems not enough?

No, they aren't. There are good justifications for all these efforts, and the reasons are simple and fairly clear. The snag is that the motivations behind some of them are connected with certain companies' histories, attitudes, and ways of doing business. If you don't know their histories, the reasoning that led to major technological decisions is often obscure or even invisible.

The economics of the computer software industry has changed massively since some now-widespread tools were originally invented. Techniques and methods that made good commercial sense decades ago don't any more, and some of this applies to Linux more than it does to Windows. Modern Windows is based on Windows NT, the first version of which was released in July 1993 and was a modern, hi-tech OS from the start. Its developers had already learned lessons from its forerunners: less from DOS and 16-bit Windows, more from OS/2 1.x and Digital Equipment Corporation's VAX/VMS.

Linux is quite a different beast. Although many Unix fans haven't really registered this yet, it's a fact: Linux is a Unix now. In fact, arguably, today Linux is Unix.

As a project in its own right, Linux is roughly the same age as Windows NT. Linux 0.01, the first public version, appeared in late 1991; it went GPL with version 0.99 in late 1992; and version 1.0 was released in March 1994. FreeBSD is about the same age, and so is NetBSD. All of them are fairly traditional, monolithic, Unix-like OSes in design, which means that Linux inherits many of its design choices from earlier, mostly proprietary Unix OSes.

The thing is, solid, carefully made decisions that worked for commercial Unix in its heyday may not be such a good fit any more. In the 1970s and 1980s, proprietary Unix boxes cost lots of money. The companies that bought them – and it was a big-business level of expenditure – could afford to pay for highly trained specialist staff to tend and nurture those machines.

Windows NT came out 30 years ago and created a lively commercial market of relatively inexpensive 32-bit PCs, powered by x86 processors, open-standard fast expansion buses and low-priced mass storage. Cheap mass-produced PCs were just about good enough, and so were cheap mass-developed OSes for them.

Since then, Windows has been good enough, and it runs on commodity kit. So the commercial mainstream, always looking for savings, moved to Windows. The result is that Windows tech staff became cheap and plentiful – which implies fungible – while Unix techies remained more expensive.

This cheap, mass-market hardware in turn has aided the evolution of open source Unixes. Linux has done well partly because its native platform is the same cheap kit that was built to run Windows. This is a huge and vastly diverse market and, as we recently described, software is a gas: it expands to fill the hardware. The result is that, to support the most diverse computer platform ever, Linux is big and complicated.

Yes, it's a Unix-like OS, and Unix has been around for over 50 years. But Linux isn't just another Unix. It's free for everyone, and the same kernel runs on everything from $5 SBCs to $50 million supercomputers. Proprietary Unix was expensive, exclusive, and mostly ran on expensive, high-quality hardware that was designed for it, while Linux mostly runs on relatively cheap devices that were designed to run Windows.

When Unix ruled the datacenter, computer resources were limited, and proprietary platforms strictly controlled what was on offer. Now that disk and memory are cheap, the PC hardware is uncontrolled and proliferates as wildly as kudzu. Linux supports most of it, meaning that it's much bigger and more complex than any proprietary Unix ever was… and to a good approximation, nobody fully understands the entire Linux stack: it's just too big. Real experts are scarce, and that means that they command top dollar.

But the mass adoption of Linux has changed the economics somewhat. While the top-tier gurus remain pricey, ordinary mortal techies aren't: smart, curious folks who can work out how to stack components together like construction toys, get the result more or less working, then push it out into someone else's datacenter with some tools that will let it scale out – if you're lucky enough to need that. Those folks aren't so pricey, which implies that the building blocks of that stack need to be tough, to match the levels expected over in Windows land, and they need to just plug together.

The flipside of this coin is the famed DevOps model: treat servers as cattle, not as pets. It's not all about servers – but it's server distros that pay. So desktop distros use lots of tools designed for servers, and phone distros are being built from the same components.

When the software and the hardware are cheap, but the skills are expensive, the cost centers become support and maintenance – which is a large part of why the big enterprise Linux vendors sell support, not software. The software is free, and if you don't mind compiling it yourself, you can have the source code for nothing. To get the ready-to-use version, though, you have to buy a support contract.

What that means is that the evolutionary selective pressure is to reduce the cost of providing that support in order to maximize the profitability of the support contracts. That requires making the OSes as robust as possible: to prevent faults from occurring, so you don't have to pay someone to fix them. If possible, to prevent whole categories of system failures. Better still, to make the OS able to recover from certain types of fault automatically, without human intervention.

If you want to deploy a lot of a cheap or free OS, without hiring a lot of expensive gray-bearded gurus, a core part of the economic proposition is to build Linux distros that can cope, even thrive, without constant nurture. For example, making them able to fetch and install their own updates. The goal is to make them able to cope with their own problems, and heal their own injuries, just as farm animals must in their short, miserable lives.
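As one concrete instance of the fetch-their-own-updates idea, Debian and Ubuntu ship the unattended-upgrades package, which is driven by a short APT configuration fragment; the file path shown is the conventional location on a stock install, offered here as a sketch rather than a universal default:

```text
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   // refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     // install pending security upgrades daily
```

With those two lines in place, the machine patches itself on a timer – no gray-bearded guru required.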

One aspect of this is visible as multiple parallel efforts to contain and manage the vast and ever-growing complexity of modern Linux: to encapsulate it, and if possible, even eliminate parts of it. This shows up in several places. It appeared first in file system design, where the initial set of changes was relatively minor and caused little disruption; now another round of modernization is being worked on. There are also major changes in how software is packaged: how packages are built, how they're distributed, and how they're stored, installed, and upgraded. A further aspect is how they are uninstalled again, or upgrades reverted.

This is a complex, interlocking set of problems, and not only is there not one single best way to tackle it, but the approach each company takes is guided by the tools which it has or favors. For various reasons, not all vendors are spending their R&D money in the same directions. Some are working on file systems, some on packaging, some on distribution, some on more than one of these at once.

In the second half of this feature, we'll offer an executive briefing on the different efforts, and why different distro vendors are addressing the problems in different ways. ®


TOPICS: Computers/Internet
KEYWORDS: linux

1 posted on 02/15/2023 10:57:59 AM PST by ShadowAce

To: rdb3; JosephW; martin_fierro; Still Thinking; zeugma; Vinnie; ironman; Egon; raybbr; AFreeBird; ...

2 posted on 02/15/2023 10:58:12 AM PST by ShadowAce (Linux - The Ultimate Windows Service Pack )

To: ShadowAce

I love that graphic. It shows how Linux geeks behave just like sheep.


3 posted on 02/15/2023 11:04:53 AM PST by Mr. K (No consequence of repealing Obamacare is worse than Obamacare)

To: ShadowAce

It is possible to over-think things.


4 posted on 02/15/2023 11:11:50 AM PST by SpaceBar

To: Mr. K
Old graphic, but still applicable:


5 posted on 02/15/2023 11:13:21 AM PST by ShadowAce (Linux - The Ultimate Windows Service Pack )

To: ShadowAce
Windows NT was built with modularity of the PC platform in mind by a very large, monolithic support company...Microsoft. Huge support and proprietary control of the source code.

Early UNIX was tailored to a proprietary hardware platform offered by each vendor. The hardware was tightly controlled. The software was highly optimized to the proprietary hardware. The tight control of hardware/software generated a high performance system with few bugs.

Linux tries to be Windows and UNIX. The hardware platform is massively broad and not controlled. The OS source is open source. Instead of one hardware/one OS, you have a plethora of hardware configs and a plethora of tailored configurations (distros) seeking to please everyone. Sometimes you are blessed with both a good hardware and good distro pairing, but it's a crap shoot.

Apple is patterned more like early UNIX. They tightly control the hardware platform and create a highly optimized UNIX OS with a pretty UI.

6 posted on 02/15/2023 11:25:22 AM PST by Myrddin

To: ShadowAce

Isn’t this a problem that “docker containers” are supposed to solve?

You can create a virtualizable “container” that gets a task done, and all the funky Linux choices, special config, and extra library installation are localized to the container.

A server farm set up to run containers has computers with their “bare-metal” Linux properly set up to run Docker or whatever virtualization solution is called for. Docker or Kubernetes (a container orchestration system) simply fires up a particular container; that container’s Linux setup oddities are essentially quarantined, it processes data, and then it gets shut down.

That’s my understanding, at least.
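The quarantine described above can be sketched as a Dockerfile; the base image, package, and file names here are illustrative, not anything from the thread:

```dockerfile
# Illustrative Dockerfile: the "Linux setup oddities" -- base distro,
# extra libraries, app files -- live inside the image, not on the host.
FROM debian:stable-slim

# Distro-specific setup is confined to the image layers.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 \
    && rm -rf /var/lib/apt/lists/*

COPY task.py /opt/task.py
CMD ["python3", "/opt/task.py"]
```

The host only needs a working container runtime: `docker build -t task . && docker run --rm task` fires the container up, it processes its data, and everything inside it is discarded on exit.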


7 posted on 02/15/2023 11:45:47 AM PST by Yossarian

To: Myrddin

Very much agree. The distro maze has made Linux a very schizophrenic OS akin to multiple personality disorder, coupled with too much free software riddled with bugs that go unpatched and security vulnerabilities that often go ignored. If you are trying to break into Linux as a user, get ready to be ignored or told, arrogantly, by the community to read the f’ing manual (which usually doesn’t exist in any meaningful detail because coders wanna code, not write docs), and even if you did that, to fix or patch whatever feature into the project yourself. There are exceptions, of course, but they’re rare. Outside purpose-built appliances or devops, Linux just ain’t worth the trouble.


8 posted on 02/15/2023 11:48:57 AM PST by Intar

To: ShadowAce

I met Linus online in the early 1990s. We ported everything from SCO to Linux. At one point we put Linux on a stack of floppy disks and sent it to some guys who were running their stuff on some commercial version of Unix. Maybe I built one of the first “distros”, lol.

Linux has been great. We had one Slack server running in a corner for years that we never touched. I don’t think it was cycled for 3-4 years, and it was running DPT drive arrays under load all the time.


9 posted on 02/15/2023 11:57:20 AM PST by isthisnickcool (1218 - NEVER FORGET!)

To: Intar
The 90s called.

They miss you.

10 posted on 02/15/2023 11:59:42 AM PST by ShadowAce (Linux - The Ultimate Windows Service Pack )

To: ShadowAce

Nope, still going on today.


11 posted on 02/15/2023 12:09:56 PM PST by Intar

To: Openurmind

Ping.


12 posted on 02/15/2023 12:18:29 PM PST by Carriage Hill (A society grows great when old men plant trees, in whose shade they know they will never sit.)

To: Intar
I rented out my CP/M machine to a computer dealer in exchange for a fully commented copy of the UNIX kernel code with annotations from the University of New South Wales in 1980. The first time I got my hands on a real UNIX system was a 3B20 at Pacific Telephone, on a network shared with Bjarne Stroustrup, in August 1983. The Bell System machines had full source code installed for everything. It was a perfect way to learn, by reading all of the source to every command line utility and the kernel, and having access to early versions of C++. I never looked back. Since 1983 I've worked as deep as kernel device drivers in HP-UX and as broad as networks of 12,000 UNIX machines for the Army Corps of Engineers. Linux is something I can run at my home and use to build technology that my employer and customers need. Fedora and Ubuntu are my distros of choice. For embedded systems, I used Debian to leverage the slower pace of patching.
13 posted on 02/15/2023 12:27:44 PM PST by Myrddin

To: ShadowAce

I still sacrifice a goat once a month to give thanks for not having to “./configure, make, make install” for every piece of software I want to install.


14 posted on 02/15/2023 1:51:28 PM PST by Paal Gulli

To: Myrddin

I was a big Fedora and Ubuntu user for years. Loved the stability and maturity of Fedora and the driver support and package manager of Ubuntu - made learning new projects a snap. Now that Fedora is expensive and Ubuntu has embraced gathering telemetric data from its users, I’ve fallen out of love with both distros - not a fan of Fedora Core either. I’ve moved back to Debian - 3rd time now. Each distro has pros and cons, and I wish there were an industry standard for drivers and package managers, as that would make adoption of Linux so much wider for general users. I know why this standardization will likely never occur, but I can dream.


15 posted on 02/15/2023 2:38:12 PM PST by Intar

To: Myrddin

16 posted on 02/15/2023 2:38:22 PM PST by martin_fierro (< |:)~)

To: Paal Gulli

I definitely don’t miss the days of trying and often failing to compile source code and creating kernel hooks to get some software running.


17 posted on 02/15/2023 2:40:27 PM PST by Intar

To: Carriage Hill

Thank you my friend!


18 posted on 02/15/2023 3:25:45 PM PST by Openurmind (The ultimate test of a moral society is the kind of world it leaves to its children. ~ D. Bonhoeffer)

To: ShadowAce

I mostly love my Linux Mint 20 distro. I see no reason to upgrade to whatever the current version is. It just works (mostly). The only problem I run into is that some of the video and photo editing software I like doesn’t work in Linux, which is really stupid, but it is what it is, so I have a dual-boot machine with Winblows10 on the other drive. A separate drive. I have a 3rd drive that is formatted to be readable by both Win10 and Linux, so that’s where I put files I want to mess with after booting into Winblows.

Other than some kind of memory leak that developed when using Firefox, which requires me to close FF, this Linux machine can stay running for a month or two between reboots sometimes. It’s that stable.

I hate when I boot into winblows and it wants to update all kinds of stuff or verify this or that. A real time waster. I wish I could just turn all that stuff off permanently.

Mac/Apple is okay. No complaints but I never took a shine to it. Too proprietary. The iPhones are nice and stable too. Then again so is Android on the Samsung phones I’ve had. Great cameras and the OS just works.


19 posted on 02/15/2023 4:24:42 PM PST by Boomer (The biden regime / identity politics is a clear and present threat to this constitutional republic.)

To: Mr. K

And that is why you should probably stay away from the Big leagues of Linux. You first need to be able to tell the difference between a Penguin and a Sheep...


20 posted on 02/15/2023 4:54:36 PM PST by Openurmind (The ultimate test of a moral society is the kind of world it leaves to its children. ~ D. Bonhoeffer)

