Posted on 03/07/2014 4:16:24 AM PST by ShadowAce
For much of the Linux operating system's history, patching a kernel has typically required downtime. In 2014, that's no longer the case: at least three distinct efforts now promise zero-downtime kernel patching for Linux servers.
The most recent entrant into the zero-downtime patching parade is Red Hat with the kpatch effort. Red Hat first revealed its efforts on February 26th in a blog post detailing how kpatch works. Red Hat was unable to directly comment about any aspect of kpatch to ServerWatch for this article.
Red Hat is a relative late-comer to the dynamic patching party. Oracle has been in the space the longest, thanks to its 2011 acquisition of dynamic-kernel patching vendor Ksplice.
Michele Casey, Director of Product Management at Oracle, told ServerWatch there are several factors that come into play when considering providing kernel patches with zero-downtime.
"Ksplice has been providing zero-downtime kernel updates to thousands of production systems for a number of years," Casey said. "In fact, there have been over one million Ksplice patches released over the lifetime of the technology."
Casey explained that in order to resolve security issues or bugs with a patch that can be applied without a system restart, a vendor needs to account for all the various function calls and touch points a given piece of code has to the kernel.
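The core requirement Casey describes can be illustrated outside the kernel: when a function is replaced live, every call site must end up at the new code, atomically. The following Python sketch is purely a conceptual analogy (Ksplice and its rivals actually work at the machine-code level, typically by rewriting the old function's entry point in a running kernel); the names and the registry design are hypothetical.

```python
import threading

# A tiny "patch registry": callers always dispatch through it, so
# swapping the implementation redirects every call site at once.
_lock = threading.Lock()
_impl = {}

def register(name, fn):
    """Install (or live-replace) the implementation behind `name`."""
    with _lock:
        _impl[name] = fn

def call(name, *args):
    """Dispatch through the registry; picks up the current implementation."""
    with _lock:
        fn = _impl[name]
    return fn(*args)

# Original (buggy) function: a spurious off-by-one.
def checksum_v1(data):
    return sum(data) + 1  # bug: extra +1

# Patched replacement, applied "live" without restarting callers.
def checksum_v2(data):
    return sum(data)

register("checksum", checksum_v1)
before = call("checksum", [1, 2, 3])  # buggy result: 7
register("checksum", checksum_v2)     # the "live patch": an atomic swap
after = call("checksum", [1, 2, 3])   # fixed result: 6
```

The hard part in a real kernel, as Casey notes, is that there is no convenient central registry: the patch tooling must find and redirect every touch point the old code has, which is why vendors invest so heavily in analysis and test infrastructure.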
She added that it's crucial to have the necessary infrastructure components to provide the checks needed for ensuring the updates are stable and ready for production deployments.
"Ksplice offers the tools and the expertise to monitor patches to determine which should be converted while providing multiple methods for installing and managing these updates, such as Ksplice's web interface and offline client," Casey said. "In addition, scalability has been built into Ksplice's infrastructure and tools to help ensure all kernel errata released can be patched by Ksplice."
Oracle also doesn't have a particularly enthusiastic view of rival dynamic kernel-patching technologies from Red Hat and SUSE. Casey said that from what Oracle has seen so far of the two other technologies, they are proof-of-concepts at best.
"kGraft (SUSE) is very restricted and cumbersome, requiring a lot of code duplication and code editing," Casey said. "Neither technology's proof of concept has shown the ability to do anything more than create a simple patch. There is no build infrastructure to handle large-scale patch building for a large variety of kernels."
Vojtech Pavlik, director of SUSE Labs at SUSE, explained to ServerWatch that kGraft doesn't ever need to stop the kernel to do the patching and is even able to patch kernel functions that are being actively executed by the OS.
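kGraft achieves this in the kernel with ftrace trampolines and per-thread state, which cannot be reproduced in userspace; the following Python sketch is only a loose analogy of the behavior Pavlik describes: a call that is already inside the old function finishes with the old code, while calls made after the switch land in the new one. All names here are hypothetical.

```python
import threading

started = threading.Event()
proceed = threading.Event()

def report_v1():
    started.set()   # signal: a task is now executing the old code
    proceed.wait()  # simulate a long-running call in progress
    return "v1"

def report_v2():
    return "v2"

current = report_v1  # the "function pointer" callers go through
results = []

def worker():
    # Looks up `current` at call time, then runs to completion,
    # just as an in-flight kernel task finishes in the old function.
    results.append(current())

t = threading.Thread(target=worker)
t.start()
started.wait()       # the worker is inside report_v1 now
current = report_v2  # the "live patch": new callers get v2
proceed.set()        # let the in-flight call finish undisturbed
t.join()
post_patch = current()  # a fresh call lands in the new code
```

Here `results` holds `"v1"` (the in-flight call completed with the old code) while `post_patch` is `"v2"`, mirroring the no-stop-the-kernel property Pavlik attributes to kGraft.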
"kGraft is intended to be merged into the upstream Linux kernel and to become a living open-source project," Pavlik said. "It builds on and improves existing Linux infrastructure to fit seamlessly into the Linux kernel."
Pavlik noted that a key difference between Ksplice and kGraft is the upstream kernel approach: Ksplice tried and failed to get upstream acceptance in 2008, primarily because of the complexity of the changes required.
"Our goal is upstream acceptance, and thus we decided on a fresh start, and that allowed kGraft to be smaller, simpler and leaner and so gave it a better chance to be accepted into Linus's upstream kernel tree," Pavlik said. "After our initial announcement of kGraft, it became apparent that we weren't the only ones to follow that line of thinking; kpatch (Red Hat) was most likely born the same way."
In terms of upstream Linux kernel timing, Pavlik said SUSE expects to open the code before the end of March and submit it for comments to the Linux Kernel Mailing List shortly thereafter. There is also the possibility of some form of collaboration between Red Hat's kpatch effort and SUSE's kGraft.
"We welcome this work on the same topic and are looking forward to working together with the Kpatch team and the kernel developer community in general on a common upstream solution for live kernel patching that may in the end turn out to be quite unlike either of the two technologies as they stand today," Pavlik said.
Could this explain why I am so rarely asked to restart my Ubuntu?
The beauty of most Linux distros is that the kernel is protected and the libraries can be shuffled around without affecting the core OS. Patching some libraries means a service might go down for a moment, but I’ve found that rarely affects the utility of the server as a whole.
No, not yet. This is still a work in progress.
Ksplice was around long before Oracle picked it up. I used it for a while, but I had a few problems with it. I was using it to patch-in instrumented kernels for debugging.
I went back to using kexec-tools for a while, but realized the best way to be sure that you have a fresh kernel is to reboot.
Also, dynamic kernel patching won’t work with Secure Boot technology. The workaround for now is to disable secure boot in the ROM-based Setup (UEFI). Don’t know if there are plans to make it work with secure boot.
This goes way, way back. In the 70s mainframes did this all the time. I worked on sherry and univac mainframes, and I often patched the Dispatcher algorithms on the fly.
But, since most of today’s kids have never worked on mainframes, this seemed a novelty to them.
Same for the concept of virtual machines. These are not new concepts, but implemented decades previous.
Nothing new under the sun (except to those who were not alive back then).
Should read sperry. Thanks, iPad.
Thanks IGB.
You can find the histories out there on the net.
This is all way way old stuff.
The biggest difference between today’s OS’s and those of 50 years ago is that today - since every rube out there is carrying around a computer with them - they are all working with the gubmint to provide them with the opportunity to spy and hack into people’s ‘puters.
I absolutely hate the idea of auto updates that apply without my specifically approving each one. I want to understand exactly what is changing.
Not to mention, in a business or organizational setting, where one has "dev," "test," and "prod," all changes must be system tested. Well, they don't have to be, but then there will be the occasional catastrophe with one's computers.
To get around people's objections, they love to simply bury one in tons of updates. Now you can't possibly review them all. They love to say it's "to keep security updated." In fact, blasting in thousands of unknown, untested updates, trusting some "governing committee" of open source or some Fortune 500 company (both of which are knowingly or unknowingly allowing the gubmint's operatives to influence their operations), is a surefire way to end up with nice backdoors for gubmint into your computers and have no idea that you do.
The backdoor is rarely a direct backdoor. It’s mostly hidden in plain sight, simple architectural weakness.
If one is then lax with system administration, the doors are left open. And the way OSs are designed today, one has to be a paranoid security guru in order to not be “lax”.
The ‘puter business now, like so many others, is a minefield of scams aimed at the sheeple.
I will give it a look when my laptop is not sounding like a lawn mower.
Nah. It's better design. Also, you're lucky you're on Ubuntu. If you were on Fedora, and applied all the kernel patches, you'd be booting weekly. :-)
I skip most kernel updates unless there is something in it I need.
The vast majority of kernel exploits are local ones, so I don't really have to worry about them. I trust myself not to hack my system.
Break down the shell as much as you can and clean it out.. sounds like you are running hot.
Enterprise will see this in what.. five years?
Toshiba Satellite fans are situated deep. Not that easy for "normal" people to take apart. I don't like the idea of pulling out the system board. I doubt I could put it back together.
My worst experiences taking apart laptops were with Toshibas (will never buy one again, btw).. but it is possible. Just have different places to store the tiny screws.. and remember where each one goes..
Once you get the shell off, you can clean the fan and surrounding areas. Biggest prob, that I can remember, is putting the screws where they are supposed to go... and the long ones are the easiest ;^)
Trial and error type situation here.. learn as you go.
If you do nothing, system will shut down from overheat soon anyway :/