20 great years of Linux and supercomputers
Posted on 07/30/2013 11:03:15 AM PDT by ShadowAce
In the latest Top500 supercomputer rankings, 476 of the world's 500 fastest supercomputers (95.2 percent) run Linux. Linux has ruled supercomputing for years, but it wasn't always that way.
First Unix, and now Linux, has ruled supercomputing.
When the first Top500 supercomputer list was compiled in June 1993, Linux was just gathering steam. Indeed, in 1993, the first successful Linux distributions, Slackware and Debian, were only just getting off the ground.
What happened next, as reported in The Linux Foundation's forthcoming report, 20 years of Top500.org Supercomputer Data Links Linux With Advances in Computing Performance, was that "after first appearing on the list in 1998, Linux has consistently dominated the top 10 over the past decade and has comprised more than 90 percent of the list since June 2010."
Before Linux made its move, Unix was supercomputing's dominant operating system. Since 2003, the operating system share by performance on the Top500 list has undergone a complete flip, from 96 percent Unix to 96 percent Linux. By 2004, Linux had taken the lead for good.
According to The Linux Foundation, "Linux [became] the driving force behind the breakthroughs in computing power that have fueled research and technological innovation. In other words, Linux is dominant in supercomputing, at least in part, because it is what is helping researchers push the limits on computing power."
The Foundation believes this has happened for two reasons. First, since most of the world's top supercomputers are superscalar research machines built for specialized tasks, each supercomputer is a standalone project with unique characteristics and optimization requirements. Thus, it's not affordable for anyone to develop a custom operating system for each system. With Linux, however, research teams can easily modify and optimize Linux for the one-off, groundbreaking designs that characterize the modern generation of supercomputers.
And, just as importantly, "The licensing cost of a custom, self-supported Linux distribution is the same, whether you're using 20 nodes or 20 million nodes." Thus, "by tapping into the vast open-source Linux community, projects had access to free support and developer resources to help keep developer costs on par with, or below, those of other operating systems."
The result has been supercomputers that are faster than ever. By total RMax (a supercomputer's maximum achieved performance on the Linpack benchmark), supercomputer performance has outpaced Moore's Law (the number of transistors incorporated in a chip approximately doubles every 24 months) by doubling roughly every 14 months. At the top end, supercomputing is progressing at an even more rapid rate: the RMax of the fastest supercomputer on the Top500 list has grown by a factor of more than 500,000, from the CM-5's 59.7 gigaflop/s in 1993 to the Tianhe-2's 33.86 petaflop/s in 2013.
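As a quick sanity check on those figures, a few lines of Python recover the implied doubling rate from the two endpoints quoted above (note the 14-month figure in the article describes the list's combined performance; this sketch uses only the two single-machine data points, so the exact doubling time it produces is an inference, not a number from the report):

```python
import math

# Endpoints quoted in the article, in flop/s.
rmax_1993 = 59.7e9    # CM-5, fastest machine on the first list (June 1993)
rmax_2013 = 33.86e15  # Tianhe-2, fastest machine in 2013

factor = rmax_2013 / rmax_1993          # total growth over 20 years
doublings = math.log2(factor)           # how many times performance doubled
months_per_doubling = (20 * 12) / doublings

print(f"growth factor: {factor:,.0f}x")
print(f"doublings over 20 years: {doublings:.1f}")
print(f"months per doubling: {months_per_doubling:.1f}")
```

The growth factor comes out to roughly 567,000x, or about one doubling every 12.6 months, which is consistent with the article's claim that the top end is moving even faster than the ~14-month doubling of the list as a whole.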
Therefore, The Linux Foundation concluded, "By isolating RMax by operating system using the past 20 years of Top500 data, it's clear that Linux is not only responsible for supporting the majority of supercomputers today, but is a driving force behind the disproportionate growth in supercomputing capacity over the past decade. In continuing to drive progress and innovation in computing, Linux is also helping to explore the mysteries of the universe and solve our toughest problems."
I can only agree with these conclusions.
I wanna get me a windoze supercomputer - imagine how fast office is going to run on that baby! /JK
Supercomputers have become massively parallel, with hundreds or even thousands of CPUs. Legacy software providers typically set license terms according to the number of CPUs, so software license fees could get prohibitively expensive.
Yes, Linux can be easily customized. But if you are building a massively parallel system with nodes of 2-16 CPUs, you really don't need much customization: the kernel already has everything you need. You might need to develop or tweak a device driver for your interconnect, but that's easy to do, and possible on legacy OSes as well.
You are right. I work with several HPC labs. Years ago they wanted to go to a supported "UNIX", from DEC's Tru64 Unix to HP-UX or whatever. In some cases, such as DEC, they could get the software for a minimal fee, but then it was on proprietary hardware. Well, from Berkeley to MIT (with the help of taxpayer money), the big centers started porting to open source and forcing hardware manufacturers to make their hardware with Linux drivers. It is all about lowering the cost. At first it was pretty buggy, but for the most part it runs now.
BTW, several years ago Microsoft GAVE quite a few centers hundreds of thousands of dollars to buy Windows hardware and run Microsoft code. I know of a center that took them up on it; no one EVER did a project on that Microsoft cluster.
Bump FYI !
They mentioned licensing, but just glanced over it. It's no wonder there is a move to Linux; try building a super-scalar online application with a DB on the back end... Licensing on Oracle/SQL will kill your ROI.
The database and the OS are two different things—we run Oracle on Linux servers here. The database licensing has nothing to do with the move to Linux.