HP Project Moonshot hurls ARM servers into the heavens ( Just shown on Fox Business Network - small)
Posted on 02/22/2013 2:34:30 PM PST by Ernest_at_the_Beach
Hewlett-Packard might have been wrestling with a lot of issues as CEOs and their strategies come and go, but the company's server gurus know a potentially explosive business opportunity when they see it.
That is why HP has put together a new hyperscale business unit inside of its Enterprise Server, Storage, and Networking behemoth and is getting out the door first with a server cluster that makes use of the EnergyCore ARM RISC server chip just announced by upstart Calxeda.
The hyperscale server effort is known as Project Moonshot, and the first server platform to be created under the project is known as Redstone, after the surface-to-surface missile created for the US Army, which was used to launch America's first satellite in 1958 and Alan Shepard, the country's first astronaut, in 1961.
Moonshot, of course, refers to NASA's Apollo project to put a man on the moon by the end of the 1960s, and there will no doubt be a lot of people calling this HP's Project Moneyshot, referring to something just a bit different. But HP's goal with Project Moonshot is larger than just making some money peddling alternatives to x86 chips for cranky Web operators who want to do more work for less money and with less juice. So don't get the wrong idea that Moonshot is just about ARM.
"This is an extension of, not a replacement of, our ProLiant and Integrity server lines," says Paul Santeler, general manager of the hyperscale business unit within the Industry Standard Servers and Software division at HP.
And while HP was not in a mood to talk specifics, Santeler said that Moonshot would include super-dense servers based on Intel low-power Xeon and Atom chips and Advanced Micro Devices low-power x86 processors as well as multiple suppliers of ARM-based server chips.
HP is not interested in locking itself into one ARM supplier any more than it can afford to depend on just Intel or AMD alone. In this world, with technical, economic, and natural disasters, you need to at least dual-source key components, and just as importantly, different ARM server chip variants are going to be good at different things.
As we have reported elsewhere, the EnergyCore chips are one of a handful of ARM processors that have been expressly designed for hyperscale server workloads. The ECX-1000 processors that are the first in the EnergyCore products are 32-bit chips that will come in two-core and four-core variants.
They include memory, I/O, and storage controllers and an embedded Layer 2 switch fabric on the chip, which means you can just wire a 4GB DDR3 memory stick to one, slap on some I/O ports, plug these babies into a passive backplane, and you have interconnected server nodes that take the place of rack servers and top-of-rack switches.
The EnergyCore Fabric Switch embedded on each chip can implement 2D torus, mesh, fat tree, and butterfly tree topologies and scale across 4,096 sockets (each socket is a server node, since Calxeda is not doing cache coherency across sockets).
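To make the 2D torus option concrete, here is a minimal sketch (illustrative only, not Calxeda's actual fabric or routing code) of wrap-around neighbor addressing in a 64 x 64 torus, which gives exactly the 4,096-socket maximum the fabric switch is quoted as supporting:

```python
# Illustrative sketch (not Calxeda's fabric firmware): neighbor
# addressing in a 2D torus of 64 x 64 = 4,096 sockets, the maximum
# fabric size quoted for the EnergyCore Fabric Switch.
SIDE = 64  # 64 * 64 = 4,096 server nodes

def torus_neighbors(x, y, side=SIDE):
    """Return the four wrap-around neighbors of node (x, y)."""
    return [
        ((x - 1) % side, y),  # west (wraps to the far column)
        ((x + 1) % side, y),  # east
        (x, (y - 1) % side),  # north (wraps to the far row)
        (x, (y + 1) % side),  # south
    ]

# A corner node still has four neighbors because the torus wraps:
print(torus_neighbors(0, 0))  # [(63, 0), (1, 0), (0, 63), (0, 1)]
```

The wrap-around links are the point of a torus over a plain mesh: every node has the same degree, so no edge node becomes a routing bottleneck.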
The original Calxeda reference design from last year was a 2U rack-mounted chassis that crammed 120 processors (and hence server nodes) into that metal box. With the production-grade ECX-1000 processors, Calxeda has put together a four-node server card with four memory slots and PCI-link interconnects so it can snuggle into a passive backplane to get power and talk to its ARM peers and make a network.
To make the Redstone, HP took a half-width, single-height ProLiant tray server and ripped out just about everything but the tray. In goes the passive backplane that the Calxeda EnergyCards plug into, and HP can cram three rows of these ARM boards, with six per row, for a total of 72 server nodes, in a half-width 2U slot, like this:
An HP Redstone server tray crammed with Calxeda ARM servers
The trays slide into the 4U version of the ProLiant SL6500 chassis, and you can put four of these trays in the chassis thus:
HP Redstone SL6500 chassis fully armed and dangerous to 32-bit parallel workloads
That gives you 288 server nodes in a 4U rack space, or 72 servers per rack unit. That's 20 per cent more server density than the alpha test machine from Calxeda could do earlier this year with very early samples of its ARM chips.
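The density figures above follow directly from the card and tray counts; a quick back-of-the-envelope check (numbers from the article):

```python
# Sanity check of the Redstone density figures quoted above.
cards_per_row = 6       # EnergyCards per row in a tray
rows_per_tray = 3
nodes_per_card = 4      # each Calxeda EnergyCard carries four server nodes
trays_per_chassis = 4   # trays in the 4U SL6500 chassis
chassis_height_u = 4

nodes_per_tray = cards_per_row * rows_per_tray * nodes_per_card
nodes_per_chassis = nodes_per_tray * trays_per_chassis
print(nodes_per_tray)                            # 72 nodes per tray
print(nodes_per_chassis)                         # 288 nodes per chassis
print(nodes_per_chassis // chassis_height_u)     # 72 servers per rack unit
```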
That SL6500 chassis in the Redstone system has three pooled power supplies that can back each other up and keep the nodes going in the event one of them goes the way of all flesh. The system has eight cooling fans. Each tray has four 10Gb/sec links that come off the internal EnergyCore Fabric Switch.
All of these ports can be cross-connected using 10Gb/sec XAUI cables, and scaled across as many as 4,096 sockets. (By the way, 4,000 servers is pretty much the upper scalability limit of a Hadoop cluster these days.)
However, the recommended configuration at first will be to link the 72 four-node EnergyCards in a single SL6500 to each other with the integrated fabric switch, and then glue multiple SL6500s to each other using a pair of 10GE top-of-rack switches.
In effect, the SL6500 is the new rack, with an integrated top-of-rack switch, and the two external 10GE switches are akin to an end-of-row switch that usually links multiple racks to each other.
That 288 server count on the Redstone system assumes that you are going to network out to external disk arrays, but you can sacrifice some servers in the trays and plug in up to 192 solid state disks or 96 2.5-inch disk drives into an enclosure. The SSD and disk drive cartridges plug into SATA ports on the EnergyCard and draw their power from the backplane in the tray.
The sales pitch for the Redstone systems, says Santeler, is that a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.
A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m. The big, big caveat is, of course, that you need a workload that can scale well on a modestly clocked (1.1GHz or 1.4GHz), four-core server chip that only thinks in 32-bits and only has 4GB of memory.
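The savings HP is claiming can be reduced to ratios from the figures quoted in the pitch (a sketch using the article's numbers, not HP data):

```python
# Redstone vs. traditional Xeon cluster, using the figures quoted above.
redstone = {"nodes": 1600, "cables": 41,   "kw": 9.9, "cost_musd": 1.2}
xeon     = {"nodes": 400,  "cables": 1600, "kw": 91,  "cost_musd": 3.3}

power_saving = 1 - redstone["kw"] / xeon["kw"]
cost_saving  = 1 - redstone["cost_musd"] / xeon["cost_musd"]
cable_ratio  = xeon["cables"] / redstone["cables"]

print(f"power saved: {power_saving:.0%}")   # power saved: 89%
print(f"cost saved:  {cost_saving:.0%}")    # cost saved:  64%
print(f"cabling reduced by a factor of {cable_ratio:.0f}")
```

Note that the 89 per cent power saving computed here matches the figure HP would later quote for Moonshot; the cost figure works out to roughly 64 per cent, in line with HP's 63 per cent claim.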
"There are a lot of customers that I have talked to who think 32-bit is just fine," says Santeler. The chips will probably be good at web serving, web caching, and big data chewing workloads where processing data in smaller bits is the norm, not the exception.
That said, HP doesn't seem to be in a big hurry to commercialize the Redstone machines, but it is getting machines out the door as Calxeda starts sampling its chips, which is about as early as it can.
The ECX-1000 chips from Calxeda are expected to sample late this year, with volume shipments in the middle of next year. HP's Redstone machines using these chips will be available in limited quantities for a limited number of customers in the first half of 2012.
And here's the kicker: Santeler is saying that HP is making no commitments at this time about when it will ship as a generally available product, or even if it will. (I think HP is just being overcautious and dramatic.)
HP is putting Redstone machines into DiscoveryLabs around the world, starting in a data center in Houston, Texas, where its PC and server factory is, and will give potential customers a chance to upload their programs to the Redstone servers and put them through the 32-bit ARM paces. And for now, that means doing so on Canonical Ubuntu or Red Hat Fedora Linux, which support ARM chips and which have been tweaked to support the Calxeda chips.
Incidentally, HP has no plans at this time for putting ARM processors in its general purpose ProLiant DL rack, ProLiant BL blade, or ProLiant ML tower servers. ®
Although this report says it is based on Intel's Atom chip....
The executive on the Fox Business network with the sample said based on ARM....
Will do more checking ....!!
This week, along with its financial results, Hewlett-Packard has announced the approximate launch date for its first power-efficient hyperscale server. Codenamed Moonshot, the new platform, based on Intel's Atom chips, could go on sale as early as April.
According to HP, the Moonshot architecture can house thousands of servers in a single rack and provides energy savings of up to 89 percent, when compared against traditional x86 servers. It also takes up 94 percent less space and costs 63 percent less.
Announced in 2011, Project Moonshot was specifically designed for companies that deliver web services, social media and simple content, since they place the emphasis on server cost and power requirements instead of pure computational performance.
Moonshot servers will become some of the first to commercially test the concept of hyperscale high density systems which contain a very large number of relatively underpowered processors, as opposed to the big, fast, hot and power-hungry chips traditionally used in the data centre.
HP said that in addition to other advantages, Moonshot can also reduce the overall complexity of IT infrastructure, since it doesn't require as many cables, switches and peripheral devices.
In an earnings call on Thursday, CEO Meg Whitman confirmed the new Atom-based servers will be available next quarter.
Originally, HP intended to base the platform around highly efficient ARM chips made by Calxeda. However, last year it had a change of heart, opting to equip the first Moonshot models with specially designed Intel silicon called Centerton, which was announced in April 2012.
The resulting platform, codenamed Gemini, was expected to ship by the end of 2012, which means HP has fallen behind its own schedule. Gemini with ARM processors on board hasn't been cancelled, and could appear later in the year.
Many analysts agree that spurred by the popularity of ARM chips, power-efficient hyperscale servers will disrupt the IT infrastructure market, and could shift the balance of power away from Intel.
"A lot of hyperscale precursors are starting to appear now, but it is just a dress rehearsal before the release of the 64-bit ARM platform. This will happen in about 18 months, and by this point, ARM will be in good shape to take off in the data centre," Chris Tyler, Seneca College's industrial research chair for open source technology, told TechWeekEurope in October.
AMD later announced that it plans to start shipping its first 64-bit ARM-based servers by 2014.
Hewlett-Packard's first low-power server for hyperscale computing environments, developed under a project it calls Moonshot, will go on sale next quarter, CEO Meg Whitman said on Thursday. Project Moonshot is an effort to build low-power servers based on alternatives to Intel's Xeon processors for use in mega data centers like those operated by Facebook and Google.
From the PDF:
I'll not ever purchase an AMD again for desktop use.
HP claims the server consumes up to 89 percent less energy than a traditional x86 server
HP announced the project in 2011, and the first server platform it talked about, known as Redstone, was to be based on an ARM-type processor from Calxeda.
But the latest release of Fedora 18 via Korora seems to have fixed that issue.
Running on it at the moment,...the software is beta from Korora.
But I haven't seen issues.
I am running the AMD Catalyst Driver.
I'm currently running Suse on it, but I tried everything to get the sound to work under Linux (even slackware), no go. The more digging I did about it, the less happy I became with AMD.
I knew I'd have to write my own touchscreen apps for it, but at least I had the raw data to work with. With the sound? I can't find out anything inside their little black box. I'd write my own driver for it, but the info wasn't there as of 6 months ago.
Sounds like HP did something ....does the mobo have an audio chip on it.....?
HP used to be friendly to Linux,...but the last HP printer I bought is not supported, as far as I can find.