ARM, AppliedMicro and Red Hat Aim to Develop 64-bit Server Platform.
Posted on 10/29/2012 7:47:40 PM PDT by Ernest_at_the_Beach
ARM, Red Hat, and Applied Micro Circuits Corp. on Thursday announced a collaboration that aims to develop a 64-bit server design platform to lower the total cost of ownership (TCO) of cloud computing, data centers and enterprises. The platform will enable server makers to quickly adopt the ARM hardware and software architecture in their products.
"We are excited to support AppliedMicro's innovation as it develops 64-bit ARM-powered server system-on-chips. ARM's business model is centered on partnership, and this collaboration is a further example of ARM ensuring that a compelling software ecosystem coalesces. The ecosystem will enable the industry to take full advantage of the device innovation and integration underway for deployment in the server market," said Tom Cronk, deputy general manager of the processor division at ARM.
The server platform will be compliant with the ARMv8 architecture and will run the Red Hat Linux operating system. It will be based on AppliedMicro's X-Gene server-on-a-chip (SoC), which has been purpose-built for cloud and enterprise server deployment to deliver unprecedented low power, high performance and integration, with the goal of changing the way servers are designed for cloud, data center and enterprise applications.
(Excerpt) Read more at xbitlabs.com ...
Man the system is slow,....maybe it needs an ARM server in front of things.
Can we get half a dozen or so for FR?
The chip vendor who can deliver scalable performance with less power is going to win in the datacenter in the long run.
AMD never wins these battles with Intel, but their innovation spurs Intel to get out of their comfy little world and actually do something new.
Intel released “Atom”, and as one who has been in the computer industry for nearly 30 years (and as a proper disclaimer, I must admit I now work for them)....any who count Intel out always wind up in the tech graveyard. The resources of Intel are nothing short of phenomenal, even if they’re a bit slow to react to certain industry trends at times. They always “wake up”....and omg do they catch up and fast.
Let’s just say I’d never, ever want to compete directly with ‘em.
I think back to when AMD designed high-bandwidth memory interfaces into their chips - before AMD did this, Intel had bifurcated their CPU's into a 'desktop' and 'server' product, and only the server products had high-bandwidth memory.
AMD’s use of technology that was originally designed for the DEC Alpha in the Athlon put Intel on notice that AMD was going to expose how weak the memory path was in the “desktop” chips... and Intel reacted by dispensing with the hobbling of the desktop CPU’s.
I give AMD lots of credit for innovation, but sadly, their corporate management has a talent for bitterly disappointing investors. They just don’t know how to manage their numbers with Wall Street.
Even more profound.....when Opteron embedded the memory controller directly into the proc. Kicked Intel's ass for a while, especially in my field (high performance computing, aka "HPC", aka in the older days "supercomputing").
Never again. They got caught with their trou around their ankles and will never, ever let that happen again. They’re years ahead, technologically, of the competition. That’s a fact.
Different times, different company, different approach to business. I have huge admiration for the outfit....especially their cast-in-stone principles taught to ALL employees about always doing the right thing. Keep it honest, open, above board....or be fired. I happen to like that a lot.
Doesn't Intel have 64-bit processors? Sun and IBM have had them for years. What is the Atom all about? Isn't the Itanium still Intel's best chip?
Yes, I agree. Intel has had a couple periods where they slouched off or let their focus be diverted, but on the whole, they’re a technology leader. No one without the resources of a government can now hope to go up against Intel in fab technology, and without a modern fab, there’s no way to compete with Intel’s time to market or their ability to spin and then re-spin a chip.
Their corporate culture is also pretty good, I agree. When I had to deal with them, they were upfront and honest about what they had and what they were doing. They didn’t waste our time by blowing sunshine up our backsides. One of our requirements was “more cache, less FPU” - because, quite frankly, in those days when we booted a chip in a router, we turned off the FPU if we could. We wanted it to throw an exception any time someone used floating point, so we could hunt down the n00b who put FP instructions in the code and smack them with the “clue stick.”
Intel told us rather forthrightly that they were never going to deliver a x86 chip that was quite like what we wanted in that application, so we ended up continuing with MIPS and Motorola. I was personally lobbying for PPC and the new IBM cellular chip architectures that would have allowed us to create chips with a lot less design time in our own house, but I lost that argument. I still think the PPC is one of the slickest chip architectures out there - much more slick than most other RISC chips.
Yes, Intel has the Itanium... but you’ll likely never see or use that chip architecture on a desktop.
I can explain a bit about Itanium; I’ll try to keep it high-level, since most people don’t know the gory guts of CPU or computer architecture.
First, some background:
Most CPU’s are what are known as “vertical” architectures. You execute one instruction... and then sometime later in CPU clock time, you fetch and execute another instruction. You plod along, going forward in CPU time, fetching, decoding and executing instructions.
OK, that's the old days. Now, modern CPU's can speculatively fetch instructions out ahead of the current instruction you're executing and start setting up addresses, caching memory and pre-computing results. This is all very complicated and sexy as hell. In the Intel x86 products, this is accomplished by not having the CPU execute x86 instructions directly - the x86 instructions are effectively emulated and there is a "RISC-like" CPU instruction set, which Intel doesn't divulge outside the company, which is being used to break apart x86 instructions into "micro-operations" which are then scheduled out ahead of the current instruction. As I said, very sexy, very complicated. This pre-execution effectively allows for instruction-level parallel computing in the CPU. We're talking Erin Gray in the second season of Buck Rogers sexy right here.
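To make the micro-op idea concrete, here's a toy sketch of the decomposition step. Intel's actual decoder and internal micro-op set are proprietary and vastly more complex; every mnemonic and micro-op name below is invented for illustration.

```python
# Toy sketch: break one "complex" instruction into simpler micro-ops,
# the way an x86-style front end decomposes instructions internally.
# All mnemonics ("ADDMEM", "LOAD", etc.) are made up for this example.

def decode_to_uops(instruction):
    """Map one complex instruction string to a list of simple micro-operations."""
    op, *args = instruction.split()
    if op == "ADDMEM":  # hypothetical read-modify-write: ADDMEM addr reg
        addr, reg = args
        return [("LOAD", "tmp", addr),    # fetch the memory operand
                ("ADD", "tmp", reg),      # do the arithmetic
                ("STORE", addr, "tmp")]   # write the result back
    return [(op, *args)]                  # simple ops pass through unchanged

uops = decode_to_uops("ADDMEM 0x1000 r1")
```

Once the instruction is broken into micro-ops like this, the hardware scheduler is free to issue them out of order, subject only to their data dependencies - that's where the instruction-level parallelism comes from.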
Why did they do this? Because the x86 instruction set became a barrier to higher performance, just as the VAX-32 and IBM S/370 instruction sets became barriers to higher performance in those architectures. The VAX is pushing up daisies, and IBM has done their own very spooky stuff to the 370.
But... there’s another way to create a CPU with parallelism - where the CPU is doing more than one thing at once, and this is called “horizontal” architecture.
Insert spooky theme music here...
Instead of having small instructions which we stack in a succession in clock time, we can stack multiple instructions into one word, make each instruction complete within one clock cycle and allocate multiple decode/ALU blocks against each part of the word that is fetched. This is commonly known as “VLIW” or “Very Long Instruction Word” architecture. I worked on one such chip at cisco... and I’m here to tell you that while they sound really suave on paper, the actual implementation makes your brain hurt.
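Here's a minimal model of what "horizontal" execution means: one long instruction word holds several independent operations, and every slot retires in the same clock cycle. This is my own toy model, not any real VLIW ISA - the slot layout and op names are invented.

```python
# Toy VLIW model: each "instruction word" is a bundle of operations,
# one per functional-unit slot, all completing in one cycle.
# Slot format and opcodes are invented for illustration.

def execute_bundle(regs, bundle):
    """Execute all slots of one VLIW bundle against register state.

    All reads happen against the state at the START of the cycle, so the
    slots are truly parallel -- one slot cannot see another slot's result.
    """
    snapshot = dict(regs)            # start-of-cycle register values
    for op, dst, a, b in bundle:
        if op == "ADD":
            regs[dst] = snapshot[a] + snapshot[b]
        elif op == "MUL":
            regs[dst] = snapshot[a] * snapshot[b]
        elif op == "NOP":
            pass                     # empty slot the compiler couldn't fill
    return regs

regs = {"r1": 2, "r2": 3, "r3": 0, "r4": 0}
# One long instruction word: an ADD and a MUL issue in the same cycle.
execute_bundle(regs, [("ADD", "r3", "r1", "r2"),
                      ("MUL", "r4", "r1", "r2"),
                      ("NOP", None, None, None)])
```

Note the snapshot: that's the concurrency headache in miniature. If two slots in the same word touch the same register or memory address, someone has to define (and the compiler has to respect) exactly which value each slot sees.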
First, there are the concurrency issues when you have multiple instructions within the word asking the memory controller or bus to setup and lock multiple addresses at the same time. Then there are compiler issues - we’ve spent 20+ years working on RISC vertical CPU compilers, and the industry has that technology down pretty well. Most vendors have pretty much quit rolling their own compilers and they just use GCC as a starting point and work on the instruction scheduling and back-end issues.
Well, with a horizontal architecture... you’re not able to neatly leverage all those years of instruction-level optimization and scheduling that have been perfected in compilers. Now you have to re-think how you optimize code in the compiler back end... and it’s a bear. It really is. The guys I worked with who were working on a compiler for the horizontal architecture chip sometimes said they thought that dropping acid might be a good way to make progress on re-thinking some of this stuff.
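The scheduling problem those compiler guys were wrestling with can be sketched as a greedy list scheduler: walk the instruction stream and pack operations into bundles, starting a new bundle whenever an op depends on something produced in the current one. This is a big simplification of my own - real VLIW schedulers also model latencies, functional-unit types, and register pressure - and the op names are invented.

```python
# Toy VLIW list scheduler: pack sequential ops into bundles of
# independent operations. Each op is a (dest, src1, src2) tuple;
# all register/temp names are invented for illustration.

def schedule(ops, slots=2):
    """Greedily pack ops into bundles of up to `slots` independent ops."""
    bundles, current, written = [], [], set()
    for dst, a, b in ops:
        # Start a new bundle if this op reads a value produced in the
        # current bundle, or if the bundle's slots are already full.
        if len(current) == slots or a in written or b in written:
            bundles.append(current)
            current, written = [], set()
        current.append((dst, a, b))
        written.add(dst)
    if current:
        bundles.append(current)
    return bundles

ops = [("t1", "x", "y"),    # t1 = x op y
       ("t2", "x", "z"),    # independent of t1 -> can share a bundle
       ("t3", "t1", "t2")]  # reads both results -> must start a new bundle
```

Even this toy version shows the pain: the quality of the generated code now depends entirely on how well the compiler finds independent ops to pack together, which is exactly the analysis that decades of vertical-RISC compiler work never had to do at this granularity.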
The net:net result in the CPU market comes down to this: Software rules the day. The huge installed base of software on the x86 architecture now crowds out nearly all other CPU architectures because the cost to replicate what you can get off the shelf by going with the x86 architecture is huge. I’m talking billions of dollars huge.
HP and Oracle just finished litigating an interesting case: Oracle dropped support for the Itanium - and HP sued them over this, claiming breach of contract. HP just won this case, and the court ruled that Oracle has to keep producing software to support Itanium for HP-UX customers for a while to come.
Microsoft and Red Hat (Windows and Linux, respectively) have announced end-of-life for their development of OS support for Itanium.
With x86-64 as the future architecture, there's no reason to keep banging heads against the VLIW rock wall. Itanium is going to become another legacy system that disappears into the mists of computing history soon... much as the iAPX-432 architecture chipset that Intel produced back in the 80's did.
Xeon’s, Jack. Sorry, but you’re way behind the times. Itanium is the past, in a big way. Currently Sandy Bridge....soon Ivy Bridge. Awesome 64-bit procs. Intel has had 64-bit procs for years.....don’t take this as mean or an attack....but I’m guessing you’ve been in the biz, so where’ve you been the last decade?