Posted on 07/01/2011 11:41:34 PM PDT by LibWhacker
IBM researchers have made a breakthrough in a new kind of memory chip that can record data 100 times faster than today's flash memory chips. That means scientists are one step closer to creating a universal memory chip that is fast, permanent, and has lots of capacity.
If they really work as billed, these multi-bit phase-change memory chips could transform enterprise computing and storage by around 2016, according to IBM. The technology could lead to lower-cost, faster, and more durable chips for storing applications on consumer devices such as mobile phones, and for cloud storage. It could also benefit enterprise data storage applications.
Scientists at the research labs of Big Blue have demonstrated that phase-change memory can reliably store multiple bits of data per cell for extended periods of time.
The technology moves science one step closer to finding a universal non-volatile memory. Dynamic random access memory used for main memory in computers is fast, but it can only store data while the power is turned on. Flash memory can store data permanently, even when power is turned off, but it is slower and has less capacity than DRAM, and it's also more expensive. Hard disk drives have a lot of capacity, but they're slower. A universal memory would theoretically be able to store and fetch data quickly, keep it permanently, and have a low cost and high capacity.
Computers with a universal memory could boot instantly (like Apple's flash-based MacBook Air) and have lots of storage capacity and fast performance that would be useful in enterprise computing. Phase-change memory can retrieve and write data 100 times faster than flash, enables high storage capacity, and does not lose data when the power is off. It is also durable and can endure 10 million write cycles, compared to 30,000 cycles for enterprise flash memory and 3,000 cycles for consumer flash memory.
People shouldn't get too excited until IBM can prove that it can mass produce the technology, since phase-change memory has been under research for a long time. The IBM scientists say they used advanced modulation coding techniques to get rid of a problem known as short-term drift, which changes resistance levels in chip circuitry over time and causes data read errors. Up to now, reliable retention of data has only been possible for single-bit-per-cell phase-change memory. Now IBM has demonstrated that multi-bit-cell phase-change memory can be made reliably.
The phase change memory leverages the change in resistance that occurs in the material, which is an alloy of various unnamed elements, when it changes from a crystal phase to an amorphous phase. The crystalline phase has low resistance, while the amorphous has high resistance. That can be changed by applying electrical voltage or pulses. Those changes can designate a switch from a one to a zero, which are the basic bits used in digital memory systems. Depending on the amount of voltage applied, part or all of the material will undergo a phase change; that means that multiple bits can be stored within a single memory cell, making the material cheaper to produce.
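To make the multi-bit idea concrete, here is a sketch of how reading such a cell amounts to quantizing a measured resistance into one of several bands. The resistance thresholds below are made up for illustration; they are not IBM's actual values.

```python
# Sketch: decoding a 2-bit (4-level) phase-change cell.
# THRESHOLDS are hypothetical band boundaries in ohms, running from
# fully crystalline (low resistance) to fully amorphous (high resistance).
THRESHOLDS = [10_000, 100_000, 1_000_000]

def read_cell(resistance_ohms: float) -> int:
    """Quantize a measured resistance into a 2-bit symbol (0-3).
    Lower resistance (more crystalline) -> lower symbol value."""
    symbol = 0
    for t in THRESHOLDS:
        if resistance_ohms > t:
            symbol += 1
    return symbol

print(read_cell(5_000))      # lowest band  -> 0
print(read_cell(50_000))     # -> 1
print(read_cell(2_000_000))  # highest band -> 3
```

Four distinguishable resistance bands mean each physical cell carries two bits instead of one, which is where the cost advantage comes from.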
It's sort of like being able to put four people into a single family home rather than just one. The housing cost per person goes down. IBM is able to put four distinct bits into one cell. IBM scientists made the writing process more reliable by writing and then measuring the accuracy of the write. If it isn't accurate, the write action is performed again until it is accurate, said Haris Pozidis, Manager of Memory and Probe Technologies at IBM Research, Zurich.
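The write-then-verify loop Pozidis describes can be sketched as follows. The `SimCell` class and its behavior (first pulse undershoots, later pulses land on target) are hypothetical stand-ins for the analog circuitry; the retry structure is the point.

```python
class SimCell:
    """Toy model of a 4-level cell whose first pulse undershoots the
    target, then lands correctly. Real programming is analog and noisy;
    this deterministic behavior is purely illustrative."""
    def __init__(self):
        self.level = 0
        self.pulses = 0

    def apply_pulse(self, target: int) -> None:
        self.pulses += 1
        # First pulse falls one level short; subsequent pulses succeed.
        self.level = target if self.pulses > 1 else max(0, target - 1)

    def measure_level(self) -> int:
        return self.level

def write_cell(cell: SimCell, target_level: int, max_retries: int = 16) -> bool:
    """Program-and-verify: pulse the cell, read it back, and retry
    until the stored level matches the target (or retries run out)."""
    for _ in range(max_retries):
        cell.apply_pulse(target_level)
        if cell.measure_level() == target_level:
            return True
    return False  # report an unprogrammable cell

cell = SimCell()
print(write_cell(cell, 3))  # True: the second pulse lands on target
```

The trade-off is extra write latency for each verify-and-retry pass, in exchange for reliably packing four levels into one cell.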
The latency, or time it takes to get a task done, is 10 microseconds, which is 100 times faster than the fastest flash memory on the market today. But to be really useful, the phase change memory will have to be able to beat the flash chips that are on the market in 2016. IBM also used an advanced modulation coding technique that enabled better read accuracy. Reading and writing are the basic functions of a memory chip, in addition to storing data.
The phase change memory test chip was created by IBM scientists located in Burlington, Vermont; Yorktown Heights, New York and in Zurich. The experiment for retaining data has been under way for more than five months. IBM presented a paper on the topic at the 3rd IEEE International Memory Workshop in Monterey, Calif.
Itty Bitty Machines does it again.
I hope it can be produced economically in large capacities, a gigabyte or more; then it will be able to keep up with modern multicore processors and actually mean something.
Instant-booting a computer can’t happen until its peripherals know how to come up instantly in a known ready state, rather than taking several seconds to initialize and get in sync with the operating system.
This doesn't make sense to me. They're saying a "task" takes normal flash 1000 microseconds -- a millisecond.
But I have consumer USB flash drives that can read and write at tens of MB per second (that's a fraction of a microsecond per byte). And there are enterprise grade SATA SSD (solid-state drives) that will do hundreds of MB per second. That's continuous data rate, not burst.
So what is a "task" by IBM's definition? A write of a MB of data?
bflr
.
No idea. I was wondering the same thing.
IBM's OS/2 was far superior to Windows in design, flexibility, efficiency, reliability and security.
It still is.
Yet IBM executives had no spines to stand up to Gates and his media minions. Gerstner and his band of cowards surrendered and gave Windows the entire market (except for a few Mac and Linux devotees.)
I view this as analogous to the political situation today. With the exception of a few brave hearts, the Republicans have no spine to stand up to the Democrats like Obama, Schumer, Durbin, Pelosi, Reid and their media minions, all the while having a far superior platform.
So years later we are still suffering with insecure, bloated, clumsy Windows which we have to "upgrade" every few weeks to prevent the hackers from getting our data or worse.
And we're still suffering from massive debt and intrusive government foisted on us by 'craps and their media.
With today's technology, it shouldn't be too hard. Similar to telling a BIOS it doesn't have to do a discovery check each boot. A couple discovery tweaks for new peripherals shouldn't be too hard to incorporate. All it will require is a little more standardization.
My WAG is that they're saying that a standard task "Flip this array of bits from 0 to 1" is 100 times faster.
The amount of data processed over time is a meaningless metric because of parallel processing.
I also think that they may be playing "100 times faster" a bit fast and loose. Each individual memory cell is faster, certainly, but because it packs more data into the same amount of space, they may be using that as a multiplication factor. I.e., the actual response time may really be 12.5 times faster, but because they're changing or reading 8 bits (as opposed to 1 in standard memory), it's equal to 100 times.
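For what it's worth, the arithmetic in that guess does check out, and it is also consistent with the article's stated latencies. This is just a sanity check taking the poster's hypothetical numbers at face value:

```python
# Poster's guess: a raw per-cell speedup combined with more bits per
# operation multiplies into the headline figure. Both inputs are the
# poster's hypothetical numbers, not IBM's.
per_cell_speedup = 12.5   # assumed raw response-time improvement
bits_per_op = 8           # assumed bits read/written at once, vs. 1
print(per_cell_speedup * bits_per_op)  # 100.0

# The article's own figures also imply 100x: 10 us vs ~1000 us.
flash_latency_us = 1000
pcm_latency_us = 10
print(flash_latency_us / pcm_latency_us)  # 100.0
```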
The takeaway is that if it can perform as if it's 100 times faster, we're looking at a drastic improvement in instant-on and responsiveness. Hard disks are on the way out now anyway, and everything will be solid state at some point even without this discovery. There are certain things that disks do well, like long-term storage, but in 10+ years I don't believe hard disks will be in common usage except in old equipment.
Great for implanting under foreheads and wrists.
“IBM’s OS/2 was far superior to Windows”
Windows 3.11 but not NT. Not even close.
So how long before the Chinese have it in production?