Free Republic
Browse · Search
News/Activism
Topics · Post Article

To: neverdem

I’ve often wondered why they never put two or four heads on a single hard drive platter. You would essentially have two or four hard drives in one (depending on the number of heads). Then each head would be sequentially synchronized so that one head would read/write one bit, the next head the next bit, and so on. So if you have 4 heads, there would be 4 separate tracks each reading and writing slightly out of phase. You would quadruple your speed with 4 heads without increasing the RPMs of the platter. If a hard drive had 4 platters and 4 heads per platter, you’d have 16 times the speed.


9 posted on 02/16/2012 8:57:33 PM PST by mamelukesabre


To: mamelukesabre
I’ve often wondered why they never put two or four heads on a single hard drive platter.

We did it 30 years ago. There were four of them, and it boosted the speed 4X over a single one. But, even mainframe disk devices were really slow back then.

However, it's expensive. And you can accomplish the same thing with RAID-0 and multiple drives. A properly implemented RAID driver will read all the drives in parallel and assemble the results into a single data stream. Of course, each disk has to be on an independent SATA/SCSI/IDE channel.
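A toy sketch of the striping idea in Python (not a real RAID driver; the stripe size and the in-memory "drives" are made up for illustration, and real controllers work on disk blocks, not byte strings):

```python
# RAID-0 striping sketch: data is split into fixed-size chunks and
# dealt out round-robin across the drives, so reads can proceed in
# parallel and be reassembled into one stream.

STRIPE_SIZE = 64 * 1024  # 64 KB stripe unit, a common default

def stripe_write(data: bytes, drives: list) -> None:
    """Split data into stripe-sized chunks, round-robin across drives."""
    for i in range(0, len(data), STRIPE_SIZE):
        chunk = data[i:i + STRIPE_SIZE]
        drives[(i // STRIPE_SIZE) % len(drives)].append(chunk)

def stripe_read(drives: list) -> bytes:
    """Reassemble the original stream by taking one chunk per drive
    per round, in the same order the writer dealt them out."""
    chunks = []
    for j in range(max(len(d) for d in drives)):
        for d in drives:
            if j < len(d):
                chunks.append(d[j])
    return b"".join(chunks)
```

With 4 "drives" and independent channels, each round of `stripe_read` could fetch 4 chunks at once, which is where the 4X throughput comes from.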

The truth is that for most people, the seek and rotational latency is what really limits throughput. Seek latency is waiting for the head to move into position, and averages around 9 milliseconds. Rotational latency is waiting for the disk to spin into position, and averages 4.2 milliseconds for a 7200 RPM drive.
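The rotational figure falls straight out of the RPM: on average the platter has to spin half a revolution before your sector arrives. A quick check in Python:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    # One revolution takes 60/rpm seconds; on average you wait half a turn.
    return (60.0 / rpm) / 2 * 1000

print(avg_rotational_latency_ms(7200))   # ~4.17 ms, the figure above
print(avg_rotational_latency_ms(5400))   # ~5.6 ms for a laptop drive
print(avg_rotational_latency_ms(15000))  # 2.0 ms for an enterprise drive
```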

Compare that to how long it takes to actually move data. Drives with the latest SATA 6.0 Gbit/sec interface can sustain about 150 MByte/sec, but that is limited by the density of the bits on the drive platter.

The cluster size on your Windows NTFS file system is typically 4 kilobytes and rarely larger than 32 kilobytes. Both the operating system and the disk drive do some read-ahead caching and will read more than you request at one time, but let's use 32K as an example. At a rate of 150 MByte/sec, a 32 KByte transfer requires only about 220 microseconds, roughly one-twentieth of the average rotational latency. You would have to read about 630 KBytes at one time just to split the average rotational latency and transfer time 50/50, and that's not even considering the seek latency.
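The arithmetic above, worked out in Python (assumed figures: 150 MB/sec sustained rate, 4.2 ms average rotational latency):

```python
# Back-of-the-envelope: transfer time vs. rotational latency.
RATE_BPS = 150e6        # sustained transfer rate, bytes/sec
ROT_LATENCY_S = 4.2e-3  # average rotational latency, seconds (7200 RPM)

transfer_s = 32 * 1024 / RATE_BPS     # time to move one 32 KB cluster
print(transfer_s * 1e6)               # ~218 microseconds

print(ROT_LATENCY_S / transfer_s)     # latency is ~19x the transfer time

# Read size at which transfer time equals the rotational latency:
breakeven_bytes = ROT_LATENCY_S * RATE_BPS
print(breakeven_bytes / 1000)         # 630 KB
```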

Are you seeing the problem? Very few people read a huge amount of data at one time, repeatedly, and would benefit from a faster sustained transfer rate. And those that do can construct a RAID array much cheaper.

Small random reads are much more common, especially on a computer with a virtual memory system. And that's why operating systems cache disk data in unused RAM, and even individual disks read ahead and store data in their own internal cache RAM (about 32 MBytes these days). When you request that data, it gives you the cached copy instead, avoiding the seek and rotational latency delays.

23 posted on 02/17/2012 7:22:46 AM PST by justlurking (The only remedy for a bad guy with a gun is a good WOMAN (Sgt. Kimberly Munley) with a gun)


