To: 2ndDivisionVet
Just as an exercise in understanding the nature of the datacenter we are discussing, let's take a look at the current state of systems processing and storage. We'll have to work with some "givens," and these can only be guesstimates based on the rumors we are hearing. We'll also have to assume that I have a certain familiarity with large-scale databases.

Rumored Data size = 5 Zettabytes (5 × 10^21 bytes)

It would be an exercise in futility to simply store all of this information as one big old flat file, so we can safely assume that databases will be constructed. It would also be silly to assume that we can throw any old hardware at a database of this size, so we can also assume that whatever hardware is used will be specifically engineered as a database engine – built and tuned specifically for the database(s) it will be processing. There are two companies that I am aware of that build commercially available, large-scale, database-specific hardware at this time – IBM with its Power Systems and Oracle with its Exadata. Because the Power Systems can also perform general-purpose computing, they may be considered more versatile, but for that same reason they may not be appropriate vehicles for this colossal database enterprise.

Because this is simply an exercise, and I have to go mow the lawn, let's just take a look at Exadata and its size requirements. According to Oracle's own marketing folks, it would take 3 Exadata racks to store one petabyte of data. When we extend this to the projected 5 zettabytes, we are looking at a truly huge amount of iron (1024 terabytes = 1 petabyte, 1024 PB = 1 exabyte, 1024 EB = 1 zettabyte). So then 3 × 1024 × 1024 × 5 yields 15,728,640 Exadata racks. Now each of those Exadata racks will hold 56 PCI flash cards, which brings our number of monitorable and administrable pieces of hardware up to 880,803,840.
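
For anyone who wants to check my arithmetic, here's a throwaway Python sketch using only the figures above (3 racks per petabyte, 56 flash cards per rack, 5 zettabytes of rumored data):

# Back-of-the-envelope tally of racks and flash cards from the figures above.
RACKS_PER_PB = 3            # Oracle marketing: 3 Exadata racks per petabyte
CARDS_PER_RACK = 56         # PCI flash cards in one Exadata rack
PB_PER_ZB = 1024 * 1024     # 1024 PB per exabyte, 1024 EB per zettabyte
DATA_ZB = 5                 # rumored dataset size in zettabytes

racks = DATA_ZB * PB_PER_ZB * RACKS_PER_PB
cards = racks * CARDS_PER_RACK

print(f"Exadata racks needed:  {racks:,}")    # 15,728,640
print(f"PCI flash cards total: {cards:,}")    # 880,803,840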

From a systems point of view, the administration of 880 million individual flash cards in a single building is…well…governmental in nature. Even if we apply a compression factor of 10 or 15 to the archived (older) data, we are still talking about something on the order of 59 to 88 million cards that can fail. And they do fail with great regularity.
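
Extending the same sketch, those compression factors shrink the card count but hardly make it manageable:

# Apply the 10x and 15x compression factors to the 880,803,840-card total
# from the previous sketch.
total_cards = 880_803_840

for compression in (10, 15):
    print(f"{compression}x compression leaves ~{total_cards // compression:,} cards")
# 10x leaves ~88,080,384 cards; 15x leaves ~58,720,256 cards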

Now, based on the scale of what we are seeing, we can raise a few questions. Let's assume (and this is a reasonably good assumption) that this giant system cannot have 100% uptime – something this big probably cannot even produce five nines. So let's say four nines: 99.99% availability, which leaves 0.01% unavailability for whatever reason – physical or logical. On a system containing tens of millions of identifiable and vulnerable pieces of hardware. On a system presumably storing hundreds of millions to billions of data points each day. On a system presumably processing millions of inquiries each day, a subset of which needs eyes on by an intelligence analyst.
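
Strictly speaking, availability and hardware failure rate are two different things, but taking that 0.01% at face value, here's a last bit of the sketch (the 88-million-card figure is the 10x-compressed count from above):

# What four nines (99.99% availability) allows in downtime, and what 0.01%
# means when applied, loosely, to the card population.
MINUTES_PER_YEAR = 365.25 * 24 * 60            # ~525,960 minutes

downtime = MINUTES_PER_YEAR * (1 - 0.9999)     # unavailability budget
print(f"Allowed downtime at four nines: ~{downtime:.0f} minutes per year")  # ~53

cards = 88_080_384                             # 10x-compressed card count
print(f"0.01% of the card population: ~{cards * 0.0001:,.0f} cards")        # ~8,808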

ALL monitored and administered by government workers.

I ask you – what could go wrong?

On the other hand, we have skunk works for aviation. I think it is safe to assume we have skunk works for information processing. Our state-of-the-art, commercially available systems may be several iterations behind...

Or perhaps Moore's Law itself has been rendered obsolete. Cue Skynet. Then too, maybe I'm full of it.

53 posted on 06/09/2013 9:18:04 AM PDT by Ol' Sox (Research, Resolve, Remediate, Repeat)


To: Ol' Sox
Our state-of-the-art, commercially available systems may be several iterations behind...

I think perhaps Google® would disagree.

118 posted on 06/09/2013 9:40:27 PM PDT by Elsie (Heck is where people, who don't believe in Gosh, think they are not going...)