Top 10 Largest Databases in the World
Posted on 05/07/2010 5:37:02 AM PDT by ShadowAce
We all collected things as children. Rocks, baseball cards, Barbies, perhaps even bugs -- we all tried to gather up as much stuff as possible to compile the biggest, most interesting collection we could. Some of you may even have amassed collections numbering into the hundreds (or thousands). As the story always goes, we got older, our collections got smaller, and eventually our interest died out... until now. There are currently organizations around the world in the business of amassing collections of things, and their collections number into the trillions. In many cases these collections, or databases, consist of items we use every day. In this list, we cover the top 10 largest databases in the world:
Library of Congress
Not even the digital age can prevent the world's largest library from ending up on this list. The Library of Congress (LC) boasts more than 130 million items ranging from cookbooks to colonial newspapers to U.S. government proceedings. It is estimated that the text portion of the Library of Congress alone would comprise 20 terabytes of data. The LC expands at a rate of 10,000 items per day and takes up close to 530 miles of shelf space -- talk about a lengthy search for a book.
If you're researching a topic and cannot find the right information on the internet, the Library of Congress should be your destination of choice. For users researching U.S. history, around 5 million pieces from the LC's collection can be found online at American Memory.
Unfortunately for us, the Library of Congress has no plans to digitize the entirety of its contents, and it limits borrowing privileges to Supreme Court Justices, members of Congress, their respective staffs, and a select few other government officials. However, anyone with a valid Reader Identification Card (the LC's library card) can access the collection.
Central Intelligence Agency
The Central Intelligence Agency (CIA) is in the business of collecting and distributing information on people, places and things, so it should come as no surprise that they end up on this list. Although little is known about the overall size of the CIA's database, it is certain that the agency has amassed a great deal of information on both the public and private sectors via field work and digital intrusions.
Portions of the CIA database available to the public include the Freedom of Information Act (FOIA) Electronic Reading Room, The World Factbook, and various other intelligence-related publications. The FOIA library includes hundreds of thousands of official (and occasionally ultra-sensitive) U.S. government documents made available to the public electronically. The library grows at a rate of 100 articles per month and covers topics ranging from nuclear development in Pakistan to the type of beer available during the Korean War. The World Factbook offers general information on every country and territory in the world, including maps, population numbers, military capabilities, and more.
Amazon
Amazon, the world's biggest retail store, maintains extensive records on its 59 million active customers, including general personal information (phone number, address, etc.), receipts, wishlists, and virtually any sort of data the website can extract from its users while they are logged on. Amazon also keeps more than 250,000 full-text books available online and allows users to comment and interact on virtually every page of the website, making Amazon one of the world's largest online communities.
This data, coupled with the millions of items Amazon sells from inventory each year -- and the millions more sold by Amazon associates -- makes for one very large database. Amazon's two largest databases combine for more than 42 terabytes of data, and that's only the beginning. If Amazon published the total number of databases it maintains and the volume of data each contains, the amount of data we know Amazon houses would increase substantially.
But still, you say, 42 terabytes doesn't sound like so much. For perspective, at roughly a kilobyte of text per post, 42 terabytes would hold on the order of 40 billion forum posts.
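That conversion is easy to sanity-check. The 1-kilobyte average post size below is an illustrative assumption, not a figure from the article:

```python
# Rough conversion of Amazon's 42 terabytes into "forum posts",
# assuming an average post of about 1 kilobyte of text.
TOTAL_BYTES = 42 * 10**12   # 42 TB, decimal units assumed
BYTES_PER_POST = 1_000      # illustrative average post size

posts = TOTAL_BYTES / BYTES_PER_POST
print(f"~{posts / 10**9:.0f} billion posts")
```

Even at ten times that per-post size, the count stays in the billions, not trillions.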
YouTube
After less than two years of operation, YouTube has amassed the largest video library (and subsequently one of the largest databases) in the world. YouTube currently boasts a user base that watches more than 100 million clips per day, accounting for more than 60% of all videos watched online.
In August of 2006, the Wall Street Journal estimated YouTube's database at roughly 45 terabytes of videos. While that figure doesn't sound terribly high relative to the amount of data available on the internet, YouTube has been experiencing substantial growth (more than 65,000 new videos per day) since that figure's publication, meaning that YouTube's database size has potentially more than doubled in the last 5 months.
Estimating the size of YouTube's database is particularly difficult due to the varying sizes and lengths of each video. However, if one were truly ambitious (and a bit forgiving), one could project that the YouTube database will grow by as much as 20 terabytes in the next month.
Given: 65,000 videos per day X 30 days per month = 1,950,000 videos per month; 1 terabyte = 1,048,576 megabytes. If we assume that each video has a size of 1MB, YouTube would be expected to grow by 1.86 terabytes next month. Similarly, if we assume that each video has a size of 10MB, YouTube would be expected to grow by 18.6 terabytes next month.
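The projection above can be reproduced directly from the article's figures; the 1 MB and 10 MB per-video sizes are the article's own illustrative assumptions:

```python
# Projecting YouTube's monthly database growth from the article's
# figures: 65,000 new videos/day, 1 terabyte = 1,048,576 megabytes.
VIDEOS_PER_DAY = 65_000
DAYS_PER_MONTH = 30
MB_PER_TB = 1_048_576

videos_per_month = VIDEOS_PER_DAY * DAYS_PER_MONTH  # 1,950,000

def monthly_growth_tb(avg_video_mb):
    """Projected monthly growth in terabytes for an assumed average video size."""
    return videos_per_month * avg_video_mb / MB_PER_TB

print(f"{videos_per_month:,} videos per month")
print(f"At 1 MB/video:  {monthly_growth_tb(1):.2f} TB")
print(f"At 10 MB/video: {monthly_growth_tb(10):.1f} TB")
```

Even the forgiving 20-terabyte projection implies an average video size of only about 10 MB, which is why the estimate is so rough.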
ChoicePoint
Imagine having to search through a phone book containing a billion pages for a phone number. When the employees at ChoicePoint want to know something about you, they have to do just that. If printed out, the ChoicePoint database would extend to the moon and back 77 times.
ChoicePoint is in the business of acquiring information about the American population -- addresses and phone numbers, driving records, criminal histories, and more; ChoicePoint has it all. For the most part, the data found in ChoicePoint's database is sold to the highest bidders, including the American government.
But how much does ChoicePoint really know? In 2002, ChoicePoint helped authorities solve a serial rapist case in Philadelphia and Fort Collins by producing a list of six potential suspects through data mining of its DNA and personal-records databases. In 2001, ChoicePoint helped identify the remains of World Trade Center victims by matching DNA found in bone fragments against information provided by victims' family members, in conjunction with data found in its databases.
Sprint
Sprint is one of the world's largest telecommunications companies: it offers mobile services to more than 53 million subscribers and, prior to spinning off that part of its business in May of 2006, also offered local and long-distance land-line packages.
Large telecommunications companies like Sprint are notorious for having immense databases to keep track of all the calls taking place on their networks. Sprint's database processes more than 365 million call detail records and operational measurements per day. The data is spread across 2.85 trillion database rows, making it the database with the largest number of rows (data insertions, if you will) in the world. At its peak, the database is subjected to more than 70,000 call-detail-record insertions per second.
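A quick check shows how bursty that workload is: the stated 70,000 inserts-per-second peak is more than an order of magnitude above the average rate implied by the daily total:

```python
# Comparing Sprint's stated peak insert rate to the average rate
# implied by 365 million call detail records per day.
RECORDS_PER_DAY = 365_000_000
PEAK_INSERTS_PER_SEC = 70_000
SECONDS_PER_DAY = 86_400

avg_inserts_per_sec = RECORDS_PER_DAY / SECONDS_PER_DAY
peak_to_avg = PEAK_INSERTS_PER_SEC / avg_inserts_per_sec

print(f"Average: {avg_inserts_per_sec:,.0f} inserts/sec")
print(f"Peak is {peak_to_avg:.1f}x the average rate")
```

The roughly 17x peak-to-average ratio is why call-record databases are sized for peak load rather than daily throughput.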
Google
Although not much is known about the true size of Google's database (Google keeps its information locked away in a vault that would put Fort Knox to shame), much is known about the amount and types of information Google collects.
On average, Google is subjected to 91 million searches per day, which accounts for close to 50% of all internet search activity. Google stores each and every search a user makes in its databases. After a year's worth of searches, this figure amounts to more than 33 billion database entries. Depending on the architecture of Google's databases, this figure could comprise hundreds of terabytes of information.
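The yearly total can be checked directly from the article's daily rate (note that it works out to billions of entries, not trillions):

```python
# Accumulating Google's stated 91 million searches per day over a year.
SEARCHES_PER_DAY = 91_000_000
DAYS_PER_YEAR = 365

searches_per_year = SEARCHES_PER_DAY * DAYS_PER_YEAR
print(f"{searches_per_year:,} logged searches per year")
```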
Google is also in the business of collecting information on its users. Google combines the queries users search for with information provided by the Google cookies stored on a user's computer to create virtual profiles.
To top it off, Google is currently experiencing record expansion rates by assimilating into various realms of the internet including digital media (Google Video, YouTube), advertising (Google Ads), email (GMail), and more. Essentially, the more Google expands, the more information their databases will be subjected to.
In terms of internet databases, Google is king.
AT&T
Similar to Sprint, the United States' oldest telecommunications company AT&T maintains one of the world's largest databases. Architecturally speaking, the largest AT&T database is the cream of the crop as it boasts titles including the largest volume of data in one unique database (312 terabytes) and the second largest number of rows in a unique database (1.9 trillion), which comprises AT&T's extensive calling records.
The 1.9 trillion calling records include data on the number called, the time and duration of the call, and various other billing categories. AT&T is so meticulous with its records that it has maintained calling data from decades ago -- long before the technology to store hundreds of terabytes of data ever became available. Chances are, if you're reading this and have ever made a call via AT&T, the company still has all of your call's information.
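The two AT&T figures are mutually consistent: dividing the total volume by the row count gives a plausibly compact size for a call detail record (decimal units assumed below):

```python
# Back-of-the-envelope check: 312 terabytes spread across 1.9 trillion
# rows implies a compact per-row record, consistent with a short
# call detail record (number, timestamp, duration, billing fields).
TOTAL_BYTES = 312 * 10**12   # 312 TB
TOTAL_ROWS = 1.9 * 10**12    # 1.9 trillion rows

bytes_per_row = TOTAL_BYTES / TOTAL_ROWS
print(f"~{bytes_per_row:.0f} bytes per row")
```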
National Energy Research Scientific Computing Center
The second largest database in the world belongs to the National Energy Research Scientific Computing Center (NERSC) in Oakland, California. NERSC is owned and operated by the Lawrence Berkeley National Laboratory and the U.S. Department of Energy. The database holds a host of information including atomic energy research, high-energy physics experiments, simulations of the early universe, and more. Perhaps our best bet at traveling back in time is to fire up NERSC's supercomputers and observe the big bang.
The NERSC database encompasses 2.8 petabytes of information and serves more than 2,000 computational scientists. To put the size of NERSC into perspective, the total amount of words ever spoken in the history of humanity is estimated at 5 exabytes; in relative terms, the NERSC database is equivalent to 0.055% of that figure.
Although that may not seem like a lot at first glance, when you factor in that 6 billion humans around the globe speak more than 2,000 words a day, the sheer magnitude of that number becomes apparent.
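The 0.055% comparison can be reproduced, assuming binary units (1 exabyte = 1,024 petabytes):

```python
# NERSC's 2.8 petabytes versus the estimated 5 exabytes of all
# words ever spoken by humanity.
NERSC_PB = 2.8
SPOKEN_WORDS_EB = 5
PB_PER_EB = 1_024  # binary units assumed

fraction = NERSC_PB / (SPOKEN_WORDS_EB * PB_PER_EB)
print(f"NERSC holds {fraction:.3%} of that figure")
```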
World Data Centre for Climate
If you had a 35-million-euro supercomputer lying around, what would you use it for? The stock market? Building your own internet? Try extensive climate research -- if there's a machine out there that holds the answer to global warming, this might be it. Operated by the Max Planck Institute for Meteorology and the German Climate Computing Centre, the World Data Centre for Climate (WDCC) is the largest database in the world.
The WDCC boasts 220 terabytes of data readily accessible on the web, including information on climate research and anticipated climatic trends, as well as 110 terabytes (or 24,500 DVDs' worth) of climate simulation data. To top it off, six petabytes of additional information are stored on magnetic tape. How much data is six petabytes, you ask? Try three times the contents of ALL U.S. academic research libraries combined.
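Summing the tiers gives the WDCC's rough total, assuming the 110 terabytes of simulation data is separate from the 220 terabytes of web-accessible data (decimal units: 1 petabyte = 1,000 terabytes):

```python
# Totaling the WDCC storage tiers described above.
WEB_TB = 220          # readily accessible on the web
SIMULATION_TB = 110   # climate simulation data
TAPE_PB = 6           # additional data on magnetic tape
TB_PER_PB = 1_000

total_pb = (WEB_TB + SIMULATION_TB) / TB_PER_PB + TAPE_PB
print(f"Total: {total_pb:.2f} petabytes")
```

The disk-resident portion is a rounding error next to the tape archive, a common pattern for climate-model output.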
The following databases were unique (and massive) in their own right, and just fell short of the cut on our top 10 list.
Nielsen Media Research / Nielsen Net Ratings
Best known for its television audience size and composition ratings, the U.S. firm Nielsen Media Research is in the business of measuring mass-media audiences across television, radio, print media, and the internet. The database required to process statistics such as Google's daily internet searches is nothing short of massive.
United States Customs
The U.S. Customs database is unique in that it tracks information on hundreds of thousands of people and objects entering and leaving the United States. For this to be possible, the database was specially programmed to process queries nearly instantaneously.
There are various databases around the world using technology similar to that found in our countdown's second-largest database, NERSC. The technology is known as the High Performance Storage System, or HPSS. Other massive HPSS databases can be found at Lawrence Livermore National Laboratory, Sandia National Laboratories, Los Alamos National Laboratory, the Commissariat a l'Energie Atomique Direction des Applications Militaires, and more.
Facebook has got to be huge.
Yeah, but isn't it mostly text? In terms of storage space, it probably gets compressed pretty far, so it wouldn't be as large as YouTube.
Text and pics for the most part. But think how many FB users.
Had a talk with a colleague recently who said that the only thing that makes FB possible is some very powerful caching HW and SW, as every time you access FB it's making dozens upon dozens of DB queries.
My question also....yet maybe a search tool of other data bases ....
The article claims that Google has “91,000,000 searches per day” more likely 91,000,000 searches per hour. That number is laughably low. I also wouldn’t be surprised if the FBI’s database rivaled the CIA’s in size, with larger entries for each U.S. citizen and person of interest.
China Mobile has 500,000,000 subscribers, and the Red Chinese government must have a massive database or two kicking around.
This article would have been better entitled, some of the larger databases in the world, not “The Ten Largest.”
Or perhaps "The Top 10 Largest we can get access to so we can measure them"
Great ping. Thanks ShadowAce.
I believe Lexis/Nexis is an aggregator and doesn’t actually house the DB. They would also be text-based and offer proxy connectivity to subscription-based sites.
I would argue there is a significant difference between a database and a collection of data, such as Google. Databases relate things. Google has little relation. Calling a collection of data a database would mean calling our hard drives databases.
The world’s largest without a doubt belong to the DoD/Intel crowd.
Every public tweet, ever, since Twitter's inception in March 2006, will be archived digitally at the Library of Congress. That's a LOT of tweets, by the way: Twitter processes more than 50 million tweets every day, with the total numbering in the billions.
Well... When you act like Al Capone and have to keep two sets of books...
45 terabytes for YouTube doesn't sound like much and surely must be a mistake. I have five 1-terabyte hard drives sitting on my desk.