Posted on 06/01/2005 9:55:09 PM PDT by Ernest_at_the_Beach
Astronomers have used supercomputers to re-create how the Universe evolved into the shape it is today.
The simulation by an international team is the biggest ever attempted and shows how structures in the Universe changed and grew over billions of years.
The Millennium Run, as it is dubbed, could help explain observations made by astronomers and shed more light on the Universe's elusive dark energy field.
Details of the study appear in the latest issue of Nature magazine.
"We have learned more about the Universe in the last 10 or 20 years than in the whole of human civilisation," said Professor Carlos Frenk, Ogden professor of fundamental physics at the University of Durham and co-author on the Nature report.
"We are now able, using the biggest, fastest supercomputers in the world, to recreate the whole of cosmic history," he told the BBC.
The researchers looked at how the Universe evolved under the influence of the mysterious material called dark matter.
Dark matter model
According to cosmological theory, soon after the Big Bang, cold dark matter formed the first large structures in the Universe, which then collapsed under their own weight to form vast halos.
The gravitational pull of these halos sucked in normal matter, providing a focus for the formation of galaxies.
The simulation tracked some 10 billion dark matter particles over roughly 13 billion years of cosmic evolution. It incorporated data from satellite observations of the heat left over from the Big Bang, information on the make-up of the Universe and current understanding of the laws of physics on Earth.
"What's unique about the simulation is its scope and the level of detail with which we can re-create the cosmic structures we see around us," Professor Frenk commented.
England's Astronomer Royal, Sir Martin Rees, told the BBC: "Now we have the Millennium Run simulations, we have the predictions of the theory in enough detail that we can see if there is a meshing together of how the world looks on the larger scale and the way we expect it should look according to our theories. It's a way to check our theories."
Energy problem
Comparisons between the results of the simulation and astronomical observations are already helping shed light on some unsolved cosmic mysteries.
Some astronomers have previously questioned how radio-sources in distant galaxies called quasars could have formed so quickly after the Big Bang under the cold dark matter model.
The Millennium Run simulation demonstrates that such structures form naturally under the model in numbers consistent with data from the Sloan Digital Sky Survey.
The virtual universe may also shed light on the nature of dark energy, which makes up about 73% of the known Universe, and which, Frenk says, is the "number one unsolved problem in physics today - if not science itself".
"Our simulations tell us where to go looking for clues to learn about dark energy. If we want to learn about this we need to look at galaxy clusters, which encode information about the identity of dark energy," Professor Frenk explained.
Video is available at the BBC Website.
fyi
Looks just a smidge like "Colossus: The Forbin Project." bump, bttt.
Cool (Free!) Astronomy-related Software: Please FReepmail other suggestions
The system used for this research is the principal supercomputer at the Max Planck Society's Supercomputing Centre in Garching, Germany, the Regatta, as described at http://www.rzg.mpg.de/computing/IBM_P/hardware.html:
The IBM pSeries Supercomputer
The pSeries "Regatta" system is based on Power 4 processor technology.
Node characteristics are:
32-way "Regatta" compute nodes (eServer p690), equipped with 1.3 GHz Power 4 processors, with a peak performance of 166 GFlop/s and 64 GB of main memory per node. An eight-processor test system had been installed since October 2001.
From January 2002, a six-node system with an aggregate performance of 1 TFlop/s and 1/2 TB of main memory has been in operation.
In mid-2002, the system was extended to 22 compute nodes and 2 I/O nodes, with an aggregate peak performance of 3.8 TFlop/s and 1.8 TB of main memory.
In Jan/Feb 2003, the system was moved to the new computer building with significantly increased disk space.
After initial tests with the new IBM High Performance Switch (HPS, "Federation Switch") as of early Sep 2003, the system was upgraded to this fast node interconnect technology in December 2003, together with a system expansion to 4.2 TF peak performance.
There are now 25 compute nodes and 2 I/O nodes, connected by the High Performance ("Federation") Switch, with four links per Regatta node.
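The quoted per-node figure can be sanity-checked from the processor spec. A minimal back-of-envelope in Python, assuming each Power4 processor retires four floating-point operations per cycle (two fused multiply-add pipelines):

```python
# Back-of-envelope check of the Regatta node's quoted peak performance.
# Assumption: each 1.3 GHz Power4 processor retires 4 floating-point
# operations per cycle (two fused multiply-add pipelines).
cpus_per_node = 32
clock_ghz = 1.3
flops_per_cycle = 4

peak_gflops = cpus_per_node * clock_ghz * flops_per_cycle
print(f"peak per node: {peak_gflops:.1f} GFlop/s")  # 166.4, matching the spec
```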
Here's the cover of the Nature magazine with the article:
And here is one of the several impressive images linked off the above press release:
Nature - 2 June 2005.
Saturday, 4 September 2004
An international group of cosmologists, the Virgo Consortium, has completed the first simulation of the entire universe, starting 380,000 years after the Big Bang and running up to the present day. In "Computing the Cosmos," IEEE Spectrum writes that the scientists used a 4.2-teraflops system at the Max Planck Society's Computing Center in Garching, Germany, to do the computations. The whole universe was simulated with ten billion particles, each having a mass a billion times that of our sun. Because computing the gravitational interactions between each of the ten billion mass points and all the others would have taken some 60,000 years, the computer scientists devised a couple of tricks to reduce the amount of computation. In June 2004, the first simulation of our universe was completed. The resulting data, about 20 terabytes, will be made available to everyone in the months to come, at least to people with a high-bandwidth connection. Read more...
Here is a general overview of the project.
The group, dubbed the Virgo Consortium -- a name borrowed from the galaxy cluster closest to our own -- is creating the largest and most detailed computer model of the universe ever made. While other groups have simulated chunks of the cosmos, the Virgo simulation is going for the whole thing. The cosmologists' best theories about the universe's matter distribution and galaxy formation will become equations, numbers, variables, and other parameters in simulations running on one of Germany's most powerful supercomputers, an IBM Unix cluster at the Max Planck Society's Computing Center in Garching, near Munich.
Now, here are some details about this cluster -- and its limitations.
The machine, a cluster of powerful IBM Unix computers, has a total of 812 processors and 2 terabytes of memory, for a peak performance of 4.2 teraflops, or trillions of calculations per second. It took 31st place late last year in the Top500 list, a ranking of the world's most powerful computers by Jack Dongarra, a professor of computer science at the University of Tennessee in Knoxville, and other supercomputer experts.
But as it turns out, even the most powerful machine on Earth couldn't possibly replicate exactly the matter distribution of the 380,000-year-old universe the Virgo group chose as the simulation's starting point. The number of particles is simply too large, and no computer now or in the foreseeable future could simulate the interaction of so many elements.
To understand why such a powerful system cannot handle this simulation in a reasonable amount of time, we need to look at the parameters of the simulation.
The fundamental challenge for the Virgo team is to approximate that reality in a way that is both feasible to compute and fine-grained enough to yield useful insights. The Virgo astrophysicists have tackled it by coming up with a representation of that epoch's distribution of matter using 10 billion mass points, many more than any other simulation has ever attempted to use.
These dimensionless points have no real physical meaning; they are just simulation elements, a way of modeling the universe's matter content. Each point is made up of normal and dark matter in proportion to the best current estimates, and has a mass a billion times that of our sun, or 2,000 trillion trillion trillion (2 x 10^39) kilograms. (The 10 billion particles together account for only 0.003 percent of the observable universe's total mass, but since the universe is homogeneous on the largest scales, the model is more than enough to be representative of the full extent of the cosmos.)
With these ten billion points, the Virgo team faced a serious challenge.
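The per-point mass is easy to verify with a quick check in Python, assuming the standard solar-mass value of roughly 1.99 x 10^30 kg:

```python
# One simulation particle represents a billion solar masses.
solar_mass_kg = 1.99e30  # standard astronomical value
particle_mass_kg = 1e9 * solar_mass_kg
print(f"mass per point: {particle_mass_kg:.2e} kg")  # ~2e39 kg, as quoted
```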
The software [astrophysicist Volker Springel] and his colleagues developed calculates the gravitational interactions among the simulation's 10 billion mass points and keeps track of the points' displacements in space. It repeats these calculations over and over, for thousands of simulation time steps.
The simulation, therefore, has to calculate the gravitational pull between each pair of mass points. That is, it has to choose one of the 10 billion points and calculate its gravitational interaction with each of the other 9,999,999,999 points, even those at the farthest corners of the universe. Next, the simulation picks another point and does the same thing again, and this process repeats for all points. In the end, the number of gravitational interactions to be calculated reaches 100 million trillion (1 followed by 20 zeros), and that's just for one time step of the simulation. If it simply chugged through all of the thousands of time steps of the Millennium Run, the Virgo group's supercomputer would have to run continuously for about 60,000 years.
Because that was obviously unacceptable, Springel and his colleagues used a couple of tricks to reduce the amount of computation.
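The brute-force pairwise scheme described here is straightforward to sketch. The Python snippet below is a toy illustration, not the Virgo code; the softening parameter is an assumption added to tame numerically singular close encounters. It computes the gravitational acceleration on every point by direct summation over all pairs:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(pos, mass, softening=1e3):
    """O(n^2) direct summation: every point against every other point."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[j] - pos[i]
            # Softened squared distance keeps nearby pairs from blowing up.
            r2 = d @ d + softening ** 2
            acc[i] += G * mass[j] * d / r2 ** 1.5
    return acc

# Three points of a billion solar masses each, a million metres apart.
pos = np.array([[0.0, 0.0, 0.0], [1e6, 0.0, 0.0], [0.0, 1e6, 0.0]])
mass = np.full(3, 2e39)
print(accelerations(pos, mass))
```

With 10 billion points this inner double loop would run about 10^20 times per time step, which is exactly why the approach is hopeless at full scale.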
First, the researchers divided the simulated cube into several billion smaller volumes. During the gravitational calculations, points within one of these volumes are lumped together -- their masses are summed. So instead of calculating, say, a thousand gravitational interactions between a given particle and a thousand others, the simulation uses an algorithm to perform a single calculation if those thousand points happen to fall within the same volume. For points that are far apart, this approximation doesn't introduce notable errors, while it does speed up the calculations significantly.
They used another method for short-distance interactions.
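The lumping idea is a standard one: replace all the points inside a grid cell by a single pseudo-particle at their centre of mass. A minimal sketch in Python (illustrative only; the function name and the simple cubic grid are assumptions, not Virgo's actual scheme):

```python
import numpy as np

def lump_into_cells(pos, mass, cell_size):
    """Merge all points in each cubic grid cell into one pseudo-particle."""
    cells = {}
    for p, m in zip(pos, mass):
        key = tuple(int(c) for c in p // cell_size)  # which cell this point is in
        weighted, total = cells.get(key, (np.zeros(3), 0.0))
        cells[key] = (weighted + m * p, total + m)
    # Centre of mass = mass-weighted position sum / total mass.
    return [(weighted / total, total) for weighted, total in cells.values()]

# Two nearby points collapse into one pseudo-particle; the distant one stays.
pos = np.array([[0.1, 0.1, 0.1], [0.3, 0.3, 0.3], [5.5, 5.5, 5.5]])
mass = np.array([1.0, 3.0, 2.0])
print(lump_into_cells(pos, mass, cell_size=1.0))
```

A distant particle then interacts once with each pseudo-particle instead of once with every point inside the cell, which is where the speedup comes from.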
Springel developed new software with what is called a tree algorithm to simplify and speed up the calculations for this realm of short-distance interactions. Think of all 10 billion points as the leaves of a tree. Eight of these leaves attach to a stem, eight stems attach to a branch, and so on, until all the points are connected to the trunk. To evaluate the force on a given point, the program climbs up the tree from the root, adding the contributions from branches and stems found along the way until it encounters individual leaves. This trick reduces the number of required calculations from an incomputable n^2 to a much more manageable n log n, says Springel.
After these two tricks were introduced into the software, the simulation started, and it was completed in June 2004, generating about 20 terabytes of results. These results, which represent 64 snapshots of a virtual universe, will be available to all of us in the months to come. But who will really have access to such an amount of data outside universities and research centers? My guess is that the Virgo Consortium will find a way to reduce the size of the snapshots for regular folks. So stay tuned for the next developments.
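The payoff of the tree method's n log n-style scaling is easy to quantify for a run of this size. A rough Python estimate (using the natural log; the base of the logarithm only changes a constant factor):

```python
import math

n = 1e10                     # mass points in the Millennium Run
pairwise_ops = n * (n - 1)   # ~1e20 interactions per time step, brute force
tree_ops = n * math.log(n)   # tree-algorithm scaling, up to constant factors

print(f"speedup factor: {pairwise_ops / tree_ops:.1e}")  # roughly 4e8
```

Real tree codes carry larger constant factors than this idealized count, but a reduction of that order is what turned a 60,000-year computation into something feasible.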
Sources: Alexander Hellemans & Madhusree Mukerjee, IEEE Spectrum, Vol. 41, No. 8, P. 28, August 2004
The early beginnings of the next Supercomputer strain:
Updates above.
Mark for cool links. Thanks.
Supercomputer simulations explain the formation of galaxies and quasars in the universe
There are several natural package sizes for computers - a room, a filing cabinet size box, a rack or drawer, a board, a processor package, and now a cpu core within such a processor package.
The fundamental change, underlying pretty much all else, for the last half century, has been the shrinking of logic, the size of a bit of storage or a logic gate. As it shrunk, first one of the above natural package sizes, then perhaps a decade later, the next size down, became the natural size of a single computer processor unit (CPU) - that thing which executes a single stream of logical instructions on a set of data.
Then, as the natural size shrank a little further, we put more than one CPU in a package. First it was multiple CPUs in a room, then in a box, then in a rack, then on a board, and now in a processor package. There have always been practical limits on how many CPUs we could cram into any particular package size - due to limits on how fast we could pump data into them, and extract data and heat from them. But at each size, we have quickly pushed to get perhaps 2 or 8 or 32 CPUs in a package.
Before now, this work on parallel computing always occurred alongside the work in continuing to shrink CPUs down to fit in the next natural size package.
Due to the enormous expense of original design work on CPUs at the wafer level, and due to the enormous constraints that the major players (basically Intel and IBM) can impose on anyone trying to interlope at that level, I do not think there is a smaller package size.
The processor package, that square inch of branded plastic, semiconductor, copper, aluminum and ceramic, is the last natural size.
Now the use of multiple CPUs has arrived at the processor package size, and will ship in rapidly increasing quantity this year and the next few years.
The breathtaking and rapid evolution of computer hardware architectures over the last fifty years enters a new phase. Like the automobile, which has not changed packaging in any fundamental way in a century now, the computer has now arrived at the full range of packaging, from the room-filling big honkin' NUMA iron shown in pictures above, to the six or seven cores in the processor package of next year's PlayStation.
This range of packaging sizes is now established, and now we enter a period of refinement and elaboration.
Depends. Ever hear of quantum computing? The physical size of the thingie might be limited (it should be big enough to see, I guess), but there could be millions of processors on a "chip."
--Boris
Package sizes are determined by economics, thermodynamics and memory bandwidth, and not just by device size.
We have separate processor IC's, circuit boards, racks, cabinets and rooms because it's practical for different companies to compete at a given size, to depend on separate providers from the next smaller size, and sell to markets for the next larger size. This depends on the cost of integration of packages of one size into a package of the next size being fairly cheap.
I just don't see widespread use of low-cost means to integrate pinhead-sized quantum CPUs into a single I.C., across corporate boundaries.
And except for specialized programs requiring very regular parallel operations, putting thousands or millions of CPUs into a single I.C. is useless, because you can't get the data in and out -- memory bandwidth will be limited by the cross I.C. interconnect technology.
I don't know enough about quantum computers to know if they will run dramatically cooler - if they do then at least the thermodynamic hurdles for such a beast will be less.
Device size reductions have ruled the development of computers for a half century now, known to many as Moore's Law.
Nothing grows (or shrinks ;) at the rate computer device sizes have been shrinking forever. The delightful insanity of the last half century will end.