Posted on 07/17/2002 3:04:07 PM PDT by sayfer bullets
Mike Carter: Analysis of Stephen Wolfram's "A New Kind of Science" (Wed Jul 17 2002)
In A New Kind of Science (Wolfram Media, 2002) Stephen Wolfram develops a series of positions upon which he seeks to build a model for investigating reality by means of computer-generated pictures instead of traditional mathematics (pp. 1, 111, 724, 742, 793). He opts for this approach because, he argues, computers are in fact universal systems with fixed underlying rules that can perform any possible computation (p. 5), and that means the pictures generated by computers make it possible to represent reality at a more abstract level than is possible through traditional mathematics (pp. 5, 637-638). Therefore the most important idea upon which this new science is constructed is the notion of computation (p. 634). All systems, Wolfram says, can be viewed as computations (pp. 715, 716), and by thinking in terms of computation, it is possible to formulate principles that apply to a wide variety of systems irrespective of the details of those systems (p. 638).
The six basic positions derived from Wolfram's perception are as follows:
1. that simple programs based on simple rules can produce great complexity (This position is first stated on page 4 and repeated often throughout the book. I counted fifty-four repetitions and may well have missed many.), so much complexity that to us they can appear random (This point is first stated on page 27 and repeated some seventeen times throughout the book. The idea of randomness as an appearance based on human limitations is mentioned on page 620, and ascribed to the computational equivalence of us as observers in nature on page 845);
3. that the complexity expressed by such a system correlates to the activity within the system itself as it develops (p. 76) rather than being a consequence of the system's initial conditions (pp. 381-382; this distinguishes Wolfram's position from standard chaos theory, more on this below);
4. that many such systems based on very simple rules are universal (p. 5), by which Wolfram means they operate independent of the particular details in the system (pp. 52, 298, 366, 397, 525, 528, 638, 643, 675, 690, 719), are capable of emulating any other system (p. 643), that is, they are fundamentally equivalent (p. 674), and are computationally irreducible (p. 821);
5. that there is a low threshold in the complexity of the rules that define such systems, that once this threshold is reached the system in question will generate complexity (pp. 105, 106, 309, 675, 711), and that once this threshold has been reached, adding further rules does not increase the level of complexity in the system (pp. 62, 106, 602);
6. that computational irreducibility means that even if one has all the information there is to be had about such a system's initial conditions, the behavior of the system cannot be predicted (pp. 739, 748, 755, 771, 819), so that the only way to determine what such a system will do is to let the system run (p. 750) or do an exhaustive search of all possibilities (p. 842), and that even if one had an accurate model, working out all its consequences could prove irreducibly complex (footnote Philosophical implications, p. 1196).
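The first of these positions, that a simple rule can generate apparent randomness, is easiest to see with the elementary cellular automaton Wolfram calls rule 30, his favorite example in the book. What follows is a minimal Python sketch of my own, not code from the book; the rule number 30 simply encodes, in binary, the new value of a cell for each of the eight possible three-cell neighborhoods:

```python
RULE = 30  # binary 00011110: the output bit for each 3-cell neighborhood

def step(cells):
    """Apply rule 30 once, treating the row as wrapping around at the edges."""
    n = len(cells)
    return [
        # index the neighborhood (left, center, right) as a 3-bit number,
        # then read the corresponding bit out of the rule number
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=63, steps=20):
    """Start from a single black cell and return the evolution as text."""
    row = [0] * width
    row[width // 2] = 1
    lines = []
    for _ in range(steps):
        lines.append("".join("#" if c else "." for c in row))
        row = step(row)
    return "\n".join(lines)

if __name__ == "__main__":
    print(run())
```

Printed as text, the triangle growing from the single black cell quickly turns irregular, even though the rule table above is the entire specification of the system; the center column of this pattern is famously irregular enough that Wolfram used it as a pseudorandom generator in Mathematica.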
From these six positions, Wolfram proposes what he calls the Principle of Computational Equivalence. This principle tells us that whenever one sees behavior that is not obviously simple it can be thought of as corresponding to a computation of equivalent sophistication. (p. 5) Hence it discloses unity across a vast array of systems (p. 6), suggesting that most axiom systems whose consequences are not obviously simple will tend to be universal (p. 815). Furthermore, the Principle of Computational Equivalence implies any system whose behavior is not obviously simple will tend to be exactly equivalent in its computational sophistication (p. 844). This means that the systems can be investigated using the same kinds of ideas (p. 7). However, even though the processes of both mathematics and nature [interesting distinction for a materialist to make; we will look at it later] can be expressed as computations (p. 772) and are ultimately equivalent in their computational sophistication (p. 775), the Principle of Computational Equivalence also implies that simplified computations cannot predict how systems exhibiting complexity will evolve (p. 444).
The Principle of Computational Equivalence allows Wolfram to address the epistemological dilemma with which Plato wrestled and that is consequent to the West's tendency to distinguish between knower and known; that is, given such a distinction, how can the knower arrive at accurate knowledge of the known? For Wolfram the Principle of Computational Equivalence, derived from the universality of systems based on simple rules, is the key. He argues that human perception is limited in the kinds of ways contemporary neurobiologists and traditional structuralists/conceptualists have described (pp. 547, 577, 751), and is therefore quite capable of misleading us (p. 735), but the Principle of Computational Equivalence, being applicable to any process of any kind (p. 715), renders them all mutually intelligible through their correlative parities. He says, [A]lmost any rule whose behavior is not obviously simple should ultimately be capable of achieving the same level of computational sophistication and should thus in effect be universal (p. 718; he repeats variations on this statement at least twenty times during the next 128 pages).
Not only human perception, but human thought is based on a few simple rules (pp. 628, 733, 751, 753, 828), which means that our computational abilities are on the same level as those of the universe and can ultimately achieve the same sophistication as has been achieved by the entire universe (p. 845). Indeed, Wolfram suggests that, because it is based on a few simple rules, intelligence, whether biological or otherwise, should have arisen early in the universe and be quite common (p. 822), and he suggests that our tendency to anthropomorphize systems in nature, a tendency expressed in phenomena like animism, reflects our implicit recognition of this (pp. 844-845). In fact, in a footnote entitled Physics as Intelligence found on pages 1191-1192, Wolfram writes: [P]erhaps intelligence could exist at a very small scale, and in effect have spread throughout the entire universe, building as an artifact everything we see. [T]his is at some level exactly what the Principle of Computational Equivalence suggests has actually happened. [T]his supports the theological notion that there might be a kind of intelligence that permeates our universe. For Wolfram, despite what can be read as a disclaimer on page 3, the idea of God is very much alive. It is not complexity that reveals God but the irreducible simplicity of the systems that generate that complexity that might reveal God (p. 838; it is worth noting here that in the great monotheistic faiths God is conceived as ultimately simple). He does argue that, whether or not an ultimate model of the universe would eliminate the need for God or reveal God, knowing a complete and ultimate model [of the universe would] make it impossible to have miracles or divine interventions that come from outside the laws of the universe, but he goes on to say that, since working out the implications of those laws may very well be irreducibly difficult, such a model may be beyond our capabilities (p. 1025, footnote Theological implications).
When considering the nature of the universe, two questions must be kept distinct: why is there something rather than nothing? and why is something expressed in one way rather than another? Wolfram, in the long established tradition of science, is considering the second question rather than the first.
Wolfram spends much of his book suggesting ways in which his ideas can address problems across the spectrum of science. For example, addressing evolution, he argues (shades of Michael Behe) that, as constituted, the current theory does not explain the presence of complexity in living things (pp. 14, 383, footnote Complexity and theology, bottom of the left-hand column). He then argues that in biological systems complexity is a coincidental creation (p. 388) of randomly selected programs (p. 396) that result in growth (p. 400). Natural selection, in the scenario he proposes, works to retard complexity; it is analogous to engineering (p. 393). Hence complexity of form in biology arises not because of natural selection, but in spite of it (p. 396).
However, Wolfram is a physicist, so while he spends only part of Chapter 8 addressing biological issues, he dedicates all of Chapter 9 to discussing how his ideas relate to physics. This is appropriate since much of his book is a critique of the failure of traditional mathematics to adequately address most systems in nature, and physics is the field where traditional mathematics has had the most success (p. 433). Without getting into a lot of the details, Wolfram argues that any ultimate model of the universe will probably construe time, gravity, space and all its contents as being made of the same stuff, so that in a sense space becomes the only thing in the universe (pp. 474, 481, 536-537, 540). Space, at its lowest level, can best be conceived as a giant network of nodes (pp. 475, 480, 508), best represented by a giant four-dimensional network (p. 482), that evolves (p. 484), like a mobile automaton that is constantly updated (p. 486), an idea that to me sounds remarkably similar to the Hindu concept of Indra's net. He imagines that such a concept, especially as it relates to particles and their motions, will require many new and abstract concepts (p. 529), and this may be true, but I wonder if process philosophy might not provide a basic framework to begin rethinking such problems. As an aside, he also presents an interesting discussion of the Second Law of Thermodynamics, which he does not believe is a universal principle (pp. 451, 453, 457), using a definition (pp. 434, 445, 448, 450) with which I am quite comfortable but which Don might find disconcerting. Don should especially note his discussion of entropy and randomness (p. 444) and entropy and information (pp. 448-449).
Also there are some obvious parallels to chaos theory, parallels Wolfram acknowledges early (p. 19), but there are important differences that I think help illuminate what Wolfram has done. In chaos theory, initial conditions indicate the point in an ongoing process where the observer begins to measure and catalogue the phenomena to be investigated. Hence initial conditions are not initial in any absolute sense, since any point in the process could conceivably serve as the initial point. When chaos theory was first formally described, simple computer programs were often employed to illustrate the principle that non-linear systems could develop in ways that looked chaotic but in fact expressed potentials latent in the programs themselves. The point was that chaotic systems, though they seemed to develop in highly complex, even random, ways, were in fact highly deterministic.
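The deterministic-but-seemingly-random point can be made concrete with the logistic map, the textbook example from the chaos literature (my illustration, not one Wolfram uses in this connection). The rule is fully deterministic, yet at the chaotic parameter value r = 4 two starting points that differ by one part in ten billion soon produce completely different trajectories:

```python
def logistic(x, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    out = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = logistic(0.2)
b = logistic(0.2 + 1e-10)  # perturb the initial condition by 1e-10

# Same rule, nearly the same start: the gap between the two trajectories
# grows until they are effectively unrelated.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"gap at step 0:  {divergence[0]:.1e}")
print(f"largest gap:    {max(divergence):.3f}")
```

This is exactly the sensitivity to initial conditions the essay describes; Wolfram's twist, as the next paragraph notes, is to shift attention from the determinism of such systems to the sheer complexity a simple rule can produce.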
Beginning with such simple programs, Stephen Wolfram argues that the important point is not the highly deterministic nature of systems that appear chaotic; it is that very simple programs can produce extraordinary complexity. This leads him to conclude that much of the complexity we see in the world may in fact be the consequence of systems that can be described by a few elementary rules.
It seems to me that Wolfram has in a sense inverted Aristotelian teleology. Instead of formal causality structuring complex natural development from some future realm outside time, Wolfram posits that programs embedded in time at the very beginning and involving a few elementary rules are the source for identifiable systems of complexity in nature. This represents an interesting challenge to the materialism that defined science throughout most of the twentieth century, a challenge Wolfram implicitly admits when he distinguishes between mathematics and nature (p. 772), processes produced by the human mind and by nature (p. 715), or between systems and processes that exist in nature and elsewhere (pp. 753, 846). Surely for the materialist such a distinction represents a fundamental error, but it is one Wolfram's system allows him, even requires him, to maintain. That is because the basic processes express principles that, because they are indifferent to material particulars, lie beyond the material realm. Therefore they suggest there is more to the universe than mere material.
Vanity seemed appropriate even though it will end up being a crevo bit.
Sterile. Vain. Megalomaniacal. Useless.
A nice short review.
Although Wolfram touts his book as "new" science, the theories behind complexity and chaos have been developing for over 50 years. Wolfram's contribution to the field is largely based on the technical achievement behind the Mathematica program, and its ability to convey the complexity theories using simple programs and routines developed in the software. Anyway, I guess I will have to read Wolfram's book myself to determine how "new" Wolfram's version of complexity theory really is. But what is nice about Wolfram's (previous) work is that, combined with the technical programming, he makes the science of complexity more readily understandable and accessible to a broader audience than is often the case with scientific literature of such a technical nature.
I read some long excerpts and concluded that the answer is no. The old review fits well - "It contains both truth and originality. Unfortunately, the true parts are not original, and the original parts are not true."
Actually, most of it can't be judged as being true or false, because many of the assertions are so general as to be close to meaningless. If an idea cannot be tested, it is philosophy, not science, and I didn't see a single new idea that could be tested.
But the sophomore philosophy majors are going to love it. It is this generation's equivalent of "Our whole universe could be an atom in the fingernail of some giant being..."
He's not very specific here, but the implication about chaos theory is not accurate. The most interesting result of chaos theory is universality, i.e., that systems entering chaotic states will have similar properties, regardless of not only the initial conditions, but of the systems themselves.
This review captures the spirit of the book. Wolfram is asking questions and making observations, not laying down theorems. His book is designed to broaden horizons, stimulate discussion and help overcome the downside of compartmentalized thinking.
Caveat. What I am about to say is based on several reviews I have read and reports I have received from knowledgeable people in my field who have read the book. Based on that, I'll wait to read the book 'til it is in paperback.
Understand that Wolfram is a very very bright boy and I expect to get a lot of insight out of what he writes.
But from what I have seen in the reviews (including the one printed in this thread, which, by the way, is very well written and makes perfect sense to a professional in the field), the book is not revolutionary. All of the reviews suggest that the book is about issues that have been discussed in the Evolutionary Computation and A-Life fields for years. He seems to present the ideas in a provocative framework and to have made an attempt to take a big-picture view of what known phenomena in cellular automata actually mean.
But so far, I haven't seen anything that says "run out and spend $50 because it will change the way you think."
I think I just had one of those long-forgotten pieces of trivia pop into my head when I read that name. If I'm not mistaken, the original name for the element tungsten was wolfram.
Actually, that was Dr. Samuel Johnson, who said: "Your manuscript is both good and original; but the part that is good is not original, and the part that is original is not good."
I had a prof who transmuted it to "new" and "useful" when reviewing technical papers.
--Boris
Yup. Wunderkind. Invented Mathematica, which made him rich enough to spend 10 years writing 1200 pages of drivel.
And right again, the chemical symbol for Tungsten is "W" (Wolfram).
Here's a little something I came across in an endnote that'll probably ruffle some feathers here and there:
"Truth and incompleteness. In discussions of the foundations of mathematics in the early 1900s it was normally assumed that truth and provability were in a sense equivalent -- so that all true statements could in principle be reached by formal processes of proof from fixed axioms.... Godel's Theorem showed that there are statements that can never be proved from given axioms. Yet often it seemed inevitable just from the syntactic structure of statements (say as well formed formulas) that each of them must at some level be true or false. And this led to the widespread claim that Godel's Theorem implies the existence of mathematical statements that are true but unprovable. Over the years this often came to be assigned a kind of mystical significance, mainly because it was implicitly assumed that somehow it must still be ultimately possible to know whether any given statement is true or false. But the Principle of Computational Equivalence implies that in fact there are all sorts of statements that simply cannot be decided by any computational process in our universe....
"In some cases statements can in effect have default truth values -- so that showing that they are unprovable immediately implies, say, that they must be true. An example in arithmetic is whether some integer equation has no solution. For if there were a solution, then given the solution it should be straightforward to give a proof that it is correct. So if it is unprovable that there is no solution, then it follows that there must in fact be no solution. And similarly, if it could be shown for example that Goldbach's Conjecture [1742, holding that any even number can be stated as the sum of two primes] is unprovable then it follows that there must be true, for if it were false then there would have to be a specific number which violates it, and this could be proved. Not all statements in mathematics have this kind of default truth value...."
Wonderful essay, sayfer bullets. Thanks so much for posting it.
(These sorts of posts take me a while ... =)