To: Alamo-Girl; ZellsBells; tortoise; Doctor Stochastic; betty boop; little jeremiah
". . . it is becoming untenable for scientists to cling to the original formulation 'random mutations + natural selection > species' . . ."

Yes it is, and certain evolutionary theorists, such as Luis Rocha, who stress the need for "syntax" to explain evolutionary processes of communication, are pointing this out:

". . . the feud between those who claim that natural selection is the sole explanation for evolution and those who stress that other aspects of evolutionary systems, such as developmental constraints, also play an important role. . . . the second group likes to think of the propensities of matter or historical contingencies as being of at least equal importance in evolution . . ."

Maintaining the emphasis upon random mutations and natural selection applies only to those evolutionary theorists who are attempting to cling to Darwin's original formulation, or a reasonable facsimile thereof. There is a real debate among evolutionary theorists that lies outside the problem stated in the first quote, since many "do not accept the notion of happenstance," as has been suggested.

". . . From the Intelligent Design theorists, taking the approach of looking backwards, the issue of "irreducible complexity" has been raised. . . ."

"Irreducible Complexity" is the cornerstone of the Intelligent Design theory, because it argues, or at least implies, that if you go back far enough in the evolutionary record you will come to a point at which you can go back no further and within which you still have a level of biological complexity that is advanced enough so as to negate any evolutionary ancestry. It is not a scientific theory because there is no attempt to define a starting point for evolution in real time or in taxonomy, which could be tested by scientists, and it is instead argued from mathematical probability.

"Functional Complexity" is a real issue in evolutionary theory, but my opinion of the arguments made by supporters of Intelligent Design theory or other scholars they cite, Schützenberger, e.g., is that they are guilty of either postulating that there is something basically undecipherable about microbiological/genetic processes or what Immunologist/Microbiologist Andrea Bottaro refers to as "how to lose one's way while looking for misdirection." The above-quoted link for the interview with the late mathematician Schützenberger is a case in point. Allow me to excerpt:

". . . Q: What do you mean by functional complexity?

S: It is impossible to grasp the phenomenon of life without that concept, the two words each expressing a crucial and essential idea. The laboratory biologists' normal and unforced vernacular is almost always couched in functional terms: the function of an eye, the function of an enzyme, or a ribosome, or the fruit fly's antennae -- their function; the concept by which such language is animated is one perfectly adapted to reality. Physiologists see this better than anyone else. Within their world, everything is a matter of function, the various systems that they study -- circulatory, digestive, excretory, and the like -- all characterized in simple, ineliminable functional terms. At the level of molecular biology, functionality may seem to pose certain conceptual problems, perhaps because the very notion of an organ has disappeared when biological relationships are specified in biochemical terms; but appearances are misleading, certain functions remaining even in the absence of an organ or organ systems. Complexity is also a crucial concept. Even among unicellular organisms, the mechanisms involved in the separation and fusion of chromosomes during mitosis and meiosis are processes of unbelievable complexity and subtlety. Organisms present themselves to us as a complex ensemble of functional interrelationships. If one is going to explain their evolution, one must at the same time explain their functionality and their complexity.
"

Schützenberger tells you on the one hand that "laboratory biologists" and others working "at the level of molecular biology" have trouble giving the term "functional complexity" a definition he can accept, but -- and read carefully what I have quoted above -- he does not offer a definition of his own. That is something scientists in those fields could take him to task for if they found it inadequate, except that there is nothing to take on. And notice the language of undecipherability and misdirection I mentioned above: ". . . impossible to grasp the phenomenon of life without that concept . . ." and ". . . appearances are misleading . . ." and ". . . processes of unbelievable complexity and subtlety . . ."

I submit that it is wholly unscientific to discuss "functional complexity" without proper scientific rigor, which requires that terms be stated and defined clearly. I encourage everyone to compare Schützenberger's response to the question "what do you mean by functional complexity?" above with the following definition given by Belgian evolutionary theorist Francis Heylighen, in his "The Growth of Structural and Functional Complexity during Evolution":

". . . Functional complexification follows from the need to increase the variety of actions in order to cope with more diverse environmental perturbations, and the need to integrate actions into higher-order complexes in order to minimize the difficulty of decision-making. . . ."

That is the kind of scientific rigor that makes the concept clear, and it is something we are not seeing from the Intelligent Design theorists, because you can take Heylighen on if your view of "functional complexity" is different from his. Or to put this another way, Heylighen is a "hard target," which is what a true scientist should be, not one who tells us that things are too difficult to understand, as Schützenberger does in the next response he gave after the first question I quoted above:

". . . Q: What is it that makes functional complexity so difficult to comprehend?

S: The evolution of living creatures appears to require an essential ingredient, a specific form of organization. Whatever it is, it lies beyond anything that our present knowledge of physics or chemistry might suggest; it is a property upon which formal logic sheds absolutely no light. Whether gradualists or saltationists, Darwinians have too simple a conception of biology, rather like a locksmith improbably convinced that his handful of keys will open any lock. Darwinians, for example, tend to think of the gene rather as if it were the expression of a simple command: do this, get that done, drop that side chain. Walter Gehring's work on the regulatory genes controlling the development of the insect eye reflects this conception. The relevant genes may well function this way, but the story on this level is surely incomplete, and Darwinian theory is not apt to fill in the pieces. . . .
"

So, "functional complexity" "lies beyond anything that our present knowledge of physics or chemistry might suggest" does it? Well we can really take that one on in the lab can't we? What this all amounts to is that Intelligent Design theory must either be rethought to advance hard scientific proposals that can be tested or it must be rejected as a scientific explanation. Since the former requires action on the part of the supporters of Intelligent Design, the latter should be the attitude of the scientific community in response.

Some brief additional comments on the rest:

Von Neumann has presented a scientific challenge evolutionary theorists have yet to address fully. But I believe that the field of Biosemiotics -- the quote I put up from Rocha above falls within this category -- is answering his challenge, though the discipline is still in its infancy and its response does not yet qualify as an answer.

And "randomness" taken by itself is meaningless unless it is specifically applied to a scientific formulation of a theory that can be tested. Mathematicians have raised problems with "randomness" in general, some evolutionary theorists are dealing with it as it applies to natural selection, but we cannot use it as an alternative to either explanations for the origin of life on earth or problems within evolutionary theory unless it is attached to a specific theory whose applicability can be tested.

One way or another we are going to have to see something offered by Intelligent Design theorists, or by supporters of the theory, that attempts to make an argument based upon evidence, something like "result x has likely occurred because of evidence y occurring at time z." It is not enough simply to challenge the body of scientific study to the contrary in an attempt to negate its hypotheses and/or conclusions and, as a result, leave Intelligent Design theory as the only possible alternative. That is not scientific under any criteria worthy of being considered as falling within "science."
140 posted on 12/12/2004 5:59:24 PM PST by StJacques


To: StJacques; betty boop; tortoise; Doctor Stochastic; ZellsBells; little jeremiah
Thank you so much for your thorough and thoughtful reply! I particularly enjoyed the article by Francis Heylighen as he raised an issue in the discussion of complexity in biological systems which I had not yet researched – namely, does evolution increase complexity?

In investigating the subject a bit further, I discovered some things you and the others here might also find quite interesting. Chief among these is that there is a formulation for “functional complexity” in complex systems which includes biological systems. The evolutionary biologists, however, seem to question the applicability of the term to their domain and have offered yet another type of complexity, “physical complexity”. Heylighen’s article (1996) did not use the term, which was evidently coined by Adami. Following is an article by Adami raising the concept and the results of modeling the theory:

Physical Complexity

Evolution of biological complexity – Adami

Darwinian evolution is a simple yet powerful process that requires only a population of reproducing organisms in which each offspring has the potential for a heritable variation from its parent. This principle governs evolution in the natural world, and has gracefully produced organisms of vast complexity. Still, whether or not complexity increases through evolution has become a contentious issue. Gould (1), for example, argues that any recognizable trend can be explained by the "drunkard's walk" model, where "progress" is due simply to a fixed boundary condition. McShea (2) investigates trends in the evolution of certain types of structural and functional complexity, and finds some evidence of a trend but nothing conclusive. In fact, he concludes that "something may be increasing. But is it complexity?" Bennett (3), on the other hand, resolves the issue by fiat, defining complexity as "that which increases when self-organizing systems organize themselves." Of course, to address this issue, complexity needs to be both defined and measurable.

In this paper, we skirt the issue of structural and functional complexity by examining genomic complexity. It is tempting to believe that genomic complexity is mirrored in functional complexity and vice versa. Such an hypothesis, however, hinges upon both the aforementioned ambiguous definition of complexity and the obvious difficulty of matching genes with function. Several developments allow us to bring a new perspective to this old problem. On the one hand, genomic complexity can be defined in a consistent information-theoretic manner [the "physical" complexity (4)], which appears to encompass intuitive notions of complexity used in the analysis of genomic structure and organization (5). On the other hand, it has been shown that evolution can be observed in an artificial medium (6, 7), providing a unique glimpse at universal aspects of the evolutionary process in a computational world. In this system, the symbolic sequences subject to evolution are computer programs that have the ability to self-replicate via the execution of their own code. In this respect, they are computational analogs of catalytically active RNA sequences that serve as the templates of their own reproduction. In populations of such sequences that adapt to their world (inside of a computer's memory), noisy self-replication coupled with finite resources and an information-rich environment leads to a growth in sequence length as the digital organisms incorporate more and more information about their environment into their genome. Evolution in an information-poor landscape, on the contrary, leads to selection for replication only, and a shrinking genome size as in the experiments of Spiegelman and colleagues (8). These populations allow us to observe the growth of physical complexity explicitly, and also to distinguish distinct evolutionary pressures acting on the genome and analyze them in a mathematical framework.

If an organism's complexity is a reflection of the physical complexity of its genome (as we assume here), the latter is of prime importance in evolutionary theory. Physical complexity, roughly speaking, reflects the number of base pairs in a sequence that are functional. As is well known, equating genomic complexity with genome length in base pairs gives rise to a conundrum (known as the C-value paradox) because large variations in genomic complexity (in particular in eukaryotes) seem to bear little relation to the differences in organismic complexity (9). The C-value paradox is partly resolved by recognizing that not all of DNA is functional: that there is a neutral fraction that can vary from species to species. If we were able to monitor the non-neutral fraction, it is likely that a significant increase in this fraction could be observed throughout at least the early course of evolution. For the later period, in particular the later Phanerozoic Era, it is unlikely that the growth in complexity of genomes is due solely to innovations in which genes with novel functions arise de novo. Indeed, most of the enzyme activity classes in mammals, for example, are already present in prokaryotes (10). Rather, gene duplication events leading to repetitive DNA and subsequent diversification (11) as well as the evolution of gene regulation patterns appears to be a more likely scenario for this stage. Still, we believe that the Maxwell Demon mechanism described below is at work during all phases of evolution and provides the driving force toward ever increasing complexity in the natural world.
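
To make Adami’s measure a little more concrete, here is a rough illustrative sketch of my own (not code from the paper, and the toy data are hypothetical): estimate the physical complexity of a population of equal-length genomes as the per-site information content, i.e. the maximum per-site entropy minus the observed per-site entropy across the population. Conserved, functional sites contribute close to one unit each; freely varying, neutral sites contribute close to zero.

import math
from collections import Counter

def site_entropy(column, alphabet_size=4):
    """Shannon entropy of one genome position, in units of log base alphabet_size."""
    counts = Counter(column)
    total = sum(counts.values())
    h = 0.0
    for c in counts.values():
        p = c / total
        h -= p * math.log(p, alphabet_size)
    return h

def physical_complexity(population):
    """Rough Adami-style estimate: sum over sites of (1 - per-site entropy).
    Conserved (functional) sites contribute nearly 1; freely varying sites nearly 0."""
    columns = zip(*population)
    return sum(1.0 - site_entropy(col) for col in columns)

# Toy population of four genomes: the first three sites are conserved,
# the last three vary freely across all four nucleotides.
pop = ["ACGACG", "ACGCGT", "ACGGTA", "ACGTAC"]
print(physical_complexity(pop))   # 3.0 -- only the conserved sites carry information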

Does complexity always increase during major evolutionary transitions?

Conclusions: The physical complexity of those regions that do not code for the structural changes increases independently from structural complexity. There is no correlation between fitness of collectives and division of labor (possible causes will be discussed later). On the other hand, there is a correlation between the size of colonies and fitness. Thus, it seems that there is no additional increase of physical complexity other than those regions that code for structural changes…

Seems to me that result ought to renew our interest in “what is functional complexity” with regard to biological systems. Here is the definition from the “complex systems” corner:

Complex Systems

Wikipedia definition

NECSI: Complexity

Complexity is ...[the abstract notion of complexity has been captured in many different ways. Most, if not all of these, are related to each other and they fall into two classes of definitions]:

1) ...the (minimal) length of a description of the system.

2) ...the (minimal) amount of time it takes to create the system.

The length of a description is measured in units of information. The former definition is closely related to Shannon information theory and algorithmic complexity, and the latter is related to computational complexity.
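
A toy illustration of definition (1), my own and not NECSI’s: compressed size gives a crude upper bound on description length, so a string generated by a simple rule needs far fewer bits to describe than a statistically random string of the same length.

import random
import zlib

random.seed(0)
ordered = "AB" * 5000                                            # generated by one short rule
disordered = "".join(random.choice("AB") for _ in range(10000))  # no rule much shorter than itself

for name, text in (("ordered", ordered), ("disordered", disordered)):
    size = len(zlib.compress(text.encode()))
    print(name, "raw:", len(text), "bytes, compressed:", size, "bytes")

# The ordered string collapses to a few dozen bytes (its short description),
# while the pseudo-random one still needs roughly one bit per symbol.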

NECSI: Emergence

Emergence is...

1) ...what parts of a system do together that they would not do by themselves: collective behavior.

2) ...what a system does by virtue of its relationship to its environment that it would not do by itself: e.g. its function.

3) ...the act or process of becoming an emergent system.

According to (1) emergence refers to understanding how collective properties arise from the properties of parts. More generally, it refers to how behavior at a larger scale of the system arises from the detailed structure, behavior and relationships on a finer scale. In the extreme, it is about how macroscopic behavior arises from microscopic behavior.

According to this view, when we think about emergence we are, in our mind's eye, moving between different vantage points. We see the trees and the forest at the same time. We see the way the trees and the forest are related to each other. To see in both these views we have to be able to see details, but also ignore details. The trick is to know which of the many details we see in the trees are important to know when we see the forest.

In conventional views the observer considers either the trees or the forest. Those who consider the trees consider the details to be essential and do not see the patterns that arise when considering trees in the context of the forest. Those who consider the forest do not see the details. When one can shift back and forth between seeing the trees and the forest one also sees which aspects of the trees are relevant to the description of the forest. Understanding this relationship in general is the study of emergence.

Unifying Principles in Complex Systems

Functional complexity

Given a system whose function we want to specify, for which the environmental (input) variables have a complexity of C(e), and the actions of the system have a complexity of C(a), then the complexity of specification of the function of the system is:

C(f) = C(a) · 2^C(e)

Where complexity is defined as the logarithm (base 2) of the number of possibilities or, equivalently, the length of a description in bits. The proof follows from recognizing that a complete specification of the function is given by a table whose rows are the actions (C(a) bits) for each possible input, of which there are 2^C(e). Since no restriction has been assumed on the actions, all actions are possible and this is the minimal length description of the function. Note that this theorem applies to the complexity of description as defined by the observer, so that each of the quantities can be defined by the desires of the observer for descriptive accuracy. This theorem is known in the study of Boolean functions (binary functions of binary variables) but is not widely understood as a basic theorem in complex systems[15]. The implications of this theorem are widespread and significant to science and engineering.
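
The counting argument can be checked directly for small Boolean functions. The following sketch (my own toy verification, assuming nothing beyond the statement quoted above) enumerates every lookup table from C(e) input bits to C(a) output bits and confirms that the number of bits needed to single one out is exactly C(a) · 2^C(e).

import math
from itertools import product

def count_functions(c_e, c_a):
    """Count every function from c_e input bits (environment) to c_a output bits (actions)
    by enumerating all possible lookup tables."""
    inputs = list(product([0, 1], repeat=c_e))    # 2**c_e possible environmental states
    outputs = list(product([0, 1], repeat=c_a))   # 2**c_a possible actions
    return len(outputs) ** len(inputs)            # one action chosen per input row

for c_e, c_a in [(1, 1), (2, 1), (2, 2), (3, 2)]:
    n = count_functions(c_e, c_a)
    print("C(e) =", c_e, " C(a) =", c_a, " functions =", n,
          " log2 =", math.log2(n), " C(a)*2^C(e) =", c_a * 2 ** c_e)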

IMHO, the science of complex systems – function, observer, and emergence – would have captured the same distinction that Michael Behe targeted with his new term, irreducible complexity. By introducing the new term under color of Intelligent Design, he gave the concept a taint of ideology. Compare the above definitions to the definition of irreducible complexity:

Irreducible Complexity

Wikipedia

The term "irreducible complexity" is defined by Behe as:

"a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning" (Michael Behe, Molecular Machines: Experimental Support for the Design Inference)

Believers in the intelligent design theory use this term to refer to biological systems and organs that could not have come about by a series of small changes. For such mechanisms or organs, anything less than their complete form would not work at all, or would in fact be a detriment to the organism, and would therefore never survive the process of natural selection. Proponents of intelligent design argue that while some complex systems and organs can be explained by evolution, organs and biological features which are irreducibly complex cannot be explained by current models, and that an intelligent designer must thus have created or guided life.

You continued with a few other comments:

Von Neumann has presented a scientific challenge evolutionary theorists have yet to address fully. But I believe that the field of Biosemiotics -- the quote I put up from Rocha above falls within this category -- is answering his challenge, though the discipline is still in its infancy and its response does not yet qualify as an answer.

I certainly agree that the field is not fully developed wrt biology (though much progress has been made in the mathematics by Wolfram et al.). Personally, I believe the evolutionary theorists will be brought kicking and screaming to the theory simply because the theory “goes to” complex systems arising from the iteration of simple rules. That would speak against happenstance and for direction. For Lurkers interested in the subject (a small cellular-automaton sketch follows the links):

Self-Organizing Complexity

Cellular Automata – Wikipedia

Self-Organizing Complexity in the physical, biological, and social sciences
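
For Lurkers who would like to see “complex behavior from the iteration of simple rules” on screen, here is a minimal sketch of my own of an elementary cellular automaton (Wolfram’s Rule 30 is used as the default, but any rule number from 0 to 255 can be substituted):

def step(cells, rule=30):
    """One update of an elementary cellular automaton: each cell's next state is the bit
    of 'rule' indexed by its (left, self, right) neighborhood, with wrap-around edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width, generations = 79, 40
row = [0] * width
row[width // 2] = 1                      # a single live cell in the middle
for _ in range(generations):
    print("".join("#" if c else "." for c in row))
    row = step(row)

Each row is produced from the previous one by a single eight-entry lookup table, yet the pattern that unfolds is anything but simple.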

There is, however, a related aspect of the inquiry which I very strongly suspect will bring self-organizing complexity to a head: the application of Claude Shannon’s Theory of Communications to molecular machines.

The key to understanding all the dialogue on the “Chowder Society” is communications. In ordinary conversation – and indeed, in many of the articles from the science community at large – information and message are treated as pretty much the same thing. A database, a letter, a diagram, or DNA would all be considered information, using various symbolizations or languages for comprehension.

But to understand the Shannon definition, one must view information as a reduction of uncertainty in a receiver – the communication itself having been successfully completed. In molecular machines this is vital and can be best visualized by comparing a dead skin cell to a live one. The DNA, or message, is as good dead as alive. But the live skin cell successfully communicates with itself and the environment.

The state changes that occur in the molecular machinery are evidence of the reduction of uncertainty in the receiver. This is akin to the state changes which Rocha indicates would be required in an RNA world (abiogenesis) for self-organizing complexity to begin – i.e., toggling between autonomous communication and communication with the environment.
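
That “reduction of uncertainty in a receiver” can be put in numbers. A toy calculation of my own (not drawn from Shannon’s paper or from Rocha): if the receiver initially regards four states as equally likely, and a successful communication singles out one of them, the information conveyed is the entropy before minus the entropy after.

import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

before = [0.25, 0.25, 0.25, 0.25]   # receiver considers four states equally likely
after = [1.0, 0.0, 0.0, 0.0]        # the message singles out one state

print("uncertainty before:", entropy(before), "bits")              # 2.0
print("uncertainty after: ", entropy(after), "bits")               # 0.0
print("information received:", entropy(before) - entropy(after))   # 2.0 bits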

And "randomness" taken by itself is meaningless unless it is specifically applied to a scientific formulation of a theory that can be tested. Mathematicians have raised problems with "randomness" in general, some evolutionary theorists are dealing with it as it applies to natural selection, but we cannot use it as an alternative to either explanations for the origin of life on earth or problems within evolutionary theory unless it is attached to a specific theory whose applicability can be tested.

I agree that randomness is problematic from the get-go. It is a problem for the theory of evolution as it was originally formulated; that is why I suggest the theory needs to be brought up to date. It is a big problem for abiogenesis, too – because logically there must be some “break point” population from which the theoretical RNA world could be bootstrapped into replication. IOW, it would require much more than a single phenomenon in a vast population of opportunity.

One way or another we are going to have to see something offered by Intelligent Design theorists, or by supporters of the theory, that attempts to make an argument based upon evidence, something like "result x has likely occurred because of evidence y occurring at time z." It is not enough simply to challenge the body of scientific study to the contrary in an attempt to negate its hypotheses and/or conclusions and, as a result, leave Intelligent Design theory as the only possible alternative. That is not scientific under any criteria worthy of being considered as falling within "science."

IMHO, the Intelligent Design fellows at Discovery Institute are essentially minor players wrt the subject of happenstance in biological systems. They are much resented because of an assumed theological agenda – the same objection many of us have against the metaphysical naturalists like Lewontin who promote atheism under the color of science.

To the contrary, IMHO, the whole notion of happenstance is dying with a whimper because of the work of general mathematicians, information theorists, and physicists. Precious few of them have a perceptible metaphysical bias, but they keep coming back to theories which make an unintended theological statement: that biological systems did not arise by happenstance alone. This is much like the determination that there was a beginning – the most theological statement to come out of science.

141 posted on 12/13/2004 10:06:08 AM PST by Alamo-Girl
