To: StJacques; betty boop; tortoise; Doctor Stochastic; ZellsBells; little jeremiah
Thank you so much for your thorough and thoughtful reply! I particularly enjoyed the article by Francis Heylighen as he raised an issue in the discussion of complexity in biological systems which I had not yet researched – namely, does evolution increase complexity?

In investigating the subject a bit further, I discovered some things you and the others here might also find quite interesting. Chief among these is that there is a formulation of “functional complexity” for complex systems, including biological systems. The evolutionary biologists, however, seem to question the applicability of the term to their domain and have offered yet another type of complexity, “physical complexity”. Heylighen’s article (1996) did not use the term, which was evidently coined by Adami. Following is an article by Adami introducing the concept, along with results of modeling the theory:

Physical Complexity

Evolution of biological complexity – Adami

Darwinian evolution is a simple yet powerful process that requires only a population of reproducing organisms in which each offspring has the potential for a heritable variation from its parent. This principle governs evolution in the natural world, and has gracefully produced organisms of vast complexity. Still, whether or not complexity increases through evolution has become a contentious issue. Gould (1), for example, argues that any recognizable trend can be explained by the "drunkard's walk" model, where "progress" is due simply to a fixed boundary condition. McShea (2) investigates trends in the evolution of certain types of structural and functional complexity, and finds some evidence of a trend but nothing conclusive. In fact, he concludes that "something may be increasing. But is it complexity?" Bennett (3), on the other hand, resolves the issue by fiat, defining complexity as "that which increases when self-organizing systems organize themselves." Of course, to address this issue, complexity needs to be both defined and measurable.

In this paper, we skirt the issue of structural and functional complexity by examining genomic complexity. It is tempting to believe that genomic complexity is mirrored in functional complexity and vice versa. Such an hypothesis, however, hinges upon both the aforementioned ambiguous definition of complexity and the obvious difficulty of matching genes with function. Several developments allow us to bring a new perspective to this old problem. On the one hand, genomic complexity can be defined in a consistent information-theoretic manner [the "physical" complexity (4)], which appears to encompass intuitive notions of complexity used in the analysis of genomic structure and organization (5). On the other hand, it has been shown that evolution can be observed in an artificial medium (6, 7), providing a unique glimpse at universal aspects of the evolutionary process in a computational world. In this system, the symbolic sequences subject to evolution are computer programs that have the ability to self-replicate via the execution of their own code. In this respect, they are computational analogs of catalytically active RNA sequences that serve as the templates of their own reproduction. In populations of such sequences that adapt to their world (inside of a computer's memory), noisy self-replication coupled with finite resources and an information-rich environment leads to a growth in sequence length as the digital organisms incorporate more and more information about their environment into their genome. Evolution in an information-poor landscape, on the contrary, leads to selection for replication only, and a shrinking genome size as in the experiments of Spiegelman and colleagues (8). These populations allow us to observe the growth of physical complexity explicitly, and also to distinguish distinct evolutionary pressures acting on the genome and analyze them in a mathematical framework.

If an organism's complexity is a reflection of the physical complexity of its genome (as we assume here), the latter is of prime importance in evolutionary theory. Physical complexity, roughly speaking, reflects the number of base pairs in a sequence that are functional. As is well known, equating genomic complexity with genome length in base pairs gives rise to a conundrum (known as the C-value paradox) because large variations in genomic complexity (in particular in eukaryotes) seem to bear little relation to the differences in organismic complexity (9). The C-value paradox is partly resolved by recognizing that not all of DNA is functional: that there is a neutral fraction that can vary from species to species. If we were able to monitor the non-neutral fraction, it is likely that a significant increase in this fraction could be observed throughout at least the early course of evolution. For the later period, in particular the later Phanerozoic Era, it is unlikely that the growth in complexity of genomes is due solely to innovations in which genes with novel functions arise de novo. Indeed, most of the enzyme activity classes in mammals, for example, are already present in prokaryotes (10). Rather, gene duplication events leading to repetitive DNA and subsequent diversification (11) as well as the evolution of gene regulation patterns appears to be a more likely scenario for this stage. Still, we believe that the Maxwell Demon mechanism described below is at work during all phases of evolution and provides the driving force toward ever increasing complexity in the natural world.
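
As an aside before the next excerpt: the “physical complexity” Adami describes can be estimated from a population of adapted sequences as, roughly, the sequence length minus the per-site entropy summed over all sites, so that conserved (functional) sites count toward complexity while freely varying sites do not. Here is a minimal sketch of that idea – my own illustration, not code from the paper, with a toy alphabet and population made up purely to show the arithmetic:

    import math
    from collections import Counter

    def per_site_entropy(column, alphabet_size=4):
        # Shannon entropy of one genome site, with the log taken base
        # alphabet_size so a fully random site has entropy 1.
        counts = Counter(column)
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total, alphabet_size)
                    for c in counts.values())

    def physical_complexity(population):
        # Adami-style estimate: sequence length minus summed per-site entropy.
        # `population` is a list of equal-length, aligned sequences.
        length = len(population[0])
        total_entropy = sum(
            per_site_entropy([seq[i] for seq in population]) for i in range(length)
        )
        return length - total_entropy

    # Toy population: four conserved sites (ACGT) followed by two freely varying sites.
    population = ["ACGTAA", "ACGTCG", "ACGTTT", "ACGTGC"]
    print(physical_complexity(population))  # 4.0 -- only the conserved sites "count"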

Does complexity always increase during major evolutionary transitions?

Conclusions: The physical complexity of those regions that do not code for the structural changes increases independently from structural complexity. There is no correlation between fitness of collectives and division of labor (possible causes will be discussed later). On the other hand, there is a correlation between the size of colonies and fitness. Thus, it seems that there is no additional increase of physical complexity other than those regions that code for structural changes…

Seems to me that result ought to renew our interest in “what is functional complexity” with regard to biological systems. Here is the definition from the “complex systems” corner:

Complex Systems

Wikipedia definition

NECSI: Complexity

Complexity is ...[the abstract notion of complexity has been captured in many different ways. Most, if not all of these, are related to each other and they fall into two classes of definitions]:

1) ...the (minimal) length of a description of the system.

2) ...the (minimal) amount of time it takes to create the system.

The length of a description is measured in units of information. The former definition is closely related to Shannon information theory and algorithmic complexity, and the latter is related to computational complexity.
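
Definition (1) above can be made concrete with a crude experiment: use a general-purpose compressor as a stand-in for minimal description length (a common, though imperfect, proxy for algorithmic complexity). This is only a sketch of the idea, not anything from the NECSI page:

    import os
    import zlib

    def description_length(data: bytes) -> int:
        # Compressed size in bytes: a rough upper bound on the minimal description length.
        return len(zlib.compress(data, 9))

    regular = b"AB" * 500        # highly regular: a short rule ("repeat AB 500 times") describes it
    random_ = os.urandom(1000)   # incompressible: its shortest description is roughly the data itself

    print(description_length(regular))   # small -- a few dozen bytes
    print(description_length(random_))   # close to 1000 bytes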

NECSI: Emergence

Emergence is...

1) ...what parts of a system do together that they would not do by themselves: collective behavior.

2) ...what a system does by virtue of its relationship to its environment that it would not do by itself: e.g. its function.

3) ...the act or process of becoming an emergent system.

According to (1) emergence refers to understanding how collective properties arise from the properties of parts. More generally, it refers to how behavior at a larger scale of the system arises from the detailed structure, behavior and relationships on a finer scale. In the extreme, it is about how macroscopic behavior arises from microscopic behavior.

According to this view, when we think about emergence we are, in our mind's eye, moving between different vantage points. We see the trees and the forest at the same time. We see the way the trees and the forest are related to each other. To see in both these views we have to be able to see details, but also ignore details. The trick is to know which of the many details we see in the trees are important to know when we see the forest.

In conventional views the observer considers either the trees or the forest. Those who consider the trees consider the details to be essential and do not see the patterns that arise when considering trees in the context of the forest. Those who consider the forest do not see the details. When one can shift back and forth between seeing the trees and the forest one also sees which aspects of the trees are relevant to the description of the forest. Understanding this relationship in general is the study of emergence.

Unifying Principles in Complex Systems

Functional complexity

Given a system whose function we want to specify, for which the environmental (input) variables have a complexity of C(e), and the actions of the system have a complexity of C(a), then the complexity of specification of the function of the system is:

C(f) = C(a) × 2^C(e)

Where complexity is defined as the logarithm (base 2) of the number of possibilities or, equivalently, the length of a description in bits. The proof follows from recognizing that a complete specification of the function is given by a table whose rows are the actions (C(a) bits) for each possible input, of which there are 2^C(e). Since no restriction has been assumed on the actions, all actions are possible and this is the minimal length description of the function. Note that this theorem applies to the complexity of description as defined by the observer, so that each of the quantities can be defined by the desires of the observer for descriptive accuracy. This theorem is known in the study of Boolean functions (binary functions of binary variables) but is not widely understood as a basic theorem in complex systems[15]. The implications of this theorem are widespread and significant to science and engineering.
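
To see the theorem in numbers, here is a back-of-envelope sketch (the figures are hypothetical, chosen only to show how quickly the specification table grows):

    def functional_complexity(c_env_bits, c_action_bits):
        # C(f) = C(a) * 2^C(e): bits needed to tabulate the action for every possible input.
        return c_action_bits * 2 ** c_env_bits

    # A Boolean function of 3 input bits with 1 output bit needs an 8-row truth table,
    # i.e. 8 bits to specify it, and there are 2^8 = 256 such functions.
    print(functional_complexity(3, 1))        # 8
    print(2 ** functional_complexity(3, 1))   # 256

    # A system mapping a 20-bit environment to a 4-bit action is already enormous to specify:
    print(functional_complexity(20, 4))       # 4 * 2^20 = 4,194,304 bits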

IMHO, the science of complex systems – function, observer, and emergence – had already drawn the same distinction that Michael Behe targeted with his new term, irreducible complexity. By introducing a new term under color of Intelligent Design, Behe gave the concept a taint of ideology. Compare the above definitions to the definition of irreducible complexity:

Irreducible Complexity

Wikipedia

The term "irreducible complexity" is defined by Behe as:

"a single system which is composed of several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning" (Michael Behe, Molecular Machines: Experimental Support for the Design Inference)

Believers in the intelligent design theory use this term to refer to biological systems and organs that could not have come about by a series of small changes. For such mechanisms or organs, anything less than their complete form would not work at all, or would in fact be a detriment to the organism, and would therefore never survive the process of natural selection. Proponents of intelligent design argue that while some complex systems and organs can be explained by evolution, organs and biological features which are irreducibly complex cannot be explained by current models, and that an intelligent designer must thus have created or guided life.

You continued with a few other comments:

Von Neumann has presented a scientific challenge evolutionary theorists have yet to address fully. But I believe that the field of Biosemiotics -- the quote I put up from Rocha above falls within this category -- is answering his challenge, though the discipline is still in its infancy and the response is not yet adequate to be qualified as an answer.

I certainly agree that the field is not fully developed wrt biology (though much progress has been made in the mathematics by Wolfram et al.). Personally, I believe the evolutionary theorists will be brought kicking and screaming to the theory simply because the theory “goes to” complex systems arising from the iteration of simple rules – see the small sketch after the links below. That would speak against happenstance and for direction. For Lurkers interested in the subject:

Self-Organizing Complexity

Cellular Automata – Wikipedia

Self-Organizing Complexity in the physical, biological, and social sciences
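
To give Lurkers a feel for “complex systems arising from the iteration of simple rules,” here is a minimal sketch of an elementary cellular automaton (Wolfram’s rule 110, one of the rules discussed in that literature). The rule looks only at a cell and its two neighbors, yet the pattern it generates is famously intricate. This is my own toy illustration, not code from any of the linked sources:

    def step(cells, rule=110):
        # One step of an elementary cellular automaton (Wolfram rule numbering).
        # Each new cell depends only on itself and its two neighbors (wrapping at the edges).
        n = len(cells)
        new = []
        for i in range(n):
            neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
            new.append((rule >> neighborhood) & 1)
        return new

    # Start from a single live cell and iterate the simple local rule.
    cells = [0] * 79 + [1]
    for _ in range(40):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)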

There is, however, a related aspect of the inquiry which I very strongly suspect will bring self-organizing complexity to a head: the application of Claude Shannon’s theory of communication to molecular machines.

The key to understanding all the dialogue on the “Chowder Society” is communication. In ordinary conversation – and indeed, in many of the articles from the science community at large – information and message are treated as pretty much the same thing. A database, a letter, a diagram, or DNA would all be considered information, using various symbolizations or languages for comprehension.

But to understand the Shannon definition, one must view information as a reduction of uncertainty in a receiver – the communication itself having been successfully completed. In molecular machines this is vital and can be best visualized by comparing a dead skin cell to a live one. The DNA, or message, is as good dead as alive. But the live skin cell successfully communicates with itself and the environment.

The state changes that occur in the molecular machinery are evidence of the reduction of uncertainty in the receiver. This is akin to the state changes which Rocha indicates would be required in an RNA world (abiogenesis) for self-organizing complexity to begin – i.e., toggling between autonomous communication and communication with the environment.
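
The “reduction of uncertainty in a receiver” can be put in numbers with a toy calculation: the information conveyed is the receiver’s uncertainty before the message minus its uncertainty after. The probabilities below are made up purely for illustration:

    import math

    def entropy(probs):
        # Shannon entropy in bits.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Receiver's uncertainty before communication: four equally likely messages.
    prior = [0.25, 0.25, 0.25, 0.25]

    # After receiving the signal, the receiver's belief sharpens (a toy posterior).
    posterior = [0.85, 0.05, 0.05, 0.05]

    print(entropy(prior))                       # 2.00 bits of uncertainty before
    print(entropy(posterior))                   # about 0.85 bits remaining
    print(entropy(prior) - entropy(posterior))  # about 1.15 bits of information conveyed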

And "randomness" taken by itself is meaningless unless it is specifically applied to a scientific formulation of a theory that can be tested. Mathematicians have raised problems with "randomness" in general, some evolutionary theorists are dealing with it as it applies to natural selection, but we cannot use it as an alternative to either explanations for the origin of life on earth or problems within evolutionary theory unless it is attached to a specific theory whose applicability can be tested.

I agree that randomness is problematic from the get-go. It is a problem for the theory of evolution as it was originally formulated, which is why I suggest the theory needs to be brought up to date. It is a big problem for abiogenesis, too – because logically there must be some “break point” population from which the theoretical RNA world could be bootstrapped into replication. IOW, it would require much more than a single phenomenon in a vast population of opportunity.

One way or another, we are going to have to see something offered by Intelligent Design theorists or supporters of the theory that attempts to make an argument based upon evidence, something like "result x has likely occurred because of evidence y occurring at time z." It is not enough to simply challenge the body of scientific study to the contrary in an attempt to negate its hypotheses and/or conclusions and, as a result, leave Intelligent Design theory as the only possible alternative. That is not scientific under any criteria that are worthy of being considered as falling within "science."

IMHO, the Intelligent Design fellows at Discovery Institute are essentially minor players wrt the subject of happenstance in biological systems. They are much resented because of an assumed theological agenda – the same objection many of us have against the metaphysical naturalists like Lewontin who promote atheism under the color of science.

To the contrary, IMHO, the whole notion of happenstance is dying with a whimper because of the work of general mathematicians, information theorists, and physicists. Precious few of them have a perceptible metaphysical bias, but they keep coming back to theories which make an unintended theological statement: that biological systems did not arise by happenstance alone. This is much like the determination that there was a beginning – the most theological statement to come out of science.

141 posted on 12/13/2004 10:06:08 AM PST by Alamo-Girl


To: Alamo-Girl
That was a very thorough and well-formatted reply, Alamo-Girl. I am getting even further behind, though, since I still owe betty a response on the "Plato" thread, so I must defer until I can address what you have raised in a more comprehensive manner. Given the completeness of your post, I think it would be untoward of me to simply dismiss what you have written, and I won't do that when I do respond, because I agree with some of what you have stated. So I will ask your indulgence to wait until I have a little more free time on my hands before I reply. I'm just stopping by during a work break, and I am entering a busy week in which I have to complete some daunting job tasks that will occupy a good deal of my time, some of it likely running beyond regular working hours.

I'll be back later, though I cannot say just when; I'll try to get in here tonight. I owe betty a response first.
142 posted on 12/13/2004 11:20:00 AM PST by StJacques

To: Alamo-Girl; StJacques

The biosemiotics approach looks very interesting! Here's a link to Friedrich Salomon Rothschild, "protosemiotician":

http://www.ut.ee/SOSE/sss/anderson311.pdf

Just found it, and it looks fascinating. I'll need some time to digest it. Maybe you might find it of interest, too?

Wonderful essay, Alamo-Girl! Thank you so very much!


145 posted on 12/14/2004 6:57:22 AM PST by betty boop
