Genome Evolution | First a Bang, Then a Shuffle
Posted on 01/31/2003 4:19:03 PM PST by jennyp
Picture an imperfect hall of mirrors, with gene sequences reflecting wildly: That's the human genome. The duplications that riddle the genome range greatly in size, clustered in some areas yet absent in others, residing in gene jungles as well as within vast expanses of seemingly genetic gibberish. And in their organization lie clues to genome origins. "We've known for some time that duplications are the primary force for genes and genomes to evolve over time," says Evan Eichler, director of the bioinformatics core facility at the Center for Computational Genomics, Case Western Reserve University, Cleveland.
For three decades, based largely on extrapolations from known gene families in humans, researchers have hypothesized two complete genome doublings--technically, polyploidization--modified by gene loss, chromosome rearrangements, and additional limited duplications. But that view is changing as more complete evidence from genomics reveals a larger role for recent small-scale changes, superimposed on a probable earlier single doubling. Ken Wolfe, a professor of genetics at the University of Dublin, calls the new view of human genome evolution "the big bang" followed by "the slow shuffle."
It's a controversial area.
"There has been a lot of debate about whether there were two complete polyploid events at the base of the vertebrate lineages. The main problem is that vertebrate genomes are so scrambled after 500 million years, that it is very difficult to find the signature of such an event," explains Michael Lynch, a professor of biology at Indiana University, Bloomington, With accumulating sequence data from gene families, a picture is emerging of a lone, complete one-time doubling at the dawn of vertebrate life, followed by a continual and ongoing turnover of about 5-10% of the genome that began in earnest an estimated 30-50 million years ago. Short DNA sequences reinvent themselves, duplicating and sometimes diverging in function and dispersing among the chromosomes, so that the genome is a dynamic, ever-changing entity.
Duplication in the human genome is more extensive than it is in other primates, says Eichler. About 5% of the human genome consists of copies longer than 1,000 bases. Some doublings are vast. Half of chromosome 20 recurs, rearranged, on chromosome 18. A large block of chromosome 2's short arm appears again as nearly three-quarters of chromosome 14, and a section of its long arm is also on chromosome 12. The gene-packed yet diminutive chromosome 22 sports eight huge duplications. "Ten percent of the chromosome is duplicated, and more than 90% of that is the same extremely large duplication. You don't have to be a statistician to realize that the distribution of duplications is highly nonrandom," says Eichler.
The idea that duplications provide a mechanism for evolution is hardly new. Geneticists have long regarded a gene copy as an opportunity to try out a new function while the original sequence carries on. More often, though, the gene twin mutates into a nonfunctional pseudogene or is lost, unconstrained by natural selection because the old function persists. Or, a gene pair might diverge so that they split a function.
Some duplications cause disease. A type of Charcot-Marie-Tooth disease, for example, arises from a duplication of 1.5 million bases in a gene on chromosome 17. The disorder causes numb hands and feet.
INFERRING DUPLICATION ORIGINS

A duplication's size and location may hold clues to its origin. A single repeated gene is often the result of a tandem duplication, which arises when chromosomes misalign during meiosis, and crossing over distributes two copies of the gene (instead of one) onto one chromosome. This is how the globin gene clusters evolved, for example. "Tandem duplicates are tandemly arranged, and there may be a cluster of related genes located contiguously on the chromosome, with a variable number of copies of different genes," says John Postlethwait, professor of biology in the Institute of Neuroscience at the University of Oregon, who works on the zebrafish genome.
In contrast to a tandem duplication, a copy of a gene may appear on a different chromosome when messenger RNA is reverse-transcribed into DNA that inserts at a new genomic address. This is the case for two genes on human chromosome 12, called PMCHL1 and PMCHL2, that were copied from a gene on chromosome 5 that encodes a neuropeptide precursor. Absence of introns in the chromosome 12 copies betrays the reverse transcription, which removes them.1 (Tandem duplicates retain introns.)
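The intron test described above can be sketched with a toy model (the gene structure and part names below are invented for illustration, not the actual PMCHL loci): a retrotransposed copy derives from spliced mRNA and so lacks introns, while a tandem duplicate keeps them.

```python
# Toy model of why a retrotransposed gene copy lacks introns.
# The gene structure here is invented for illustration.
parent_gene = ["exon1", "intron1", "exon2", "intron2", "exon3"]

# Splicing removes introns before the mRNA leaves the nucleus
mrna = [part for part in parent_gene if part.startswith("exon")]

retro_copy = list(mrna)          # reverse-transcribed mRNA, reinserted elsewhere
tandem_copy = list(parent_gene)  # DNA-level duplication keeps the introns

print("intron1" in retro_copy)   # False: the hallmark of reverse transcription
print("intron1" in tandem_copy)  # True: tandem duplicates retain introns
```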
The hallmarks of polyploidy are clear too: Most or all of the sequences of genes on one chromosome appear on another. "You can often still see the signature of a polyploidization event by comparing the genes on the two duplicated chromosomes," Postlethwait says.
Muddying the waters are the segmental duplications, which may include tandem duplications, yet also resemble polyploidy. "Instead of a single gene doubling to make two adjacent copies as in a tandem duplication, in a segmental duplication, you could have tens or hundreds of genes duplicating either tandemly, or going elsewhere on the same chromosome, or elsewhere on a different chromosome. If the two segments were on different chromosomes, it would look like polyploidization for this segment," says Postlethwait. Compounding the challenge of interpreting such genomic fossils is that genetic material, by definition, changes. "As time passes, the situation decays. Tandem duplicates may become separated by inversions, transpositions, or translocations, making them either distant on the same chromosome or on different chromosomes," he adds.
QUADRUPLED GENES

Many vertebrate genomes appear to be degenerate tetraploids, survivors of a quadrupling--a double doubling from haploid to diploid to tetraploid--that left behind scattered clues in the form of genes present in four copies. This phenomenon is called the one-to-four rule. Wolfe compares the scenario to having four decks of cards, throwing them up in the air, discarding some, selecting 20, and then trying to deduce what you started with. Without quadruples in the sample, it is difficult to infer the multideck origin. So it is for genes and genomes.
"How can you tell whether large duplications built up, or polyploidy broke down? People are saying that they can identify blocks of matching DNA that are evidence for past polyploidization, which have been broken up and overlain by later duplications. But at what point do blocks just become simple duplications?" asks Susan Hoffman, associate professor of zoology at Miami University, Oxford, Ohio.
The idea that the human genome has weathered two rounds of polyploidy, called the 2R hypothesis, is attributed to Susumu Ohno, a professor emeritus of biology at City of Hope Medical Center in Duarte, Calif.2 The first whole genome doubling is postulated to have occurred just after the vertebrates diverged from their immediate ancestors, such as the lancelet (Amphioxus). A second full doubling possibly just preceded the divergence of amphibians, reptiles, birds, and mammals from the bony fishes.
Evidence for the 2R hypothesis comes from several sources. First, polyploidy happens. The genome of flowering plants doubled twice, an estimated 180 and 112 million years ago, and rice did it again 45 million years ago.3 "Plants have lots of large blocks of chromosomal duplications, and the piecemeal ones originated at the same time," indicating polyploidization, says Lynch. The yeast Saccharomyces cerevisiae is also a degenerate tetraploid, today bearing the remnants of a double sweeping duplication.4
Polyploidy is rarer in animals, which must sort out unmatched sex chromosomes, than in plants, which reproduce asexually as well as sexually. "But polyploidization is maintained over evolutionary time in vertebrates quite readily, although rarely. Recent examples, from the last 50 million years or so, include salmonids, goldfish, Xenopus [frogs], and a South American mouse," says Postlethwait. On a chromosomal level, polyploidy may disrupt chromosome compatibility, but on a gene level, it is an efficient way to make copies. "Polyploidy solves the dosage problem. Every gene is duplicated at the same time, so if the genes need to be in the right stoichiometric relationship to interact, they are. With segmental duplications, gene dosages might not be in the same balance. This might be a penalty and one reason why segmental genes don't survive as long as polyploidy," Lynch says.
Traditional chromosome staining also suggests a double doubling in the human genome's past, because eight chromosome pairs have near-doppelgängers in size and band pattern.5 A flurry of papers in the late 1990s found another line of evidence for quadrupling: Gene counts for the human, then thought to be about 70,000, were approximately four times those predicted for the fly, worm, and sea squirt. The human gene count has since been considerably downsized.
Finally, many gene families occur in what Jurg Spring, a professor at the University of Basel's Institute of Zoology in Switzerland, dubs "tetrapacks."6 The HOX genes, for example, occupy one chromosome in Drosophila melanogaster but are dispersed onto four chromosomes in vertebrate genomes.7 Tetrapacks are found on every human chromosome, and include zinc-finger genes, aldolase genes, and the major histocompatibility complex genes.
"In the 1990s, the four HOX clusters formulated the modern version of the 2R model, that two rounds of genome duplication occurred, after Amphioxus and before bony fishes," explains Xun Gu, an associate professor of zoology and genetics at Iowa State University in Ames. "Unfortunately, because of the rapid evolution of chromosomes as well as gene losses, other gene families generated in genome projects did not always support the classic 2R model. So in the later 1990s, some researchers became skeptical of the model and argued the possibility of no genome duplication at all."
THE BIG BANG/SLOW SHUFFLE EVOLVES

Human genome sequence information has enabled Gu and others to test the 2R hypothesis more globally, reinstating one R. His group used molecular-clock analyses to date the origins of 1,739 duplications from 749 gene families.8 If these duplications sprang from two rounds of polyploidization, the dates should fall into two clusters. This isn't exactly what happened. Instead, the dates point to a whole genome doubling about 550 million years ago and a more recent round of tandem and segmental duplications since 80 million years ago, when mammals radiated.
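The logic of such a molecular-clock test can be sketched with simulated data (the ages below are invented, not Gu's measurements): under strict 2R, duplication ages should pile up at two dates, whereas under "big bang plus slow shuffle" one ancient peak sits alongside a broad spread of recent ages.

```python
import random

random.seed(1)
# Simulated duplication ages (millions of years) under the
# "big bang + slow shuffle" view -- NOT real data from the study:
ages = [random.gauss(550, 30) for _ in range(300)]   # one ancient genome doubling
ages += [random.uniform(0, 80) for _ in range(200)]  # ongoing recent duplications

# Bin the ages into 50-My windows and print a crude text histogram;
# two polyploidy rounds would instead produce two tight peaks.
bins = {}
for a in ages:
    bins[int(a // 50) * 50] = bins.get(int(a // 50) * 50, 0) + 1
for start in sorted(bins):
    print(f"{start:4d}-{start + 50:<4d} My: {'#' * (bins[start] // 10)}")
```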
Ironically, sequencing of the human genome may have underestimated the number of duplications. The sequencing required that several copies of the genome be cut up, the fragments overlapped, and the order of bases derived. The assembly algorithm could not distinguish whether a sequence encountered twice was a real duplication, present at two sites in the genome, or the same single-copy region read from two of the cut genome copies.
Eichler and his group developed a way around this methodological limitation. They compare sequences at least 15,000 bases long against a random sample of shotgunned whole genome pieces. Those fragments that are overrepresented are inferred to be duplicated.8 The technique identified 169 regions flanked by large duplications in the human genome.
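The read-depth idea can be sketched as follows (region names, counts, and the threshold are invented for illustration): a sequence present twice in the genome collects roughly twice its share of random shotgun reads, so overrepresented regions are inferred to be duplicated.

```python
# Hypothetical counts of random whole-genome shotgun reads landing in
# five equally sized regions; regionC collects ~2x the typical depth.
read_depth = {
    "regionA": 102, "regionB": 98, "regionC": 205,
    "regionD": 95, "regionE": 101,
}

depths = sorted(read_depth.values())
median = depths[len(depths) // 2]  # robust baseline depth

# Flag regions whose depth is well above the baseline (threshold arbitrary)
duplicated = [r for r, d in read_depth.items() if d >= 1.5 * median]
print(duplicated)  # ['regionC']
```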
Although parts of the human genome retain a legacy of a long-ago total doubling, the more recent, smaller duplications provide a continual source of raw material for evolution. "My view is that both happen. A genome can undergo polyploidy, duplicating all genes at once, but the rate of segmental duplications turns out to be so high that every gene will have had the opportunity to duplicate" by this method also, concludes Lynch. It will be interesting to see how the ongoing analyses of the human and other genome sequences further illuminate the origins and roles of duplications.
Ricki Lewis (email@example.com) is a contributing editor.
You are misreading the article you cite:
The problem with the old studies is that the methods did not recognize differences due to events of insertion and deletion that result in parts of the DNA being absent from the strands of one or the other species.
No, you're misreading it. Britten came up with a more accurate figure for the differences in sequence. He was not trying to measure the number of mutations needed to produce those differences in sequence.
What the above means is simply that because of deletions in each species, the strands selected did not align properly, hence a simple 'alphabetic' comparison of the sequences gave a wrong number. What Britten did, and the reason he revised the figures, is that he properly aligned the strands according to their function. In this way he came up with the more accurate 5% number.
But the DNA hybridization technique "involved collecting tiny snips of the DNA helix from the chromosomes of the two species to be studied". Now, stop & think what this would mean if you had a child that doubled its parents' chromosomes (ex.: from 10 to 20): The hybridization technique would not detect any difference between the two genomes! It would "think" it was just seeing twice as many snips of the child's DNA sample as it was "seeing" of the parent's sample, and would declare the genomes to be exactly 0% different.
In reality such a doubling of the number of chromosomes required one mutation, but the newer sequence comparison technique would correctly conclude that there was a 50% difference in total sequence between parent & child. See? The two techniques are measuring two different things.
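The thought experiment can be sketched in code (the sequences are invented): hybridization effectively asks whether each snippet finds a partner, while a length-aware sequence comparison counts how much of the total sequence has no counterpart.

```python
# A parent genome and a child whose entire genome doubled -- one event.
parent = "ACGTACGTAC"
child = parent * 2

# Hybridization-style test: every parent snippet finds a perfect match
# somewhere in the child, so this method reports 0% difference.
snips = [parent[i:i + 5] for i in range(0, len(parent), 5)]
unmatched = sum(1 for s in snips if s not in child)
print(unmatched)  # 0

# Length-aware comparison: half the child's total sequence is extra,
# so this method reports a 50% difference in total sequence.
extra_fraction = (len(child) - len(parent)) / len(child)
print(extra_fraction)  # 0.5
```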
Now as to neutral mutations, they just cannot spread throughout a species - according to studies made by evolutionists themselves when they were trying to solve the problem posed by genetics. The basis of population genetics is the Hardy-Weinberg principle, which says that in a stable population the genetic mix of the population will remain stable absent any genetic advantage of a particular genetic makeup. What this means is that a neutral mutation in a population of 1 million organisms will continue to be in only 1 millionth of the population if it is neutral. In fact it will likely disappear completely due to chance (if you play a game at odds of 2 to 1 with two dollars long enough you will lose both dollars), so neutral mutations cannot be responsible for these differences in any significant way.
As we have seen, interbreeding often is limited to the members of local populations. If the population is small, Hardy-Weinberg may be violated. Chance alone may eliminate certain members out of proportion to their numbers in the population. In such cases, the frequency of an allele may begin to drift toward higher or lower values. Ultimately, the allele may represent 100% of the gene pool or, just as likely, disappear from it.
IOW, Hardy-Weinberg only helps you if you're talking about a species that does not separate into tribes, so it really is one huge interbreeding population. It might help you if we're talking about promiscuous ocean-dwelling fish, or birds that live in huge flocks, or modern humans, etc. But it does not help you with prehistoric humans.
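The drift dynamics being argued about can be sketched with a toy Wright-Fisher simulation (population size, trial count, and seed are arbitrary choices, not figures from this thread): in a finite population a new neutral allele is usually lost, but it fixes in roughly 1/(2N) of trials, so drift alone can occasionally carry a neutral mutation to 100%.

```python
import random

def neutral_allele_fixes(pop_size, rng):
    """Follow one new neutral allele in a diploid Wright-Fisher
    population until it is lost (0 copies) or fixed (2N copies)."""
    total = 2 * pop_size  # gene copies in a diploid population
    copies = 1            # a single new mutant copy
    while 0 < copies < total:
        p = copies / total
        # Next generation: redraw all 2N copies from the current frequency
        copies = sum(1 for _ in range(total) if rng.random() < p)
    return copies == total

rng = random.Random(42)
trials = 200
fixed = sum(neutral_allele_fixes(pop_size=50, rng=rng) for _ in range(trials))
# Theory: a neutral allele fixes with probability equal to its starting
# frequency, 1/(2N) = 1/100 here, so roughly 2 fixations expected in 200 trials.
print(fixed, "of", trials, "trials ended in fixation")
```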
Due to the above, yes, the differences are 5%. Yes, you need some 150 million mutations. Yes, nearly all of them have to be favorable to have survived.
In conclusion, no, no, and no.
As jennyp has pointed out, there are several different types of differences that can be measured, from direct 1:1 comparisons straight up the line, to other analyses designed to account for insertions, deletions, reversals, and partial or complete gene duplications, in addition to other more exotic mutations.
I also wonder which chimp genome is being compared with which human genome. And how does this compare with a genome comparison between an African bushman and say, an Alaskan Eskimo? Or an Australian Aborigine versus a Brazilian Wari'?
But I think the larger point is this: No one doubts the similarity of chimps and humans. The physical and behavioral similarities are reinforced by genetic analysis, which points unquestionably towards common ancestry. The mechanisms of genetic variation are still under investigation, and the creo side, instead of triumphantly crowing about how these mechanisms couldn't possibly have caused X degree of difference in Y amount of time, needs to present evidence for something that could.
It hardly matters which human genome is measured, as you are closer genetically to any other human being on Earth than are two chimps living on opposite sides of the same mountain in Africa. More evidence that the human genome is slow to change.
I agree with your point that we need to keep refining numerical models so that the numbers better reflect reality- that is what we are trying to do. The answer is to keep at it, like we are trying to do, not tip over the chessboard and speculate that we can't use numbers to calculate anything meaningful. Of course we can, if we try. What scares some is that the meaningful calculations will show the vast improbability of the chimp-man common ancestor.
But I think the larger point is this: No one doubts the similarity of chimps and humans. The physical and behavioral similarities are reinforced by genetic analysis, which points unquestionably towards common ancestry.
Your bias is showing. The common ancestry is unquestionable only to a person who accepts it on faith to begin with. Common Designer is just as valid a hypothesis, unless one dogmatically rules it out in advance regardless of the evidence due to a personal choice.
The mechanisms of genetic variation are still under investigation, and the creo side, instead of triumphantly crowing about how these mechanisms couldn't possibly have caused X degree of difference in Y amount of time, needs to present evidence for something that could.
You have it backwards I am afraid. It is up to the EVO side to show that there are mechanisms which can reasonably produce these changes in the time allowed. We already have "something that could"- an intelligent designer.
The mice that scientists recently inserted jellyfish glow genes into can now glow in the dark. Should civilization end and these critters escape into the wild, I suppose some scientists 500 years from now could speculate that these genes evolved. Others could hypothesize that these genes were the results of intelligent designers manipulating genes.
I don't think the first group of scientists should demand that the second group produce the bodies of those long-gone researchers as "proof" of the design hypothesis. The second group should be able to use stats to show how absurd the idea of evolution is in this case, especially within 500 years.
Granted. Now how much closer? The claim that chimps are 5% or 1.4% different than humans is not particularly informative without a basis for comparison.
Common Designer is just as valid a hypothesis, unless one dogmatically rules it out in advance regardless of the evidence due to a personal choice.
Common Designer implies common ancestry. I'll cop to a slight semantic stretch at this point, of course, but even assuming the existence of a common designer, it still shows that he/she/it/they made people out of monkey parts. The thing is, we have a clear time line for the emergence of primates. Work backwards in time along the fossil record and the skulls of our ancestors become increasingly ape-like.
In the meantime, the Designer Hypothesis is not rejected because of dogma, or anti-religious fervor, or because of powerful mind-control rays from hyper-intelligent, 7-dimensional guppies. It's rejected because of a lack of evidence on the one hand, and a lack of unique predictions on the other. Put together and it's hard to make much of a case.
We already have "something that could"- an intelligent designer.
But no evidence to support that hypothesis. I'm also going to contest that "intelligent" bit. A sophomore in any ME program in the country can propose a laundry list of structural and engineering improvements in the human body this "intelligent designer" fellow somehow overlooked. And this without even putting down their beer or looking up from the PS2.
The second group should be able to use stats to show how absurd is the idea of evolution in this case
Only if the second group was fully conversant with the initial conditions at the period in time when the glow-in-the-dark mice first appeared in the animal kingdom. Without that info, calculating statistical improbability is just guesswork with a slide rule.
Within five posts you completely change your opinion as to the value of this debate. Why is that, C-man?
The chimp and the human genome can be directly compared for similarity. Unfortunately the comparisons are often reported as pure numbers and without the background, I don't know exactly what "5% similarity" actually means in real-world terms. Calculating statistical probabilities of events that have already occurred, a popular ID/Creationist pastime, is an exercise in retroactive astonishment. The odds, for example, that SOME guy wins the lottery are far greater than the odds that it's going to be THAT guy.
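The lottery point is easy to make concrete (the odds below are invented for illustration): the chance that some ticket wins is enormously larger than the chance that a particular ticket wins.

```python
# Probability a specific ticket wins a hypothetical 1-in-10-million lottery
p_that_guy = 1 / 10_000_000
tickets_sold = 10_000_000

# Probability that at least one of the 10 million tickets sold wins
p_some_guy = 1 - (1 - p_that_guy) ** tickets_sold

print(p_that_guy)            # 1e-07: astronomically unlikely for THAT guy
print(round(p_some_guy, 3))  # 0.632: near-certain for SOME guy
```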
The evidence SCREAMS design.
Now whose bias is showing?
I posted a thread which gave a testable design model. It had lots of predictions, but once again evos looked into the telescope and claimed to see nothing.
I recall seeing that thread in existence, but honestly don't remember actively participating. Got a link?
I would say that it would be more a reflection of the ignorance and arrogance of the ME rather than any weakness in the human design itself... If we knew more about how we are really put together, those presumed weaknesses would not look so weak.
For humans, the knee, the spine, and male nipples immediately spring to mind. As do the pelvic structures of cetaceans and snakes, and the eyes of the golden mole. And I'm not even an ME (Mechanical Engineer, for the lurkers). The human body exhibits sufficiency; I would expect an intelligently designed organism to exhibit optimization.
Up until your tagline on #42, you thought this thread was about facts, by post #47, once it was clear the facts were not going your way, you now decide it is about "guesswork".
Again, chimp-human genome comparison studies are about fact. My comment about "guesswork" was in direct response to your statement regarding the ability of scientists 500 years from now to calculate the probability of the natural emergence of a rogue band of free-ranging, glow-in-the-dark mice. Not about chimp-human genome comparisons. My apologies if I was less than clear. (As an aside, I would be quick to point out that glow-in-the-dark mice would likely experience a significant predatory disadvantage compared to their non-luminescent brethren. A blinking, neon "Eat at Joe's" sign comes to mind for some reason... heheh...)
Regarding the "glass hammer" tagline, this thread has garnered almost 50 whole posts in a single week. Didja happen to notice that you and I are currently the only ones participating? The Russian silver fox thread suffered a similar fate. Now compare to the public policy debate threads. The Texas Tech prof thread, for example, was posted a day before this one and is almost up to 350 posts. Generally speaking, policy debates drag on forever, while the new discovery threads are subjected to a flurry of ID/Creationist drive-bys and are quickly abandoned. You are one of the few that actually stick around to hash out the facts, for which I do give you credit.
Ahem, that's nothing; this Dini thread is up to 1142!
You admitted, with your 42 million, that this would be one change fixing itself in the entire human genome every three months- and that was in FUNCTIONING GENES THAT ARE EXPRESSED! This does not seem realistic at all.

No, 1 fixation every 3 months was for all types of mutation:
Let's see... 10 million years divided by 42 million mutations = 1 fixation every .238 years (3 months or so). But keep in mind that there are always many mutations at different locations in the genome working in parallel to get themselves fixed at the same time. How many? I have no idea, but if there were 1000 different alleles out there in the population at the same time that would mean an average allele would have 238 years in which to fixate for the numbers to work out. If there are 100,000 alleles then the average allele has 23,800 years to achieve fixation for the numbers to work out.
10 million years / 1.26 million coding base pair differences = 1 coding fixation every 8 years. Then multiply that by how many coding differences are working towards fixation at any one time. 10 mutations simultaneously working their way means each mutation has 80 years available to fixate, 100 mutations = 800 years per mutation, etc.
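The arithmetic above can be restated as a quick check (all figures are the posters' own assumed numbers, not established values):

```python
years = 10_000_000            # assumed human-chimp divergence time
total_mutations = 42_000_000  # assumed fixed differences, all types
coding_diffs = 1_260_000      # assumed coding base-pair differences

print(years / total_mutations)  # ~0.238 years: one fixation per ~3 months overall
print(years / coding_diffs)     # ~7.9 years: one coding fixation per ~8 years

# Fixations proceed in parallel: with k alleles drifting at once,
# each allele has k times as long to reach fixation.
for k in (1_000, 100_000):
    print(f"{k} concurrent alleles -> {k * years / total_mutations:,.0f} years each")
```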
I went to that link you gave to g3K. Here, from your link are two examples of a LACK of gene change that I found very interesting....

Which link is that? Could you provide the URL? I couldn't find any reference to cheetahs at the Hardy-Weinberg page from Kimball.
Cheetahs, the fastest of the land animals, seem to have passed through a similar period of small population size with its accompanying genetic drift. Examination of 52 different loci has failed to reveal any polymorphisms; that is, these animals are homozygous at all 52 loci.
That's an interesting finding, but I want to see more information about how exactly they were comparing these 52 loci. Were they using gel electrophoresis, or did they do actual letter-by-letter sequencing of 52 genes? It doesn't seem too surprising that an inbred population would all look the same if you don't sequence the DNA letter by letter. IOW, I think there are lots of sequence differences among cheetahs that are under the radar of that study - cheetahs are not all exact clones of each other.
Here's something interesting I found at another cheetah page:
There are seven recognized subspecies of cheetah, distinguished by subtle differences in their coats. The most striking is the king cheetah with spots that have been modified into wide discontinuous bars.
How can there be 7 recognizable subspecies if they're all exactly the same? (aHA!)
Also on that page, it describes the cheetah's habits. It seems they're very mobile - in fact cheetahs can't be housebroken because they don't stay in one place in the wild, so they have no real "home territory" to keep clean. I suspect that African cheetahs (the only population that survived after the bottleneck 10,000 ya) were one of those species I described to g3k, where they were one big population instead of tribes that were securely isolated over time. So maybe cheetahs are an example of Hardy-Weinberg forcing stasis. (Unlike humans & chimps, who both stayed in relatively small, isolated groups where gene drift could work.)
I'm assuming we're using 10 million years in our calculations instead of the more correct 5 million, for that reason.
Condorman, I will take any credit you are willing to give. I am glad to hear that you do want to move forward with a mathematical model. The mouse thing was meant to be a tight analogy with our discussion of evolution vs. intelligent design. I was only trying to show that it is not necessary to reproduce the Creator in a laboratory in order to infer that non-random forces had been at work.
Since you have said that I misinterpreted and you DO think this thought experiment of ours has some value, it does not matter about the analogy. (Also as an aside, I thought the very same thing. Where could such critters prosper? Perhaps in the block crawl space under my house, where they could attract bugs to eat and no large predators can enter (well, maybe a snake.))
So I am being very generous here- whenever the numbers are in doubt, I am conceding the numbers to you two. Whenever there is doubt about how much of the genome is functional (i.e., is 'junk DNA' really 'junk'? I think most of it is not; you think it is), I am conceding it to you. I am bending every step of the way and using your own numbers at every point.
So here it is: we are agreed that the minimum differences in mutations would require one fixation in the population's overall genome every three months, with one fixation in the coding regions every 8 years? Yea or Nay?
That works for me. In fact, since the whole "problem" is how to account for whatever differences there are, it doesn't really matter to me about the coding genes. So for me the supposed problem is in accounting for all the differences. (See? I'm generous too! :-)
If that's what it works out to be then sure.
The next step, I suppose, is to identify possible mechanisms for the change.
No. Let's look at how the original one was done:
The helix at this point would contain one strand from each species, and from there it was a fairly straightforward matter to "melt" the strands to infer the number of good base pairs.
As can be seen, this was a base for base comparison. No adjustments at all being made. Let's continue with the article YOU cited:
The problem with the old studies is that the methods did not recognize differences due to events of insertion and deletion that result in parts of the DNA being absent from the strands of one or the other species.
As can be seen, the original method did not take account of deletions which would put the DNA bases 'out of sync' after a while. This is why with greater knowledge of the genomes of both species (the complete sequencing of the human genome and some partial sequencing of chimps) Britten re-did his work. This time he took account of the deletions to more accurately match the genomes. That he 'refuted' his own work shows, to me at least, that he had good reason to regard this way of comparison as more accurate. (As to the article by John Pickerel, whom I showed to be an evo hack with no credibility in post #37, the less said the better.)
As we have seen, interbreeding often is limited to the members of local populations. If the population is small, Hardy-Weinberg may be violated.
I am well aware of such statements being made by numerous evolutionists. I reject them because they contain numerous half truths. The first half truth (and a half truth is really a complete lie that, because it contains an element of truth, is more believable and thus a better-sounding lie) is the implication that, because Hardy-Weinberg can be violated in a small population, a neutral mutation is likely to take over the whole species from that blast-off point. That is false.
Let's continue with the example of the population of a million in the species and let's say that the 'tribe' of 100 gets a neutral mutation and it spreads through it. Well, if the 'tribe' gets mixed into the general population (somehow, sometime, somewhere) then Hardy-Weinberg will be in effect again and those carrying the neutral mutation will be only 1/10,000 of the species and will remain so BECAUSE THIS MUTATION IS NEUTRAL. So again this neutral mutation will not take over the population or even become a significant part of the overall genome pool of the species. So this argument is bunk.
There is an even bigger problem, though, with these mutations becoming, through a small inbred group, part of the genome pool of the whole species. It is a scientific fact that harmful mutations far exceed all other mutations. It is a scientific fact that inbreeding is harmful for the tightly inbred group. What this means is that the inbred group will become much less viable due to the inbreeding, and that any neutral mutations within it will (if the group does not die off due to the harmful mutations) disappear when (or if) it joins the larger group, because those harmful mutations mean the inbred group is less viable and less 'fit' than the main group.
What all the above amounts to is that you do need pretty close to 150,000,000 favorable mutations between man and chimp in just some 10,000,000 years of divergence (according to evolutionists). The problem is that we have not been able to find, in decades of experimentation, a single such mutation creating the greater complexity needed to account for the differences between man and chimp.
Sorry I have been kind of busy the last few days and was unable to respond sooner. I hope you enjoy my response just above. :)