Subject: Mathematics and creation
} rift between mathematicians & biologists
} exponential population growth
} Various conceivable patterns fail to emerge
} Complexity from Simplicity
> Here's an interesting story... (I think)... In 1967, a few
> mathematicians and biologists were chatting over a picnic lunch
> organised by Victor Weisskopf, prof. of physics at MIT. A "weird"
> discussion took place as the conversation turned to the subject of
> evolution by natural selection. The mathematicians were stunned by
> the optimism of the evolutionists about what could be achieved by
> chance. The wide rift between the participants led them to organise a
> conference on "Mathematical Challenges to the Neo-Darwinian Theory of
> Evolution"...(skip to the conference)... which opened with a paper by
> Murray Eden, Prof. of Electrical Engineering at MIT, entitled "The
> Inadequacy of Neo-Darwinian Evolution as a Scientific Theory". Eden
> showed that if it required a mere six mutations to bring about an
> adaptive change, this would occur by chance only once in a billion
> years --while, if two dozen genes were involved, it would require
> 10,000,000,000 years, which is much longer than the age of the earth.
> (See Gordon R. Taylor's "The Great Evolution Mystery"). "Since
> evolution does occur and has occurred, something more than chance
> mutation must be involved."
> Von Neumann & complexity
It's hard to see how the described "wide rift" between biologists and
mathematicians could exist, since most of the population geneticists
I know *are* mathematicians--like my thesis advisor, a PhD in Statistics.
Population genetics is an intrinsically mathematical subject, as
my students discovered, to their great dismay, about two weeks into the
course I TA'ed on the subject.
I get a little angry when people imply that evolution is casually
refutable, and was refuted (by a professor of electrical engineering?)
decades ago. Do they really think that two decades of bright,
dedicated biologists would stick to a theory that this kind of argument
could overturn?
Adaptive change by mutation has been shown in the laboratory and is
not in question. It is quite easy to demonstrate in bacteria, and
advantageous forms which were generated by the co-occurrence of multiple
mutations are quite possible. Three points are usually being missed
by people who make Prof. Eden's mistake:
1. Disadvantageous forms can persist in the population for a long time;
2. Multiple ways to the same end (multiple mutations giving the same
result) are not only possible but common;
3. Intermediate steps often have a non-obvious advantage in themselves,
making them targets of natural selection.
Seriously, there is something badly wrong with the mathematicians'
models if this story is true. In the first place, there is no
necessity for each mutation to occur from a blank slate - virtually all
species carry a fair amount of standing diversity. In the second place,
there is a considerable amount of recombination - even between loci on
the same chromosome (crossover) (or maybe the mathematician has never
heard of sex :-). Thirdly, the rate of mutation can be measured and is
significantly higher than what appears to be implied by the fixing of 6
mutations in 1 billion years. Fourthly, if any intermediate forms have
any slight advantage (due to partial implementation of the feature),
then those forms will be selected -- and selection is NOT a random process.
Fifthly, many single point mutations have similar or identical effects
(that is, it wouldn't be necessary for 6 specific mutations to occur,
only one from each of 6 different sets, a much easier problem).
All I can figure is that the model assumes a population of a single
homozygous individual whose progeny never exchange any genetic material
and in which the mutated genes never recombine by crossover during meiosis.
In other words, sort of like analyzing the aerodynamics of racehorses by
assuming a spherical horse.
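The "fifthly" point is easy to quantify. A minimal sketch in Python, where the mutation rate and the number of equivalent mutations per set are purely illustrative assumptions (not Eden's figures):

```python
# Illustrative numbers only - assumed for the sketch, not Eden's actual figures.
mu = 1e-8            # assumed mutation rate per site per generation
sites_per_set = 100  # assumed count of distinct mutations with the same effect

# Probability of one SPECIFIC mutation at a given site:
p_specific = mu
# Probability that ANY of the equivalent mutations in a set occurs:
p_set = 1 - (1 - mu) ** sites_per_set   # ~ sites_per_set * mu for small mu

# Six specific mutations vs. one mutation from each of six sets:
p_six_specific = p_specific ** 6   # ~1e-48
p_six_sets = p_set ** 6            # ~1e-36

print(p_six_sets / p_six_specific)  # the "sets" version is ~10^12 times likelier
```

With 100 interchangeable sites per set, the six-sets event is about a trillion times more probable than six specific hits, before even counting the effect of fixing the mutations one at a time.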
Sounds like he's talking about six simultaneous mutations, which may
very well be statistically phenomenal. Evolution does not require them
to be simultaneous, however; once one mutation is replicating throughout
a group of related organisms, the odds go up that one of them will
develop another significant mutation in addition to the one they are now
carrying.
} - exponential population growth
And by the same exponential growth law we would be up to our armpits in
roaches. This obviously does not happen; therefore there are other
constraints. What leads Creationists to conclude that the exponential
growth constants for a 50-year sample apply to 5000 years? This is known
as "extrapolating beyond the region of known fit".
The growth curve is exponential. The population origin can
be extended back much further in time, and the recent doublings
are bunched together.
I love seeing exponential growth used by those unaware of the basics of
its derivation. You can use the same reasoning to show that we should be
up to our armpits in fruit flies every 3 years or so...
According to U.N. figures, the world population in 1650 was 508 million,
up from 200-300 million in 1 AD. This corresponds to a growth rate of
0.032 to 0.057% per year during much of recorded history, far lower than
the "sickly 0.5%" used here.
5000 years of growth at 0.057% would increase the population by a factor
of 17, much less than the 7*10^10 implied by a rate of 0.5%.
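The arithmetic is easy to check. A quick sketch using the figures quoted above:

```python
# Growth rate implied by 200-300 million in 1 AD -> 508 million in 1650:
years = 1650 - 1
for p0 in (200e6, 300e6):
    r = (508e6 / p0) ** (1.0 / years) - 1
    print("%.3f%% per year" % (r * 100))   # ~0.057% and ~0.032%

# Population growth factor over 5000 years at each claimed rate:
print((1 + 0.00057) ** 5000)  # ~17-fold at the historical rate
print((1 + 0.005) ** 5000)    # ~7e10-fold at the "sickly 0.5%"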
} - Various conceivable patterns fail to emerge, despite an overwhelming
}   tendency to diversify.
There is always luck. If the mutation does not occur, you cannot select
for it. Evolution is not aimed; that's a deity's job. Evolution handles
the current entity, not some future not-yet-conceived entity for some
future purpose.
} Complexity from Simplicity
There was no primordial chaos before the big bang - not really. Instead,
everything was neatly concentrated in one location. Then it scattered,
and is still scattering - a disorderliness far exceeding the structural
order of the galaxies, stars, planets, and life forms which have appeared
in the course of the process.
"Science & Creation"
Analog, Sept 1983
Re the information example: it is easy to get VERY complicated systems
containing a tremendous amount of information starting from very simple,
low-information systems. Two methods:
1. Fractal structures - start with a very simple rule and repeat it over
and over and over. The resulting structure can be (and usually is) VERY
complicated, while the formation equations remain very, very simple. And
the universe has had a long time to iterate. Example: look at a snowflake.
2. Chaos - you can get very, very complicated behavior if you use
nonlinearities in the progression. That is why weather forecasting
doesn't work beyond a few days.
Complexity does not imply design. Recursion or nonlinearity work quite well.
And the world is recursive and very non-linear.
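Chaos is easy to demonstrate. A minimal sketch using the logistic map, the textbook example of a nonlinear recursion (the parameter r = 4 and the starting values are arbitrary choices):

```python
# One-line nonlinear rule: x -> r * x * (1 - x)
def logistic(x, r=4.0):
    return r * x * (1 - x)

# Two starting points differing by one part in a billion:
a, b = 0.400000000, 0.400000001
max_gap = 0.0
for step in range(50):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The microscopic initial difference is amplified until the two runs
# disagree completely - the reason long-range forecasting fails.
print(max_gap)
```

One simple rule, iterated, and the output is effectively unpredictable without infinitely precise knowledge of the starting state.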
I went and got "Theory of Self-Reproducing Automata" by Von Neumann.
Did you know that it was published in 1966, before most of the chaos and
fractal work?
At first look, I see that this is NOT applicable to life in the way
Micha tried to apply it in <10541@dasys1.UUCP>. Looking at section 5.3.2,
"Self-Reproducing automata" we find that, under his constraints, the
secondary (initially quiescent) automaton is identical to the parent,
except that the constructing automaton is larger, and in a sense more
complex, because the construction automaton contains the complete
plan and a unit which interprets and executes this plan. This should
NOT apply to biological forms as discussed here because:
The plan IS the unit that executes itself. In Mary's term, the life is
and, what I consider more relevant
The constructed automaton IS NOT A DUPLICATE of the constructing automaton.
In no parent unit that I am aware of (excluding fission reproduction, in
which the parent unit cannot be identified afterwards) is the child a
duplicate of the parent. In every case that I am aware of, the constructed
unit is a simpler and much smaller unit, which grows OF ITSELF into a
near-copy of the original. Since the complexity is added AFTER the
reproduction process, the reproduction process should not be a limiting
factor. Proof: watch almost ANYTHING grow up.
Therefore, while the descendant is INITIALLY simpler than the parent, its
final state can be more complex. Therefore, the argument that information
theory proves that life could not have come from non-life is invalid.
BTW: New systems of cooperating parts have evolved, and they are not even
biological. See "The Evolution of Cooperation", in particular the
computer simulations in which the routines "decide" ON THEIR OWN that
cooperation is "better".
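The simulations referred to are Axelrod's computer tournaments. A toy sketch, reduced to two strategies for brevity (the payoff values are the standard ones from the book):

```python
# Prisoner's dilemma payoffs: (my score, their score) for each move pair.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each side sees the other's history
        move_b = strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # cooperate first, then copy the opponent's previous move
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

print(play(tit_for_tat, tit_for_tat))     # (600, 600): cooperators thrive
print(play(always_defect, always_defect)) # (200, 200): defectors stagnate
print(play(tit_for_tat, always_defect))   # (199, 204): defection barely pays
```

Mutual cooperators end up with triple the score of mutual defectors, which is how cooperation "wins" the tournament without anyone designing it in.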
} simulations cannot produce effects
>> And I am as sure of
>>the statement "Selection can change the frequencies of variants",
>>since I've done computer simulation to test it. That's most
>>of evolutionary theory right there.
>Mary, that is very interesting. Could you describe how you modeled
>selection pressure? Anything that I have seen (i.e., non-technical info)
>is so vague about what selection is that I have no idea how to model it.
>Examples like the light vs. dark moths seem too simplistic to me. They show
>how the number of species (or the amount of variation) can decrease but
>give me no hint as to how the number of species can increase.
Directional selection (selection "for" or "against" something) in a static
environment will lose variation. To get a more interesting result, you
can look at either of two things:
1. Selection which is not directional. Here are some examples:
Frequency dependent selection. Forms which are rare are at an advantage.
There are several decent real-world examples of this; female fruit flies
prefer males who look "different", and animals which have immune system
genes different from their neighbors' seem less likely to get
diseases from them.
Heterozygote advantage. The organism with two different forms of the gene
has an advantage over others. The classical example is sickle-cell
anemia in humans, where the person with one sickle and one normal allele
is protected from malaria.
Two kinds of selection pulling in different directions. For example,
females may prefer brightly colored males, but so may predators. Some
values for the parameters here will give a balance of different
forms in the population.
2. Non-static environments. This is much harder to model, but interesting.
You can easily get frequency-dependent selection out of an environment
with two food sources, both subject to overexploitation. Environments
which change over time, either randomly or in a cycle, can also maintain
variation.
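The frequency-dependent case above is easy to sketch. A minimal model in Python, where the fitness penalty (0.5 × own frequency) is an arbitrary illustrative choice:

```python
# Two forms, A and B; each form is fitter when it is rarer.
p = 0.9          # start with form A at 90% of the population
for gen in range(200):
    w_a = 1.0 - 0.5 * p         # A's fitness falls as A becomes common
    w_b = 1.0 - 0.5 * (1 - p)   # and likewise for B
    # standard one-locus selection update: weight each form by its fitness
    p = p * w_a / (p * w_a + (1 - p) * w_b)

# Instead of one form taking over, both are maintained:
print(p)   # converges to 0.5
```

Directional selection in this framework drives p to 0 or 1 and loses the variation; make fitness depend on frequency and the population settles at a stable mix of both forms.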
The simplest model I know in which something like speciation can be seen
to happen is one that contains two factors:
There is a gene with two variants, and the heterozygote is worse than
either homozygote.
There is the possibility for evolving reproductive isolation based on that
gene.
Reproductive isolation could be modeled in several ways. You could
explicitly add a gene that controls mate recognition. You could arrange
your simulated organisms on a grid, restrict most mating to near
neighbors, and see if two populations separate from an initial mixture.
Don't forget that if you use random rather than strictly proportional
selection (that is, if you use a random number to see who lives
and who dies), population size makes a huge difference. It is almost
impossible to maintain high variability in a tiny population, even
with strong selection.
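The population-size effect is easy to see in a sketch. Here random survival (genetic drift) is pitted against selection favoring the heterozygote; the fitness values and seed are arbitrary assumptions for illustration:

```python
import random

# Two alleles with heterozygote advantage (assumed fitnesses for the
# sketch: AA = 0.9, Aa = 1.0, aa = 0.9), plus random survival (drift).
def generations_until_fixed(pop_size, seed, limit=10_000):
    random.seed(seed)
    p = 0.5                        # frequency of allele A
    for gen in range(limit):
        if p == 0.0 or p == 1.0:
            return gen             # one allele fixed: variation is gone
        # deterministic selection step (heterozygote advantage):
        w_bar = 0.9 * p * p + 2 * p * (1 - p) + 0.9 * (1 - p) ** 2
        p = (0.9 * p * p + p * (1 - p)) / w_bar
        # random sampling of 2N gametes (the drift step):
        count = sum(random.random() < p for _ in range(2 * pop_size))
        p = count / (2 * pop_size)
    return limit                   # still variable after `limit` generations

print(generations_until_fixed(10, seed=2))    # tiny population: fixes quickly
print(generations_until_fixed(500, seed=2))   # large population: variation lasts
```

Even though selection actively favors keeping both alleles, sampling noise in a population of 10 wipes out the variation in short order, while the larger population holds it.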
> Von Neumann clearly showed that more complicated systems cannot come
> from less complicated systems. Information gets downgraded.
I got the Scientific American article (September 1964, vol. 211) on
interlibrary loan (too early for local holdings) to read "Mathematics in
the Biological Sciences". The lead-in is:
"Biologists use mathematics, but the complex systems they study resist
mathematical description. The kind of description that might someday
be helpful is suggested by the abstract analysis of self-reproduction."
Note that this was before most of the developments in Chaos Theory, which
does what they are talking about. Most of the examples in the article are
concerned with modeling specific systems (primarily neurons), cellular
automata (see newsgroup comp.theory.cell-automata), and a couple of other
systems. The relevance to evolution is not brought up until the last
column on the last page. The author questions whether the approach is
applicable at all, though he says the specifics are probably achievable.
The last sentence:
"Nobody has yet done the engineering design work required to build such a
machine, but I think it will someday be built."
Not exactly the posture of one "disproving" the concept, is it?
I checked out the Von Neumann book _Theory of Self-Reproducing Automata_
and am currently reading the fifth lecture, "Re-evaluation of the Problems
of Complicated Automata - Problems of Hierarchy and Evolution". The third
lecture, "Statistical Theories of Information", brings in the thermodynamic
issues as they relate entropy, enthalpy, and information. A worthwhile
note is that Von Neumann points out that no local system is closed, and that
local increases are easily accommodated by decreases elsewhere.
A major disagreement that I have so far is the difference between the
automata discussed and actual systems.
1. The cellular automata discussed are assumed to be in their final state.
No further increases in complexity are apparently allowed. This is
blatantly false for actual systems, which continuously grow.
2. The descendants are perfect copies of the parents. In fact, this is one
of Von Neumann's criteria for self-reproduction. In reality (with the
exception of very few EXTREMELY simple systems), the "child" is nowhere
near as complex as the parent. As a major difference, the child is not
capable of reproduction. Therefore, actual biological systems do not fit
within Von Neumann's definitions of self-reproducing automata.
These two differences alone can account for evolutionary change without
the need for a parent to produce (directly) a more complex child. The
parent simply produces a LESS complicated child, which is NOT a replication
of the parent, and which then grows into a system more complicated than the
parent. This can be observed happening when a tree produces seeds
(which as-is are incapable of reproduction and are plainly NOT duplicates
of the tree), which in turn develop of themselves into a grove, which is
more complicated. (See Von Neumann's book on how he measures complexity -
for those in a hurry, the editor's introduction covers it in one place,
with examples, rather than spreading it across numerous lectures.)
Did you bother to read _Theory of Self-Reproducing Automata_? The
fifth lecture is entitled "Re-Evaluation of the Problems of
Complicated Automata - Problems of Hierarchy and Evolution". After
a good bit of evaluation & discussion Von Neumann writes:
"There is thus this completely decisive property of complexity, that
there exists a critical size below which the process of synthesis
is degenerative, but above which the phenomenon of synthesis, if
properly arranged, can become explosive, in other words, where
synthesis of automata can proceed in such a manner that each automaton
will produce other automata which are more complex and of higher
potentialities than itself."
Further down the same page he discusses why a system of "less than
a dozen kinds of elements" is all that is needed.
Such a system could come into being in some way other than being
constructed by a system designed to do so. Luck, for instance.
A close comparison with Turing's machine is done on the next page.
And one page further we get what appears to be an analogy to Mary's
definition of life: "it might be quite complicated to construct a
machine which will copy an automaton that is given it, and that it is
preferable to proceed, not from original to copy, but from verbal
description to copy." This "description" almost appears to be Mary's
"language".
Therefore, this entire string is fallacious.
First, Von Neumann was not talking about the development of systems like
those we observe (he said as much early in the quoted chapter), and
those differences make the development trivial. And it turns out that
EVEN WITH THESE AUTOMATA, evolution (once past the simplest stage) is
possible.