Saturday, April 27, 2024

The Mathematical Challenges of Darwinism

Unanswered Mathematical and Computational Challenges facing Neo-Darwinism as a Theory of Origins


The computational capacity of the universe suggests an upper bound to the number of transformational steps available for any theory of origins.

The ratio between this computational upper bound and the number of ways a molecular cell can be structured indicates the size of the probability space that any theory should be able to traverse on the basis of a clearly supported mathematical model.

This is one of the key challenges facing a naturalistic explanation of life that this paper reviews in the context of neo-Darwinism.



Prologue - a thought experiment

In 1999 Time Magazine listed Kurt Gödel as one of the most influential mathematicians of the 20th century.   As a colleague of Albert Einstein, Gödel was able to converse with Einstein on equal terms about the theory of Relativity - once producing a novel solution to Einstein's field equations that caused Einstein to express concerns about his own theory.

Gödel also once expressed a concern about the mathematical underpinnings of evolution.  Writing to his colleague Hao Wang he noted "The formation within geological time of a human body by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components."    

For Kurt Gödel - whose key insight demolished the cumulative efforts of a century of mathematical formalism embodied in Russell and Whitehead's three volume "Principia Mathematica" - the laws of physics were too simple in nature to account for biological complexity in available time.

Given this basic concern from the twentieth century's most eminent logician, it seems reasonable to expect that 150 years of research would have uncovered a demonstrable mathematical model - one that unambiguously charts how order arises de novo through chance processes, and one described with the degree of precision and abstraction normal in all fields of mathematics and physics research.

Given the ubiquitousness of the theory of evolution such a model would be as justly famous as the theory of Relativity - with its key equations quoted regularly in the press and routinely appearing in the appendix of virtually every biological textbook.


Seen this mathematics recently?   Or are we confusing observations of common descent with unquantified common assent?

Galileo demonstrated in the 16th century that no matter how widely supported or accepted a theory may be, without a demonstrable mathematical foundation it will ultimately fail the test of time.

Consider the makeup of our universe:
Approximately 10^17 seconds have elapsed since the big bang.
Quantum physics limits the maximum number of states an atom can go through to 10^43 per second (the inverse of Planck time, i.e. the smallest physically meaningful unit of time).
The visible universe contains about 10^80 atoms.

It seems reasonable to conclude that no more than 10^140 chemical reactions have occurred in the visible universe since the big bang (i.e. 10^(17+43+80)).

Following from this, evolution needs to be theoretically demonstrable within 10^140 molecular state transitions.

(For comparative purposes see Seth Lloyd's "Computational Capacity of the Universe" [r62], reviewed by the Economist [r70].  Lloyd comes up with a value of 10^120.)
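
For readers who want to check the arithmetic, the bound can be reproduced in a few lines of Python - a minimal sketch using only the round figures quoted above (10^17 seconds, 10^43 transitions per second, 10^80 atoms):

    from math import log10

    seconds_since_big_bang = 1e17       # ~10^17 s, as above
    transitions_per_second = 1e43       # inverse of the Planck time
    atoms_in_visible_universe = 1e80    # ~10^80 atoms

    # Work with exponents (log10) to keep the numbers manageable.
    log_bound = (log10(seconds_since_big_bang)
                 + log10(transitions_per_second)
                 + log10(atoms_in_visible_universe))

    print(f"Upper bound on state transitions: 10^{log_bound:.0f}")     # 10^140
    print(f"Ratio to Lloyd's 10^120 estimate:  10^{log_bound - 120:.0f}")

Lloyd's tighter figure of 10^120 simply shaves twenty orders of magnitude off the same envelope; nothing in what follows depends on which of the two bounds is used.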

Take a step back and look at the makeup of a cell:

The Ribosome - a key RNA-protein complex that translates messenger RNA into protein - contains about 250,000 key functional atoms. [r73]    Assume we are provided with the correct ingredients for a hypothetical, primitive Ribosome of just 2,000 atoms.   How many ways are there that this structure can be arranged in three dimensions en route to the initial viable structure that kick-starts the evolutionary process?

By reducing the size of the Ribosome and artificially constraining atomic interactions to those of a comparatively easily understood mechanical model such as a Rubik cube we have a simple way to calculate a lower bound for the number of possible permutations.   The number of ways of arranging a Rubik cube with 20 elements per side is approximately 10^1,477.

The key logistical issue is that the number of ways of organizing the atomic makeup of this elementary Ribosome model exceeds the number of transitions the universe is physically capable of supporting by a factor of 10^1,337. (Or 10^1,357 if you use Lloyd's model, which takes available energy into account).   This makes it difficult for any probabilistic model to traverse the solution space.

In particular, 10^140 / 10^1,477 suggests that since the start of the universe all stochastic models would have been able to explore a maximum of just 
 
0.000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000001%

of the solution space in search of the correct configuration for a 2,000 atom Ribosome.   To put this in perspective, the ratio of the width of the observable universe to that of a hydrogen atom is of the order of 1 to 10^36.    Alternatively, all stochastic models are physically unable to cover more than a millionth of the width of a human hair in a search for the correct biological configuration in a solution space that covers the width of the visible universe in the time that has elapsed since the Big Bang.

Probably the most non-intuitive aspect is that exploring the total solution space of something as simple as a 20 unit sided Rubik cube exceeds the computational capacity of the universe by a factor of 10^1,337.  And the percentage of potential configurations that can be visited by any physical process (including neo-Darwinism) becomes exponentially smaller as more complex molecular structures are considered.
If we look at the number of ways the 250,000 atoms of a modern Ribosome can be assembled using the Rubik model, the size of the solution space is around 10^162,221 - any hope of a thorough traversal of the solution space is well out of reach irrespective of method.
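
The shortfall is easy to reproduce.  The sketch below works entirely with exponents, using only the figures quoted above (10^140 available transitions, 10^1,477 arrangements for the 2,000 atom model, 10^162,221 for the 250,000 atom model):

    log_transitions_available = 140       # computational bound derived earlier
    log_states_small_model    = 1477      # 2,000 atom, 20x20x20 Rubik-style model
    log_states_full_ribosome  = 162221    # 250,000 atom model

    # Fraction of the small model's solution space reachable since the Big Bang.
    log_fraction = log_transitions_available - log_states_small_model
    print(f"Explorable fraction: 10^{log_fraction}")          # 10^-1337
    print(f"As a percentage:     10^{log_fraction + 2} %")    # 10^-1335 %

    # The gap only widens for the full-size model.
    print(f"Full ribosome shortfall: 10^{log_states_full_ribosome - log_transitions_available}")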

But even if we assume a primeval 20x20x20 Ribosome arose through molecular happenstance - what next?   Evolution cannot begin unless we can replicate the first Ribosome.

So to start evolution we require both the Ribosome and an encoded duplicate of the Ribosome - in the form of precursor mRNA - floating in the vicinity of the Ribosome.   In information theoretic terms this could be a chance of the order of 1 in 10^2,956.

Hidden in this statement is a puzzle that exercised the imagination of Karl Popper, widely regarded as one of the greatest philosophers of science of the 20th century:

"What makes the origin of life and of the genetic code a disturbing riddle is this: the genetic code is without any biological function unless it is translated; that is, unless it leads to the synthesis of the proteins whose structure is laid down by the code.   ....  The code cannot be translated except by using certain products of its translation. This constitutes a really baffling circle: a vicious circle, it seems for any attempt to form a model, or a theory, of the genesis of the genetic code."  [r69]

Notwithstanding how two complex interlocking systems that cannot exist without each other managed to arise simultaneously together with a grammar of translation appropriate to all biological forms, there is a fundamental housekeeping issue. Even if the Ribosome managed to latch on to a perfectly encoded mRNA precursor organized in a loop and started churning out hundreds of copies of itself, without any cell wall the newly duplicated Ribosomes would simply wash out to sea.

In real life the molecular probability stack reaches higher.  To function correctly a cell requires around 10,000,000 Ribosomes: approximately 7,000 are produced each minute and as each Ribosome contains about 80 proteins we can infer about 500,000 Ribosomal proteins are synthesized in the Cytoplasm per minute.  [r72]

In human engineering terms this is the equivalent of creating and powering a factory containing 10 million assembly lines, where half a million of the components created every minute are assembled into 7,000 new assembly lines per minute - with worn out parts recycled where practical.

And sustainable life requires lysosomes, the Golgi apparatus, mitochondria, centrioles, the endoplasmic reticulum, etc. in active cellular support.
Given that no more than 10^140 flips of the biological coin are available it is worth comparing this number with the number of ways a cell of 100,000,000,000,000 atoms can be arranged in three dimensions.

Such a number would define the size of the solution space that evolution would have had to traverse.    Using the simplified Rubik cube model, the number of ways of arranging 100,000,000,000,000 surface cells is of the order of 10^64,600,000,000,000.

Quantifying this in everyday terms: if each arrangement counted by this number were represented by just one cubic millimeter, the total would fill the visible universe unimaginably many times over - and the exponent alone means the number would take some 65 trillion digits just to write out in full.

Note that we are not saying that evolution can't get there.   This simply compares the size of the solution space that incorporates life with the 10^140 steps available to reach that point irrespective of the stochastic method.

A similar issue is that unless all steps required to create cellular life are incrementally accessible across this landscape the journey will not be able to complete. The size of the probability landscape suggests any a priori assumption of incremental reach may be speculative, a perspective supported by the discovery of a range of enzymes (highlighted later in this paper) required to speed up critical cellular reactions whose uncatalyzed timescales exceed the entire evolutionary timeframe: processes with half-lives of 1.1 trillion years, 2.3 billion years, 78 million years etc. Just how does an enzyme evolve incrementally to speed up a reaction whose half-life is 100 times the age of the universe?
 
Factor in the total number of cells in the body - estimated at around 100,000,000,000,000 - with each containing say 100,000,000,000,000 atoms - and it becomes clear that evolution's generation-by-generation, unguided, sequential traversal of the total solution space en route to a fully functioning human body required extraordinarily lucky leaps in view of the limiting 10^140 transitions.

Consider the total number of molecules that occupy a one litre container of air at sea level.   The probability that we find these molecules spontaneously occupying just one tenth of this volume is 1 in 10^10,000,000,000,000,000,000,000. [r74]

Thus while physics does not forbid the spontaneous inflation of an uncapped bicycle tyre, the probability of this event occurring is not at all dissimilar to the probability that life arose by chance.

Gödel's assessment was right on the money.


So what?

In the fifth year of school our class teacher introduced us to "the fish that walked" - or at least a fish that had once been party to the definitive ancestral genus of terrestrial life in the popular imagination.    Interest piqued, I took the opportunity to read my parents' copy of "Old Four Legs" by Prof. J.L.B. Smith - the remarkable story of the discovery that the long fossilized Coelacanth was indeed healthy, well and living as it were in our own back yard.  [r1], [r2]

One particular page in the book embossed itself into my mind with the longevity of a fossil footprint - two adjacent black-and-white photographic plates highlighting anatomical differences between a fossilized Coelacanth and a modern day species.   My evolutionary epiphany was not so much the shock that scientists had been mistaken in hailing Latimeria chalumnae as a long lost cousin at a crucial juncture in the "Tree of Life" but being unable to discern any differences between photos representing nearly 400 million years of evolutionary history.    

With my trust in the collegiate "it is now more or less accepted" covered in an indelible layer of dust I resolved that in future I would review scientific theories first hand - at least to the point where I had had a chance to validate underlying principles for myself.

Later Richard Dawkins was to liken evolution to a mountain of apparent impassability - but one that nevertheless could be climbed, albeit slowly and steadily given abundant time, a steady stream of mutations and natural selection.  After millions of years Mount Improbable would disgorge incredible machines like the human eye, the brain, our integrated senses, the human body - which together with a sense of dispassionate and objective knowledge would permit the triumph of rationalism over the forces of chaos, ignorance and superstition.

Unfortunately the Coelacanth was not the only fossil to swim on in blissful ignorance of this type of assertion.   Salamanders remained virtually unchanged from fossils 150m years old, as did butterflies (50m), cockroaches (300m), cycads (200m), the lungfish (350m), cicadas (150m), nautiluses (500m), velvet worms (500m), Neopilina mollusks (500m), alligators (230m), Ginkgo trees (270m), the Wollemi Pine (175m), the 8mm wasp of the family Xyelidae (200m), silverfish (350m), horsetails (325m), Amborella trichopoda (130m) [r4], slit snail gastropods (500m) [r5], horseshoe crabs (500m), the ragworm Platynereis dumerilii (600m), Gromia sphaerica (1.8b) [r81], Priapulus caudatus (505m) [r86]  etc.

Perhaps most surprisingly in the light of the hypothesis of continuous, undirected mutations, the geological column records all animal phyla appearing in just the first 2% of the column - without precursors.   G.G. Simpson noted in a paper prepared for the Darwin Centenary Symposium that above the level of species, gaps in the fossil record are both systematic and typically large - a record of continuous discontinuities [r14].   And within species, morphological changes are consistent with gene segregation rather than these postulated "de novo" mutations - rates of change inferred from the fossil record do not extrapolate back correctly to e.g. a common mammalian ancestor. [r3]

Darwin's expectation was that the fossil record would vindicate his theory by providing an unambiguous line of changes linking all species - a continuum of fossils populating the "Tree of Life".   After 150 years, this defining hope remains confined to dotted lines, artistic renderings and interpolations in text books.  

Clearly, if it takes more than 50,000 coordinated morphological changes to change a land dwelling creature into a whale, books that support the theory of evolution by the device of assuming it to be true in the context of an explanation - such as "the baleen whale skull became modified to feed upon concentrations of plankton" [r10] - are presenting only 0.002% of the picture.   Extrapolating a completed picture from single pieces of the puzzle may offer the appearance of a solution but can lead to an incomplete or naïve understanding of the detailed issues at the biochemical level.

Gerald Schroeder, a nuclear physicist familiar with atomic and molecular dynamics, wrote that he found the arguments of Steven Pinker, Stephen Jay Gould and Richard Dawkins to be compelling until he started studying molecular biology.   He says "[Now] knowing the complexity of the processes involved, when we see a diagram showing how simple evolution is, how one organ can change into another merely by adding a feature here and there, we must realize those demonstrations are a farce.   As long as the intricate working of the cell are disregarded, there's no problem ... to talk of random mutations producing the goods of life."    [r7]

Similarly, the world's most influential philosophical atheist, Antony Flew, announced in January 2004 that as a result of the progressive discoveries in the field of molecular biology over the last half century he was renouncing atheism in favour of non-specific theism.   [r8], [r9]

The question at the centre of the debate is whether the "top down" approach of biological inference traditionally favoured by biologists can be made to meet the "bottom up" approach of mathematical analysis typically favoured by dissenters and doubters - who, for the record, include atheists, agnostics, and people of every permutation of religious opinion.    It has never been particularly scientific or accurate to dump neo-Darwinian dissenters into a polemical punch bag labeled "fundamentalists".

In any event the neo-Darwinist camp can easily silence their critics by producing an appropriate, measurable, and indisputable mathematical framework for their biological model.     The straightforward device of framing observations in mathematical terms enabled physicists to demonstrate that the earth goes round the sun and that the universe had a beginning.   The universal jurisprudence of mathematics made it unnecessary for physicists to take internal controversies such as the "Big Bang" before courts of law - mathematics is quite capable of speaking for itself.

Gregory Chaitin, an IBM Research Fellow, put it eloquently in a lecture entitled "Speculations on biology, information and complexity" he presented in Auckland in 2006:   "If  Darwin's theory is as simple, fundamental and basic as its adherents believe, then there ought to be an equally fundamental mathematical theory ... that expresses these ideas with the generality, precision and degree of abstractness that we are accustomed to demand in pure mathematics."   [r49]

The ultimate criterion of the quality of any branch of science is its mathematical foundation.
 



Naturalism - is truth history in reverse?

The theory of evolution rests largely on the primary assumption of naturalism - the view that our universe is a materialist's paradise, consisting of absolutely nothing more than atoms juggling in a beneficent thermodynamic dance.   If this scenario could be demonstrated as valid then all religious assumptions would engage us at the same level as tabloid assertions that ancient civilizations were once visited by aliens in flying saucers.  

For example the miracles and resurrection of Jesus would have been impossible and people who believe life extends beyond the molecular would be most to be pitied.   

But what if the material cosmos is really all there is, was or will be? 

By definition the materialist's cosmos is a "closed" system made up solely of molecules and energy - in particular containing no independent or other volitional "first causes".    It is thus a mechanical system in which every event is determined by and only by previous events within the system.  

Within such a universe every living species is in effect a molecular "play back head" of history.   Thoughts, intentions and actions reflect only the passing incidence of molecular history.    

In this system "choice" is subsumed by Richard Dawkins' view that design in nature is an illusion - as value systems are ultimately nothing more than byproducts of random particulate processes in turn chained together over billions of years.     Thus, because life is simply a manifestation of molecular history, we are only deluding ourselves if we think we are somehow objectively independent of this molecular inheritance.

Our minds are thus no more than the cerebral equivalent of a snowflake - having the appearance of amazing complexity and uniqueness but ultimately totally mechanical and explainable - the equivalent to a molecular rearview mirror.

This leads to interesting anomalies - for example the secular biologist teaching evolution to a university class has no more real control over his or her choice of worldview than the student believing the universe to be created by God.   In both cases "choice" is first and foremost a molecular pantomime scripted by past events - perhaps involving parents, schools, food, weather and subatomic particles - but at the most fundamental level devoid of independent volitional intent in every sphere - including the thinking process.    In a most profound sense our minds were cast at the time of the Big Bang, our cherished beliefs nothing more than subsequent molecular happenstance against the backdrop of a large vacuum.

Unsurprisingly proponents of evolution have proposed various mechanical explanations to describe how - in the interests of logic - the mind can break free from this confining "event horizon" of mechanism.  Popular postulates include that choice and consciousness may "arise" from a closed mechanical system through the interaction of quantum gravity in microtubule-like structures in the brain, or that consciousness "must be" the product of a sufficiently complex electro-mechanical infrastructure containing positive and negative feedback loops.   

As it turns out, all phenomena in nature stem from a handful of physical laws that can be summarized in mathematical form on one side of a sheet of paper - as in Table 1.   Together with the universal constants - Table 2 - they provide a thumb-nail sketch of the dynamical characteristics of the universe.

A point to note is that it is widely accepted that these laws describe a universe finely "tuned" to support life - for example starting with cosmologists' observations that the vacuum energy components present at the time of the "Big Bang" needed to cancel out to one part in 10^120, or Roger Penrose's calculation that the chance of finding ourselves in a universe with usable energy (i.e. low entropy at the time of the Big Bang) is as low as one part in 10^(10^123).   Thus even if we were to postulate millions of sequential or parallel universes, the chance of finding ourselves in the one universe with energy to sustain life is smaller than one in a googolplex.  [r16]

In respect of biology Dr Walter Bradley comments:  "The emission spectrum for the sun not only peaks at an energy level which is ideal to facilitate chemical reactions essential for life but it also peaks in the optical window for water. Water is 10^7 times more opaque to ultraviolet and infrared radiation than it is to radiation in the visible spectra (or what we call light). Since living tissue in general and eyes in particular are composed mainly of water, communication by sight would be impossible were it not for this unique window of light transmission by water being ideally matched to the radiation from the sun. Yet this matching requires carefully prescribing the values of the gravity and electromagnetic force constants as well as Planck's constant and the mass of the electron."

While it is impressive that all the laws of the universe appear to line up rather niftily to facilitate life on earth, the rub is that there is absolutely nothing in these dynamical equations suggestive of a pathway that could lead to the appearance of information with the complexity evident in the works of Shakespeare, Boeings, lunar landers or Fermat's long lost "marvellous demonstration" of his Last Theorem.

Contrast this with the theory of evolution, which asserts that informational complexity is no more than a by-product of self-replicating molecular structures subject to mutations and natural selection.     Given a primitive cell, the sun's energy and the thermodynamic characteristics of the universe, the assertion is that increasingly rich information will inevitably (or, more modestly, could) be produced.     The only part of the process dependent on entirely fortuitous, undirected circumstances is the initial self-replicating molecular structure.

The evolutionary explanation of life is thus "top down":

  • Life exists

  • Natural selection is the best (non-theistic) explanation we have for life

  • Natural selection must therefore account for the complexity of life and all information systems no matter how improbable it seems.

By contrast the "bottom up" view can be summarised along these lines:

  • The universe is described by a handful of dynamical equations.

  • The probability of informational complexity arising as a result of any purely stochastic system of laws is vanishingly small.

  • The observed informational complexity of life vastly exceeds anything a mutational model can deliver within the age of the universe.

A seldom documented problem is that the epistemological foundation of evolution rests entirely on the state of the universe - the assertion that truth is ultimately defined by nothing more than the location and velocity of molecules in what is in effect a cosmic pinball machine.    

Even if we assume our knowledge of mathematics is somehow independent of this quagmire - a reality we have to assume somehow exists independently of the limits of our molecularly directed perception - we are faced with Kurt Gödel's seminal "Incompleteness Theorems" demonstrating that "mechanical rules" or proofs do not subsume truth [r17].     Thus proof is always a weaker notion than truth [r53], so everyone has to make an implicit or explicit "leap of faith" with respect to their choice of worldview.

For the naturalist the documented intelligibility of the universe is a puzzle in search of an explanation.   For the theist, anything other than an intelligible universe would constitute a mystery:  in the final analysis the "Big Bang" and anthropic principle are not altogether unexpected. 



Surely there has been enough time, Dr. Watson?

The theory of evolution depends entirely on the veracity of a self-replicating molecule arising on purely stochastic grounds.   One would expect this to involve a modest stretch of time.    

How much time?    In "The Blind Watchmaker" Richard Dawkins suggests that attaining such a probability is simply a matter of having the correct perspective - for example, if we lived for 100 million years and happened to play bridge we would not be surprised to see something as improbable as a perfect bridge hand - where each player is dealt a complete suit - turning up from time to time.    So it would not be so improbable after all.

How accurate is this assumption?   Calculating the chance of one perfect deal where we play bridge 100 times a day for 100 million years comes out at a paltry 1.63x10^-15 - the same degree of delight one would associate with winning a lottery event twice.  [r36], [r42]   Which is not quite "from time to time".   To improve this to a chance of say one in a million we would have to keep playing a hundred games a day for 61,238,285,120,420,996 years.   By contrast, the best current estimate for the age of the earth weighs in at a mere 4,567,000,000 years.
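
The bridge figure is straightforward to verify - a minimal sketch, assuming a "perfect deal" means each of the four players receives a complete thirteen-card suit:

    from math import comb

    # Number of distinct deals: 13 cards to each player in turn (~5.4 x 10^28).
    total_deals = comb(52, 13) * comb(39, 13) * comb(26, 13)
    perfect_deals = 24                          # 4! ways of assigning the four suits

    p_perfect = perfect_deals / total_deals     # ~4.5 x 10^-28 per deal
    deals_played = 100 * 365.25 * 100_000_000   # 100 deals a day for 100 million years

    print(f"{p_perfect * deals_played:.2e}")    # ~1.63e-15 expected perfect deals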

Even mathematicians occasionally underestimate probability - in "A Brief History of Time" Stephen Hawking mentions that monkeys pounding away on keyboards will "very occasionally" by pure chance type out one of Shakespeare's sonnets.    The calculation for the sonnet "Shall I compare thee to a summer's day" shows that the chance is about 1 in 10^690, i.e. 1 followed by 690 zeros. [r78]   As there have only been 10^18 seconds since the Big Bang and there are about 10^80 atoms in the visible universe it is difficult to see where 10^690 fits in comfortably.    Physical limits on monkeys and keyboards mean we would have to cycle through the heat death or final collapse of the universe in excess of 10^600 times to obtain a single sonnet.
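
The sonnet figure can be sanity-checked in the same spirit.  The sketch below assumes a target of roughly 488 letters drawn from a 26-letter alphabet - a length implied by the quoted 1 in 10^690 figure rather than an exact count of the sonnet's text:

    from math import log10

    alphabet_size = 26
    sonnet_letters = 488          # assumed length, implied by the quoted figure

    log_p = -sonnet_letters * log10(alphabet_size)
    print(f"Chance per attempt: 10^{log_p:.0f}")          # ~10^-690

    # A very generous ceiling on attempts: one per atom per second since the Big Bang.
    log_attempts = 80 + 18
    print(f"Shortfall: 10^{-log_p - log_attempts:.0f}")   # ~10^592 even on this ceiling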

Could the universe have cycled through this many Big Bangs?   Thermodynamics dictates that the number of photons will increase relative to other particles with each cycle, thus given a finite number of particles, the entire universe would eventually be reduced to photons.   Our ability to read tells us the universe is not based on infinite cycles.

Of course, this is not saying that an event of 1 in 10^690 could not take place - rather it is an objective measure of how unexpected life is given the assumption of a reductionist universe.

The informational complexity inherent in a sonnet parallels the informational complexity that defines a protein.  Just as a sonnet is assembled from 26 letters (ignoring punctuation, spaces and capitals), proteins are assembled (by previously assembled proteins) from strings of 20 distinct amino acids ranging in length from 20 (Trp-cage) to 26,926 (Titin).   Proteins are the "work-horse" of biological life - each cell in the human body produces about 2,000 per second, and as each protein is produced it is folded by other proteins or self-folds into the complex three-dimensional shape required to activate its chemical function.

The most abundant proteins associated with the DNA of eukaryotes are the Histones.   As they are essential for maintaining the structural integrity of DNA and have a role in the transcription process they are structurally intolerant to change.   Histone H4 contains about 104 amino acids and differs in two or three places across a wide range of species.    The high level of invariance of Histone H4 with respect to cellular replication suggests it is a candidate for examining probabilities associated with the formation of an equivalent protein in a primeval cell on the basis of chance.

With 104 amino acids, there are 20^104 ways a primeval equivalent of Histone H4 could have been arranged through chance.      For convenience, we approximate 20^104 by 2 x 10^135.

If we assume that the entire observable universe - approximately 10^80 atoms - was available to manufacture the very first Histone H4 equivalent protein - at an average of 10 atoms per amino acid - we would have 10^77 amino acids available.   If we spent all 10^18 seconds since the Big Bang cycling through all possible proteins using all the available resources of the universe once every second we would have generated a maximum of 10^95 proteins.     Thus the chance of obtaining one Histone H4 equivalent protein using all the resources in the universe for a workable primeval cell would be 1 in 10^40.
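
A sketch of the same arithmetic, again working with exponents and the round figures quoted above:

    from math import log10

    log_sequences = 104 * log10(20)     # 20^104 possible 104-residue chains
    log_trials    = 77 + 18             # ~10^77 chains per second for ~10^18 seconds

    print(f"Possible sequences: 10^{log_sequences:.1f}")    # ~10^135.3, i.e. ~2 x 10^135
    print(f"Maximum trials:     10^{log_trials}")           # 10^95
    print(f"Odds of success: 1 in 10^{log_sequences - log_trials:.0f}")   # ~1 in 10^40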

To put things in perspective, a perfect bridge hand would be 4,473,877,774,353 times more likely.

But this is only step one.   As noted, a newly manufactured protein has to be correctly folded into its specific three-dimensional shape to become chemically active - if this step depended entirely on random processes it would take a small 100 amino acid protein in the region of 10^87 seconds to work through all possible configurations - 10^69 times longer than the entire age of the universe.      Amino acids have hydrophilic and hydrophobic facets that are assisted into the correct shape by accessory proteins - some of which are synthesised along with the main protein itself.   [r37]

Replication of the DNA template from which proteins are derived is itself carried out by proteins.  In prokaryotic cells DNA polymerase III is a complex of seven subunits ranging from 300 to 1,100 amino acids in length; one does the copying and the remainder are involved in accessory functions including error correcting.      In E. coli DNA repair processes are managed by approximately 100 different genes.    Just how does one "evolve" an error correcting process distributed across 100 genes on the basis of chance?

As a general rule about one third of the amino acids in a protein are directly involved in providing structural and chemical function and are therefore invariant, while the remaining amino acids are drawn from a pool of about three or four types.    Given that the simplest bacterium requires some 2,000 enzymes (protein catalysts), and assuming an average enzyme chain of 300 links, we have

    2000! x [ 1/( 20^100 x 4^200 ) ]^2000  =>  a probability of about 1 part in 10^500,000         [r18]
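
The displayed expression can be evaluated directly in log space - a minimal sketch using math.lgamma for the factorial term, which lands near 1 part in 10^495,000, i.e. the order of magnitude quoted above:

    from math import lgamma, log, log10

    n_enzymes = 2000
    log10_factorial = lgamma(n_enzymes + 1) / log(10)        # log10(2000!) ~ 5,736

    # Per enzyme: 100 invariant positions from 20 amino acids, 200 further
    # positions drawn from a pool of about 4 acceptable types.
    log10_per_enzyme = -(100 * log10(20) + 200 * log10(4))   # ~ -250.5

    log10_probability = log10_factorial + n_enzymes * log10_per_enzyme
    print(f"About 1 part in 10^{-log10_probability:,.0f}")   # ~1 part in 10^495,000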

The evolutionary view is that this calculation would be more realistic if performed using the intracellular parasite Mycoplasma genitalium as a model.  The disadvantage of using Mycoplasma genitalium is that it depends on the existence of the cellular machinery whose probability we are trying to estimate.    Another problem is that an analysis of the genetic characteristics of the mycoplasma class suggests it arose through a loss of genetic material.  [r11]

Either way the probability for the random generation of the enzymes for this parasitic organism works out at about 1 part in 10^6,393.   In a purely reductionist universe, one Shakespearean sonnet would be more likely by a factor of 10^5,703.

There is also the matter of the coordination of enzyme timing.

Consider:

  • When catalyzed by the enzyme uroporphyrinogen decarboxylase, the biosynthesis of chlorophyll and haemoglobin is accelerated by a factor equivalent to the ratio between the diameter of a bacterial cell and the distance from the earth to the sun.  The reaction half-life of the chemical process is reduced from 2.3 billion years - half the age of the earth - to milliseconds.  [r83]

  • Uridine 5'-phosphate is an essential precursor of RNA and DNA. In neutral solution, orotidine 5'-monophosphate undergoes spontaneous decarboxylation to uridine 5'-phosphate with a half-time of 78 million years. At the orotidine 5'-phosphate decarboxylase active site, the same reaction proceeds with a half-time of 18 milliseconds. [r84]

  • The half-time for attack by water on alkyl phosphate dianions is 1.1 trillion years at 25°C, the phosphatase enzyme speeds this reaction up by 21 orders of magnitude to complete within 10ms. Every aspect of cell signaling follows the action of the type of phosphatase enzyme that breaks down phosphate monoesters. They also help mobilize carbohydrates from animal starch and play a role in the transmission of hormonal signals.
     
    One trillion years is estimated to be around one hundred times the age of the universe.
     [r85]

  • Other examples of the half times of biological reactions proceeding in water at 25°C without the presence of catalysts include the following from [r85]:
     
    1.1x10^6 years for alpha-O-glycoside hydrolysis
    1.4x10^5 years for phosphodiester anion hydrolysis (C/O)
    9.8x10^4 years for mandelate racemization
    6,000 years for amino acid racemization
    450 years for peptide hydrolysis
    73 years for cytidine deamination
    2 days for triosephosphate isomerization
    7 hours for chorismate mutation
    23 seconds for peptide cis-trans isomerization
    5 seconds for CO2 hydration.

Envisaging how enzymes could evolve when they would initially have been dependent on non-catalyzed processes taking 1 trillion, 2.3 billion and 78 million years to complete - and how the catalyzed reaction times then came to line up with an accuracy of tens of milliseconds on the basis of random chemical determination - appears to be one of the more interesting mathematical challenges in this paper.

If one allows for the approximately 50,000 catalysts which need to line up for the process to work, this is analogous to fitting a key with 50,000 bittings into a lock, all of which need to line up with sufficient precision for the lock to open, or in this case for the cell to function and produce more "keys".

If, for the sake of discussion, each bitting could take no more than 10 distinct values - clearly an underestimate given the reaction-time ranges above - we would need to explore up to 10^50,000 different keys, which once again is beyond the computational capacity of the universe.

Even following the standard evolutionary explanation and suggesting the key is produced gradually we are faced with the essential difficulty of a single evolutionary iteration for one enzyme falling exponentially beyond the maximum timeframe available for the entire evolutionary discourse.
  
At this point the primeval cell is still a long way off.   Other requirements for a cellular release candidate include:

  • Creating a base template from which we can derive copies of the required primeval proteins at any stage in the future

  • A mechanism to read the base template - to identify where the instructions for a specific protein begin and end.

  • A mechanism for duplicating just the protein specific portions of the base template for manufacturing purposes as required.

  • Proof reading and error correcting of the temporary template

  • Transporting the proofed template to the production zone.

  • A molecule capable of reading the temporary template and assembling amino acids in accordance with the template instructions.

  • Identification and transportation services to move proteins to appropriate locations.

  • A process to assist with the folding of the new protein into its particular three-dimensional configuration.

  • Regulatory systems to ensure we never have too little or too much of a specific protein.

  • Energy acquisition / management - each step requires the right levels of energy to be available at just the right time in order to function.

  • Housekeeping - a mechanism to disassemble and recycle protein components past their use-by date.

  • The entire "factory" must be capable of replicating itself to facilitate "natural selection".

 

As noted in the prologue Kurt Gödel once observed that "The formation within geological time of a human body by the laws of physics (or any other laws of similar nature), starting from a random distribution of elementary particles and the field, is as unlikely as the separation by chance of the atmosphere into its components."    With customary precision Gödel elaborated that unless the complexity of living bodies was innate to the material they were derived from, or present in the laws governing their formation, the laws of physics were fundamentally too simple to account for biological complexity within available geological time.  [r12]

Biologists pay tribute to Gödel the mathematician, but believe it unnecessary to pay much attention to Gödel's comments on evolution.  After all, they point out, Gödel was not trained as a biologist.

Whether Kurt Gödel ever reciprocated this insight is not recorded.



Loading the scales

An early "anthropic coincidence" was the observation that humans find themselves located more or less in the centre of the measurable universe.   More precisely, the measurable scale of things ranges from about 10-26 at the bottom end of the subatomic scale to 1027 metres across for the estimated width of the universe.   Human beings are measured in metres at the 10position.    So by a curious coincidence - in terms of scale - the universe extends as far down beneath us as it extends above us.

To gain an appreciation for the complexity represented by the human body imagine shrinking ourselves down to the smallest end of the measurable spectrum and looking back at the human body.    Focusing on the orders of magnitude, we are made up of approximately 100,000,000,000,000 cells of about 100,000,000,000,000 atoms each.

Thus our molecular makeup comes in at 10,000,000,000,000,000,000,000,000,000 - a total which exceeds the estimated number of stars in the universe roughly 100,000 times.  [r13]

Given that the human body is derived from a single cell, it can be reasonably argued that our cells are the most informationally rich objects in the universe. Just how complex is a cell in reality?   Follow the suggestion of molecular biologist Dr Michael Denton [r15] and imagine a model of a cell scaled up to the point at which each atom is the size of a tennis ball.  Such a model would be 20km in diameter, completely eclipse most city centers, and reach well above the cruising altitude of modern jetliners.     A construction worker welding in a representative atom per minute would take fifty million years to complete a single cell - and 50,000 million million million years to complete one human.

If we expanded all the cells in the human body to the same representative size, we would be able to cover the surface of the earth - both land and oceans - over 50,000,000,000 times.   Try building - and running - a factory of that size using goalless stochastic methods.

As atoms are clustered into amino acids and nucleotides within a cell, construction time at this level of representation reduces to under five million years, or by using mass production methods for the parts of the cell that are themselves products of the cell's mass production systems, less than one million years.

Yet a typical cell replicates itself within half an hour - 17.5 billion times faster.     Computer buffs would be interested to learn that it has been estimated that in the first year of life the number of chromosomal copy operations amounts to the transfer of over 40 Terabytes of information per second.   Come across any backup systems that match that?
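
The speed comparison follows from the one-million-year mass-production figure in the preceding paragraph - a minimal sketch:

    construction_years = 1_000_000                  # scaled-up, mass-production estimate above
    construction_hours = construction_years * 365.25 * 24

    cell_replication_hours = 0.5                    # a typical cell divides in about 30 minutes
    print(f"{construction_hours / cell_replication_hours:.3g}")   # ~1.75e10, i.e. ~17.5 billion times faster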

A space measuring just tens of micrometers across contains sufficient information to control the placement of 6,000,000 skin cells per cm², maintain a red blood cell density of approximately 5,000,000 cells per mm³, develop a brain with 100,000,000,000 cells - deployed at approximately a quarter of a million per minute - establish an estimated 10,000,000,000,000,000 neural interconnections with the aid of 500,000 to 1,000,000km of neural wiring - and then for good measure wire the brain to the rest of the body through an additional 380,000km of nerve fibres - the latter matching the distance to the moon.

From information held at a density of 1.88x10^23 bits/cm³ we derive a retina containing 400,000 optical sensors per mm² with a total count of 110 million rods and 6 million cones - rendering the equivalent of an "ultra-high definition" picture via a 100 to 1 real-time lossless compressed signal sent to the brain through a 2mm thick nerve bundle containing one million nerve fibres - and from which the brain resolves real-time stereoscopic depth information to complete the picture.

From one cell comes the totality of our sense of hearing - developed with the assistance of 12,000 sensory cells arranged on a 32mm long lamina in four parallel rows with a width of 0.05mm and a geometrical distribution similar to that of a piano - tuned to 20kHz at one end and down to 30Hz at the other.     The hearing apparatus handles a range of 12 orders of magnitude while being able to discern the lateral displacement of a source to within 3° - on the basis of a time difference of 0.00003 seconds.
    
The inner ear also contributes to the sense of balance through three semi-circular, fluid filled canals at approximately right angles to each other, containing fine hairs topped with a small node of calcium carbonate to enhance inertial sensitivity.  The deflection of these hairs across three spatial dimensions is translated into electrical signals measuring rotational and translational movement.    The balance mechanism of the inner ear is connected directly to the eyes through one of the fastest reflexive circuits in the body - enabling our eyes to track a fixed point while our heads turn.   [r27]

Sounds complex?   Here's a key question: how much more complex do you think the human genome is when compared to - say - a computer product like Microsoft Office or Ubuntu Linux?   Hundreds of thousands of times?   Millions?  
 

The human genome consists of approximately 3 billion DNA base pairs (150 billion atoms).    Every three nucleotides in the string of 3 billion codes for one of twenty amino acids.   Assuming equal frequencies, this equates to 4.32 bits per amino acid, which when apportioned back to the nucleotide level, means each base pair encodes for 1.44 bits of information.   The human genome thus contains 515MB (megabytes) of encoded information.
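
The 515MB figure follows directly from the assumptions above - a minimal sketch:

    from math import log2

    base_pairs = 3_000_000_000
    bits_per_amino_acid = log2(20)                 # ~4.32 bits, assuming equal frequencies
    bits_per_base_pair  = bits_per_amino_acid / 3  # three bases per codon -> ~1.44 bits

    total_bits = base_pairs * bits_per_base_pair
    total_mb = total_bits / 8 / 2**20
    print(f"Whole genome:   {total_mb:.0f} MB")    # ~515 MB

    # Protein-coding portion, per the 1.1% - 1.5% figure quoted in the next paragraph.
    print(f"Coding portion: {total_mb * 0.011:.1f} - {total_mb * 0.015:.1f} MB")   # ~5.7 - 7.7 MB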

Molecular biologists assure us that only 1.1% to 1.5% of the genome actually codes for function [r19], so assuming the central dogma of molecular biology to be correct - the human phenotype is defined by 5 to 7MB of information - 200 times smaller than a typical Microsoft Office installation, and leaving us only 60% ahead in the race against the fruit fly!

Even allowing for the fact that the component parts of our 22,287 genes [r20] can be recombined in a multitude of different ways, the crunch question is whether a total of 5-7MB of information is sufficient to run a "universe sized" chunk of molecular machinery.

Perhaps it comes as no surprise to hear that Dr Craig Venter, whose company Celera Genomics produced the first complete sequence of the human genome, commented in 2001:  "We simply do not have enough genes for this idea of biological determinism to be right. The wonderful diversity of the human species is not hard-wired in our genetic code.  Our environments are critical."   [r21]

Similarly the late biologist Stephen Jay Gould wrote in the New York Times in 2001: "The collapse of the one gene for one protein, and one direction for causal flow from basic codes to elaborate totality, marks the failure of [genetic] reductionism for the complex system we call cell biology."     [r23]

Fifty years of faith and billions of dollars in research grants in deference to the single gene / single phenotype thesis have failed to demonstrate appropriate dividends in medical and behavioural genetics: the tale of the "selfish gene" is catalogued under fictional history.

As a replacement, Richard Strohman proposes that "In order to assemble a meaningful story, a living cell uses a second informational system. .... Let's say you have 100 genes related to a heart disease or cancer. These genes code for at least 100 proteins, some of which are enzymes, so you have a dynamic-epigenetic network, consisting of 100-plus proteins, their many biochemical reactions and reaction products. It is 'dynamic' because it regulates changes in products over time, and it is 'epigenetic' because it is above genetics in level of organization. And some of these changed products feed back to DNA to regulate gene expression. The key concept here is that these dynamic-epigenetic networks have a life of their own-they have network rules-not specified by DNA. And we do not fully understand these rules.

"
In short, genetics alone does not tell us who we are, or who we can be. While, as Gould says, the reductionist theory of genetics has collapsed, the dynamic-epigenetic point of view retains genetics as part of a new paradigm for life, one that has striking implications for the future of the life sciences."   [r22]

The new vista is visible on the educational front:   "Genomes are the handmaidens, not the blueprints ... not 'selfish genes' but self organization, self-assembly, and emergent traits that are adaptive, purposive, taking advantage of genes to instantiate themselves.   Organisms create themselves, bottom up.   There's no blueprint for a human in the human genome.    Rather, regulated expression of genes in human embryos generates proteins that interact to bring about the adjacent possible - the next trait - which in turn makes possible the next trait."    [r24]

At least this generous explanation hints at an extraordinary degree of foreknowledge about the environment, and a complexity across all genomes capable of actively facilitating adaptation when catalyzed by external pressure.    

Unfortunately, the key question - how the original informational content was generated that facilitates such highly sophisticated recombinational techniques, under the control of genes that preceded the particular selection pressures - is not touched on.   The explanation is once again implicitly attributed to evolution and pushed back in time under the assumption that evolution is already known to be true.



Information-gain and correlated adaptation

Reinforcing the view that genes appear incidental to size and complexity is the following table -  which shows roundworms, mice and humans all within hailing distance of each other.

Organism | Gene count | Comment
Fruit fly | 13,767 | 38,016 variant protein molecules from one gene
Roundworm - Caenorhabditis | 20,000 | Consists of 959 cells [r33]
Weed - Arabidopsis thaliana | 25,498 | Smallest genome of a flowering plant
Rice | 57,535 |
Banana | (unknown) | 50% human counterparts
Dog | 19,300 | 85% human counterparts
Mouse | 21,587 | 300 genes difference
Chimpanzee (estimated) | 22,000 | 96% human counterparts
Human | 22,287 | 100 trillion cells
 
The differences between humans and chimps have been found to be four times more likely to be accounted for by regulatory rather than protein-coding genes [r25], [r26].   Apart from the question of why up to 200 regulatory genes would (on the evolutionary scale of things) undergo near-simultaneous changes, a wider question is how one genotype is able to pre-specify a range of different phenotypes before evolutionary pressure is brought to bear.

In the literature, we are told "Humans and chimps diverged from a common ancestor only about 5 million years ago, too little time for much genetic differentiation to evolve between the two species"  [r28].   This makes it reasonable to ask when language, musical and mathematical skills evolved - given that from an information-theoretic standpoint the genetic code shows no significant differentiation between chimps and humans.

The official response is that these capabilities were already integral to the common genome, and it was just a matter of "switching on" the appropriate combination of genes at the right time to express the relevant diversity.   By implication, chimpanzees (and mice, who are only 300 genes away from humans) share a heritage that "arose" yet remained entirely unexpressed from before the time of Mus musculus.   Was Douglas Adams right about mice after all?

So how long does it take a human brain to evolve?   If it took the Nautilus over 140 million years to add a handful of flotation chambers to give rise to the Ammonites, can we account for the most complex object in the universe arising in the space of only five million years?  What is the maximum rate of information gain that can be attributed to evolution?   This will be considered in some detail in the next section.

Curiously, other parts of the Nautilus failed to get any grip on macro-evolution, even after 500 million years.   Take its pin-hole camera eye - routinely touted in evolutionary literature as the archetypical candidate of eye evolution.

Richard Dawkins puts it well:  "The [Nautilus] eye is basically the same shape as ours, but there is no lens and the pupil is just a hole that lets in seawater into the hollow interior of the eye.  Actually, Nautilus is a bit of a puzzle...Why, in all the hundreds of millions of years since its ancestors first evolved a pinhole eye, did it never discover the principle of the lens?... The system is crying out for a particular simple change...Is it that the necessary mutations cannot arise...I don't want to believe it."    [r29]

The challenge, however, is a wider one.    At a top level symposium convened to examine the problems many mathematical practitioners had run into attempting to come up with a viable neo-Darwinian mathematical framework, Dr Stanislaw Ulam (a co-inventor of both Monte Carlo statistical techniques and the hydrogen bomb) highlighted specific limitations with respect to eye evolution.    The chairman of the symposium countered: "This is a curious inversion of what would normally be a scientific process of reasoning. It is, indeed, a fact that the eye has evolved. ... The fact that it has done so shows that [your] formulation is, I think, a mistaken one." [r55]

Either way a cogent proof of the principle of evolution would outclass the theory of relativity in terms of its importance as a contribution to the field of science.    




Mathematical models of evolution - an information-theoretic approach

A paper published by Robert Worden in the Journal of Theoretical Biology in 1995 [r30] was specifically designed to explore the contentious issues around the rate of evolution.   Darwin was a "gradualist" who saw evolution as a slow continuous process - the mainstream view.   Eldredge and Gould challenged this orthodoxy in 1972 by pointing out that in view of the widely observable discontinuities in the fossil record a more precise interpretation would be for evolution to proceed in short bursts covering a few million years followed by long periods of stasis.   [r31]

Worden's paper attempted to provide a resolution for these divergent streams of thinking by phrasing the core issue as one of information gain within a mathematical model.   So just how fast can evolution go, given that:

  • The rate of evolution of any trait depends on the strength of selection on that trait, where a weak selection pressure results in negligible penetration of a population by a change in the phenotype.

  • A species can only sustain a limited selection pressure for the obvious reason that when it is too high, the species simply dies out

Worden calculated a speed limit in terms of a measure called the Genetic Information in the Phenotype (GIP), a property of the population measured in bits.    So the more precisely defined a trait was in a population, the higher the GIP.   Halving the width of the spread of a trait increased the GIP by one bit.     GIP would thus be indirectly related to the information content of the genome, but always less than it.

Worden found the calculated speed limit to be inconsistent with some versions of Eldredge and Gould's theory (such as pure species selection) but compatible with others.

Under the section "Human intelligence and learning" Worden noted that Chomsky had argued that the capacity for language arose de novo in mankind, and in this was supported by Pinker.   Pinker drew attention to the fact that although the human and chimp genotype differed by only about 1%, the difference of about 10MB of information was in his view sufficient to specify a complete language engine.

Worden estimated a total of 350,000 generations since the assumed split 5-7m years ago.   Working on an average of three children per couple, the calculated practical limit on total GIP growth for the human phenotype worked out at no more than 100,000 bits in total - far less than the 10MB quoted by Pinker.

The question Worden considered next was how much of the total GIP contributed to the evolution of intelligence, working from the basis that in a typical hominid group the difference in fitness between the least and most intelligent - in terms of survival to successful reproduction - could be assumed to be no larger than 10%.    Worden calculated that a 10% variation in survival probability contributed a GIP of at most 1/8 bit per generation. So the useful genetic information in the human brain, beyond that in the chimp brain, is limited to 40,000 bits, a shade short of 5KB.
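
Worden's brain-specific ceiling is easy to reproduce from the figures as summarised here (350,000 generations at no more than 1/8 bit of GIP per generation); the result lands in the same ballpark as the 40,000 bits quoted above:

    generations = 350_000
    bits_per_generation = 1 / 8

    total_bits = generations * bits_per_generation       # ~44,000 bits
    print(f"{total_bits:,.0f} bits (~{total_bits / 8 / 1024:.1f} KB)")   # on the order of 5 KB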

Worden compared the 5KB extra design information in the human brain with a previous estimate of 100KB total GIP in the mammalian brain and concluded that the design information which distinguishes our brains from those of chimps is around 5% of the total.

Reviewing these results Worden noted that 5KB is equivalent to a computer program of around 300 lines:  "Experience suggests that the functionality one can express in 300 lines of program code is very limited; certainly not enough to design a complete facility for language learning and use, unless the underlying computing engine is already very well-adapted for the language task.

"We are led to the conclusion that the main difference between the human brain and other primate brains is one of capacity and power rather than of design complexity; and that our language ability is largely built on some pre-existing mental capacity which other primates have, although they do not use it for language."

Using an information gain of 20,000 bits per million years for a single species as a crude baseline estimate and assuming unchanging reproduction and selection pressures - the total amount of information gain with respect to a specific species since the Cambrian period under the standard model of evolution would be:

Estimated rate of information gain according to the standard evolutionary paradigm

Geologic era | Period        | Time, my (period start) | Characteristics                             | Total GIP gain since the Cambrian, MB
Cenozoic     | Quaternary    | 1.8                     | Humans                                      | 1.30
             | Tertiary      | 65                      | Primates                                    | 1.29
Mesozoic     | Cretaceous    | 146                     | Flowering plants, placental mammals         | 1.14
             | Jurassic      | 208                     | Dinosaurs, mammals, birds                   | 0.95
             | Triassic      | 245                     | Re-colonization                             | 0.80
Paleozoic    | Permian       | 286                     | Mass extinctions                            | 0.71
             | Pennsylvanian | 325                     | Reptiles                                    | 0.62
             | Mississippian | 360                     | Sharks                                      | 0.52
             | Devonian      | 410                     | Seedless plants, land-dwelling vertebrates  | 0.44
             | Silurian      | 440                     | Jawed fish                                  | 0.32
             | Ordovician    | 505                     | Jawless fish                                | 0.25
             | Cambrian      | 544                     | Marine invertebrates                        | 0.09

Thus the evolutionarily derived GIP available in our era is five times smaller than the theoretical information content represented by the protein-encoding parts of the human genome - or 400 times smaller than the information content of the complete genome.     This factor is a further order of magnitude too small if information gain is considered only from the Cretaceous.    Another anomaly is highlighted by the fact that approximately 7,000 genes of the (959-celled) nematode C. elegans are shared by humans [r32] - given that the origin of C. elegans dates back in excess of 600m years, whereas human evolution has taken place over the last 5m years. [r34]

If the difference between a preference for living in the soil, the trees or a castle is derivable by adjusting a set of pre-existing switches in the genotype, then the genotype either displays a considerable amount of really old fashioned luck or a modicum of forethought.

In the absence of any known rigorous selection process in our past how is a genotype generously shared between mice, chimps, humans and even dogs able to provide the capacity for the human brain in a time-frame that in evolutionary terms is the click of a mouse button?  

The rate at which proteins are expressed by genes doesn't help either as the "variation in gene expression between individuals within the species is substantial, relative to the differences between humans and chimpanzees.  For example, one human brain sample differs more from other human samples than the latter differ from the chimpanzee samples."   [r51]

The late Marcel-Paul Schützenberger commented: "Gradualists and saltationists alike are completely incapable of giving a convincing explanation of the quasi-simultaneous emergence of a number of biological systems that distinguish human beings from the higher primates: bipedalism, with the concomitant modification of the pelvis, and, without a doubt, the cerebellum, a much more dexterous hand, with fingerprints conferring an especially fine tactile sense; the modifications of the pharynx which permits phonation; the modification of the central nervous system, notably at the level of the temporal lobes, permitting the specific recognition of speech.

"From the point of view of embryogenesis, these anatomical systems are completely different from one another. Each modification constitutes a gift, a bequest from a primate family to its descendants. It is astonishing that these gifts should have developed simultaneously. Some biologists speak of a predisposition of the genome. Can anyone actually recover the predisposition, supposing that it actually existed? Was it present in the first of the fish? The reality is that we are confronted with total conceptual bankruptcy."  [
r56]

Thus both gene mutation and combinatorics offer little theoretical insight into the problem of the information gain necessary to support correlated adaptation.    



Mathematical models of evolution - a probabilistic perspective

Somewhere back in the hypothetical evolutionary tree of life, individual nucleotides have to be subjected to mutations and natural selection in order for information to accumulate.   We cannot depend on "gene recombination" at every level in the geological column as an explanation for the problem of information gain.

A probabilistic model that uses school mathematics to review the outcome of mutational changes subject to natural selection - with parameters published in mainstream biological literature - is provided by Dr Lee Spetner in his excellent book, "Not by Chance!", which both highlights and quantifies numerous issues with the current evolutionary paradigm. [r35]    This overview also draws on a review of Spetner's work by Ashby Camp.  

The statistical event Spetner is interested in is the emergence of a new species. 
The four "input parameters" are:

Mutation rate 

The observed mutation rate in non-bacterial organisms ranges between 0.01 and 1 per billion for a specific nucleotide in a specific replication.   Spetner uses the geometric mean of 1 in 10^10 per birth.

Steps in a species transition 

G. Stebbins, one of the architects of neo-Darwinian theory, estimated that a species transition would take approximately 500 small, distinct steps. 

Births per evolutionary step 

George Gaylord Simpson, a dean of evolutionists and an authority on the evolution of the horse, estimated that horse evolution took about 65 million years and involved 1.5 trillion births.  The horse is said to have evolved through 10 to 15 genera.  If the horse went through about five species per genus, this gives 60 species and a million years per species - matching other evolutionary estimates.  This works out at 25 billion births per species, or 50 million births per evolutionary step.

Selective value of a specific positive mutation 

Simpson's estimate of a "frequent value" here is 0.1%.  What this means is that the mutant's chance of survival is 0.1% higher than that of the rest of the population.

Spetner makes the simplifying assumption that each evolutionary step is the result of the mutation of a single nucleotide.      Given that the chance of a specific nucleotide mutating in one birth is 1 in 10^10, the chance of this happening over 50 million births is 1 in 200.   Assuming an approximately equal chance that the nucleotide will change to any one of the other three bases (from the set of adenine, guanine, thymine and cytosine), the odds of a specific change to a specific nucleotide are 1 in 600.

Sir Ronald Fisher, a world expert on the mathematics of evolution, showed that the odds against the survival of a single mutation with a survival benefit 0.1% greater than the rest of the population are 500 to 1 - because the majority of mutants are eradicated by random effects.  In other words, only 1 in 500 mutants with a positive benefit of 0.1% will end up taking over the entire population.

The chance that a specific change to a specific nucleotide will occur during a step is thus 1/600, and the odds that it will also take over the population are 1/500.   The total odds are thus 1/600 * 1/500, or 1/300,000.     This needs to happen 500 times in a row (the number of steps required to arrive at a new species), so we need to multiply 1/300,000 by itself 500 times.  The odds against this happening are approximately 3.6 x 10^2738 to 1, or viewed the other way round, the chance of this happening is 2.7 x 10^-2739.

Of course, one cannot simply assume that only one mutation is available at every step.  How many positive mutations are available?   Nobody knows the answer to this.   So Spetner turns the question around: for evolution to have a reasonable chance of working, how many positive mutations must be available at each step for the model to deliver a new species?   

What constitutes a "reasonable chance"?   A chance of one in a thousand could reflect the observation that for every species alive today approximately 1,000 have gone extinct.  However, as some species go for a very long time without changing - the well recorded phenomenon of stasis - Spetner chooses a chance of 1 in 1,000,000.

The chance of a single step succeeding must be large - because we need to multiply it by itself 500 times (for the 500 steps) and still come out at no less than 1/1,000,000 (the chance of 1 in a million).    The smallest per-step probability that will do this is 0.9727, since

1 - (1 - 1/300,000)^1,080,000 ≈ 0.9727

So if the odds that a specific nucleotide will mutate and take over a population are to be 0.9727 for each step, there must be 1,080,000 potential positive adaptive copying errors for each of the 500 steps to arrive at a 1 in a 1,000,000 chance for the development of a new species.
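The whole chain of figures can be checked with elementary arithmetic - a minimal sketch using the parameter values quoted above (50 million births per step, a mutation rate of 10^-10, Fisher's 1-in-500 fixation odds, 500 steps and 1,080,000 candidate mutations per step):

import math

# Reproduce Spetner's chain of figures as quoted above.
births_per_step = 50_000_000           # from Simpson's horse estimate
mutation_rate = 1e-10                  # per nucleotide per birth
p_specific_change = births_per_step * mutation_rate / 3   # 1/600: one of three possible bases
p_fixation = 1 / 500                   # Fisher: chance a 0.1% advantage takes over the population
p_step = p_specific_change * p_fixation                   # 1/300,000

steps = 500
print(f"Chance per step: 1 in {1 / p_step:,.0f}")
print(f"Chance of 500 consecutive steps: about 10^{steps * math.log10(p_step):.0f}")  # ~10^-2739

# Turning the question around: with 1,080,000 candidate positive mutations per
# step, what is the chance of completing all 500 steps?
candidates = 1_080_000
p_step_any = 1 - (1 - p_step) ** candidates
print(f"Per-step success probability: {p_step_any:.4f}")          # ~0.9727
print(f"Chance of a new species: {p_step_any ** steps:.1e}")      # ~1 in a million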

The high number of adaptive paths has important consequences for the hypothesis of convergent (parallel) evolution.   Palaeontologists have concluded that the mammalian brain evolved in parallel in the different orders and families of mammals, and that the eye evolved separately at least 40 to 60 times (e.g. Dawkins, "Climbing Mount Improbable", 1996, p. 127).

But if there need to be over a million potential adaptive pathways at each step, this means it is impossible for precisely the same trait to evolve independently in two different species as the amount of selective freedom at each step would be too great.

Allowing for redundancy, even if only 100 of the 500 choices needed to be the same, the odds against this happening would be 1 in 10^600 for convergence in a single species.    For the convergence of complex organs such as wings, kidneys or eyes the probability would be smaller still, because one would need to allow for many species and thousands of steps.

Evolutionists believe this is still possible on the basis that many different changes to the underlying genotype could result in the same phenotype.    But given that genotype determines phenotype, freedom in the genotype still translates into some level of freedom in the phenotype.   Even if one million genotype choices at each of the 500 steps of transition to a new species equate to only 10,000 phenotype choices, the total number of pathways over the 500 steps is still 10^2,000.
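The two order-of-magnitude figures above follow directly - a quick check, assuming (as in the text) roughly one million genotype choices and 10,000 phenotype choices per step:

import math

# Convergence figures quoted above, as orders of magnitude.
choices_per_step = 1_000_000           # ~a million candidate adaptive pathways per step
matching_steps = 100
print(f"Odds against 100 matching steps: "
      f"about 10^{matching_steps * math.log10(choices_per_step):.0f} to 1")    # 10^600

phenotype_choices = 10_000
steps = 500
print(f"Phenotype pathways over 500 steps: "
      f"10^{steps * math.log10(phenotype_choices):.0f}")                       # 10^2,000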

In 1994 the gene that controls eye development in insects and vertebrates was found to be 94% identical, leading the discoverers to suggest - without having to resort to any calculation - that "the traditional view that the vertebrate eye and the compound eye of insects evolved independently has to be reconsidered."   

This is precisely what Dr Spetner's mathematical analysis suggests.

This "eye gene" thesis has been subsequently confirmed by experiments done by 
Professor Walter Gehring and others in which Drosophila, mouse and squid genes were successfully spliced into Drosophila, resulting in the targeted expression of ectopic compound eye structures on the wings, legs and antennae, and successfully demonstrating the non-random nature of the control genes across three evolutionary branches once considered independent.   [r39], [r40], [r41]

In a 1998 blog, Stephen Jones [r38] writes "That this is a real difficulty for the Darwinists is evident from [Richard] Dawkins reaction to it.  He admits that his "message...that eyes evolve easily and fast...might seem challenged by an intriguing set of experimental results, recently reported by a group of workers in Switzerland associated with Professor Walter Gehring."  (Dawkins R., "Climbing Mount Improbable", 1996, pp174-175).   Dawkins is surprised by it: "Amazingly, the treated adult flies grew up with fully formed compound eyes on their wings, legs, antennae and elsewhere" and "They even work....." (pp175-176), describing it twice as "remarkable" (p176) and "almost too startling" (p176). Rather limply he says he doesn't think that he was "wrong to think that eyes have developed forty times independently" and that "At least the spirit of the statement that eyes evolve easily and at the drop of a hat remains unscathed." (p176).

David Berlinski, with customary verve, noted "The ... group's more recent paper, "Induction of Ectopic Eyes by Targeted Expression of the Eyeless Gene in Drosophila" (Science 267, 1988) is among the most remarkable in the history of biology, demonstrating as it does that the ey gene is related closely to the equivalent eye gene in Sea squirts (Ascidians), Cephalopods, and Nemerteans. This strongly suggests (the inference is almost irresistible) that ey function is universal (universal!) among multicellular organisms, the basic design of the eye having been their common property for over a half-billion years. The ey gene clearly is a master control mechanism, one capable of giving general instructions to very different organisms.   No one in possession of these facts can imagine that they support the Darwinian theory.

"How could the mechanism of random variation and natural selection have produced an instrument capable of anticipating the course of morphological development and controlling its expression in widely different organisms?" [r52 

From a purely mathematical analysis the answer is unambiguous.





A microbiological perspective of Darwin's "cold shudder"

In an 1860 letter to the American botanist Asa Gray, Darwin wrote: "The eye to this day gives me a cold shudder, but when I think of the fine known gradation my reason tells me I ought to conquer the odd shudder."   [r43]   To Darwin, the eye's ability to focus for different distances, admit varying amounts of light, and correct for both spherical and chromatic aberration made the concept of formation by natural selection appear "absurd in the highest degree".   However, in convincing himself that all that was required was for nature to undergo a series of small, easily imagined morphological changes, he saw it as correspondingly simple for nature to follow suit - in modern parlance, mutations and natural selection were all that was required.     And in accordance with Darwin's original view, biologists assure us the eye evolved not just once but independently between 40 and 60 times - albeit with some specific experimental limits.

We are informed that all that is required for an organism to evolve an eye is a simple light sensitive spot.   Of course, light sensitive spots need to be integrated with an organism's nervous system.    Could a light sensitive spot develop as the result of a genetic mutation at the end of an existing nerve fibre?    But how would a nerve fibre get there in the first place - unless it is perhaps already dedicated to another "sensitive spot" - heat perhaps?   What changes are required to the brain to enable a sense of light rather than more heat - or maybe an unexpected taste?    How do we specify additional but dedicated nerve fibres, or additional brain power?

Everyone is aware that the optic nerve runs from the eye to the back of the brain.   This raises a question: if processing happens at the back of the brain (distributed over a number of areas), why does the picture we see appear to emanate from the retina?  

Yet the actual image on the retina is inverted - highlighting that what we think we are seeing doesn't even match the reality of the light that falls on the retina.    Clearly, the brain is providing the original "virtual reality" show.

The reality of course is that photons do not intrinsically "shine" at any wavelength - the assumption that the universe is bathed in a benevolent glow of light is a perception that exists only in our brains.   What we "see" is a genetically determined biochemical reaction to electromagnetic energy in the range of 400 to 700 nanometers - a window that coincides with the property that water is 10^7 times more transparent to radiation in this narrow band than it is to ultraviolet and infrared radiation, and that is matched by the sun's radiation output.     With the exception of infrared, the rest of the electromagnetic spectrum passes undetected.

Naturally it would be appropriate to demonstrate that the principle of Darwin's fine, gradual changes can be achieved at the microbiological level.   For a moment, let us step into a purely molecular, naturalistic universe, at a time corresponding to millions of years preceding the existence of vision, and task ourselves with creating vision for the very first time at the microbiological level.    Sight has never existed, the notion and memory of light a total unknown.   How to proceed?   

First, which part of the electromagnetic spectrum would we select?   But perhaps not too fast on this point - given that the concept of light is still to come into existence we would presumably have no notion that an electromagnetic spectrum exists.   We would have to infer it from molecular entanglements, the rise and fall of energy levels, the thermodynamic interplay of biochemical events.

Perhaps we reason there is a chemical that changes shape in response to the event we are groping towards.   Which chemical structure - out of several million - should we select?     How do we know we have found the right one?    Can our hypothetical primeval being synthesize it?  If not, where would we get it from - and how?   How would we select, transport, and store it?   Does it have an optimal operating temperature or concentration?   How will it interact with the chemical structures in the rest of our primeval being?   If the - so far - unknown event causes the chemical to change shape in a picosecond, how would we know when to expect such a change, be aware of or be able to measure the change with any accuracy?    In particular, would we be able to differentiate this change from changes due to other biochemical phenomena?  

Let's say that we positively identified a chemical structure that changes shape for an as yet unknown reason.   As a first step, how would we be able to infer the important concept of a directional component with respect to the changes happening around us - given that we have no notion of direction, distance or dimensionality outside our microbiological universe - and photons would be impacting us from all directions?    Also, there is a practical consideration - what if our entire stock of chemical were to change shape in response to this unknown factor?   How would we know it was used up?   Would we be able turn it back into the original shape?    Would we require other chemicals to assist with this process?   Which ones?   How many, what quantities and concentrations?    Again, could we synthesize them - and if so, what precursors would we require in order to perform the synthesis?   How to identify the precursors?   Transport them?

How would we know that photons - which we haven't seen so far - are in fact directionally orientated?   If we could figure that out, would we be able to infer the concept of focus on the basis of photons impacting our chemical substrate and the way shapes change at an atomic level?   Could we infer the rules of optics without any prior optical knowledge?    If so, how?  How would we differentiate graduated shading from an out of focus area?     What microbiological activities would allow us to infer the necessary procedure to create a lens?    How would we persuade our molecular cohorts to (gradually perhaps) make a hollow cavity in an area reserved for the brain so we could set up a light detecting chamber?    How big should the hollow be?   What shape?     Should it be done over multiple generations?   Could we ensure the shape persisted - or could we leave this step to chance?    If by suitable diligence we are able to create an "early" protein for a lens - one that allows photons to pass through it, how would we know that we had the optimal protein construction?    Practical thought - how would we clean the lens?   

Would we understand - at a molecular level - that proteins denature and that we would need to find a way to stop this particular protein from denaturing?   Do we do this by continually replacing the protein or should we construct another protein to prevent the first protein from denaturing but in such a way that light could still be transmitted through it?    What length of time would be appropriate to guard against denaturing?  How do we determine or measure an appropriate timeframe at a microbiological level?   If we create an eye, can we figure out how to make it move?    If so how far and in what directions?     How do we control the eye's speed in any particular direction?  How do we supply the eye with nutrients without affecting vision?

How could we differentiate between photons of different wavelengths?   How do we distinguish wavelength at a molecular level?   Would we have any idea of the relevance of wavelength if we had never seen it?   Are there subtle variations to the chemical structure that would make it possible to distinguish between photons of different wavelengths? 

Making a suitably large leap - assuming we have figured out a number of the items above - what do we do with a molecule that changes shape?  How do we pass knowledge about a change in shape to the brain?   How do we know where the brain is located, or what signals it is expecting?  Should we do this mechanically or electrically?   If electrically, how do we convert the change in shape to an electric charge?   Chances are that molecules with electric charges simply repel or attract one another.   If we are going to use electrical charges, how do we ensure an electrical charge is effectively used as a communication medium?   How do we make a charge move from one cell to another, and in the right direction?   Where is the right direction?   How do we organize for a hole to grow in the skull to accommodate the optic nerve?   Where should it be located?   Can we stop a moving charge from dissipating as it travels to the brain?   What is going to interpret the information?

Not least, how do we get one million nerve cells to grow back into the brain and ensure they link up to the appropriate target points - and validate these connections before using them?

How do we perform a 100 to 1 lossless real-time compression of visual data if we don't know what the signals represent?    How do we know that "10111001" coming out of a bundle of nerves means life whereas "10111011" signals an incoming predator?   Which muscles should contract in response if the signal is changing by one bit per millisecond from the right, and by how much?   How do we measure the duration of this millisecond?    Does this signal warrant a wider adrenaline response?    

How do we infer a projected, three-dimensional picture "out there" from varying charges emerging from millions of identical looking nerve endings?   How do we add a new part to the brain to handle the three-dimensional geometry that allows us to seamlessly stitch together two overlapping pictures in "real time"?      Do we develop this processing capability before or after we develop the eyes?   Do we need to start with one eye?   Can we resolve scenes we have never seen before?    How will we recognize, store and recall images even if they are presented substantially differently?   Will the visual field of our primeval being be able to compensate if we choose to invert our vision millions of years later?

Looking back, how would you rate your chances of providing the appropriate answer to all the above biochemical and mechanical questions without any prior knowledge or awareness of the existence of light?   If a sighted person struggles with the details, the chance of a molecular melange resolving these problems on the basis of a goalless stochastic process is going to be vanishingly smaller - no matter how many times the exercise is repeated.     With no preconception of light and millions of interdependent parts, we are presented with what is known as a tall order.

The main point of this exercise is to highlight, in non-mathematical terms, the sheer scale of the problem domain that "natural selection" really faces.    Details, like gears, need to be specifically engaged.

Incidentally, until May 2007 evolutionists routinely drew attention to "profound optical imperfections" in the vertebrate eye as evidence that it could not have been designed: the vertebrate eye is wired up backwards with the nerves in front of the rods and cones which "interfere with the images" (e.g. PBS-TV series, episode 1).    

A 2007 paper in the Proceedings of the National Academy of Sciences modified that view by showing that Müller cells that traverse the retina - once thought to provide only structural support and nourishment to retinal neurons - provide an optimized optical fibre bridge through the nerve bundle.    The parallel array in the retina suggests a fibreoptic plate capable of low-distortion image transfer and solves a long-standing problem in our understanding of the inverted retina.    [r67], [r68]     The limiting factor is the diffraction of light waves at the pupil - the eye is quite capable of detecting single photons and has a dynamic range of one million million to one.   [r57]

There is in fact an excellent reason why the nerves are in front - the retina is easily damaged by light-induced thermal changes and requires cooling.   So the space behind the eye is taken up by the choroid, which provides the blood supply for the retinal pigment epithelium and regeneration of photoreceptors, reduces internal reflections and manages the required heat dissipation.   The blood flow through the choroid is the highest per gram of tissue of all tissues in the body and is - unsurprisingly - regulated in part by the brain in response to changes in illumination.   Thus the retina of each eye has an integrated, optically regulated liquid-cooling system to ensure long term visual acuity.

By contrast, cephalopods - routinely quoted as being wired up the "right" way round - are exposed to a much lower light intensity so thermally-induced damage is less of an issue.    [r50]


The dimensions of microbiological probability

Most discussions on the veracity of the evolutionary model focus on the probability of a one dimensional target "phrase" of amino acids assembling by chance within a particular timeframe.     

What is almost always overlooked is that DNA has no biological function whatsoever unless the cell already exists.  DNA comes with no inherent energy source, no proteins to assist with translation or error correction, and nowhere to store its own products.  Without a cell, protein products would simply have been washed away into the primeval sea.     Yet the instructions for constructing the mechanism that decodes the DNA are held in the DNA itself.    DNA code cannot be translated unless the cell it is going to create already exists - it is decoded by the product of its own translation.

It is thus insufficient to look at the evolution of a "target phrase" as the evolution of a single gene or protein does not correspond in any meaningful way to the magnitude of complexity involved in the three-dimensional interaction of enzymes and proteins - even at the simplest bacterial level.

Here are some illustrations of the complexity in the microbiological dimensions:
  • In E. coli, DNA enzymatic repair mechanisms and cellular responses to DNA damage involve approximately 100 genes.   Without the delta sub-unit of DNA polymerase III, the number of errors that creeps in during replication increases by a factor of 10^5.   [r46]

  • Proteins targeted for a mitochondrion are given a temporary polypeptide "label" and encapsulated by chaperone proteins designed to prevent the premature folding of the newly manufactured protein - but in such a way that leaves the label free.   The label is identified by a receptor protein in the outer mitochondrial membrane and the protein is then guided through a channel (made of other proteins) straddling the inner and outer membranes.    The unfolded state is necessary for the newly manufactured protein to be able to fit through the channel.  Inside the mitochondrial matrix another set of chaperone proteins prevents premature folding until a third group of chaperone proteins takes over and catalyses correct folding.  The label is then removed.   [r6]
     

  • In the cell, the oxidation of glucose is mediated by about 30 different enzymes, each designed to fit a specific substrate at precisely the right step.  Energy is taken off these reactions in small doses and packaged into energetic molecules of adenosine triphosphate - ATP.   Thirty-eight packages are formed during the conversion of each glucose molecule; two are used as part of the conversion process, giving a net yield of thirty-six small energy packets for the cell.    Outside the body, glucose burns at a single high temperature; inside the body, the available energy is dissected and apportioned with meticulous precision.   [r45]

  • The fifth step in the eight-step heme production chain - integral to the production of hemoglobin - has a reaction half-life of 2.3 billion years unless acted on by the enzyme uroporphyrinogen decarboxylase.  This enzyme increases the reaction rate by a factor equivalent to the ratio between the diameter of a bacterial cell and the distance from the Earth to the sun - a ratio sketched below.  [r82]
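The comparison in the last bullet corresponds to a rate enhancement of roughly 10^17 - a quick check, assuming a bacterial cell diameter of about one micrometre and the mean Earth-sun distance of one astronomical unit:

# Ratio of the Earth-sun distance to the diameter of a bacterial cell.
AU_METRES = 1.496e11                   # mean Earth-sun distance
CELL_DIAMETER_METRES = 1e-6            # assumed typical bacterial cell diameter

print(f"Implied rate-enhancement factor: about {AU_METRES / CELL_DIAMETER_METRES:.1e}")   # ~1.5e17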

The assertion that random genetic errors were responsible for the development of the delta sub-unit of DNA polymerase III - without which random genetic errors would accumulate at a rate too great for genetic information to arise - is a perturbation of logic sufficient to arouse the Dormouse in Alice in Wonderland. Errors can only be identified within the context of a pre-specified informational syntax - if the syntax keeps evolving along with the errors, error recognition is rendered meaningless and ineffective.

A thirty-eight step glucose oxidation process that will only run if its own products are already present provides for a similar exercise in enlightenment - as does the requirement that proteins be assembled, labeled, transported, identified, unfolded and then unlabeled by processes involving a multitude of other proteins, all manufactured by similarly interlocking processes.

Perhaps the simplest way to capture some sense of the depth of informational complexity involved in the dynamics of the three-dimensional systems of interlocking molecules making up proteins, organisms, cells and species is to provide an illustrative snapshot of the number of ways a molecular configuration can occupy three-dimensional space.

Of course, there will be multiple working solutions; the point is that the number of solutions is small relative to the overall probability space. A simple illustrative analogy comes from human language.

Meaningful words and sentences are rare among the set of possible combinations of English letters, particularly as the sequence grows.  The ratio of meaningful 12-letter words to 12-letter sequences is 1/10^14, and the ratio of 100-letter sentences to 100-letter strings is 1/10^100.  Meaningful sentences are highly isolated from one another in the space of possible combinations, so that random substitutions degrade meaning.  While some closely clustered sentences may be accessible by random substitution, the overwhelming majority of meaningful sentences lie beyond the reach of random search.   [r47]

Natural selection is a goalless function that has to work in three dimensions.   By contrast one-dimensional target phrases provide us with as much insight into the realities of the problem domain as a cog in a watch can tell us about the nature of time.

The central questions are thus "what is the magnitude of the probability space relative to amino acid or cell count" and "how does the probability space grow as a function of the number of amino acids or cells".

Perhaps the best way is to start with the smallest imaginable cell - made up of, say, 26 amino acids arranged in a three-dimensional cluster.

Let us assume that because it is so small this cell will only work correctly when all its amino acids are correctly aligned with one another, enabling functional chemical bonding.   How many ways are there that one can orientate 26 amino acids in a three dimensional cluster?    

Engaging in highly sophisticated mathematics is not the point of the exercise - the point is simply to examine the properties of the three-dimensional solution space natural selection is required to work on at the microbiological level, and to see how the solution space varies as the size of the problem grows.

If we are prepared to limit the number of amino acid types to six, a back-of-the-envelope solution can be taken directly from the well studied - and easily grasped - domain of solutions to Rubik's cubes.    The number of surface "mini cubes" in a standard 3x3x3 cube is (unsurprisingly) 26, and the solution to the cube corresponds to their correct three-dimensional alignment.

In the table below we start with a solution to a 3x3x3 cube and move up to a 20x20x20 cube - corresponding perhaps to an elementary ribosome, the real version of which consists of 52 separate proteins and three RNA strands of 120, 1,542 and 2,904 nucleotides.
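The surface-cell counts used in the table follow from n^3 minus the (n-2)^3 hidden interior cubes, and the first two comparison figures can be computed exactly - a minimal sketch:

from math import factorial

# Surface "mini cube" counts for an n x n x n cluster.
for n in (3, 4, 5, 20):
    print(f"{n} x {n} x {n}: {n**3 - (n - 2)**3} surface cells")      # 26, 56, 98, 2,168

# Permutations of the standard 3x3x3 cube: corner/edge permutations and
# orientations, divided by 2 for the reachable-parity constraint.
positions = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(f"3x3x3 positions: {positions:,}")                              # 43,252,003,274,489,856,000

# The "perfect Bridge hand" comparison: odds (to 1) against all four players
# each being dealt a complete suit.
deals = factorial(52) // factorial(13)**4
print(f"Perfect deal: {deals // factorial(4) - 1:,} to 1 against")    # 2,235,197,406,895,366,368,301,559,999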

Cluster size | Surface cell count | Number of ways of orientating clusters of "amino acids" in a simplified "Rubik cell"
3 x 3 x 3 | 26 | 43,252,003,274,489,856,000  [r48]
The perfect Bridge hand | 52 cards | 2,235,197,406,895,366,368,301,559,999  ... for comparison purposes
4 x 4 x 4 | 56 | 7,401,196,841,564,901,869,874,093,974,498,574,336,000,000,000
5 x 5 x 5 | 98 | 282,870,942,277,741,856,536,180,333,107,150,328,293,127,731,985,672,134,721,536,000,000,000,000,000,000
20 x 20 x 20 | 2,168 | the 1,478-digit number below:

 13,366,106,203,729,717,328,004,388,722,210,709,492,786,665,116,787,
079,998,681,211,538,396,679,042,570,022,585,425,404,370,821,206,040,
182,545,545,327,522,208,772,410,297,140,477,757,425,772,619,410,961,
344,454,117,617,230,260,366,328,664,585,446,326,493,728,844,508,629,
569,445,908,755,642,341,454,003,586,455,099,667,570,898,763,628,333,
813,977,467,425,743,448,041,919,790,101,558,108,208,093,996,227,738,
435,574,933,039,360,725,661,756,288,491,984,803,766,778,529,753,538,
486,321,787,414,062,938,450,610,866,237,972,063,079,664,018,696,360,
961,878,447,208,507,659,723,847,392,806,454,333,733,426,081,417,056,
070,362,965,275,011,314,780,315,281,828,339,691,106,215,863,742,591,
182,739,466,913,446,535,031,731,005,413,713,456,679,104,585,248,947,
030,076,457,550,189,748,025,140,180,002,025,928,042,672,399,244,074,
882,778,447,836,205,867,933,905,631,221,514,711,756,063,996,080,463,
141,198,210,745,160,265,531,358,580,632,240,484,040,861,201,809,726,
714,468,061,099,553,945,202,311,832,719,936,014,590,289,920,108,112,
191,490,634,408,727,217,523,478,784,079,999,088,684,526,361,226,049,
455,825,956,216,543,757,985,870,617,646,047,936,641,338,217,984,520,
100,962,194,014,027,498,487,731,444,475,919,912,503,314,546,146,199,
530,972,857,141,097,807,100,303,128,683,197,139,495,113,964,913,082,
086,217,692,223,191,522,669,898,844,274,394,808,515,580,151,379,363,
739,632,240,534,074,543,045,038,280,245,234,230,722,927,672,902,959,
146,096,339,383,367,731,184,440,630,224,573,353,521,483,612,160,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

 
If one were to use the above as a model for cellular functioning in part of the brain or retina, with each "mini cube" representing a uniquely orientated, non-homogeneous entity with a specific local function, the probability of correctly arranging 2,168 such cells in a 20 x 20 x 20 cluster reduces to the smaller probability of one in 3.351 x 10^2,123.

To appreciate the magnitude of the 1,478-digit number shown in the above table - associated with the homogeneous clustering of only six amino acid types - consider that the number of atoms in a "sun sized" star is about 8.4 x 10^56.   To set up a statistical experiment that exactly matches the likelihood of solving a 20 x 20 x 20 Rubik cube, we would require a blind-folded person to pick out just one pre-selected atom from a total of 3.98 x 10^1,430 stars.     The problem of course is that the total number of atoms in the visible universe is estimated at around only 10^80.

Time is similarly limited - the best Hubble estimates suggest we are 13.7 billion years or 4.323 x 10^17 seconds old, with the universe expected to fizzle out (one way or another) in the next 15 billion or so years.    Even if we were to repeat the statistical experiment as fast as the laws of nature would allow us, we would be almost guaranteed to encounter the heat death of the universe first.

The scaling factor associated with the probability space goalless natural selection has to deal with to attain a specific molecular or cellular configuration can be gauged from the following:

 

Surface cell count | Probability - one in ...
100,000 | 1.4102 x 10^65,064
500,000 | 7.9014 x 10^325,077
1,000,000 | 1.7172 x 10^647,209
5,000,000 | 2.8618 x 10^3,236,242
10,000,000 | 8.8260 x 10^6,468,498
50,000,000 | 1.4503 x 10^32,333,016
100,000,000 | 2.4302 x 10^64,632,601
500,000,000 | 4.9347 x 10^323,213,343
1,000,000,000 | 3.3181 x 10^646,368,935
 

  ... and extrapolating probabilities in the absence of sufficiently flexible software:

100,000,000,000,000 | 10^64,600,000,000,000  (estimated atoms in a cell)
100,000,000,000,000 | 10^64,600,000,000,000  (estimated cells in the human body)

These calculations are estimates in that they assume a simple three-dimensional arrangement of six distinct entities - whereas a realistic calculation would take into account 20 distinct amino acids or 200 cell types while clustering at multiple, overlapping levels.     The point of the exercise is primarily to highlight the order of magnitude entailed by a "three dimensional" probability space.    As the human genome does not vary by more than 0.1%, the overall probability measure lies well within these bounds.

By contrast, the probability tools at the disposal of natural selection are:

Primary atomic particles in the observed universe: 10^80
Number of seconds since the big bang: 10^17
Maximum number of state transitions per second per atomic particle: 10^43
Evolution's upper bound: 10^140

The number 10^140 is thus the limit on the total number of calculations, chemical reactions or distinct operations of any sort that could have been performed in the history of the universe.    As the limiting bounds of the universe are also evolution's terminus, if natural selection cannot produce an organism incrementally in 10^140 distinct steps it is not supportable as an explanation for the origin of life.    (See e.g. the total number of calculations performed by all the computers on the earth [r62b])

The central claim - that evolution traversed to the solution point of a domain too big to be incrementally traversed, with a resolution of 1 in 10^64,600,000,000,000, by flipping the biological coin no more than 10^140 times in a goalless process - is mathematically unusual.

A valid statistical explanation would need to take us along a demonstrably smooth path from the first molecule to the final aggregation, within the time available, subject to the molecular and chemical constraints of the earth - and with change to spare.

We know that any position of a 3x3x3 Rubik cube can be solved in an optimal 20 moves or fewer.  Compare this with the number of possible positions - 43,252,003,274,489,856,000.   If the ratio between 20 and 43,252,003,274,489,856,000 is sufficient to demonstrate the presence of human intelligence, what does the ratio between 10^140 and 10^64,600,000,000,000 suggest?
 


Robotic cars, Eigen's paradox and how to win Lotto using natural selection

If computers can get through more mathematical calculations in a second than we can manage in a lifetime, can't computer simulations of evolutionary principles demonstrate the veracity of the evolutionary viewpoint?    The short answer is that all evolutionary algorithms are intelligently designed, goal-seeking algorithms that use stochastic methods to explore a solution space where a suitably all-encompassing mechanical algorithm may be hard to define.   They don't prove that randomness is capable of producing life any more than queuing theory proves the existence of shops.


The claim of neo-Darwinian theory is the possibility of deriving complexity without goal-directed processes, subject to nothing more than "natural selection" - itself a random event.    Can we replicate this?

A simple way to test this thesis is to use an elementary robotic car initially equipped with a "goal seeking" light sensor - however, instead of populating the car's computer memory with an intelligently designed, goal-seeking evolutionary program, we populate it with a random bit string - thereby exactly replicating Darwin's "warm little pond".

To provide an evolutionary head start, the car is provided with electric steering, solid state gyroscopes, a microprocessor, variable speed motor, power supply and a computer memory that interfaces directly to these digitized sensor and control lines.    This is the equivalent of a complete, functional cell with one proviso: the "DNA" of the robotic car is left entirely to chance, not to a computer program provided by a biologist.

As a robotic model cannot be replicated rapidly, we emulate this by treating any instruction set associated with a successful outcome as a static parent template subject to bit-level mutations specified by the natural logarithm of the total parental bit count.  The parent template is otherwise conservatively aged by applying bit-level mutations to the template on the basis of the natural logarithm of the number of unsuccessful iterations.

Then, with each successful iteration (survival of the fittest) we get to add more memory, sensors and performance capability.    As the capacity for "stochastic intelligence" grows over time we can add flight-capable motors, wings and flight control surfaces, perhaps multiple light sensors equipped with a lens and focal servo motor - emulating the evolution of flight and the eye.

How much memory to begin with?

In 1971 the Nobel prize winner Manfred Eigen demonstrated that, in the absence of an error-correcting system, the presence of mutations limits the number of base pairs in a macromolecule to about 100 - representative of approximately 144 bits of information.

Eigen's mathematical analysis highlighted that evolution cannot move beyond 100 base pairs without an error-correcting system, yet it is not possible to encode a molecular error-correcting system in under 100 base pairs.   Based on current day observations, the hypothesis is that ribozymes may have provided a suitable low-error pathway; however, the length of the hypothetical ribozyme is not stated.  Moreover, as the cell would not have existed at this point, there would have been nothing to stop the replicas from being washed out to sea; there are no known reactions outside of biology that produce cytosine at a rate sufficient to compensate for its decomposition [r61]; and the lack of phosphates in solution for a suitable backbone provides an additional microbiological challenge. [r59], [r60]

 
Logically, we ignore Eigen's paradox on the basis that all error-correcting systems produced by evolution are by definition products of chance, and provide the robotic car with an evolutionarily respectable 4KB of "starter" memory, i.e. totaling 32,768 bits.

 
Assume that for this particular microprocessor there are 1,000,000 algorithms that would successfully bring the car to the destination.

 
Once the 32,768 bits of memory have been randomly populated the microprocessor is activated.   If the car does not move, the "warm little pond" is emulated by re-populating the microprocessor with a new random bit string; and we proceed back to the start of this paragraph.

What is the probability the car will move towards the light?  

 
The number of ways a random bit string can populate 32,768 bits of memory is 1.41 x 10^9,864.   Therefore the chance that the microprocessor memory will be loaded with an appropriate algorithm on any particular iteration is 1 in 1.41 x 10^9,858.

Assuming an estimated 10^18 seconds since the big bang, with a maximum number of state transitions per second of 10^45, and repeating a "random load" as many times as possible, the probability that the car arrives at its destination at least once in the history of the universe is 1 in 1.41 x 10^9,795.     Bridge players dealing cards at the same rate would get through 4.47 x 10^35 perfect bridge hands.
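A minimal sketch reproducing these orders of magnitude (4 KB of memory, the one million assumed working programs, and the deliberately generous 10^18 x 10^45 iteration budget used above):

import math

# Orders of magnitude for the 4KB "warm little pond" experiment.
memory_bits = 4 * 1024 * 8                       # 4 KB of randomly populated memory
log10_patterns = memory_bits * math.log10(2)
print(f"Possible bit patterns: about 10^{log10_patterns:,.0f}")            # ~10^9,864

working_algorithms = 1_000_000                   # assumed number of successful programs
log10_chance = log10_patterns - math.log10(working_algorithms)
print(f"Chance per random load: 1 in about 10^{log10_chance:,.0f}")        # ~10^9,858

log10_trials = 18 + 45                           # 10^18 seconds x 10^45 transitions per second
print(f"Chance over the history of the universe: "
      f"1 in about 10^{log10_chance - log10_trials:,.0f}")                 # ~10^9,795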

To increase the chance of the car moving, we could work with a restricted memory and use simpler algorithms.  Here the probability of reaching the target in the timeframe of the universe ramps up as follows:

 

 

Bytes | Assumed algorithms | Chance of 1 in ...
32 | 8,000 | approximately 1.4474 x 10^73
64 | 16,000 | approximately 8.3799 x 10^149
96 | 24,000 | approximately 6.4688 x 10^226
128 | 32,000 | approximately 5.6178 x 10^303

(each "chance" is simply 2^(8 x bytes) divided by the number of assumed algorithms)
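The rows above follow from that single expression - a sketch using Python's arbitrary-precision integers:

# Chance of randomly loading a working program for each memory size.
for memory_bytes, algorithms in [(32, 8_000), (64, 16_000), (96, 24_000), (128, 32_000)]:
    chance = 2 ** (8 * memory_bytes) // algorithms
    print(f"{memory_bytes:4d} bytes, {algorithms:6,d} algorithms: 1 in {float(chance):.4e}")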

 

Given success in the 32 byte model, we would copy the original microprocessor code with a small set of random bit mutations and use this as the foundation for the 64 byte model, populate new memory with a random bit string and proceed again.    Then assuming success with the 64 byte model, we would copy the evolved code base subject to further minor random mutations to the 96 byte model, initialize the new memory as before and proceed again.

This is precisely where many mathematicians and evolutionary biologists part company.   Biologists claim that the probability of the car moving at each level should only be measured by the incremental probability - the "small steps" up the back of Mount Improbable.  

 
Isn't this analogous to claiming it is possible to win a Lotto-sized jackpot by correctly guessing individual digits from the winning number, and rolling correctly guessed digits forward week by week until the jackpot is reached?     If the Lotteries Commission used these assumptions, they would be demonstrably bankrupt in every geological era.

Nevertheless, biologists are convinced it is the mathematicians who need to change.   For example in a paper entitled "A pessimistic estimate of the time required for an eye to evolve", Nilsson and Pelger cheerfully assert that Darwin's concern about the evolution of the eye is now primarily an historical curiosity - "the question is now one of process rate rather than of principle."

The paper is widely quoted in academia, popular literature and the internet as providing compelling evidence for the evolution of the eye in time scales so short that even geologists cannot measure them.    So it is "unsurprising" that the eye evolved completely separately at least 40 to 60 times, and "obvious that the eye was never a real threat to Darwin's theory of evolution".   [r63]

Richard Dawkins describes Nilsson and Pelger's paper in "River out of Eden" in these terms:

 
"Their task was to set up computer models of evolving eyes to answer two questions. The first was: is there a smooth gradient of change, from flat skin to full camera eye, such that every intermediate is an improvement? (Unlike human designers, natural selection can't go downhill not even if there is a tempting higher hill on the other side of the valley.) Second, how long would the necessary quantity of evolutionary change take?  ..... In their computer models, Nilsson and Pelger made no attempt to simulate the internal workings of cells.

 
"Nilsson and Pelger started after the invention of the photocell. They worked at the level of tissues: the level of stuff made of cells rather than the level of individual cells. ....

"The results were swift and decisive. A trajectory of steadily mounting acuity led unhesitatingly from the flat beginning through a shallow indentation to a steadily deepening cup, as the shape of the model eye deformed itself on the computer screen.

"The transparent layer thickened to fill the cup and smoothly bulged its outer surface in a curve. And then, almost like a conjuring trick, a portion of this transparent filling condensed into a local, spherical subregion of higher refractive index. Not uniformly higher, but a gradient of refractive index such that the spherical region functioned as an excellent graded-index lens."   
[r64]

Clearly, a successful computer simulation - providing, after 150 years, an unambiguous mathematical basis for the theory of evolution - would make this paper of inestimable worth to the scientific community.

Those in search of the defining mathematical trajectory in Nilsson and Pelger's six-page paper may be disappointed.   The paper is based on two equations.  The words "computer" and "simulation" do not appear anywhere, nor are there algorithms or pictures of computer screens.   Nilsson confirmed the paper was not based on a computer simulation.   [r65]

  

The paper starts by hypothesizing what the evolution of the eye ought to look like at a morphological level.  Having determined this would require an increase in the size of an eye structure from unity to 80,129,540 - correlated with invagination and aperture constriction to ensure increasing visual acuity - the paper quantifies these changes in terms of the number of 1% steps required to reach such a point: if 1.01^n = 80,129,540, then n = 1,829 steps.     Each step is associated with a positive gain because "the model sequence is made such that every part of it, no matter how small, results in an increase of the spatial information the eye can detect".    As the model sequence is "made", the paper begins with the results of intelligent intervention rather than the random processes required for a successful evolutionary derivation.

Nilsson and Pelger then move to standard population genetics: R = h²iVm, where R is the change observable in each generation, h² is the heritability (the genetically determined proportion of the phenotypic variance), i is the intensity of selection, V is the coefficient of variation (the ratio of the standard deviation to the population mean), and m is the mean.   Here h² is assigned a typical value of 0.5, and i and V are allocated conservative values of 0.01.  So R = 0.00005m.

From this, 1.00005^n = 80,129,540, for which they estimate n as 363,992.   So the eye could have evolved in approximately 363,992 generations - the geological equivalent of a blink of the new organ.
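The paper's two calculations reduce to elementary arithmetic - a minimal sketch using the values quoted above (h² = 0.5, i = V = 0.01):

import math

# Nilsson and Pelger's two calculations, as described above.
total_change = 80_129_540                        # overall factor of morphological change

n_steps = math.log(total_change) / math.log(1.01)          # 1.01^n = 80,129,540
print(f"1% steps required: {n_steps:,.0f}")                # ~1,829

h2, i, V = 0.5, 0.01, 0.01                       # heritability, selection intensity, coefficient of variation
R = h2 * i * V                                   # change per generation, as a fraction of the mean
print(f"Change per generation: {R:.5f} of the mean")       # 0.00005

n_generations = math.log(total_change) / math.log(1 + R)   # (1 + R)^n = 80,129,540
print(f"Generations required: about {n_generations:,.0f}") # ~364,000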

First, Dawkins' observation that the paper's computer-simulated models provide a "swift and decisive" confirmation of the theory of evolution misses the target for no other reason than that the paper's senior author states no computer simulation was involved.

 

David Berlinski - a post-doctoral fellow in mathematics and molecular biology - highlights a number of technical issues in public correspondence with Nilsson and others, questioning for example the defense of a description of the evolution of real vertebrate camera eyes on the basis of the theoretical optics of compound invertebrate eyes where predicted angular-sensitivity functions have been shown to perform poorly.    

From a mathematical perspective, Berlinski asks: "Just how do Nilsson and Pelger's light-sensitive cells move from one step on that path to the next? I am not asking for the details, but for the odds. There are two possibilities. Having reached the first step on the path, the probability that they will reach the second (and so on to the last) is either one or less than one. If one, their theory cannot be Darwinian -there are no random changes. If less than one, it cannot be right - there is no way to cover 1,829 steps in roughly 300,000 generations if each step must be discounted by the probability of its occurrence."    All biological change comes at a cost.

Responding to a charge by Nilsson that Berlinski is simply moved to reject "uncomfortable scientific results" Berlinski writes:

"The length of time required to form an eye is a matter of perfect indifference to me; had he and Susanne Pelger been able to demonstrate that the eye was in fact formed over the course of a long weekend in the Hamptons, I would have warmly congratulated them. As I have many times remarked, I have no creationist agenda whatsoever and, beyond respecting the injunction to have a good time all the time, no religious principles, either. Evolution long, evolution short - it is all the same to me. I criticized their work not because its conclusions are unwelcome but because they are absurd.

" .... To the extent that simultaneous and parallel changes are required to form a complex organ, to that extent does the hypothesis of random variation and natural selection become implausible. It is one thing to find a single needle in a haystack, quite another to find a dozen needles in a dozen haystacks at precisely the same time. Surely the burden of proof in such matters is not mine. I am not obliged to defend such mathematical trivialities as the proposition that as independent events are multiplied in number, their joint probability of occurrence plummets."    r66 

   

The paradox of life is that the information content of our universe can only be interpreted through the lens of biological processes.  Mathematics rigorously transforms information, but does not create it.   The physical laws of our universe transport information, but cannot interpret it.    And without the awareness of self that comes through the biological framework, such data is without meaning.


So if life is nothing more than the end-product of a cosmic pinball machine, choice is an illusory chain of interlocking molecular interactions stretching all the way back to the Big Bang - making our responses to the data we receive through our senses ultimately meaningless.

In contrast our ability to accurately pursue the foundations of mathematics and physics, to hold conversations and engage in logical interchange is a lucid demonstration that the biological framework transcends molecular determinism.  Like the proverbial iceberg, the part of our human totality that cannot be seen turns out to be the part that counts for the most.  


Are the choices we make truly free?    While biology clearly provides boundary constraints in life, the super-naturalist maintains that within that framework the essence of life is the ability to make true choices.    If there are no truly free choices - if everything is deterministic - then there are no grounds for knowing anything to be true - including the assertion that everything is deterministic.     Logically, the super-naturalist appears to have the upper hand.

  

Someone once said - only half-jokingly - that anyone wanting to debate the existence of God at a university campus would find the most likely candidate in the faculty of astronomy and astrophysics - the least likely in the humanities and biological sciences.

There is an irony here.   Years ago when biologists looked at living cells through microscopes they saw apparently simple amorphous blobs of protoplasm.   Now the documented, visible complexity of the cell exceeds anything that astronomers and astrophysicists encounter in the heavens.

So if the world's most influential atheist, Antony Flew, could renounce atheism in favour of theism as a direct result of discoveries in the field of molecular biology over the last 50 years, it is not inconceivable that had Charles Darwin lived today he too could have reached a different conclusion - not least if he had understood that all mechanistic processes are physically incapable of exploring anything except a fraction of a hair's width in a universe sized molecular solution space in the search for life.

As Thomas Huxley suggested in a note to Charles Kingsley in 1860, "Sit down before fact as a little child, be prepared to give up every preconceived notion, follow humbly wherever and to whatever abysses nature leads, or you shall learn nothing."
  

If there is no room for truth in a closed system, there is even less space for it in a closed mind!




Pascal's Wager - taking a chance on a probability function?

In summary, we have highlighted mathematical, metaphysical and conceptual difficulties of a non-trivial nature that lie in the path of a stochastic model of evolution.   In contrast to the non-trivial discontinuities between fossils in the geological column and a probability space that cannot be fully traversed with the resources available in the universe, the so-called anthropic coincidences are well documented - with cosmic fine tuning suggestive of a precision in excess of ten trillion trillion trillion trillion trillion trillion trillion trillion times better than the best technology human engineers can produce.

So can one objectively accept one probability in the face of considerable discontinuity, yet ignore the ramifications of an astonishingly fine continuum in another?  A mathematical audit requires balanced accounting.    If the chance of finding ourselves in any universe with usable energy is less than one in a googolplex (1 in 10^(10^100)), we need something more robust than arguments based on hypothetical perfect bridge hands before concluding that life is solely a molecular phenomenon.

Of course if human beings are nothing more than biological machines, then in the final analysis we are qualitatively indistinguishable from robots.  Not that you would ever hear it put this way: a campaign spearheaded by bus slogans saying "You are probably only a robot, so relax and enjoy life" would be unlikely to gain an enthusiastic following.  But in the end that is all that is left.
 
Blaise Pascal shaped this dilemma into the terms of a wager.  If we can prove beyond a shadow of doubt that we do in fact live in a purely naturalistic universe, then in the long run the temporary arrangement of matter into life forms on this planet is of no lasting consequence. In the end, we all die, finis, our molecules move on in other shapes and forms.

But if a mathematical analysis of evolution - and the anthropic principle - hints with even the smallest probability at a non-naturalistic universe, at the fingerprints of a divine creation, then even if we believe this probability to be vanishingly smaller than that of evolution, the nature of an externally created and defining "other" suggests it is a probability that should not be left unexamined.

Pascal, the father of probability theory, memorably summarized this belief in God as follows:

If you gain, you gain all; if you lose, you lose nothing.

So it would be appropriate to ask exactly what one may gain.    The totality of what Pascal was referring to can be grasped by looking at the other side of his statistical coin - i.e. in terms of the Judeo-Christian framework.   Whatever one may think of the story of Genesis, its assertion is clear: far from being the result of an accidental configuration of molecules in a large vacuum, humanity could not have a more diametrically different origin - we are created in the image of the Person who created the entire universe.   

When you stop to think about the implications, it is a totally stunning claim. 

Genesis captures the force of this with its use of the word bara' - meaning to create "ex nihilo" - which is used once with respect to the creation of matter (what physicists would call the Big Bang) but three times with respect to the creation of humanity: "God created man in his own image, in the image of God he created him, male and female he created them."

When seen in this light, the totality of the Biblical record is an unflattering picture of how the holders of an extraordinarily high office monumentally botched it and have been on the run ever since.   

How do we know that the claims of the Bible are not just the work of particularly skilful or imaginative writers?

A simple test - predicting the future is difficult even for the skilful and imaginative.  If, as physicists tell us, time and matter are inseparably linked, then the God the Bible claims created the universe should demonstrate transcendence over both time and space.  In the Biblical context miracles are primarily to demonstrate God's transcendence over matter. The prophetic demonstrates His transcendence over time.

Not least, if someone claims to be able to offer you eternal life, this suggests they should at least know what is going to happen in this one!

It is worth pointing out that the God of the Bible shows a prescient awareness of the claims of modern physics: one of the names He uses for Himself throughout the Bible - and the name through which He first revealed Himself to Moses - is "I AM", a God existing outside and beyond time.

Not only are prophecies integral to the warp and woof of the Biblical narrative, the Bible holds them up as a metric of divinity - with Isaiah stating "Bring your proofs ... Tell us what is to come that we may know that you are gods."   

Thus the Bible not only engages us at a personal level but actively invites objective analysis of its claims - through the fulfillment of events at a national and international level.

There are numerous examples.

God promised Abraham that his descendants would be a blessing to all the peoples of the world.   Without doubt the Jewish contribution to civilization remains the highest of any people in history.    For example Time Magazine's "Person of the Century" - Albert Einstein - was Jewish.  One of the co-founders of the ubiquitous Google search engine is from a Jewish family.   The Jewish population represents just 0.25% of the world population yet has been awarded 22% of all Nobel prizes - nearly ninety times its population share.   Israel had the highest number of patent filings in the world per $1bn of GDP in 2007 [r77].   

Jacob prophesied that "The scepter shall not depart from Judah, nor the ruler's staff from between his feet, until Shiloh comes, and to him shall be the obedience of the peoples" (Genesis 49:10).  It is interesting to note that the nation of Israel has not had a king since the time of Christ.   Coincidence?

In 1543 AD Sultan Suleiman the Magnificent rebuilt the walls of Jerusalem - excluding part of the original city known as Mount Zion.   Because Mount Zion was outside the city, it was converted into farm land - the only part of the original city known to have been ploughed.  Back in 700 BC the prophet Micah had predicted that "... Zion shall be ploughed as a field." (Micah 3:12).   [r76]

Sultan Suleiman's building program also resulted in the closing of the Golden Gate, fulfilling a more obscure pronouncement from Ezekiel: "The Lord said to me, 'This gate is to remain shut. It must not be opened; no one may enter through it. It is to remain shut because the Lord, the God of Israel, has entered through it.'" (Ezekiel 44:2).

Daniel's enigmatic prophecies conclude with the observation that the time of the end will be distinguishable by two things: "People shall run to and fro and knowledge shall increase."   

Interestingly, the word "science" in our self-proclaimed age of science comes from the Latin "scientia" - which simply means knowledge.   And the internet has made information globally accessible, while international travel is commonplace.   Jesus graphically described the fall of Jerusalem (AD 70) and heralded the subsequent return of the Jewish people to Israel once the "times of the Gentiles are fulfilled".   The view that Jesus' prediction was "added" to the Gospels after the event is offset by His statement that Jerusalem would remain under Gentile rule or domination until the times of the Gentiles were fulfilled.

At the date of independence in 1948 the commonly accepted view reflected by World War II military commanders such as General Montgomery was that the fledgling State of Israel would be wiped off the map within two to three weeks - a country equipped with one tank, 28 scout planes and no warplanes was not expected to survive a combined invasion by six national armies.    Despite overwhelming odds the fulfillment of the second part of this prophecy remains on track 60 years later.

Then there are unlikely prophecies embedded in the middle of general pronouncements such as the one from Jeremiah:  "Behold, I will bring them from the north country and gather them from the farthest parts of the earth, among them the blind and the lame, the pregnant woman and she who is in labour, together; a great company, they shall return here".    The unusual part, of course, is that women do not engage in international migration while in labour.

There were several distinctive waves of immigration from the north countries following the fall of Communism.   Less well known is that during Operation Solomon - a covert operation over May 24 and 25, 1991, in which 14,325 Ethiopian Jews were airlifted to Tel Aviv - the New York Times recorded five babies being born on the flights.  One could call this a precise prophetic delivery. [r75]

Most specifically, Old Testament prophecies focus on the life of Christ.  The chance that just 48 of these would come true in the life of one person on a purely random basis has been estimated at 1 in 10^157 [r54].  Yet all of them did.
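As a rough sense check of the scale of that estimate - assuming, as a simplification of the source's method, that the 48 fulfillments are modelled as independent events sharing a single common probability p - the implied average probability per prophecy works out to roughly one chance in two thousand:

\[
p^{48} = 10^{-157} \;\Longrightarrow\; p = 10^{-157/48} \approx 10^{-3.27} \approx \tfrac{1}{1900}
\]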

Thus this demonstration of transcendence over time in terms of fulfilled prophecy implicitly provides surety for the Biblical claim of eternal life.   A God subject to time could not offer us eternity with certainty.

Christianity - founded by a Jew - made major contributions to the development of the West through education: some of the best-known colleges and universities, such as Harvard, Yale, the University of California (Berkeley), Princeton, Oxford, Cambridge, Paris, Heidelberg and Basel, have Christian origins.  

Similarly catechetical schools, cathedral schools, Episcopal schools, monasteries, medieval schools, schools for the blind and deaf, Sunday schools, modern grade schools, secondary schools, modern colleges, universities and universal education have one thing  in common: they are products of Christianity [r80].  

When we recall that at Christmas "God became man, and dwelt among us", we have a totally seamless event - as we were already created in the image of God, it is not as though God had to metamorphose into something that was foreign or an "otherness".   Jesus both pointed to and demonstrated the nature of the truth He fully embodied. 

Was Christ consistent in His revelation of the nature of God?   Did the things He said, the miracles He performed and His prophetic words underline or conflict with His claim to be God?   Should we be concerned that He broke up every funeral He attended, including His own?   Has anyone else in history done that?

Or does a naturalistic interpretation - resting on a probabilistic framework that is itself open to significant challenges - give us grounds to ignore the wider significance of the supernatural: Christ raising Lazarus from the dead, the fulcrum of Easter, the near-death experiences reported by modern contemporaries?   [r58]

In line with Pascal's Wager, the options around eternity are those that should be evaluated the most carefully.  

As Ravi Zacharias comments:  "All judgments bring with them a margin of error.  But no judgment ought to carry with it the potential for so irretrievable a loss that every possible gain is unworthy of merit."    To do so, he says, is to engage in a faith beyond the scope of reason, and to be willing to live and die in that belief is a very high price to pay for mere conjecture.  

We summarize the more significant points of the two worldviews here - keeping in mind that the roots of Western scientific tradition stand on the theology of a "reasonable universe" in which one is free to make real choices.

Naturalistic worldview

  • We assert the material universe is all there is.
  • We are molecular entities that have grown over millions of years through the mechanism of random mutations and natural selection.
  • Choice is ultimately an illusion as all our actions are functional responses to changing molecular configurations in our brains.   Every choice, every assertion is a product of the past.
  • Values, beauty, hope, truth survive as ideas only so far as they help us survive.  Everything of supposed value in this world is ultimately a product of molecular motion.   There are no absolute consequences, only "survival values".
  • The only permissible universe is one that can be measured. A hypothetical, invisible universe inhabited by spiritual beings is dismissed as superstitious folly and precluded from academic discussion.
  • The "anthropic principle" is explained by postulating millions of parallel or sequential invisible, immeasurable universes, each run under random arrangements of natural law.  We can then claim that the optimal interrelations in our universe are unsurprising - we simply would not be able to observe them anywhere else.  

  • Focus on getting the maximum out of life because life is short.

  • When life ends, the lights go out.   "Near death" experiences are dismissed as medical anomalies.

Biblical worldview

  • God is infinite, transcending both time and space.

  • We are made in His image, making us of infinite worth to Himself.  He records every word we say and knows the number of hairs on our head.

  • Although we rebelled against God and brought a curse of futility, pain and death on both ourselves and creation, God sent His Son Jesus to take the penalty for our rebellion - on Himself.  This was done as a free gift, but one we need to appropriate for ourselves.

  • Behind every miracle in the Bible is a demonstration of God's willingness and power to reverse the terms of the original sentence that hangs over nature and ourselves - always on His terms.

  • Fulfilled prophecies demonstrate God's transcendence over time.    As we perceive a single moment, God sees all of time - describing Himself as "I AM", the One who transcends time.  A practical consequence is that Judaism and Christianity contain extensive prophetic literature: the crucifixion of Jesus is predicted in the Psalms, and Jesus predicted both the fall of Jerusalem and the later re-establishment of Israel.  Fulfilled prophecy provides a logical foundation for assurance of eternal life.

  • The laws of the universe interrelate in a way that optimally supports life on earth.  The "anthropic principle" is exactly as expected: anything other than a rational universe would constitute a mystery.

  • Our actions are not pre-determined.   Because we are created in the image of God our actions are completely free but carry eternal consequences.

  • Values, beauty, hope, and truth reflect absolute attributes that come from God and are available to enrich our present experience.

  • Sacrificial loving, practical caring should be our ultimate goals.

  • When life ends, the stage lights go on - we meet the Author of Life.

If it takes the death and resurrection of an infinite, perfectly loving God to make it possible for those created in His image to find a place with Him in eternity, it stands to reason there is nothing we can do as finite beings that can ever act as a substitute.   If we spurn this infinite gift, according to Biblical revelation we are effectively rejecting our own nature and inheritance.    By contrast the invitation of Christ - with the pre-requisite of turning our backs on our sins and accepting forgiveness for them - stems from the logic of an immeasurable grace and the completion of our highest joy rather than any sense of cheap and clumsy playground exclusiveness.    It is an invitation that transcends the confines of naturalism.

As someone once memorably noted, Christianity is not about following lists of "do's" and "don'ts" so much as embracing a single "done" - what God has already completed for each one of us.   It is not a gift earned but a gift worn and lived out.

Galileo demonstrated that theories incompatible with the mathematical language of the universe will ultimately fail the test of time.

The ultimate irony would be to spurn the opportunity for a friendship with your Creator - one transcending the limitations of naturalism - on the basis of a theory of origins that still awaits a quantitative mathematical foundation after 150 years.

It's worth checking the mathematics before our time in this universe is up.



References:

(Format: reference id - title, author, source/link.)

r1: Old Fourlegs - The story of the Coelacanth. J.L.B. Smith. Longmans, Green and Co., London, 1956.
r2: Smith's Sea Fishes. Dr P Heemstra & Prof MM Smith. 1991 ed., summary pages 152-153.
r3: Evolution under the Microscope. David Swift. Pages 285-286.
r4: 'Missing Link' In Evolution Of Flowering Plants. University of Colorado. http://www.colorado.edu/news/releases/2006/187.html
r5: Slit snail gastropods. http://www.mnh.si.edu/livingfossils/slitsnailframeset.htm
r6: Evolution under the Microscope. David Swift. Page 359.
r7: The Hidden Face of God. Gerald Schroeder. Page 103.
r8: Letter to "Philosophy Now". Antony Flew. http://www.philosophynow.org/issue47/47flew.htm
r9: Biola News and Communications interview. Antony Flew and Gary Habermas. http://www.biola.edu/antonyflew/
r10: Natural History of Whales and Dolphins. Peter Evans. Quoted in One Small Speck, page 201.
r11: Mycoplasma genitalium - loss of genetic material. http://www.tigr.org/tdb/CMR/gmg/htmls/Background.html
r12: From Mathematics to Philosophy. Hao Wang. Quoted by D. Berlinski, The End of Materialist Science.
r13: See the Universe in a Grain of Taranaki Sand. Glen Mackie. http://astronomy.swin.edu.au/~gmackie/billions.html
r14: Article for Darwin Centenary Symposium. George Gaylord Simpson. Quoted in Evolution: A Theory in Crisis, page 165.
r15: Evolution: A Theory in Crisis. Michael Denton. Page 329.
r16: The Road to Reality - A Complete Guide to the Laws of the Universe. Roger Penrose. Page 730, The Big Bang and its thermodynamic legacy.
r17: Centennial tribute to Kurt Gödel. Institute for Advanced Study. http://www.ias.edu/spfeatures/kurt_Gödel
r18: Evolution of Life: A Cosmic Perspective. N.C. Wickramasinghe & F. Hoyle. Action Bio Science original paper.
r19: Human genome. Elizabeth Pennisi. Science, 16/2/01, Vol. 291, pp. 1177-1180.
r20: The "Gold Standard" sequence. Sanger press release. Genome - Sanger.
r21: The Death of Genetic Determinism and Beyond. Mae-Wan Ho. The Human Genome Map.
r22: A New Paradigm for Life. Richard Strohman. Life Beyond Genetic Determinism.
r23: Comments on monograph Ecology and Genetics. Richard Strohman. Ecology and Genetics.
r24: Washington University in Saint Louis, educational material. Protein putsch.
r25: Most Human-chimp Differences Due To Gene Regulation - Not Genes. University of Chicago.
r26: Comparing Chimp, Human DNA. University of California - Davis.
r27: The Wonder of Man. Werner Gitt.
r28: Backgrounder: comparing genomes. Dr George Johnson.
r29: The Blind Watchmaker. Richard Dawkins. Pages 85-86; quoted in One Small Speck.
r30: A Speed Limit For Evolution. R.P. Worden.
r31: Punctuated equilibria: an alternative to phyletic gradualism. Stephen Gould & Niles Eldredge.
r32: Identification and Cloning of Single Gene Deletions in the Nematode C. elegans. Don Moerman.
r33: What does a worm want with 20,000 genes? Jonathan Hodgkin. Caenorhabditis elegans.
r34: Diversity and evolution of programmed cell death. Boris Zhivotovsky. Cell Death and Differentiation.
r35: Not by chance! Shattering the modern theory of evolution. Dr Lee Spetner.
r36: Ibid., page 165.
r45: Ibid., page 219.
r37: One Small Speck to Man: the evolution myth. Vij Sodera. Page 156; ref. to Donald and Judith Voet, "Biochemistry", John Wiley, 1995, p194.
r46: Ibid., One Small Speck, pages 150-151; ref. to "Molecular Cell Biology".
r38: Commentary on Eyeless Gene in Drosophila. Stephen Jones. Stephen Jones blog.
r39: Science Magazine: Random Samples - Eyeless Gene commentary. (Eyeless 1)
r40: Walter Gehring: Master Control Genes - interview. (Eyeless 2)
r41: Squid Pax-6 and eye development. Walter Gehring et al. (Eyeless 3)
r42: Miscellaneous bridge probabilities. Bridgehands.com.
r43: Letter to Asa Gray, February 1860. Charles Darwin. In Darwin F., ed., "Life and Letters of Charles Darwin", 3 vols, John Murray: London, vol. 2, 1888, p273; quoted in Stephen Jones blog.
r47: Intelligent Design: The Origin of Biological Information and the Higher Taxonomic Categories. Stephen C. Meyer.
r48: On-Line Encyclopedia of Integer Sequences; Speedcubing.com. IBM ooRexx. See also (general interest): ACM, minimum moves required to solve Rubik's cube.
r49: Speculations on biology, information and complexity. Gregory Chaitin.
r50: Is Our Inverted Retina Really Bad Design? Peter WV Gurney.
r51: Intra- and interspecific variation in primate gene expression patterns. Wolfgang Enard et al. Science, Vol 296, 12 April 2002, pp. 340-343; quoted in One Small Speck, page 435.
r52: Denying Darwin: David Berlinski and Critics. Commentary, September 1996, pp. 9, 28, 30.
r53: Gödel, Escher, Bach: An Eternal Golden Braid. Douglas R. Hofstadter. Amazon.
r54: The New Evidence that Demands a Verdict. Josh McDowell. Thomas Nelson Publishers, 1999.
r55: Mathematical Challenges to the Neo-Darwinian Interpretation of Evolution - Discussion: Paper by Dr. Ulam. Ed. Moorhead PS & Kaplan MM. Wistar Institute Symposium Monograph Number 5, Wistar Institute Press, Philadelphia PA, 1967, pp. 28-29.
r56: The Miracles of Darwinism - interview with Marcel-Paul Schützenberger.
r57: The Wonder of Man. Werner Gitt. Page 16.
r58: My Glimpse of Eternity. Ian McCormack. http://www.aglimpseofeternity.org/
r59: Self organization of matter and the evolution of biological macromolecules. Manfred Eigen. SpringerLink, 1971.
r60: Eigen's Paradox. Wikipedia entry.
r61: Prebiotic cytosine synthesis: A critical analysis and implications for the origin of life. Robert Shapiro. National Academy of Sciences.
r62, r62b: Computational capacity of the universe. Seth Lloyd. Arxiv.
r63: A pessimistic estimate of the time required for an eye to evolve. Dan-E Nilsson & Susanne Pelger. Proceedings of the Royal Society of London.
r64: Where d'you get those peepers. Richard Dawkins. Simonyi Professorship web site.
r65: A Scientific Scandal. David Berlinski. Discovery archive 1.
r66: A Scientific Scandal? David Berlinski & Critics. Dan-E. Nilsson, Paul R. Gross, Matt Young, Mark Perakh, Jason Rosenhouse, Nick Matzke, David Safir, Norman P. Gentieu, David Berlinski. Discovery archive 2.
r67: Optical fibers in the vertebrate retina. Jochen Guck, Andreas Reichenbach et al. Proceedings of the National Academy of Sciences.
r68: Muller cells: Nature's fibre optics. Neurophilosophy Wordpress blog.
r69: Scientific Reduction and the Essential Incompleteness of All Science. Karl R. Popper, University of London. "Studies in the Philosophy of Biology", Vol. 259, 1974, pp. 259-284, p. 270.
r70: It's all just number-crunching. The Economist. Review of Seth Lloyd - the largest computer in the universe.
r71: Ribosome: Protein of the Month, October 2000. RCSB Protein Data Bank. Ribosome structure.
r72: Ribosome overview. Nobel Prize site.
r73: Lab computer simulates ribosome in motion. C|Net News.com. Ribosome simulation.
r74: The Road to Reality - A Complete Guide to the Laws of the Universe. Roger Penrose. Page 693, The Big Bang and its thermodynamic legacy - the robustness of the entropy concept.
r75: Report on Operation Solomon, New York Times, May 26 1991. Joel Brinkley. New York Times archives.
r76: The Land of Promise: Notes of a Spring-Journey. Horatius Bonar. Amazon book reference.
r77: Patent filings. The Economist, February 28 2008.
r78: Genesis and the Big Bang. Gerald Schroeder. Amazon book listing.
r79: The Dawkins Letters. David Robertson. Amazon book listing, page 70.
r80: How Christianity Changed the World. Alvin J. Schmidt. Amazon book listing, page 185.
r81: Deep Sea Protist Provides New Perspective On Animal Evolution. Science Daily.
r82, r83: Without Enzyme, Biological Reaction Essential To Life Takes 2.3 Billion Years. Science Daily.
r84: OMP decarboxylase - 78 million years. Wikipedia entry.
r85: Slowest Known Biological Reaction Takes 1 Trillion Years. Proceedings of the National Academy of Sciences.
r86: Priapulids. PNAS / Science Daily link.
Slow human evolution revealed by worm genome. General Science. http://www.physorg.com/news9717.html
In the Beginning was Information. Werner Gitt.
Responses to Darwin's Rottweiler & the public understanding of science. Peter S. Williams. Reviewing the Reviewers.
Marxism as the Ideology of Our Age. Professor Nikolaus Lobkowicz. http://www.leaderu.com/truth/1truth13.html
Reflections on the Human Genome Project. Craig Holdrege and Johannes Wirz. Life Beyond Genes.
Platynereis dumerilii: elucidating the evolution of genomes and of the CNS. Detlev Arendt.
Darwinian Fairy Tales. David Stove. Amazon.
Gödel's Proof. Ernest Nagel & James Newman; ed. Douglas R. Hofstadter. Amazon.

"Magna opera Domini exquisita in omnes voluntates ejus."

This is the inscription that James Clerk Maxwell inscribed above the door of the 'old' Cavendish Laboratory in Cambridge - where Crick and Watson determined the structure of DNA.   Translated, it means "Great are the works of the Lord; they are pondered by all who delight in them" (Psalm 111:2).


Cambridge produced more Nobel prize winners than any other institution, including Oxford - 29 in Physics, 22 in Medicine and 19 in Chemistry [r79].

Copyright © 2007-2014 RJ Goudie

http://darwinsmaths.com/