Difficulty of Evolving New Functions



Postby stcordova » Sat Mar 10, 2018 2:42 pm

I actually break ranks somewhat with the ID community and say that some new functions are easy to evolve. At the DNA level there are point mutations and in/del (insertion/deletion) mutations. In a similar manner, at the protein level there are amino acid changes, usually called amino acid substitutions rather than point mutations, and there are also in/del mutations of amino acids.

Given there are 20 amino acids, a ROUGH approximation is to say the probability of one random amino acid being substituted is 1 out of 20. This is a ROUGH estimate because of the degeneracy of the 64 possible codons mapping to 20 amino acids plus 3 stop codons -- thus the probability of one amino acid over another is not really equiprobable....
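To illustrate why the 1-in-20 figure is only rough, here is a small sketch using the standard genetic code's degeneracy counts (how many of the 64 codons encode each amino acid). If codons were drawn uniformly at random, the chance of a given amino acid would range from 1/64 (Met, Trp) up to 6/64 (Leu, Ser, Arg), rather than a flat 1/20:

```python
# Degeneracy of the standard genetic code: number of sense codons
# encoding each amino acid. 61 sense codons + 3 stop codons = 64.
DEGENERACY = {
    "Ala": 4, "Arg": 6, "Asn": 2, "Asp": 2, "Cys": 2,
    "Gln": 2, "Glu": 2, "Gly": 4, "His": 2, "Ile": 3,
    "Leu": 6, "Lys": 2, "Met": 1, "Phe": 2, "Pro": 4,
    "Ser": 6, "Thr": 4, "Trp": 1, "Tyr": 2, "Val": 4,
}
STOP_CODONS = 3

# Sanity check: the counts account for all 64 codons.
assert sum(DEGENERACY.values()) + STOP_CODONS == 64

# Probability of each amino acid if a codon were chosen uniformly at
# random, compared with the naive 1/20 estimate.
for aa, n in sorted(DEGENERACY.items(), key=lambda kv: -kv[1]):
    print(f"{aa}: {n}/64 = {n/64:.4f}  (naive 1/20 = {1/20:.4f})")
```

This only shows the codon-counting effect; real substitution probabilities also depend on mutation biases and selection, which the sketch ignores.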

Someone has said Doug Axe and Ann Gauger have argued that any new function requiring more than 2 amino acid substitutions is not possible. I find that hard to believe. If the probability of one amino acid change is on the ROUGH order of 1 out of 20^1, then 2 amino acid changes is 1 out of 20^2, 3 amino acid changes 1 out of 20^3, etc.
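The scaling described above (treating each substitution as an independent 1-in-20 event, which is the post's own rough assumption) can be tabulated directly:

```python
# Rough-order estimate: probability that n specified amino acid
# substitutions all occur, treating each as an independent 1-in-20 event.
for n in range(1, 8):
    p = 20.0 ** -n
    print(f"{n} substitution(s): 1 in 20^{n} = 1 in {20**n:,} (p = {p:.2e})")
```

Note that 20^7 is about 1.28 billion, which is why billion-strong bacterial populations come up in the next paragraph.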

So let's take bacteria. If there are a billion bacteria with different amino acid variants of the same protein, then a simultaneous 7 amino acid change might be feasible. Given the rates of mutation and all the bacteria of one species in the world, this seems feasible to me.
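A hedged back-of-envelope check of that claim, under the post's own simplifying assumptions (a billion independent trials, each hitting a specific 7-residue combination with probability 1/20^7):

```python
# Assumed: ~10^9 bacteria, each an independent "trial" for a specific
# combination of 7 amino acid substitutions at 1/20 per residue.
population = 1e9           # assumption: a billion bacteria
p_seven = 20.0 ** -7       # ~7.8e-10

# Expected number of bacteria carrying the specific 7-residue combination.
expected_hits = population * p_seven
print(f"20^7 = {20**7:,}; expected carriers in 10^9 bacteria = {expected_hits:.5f}")
```

The expectation comes out near 1 (about 0.78), so on these assumptions a 7-substitution variant is borderline reachable by a billion bacteria, and comfortably reachable by a whole global species over many generations. This ignores selection, linkage, and the need for the changes to be simultaneous rather than sequential.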

It would obviously be a different story with human populations.

So to answer these questions, one needs to specify parameters like population size, mutation rate, and reproduction rate.

I don't think there are any easy, rule of thumb answers.

Regarding Lenski, he had huge bacterial populations relative to human populations, and Behe rightly pointed out that if citrate digestion is all that could be evolved in 30,000 generations of bacteria, how much can we reasonably expect humans to evolve in 30,000 generations (roughly 600,000 years)?
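The generation-to-years conversion used above assumes roughly 20 years per human generation:

```python
# The post's conversion: 30,000 human generations at an assumed
# ~20 years per generation.
generations = 30_000
years_per_generation = 20  # assumption implied by the post's figure
total_years = generations * years_per_generation
print(f"{generations:,} generations x {years_per_generation} yr = {total_years:,} years")
```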

Conversely, since it is easier to destroy function than create it (as any engineer can clearly see), it is easy to imagine LOSING function at enormous rates. This is also an empirically testable hypothesis and is actively studied by John Sanford's group and associates.
