Sunday, October 08, 2006

Fungibility, gauge-invariance and natural selection

In the current account of particle physics, elementary particles are curiously fungible: one lepton is as good as another. Walking out of a curry shop on Friday, after considering Shinwell and Pitts' monadic account of fresh names via FM-theory, the thought flashed by that the finite-support-plus-permutation-group account of fresh variables -- which, to make the connection explicit, is an account of the curious fungibility of program variables -- might have more than a passing surface similarity to the fungibility of particles. More specifically, i am musing on the idea that gauge-invariance is related to something like alpha-equivalence plus freshness.
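To make the permutation account concrete: in the nominal (FM-set) setting, a name transposition (a b) acts uniformly on a term, and two binders are alpha-equivalent exactly when swapping both bound names to a single fresh name -- one outside the finite support of either term -- identifies the bodies. A minimal sketch, with a made-up tuple representation of lambda terms (not any particular library's):

```python
import itertools

# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def swap(a, b, t):
    """Apply the name transposition (a b) to every name occurrence in t,
    bound and binding occurrences alike -- the nominal permutation action."""
    if t[0] == 'var':
        x = t[1]
        return ('var', b if x == a else a if x == b else x)
    if t[0] == 'lam':
        x = t[1]
        return ('lam', b if x == a else a if x == b else x, swap(a, b, t[2]))
    return ('app', swap(a, b, t[1]), swap(a, b, t[2]))

def names(t):
    """All names occurring in t -- a superset of its (finite) support."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return {t[1]} | names(t[2])
    return names(t[1]) | names(t[2])

def fresh(*ts):
    """A name fresh for all the given terms."""
    used = set().union(*(names(t) for t in ts))
    return next(n for n in ('n%d' % i for i in itertools.count()) if n not in used)

def alpha_eq(s, t):
    """Alpha-equivalence via swapping-to-fresh: the binders are
    interchangeable precisely because any fresh name is as good as another."""
    if s[0] != t[0]:
        return False
    if s[0] == 'var':
        return s[1] == t[1]
    if s[0] == 'app':
        return alpha_eq(s[1], t[1]) and alpha_eq(s[2], t[2])
    c = fresh(s, t)
    return alpha_eq(swap(s[1], c, s[2]), swap(t[1], c, t[2]))
```

The fungibility of bound variables lives entirely in `fresh`: the definition never cares *which* fresh name is chosen, only that one exists -- which is the flavor of symmetry being compared to gauge-invariance above.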

While you, dear reader, if you really are out there, are verifying the resonance of this idea, let me throw in a counterpoint to the very idea of fungibility. What if fungibility is only apparent? What if Darwin's dangerous idea is really, really scale-invariant? What if there were enough fine-grained structure in particles to support a process of natural selection? What if the apparent stability and similarity of structure across a given class of particles arises only because the populations are subject to such intense selection pressures that, to investigators at our scale, they seem coherent and stable? Certainly the string-theoretic account gives more than enough material to support a natural-selection process, in principle.
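The "apparent fungibility from intense selection" intuition is easy to exhibit in a toy model -- a toy, not physics, with every parameter arbitrary: start a diverse population of bitstring "particles", apply strong truncation selection toward a single viable design, and watch pairwise diversity collapse until members look interchangeable to a coarse observer.

```python
import random

random.seed(0)
TARGET = [1] * 20          # the one viable design under this selection pressure
POP, GENS, MUT = 200, 60, 0.01

def fitness(g):
    """Agreement with the viable design."""
    return sum(1 for a, b in zip(g, TARGET) if a == b)

def diversity(pop):
    """Mean pairwise Hamming distance over a sample -- how distinguishable
    population members are to an observer comparing them."""
    sample = random.sample(pop, 30)
    pairs = [(a, b) for i, a in enumerate(sample) for b in sample[i + 1:]]
    return sum(sum(x != y for x, y in zip(a, b)) for a, b in pairs) / len(pairs)

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(POP)]
before = diversity(pop)
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 5]                      # intense truncation selection
    pop = [[1 - g[i] if random.random() < MUT else g[i] for i in range(20)]
           for g in (random.choice(survivors) for _ in range(POP))]
after = diversity(pop)
```

After a few dozen generations `after` sits far below `before`: the survivors are near-copies of one design, and the fine-grained variation that selection is still acting on is invisible unless you look closely -- the population *seems* fungible.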

Moreover, the reflective account i am developing also gives enough structure to support a natural-selection process. Whatever the theoretical apparatus, i find the story more satisfying and beautiful. A natural ecosystem of ideas vibrates together once we ascribe internal structure to 'fundamental' particles.
  • Recursive structure -- just like recursive computer programs are made out of recursive computer programs or circular data structures are made out of circular data structures, fundamental particles are made out of ... fundamental particles
    • And, i might add, just like the non-Frankensteinian recipe for making living structures involves living structures of the same type -- like making humans involves humans
  • To get to a certain level of complexity -- like what matches observed complexity of the natural world -- the internal structure of the particles supports enough structure to implement natural selection, in principle*
  • Effects we see up and down the spatio-temporal scales begin to cohere and simplify -- we see a smaller set of laws
  • But, these laws admit a larger set of behaviors, like
    • spacetime itself as potentially alive
    • many more forms capable of rich and complex organization
    • many more forms capable of hosting intelligence
      • Nothing, in principle, keeps a cell from having as much intelligence as a human!
This is not surprising: when we relax our axioms we admit more models; but, by the same token, following Chaitin, we reduce risk. So it actually turns out to be more conservative to consider accounts with fewer points we must take as self-evident.
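The recursive structure in the first bullet above can be sketched schematically, in the spirit of a reflective (rho-calculus-style) syntax: names are quoted processes and processes are built only from names, so the 'atoms' at one level are whole terms at another. The class names and notation here are illustrative, not drawn from any existing implementation.

```python
# A schematic reflective syntax: names are quoted processes,
# processes are built from names -- atoms made of atoms, all the way down.

class Proc:
    pass

class Zero(Proc):
    """The inert process -- the base of the recursion."""
    def __repr__(self):
        return '0'

class Name:
    """A name is nothing but a quoted process."""
    def __init__(self, p):
        self.quoted = p
    def __repr__(self):
        return '@(%r)' % self.quoted

class Output(Proc):
    """Send a quoted process as a message on a channel (itself a name)."""
    def __init__(self, chan, msg):
        self.chan, self.msg = chan, msg
    def __repr__(self):
        return '%r!(%r)' % (self.chan, self.msg)

def drop(n):
    """Unquote: reflect a name back into the process it quotes."""
    return n.quoted

# Every name is built from a process, every process from names:
n0 = Name(Zero())          # the first name, @(0)
p = Output(n0, Zero())     # a process using that name ...
n1 = Name(p)               # ... which can itself be quoted into a new name
```

There is no external alphabet of atoms here: the theory generates its own names from its own terms, which is the sense in which "fundamental particles are made out of ... fundamental particles" can be a coherent, non-regressive claim.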

i fear this rambling may seem too abstract. So, let me try to be much more specific. What i am suggesting is that
  • String theory has an essentially correct idea -- give up atomism
    • Ascribe internal structure to elementary particles
    • Use this internal structure to simplify the account of observed complexity
  • But, they should also take two essentially correct ideas from computing
    • You don't have to give up digitalism when you give up atomism
      • The internal structure can be digital, discrete, more to the point algebraic
      • This design choice simplifies accounts -- and calculations -- dramatically
    • Use reflection -- this way we can avoid the inevitable question -- but what are the branes made of? In a reflective account atoms at one level can be made of atoms (with the very same theory) at another level
  • Einstein's programme does not need to be abandoned -- giving up atomism does not necessarily result in a background-dependent story -- you can still have a co-inductive account of spacetime and matter-energy in which each mutually defines the other.
    • A computational account of behavior suffices to give rise to an account of geometry via bisimulation. That is, the equivalence classes of the appropriate bisimulation will give rise to a dynamically metrizable topology that provides the digital analog of Einstein's continuous account.
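The first step of that last bullet is already an algorithm: for a finite labelled transition system, partition refinement computes the strong-bisimulation equivalence classes -- the raw material from which any derived topology would be built. A minimal (unoptimized) sketch:

```python
def bisimulation_classes(states, trans):
    """Coarsest strong-bisimulation partition of a finite LTS.
    trans maps (state, label) -> set of successor states."""
    labels = {l for (_, l) in trans}
    partition = [set(states)]                 # start with one block ...
    changed = True
    while changed:                            # ... and split until stable
        changed = False
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)
        new_partition = []
        for block in partition:
            # Two states stay together iff, for every label, they can
            # reach exactly the same set of current blocks.
            sig = {}
            for s in block:
                key = tuple(sorted(
                    (l, frozenset(block_of(t) for t in trans.get((s, l), ())))
                    for l in labels))
                sig.setdefault(key, set()).add(s)
            new_partition.extend(sig.values())
            if len(sig) > 1:
                changed = True
        partition = new_partition
    return partition
```

Each block of the resulting partition is a behavioral equivalence class: states an observer cannot tell apart by any experiment expressible in the transition labels. The conjecture in the bullet is that a metric and hence a geometry can be hung on such classes; the refinement itself is standard.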
So, even more specifically and more technically, i am suggesting that the entire stack of physical theories from an account of quantum gravity all the way up through biology can be realized and considerably simplified by a reflective process-calculus-based theory. At the root of this account we accept that processes -- in the mathematical account of a reflective process calculus -- have an ontological status that is also physical. In contemporary slang, 'it' is 'bit', but our account of bit is based in process calculi.

To demonstrate this to the satisfaction of the community we have to
  • make contact with existing theories in such a way that analogous calculations can be performed
    • this involves producing an interpretation of stable, successful theories in the new framework
  • demonstrate a value proposition
    • simplify calculations
    • make new calculations possible
  • demonstrate testability/falsifiability
This is the account i am working toward; but more to the point, this is the kind of account i believe is 'in the air' in the way profound scientific ideas have always been in the air. It's like the community begins to vibrate at a series of frequencies that lets ideas that need to be expressed be expressed. There's a kind of fungibility of idea-carriers; one thinker is as good as another; it's the overall population that has to be rich enough and coherent enough to allow an idea to be crystallized in a workable account of a useful theory.

* This notion is naturally connected to a more specific critique of the current scientific community's ability to appropriately utilize the theory of computation in formulating and debating scientific theories. Why not take stock of all the properties that go into the standard model and ask the following question -- what is the computational complexity of any theoretical account that can give rise to all these properties? That is, independent of any specific model of the properties, what is the computational complexity of any/all such accounts? i have my own suspicions regarding what the answer will be, but that's somewhat beside the point; the point is to use the methodology of computational complexity to winnow, weed or otherwise evaluate scientific theories.
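The footnote's question asks for lower bounds on any account, which in general is uncomputable; but in the spirit of Chaitin's algorithmic information theory there is at least a computable *upper* bound one can actually run: the compressed length of a description. A toy sketch, with entirely made-up 'property tables' standing in for the standard model's parameter lists:

```python
import hashlib
import zlib

def description_length(data: bytes) -> int:
    """Crude upper bound on algorithmic information content: the length of
    a zlib-compressed encoding. Loose, but computable -- which true
    Kolmogorov complexity is not."""
    return len(zlib.compress(data, 9))

# A highly patterned table of properties admits a short account;
# a structureless one forces any account to be long.
regular = b'spin:1/2 charge:-1 mass:0.511MeV ' * 100     # patterned
noisy = b''.join(hashlib.sha256(bytes([i])).digest()     # ~incompressible
                 for i in range(100))
```

The methodological point survives the crudeness of the proxy: if the observed property tables compress well, a short theoretical account is at least possible, and competing accounts can be weighed by how close they come to that bound.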

