Wednesday, November 22, 2006

May you live in interesting times

David Snyder, my collaborator on the knots-as-processes work, found a nice counterexample to the proof method i used to establish the main theorem. Perko pairs are minimal-crossing projections of the same knot with markedly different wiring diagrams. These pairs are related by chains of (mostly) R3 moves. The problem with my proof method was that while the R1 and R2 differences in crossings could be handled with restriction, using restriction to handle the R3 moves leaves you with essentially completely closed processes after chains of these moves. So the theorem remains correct, but only because the processes interpreting the knots are completely unobservable -- i.e. they have been restricted on all the ports used by all the crossing and wire processes.

i have found a way around the problem. Below i outline the strategy. There are two major components. The first is to recognize that you can label all transitions with crossings or wires, so that the labelled processes can be plugged into R-move-based contexts. The second is a tower of bisimulations built on those labels: first a bisimulation that is one R-move away, then an R-bisimulation up to R-bisimilarity. Iterating this procedure yields an equivalence that tolerates sequences of R-moves, including nested or overlapping applications. The R-move sequences can be used to construct an R-bisimulation up to R-bisimilarity^n; conversely, if you are told that two processes in the image of the encoding are R-bisimilar up to R-bisimilarity^n, this means, ultimately, that you can demand chains of R-move contexts for every state either process can reach, from which you can recover the R-move-sequence proof of the ambient isotopy of the knots.
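To fix notation, here is the shape of the standard up-to-bisimilarity technique that this construction adapts; the R-move relativization below is only a sketch of the idea just described, not a worked-out definition. A relation S is a bisimulation up to bisimilarity when P S Q and P --a--> P' imply there is a Q' with Q --a--> Q' and P' ~ S ~ Q' (and symmetrically). The R-move version weakens the matching requirement: P' and Q' need only be related after one of them is placed in a context C[-] built from a single R-move, and passing to R-bisimilarity^n buys chains of n such contexts.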

This has implications for the use of spatial logic. The straight-up predicates we define can distinguish elements within the same isotopy class -- which, for the purposes of ambient-isotopy-based searches, is too fine a distinction. So, one must find a way to coarsen the logic. Intriguingly, the bisimulation construction above is cookie-cutter in the bisimulation-up-to techniques. This leads to the obvious question -- what happens to the Hennessy-Milner logics (HML) when you pull the bisimulation-up-to techniques through the construction of an HML? i believe no one has considered this question. (i've never seen it in the literature.) So, i believe we can build a knot-specific logic by pulling the bisimulation-up-to construction through the logical construction, and exhibit specific examples of queries.
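For readers who haven't met it, the base grammar of HML is

    phi ::= true | phi & phi | not phi | <a> phi

where <a> phi holds of a process P just when P has an a-labelled transition to some P' satisfying phi; for image-finite processes, two processes satisfy the same HML formulae exactly when they are bisimilar. The coarsening i have in mind -- and this is only a guess at the shape of the thing, not a definition -- would relativize the modality, reading <a>_R phi as "after placing the process in some R-move context, there is an a-transition to a state satisfying phi", so that the logic distinguishes knot processes only up to R-bisimilarity.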

Finally, David informed me just today that there is a procedure associating to any compact 3-manifold (think of some substructure in a cell) a link (more general than a knot, but still entirely within our method of encoding). If the procedure is constructive, then making the end-to-end program work gives you a storage and retrieval machine for basically all the 3-d structures you might be interested in, and makes quite solid the claim that this is a way to interpret space as behavior.

Tuesday, November 07, 2006

Knotwork continued

Well... David Snyder and i submitted a proposal to the NSF to fund our continuing research into knots as processes. Here's a link to a draft of the proposal. What is new in this document, for readers familiar with the work we presented at the Cats, Kets and Cloisters workshop (see this entry), is that we begin to spell out how the spatial logics can be used to inspect the fine-grained structure of knots. We suspect that whether the knots underlie an investigation of protein folding or an investigation of quantum gravity, this kind of specification of partial structure to pick out classes of entities will be of considerable interest.

As an important addendum, David and i have now morphed the draft proposal into a draft for submission to the Journal of Knot Theory and Its Ramifications. As usual, there is still much work to be done, but here it is for your reading pleasure.

Friday, November 03, 2006

Group mind, social computing and the structure of software

A friend of mine communicated to me a brilliant re-appropriation of new-media-based social metaphors. In his words:

Wandering bravely into the territory of markitecture BS, one might say that 'enterprise 2.0' is not *yet* about applications as mashups... instead it is about providing a 'social' or 'ecological' model for applications. Each application has an identity, it has a friends list, it chats, it sends and receives emails, it has a blog and subscribes to other blogs, it shares information via a wiki, it discovers information using some sort of ad hoc search and is not tightly bound to one directory model, ...

A few decades back, using earlier, more corporate social metaphors -- inboxes, receptionists, etc. -- Agha et al. made a similar move in trying to communicate the actor model of computation. Frankly, until the discussion strays closer to the ecological/biological side, i don't think people will find metaphors that let them internalize compositionality. i still haven't internalized it, and i have been working with it for decades. Mathematicians i know have been working with it for much longer than i have, and still haven't internalized it.

There's a profound blind spot when it comes to our inner workings. From the perspective of computational capability, it now seems quite clear that there is nothing, in principle, that distinguishes a cell from a human. Likewise, there is nothing, in principle, that distinguishes a string (or m-brane) from a cell or a human. But this view of the world is *very* radical. It goes against millennia of atomist thinking -- smaller is simpler. That view is patently false -- as any parent will tell you ;-). But its basic sterility is also one of the messages sitting inside recursive grammars (found daily in XML schemas) and recursive programs (found daily in applications of global use, like XML parsers) -- especially once you recognize that there doesn't have to be a base case -- as in the specification of streams or non-well-founded sets.

That said, i'm beginning to see glimmers of social metaphors, via group-intelligence measures, that might help with a better understanding of compositionality. Consider the old saw about guessing the number of jelly beans in a jar: a professor puts a jar of jelly beans on the desk and asks the class to say how many beans are in the jar, and the average guess reliably turns out to be more accurate than almost any individual guess. A similar phenomenon was observed about a century ago in a country-fair game of guessing the weight of a cow. These are contrived circumstances that make it possible to wield a simple mathematical tool -- the weighted sum -- to help see the group, qua group, and what it might be thinking. Voting schemes and democratic processes are also relatively simple tools to help see the group -- and, more importantly, to try to facilitate the dialogue between the group and the individual. i have been thinking a bit about more sophisticated mathematical tools, like weighted sums of theories, as a more sophisticated way of seeing the mind of the group.
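To make the simple tool concrete, here is a minimal sketch (with invented numbers) of the weighted-sum view of a group: each guess is weighted by, say, the guesser's track record, and the group's "opinion" is the resulting weighted average.

    # A minimal sketch of the weighted-sum view of a group's estimate.
    # All numbers are invented for illustration.
    guesses = [420, 515, 480, 610, 390]   # individual jelly-bean guesses
    weights = [1.0, 1.0, 2.0, 0.5, 1.0]   # e.g. each guesser's track record

    # The group's estimate, qua group: a confidence-weighted average.
    group_estimate = sum(w * g for w, g in zip(weights, guesses)) / sum(weights)
    print(group_estimate)  # with equal weights this is the plain mean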

What's at stake in attributing 'mind' to a group (or to a component, for that matter) is seeing the entity in question, like a person or an application, as essentially the same kind of thing as the aggregate or the component. More to the point, in the context of this discussion: until one internalizes a basic isomorphism between the group and the individual participating in the group -- a kind of ultimate corporation-as-person proposition, or, going in the opposite direction, a recognition that a person is a corpus -- or at least finds a vehicle for engaging in that internalization process, it's tough slogging trying to understand the structure of software (or hardware, for that matter) and how to manage its complexity.

Wednesday, November 01, 2006

still pondering a way to formulate the intuition regarding economics

Okay... one of the basic insights at play in what i want to talk about is that the assertion that \langle a | M | b \rangle can be interpreted as a probability amplitude was originally couched in a physical theory. If it had been couched strictly as a mathematical theory leading to a calculation of probability, it would have been easier to spot the obvious question -- on what grounds do we have the right to interpret this calculation as a probability? In other words, how are we to interpret this calculation in terms of standard theories of probability? i will assert, here, that we cannot -- that, in fact, this constitutes a third theory of probability, neither frequentist nor Bayesian.
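For concreteness, the standard Born-rule reading is that the probability is the squared modulus of the amplitude:

    p = |\langle a | M | b \rangle|^2

for suitably normalized states |a \rangle and |b \rangle. The question above is what, if anything, licenses calling this p a probability in the first place.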

As such, it would seem to me to be one of the most highly tested theories of probability, ever. After all, people are verifying the correspondence between calculations -- grounded in this mathematics and with this probabilistic interpretation -- and physical predictions and observations out to some ten decimal places.

If this is the case, could this mathematical engine be a better workhorse for other probabilistic calculations? For example, could we use this mathematics to model trading? Could we interpret price, for example, as the probability that, when a trade is observed, we will see a certain distribution of goods and services across the agents of the marketplace? Naively, is there a workable model of the following form?

|t \rangle -- the information underlying a trader's ask
\langle s | -- the information underlying a trader's bid
M -- the observable associated with this trade
|\langle s | M | t \rangle|^2 -- the price of the observed trade = the probability that the goods and services connected in this trade will be seen in a certain distribution, given the information underlying the ask and the bid.
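Here is a toy numerical sketch of what the naive model would compute; the vectors, the observable, and all the numbers are invented purely for illustration.

    # A toy sketch of the proposed model; all numbers are invented.
    # |t> and <s| are state vectors over a small, made-up basis of
    # "market information" states; M is a Hermitian trade observable.
    import numpy as np

    t = np.array([0.6, 0.8j, 0.0])   # ket |t>: information behind the ask
    s = np.array([1.0, 0.0, 0.0])    # bra <s|: information behind the bid
    t = t / np.linalg.norm(t)        # normalize so the squared amplitude
    s = s / np.linalg.norm(s)        # can be read as a probability

    M = np.array([[1.0, 0.2, 0.0],   # a (real, symmetric, hence Hermitian)
                  [0.2, 0.5, 0.1],   # observable associated with the trade
                  [0.0, 0.1, 0.3]])

    amplitude = np.vdot(s, M @ t)    # <s| M |t>
    price = abs(amplitude) ** 2      # |<s| M |t>|^2: price = probability
    print(price)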

(i acknowledge that i passed out putting the kids to bed and allowed midnight of Oct 31st to pass by before making this post.)