Friday, 28 November 2014

Why We Need a Proof Assistant in Law and Finance.

Generalising the "Proof Assistant" for Understanding the Models of Law and Finance.

With oh so many subjects available to study & so many amalgamations going on at both the subject and institutional level, maybe we ought to think about what's "worth doing" and "why".  There is a trend at the higher end of mathematics to build "proof assistants" (see Voevodsky's videos at his website at the Institute for Advanced Study).  It is clear to him that building a communication tool to computers, so that proofs can be checked, WILL eventually be the way mathematics is taught in the general culture.  How else can the mathematical space be explored when proofs have become so complex that NOBODY can be certain they are true?  A machine-based (including a quantum machine-based) tool that does most of the mechanical "checking" quickly would help ensure that complex proofs are true and accurate.  Most complex proofs, after all, have no way of being checked to 100% accuracy by mere humans.

Apply this same idea to much more difficult subjects such as law and finance [Remember Von Neumann's comment? "Anyone who thinks mathematics is difficult, has not yet experienced real life."], and the concept of a "law and finance 'proof assistant'" becomes both theoretically and practically interesting.

Now, imagine the irony if building that sort of machine were really, really difficult.  Voevodsky states that the only computer language that can take on the formalities of his Homotopy Type Theory (HoTT) is Coq.  So everybody and his mother is now writing in Coq to get to a universal proof assistant.

But Gross, Chlipala & Spivak (http://adam.chlipala.net/papers/CategoryITP14/CategoryITP14.pdf) report that formalising even rather easy category theory in Coq is pretty hard!  So they've built some short-cuts to make the use of Coq less burdensome.

This brings me to my point: category theory, which was invented (see Eilenberg & Mac Lane 1945) so that we could compare complex theories in mathematics, could be made extremely useful for COMPARATIVE LAW and for a genuine understanding of how COMPLEX FINANCIAL INSTRUMENTS actually work.  In effect, the hinge is that legal systems are models and, in the extreme, are categories which can be compared using functors and which may turn out to be isomorphic (or at least equivalent).  The same can be said about financial instruments.  From a legal and financial perspective, the MOST COMPLICATED instrument in the financial universe is the MORTGAGE, because it has centuries of legal strata embedded within it (the legal-historical interpretations run back about 2,000 years), and its current re-interpretation via ASSET-BACKED SECURITIES REGULATIONS, as an underlying asset of pass-through or senior-subordinated note structures, has been further complexified by CREDIT RISK RETENTION REGULATIONS.  All of these legal-financial rules are complex models that need to be re-arranged into MODEL-TYPES that can be formally compared.  Otherwise, really, just as Voevodsky says about complex proofs, we have no chance at all of understanding how these instruments actually work in the real world.
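To make the functor idea concrete, here is a minimal sketch in Python (not Coq, obviously) of two toy "legal systems" treated as finite categories and a structure-preserving map between them.  Every name in it (the objects, the arrows, the English/US labels) is a hypothetical illustration of mine, not real doctrine:

```python
# A minimal sketch, assuming we model a legal system as a finite category:
# objects are legal positions, morphisms are rules/transactions between them.
# A "comparison" between two systems is a functor: a structure-preserving map.
# All names below are hypothetical illustrations, not real doctrine.

class FiniteCategory:
    def __init__(self, objects, arrows):
        # arrows: name -> (source object, target object)
        self.objects = set(objects)
        self.arrows = dict(arrows)

    def composable(self, f, g):
        # f then g, i.e. g o f
        return self.arrows[f][1] == self.arrows[g][0]

def is_functor(obj_map, arr_map, source, target):
    """Check that maps on objects and arrows preserve sources, targets
    and composability -- the bare minimum for a functor."""
    for name, (s, t) in source.arrows.items():
        fs, ft = target.arrows[arr_map[name]]
        if fs != obj_map[s] or ft != obj_map[t]:
            return False
    for f in source.arrows:
        for g in source.arrows:
            if source.composable(f, g) and not target.composable(arr_map[f], arr_map[g]):
                return False
    return True

# Two toy "legal systems" for a secured loan (hypothetical):
english = FiniteCategory(
    {"borrower", "lender", "charged_property"},
    {"grant_charge": ("borrower", "charged_property"),
     "enforce": ("charged_property", "lender")})

us = FiniteCategory(
    {"mortgagor", "mortgagee", "collateral"},
    {"grant_lien": ("mortgagor", "collateral"),
     "foreclose": ("collateral", "mortgagee")})

F_obj = {"borrower": "mortgagor", "lender": "mortgagee",
         "charged_property": "collateral"}
F_arr = {"grant_charge": "grant_lien", "enforce": "foreclose"}

print(is_functor(F_obj, F_arr, english, us))   # True: the structures line up
```

The same check failing would be just as informative: it would tell you precisely which rule in one system has no structural counterpart in the other.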

For a way to get started in this approach, I recommend reading David Spivak's (2013/2014) Category Theory for Scientists.  There's an old version freely available on the web, and the MIT Press edition is also quite convenient.

Wednesday, 12 November 2014

S&P 500 in 392 Weeks: Scale Invariance Test Coming


In one of Mandelbrot's original works on "scale invariance," he studied the cotton markets, with price records reaching back into the late 19th century, and found that the shape of the price-versus-time graph was similar no matter whether you measured the average price per week or per month.  A discovery of any kind of invariance is an important fact about the way the world works.  No matter how quirky the invariance is, any theory worth being called a theory needs to explain the invariance's existence.

The above graph is not scale invariant.  But it might be multi-scale invariant.  Much depends on what will happen in the next week or so.
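For readers who want to run the test themselves, here is a minimal sketch in Python of the kind of check I have in mind: aggregate the same series into weekly and monthly returns, rescale them, and compare the two shapes.  The synthetic data and the Kolmogorov-Smirnov comparison are my own assumptions, standing in for the real S&P 500 series:

```python
# A minimal sketch of a scale-invariance check.  A synthetic heavy-tailed
# price path stands in for the real index (drop the S&P 500 series in here).
# If returns are scale invariant, weekly and monthly returns, once rescaled
# to unit variance, should look like draws from the same distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
log_price = np.cumsum(rng.standard_t(df=3, size=392 * 5) * 0.01)  # ~392 weeks of daily steps

def aggregated_returns(log_price, step):
    """Non-overlapping returns over `step` observations, rescaled to unit variance."""
    r = log_price[step::step] - log_price[:-step:step]
    return (r - r.mean()) / r.std()

weekly = aggregated_returns(log_price, 5)    # ~weekly
monthly = aggregated_returns(log_price, 21)  # ~monthly

res = ks_2samp(weekly, monthly)
print(f"KS statistic {res.statistic:.3f}, p-value {res.pvalue:.3f}")
# A large p-value is consistent with (not proof of) scale invariance;
# a tiny one says the weekly and monthly shapes genuinely differ.
```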

Towards a Homotopy Type Theory for Law and Finance

1.  Imagine a homotopy diagram for law and finance involving contracts, torts and criminal law, as well as the media, culture, justice and fairness.  The universe of discourse is represented by an oval that looks like the cosmic background radiation map (LOL), and it is divided in half so that we have a starting frame (ideal initial conditions): one part on the left, which is an unjust and unfair society, and another part on the right, which is a just and fair society.  Criminal litigation is a partition that moves from right to left, with the ideal as the central axis.  Thus, societies can maximize or minimize the unjust-unfair part in relation to the just-fair part.  Each successful prosecution deforms the two parts, such that a just-fair prosecution in the unjust-unfair part tends to decrease the unjust-unfair part and increase the just-fair part.  The old way of talking about the connection between the two parts is to call it a "fibration" between "manifolds"--but those are the physicists and maths whizzos who don't have a handle on the niceties of social theories.  Here, the fibrations are just functional connections between the two parts, and it turns out that everything that could ever really happen between the two parts is embedded in those fibrations.  In Homotopy Type Theory, the fibrations are the essence of the "covering space" between the two parts.  We can start to work out certain kinds of equivalences.

2.  Now, assume criminal prosecutions are "transport functions" between the two parts of the oval.

3.  Bizarrely (and this is a big guess), very dense litigation and all forms of risk of loss (default in the widest possible sense) are functors and act as covering spaces between the two parts.


4.  Implication:  you don't need to know the substance of each criminal prosecution; the mere fact that one is being carried out is what deforms the two parts towards or away from the ideal state of society (a toy sketch of this follows the list below).

5.  Please note that the term ‘ideal state’ here does not mean Plato’s ideal good state; it means a perfectly continuous geometric construct of the intuition that does not require anything at all except a few arrows and some ovals.
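A toy numerical sketch of item 4, in Python, under assumptions that are entirely mine: the whole oval collapses to a single number p (the share of the just-fair part), and every prosecution is a transport function that nudges p toward the ideal regardless of its content:

```python
# A toy sketch of the deformation in item 4, under my own assumptions:
# the "oval" is collapsed to one number p in [0, 1], the share of the
# just-fair part, and each prosecution is a transport function that moves p
# a fixed fraction of the remaining distance toward the ideal (p = 1),
# regardless of the prosecution's substance.
def prosecute(p, pull=0.02):
    """One prosecution: transport p toward the just-fair ideal."""
    return p + pull * (1.0 - p)

p = 0.30                      # a mostly unjust-unfair starting society
for n in range(1, 101):
    p = prosecute(p)
    if n % 25 == 0:
        print(f"after {n:3d} prosecutions, just-fair share = {p:.3f}")
# The point of item 4: only the count of prosecutions enters the model,
# never their content -- that is what "you don't need to know the substance" means here.
```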

Sunday, 9 November 2014

Fault Tree Analysis is the Teleology that Ontology Needs; Dr. Kent Stephens' Classic Paper

Gosh!  Here's one of the great papers of the 20th century that very few people have even heard of.  I'm serious.  I think this paper ranks higher than Akerlof's information-asymmetry paper on the market for "lemons", and just a tad below Claude Shannon's work on information theory.

http://files.eric.ed.gov/fulltext/ED095588.pdf

This is Kent G. Stephens' paper on "Fault Tree Analysis."

Once I gave a two-day seminar in London to a delegation of Russian academics from Moscow State University, from the department of engineering and organisations.  The first day was a total disaster because they said they wanted something "on practical project management".  So, that evening I produced some slides about "and-logic" and "or-logic" and combined them with a flow diagram on "critical project analysis and implementation."  I said, "This work comes largely from Dr. Kent Stephens."  And before I could finish my sentence, the Head of the Department, a very sharp-tongued professor, said, "Yes, we know all his work in our department, and we can see that you put much effort OVERNIGHT to bring to us today your original thoughts.  Thank you."  And I was dismissed!  The point of this story is that this was the only time in over 20 years of using Dr Kent G. Stephens' ideas that anyone had ever said they knew him and his ideas.

The reason I think this paper is one of the most important papers of the 20th century is that it is the first and only paper I know of that successfully combines cultural value analysis with figuring out how organisations FAIL!  Back in the day, Dr. Stephens had assistants with questionnaires ask individuals in an organisation particular sorts of "valued questions" to determine what we now call the "critical path" within an organisation.  He'd figure out the critical path of communications and where most of the errors occurred that jammed up the organisation.  If Aristotle were alive, he'd be very proud of Dr. Stephens' work, because it's been used to fix a lot of otherwise "failing institutions".  And unlike the BS consulting you see 99.99% of the time, the good doctor and his team would come up with fantastically elegant solutions.  E.g., he took a failing elementary-to-high school that was in the bottom 5 in California to the top 10 in one year!
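For anyone who has never seen the mechanics, here is a minimal sketch in Python of the "and-logic"/"or-logic" core of fault tree analysis.  The failure events and probabilities are invented for illustration and have nothing to do with Dr. Stephens' actual case studies:

```python
# A minimal sketch of the and/or core of fault tree analysis.
# The event names and probabilities are invented for illustration;
# they are not from Dr. Stephens' studies.  Assumes independent basic events.
def and_gate(*ps):
    """All inputs must fail for the gate output to fail."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(*ps):
    """Any one input failing makes the gate output fail."""
    ok = 1.0
    for p in ps:
        ok *= (1.0 - p)
    return 1.0 - ok

# Hypothetical top event: "project report never reaches the decision-maker".
p_no_data      = 0.10   # field staff never collect the data
p_not_written  = 0.05   # data collected but the report is never written up
p_lost_in_post = 0.02   # written report is mislaid
p_backup_fails = 0.30   # informal back-channel also fails

# The report is blocked if any single step in the chain fails (OR),
# but the top event only occurs if the back-channel fails as well (AND).
p_chain_blocked = or_gate(p_no_data, p_not_written, p_lost_in_post)
p_top = and_gate(p_chain_blocked, p_backup_fails)
print(f"P(top event) = {p_top:.4f}")   # ~0.0486
```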

His paper is important to keep in mind if one is embarking on building an "ontology of an organisation".  Too many times, I see ontologies being built without a fundamental understanding of the TELEOLOGY of the human actors.  A complete ontology needs to understand teleology deeply, and I think Dr. Stephens takes us a long way in this regard.

GDP-Derivatives: A Global Risk Management Tool; What do we mean by "invariance up to isomorphism"?

Economic statistics are compiled and written by bureaucrats who get fired only if they show they haven't been doing any work, so is it any surprise that their figures should be revised?

http://www.nytimes.com/2014/11/07/business/economy/doubting-the-economic-data-consider-the-source.html?partner=rss&emc=rss&_r=2

One of the problems in financial engineering is getting a set of figures that the world can agree on.  This is what I call the problem of finding the invariance.  One way to think about Category Theory is that it's all about finding invariance at the level of isomorphism, or more forcefully, about finding what is uniquely true and accurate because it is indicated by gestural arrows that point directly at it.  At a visceral level, notice how when we point at something and say "it's right there", understanding is not merely the pointing but an extension of the pointing as part of the activities of the world.  Category Theory at the level of functoriality tells us "it's all about the pointing", so the object itself is not at all necessary; or, to put it more forcefully, the object is completely defined by the infinite number of pointings that we have at that object, so the substantiality of the object disappears!  We don't need the object at all, because now we know it completely in its infinite possibilities of being.  This sounds very abstract (and it is), but in everyday life we do this "gestural understanding" all the time, whenever we eat, sleep, converse, enjoy a drink... all of these "things" are invariances at the level of isomorphism.  But the problem of government statistics...

is that they get revised and so we have tremendously long time-lags in response to "certainties of announcements" that affect our buying and selling decisions.

In 1999, I worked with an ex-Merrill Lynch derivatives trader to create a "GDP-derivative", which basically would allow you to take bets on the GDP of any nation in the world.  Of all the derivatives that could help humanity manage its "spaceship resources", I thought a GDP derivative would be the best.  It would mean essentially that a globally active company (or any other legal entity, including a state) could manage its risk.  So, if say you wanted to hedge Brazilian GDP risk, you could.  The conceptual design for this product was pretty EZ.  All you have to do is think "swap", i.e., the cash flows of a buyer and seller in relation to the data regarding the GDP figure.  For 'proving out' the instrument, we just made a table of natural buyers versus sellers, listed the sectors underneath each heading, and thought through which companies would be "natural buyers and sellers" given different scenarios of "expected GDP".  Anyone doing a master's-level course in quantitative finance should be able to knock up this model in a leisurely afternoon.
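Here is a minimal sketch of that "think swap" point in Python, assuming an annual cash settlement on a notional in which the buyer of GDP exposure receives the announced growth rate and pays a fixed strike; every number is invented for illustration:

```python
# A minimal sketch of the "think swap" idea for a GDP-derivative.
# Assumptions are mine: annual cash settlement on a notional, the buyer of
# GDP exposure receives (announced growth - fixed strike), and all the
# numbers below are invented for illustration.
def gdp_swap_settlement(notional, strike, announced_growth):
    """Cash flow to the buyer of GDP exposure at one settlement date."""
    return notional * (announced_growth - strike)

notional = 100_000_000        # 100m notional
strike = 0.025                # fixed leg: 2.5% expected GDP growth

first_release = 0.031         # growth as first announced
revised = 0.022               # the same year's growth after statistical revisions

print(gdp_swap_settlement(notional, strike, first_release))  # +600,000 to the buyer
print(gdp_swap_settlement(notional, strike, revised))        # -300,000 to the buyer
# The sign of the payment flips between the first release and the revision --
# which is exactly the "revisions" problem described in the next paragraph.
```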

Anyway, the problem we had was the "revisions" on GDP data.  Since these numbers came out 6 to 18 months after the first announcement, it became difficult to "match up" reality.   In the language I use today, I'd say, "We couldn't get a simple isomorphism and therefore, no invariance."  Without an invariance (an agreement on the GDP-figures), our model would not work.  Of course, that was back then, before we had Google data.  Now, I'm pretty sure we could crunch up our own GDP-index in order to create the GDP-derivative.  Then it's a matter of selling and marketing...     

Friday, 7 November 2014

The End of Education Monopolies, Long Live Personal Learning Assistants


1.  Suppose all information about every subject is at your fingertips.  Why would you bother to go to university, or indeed high school, except to learn the social rituals, find mating partners, travel together and get a good job?  Human-to-human interfaces are good for some things, but for certainty of information the human-to-machine interface is about 20x better.  See Dr. Kent Stephens' studies on the ICBM in the 1950s.

2.  In the year 2000, I was in Pasadena at a certain prestigious university whose name shall remain unnamed, speaking to the Dean, who said that their university had just received $35 million from a billionaire-developer-alumna to build a beautiful new hi-tech law school building.  The building was about 25 to 30 stories; they had their own television channel, all lecture halls had at least two digital video cameras, and the data from the lectures were sent to a Media Centre, where media workers developed the content into broadcasts.  The Dean said to me, "What will we (law school bricks and mortar) do when we put all these courses onto two disks?"  I said, "Isn't one more than enough?"

3.  The disintermediation of universities hasn't occurred YET because no one has figured out how to put a university within your own simple point-and-click powers.  There's too much content on the Web, and it's not at all clear what, if anything, beyond the certainty of exchange approximates reality.

4.  I propose the Personal Learning Assistant.  The PLA grows with you and mirrors your hobbyist interests (non-profit and at the cost of consumption) and your professional specialised knowledge (for profit and chargeable at globally competitive rates).  The PLA is not you, but it's pretty close to being your Web-clone.  It can apply for jobs, do jobs, and even multiply in terms of identifiable profiles on the Web.  BTW, the PLA does not simply exist in digital code; it will have a significant impact on your physical being and on others.  For example, who will find the best heart specialist just in time, get the appropriate therapeutic apps to manage your incipient diabetes, run every sort of psycho-chemical test, and make sure you are aware of the survival odds at the next traffic junction, other than your PLA?  Who's your Guardian Angel and Protector?  If you want to go on "auto-PLA", then you can turn the control down to subliminal 0.005-second input-outputs.  What's the point of a university education if you can have this much fun at 10 magnitudes above and below the normal medium of perception?  I guess this is what web-based education has to offer: open the doors of perception and get a universal education.  BTW, building your own PLA immediately answers questions about long-term social welfare.

Thursday, 6 November 2014

Ebola's Decay in Contagion



My toy model of Ebola mortality, in which the death count doubles every 20 days from an arbitrary start date of 11 January 2014, predicted 4,096 deaths for the 9th of October 2014, and the official WHO statistic was 4,033 for the 10th of October 2014.  Assuming the same rate of doubling, the toy model predicted 8,192 deaths for the 29th of October 2014.  The official figure hovered around 5,000 deaths as of the 1st of November 2014.  Thus, the toy model is dead.  If the official figures are accurate, then the doubling time has stretched to around 40 days, that is, about 100 deaths per day.  While sad in an absolute sense, this figure is very good news globally because it shows that there is a decay in contagion.
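For the record, the toy model is nothing more than pure doubling.  A minimal sketch in Python, anchored (my choice) to the post's own checkpoint of 4,096 predicted deaths on the 9th of October rather than to the January start date, since a doubling curve is fixed by its doubling time plus any one point:

```python
# A minimal sketch of the toy model: pure doubling every 20 days.
# Anchored to the post's own checkpoint (4,096 predicted deaths on 9 Oct 2014);
# a pure doubling curve is determined by its doubling time plus any one point.
from datetime import date

DOUBLING_DAYS = 20
ANCHOR_DATE, ANCHOR_DEATHS = date(2014, 10, 9), 4096

def toy_deaths(on):
    """Cumulative deaths predicted by doubling every 20 days through the anchor."""
    steps = (on - ANCHOR_DATE).days / DOUBLING_DAYS
    return ANCHOR_DEATHS * 2 ** steps

print(toy_deaths(date(2014, 10, 29)))   # 8192.0, the 29 October prediction
print(toy_deaths(date(2014, 11, 1)))    # ~9,100 -- versus ~5,000 actually reported,
                                        # which is why the toy model is declared dead
```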

The first chart above shows some correlation between the news effect of Ebola and the VIX (a volatility or "fear" indicator).

The second chart shows the history of potential pandemics since the 1950s.

So far humanity appears to be dodging the bullet, and technological and policy responses, no matter how awkward at first, appear to be protective enough for the species.