Friday 28 November 2014

Why We Need a Proof Assistant in Law and Finance.

Generalising the "Proof Assistant" for Understanding the Models of Law and Finance.

With oh so many subjects available to study & so many amalgamations going on at both the subject and institutional level, maybe we ought to think about what's "worth doing" and "why".  There is a trend at the higher end of mathematics to build "proof assistants" (see Voevodsky's videos on his website at the Institute for Advanced Study).  It's clear to him that building a communication tool to computers so that proofs can be checked WILL eventually be the way mathematics is taught in the general culture.  This is all but determined, because how else can the mathematical space be explored when proofs are so complex that NOBODY can be certain they are true?  A machine-based (including a quantum machine-based) tool that does most of the mechanical "checking" quickly would help ensure that complex proofs are true and accurate.  Most complex proofs, after all, have no way of being checked to 100% accuracy by mere humans.

Apply this same idea to much more difficult subjects such as law and finance [Remember Von Neumann's comment? "Anyone who thinks mathematics is difficult, has not yet experienced real life."], and the concept of a "law and finance 'proof assistant'" becomes both theoretically and practically interesting.

Now, imagine the irony if building that sort of machine were really, really difficult.  Voevodsky states that the only computer language that can take on the formalities of his Homotopy Type Theory (HoTT) is Coq.  So everybody and his mother is now writing in Coq to get to a universal proof assistant.
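For flavour, here is the sort of thing a proof assistant checks; a minimal sketch in Lean 4 rather than Coq (`Nat.add_comm` is the name of a lemma in Lean's standard library):

```lean
-- A proof assistant checks every inference mechanically: here the
-- commutativity of addition on the naturals is stated and then
-- discharged by a standard-library lemma.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the term after `:=` did not actually prove the stated theorem, the machine would reject it; no human refereeing required.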

But Gross, Chlipala & Spivak (http://adam.chlipala.net/papers/CategoryITP14/CategoryITP14.pdf) say that doing rather easy category theory in Coq is pretty hard!  So they've written some short-cuts to make the use of Coq less burdensome.

This brings me to my point: category theory, which was invented (see Eilenberg & Mac Lane 1945) so that we could compare complex theories in mathematics, could be made extremely useful for COMPARATIVE LAW and for a genuine understanding of how COMPLEX FINANCIAL INSTRUMENTS actually work.  In effect, the hinge is that legal systems are models and, in the extreme, are isomorphic categories which can be compared using functors.  The same can be said about financial instruments.  From a legal and financial perspective, the MOST COMPLICATED instrument in the financial universe is the MORTGAGE, because it has centuries of legal strata embedded within it (the legal-historical interpretations span about 2,000 years), and its current re-interpretation under ASSET-BACKED SECURITIES REGULATIONS, as an underlying asset of pass-through or senior-subordinated note structures, has been further complexified by CREDIT RISK RETENTION REGULATIONS.  All of these legal-financial rules are complex models that need to be re-arranged into MODEL-TYPES that can be formally compared.  Otherwise, really, just as Voevodsky says about complex proofs, we have no chance at all of understanding how these instruments actually work in the real world.

For a way to get started in this approach, I recommend reading David Spivak's (2013/2014) Category Theory for Scientists.  There's an old version freely available on the web, and the MIT edition is also quite convenient.

Wednesday 12 November 2014

S&P 500 in 392 Weeks: Scale Invariance Test Coming


In one of Mandelbrot's original works on "scale invariance," he studied the cotton markets around the mid- to late 19th century and found that the shape of the price-versus-time graphs was similar no matter whether you measured the average price per week or per month.  A discovery of any kind of invariance is an important fact about the way the world works.  No matter how quirky the invariance is, any theory worth being called a theory needs to explain the invariance's existence.
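The rescaling idea behind that comparison can be sketched on simulated data (a toy Gaussian random walk, not Mandelbrot's cotton series; the "weekly" and "monthly" aggregation factors are hypothetical):

```python
import random
import statistics

random.seed(0)

# Toy check of scale invariance: for a Gaussian random walk, k-step
# returns rescaled by sqrt(k) have the same spread as 1-step returns.
steps = [random.gauss(0, 1) for _ in range(100_000)]

def aggregated_std(returns, k):
    """Std of non-overlapping k-step sums, rescaled by sqrt(k)."""
    sums = [sum(returns[i:i + k]) for i in range(0, len(returns) - k + 1, k)]
    return statistics.stdev(sums) / k ** 0.5

weekly = aggregated_std(steps, 5)    # a "weekly" scale, 5 steps
monthly = aggregated_std(steps, 20)  # a "monthly" scale, 20 steps
print(round(weekly, 2), round(monthly, 2))  # both close to 1.0
```

For a genuinely scale-invariant process the rescaled spreads agree across aggregation levels; when they do not, the scaling exponent itself becomes the interesting object.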

The above graph is not scale invariant.  But it might be multi-scale invariant.  Much depends on what will happen in the next week or so.

Towards a Homotopy Type Theory for Law and Finance

1.  Imagine a homotopy diagram for law and finance involving contracts, torts and criminal law, as well as the media, culture, justice and fairness.  The universe of discourse is represented by an oval that looks like the cosmic background radiation map (LOL), and it is divided in half so that we have a starting frame (ideal initial conditions) between one part on the left, which is an unjust and unfair society, and another part on the right, which is a just and fair society.  Criminal litigation is a partition that moves from right to left with the ideal as the central axis.  Thus, societies can maximize or minimize the unjust-unfair part in relation to the just-fair part.  Each successful prosecution deforms the two parts, such that a just-fair prosecution in the unjust-unfair part tends to decrease the unjust-unfair part and increase the just-fair part.  The old way of talking about the connection between the two parts is to call it a "fibration" between "manifolds"--but that is the physicists and maths whizzos who don't have a handle on the niceties of social theories.  Now, the fibrations are just functional connections between the two parts, and it turns out that everything that could ever really happen between the two parts is embedded in the fibration.  In Homotopy Type Theory, the fibrations are the essence of the "covering space" between the two parts.  We can start to work out certain kinds of equivalences.

2.  Now, assume criminal prosecutions are "transport functions" between the two parts of the oval.

3.  Bizarrely, (and this is a big guess) very dense litigation and all forms of risk of loss (default in the widest possible sense) are functors and act as covering spaces between the two parts.


4.  Implication: you don't need to know the substance of each criminal prosecution; the mere fact that it is being done deforms the two parts towards or away from the ideal state of society.

5.  Please note that the term ‘ideal state’ here does not mean Plato’s ideal good state; it means a perfectly continuous geometric construct of the intuition that does not require anything at all except a few arrows and some ovals.

Sunday 9 November 2014

Fault Tree Analysis is the Teleology that Ontology Needs; Dr. Kent Stephens' Classic Paper

Gosh!  Here's one of the great papers of the 20th century that very few people have even heard of.  I'm serious.  I think this paper ranks higher than Akerlof's information-asymmetry paper on the market for "lemons", and just a tad below Claude Shannon's founding work on information theory.

http://files.eric.ed.gov/fulltext/ED095588.pdf

This is Kent G. Stephens' paper on "Fault Tree Analysis."

Once I gave a two-day seminar in London to a delegation of Russian academics from Moscow State University's department of engineering and organisations.  The first day was a total disaster because they said they wanted something "on practical project management".  So, that evening I produced some slides about "and-logic" and "or-logic" and combined them with a flow diagram on "critical project analysis and implementation."  I said, "This work comes largely from Dr. Kent Stephens."  And before I could finish my sentence, the Head of the Department, a very sharp-tongued professor, said, "Yes, we know all his work in our department, and we can see that you put much effort OVERNIGHT to bring us today your original thoughts.  Thank you."  And I was dismissed!  The point of this story is that this was the only time in over 20 years of using Dr. Kent G. Stephens' ideas that anyone had ever said they knew him and his ideas.

The reason I think this paper is one of the most important of the 20th century is that it is the first and only paper I know of that successfully combines cultural value analysis with figuring out how organisations FAIL!  Back in the day, Dr. Stephens had assistants with questionnaires ask individuals in an organisation particular sorts of "valued questions" to determine what we now call the "critical path" within an organisation.  He'd figure out the critical path of communications and where most of the errors occurred that jammed up the organisation.  If Aristotle were alive, he'd be very proud of Dr. Stephens' work, because it's been used to fix a lot of otherwise "failing institutions".  And unlike the BS consulting you see 99.99% of the time, the good doctor and his team would come up with fantastically elegant solutions.  E.g., he took a failing elementary-to-high school from the bottom 5 in California to the top 10 in one year!
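The "and-logic" and "or-logic" at the heart of fault tree analysis are easy to sketch (a hypothetical toy tree, not Dr. Stephens' actual questionnaire method): an AND gate fires only if all of its inputs fail, an OR gate if any input fails.

```python
# Toy fault tree: a top event occurs through AND/OR gates over
# basic events (all event names here are made up for illustration).
def or_gate(*inputs):
    """OR gate: the event occurs if ANY input occurs."""
    return any(inputs)

def and_gate(*inputs):
    """AND gate: the event occurs only if ALL inputs occur."""
    return all(inputs)

# Basic events in a toy "organisation" tree.
memo_lost = True
meeting_skipped = False
staff_overloaded = True

# Communication jams if the memo is lost OR the meeting is skipped;
# the organisation fails if communication jams AND staff are overloaded.
communication_jam = or_gate(memo_lost, meeting_skipped)
organisation_fails = and_gate(communication_jam, staff_overloaded)
print(organisation_fails)  # True
```

Tracing which basic events actually drive the top event is, in miniature, the "critical path" hunt the questionnaires were after.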

His paper is important to keep in mind if one is embarking on building an "ontology of an organisation".  Too many times, I see ontologies being built without a fundamental understanding of the TELEOLOGY of the human actors.  A complete ontology needs to understand teleology deeply, and I think Dr. Stephens takes us a long way in this regard.

GDP-Derivatives: A Global Risk Management Tool; What do we mean by "invariance up to isomorphism"

Economic statistics are compiled and written by bureaucrats who get fired only if they show they haven't been doing any work, so is it any surprise that their figures should be revised?

http://www.nytimes.com/2014/11/07/business/economy/doubting-the-economic-data-consider-the-source.html?partner=rss&emc=rss&_r=2

One of the problems in financial engineering is getting a set of figures that the world can agree on.  This is what I call the problem of finding the invariance.  One way to think about Category Theory is that it's all about finding invariance at the level of isomorphism, or more forcefully, about finding what is uniquely true and accurate because it is indicated by gestural arrows that point directly at it.  At a visceral level, notice how when we point at something saying "it's right there", understanding is not merely the pointing but an extension of the pointing as part of the activities of the world.  Category Theory at the level of functoriality tells us "it's all about the pointing", so the object itself is not at all necessary, or again, to put it more forcefully, the object is completely defined by the infinite number of pointings that we have of that object, so the substantiality of the object disappears!  We don't need the object at all, because now we know it completely in its infinite possibilities of being.  This sounds very abstract (and it is), but in everyday life we do this "gestural understanding" all the time, whenever we eat, sleep, converse, enjoy a drink...all of these "things" are invariances at the level of isomorphism.  But the problem of government statistics...

is that they get revised and so we have tremendously long time-lags in response to "certainties of announcements" that affect our buying and selling decisions.

In 1999, I worked with an ex-Merrill Lynch derivatives trader to create a "GDP-derivative" which would basically allow you to take bets on the GDP of any nation in the world.  Of all the derivatives that could help humanity manage its "spaceship resources", I thought a GDP derivative would be the best.  It would mean essentially that a globally active company (or any other legal entity, including a state) could manage its risk.  So, if say you wanted to hedge Brazilian GDP risk, you could.  The conceptual design for this product was pretty EZ.  All you have to do is think "swap", i.e., the cash flows of a buyer and seller in relation to the data regarding the GDP figure.  For 'proving out' the instrument, we just made a table of natural buyers versus sellers, listed the sectors underneath each heading, and thought through which companies would be "natural buyers and sellers" given different scenarios of "expected GDP".  Anyone doing a masters-level course on quantitative finance should be able to knock up this model in a leisurely afternoon.
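A minimal sketch of that swap idea (all names and figures below are hypothetical, not the 1999 product): one side pays a fixed GDP-growth rate, the other pays the realised GDP-growth figure, both on a notional.

```python
# One settlement of a toy GDP swap: the buyer pays a fixed
# GDP-growth rate and receives the realised GDP-growth figure,
# both applied to a notional.  All figures are hypothetical.
def gdp_swap_payoff(notional, fixed_growth, realised_growth):
    """Net cash flow to the buyer of realised GDP growth."""
    return notional * (realised_growth - fixed_growth)

notional = 100_000_000   # $100m notional
fixed_growth = 0.025     # buyer locks in 2.5% expected GDP growth
realised_growth = 0.031  # statistics office later reports 3.1%

# The buyer nets the difference; a company hedging Brazilian GDP
# exposure would simply take the other side.
print(round(gdp_swap_payoff(notional, fixed_growth, realised_growth)))  # 600000
```

Note that the whole settlement hangs on `realised_growth` being a number everyone accepts, which is exactly where the revisions problem below bites.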

Anyway, the problem we had was the "revisions" on GDP data.  Since these numbers came out 6 to 18 months after the first announcement, it became difficult to "match up" reality.   In the language I use today, I'd say, "We couldn't get a simple isomorphism and therefore, no invariance."  Without an invariance (an agreement on the GDP-figures), our model would not work.  Of course, that was back then, before we had Google data.  Now, I'm pretty sure we could crunch up our own GDP-index in order to create the GDP-derivative.  Then it's a matter of selling and marketing...     

Friday 7 November 2014

The End of Education Monopolies, Long Live Personal Learning Assistants


1.  Suppose all information about every subject is at your fingertips.  Why would you bother to go to University, or indeed high school, except to learn the social rituals, find mating partners, travel together and get a good job?  Human-to-human interfaces are good for some things, but for certainty of info the human-to-machine interface is about 20x better.  See Dr. Kent Stephens' studies on the ICBM in the 1950s.

2.  In the year 2000, I was in Pasadena at a certain prestigious University which will remain unnamed, speaking to the Dean, who said that their Uni had just received $35 million from a billionaire-developer-alumna to build a beautiful new hi-tech law school building.  The building was about 25 to 30 stories, they had their own television channel, and all lecture halls had at least two digital video cameras whose data were sent to a Media Centre, where media workers developed the lectures into broadcast content.  The Dean said to me, "What will we (law school bricks and mortar) do when we put all these courses onto two disks?"  I said, "Isn't one more than enough?"

3.  The disintermediation of universities hasn't occurred YET because no one has yet figured out how to make a University within your own simple point-and-click powers.  There's too much content on the Web, and it's not at all clear what, if anything but the certainty of exchange, approximates reality.

4.  I propose the Personal Learning Assistant.  The PLA grows with you and mirrors your hobbyist interests (non-profit and at cost of consumption) and your professional specialised knowledge (for profit and chargeable at globally competitive rates).  The PLA is not you, but it's pretty close to being your Web-clone.  It can apply for jobs, do jobs, and even multiply in terms of identifiable profiles on the Web.  BTW, the PLA does not simply exist in digital code, but will have significant impact on your physical being and on others.  For example, who but your PLA will find the best heart specialist just in time, get the appropriate therapeutic apps to manage your incipient diabetes, run every sort of psycho-chemical test and ensure that you are aware of the survival odds at the next traffic junction?  Who's your Guardian Angel and Protector?  If you want to go on "auto-PLA" then you can turn down the control to subliminal 0.005-second input-outputs.  What's the point of a university education if you can have this much fun at 10 magnitudes above and below the normal medium of perception?  I guess this is what web-based education has to offer.  Open the doors of perception and get a universal education.  BTW, building your own PLA immediately answers questions about long-term social welfare.

Thursday 6 November 2014

Ebola's Decay in Contagion



My toy model of Ebola mortality, in which deaths double every 20 days from an arbitrary start date of the 11th of January 2014, predicted 4,096 deaths for the 9th of October 2014, and the official WHO statistic was 4,033 for the 10th of October 2014.  Assuming the same rate of doubling, the toy model predicted 8,192 deaths for the 29th of October 2014.  The official figure hovered around 5,000 deaths for the 1st of November 2014.  Thus, the toy model is dead.  If the official figures are accurate, then the doubling time has increased to around 40 days, that is, about 100 deaths per day.  While sad in an absolute sense, this figure is very good news globally because it shows that there is a decay in contagion.
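The toy model is plain compounded doubling, so its failure fits in a few lines (dates and counts as in the post):

```python
from datetime import date, timedelta

# Toy model: 4,096 deaths on 9 October 2014, doubling every 20 days,
# predicts 8,192 deaths exactly 20 days later.
doubling_days = 20
deaths = 4096
on = date(2014, 10, 9)

on += timedelta(days=doubling_days)
deaths *= 2
print(on, deaths)  # 2014-10-29 8192
```

The observed figure of roughly 5,000 on the 1st of November fell far short of 8,192, which is what kills the 20-day-doubling model.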

The first chart above shows some correlation to the news effect of Ebola on VIX (a volatility or "fear" indicator).

The second chart shows the history of potential pandemics from the 1950s onward.

So far humanity appears to be dodging the bullet, and technological and policy responses, no matter how awkward at first, appear to be protective enough for the species.

Sunday 26 October 2014

On the Laws of Immortality (of the Virtual Sort)


I'm wondering whether we need an educational course that anticipates a "religious-technological future that presumes virtual immortality."  See the reference to Tipler's The Physics of Immortality below.  I have no idea what to call this course.  The course would be built on anticipating fundamental technological breakthroughs aimed at the ultimate teleology or Omega Point, and would try to figure out their consequences in terms of the law.  This isn't about patent law or innovation--the best course for that is at the Law School at Stanford University.  This is a course that would bring back Aristotle's cause of teleology so that Ethics and Politics could be discussed in relation to the causes of Form (mathematical and visual technologies), the Efficient (the material implementation of such breakthroughs from the micro to the macro) and the Substantive (the uniqueness of the breakthrough).  Permit me an example.

Yesterday, I was speaking with a young friend who sells computers by day and plans film-making at night.  His next project is a 15-minute short on "How to Become a Roman Emperor."  He tells me that a breakthrough has occurred that will require ALL ELECTRONIC equipment to be redone.  The fundamental breakthrough will enable a mobile phone to carry 1,000 terabytes of memory.  And it will mean that a lot of programming to make things "compact" will become unnecessary.  Maybe we won't need any programming language at all, and maybe a lot of processes we take for granted to check, validate and verify our electronic memories will become obsolete.

Now, what kind of universe of legal discourse would that innovation imply?

This technology will be out within 4 or 5 years.

Reference:  Tipler, F.J.  (1994) The Physics of Immortality.  All the fundamental breakthroughs, according to Tipler (who was a top-rated physicist until he published this CRAZY book that makes him look looney--but I think it's just a great example of taking a rather simple idea very seriously), will be about achieving an Omega Point (Teilhard de Chardin's idea from back in the 1930s) at which everything in life will be resurrected.  Now, resurrection in a physics sense is a White Hole where an infinitude of memories can be played back.  Tipler describes the evolution of the universe from black hole singularity to white hole resurrection.  To get to a white hole, all the energy of the universe will be used to store all events.  It's a CRAZY idea because it is an ultimate teleology, and modern science does not like teleology at all.

Friday 24 October 2014

Mapping Ebola Event Risk by Postulating a Financial Markets Ontology


1.  As we said last time, correlation is not causation.  This is an important distinction in science.  If we wish to do science in law and finance, then using a Category Theory approach, we should translate our statements made of legal and financial terms into SCIENTIFIC PROPOSITIONS that adhere to at least a FIRST ORDER LOGIC.  Translated into this first order logic, we can check our premises and inspect our deductions and inferences for their weaknesses and soundness.  If you would like to know what scientific language would look like, you can turn to Spivak's Category Theory for Scientists (old version, 2013-14), which is freely available on the Web.  In brief, theories and models within theories are written in propositional form using "arrow-language", thus f:A-->B, where f designates the name of the arrow between A and B.  If you trace the work of Spivak, who is at MIT, you'll note that he wrote a paper together with Robert E. Kent on "Ologs", which are basically "ontological statements" about real processes.  Robert E. Kent (whose affiliation to any academic institution appears non-existent) has written on the INSTITUTIONAL APPROACH, which is a version of Category Theory translated for use among people who work and play in the area of INSTITUTIONAL THEORY.  As far as I can tell with my almost total ignorance of the field, the first successful transplantation of a fundamental device from Category Theory into Institutional Theory was by DiMaggio & Powell in their extremely well-cited paper (over 27,000 citations so far) "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields" (1983), see JSTOR.  They used the concept of isomorphism, which in Category Theory means a pair of morphisms f:A->B and g:B->A whose composites are the identities on A and B, so that the morphisms f and g, which represent "processes", link the two objects A and B in a manner such that the objects are isomorphic, that is, A and B are unique up to isomorphism.
Even if you don't quite understand what this "definition" means, you should understand that isomorphism is the way Category Theorists think about "equivalence."  It is all process driven.

2.  Now, Robert E. Kent's work on the Institutional Approach is part of a blossoming field called ONTOLOGIES.  An ontology is simply a model of real processes involving people and machines, especially computational devices that can be linked.  The link between Category Theory and Ontology goes back to a prescient genius named Goguen, who was at the University of California, San Diego.  His paper "A Categorical Manifesto" (1991) is a must-read, as is the paper co-authored with Burstall, "Institutions: Abstract Model Theory for Specification and Programming" (1992).  That paper lays out a fantastic theory of institutions rigged according to category-theoretic formalities.  Now, how do these theories relate to the Ebola-versus-financial charts above?

3.  Well, the point is that it is genuinely difficult to understand what the above charts MEAN unless we have some kind of FINANCIAL MARKET ONTOLOGY buzzing away in the back of our minds.  That ontology would need to be made explicit in order for us to have an explicit understanding of the EVENT RISK that Ebola (as an event) poses onto the minds of financial traders.

4.  To do that which is required in 3 may seem very complicated indeed!  But I don't think so.  We will examine that space--the space of financial market ontology--in further blogs.      
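The isomorphism condition from point 1 can even be made executable on a toy example (finite sets as objects, dicts as morphisms; the legal-sounding element names are made up for illustration):

```python
# Toy check of the isomorphism condition: f: A -> B and g: B -> A
# are mutually inverse when both composites are identity maps.
def compose(g, f):
    """g after f, as a dict."""
    return {a: g[f[a]] for a in f}

def is_identity(h):
    return all(a == b for a, b in h.items())

f = {"statute": 1, "case": 2, "custom": 3}   # f: A -> B
g = {1: "statute", 2: "case", 3: "custom"}   # g: B -> A

print(is_identity(compose(g, f)) and is_identity(compose(f, g)))  # True
```

Nothing about the elements themselves matters here; only that the two "processes" undo each other, which is the process-driven sense of equivalence.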

Wednesday 22 October 2014


Let's Get Rid of Causation in Finance.

This chart comes from Zerohedge today.  It makes me laugh.  Is the world one?  That is, is it so interconnected in terms of web-based and email connections that messages from the market manipulators instantaneously infect our own wet-ware, so that we unconsciously push buttons that amplify particular messages in the noosphere?  "Correlation is not causation," says Zerohedge, repeating the mantra that distinguishes the knee-jerk Pavlovian-Skinnerian stimulus-response reductionism of our poor little brains from the rationality and decision-making processes of the game-theoretic monsters that can crunch big data until a particular solution fits the circumstances just inside the horizon of noncomputability (NP-hard problems).  The best papers still in this area where SPIRITUALITY meets PHYSICS are by Emmy Noether, whom Einstein publicly called a mathematical genius--a lovely, well-loved teacher who only wanted to teach, but the idiotic German universities could not even give her a proper teaching post.  In the long run of a future history, Germany in the late 19th and 20th centuries will be castigated not only for Nazism but also for its chauvinistic attitude toward women--and Noether will be remembered far into the future, farther than when people forget about Hitler.  When Noether died, Einstein wrote something to the effect that she had discovered the first spiritual law of physics.  There are a couple of Noether Theorems, but the one FINANCE THEORETICIANS SHOULD PAY ATTENTION TO, but do not, is the one that ties conservation laws (energy conservation, for instance) to continuous symmetries.  Her move between invariance and conservation is very similar to what the finance theorists from Bachelier to Miller tried, with almost brute force, to do with correlation and covariance.  Basically, Noether gets from invariance to conservation using symmetries.
Symmetry, as understood in the first half of the 20th century, is the algebra of group theory, and allegedly our standard model is based, in the ideal, on methods developed in group theory.  Group theory, to me, looks like it depends on identity, associativity and invertibility (reversal).  I was studying this area for a while back in 2006-07, and then imagine my delight and surprise to find a theory that could encompass and ground group theory using only identity and associativity!  That is, why study Group Theory when Category Theory can do all of group theory and go even much farther?
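The contrast can be checked on a tiny example (a toy sketch: addition mod 3 is a group; max on {0,1,2} keeps identity and associativity but loses inverses, which is all a category asks of its morphisms):

```python
# Check the three group axioms on Z/3Z under addition, then show
# that max on {0,1,2} keeps identity + associativity but not
# invertibility -- a sketch of getting by with fewer axioms.
elements = [0, 1, 2]

def add(a, b):
    return (a + b) % 3

def mx(a, b):
    return max(a, b)

def associative(op):
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in elements for b in elements for c in elements)

def has_identity(op, e):
    return all(op(e, a) == a == op(a, e) for a in elements)

def invertible(op, e):
    return all(any(op(a, b) == e for b in elements) for a in elements)

print(associative(add), has_identity(add, 0), invertible(add, 0))  # True True True
print(associative(mx), has_identity(mx, 0), invertible(mx, 0))     # True True False
```

Dropping the invertibility axiom is precisely what lets the categorical framework absorb groups as a special case while covering far more.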

So now we can come back to the two graphs above.  How do we compare them?  You might say, "Do a correlation analysis and determine the variance."  OK.  Then you can do the "covariance analysis" and come up with a number you can compare against other covariances.  But this assumes a standard-deviation metric underneath, in other words, a normal distribution.  As everyone in finance knows, except the crazy financial regulators, there is no such animal in the de facto world.  "Volatility" is not a good measure.  A much better measure would be a fractal dimension a la Mandelbrot.  If we got used to a Mandelbrotian measure, we'd have a much better feel for the "jitteriness" and "emotionality" of prices, and, very importantly, we'd have much better metrics and therefore a better language to gauge and communicate our intimations and observations of what's actually going on.
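One back-of-envelope route to a Mandelbrotian measure (a sketch on simulated Gaussian data, not a production estimator) is to estimate the Hurst exponent H from how the spread of aggregated returns scales, and take the path's fractal dimension as D = 2 - H:

```python
import math
import random
import statistics

random.seed(1)

# Simulated Gaussian returns; a real analysis would use price data.
returns = [random.gauss(0, 1) for _ in range(50_000)]

def agg_std(r, k):
    """Sample std of non-overlapping k-step sums."""
    sums = [sum(r[i:i + k]) for i in range(0, len(r) - k + 1, k)]
    return statistics.stdev(sums)

# Std of k-sums grows like k**H; estimate H from two scales.
H = math.log(agg_std(returns, 16) / agg_std(returns, 1)) / math.log(16)
D = 2 - H
print(round(H, 2), round(D, 2))  # near 0.5 and 1.5 for a Gaussian walk
```

A jittery, "emotional" price series would push D above the Brownian 1.5, giving exactly the kind of single-number gauge of roughness the paragraph asks for.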

To realize these intuitions, I believe we have to build from the ground up, and link the above charts to particular geometries, and to talk about them properly, we would need a new vocabulary that would allow us to make comparisons between the charts and the contexts that surround them in an absolutely precise way.

While we would never say Chart 1 causes Chart 2 or vice versa, we could say without any problem that Chart 1 is isomorphic to Chart 2, and then we would be forced to make explicit exactly WHAT THE PROCESS that makes the isomorphism up.  If we could do that, we would not ever again bother with the concept of "causation" in finance.       
A Category Theory of Financial Instruments and Financial Institutions -  A Fundamental Ontology of Law and Finance


-Draft Abstract-

In an early paper examining private placement memorandum (PPM) regulation, we found that there are in general two approaches to financial regulation: (1) regulation of the behaviours of the financial institution; and (2) regulation of the financial instruments.  PPMs are interesting financial instruments since they sit in between two extremes of financial regulatory types: (1) the purely private form of financial intermediation (that is, at the extreme, bilateral financial contracts) and (2) highly ritualized guidance on what can be communicated in the raising of capital (that is, prospectus-type regulations) stemming from the Securities Act of 1933 and the Securities Exchange Act of 1934.  The unnatural divide between the regulation of financial institutions and the regulation of financial instruments has played itself out in terms of regulations which aim at adjusting the incentives relating to particular types of businesses within a financial institutional framework.  In general, the play-out has been a border conflict between banking regulations and capital market regulations.  This is not to say that any particular jurisdiction uses one type of financial regulation to the exclusion of the other; rather, there is a combination or admixture of financial regulations that, in total, ascribes to one general tendency or the other.  Recall the legal theorists who attempted to justify financial regulations on the basis that “Law Matters”, stating in particular that law matters in that it sets out the initial permit or license to practice a certain form of business.  As we shall see, if we apply certain fundamental risk and return models, which the financial industry itself uses to measure its own financial instruments, we can distinguish different sorts of businesses according to these risk and return financial instrument components.
Thus, from an extreme financial institutional perspective, which applies financial theory to its own institutional design, behaviours and assessment of behaviours, financial instrument theory applies laws to accomplish the institution’s particular objectives, and the discipline and guidance therefore are simple financial theory models of risk and return, which capture the market definition of money.  From the institutional perspective applying financial theory, an institution is simply a financial instrument with certain risk and return characteristics.  Thus, financial institutions regarding themselves as financial instruments are constrained by the “rules of the game” of finance, which are basically arbitrage (“the law of one price”), and forever use laws such as contracts, and every other sort of law and regulation, as merely instrumental.  There are researchers who assert that there should be a legal theory of finance and who use legal ideas to promote the logical priority of law to finance.  There is no argument with this thesis if we add the distinction that law is the context in which finance exists.  However, it is not entirely plausible to say that finance exists because of law, nor is it plausible to assert that law is absolutely necessary for finance to exist in the world.  In any case, the purpose of this paper is not to adjust or determine where the horse and carriage can become one or the other—we believe that is actually an obvious distinction—but rather to understand, if possible, the system, model or ecology in which law and finance co-exist.  As a theoretical approach, we shall combine the financial instrument view and the financial institutional view into one totality, and call that totality a law and finance ontology.
Our view, which is slightly complicated because of the distinctness of disciplines and variety of sub-disciplines involved in law and finance, is to start with the simplest ideal financial instrument and then to ask ourselves what would happen to that financial instrument in the real world of law and finance.

This model of moving from a well-defined financial instrument (which is the particular in the Aristotelian sense of a genus-species category or, more modernly, the event) to how this financial instrument operates in the real world (the generalized reality of abstract continuity) is in effect a study of the reality of finance given a legal context, in the form of a mapping or under a mapping technology.


Thursday 21 August 2014

“If-then’s” generate “choices”, or “how to free yourself of others’ causal claims”.

I read a bit of news—mainstream, Twitter & FB—and I have this desperate urge to review a bit of logic.

For example, when someone declares "Do it my way or the highway," you don't have to worry.  You can reconstruct that statement into:

If it is x, then it must be y.
It is either not x or it is y.

For example:
If he wears brown shorts, then he must be a terrorist.
Either he does not wear brown shorts or he is a terrorist.

Translating the above threat, “Do it my way or the highway,” means “If you don’t do it my way then you must take the highway.”  The phrase “do it my way” is negated in the antecedent.

Let’s go further.  How about the statement:

“If you believe in me, then you will go to heaven”?

Translates to:

“Either you don’t believe in me or you will go to heaven.”

That’s proper.

But suppose that statement gets warped into:

“If you don’t believe in me then you will go to hell.”

Which translated into “or” form becomes:

“Either you believe in me or you will go to hell.”
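The translation rule, and the fact that the "warped" version is a genuinely different claim, can be checked with a four-row truth table (a minimal sketch):

```python
# Truth-table check: "if p then q" is equivalent to "not p or q",
# but NOT to the warped "if not p then not q".
def implies(p, q):
    return (not p) or q

rows = [(p, q) for p in (False, True) for q in (False, True)]

same = all(implies(p, q) == ((not p) or q) for p, q in rows)
warped_same = all(implies(p, q) == implies(not p, not q) for p, q in rows)
print(same, warped_same)  # True False
```

The warped form fails on the row where p is true and q is false, which is exactly the case the original promise never covered.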

Bottom line: when you get into arguments about choices, watch out for causation-type statements assigning blame that turn out to be false dichotomies.  Once you see this rule operating, you don't have to take sides.

You are free to mosey along.