Month: August 2016

Q-UEL Toolkit for Medical Decision Making :- Science of Uncertainty and Probabilities


Quantum Universal Exchange Language

Emergent | Interoperability | Knowledge Mining | Blockchain


  1. It is a toolkit / framework.
  2. It is an algorithmic language for constructing Complex Systems.
  3. It results in an inferential statistical mechanism suitable for a highly complex system – the “Hyperbolic Dirac Net”.
  4. It involves an approach based on the premise that a Highly Complex System driven by human social structures continuously strives to achieve a higher order in its entropic journey, by continuously discerning the knowledge hidden in the system that is in continuum.
  5. A System in Continuum seeking Higher and Higher Order is a Generative System.
  6. A Generative System brings the System itself as a Method to achieve Transformation. The same holds for the National Learning Health System.
  7. A Generative System, as such, is based on Distributed Autonomous Agents / Organizations, achieving Syndication driven by Self Regulation or Swarming behavior.
  8. Essentially, Q-UEL as a toolkit / framework algorithmically addresses interoperability, knowledge mining and blockchain, while driving the Healthcare Ecosystem into Generative Transformation, achieving higher and higher orders in the National Learning Health System.
  9. It has capabilities to facilitate medical workflow and continuity of care; to extract and represent medical knowledge from vast sets of structured and unstructured data; and to automate bio-statistical reasoning, leading to large-data-driven evidence based medicine, clinical decision support (including knowledge management and Artificial Intelligence), and public health and epidemiological analysis.


A Large Chaotic System driven by Human Social Structures has two contending ways.

a. Natural Selection – Adaptive – Darwinian – Natural Selection – Survival Of Fittest – Dominance

b. Self Regulation – Generative – Innovation – Diversity – Cambrian Explosion – Unique Peculiarities – Co Existence – Emergent

The Accountable Care Organization (ACO) model driven by the Affordable Care Act transforms the present Healthcare System from adaptive (competitive) to generative (collaborative / coordinated), to achieve inclusive success and partake in the savings achieved. This is a generative systemic response, contrasting with the functional and competitive response of an adaptive system.

Natural selection seems to result in functional transformation, where adaptation is the mode; it does not account for diversity.

Self Regulation seems to be a systemic outcome of integrative influence (the ecosystem) responding to the system constraints. It accounts for rich diversity.

The observer learns generatively from the system constraints about the type of reflexive response required (refer to Generative Grammar and the Immune System).

From the above observation, if the theory of self regulation is the more correct one and adheres to the laws of nature, in which generative learning occurs, then the assertion is that the “method” is offered by the system itself. The system’s ontology has an implicate knowledge of the processes required for transformation (David Bohm – Implicate Order).

For very large complex system,

System itself is the method – impetus is the “constraint”.

In the video below, the ability of the cells to creatively create the script is discussed, which makes the case for a self regulated and generative complex system in addition to a complex adaptive system.


Further Notes on Q-UEL / HDN :-

  1. It brings Quantum Mechanics (QM) machinery to Medical Science.
  2. It is derived from the Dirac Notation that helped define the framework for describing QM. The resulting framework or language is Q-UEL, and it delivers a mechanism for inferential statistics – the “Hyperbolic Dirac Net”.
  3. It is created from a System Dynamics and Systems Thinking perspective.
  4. It is systemic in approach, where the System is itself the Method.
  5. It engages probabilistic ontology and semantics.
  6. It creates a mathematical framework to advance Inferential Statistics for studying highly chaotic complex systems.
  7. It is an algorithmic approach that creates the Semantic Architecture of the problem or phenomenon under study.
  8. The algorithmic approach is a blend of linguistic semantics, artificial intelligence and systems theory.
  9. The algorithm creates the Semantic Architecture defined by a Probabilistic Ontology :- representing the Ecosystem Knowledge distribution based on Graph Theory.
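A graph-theoretic knowledge distribution of the kind described above can be sketched minimally as a weighted directed graph. All node names and edge probabilities below are hypothetical, chosen only to illustrate the representation:

```python
# Minimal sketch: a probabilistic ontology as a weighted directed graph.
# Nodes are clinical concepts; edge weights are illustrative conditional
# probabilities (all names and numbers are hypothetical).
ontology = {
    "obesity":      {"diabetes": 0.30, "hypertension": 0.35},
    "diabetes":     {"neuropathy": 0.20},
    "hypertension": {"stroke": 0.10},
}

def chain_probability(graph, path):
    """Multiply edge probabilities along a path, assuming independence
    of the conditional links (a strong simplifying assumption)."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= graph[a][b]
    return p

print(round(chain_probability(ontology, ["obesity", "diabetes", "neuropathy"]), 2))  # 0.06
```

A real probabilistic ontology would of course carry far richer structure (joint and backward probabilities, cycles), but the adjacency-with-weights idea is the graph-theory core.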

To make a decision in any domain, a knowledge compendium of the domain – the system knowledge – is first of all imperative.

A System Riddled with Complexity is generally a Multivariate System, and as such creates much uncertainty.

A highly complex system being non-deterministic, requires probabilistic approaches to discern, study and model the system.

General Characteristics of Complex System Methods

  • Descriptive statistics are employed to study the “WHAT” aspects of the System.
  • Inferential Statistics are applied to study the “HOW”, “WHEN”, “WHY” and “WHERE”, probing both spatial and temporal aspects.
  • In a highly complex system, causality becomes indeterminable; the correlations or relationships between the independent and dependent variables are not obviously established, and the variables even seem to interchange positions. This creates a dilemma between subject vs object, and causes vs outcomes.
  • In approaching a highly complex system, since the prior and posterior are not definable, inferential techniques in which hypotheses are fixed before the study of the system begins become unviable.

Review of Inferential Techniques as the Complexity is Scaled

Step 1:- Simple System (turbulence level:-1)

Frequentist :- the simplest classical or traditional statistics; employed treating the data as random with a steady-state hypothesis – the system is considered not uncertain (a simple system). In Frequentist notions of statistics, probability is dealt with as a classical measure based only on the idea of counting and proportion. This technique assigns probability to data, and is applied where the data sets are rather small.
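The counting-and-proportion notion can be shown in a few lines; the sample of records below is hypothetical:

```python
# Frequentist probability as counting and proportion: estimate P(event)
# as (favourable outcomes) / (total observations) over a fixed sample.
# The records below are hypothetical.
records = ["disease", "healthy", "disease", "healthy", "healthy",
           "disease", "healthy", "healthy", "healthy", "healthy"]

p_disease = records.count("disease") / len(records)
print(p_disease)  # 0.3
```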

Increase complexity: larger data sets, multivariate, the hypothesis model is not established, a large variety of variables, each of which can combine (conditionally and jointly) in many different ways to produce the effect.

Step 2:- Complex System (turbulence level:-2)

Bayesian :- the hypothesis is considered probabilistic, while the data is held at steady state. In Bayesian notions of statistics, the probability is of the hypothesis for a given set of data that is fixed. That is, the hypothesis is random and the data is fixed. The knowledge extracted carries the more subjectivist notions of uncertainty, belief, reliability, or confidence often used in automated inference and decision support systems.

Additionally, the hypothesis can be explored only in an acyclic fashion, creating Directed Acyclic Graphs (DAGs).
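The Bayesian step, the probability of a hypothesis given fixed data, can be sketched with Bayes' rule; all the numbers below are hypothetical:

```python
# Bayes' rule sketch: probability of the hypothesis (disease) given
# fixed data (a positive test result). All numbers are hypothetical.
p_disease = 0.01            # prior P(hypothesis)
p_pos_given_disease = 0.95  # likelihood P(data | hypothesis)
p_pos_given_healthy = 0.05  # false positive rate

# Total probability of the data, then Bayes' rule for the posterior.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Note how a rare disease keeps the posterior low even for an accurate test; this is the kind of subjectivist updating the passage above refers to.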

Increase the throttle on the complexity: very large data sets, both structured and unstructured; the hypothesis is random; multiple hypotheses are possible; anomalies can exist; there are hidden conditions; the need arises to discover the “probabilistic ontology”, as it represents the system and the behavior within.

Step 3: Highly Chaotic Complex System (turbulence level:-3)

Certainly the DAG is now inadequate, since we need to check probabilities as correlations and also as causations of the variables, and whether they conform to a hypothesis-producing pattern – meaning some ontology is discovered which describes the peculiar intrinsic behavior among a specific combination of the variables representing a hypothesis condition. And there are many such possibilities within the system; hence a very chaotic and complex system.

Now the System itself seems probabilistic, regardless of the hypothesis and the data. This demands a Multi-Lateral Cognitive approach.

Telandic …. “Point – equilibrium – steady state – periodic (oscillatory) – quasiperiodic – Chaotic – and telandic (goal seeking behavior) are examples of behavior here placed in order of increasing complexity”

A Highly Complex System demands a Dragon Slayer – Hyperbolic Dirac Net (HDN) driven statistics (bi-directional Bayesian) – for extracting Knowledge from a Chaotic, Uncertain System.

High Performance Cloud Computing Platform


Non-Hypothesis driven Unsupervised Machine Learning Platform delivering Medical Automated Reasoning Programming Language Environment (MARPLE)

Evidence Based Medicine Decision Process is based on PICO

From the above link: “Using medical evidence to effectively guide medical practice is an important skill for all physicians to learn. The purpose of this article is to understand how to ask and evaluate questions of diagnosis, and then apply this knowledge to the new diagnostic test of CT colonography to demonstrate its applicability. Sackett and colleagues [1] have developed a step-wise approach to answering questions of diagnosis:”

Uncertainties in the Healthcare Ecosystem Platform

It is a High Performance Cloud Computing Platform delivering both probabilistic and deterministic computations, combining HDN Inferential Statistics and Descriptive Statistics.

The bio-statistical reasoning algorithms have been implemented in the Wolfram Language, which is a knowledge-based, unified symbolic programming language. As such, a symbolic language has good synergy with implementing the Dirac Notational Algebra.

The platform brings the Quantum Mechanics machinery to Healthcare analytics, delivering a comprehensive data science experience that covers both Patient Health and Public Health analytics, driven by a range of bio-statistical methods from descriptive to inferential statistics, leading to evidence-driven medical reasoning.

The platform transforms the large clinical data sets generated by interoperability architectures, such as Health Information Exchange (HIE), into a semantic lake representing the Health ecosystem that is more amenable to bio-statistical reasoning and knowledge representation. This capability delivers the evidence-based knowledge needed for a Clinical Decision Support System, better achieving Clinical Efficacy by helping to reduce medical errors.

Algorithm based on Hyperbolic Dirac Net (HDN)

An HDN is produced by a dualization procedure performed on a given inference net: it consists of a pair of split-complex number factorizations of the joint probability and its dual (adjoint, the reverse direction of conditionality). The Hyperbolic Dirac Net is derived from the Dirac Notational Algebra that forms the mechanism for defining Quantum Mechanics.
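The split-complex factorization rests on a hyperbolic unit h that squares to +1 (unlike the imaginary i, which squares to -1), letting a single number carry a forward and a backward quantity at once. A minimal sketch of split-complex arithmetic follows; the class and its use here are illustrative, not Q-UEL's actual encoding:

```python
# Sketch of split-complex (hyperbolic) number arithmetic, where the unit h
# satisfies h*h = +1. An HDN pairs a forward and a backward conditional
# probability in one such number; the exact Q-UEL encoding is not shown here.
class SplitComplex:
    def __init__(self, a, b):
        self.a = a  # real part
        self.b = b  # coefficient of h

    def __mul__(self, other):
        # (a1 + b1*h)(a2 + b2*h) = (a1*a2 + b1*b2) + (a1*b2 + b1*a2)*h
        return SplitComplex(self.a * other.a + self.b * other.b,
                            self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}h"

h = SplitComplex(0, 1)
print(h * h)  # 1 + 0h, confirming h squared is +1
```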

A Hyperbolic Dirac Net (HDN) is a truly Bayesian model and a probabilistic general graph model that includes cause and effect as players of equal importance. It is taken from the mathematics of Nobel Laureate Paul A. M. Dirac that has been standard notation and algebra in physics for some 70 years. It includes, but goes beyond, the Bayes Net, which is seen as a special and (arguably) usually misleading case. Attuned to nature, the HDN does not constrain interactions and may contain cyclic paths in the graphs representing the probabilistic relationships between all things (states, events, observations, measurements etc.). In the larger picture, HDNs define a probabilistic semantics and so are not confined to conditional relationships; they can evolve under logical, grammatical, definitional and other relationships. It is also, in its larger context, a model of the nature of natural language and of human reasoning based on it, one that takes account of uncertainty.

Explanation: An HDN is an inference net, but it is best explained by showing that it stands in sharp contrast to the current notion of an inference net, which for historical reasons is today often taken to mean the same thing as a Bayes Net. “A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.” [wiki/Bayesian_network]. In practice, such nets have little to do with Bayes, or with Bayes’ rule, law, theorem or equation, which allows verification that the probabilities used are consistent with each other and with all other probabilities that can be derived from data. Most importantly, in reality all things interact in the manner of a general graph, and a DAG is in general a poor model of reality, since it consequently may miss key interactions.


The platform implements a machine learning based biostatistical algorithm that transforms Large Data Sets, such as millions of patient records, into a Semantic Lake as defined by HDN-driven computations – a mix of number theory (the Riemann zeta function) and information theory (the Dual Bayesian or HDN).

The HDN Semantic Lake represents the health ecosystem as captured in the Knowledge Representation Store (KRS), consisting of billions of tags (Q-UEL Tags).


Send an HDN query to the KRS to seek an HDN probabilistic inference / estimate. The query contains the HDN that the user would like to have, and DiracBuilder helps get the best similar dual net by looking at which of the billions of Q-UEL tags and joint probabilities are available.

High Performance Cloud Computing

The Platform computes (probabilistic computations) against the billions of Q-UEL tags, employing an extended in-memory processing technique. The creation of the billions of Q-UEL tags, and querying against them, is a combinatorial explosion problem.
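To give a feel for that scale, here is a small back-of-the-envelope sketch; the number of clinical variables (n = 40) is hypothetical:

```python
# Illustration of the combinatorial explosion: with n binary variables,
# the number of possible joint-value assignments alone grows as 2**n,
# before even counting the conditional tags between variable subsets.
from math import comb

n = 40  # hypothetical number of binary clinical variables
print(2 ** n)          # joint assignments: already over a trillion
print(comb(n, 2) * 4)  # pairwise A|B tags over the 4 binary value pairs
```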

The Bioingine platform, working against large clinical data sets or residing within a large Patient Health Information Exchange (HIE), creates opportunities for Clinical Efficacy and also facilitates better achievement of the “Efficiencies in Healthcare Management” that an ACO seeks.

Our endeavors have resulted in the development of revolutionary Data Science to deliver Health Knowledge by Probabilistic Inference. The solution developed addresses critical areas both scientific and technical, notably the healthcare interoperability challenges of delivering semantically relevant knowledge both at patient health (clinical) and public health level (Accountable Care Organization).

Multivariate Cognitive Inference from Uncertainty

Solving high-dimensional multivariate inference involving more than four variables – the high dimensionality characteristic of the healthcare domain.

EBM Diagnostic Risk Factors and Calculating Predictive Odds

Q-UEL tags of form

< A Pfwd:=x |  assoc:=y | B Pbwd:=z >

Say A = disease and B = cause, drug, or diagnostic prediction of the disease. The tags are designed to imply the following, knowing the numbers x, y, and z.

P(A|B) = x

K(A; B) = P(A,B) / (P(A)P(B))   = y

P(B|A) = z

From which we can calculate the following….

P(A) = P(A|B)/K(A;B)

P(B) = P(B|A)/K(A;B)

P( NOT A) = 1 – P(A)

P(NOT B) = 1 – P(B)

P(A, B) = P(A|B)P(B) = P(B|A) P(A)

P(NOT A, B) = P(B) – P(A, B)

P(A, NOT B) = P(A) – P(A, B)

P(NOT A, NOT B) = 1 – P(A, B) – P(NOT A, B) – P(A, NOT B)

P(NOT A | B)  = 1  – P(A|B)

P(NOT B | A) = 1 –  P(B|A)

P(A | NOT B) =  P(A, NOT B)/P(NOT B)

P(B | NOT A) =  P(NOT A, B)/P(NOT A)

Positive Predictive Value P+ = P(A | B)

Negative Predictive Value P- = P(NOT A | NOT B)

Sensitivity = P(B | A)

Specificity = P(NOT B | NOT A)

Accuracy A = P(A, B) + P(NOT A, NOT B)

Predictive odds PO = P(A | B) / P(NOT A | B)

Relative Risk RR = Positive likelihood ratio  LR+ =  P(A | B) / P(A | NOT B)

Negative likelihood ratio LR- = P(NOT A | B) / P(NOT A | NOT B)

Odds ratio OR = P(A, B)P(NOT A, NOT B)  /  (  P(NOT A,  B)P(A, NOT B) )

Absolute risk reduction ARR = P(A | NOT B) – P(A | B) (where A is disease and B is drug etc.)

Number  Needed to Treat NNT = +1 / ARR if ARR > 0 (giving positive result)

Number Needed to Harm NNH = -1 / ARR if ARR < 0 (giving a positive result)
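Assuming a tag supplies x = P(A|B), y = K(A;B) and z = P(B|A), the chain of identities above can be sketched as one function. The input numbers below are hypothetical, not drawn from any real data set:

```python
# Sketch: deriving the quantities listed above from the three numbers
# carried by a Q-UEL tag: x = P(A|B), y = K(A;B) = P(A,B)/(P(A)P(B)),
# z = P(B|A). Inputs are hypothetical.
def ebm_metrics(x, y, z):
    p_a = x / y                                # P(A) = P(A|B) / K(A;B)
    p_b = z / y                                # P(B) = P(B|A) / K(A;B)
    p_ab = x * p_b                             # P(A,B) = P(A|B) P(B)
    p_nota_b = p_b - p_ab                      # P(NOT A, B)
    p_a_notb = p_a - p_ab                      # P(A, NOT B)
    p_nota_notb = 1 - p_ab - p_nota_b - p_a_notb  # P(NOT A, NOT B)
    return {
        "PPV": x,                              # P(A | B)
        "NPV": p_nota_notb / (1 - p_b),        # P(NOT A | NOT B)
        "sensitivity": z,                      # P(B | A)
        "specificity": p_nota_notb / (1 - p_a),  # P(NOT B | NOT A)
        "PO": x / (1 - x),                     # P(A|B) / P(NOT A|B)
        "RR": x / (p_a_notb / (1 - p_b)),      # P(A|B) / P(A|NOT B)
        "OR": (p_ab * p_nota_notb) / (p_nota_b * p_a_notb),
    }

# Hypothetical tag numbers x, y, z:
print(ebm_metrics(0.8, 2.0, 0.4))
```

With these inputs P(A) = 0.4, P(B) = 0.2 and P(A,B) = 0.16, which is self-consistent since P(A,B)/P(A) recovers z = 0.4.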


BP = blood pressure (high)

This case is very similar, because high BP and diabetes are each comorbidities with high BMI and hence to some extent with each other.  Consequently we just substitute diabetes by BP throughout.

(0) We can in fact test the strength of the above with the following RR, which in effect reads as “What is the relative risk of needing to take BP medication if you are diabetic, as opposed to not diabetic?”

<‘Taking BP  medication’:=’1’  |  ‘Taking diabetes medication’:= ‘1’>

/<‘Taking BP  medication’:=’1’  | ‘Taking diabetes medication’:= ‘0’>

The following predictive odds  PO make sense and are useful here:-

<‘Taking BP  medication’:=’1’  |  ‘BMI’:= ’50-59’  >

/<‘Taking BP  medication’:=’0’  |  ‘BMI’:= ’50-59’  >

and (separately entered)

<‘Taking diabetes medication’:=’1’  |  ‘BMI’:= ’50-59’  >

/<‘Taking diabetes  medication’:=’0’  |  ‘BMI’:= ’50-59’  >

And the odds ratio OR would be a good measure here (as it works in both directions). Note Pfwd = Pbwd theoretically for an odds ratio.

<‘Taking BP  medication’:=’1’  | ‘Taking diabetes medication’:= ‘1’>

<‘Taking BP  medication’:=’0’  | ‘Taking diabetes medication’:= ‘0’>

/<‘Taking BP  medication’:=’1’  | ‘Taking diabetes medication’:= ‘0’>

/<‘Taking BP  medication’:=’0’  | ‘Taking diabetes medication’:= ‘1’>