Platform for BigData Driven Medicine and Public Health Studies [ Deep Learning & Biostatistics ]


Bioingine.com: a platform for comprehensive statistical and probabilistic studies for BigData Driven Medicine and Public Health.

Importantly, it helps redefine Data Driven Medicine as:-

Ontology (Semantics) Driven Medicine

Comprehensive Platform that covers Descriptive Statistics and Inferential Probabilities.

Beta platform on the anvil. Sign up for a demo by sending mail to


Bioingine.com employs an algorithmic approach based on the Hyperbolic Dirac Net (HDN), which allows inference nets that are general graphs, including cyclic paths, thus surpassing the limitation of the Bayes Net, which by definition is traditionally a Directed Acyclic Graph (DAG). The Bioingine.com approach thus more fundamentally reflects the nature of probabilistic knowledge in the real world, which has the potential to take account of the interaction between all things without limitation; ironically, this approach more explicitly makes use of Bayes' rule than a Bayes Net does.
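The DAG restriction can be made concrete with a short sketch. The edge lists below are hypothetical illustrations (not Bioingine code): a dependency structure is admissible as a Bayes Net topology only if it contains no directed cycle, which a depth-first search can check.

```python
# Sketch: check whether a dependency graph is a valid Bayes Net
# topology (directed acyclic) or a general graph with cycles, as
# an HDN permits. The edge lists below are hypothetical examples.

def has_cycle(edges):
    """Return True if the directed graph given as (u, v) pairs
    contains a cycle, using depth-first search with node colors."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / finished
    color = {n: WHITE for n in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:          # back edge -> cycle found
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# A Bayes-Net-style DAG: rain -> sprinkler, rain -> grass, sprinkler -> grass
dag = [("rain", "sprinkler"), ("rain", "grass"), ("sprinkler", "grass")]
# A cyclic interaction net, which the DAG formalism cannot represent
cyclic = dag + [("grass", "rain")]

print(has_cycle(dag))     # False: admissible as a Bayes Net
print(has_cycle(cyclic))  # True: needs a cyclic formalism such as an HDN
```

The point of the sketch is only that the acyclicity test is a structural constraint on the Bayes Net; the HDN removes that constraint rather than working around it.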

It also allows more elaborate relationships than mere conditional dependencies, as a probabilistic semantics analogous to natural human language but with a more detailed sense of probability. To identify the things and relationships that are important, and to provide the required probabilities, Bioingine.com scouts large complex data, both structured data and unstructured textual information.

It treats initial raw extracted knowledge as potentially erroneous or ambiguous prior knowledge, and validated and curated knowledge as posterior knowledge, and it enables the refinement of knowledge extracted from authoritative scientific texts into an intuitive canonical “deep structure” mental-algebraic form that Bioingine.com can more readily manipulate.

BigData Driven Medicine Program :-


Objectives and Goals

Informatics & Data-Driven Medicine (IDDM) is a foundation area within the Scholarly Concentration program that explores the new transformative paradigm called BIG DATA that is revolutionizing medicine. The proliferation of huge databases of clinical, imaging, and molecular data is driving new biomedical discoveries and informing and enabling precision medical care. The IDDM Scholarly Concentration will provide students insights into this important emerging area of medicine, introducing fundamental topics such as information management, computational methods of structuring and analyzing biomedical data, and large-scale data analysis along the biomedical research pipeline, from the analysis and interpretation of new biological datasets to the integration and management of this information in the context of clinical care.


Students who pursue Informatics & Data-Driven Medicine in conjunction with an application area, such as Immunology, are required to complete 6 units including:

Biomedin 205: Precision Practice with Big Data

Statistically caused, and so perhaps then Uncaused and neither Determined nor Pre-Determined

Cogito ergo sum is a Latin philosophical proposition by René Descartes, usually translated into English as “I think, therefore I am”.

Cartesianism is usually understood as deterministic, i.e. about definite answers and deductive logic. Actually, Descartes held that all existence consists in three distinct substances, each with its own essence (https://en.wikipedia.org/wiki/Cartesianism):

  1. matter, possessing extension in three dimensions
  2. mind, possessing self-conscious thought
  3. God, possessing necessary existence

It was arguably a bold attempt to be coldly rational: the first of these was divorced from the influence of the mental and the spiritual. Yet it is only a convenient partitioning, one that limits our perceptions.

Descartes explained, “We cannot doubt of our existence while we doubt.” A fuller form is dubito, ergo cogito, ergo sum (“I doubt, therefore I think, therefore I am”). A non-Cartesian view would be that such things are not distinct, any more than matter and energy are distinct. There is only information; and in the sense that it has meaning, impact on us, actionability, and consequences, we mean not a string of bits or the stratum that holds them, but its organization into knowledge and wisdom.

Insomuch as belief, cause, and probability are all one, this has much to do with the philosophy of the Presbyterian minister Thomas Bayes (https://en.wikipedia.org/wiki/Thomas_Bayes), which has little to do with the popular Bayes Net, merely a use of conditional probabilities constrained such that the elements of reasoning imply a knowledge network that must be a directed acyclic graph.

In modern physics, many hold the view that all three of Descartes’ distinct substances are human perceptions within the universe as a giant computational device, in which what we perceive as the external world, our minds, and the mind we exist within are the same.

In philosophy, the new-wave non-Cartesian was Edward Jonathan Lowe (1950–2014), usually cited as E. J. Lowe but known personally as Jonathan Lowe, a British philosopher and academic. Oxford-trained, he was Professor of Philosophy at Durham University, England. He developed a version of psychophysical dualism that he called non-Cartesian substance dualism.

“It is an interactionist substance dualism. (Cf. John Eccles and early Karl Popper.)… Lowe argued, however,

That events (both mental and physical) should properly not be thought of as causes, because only actors (human or animal agents – or inanimate physical agents) can cause things. Events are more properly simply happenings, some caused, some uncaused. [As] quantum indeterminism states, some are only statistically caused, and so perhaps then uncaused and neither determined nor pre-determined.” (http://www.philosophyonline.co.uk/oldsite/pom/pom_non_cartesian_dualism.htm, https://en.wikipedia.org/wiki/E._J._Lowe_(philosopher))

Our approach, based on the mathematics of physics and in particular Paul A. M. Dirac’s and Albert Einstein’s view of space-time as one, says

that there is no cause and effect in the specific sense that, in a network of interactions, the illusion of non-locality is maintained by the fact that who caused the preparation of a state and who observed it is not meaningful. Rather, all co-exists in space-time; so while the Bayes Net (which, again, has little to do with Bayes) denies that something can cause something that causes something that causes the original cause, such statements are meaningless, in that all things interact in space and time, and cyclic effects abound in nature as they do in quantum mechanics, and in our reasoning about them.

Our approach additionally includes inductive logic and is a combination of number theory, information theory, quantum mechanics, and knowledge-extraction Bayesian statistics, all of which have to do with processing the interactions between things in a larger knowledge picture. Only such a view

  • can solve difficult classes of problems
  • can create a semantic data lake
  • can do cyclic Bayesian computations
  • can do reasoning and inference to reach a degree of knowledge
  • can manage the non-predicated and probabilistic, and discover possible hypotheses, both existing and new.

Bioingine :- Multivariate Cognitive Computing Platform – Distributed Concurrent Computing by Dockerized Microservices


The Employ of Dockerized Apps Opens Vistas of Possibilities with the Hadoop Architecture, where Hadoop’s traditional data management architecture is extended beyond data processing and management into Distributed Concurrent Computing.


Data Management (Storage, Security,  MapReduce based Pre-processing) and Data Science (Algorithms) Decoupled.

Microservices driven Concurrent Computing :- Complex Distributed Architecture made Affordable

Conceptual View of Yarn driven Dockerized Session Management of  Multiple Hypothesis over Semantic Lake

Notes on HDN (Advanced Bayesian), Clinical Semantic Data Lake and Deep Learning / Knowledge Mining and Inference 



Evidence based Medicine driven by Inferential Statistics – Hyperbolic Dirac Net



From the above link:

Descriptive Statistics (A quantitative summary)

Descriptive statistics includes statistical procedures that we use to describe the population we are studying. The data could be collected from either a sample or a population, but the results help us organize and describe data. Descriptive statistics can only be used to describe the group that is being studied. That is, the results cannot be generalized to any larger group.

Descriptive statistics are useful and serviceable if you do not need to extend your results to any larger group. However, much of social science tends to involve studies that seek “universal” truths about segments of the population, such as all parents, all women, all victims, etc.

Frequency distributions, measures of central tendency (mean, median, and mode), and graphs like pie charts and bar charts that describe the data are all examples of descriptive statistics.
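These measures can be computed directly with Python's standard library; the sample values below are invented purely for illustration.

```python
# Descriptive statistics for a small illustrative sample,
# using only Python's standard library.
import statistics

ages = [2, 3, 3, 5, 7]   # hypothetical sample data

mean_age = statistics.mean(ages)      # == 4 (central tendency: mean)
median_age = statistics.median(ages)  # == 3 (middle value)
mode_age = statistics.mode(ages)      # == 3 (most frequent value)
```

As the text notes, these summaries describe only this sample; nothing here licenses a claim about any larger group.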

Inferential Statistics

Inferential statistics is concerned with making predictions or inferences about a population from observations and analyses of a sample. That is, we can take the results of an analysis using a sample and can generalize it to the larger population that the sample represents. In order to do this, however, it is imperative that the sample is representative of the group to which it is being generalized.

To address this issue of generalization, we have tests of significance. A chi-square or t-test, for example, can tell us the probability that the results of our analysis on the sample are representative of the population that the sample represents. In other words, these tests of significance tell us the probability that the results of the analysis could have occurred by chance when there is no relationship at all between the variables in the population we studied.
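As a concrete sketch of such a significance test, the chi-square statistic for a 2x2 table can be computed by hand; the counts below are hypothetical, e.g. exposure (rows) against outcome (columns) in a sample.

```python
# Chi-square test of independence for a hypothetical 2x2 table.
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]        # [40, 60]
col_totals = [sum(col) for col in zip(*observed)]  # [50, 50]
grand_total = sum(row_totals)                      # 100

# Expected counts under the null hypothesis of no relationship,
# accumulated into the chi-square statistic.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 3))   # 16.667
# The critical value for df = 1 at p = 0.05 is 3.841, so for these
# hypothetical counts the relationship would be judged significant.
```

The same computation is what library routines such as SciPy's contingency-table tests perform internally, with the p-value read from the chi-square distribution instead of a critical-value table.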

Examples of inferential statistics include linear regression analyses, logistic regression analyses, ANOVA, correlation analyses, structural equation modeling, and survival analysis, to name a few.

Inferential Statistics:- Bayes Net  [Good for simple Hypothesis]

“Suppose that there are two events which could cause grass to be wet: either the sprinkler is on or it’s raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler is usually not turned on)… The joint probability function is: P(G, S, R) = P(G|S, R) P(S|R) P(R)”. The example illustrates features common to homeostasis of biomedical importance, but is of interest here because, unusually for real-world applications of BNs, the above expansion is exact, not an estimate of P(G, S, R).
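The factorization can be evaluated by enumeration. The CPT values below are illustrative assumptions (chosen to agree with the probabilities quoted in the HDN example later in this post), not values from the source.

```python
# Sprinkler Bayes Net: P(G, S, R) = P(G|S, R) * P(S|R) * P(R).
# The CPT values below are illustrative assumptions.
P_R = {True: 0.2, False: 0.8}           # P(rain)
P_S_given_R = {True: 0.01, False: 0.4}  # P(sprinkler on | rain)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}  # P(wet | S, R)

def joint(g, s, r):
    """Joint probability from the DAG factorization above."""
    pg = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    return pg * ps * P_R[r]

# P(rain, sprinkler, wet grass) = 0.99 * 0.01 * 0.2 = 0.00198
p_rsw = joint(True, True, True)

# Marginal P(wet grass), summing S and R out of the joint
p_wet = sum(joint(True, s, r) for s in (True, False) for r in (True, False))

print(round(p_rsw, 5), round(p_wet, 5))  # 0.00198 0.44838
```

Because the expansion is exact, every marginal and conditional of interest can be recovered from `joint` by summation in this way.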

Inferential Statistics: Hyperbolic Dirac Net (HDN) – System contains innumerable Hypotheses

HDN Estimate (forward and backwards propagation)

P(A=’rain’) = 0.2 # <A=’rain’ | ?>

P(B=’sprinkler’) = 0.32 # <B=’sprinkler’ | ?>

P(C=’wet grass’) = 0.53 # <? | C=’wet grass’>

Pxx(not A) = 0.8

Pxx(not B) = 0.68

Pxx(not C) = 0.47

# <B=’sprinkler’ | A=’rain’>

P(A, B) = 0.002

Px(A) = 0.2

Px(B) = 0.32

Pxx(A, not B) = 0.198

Pxx(not A, B) = 0.32

Pxx(not A, not B) = 0.48

#<C=’wet grass’|A=’rain’,B=’sprinkler’>

P(A,B,C) = 0.00198

Px(A, B) = 0.002

Px(C=’wet grass’) =0.53

Pxx(A,B,not C) = 0.00002


Since the focus in this example is on generating a coherent joint probability, Pif and Pif* are not included in this case, and we obtain {0.00198, 0.00198} = 0.00198. We could use them to dualize the above to give conditional probabilities. Being an exact estimate, it allows us to demonstrate that the total stress after enforced marginal summation (departure from the initially specified probabilities) is very small, summing to 0.0005755. More typically, though, a set of input probabilities can be massaged fairly drastically. Using the notation initial -> final, the following transitions occurred after a set of “bad initial assignments”.
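The self-consistency of the input probabilities listed above can be checked mechanically. The sketch below uses only ordinary probability identities, not the HDN machinery itself, and recovers each listed complement and joint value from the stated marginals to within the rounding used in the text.

```python
# Consistency check of the HDN input probabilities quoted above,
# using ordinary probability identities (not the HDN machinery).
# A = 'rain', B = 'sprinkler', C = 'wet grass'.
P_A, P_B, P_C = 0.2, 0.32, 0.53
P_AB = 0.002       # P(A, B)
P_ABC = 0.00198    # P(A, B, C)

# Each entry: (value computed from an identity, value listed in the text)
checks = {
    "Pxx(not A)":        (1 - P_A,              0.8),
    "Pxx(not B)":        (1 - P_B,              0.68),
    "Pxx(not C)":        (1 - P_C,              0.47),
    "Pxx(A, not B)":     (P_A - P_AB,           0.198),
    "Pxx(not A, B)":     (P_B - P_AB,           0.32),    # exact: 0.318
    "Pxx(not A, not B)": (1 - P_A - P_B + P_AB, 0.48),    # exact: 0.482
    "Pxx(A, B, not C)":  (P_AB - P_ABC,         0.00002),
}
for name, (computed, listed) in checks.items():
    assert abs(computed - listed) < 0.005, name
print("all listed values consistent to within rounding")
```

The small discrepancies (0.318 vs 0.32, 0.482 vs 0.48) are just the rounding in the listing, consistent with the small total stress of 0.0005755 reported for this exact case.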

P (not A) = P[2][0][0][0][0][0][0][0][0][0] = 0.100 -> 0.100000

P (C) = P[0][0][1][0][0][0][0][0][0][0] = 0.200 -> 0.199805

P ( F,C) = P[0][0][1][0][0][1][0][0][0][0] = 0.700 -> 0.133141

P (C,not B,A) = P[1][2][1][0][0][0][0][0][0][0] = 0.200 -> 0.008345

P (C,I,J,E,not A) = P[2][1][0][1][0][0][0][1][1][0] = 0.020 -> 0.003627

P (B,F,not C,D) = P[0][1][2][1][0][1][0][0][0][0] = 0.300 -> 0.004076


Implicate Order as Ontology For Complex System – Creating Generative Transformation

For a messy complex system - undergoing generative transformation - Implicate Order provides the direction.


Implicate Order as an Ontology v1.1


When Cartesian Breaks Down – Implicate Order Reigns

What Came First, Design (Gene – DNA – Chromosomes) or Building Material (Protein)? :- The Architecture Paradox

Below is the reply to the question: what came first, DNA or Protein? One seems not to exist without the other. This is a paradox.

A system is a combination of questions (problem domain) and answers (solution domain): why, how, what, when, where. It also alludes to transformation, let’s say driven by entropy. So, system dynamics.

A. DNA precedes Protein – possible

B. Relation between Gene, DNA, Chromosome and what it does to Protein

C. Paradox of DNA vs Protein – leads into Intelligent Design Discussion

D. A method to bring our mind to study them and arrive at more hypotheses and postulates, or just appreciate the grandeur we live in.

A. Paper presented at NIH – discusses the possibility of DNA preceding protein.

DNA before proteins? Recent discoveries in nucleic acid catalysis strengthen the case.


B. Think Architecture, Talk Language. I think we are struggling to represent a complex concept structurally – a typical architecture problem, discussed within the constraints of the linearity of language.

a. Instruction Set
A chromosome is a molecule containing DNA.
DNA is made up of gene segments.
A gene carries the basic unit of code, a stroke on the piano.
A chromosome describes the script, the note on the piano = a set of strokes.
{Chromosome [(DNA(GENE)]} = information = a coded tune for a set of functions to emerge.
I think chromosomes also know how to orchestrate the choreography that finally realizes the living organism.

b. Construction Material
Protein is the building block. It gets instructions from the set of chromosomes, and it builds, creating complex forms from simple building blocks. Complex forms are uniquely created with different functional capabilities.

The more complex the living organism, the more complex its sets of chromosomes; simpler organisms have smaller numbers of chromosome sets. Humans have 23 pairs, that is, 46 chromosomes.

DNA, Chromosomes and Gene

Discussion on DNA – Protein

C. The paradox of DNA vs Protein, leading into the discussion about Intelligent Design below


D. My own spin – studying the Implicate Order, proposed by David Bohm, and System Dynamics

The Design (DNA) was created either by a designer or by an “emergence process – emergent architecture”. Also, it could be affected by either causality or acausality.

Chromosomes (DNA) are the implicate order – by a generative process they create the explicate order (everything external) in rich diversity. It is an architecture whose ontology contains the process of creation (notes, tunes, choreography, and orchestration).

Complex subjects or ideas need to be investigated employing integrative tools. To understand the how vs what paradox, we have applied ideas from what we have understood of the world from the following disciplines: biology, chemistry, physics, economics, and mathematics. It is the fight for resources (economics) which drives the emergence of life forms from simpler forms into complex forms. The fight for resources is both inter and intra. The Fiscal Cliff is the fight amongst ourselves, rich and poor: who should own the resources and who should benefit. It is a fight for control and, eventually, for our life.

The implicate order, studied within the notion of a system driven by entropy, concerns energy and the need to achieve stability via an entropic journey creating order and chaos. This creates various social structures with varying stresses. Order and chaos are English words describing the state of entropy as it dynamically exists in a continuum – System Dynamics. This is very well discussed in the video below.

A very beautiful video to watch, to see if a different perspective on thermodynamics helps our understanding of complexity, and even more of life itself.

password “montana”

The unique form that a living organism takes could probably be described by combining macro and micro physics (quantum dynamics) with chemistry. Applying Pauli’s Exclusion Principle, different energy patterns affect composition, so we have a periodic table.

Implicate Order – Descriptive Mechanism for Large System(s) of Systems Behavior

Past is your Enemy

Changing patterns. Moving targets. Contending objectives. Subjective Strategies. Difficult to align objective actions. Probabilistically Deterministic. High Occurrence of Random events.

When System Behavior is Characterized by the Cartesian Dilemma,

Then “Implicate Order” as Ontology is the answer


When the Cartesian System Breaks Down – Implicate Order Reigns

When System is a Spaghetti – The Cartesian System Breakdown

Spaghetti – Metaphor for Complexity

An enterprise (system) in reality is a complex interweaving mass. Architecture is a mere attempt to represent the complex entity via a systematic process while relying on some sense of an ontology, which has classically relied on the Cartesian system. The dynamics emerging from the complex interplay among the various resources, with people being the most prominent, contribute the energy responsible for any work done. Altogether, in some sense, the enterprise can best be visualized as spaghetti, each strand being an instance of any ‘thing’ that the mind is capable of envisioning at that time: for instance a ‘business process’, a ‘procedure’, or a technology enabler. The larger the enterprise, the more complex the spaghetti depiction becomes, and the dynamic interplay, experiencing the Cartesian Dilemma, eventually contributes to the breakdown of the classical ontological model. And as always, an enterprise has a complex co-existence with other enterprises. This means simple mathematical models can no longer be applied; especially as the complexity compounds, the degrees of freedom reduce (Theory of Constraints). This is the real reason for the present mess that Wall Street, and consequently Main Street, is in.

One cannot argue that the mortgage is the main contributor to the current financial systemic mess; it is but one iota. Overall it is a complex systemic issue. Probably some theory that works in explaining the microcosms and their interdependency on the behavior of other systems can best explain the ‘logic’, if any, where the mechanistic ‘cause and effect’ has a diminishing role to play. This is where the notion of ‘implicate order’ becomes important to understanding the ontology behind ‘Enterprise Architecture’.

Note: The Cartesian system inherently promotes extreme linearization. This results in Gaussian distributions and hence disparities in the system. Such a system is considered to contain a systemic fault owing to the Cartesian Dilemma.

CIAO – Interesting pursuit after EA Ontology

Interesting work in EA Ontology space. Especially the ‘separation of concerns’ is well applied to the notion of ‘ontology’ and its separation from the principles of construction.

The transcript below is compiled from the following link:

Cooperation & Interoperability – Architecture & Ontology


CIAO! is an initiative whose mission is to stimulate the development of the emerging discipline of enterprise engineering, as well as its practical application in improving the societal performance of enterprises. By an enterprise is understood any kind of enterprise, like commercial, not-for-profit, governmental, etc., as well as any kind of alliance between enterprises: enterprise networks, supply chains, etc. The name CIAO! is an acronym for Cooperation, Interoperability, Architecture, and Ontology. The first two constitute the key themes of CIAO!: the cooperation within and between enterprises and the interoperability of their information systems. The second two constitute the key concepts in addressing these themes. The concepts of architecture and ontology have a specific meaning in CIAO!. Architecture is defined as the normative restriction of design freedom. It is made operational by means of principles that guide the design of systems (enterprises, information systems, etc.). Ontology is defined as the implementation independent understanding of the construction and operation of systems (enterprises, information systems, etc.).


The traditional organizational sciences take a function-oriented point of view towards enterprises. They are analytic by nature, and the dominant type of model of an enterprise, therefore, is the black-box model. The black-box based knowledge provided is sufficient and adequate for managing the behavior of an enterprise within settled ranges of control; it is inadequate for changing an enterprise. Change, however, is the imperative adagium of current (and certainly future) enterprises. They need to be agile and flexible since they (will) operate in an increasingly dynamic and global environment. Moreover, enterprises need to be transparent; they will be held publicly accountable for every effect they produce. To meet these requirements, an engineering, construction-oriented, point of view has to be taken. Synthetic knowledge is needed, in addition to the analytic knowledge, that effectively supports the changing of an enterprise. Developing and incorporating this kind of knowledge in the organizational sciences means no less than a paradigm shift. The emerging discipline that takes the needed construction-oriented point of view, and that will provide the synthetic knowledge for letting enterprises continuously adapt to threats and challenges, is called enterprise engineering.


The current situation resembles very much the one that existed around 1970 in the computing sciences. At that time a revolution took place in the way people conceived information technology and its applications. Since that time, people have been aware of the distinction between the form and the content of information. This revolution marks the transition from data system engineering to information system engineering. The comparison with the computing sciences is not an arbitrary one: the key enabling technology for shaping future enterprises is modern information and communication technology (ICT). A growing insight in the computing sciences is that the intention in communicative action is the central notion for profoundly understanding the relationship between organization and ICT. So, just as the content of communication was put on top of its form in the 1970s, the intention is put on top of the content now. The current revolution marks the transition from information system engineering to enterprise engineering.