Month: July 2017

The Bioingine.com :- “HDN = Semantic Knowledge + General Graph + Probability = Best Decision Making”


METHODS USED IN The BioIngine APPROACH: ROOTS OF THE HYPERBOLIC DIRAC NETWORK (HDN) – Dr. Barry Robson

General Approach: Solving the Representation and Use of Knowledge for the Real World.

Blending Systematically Produced and Unsystematically Existing Information and Synthesizing the Knowledge.

The area of our efforts in the support of healthcare and biomedicine is essentially one in Artificial Intelligence (AI). For us, however, this means a semantic knowledge engineering approach intimately combined with principles of probability theory, information theory, number theory, theoretical physics, data analytic principles, and even linguistic theory. The unification of these contributions, in the manner described briefly below, is the general theory of an entity called the Hyperbolic Dirac Net (HDN), a means of representing and probabilistically quantifying networks of knowledge, both of a simple probabilistic nature and of an even more sophisticated probabilistic semantic nature, in a way that has not been possible for previous approaches. It provides the core methodology for making use of medical knowledge in the face of considerable uncertainty and risk in the practice of medicine, and not least the need to manage massive amounts of diverse data, including both structured data and unstructured natural language text. As described here, the ability of the HDN and its supporting Q-UEL language to handle also the kind of interactions between things that we describe in natural language by using verbs and prepositions, to take account of the complex lacework of interactions between things, and to do so when our knowledge is of probabilistic character, is of pressing and crucial importance to the development of a higher level of information technology in many fields, but particularly in medicine.

In a single unified strike, the mathematics of the HDN, adapted in a virtually seamless and natural way from a standard in physics due to Nobel Laureate Paul Dirac as discussed below, addresses several deficiencies (both well-known and less well advertised) in current forms of automated inference. These deficiencies largely relate to assumptions and representations that are not fully representative of the real world. They are touched upon later below, but the general one of most strategic force is as follows. As is emphasized and discussed here, of essential importance to modern developments in many industries and disciplines, and not least in medicine, is the capture of large amounts of knowledge in what we call a Knowledge Representation Store (KRS)[1]. Each entry or element in such a store is a statement about the world. Whatever the name, the captured knowledge includes basic facts and definitions about the world in general, but also knowledge about specific cases (looking more like what is often meant by “data”), such as a record about the medical status of a patient or a population. From such a repository of knowledge, general and specific, end users can invoke automated reasoning and inference to predict, aid decision making, and move forward acting on current best evidence. Wide acceptance and pressing need are demonstrated (see below) by numerous efforts, from the earliest Expert Systems to the emerging Semantic Web, an international effort to link not just web pages (as with the World Wide Web) but also data and knowledge, and comparable efforts such as the Never-Ending Language Learning system (NELL) at Carnegie Mellon University. The problem is that there is no single agreed way of actually using such a knowledge store in automated reasoning and inference, especially when uncertainty is involved.

This problem is perhaps in part because there is the sense that something deep is still missing in what we mean by “Artificial Intelligence” (AI), and in part because of lack of agreement on how to reason with connections of knowledge represented as a general graph. The latter is even to the extent that the popular Bayes Net is, by its original definition, a directed acyclic graph (DAG) that ignores or denies cyclic paths in knowledge networks, in stark contrast to the multiple interactions in a “mind map” concept map in student study notes, a subway map, biochemical pathways, physiological interactions, the wiring of the human brain, and the network of interactions in ecology. Primarily, however, the difficulty is that the elements of knowledge in the Semantic Web and other KRS-like efforts are for the most part presented as authoritative assertions rather than treated probabilistically. This is despite the fact that the pioneering Expert Systems for medicine needed from the outset to be essentially probabilistic in order to manage uncertainty in the knowledge used to make decisions and in the combining of it, and to deduce the most probable diagnosis and select the best therapy amongst many initial options, although here too there is lack of agreement, and almost every new method represented a different perception and use of uncertainty. Many of these aspects, such as use of a deeper theory and arrangement of knowledge elements into a general graph, might be addressed in the way a standard repository of knowledge is used, i.e. applied after a KRS is formed, but a proper and efficient treatment can only associate probability with the elements of represented knowledge from the outset (even though, like any aspect of knowledge, the probabilities should be allowed to evolve by refinement and updating). One cannot apply a probabilistic logic without probabilities in the axioms, or at least not to any advantage. Further, it makes no sense to have elements of knowledge, however they are used, that state unequivocally that some things are true, e.g. that obese patients are type 2 diabetics, because it is a matter of probability: in this case the probability describes the scope of applicability of the statement to patients, i.e. only some 20-30% are so. Indeed, in that case, using only certainty or near-certainty, this medically significant association might never have appeared as a statement in the first place. Note that the importance of probabilistic thinking is also exemplified here by the fact that the reader may have been expecting or thinking in terms of “type 2 patients are obese”, which is not the same thing and has a probability of about 90%, closer to certainty, but noticeably still not 100%. All the above aspects, including the latter “two way” diabetes example, relate to matters that are directly relevant to, and the differentiating features of, an HDN. The world that humans perceive is full of interactions in all directions, yet full of uncertainty, so we cannot only say that

“HDN = Semantic Knowledge + General Graph + Probability = Best Decision Making”

but also that any alternative method runs the risk of being seriously wrong or severely approximate if it ignores any of semantic knowledge, the general graph, or probability. For example, the popular Bayes Net, as discussed below, is probabilistic, but it uses only conditional and prior probabilities as knowledge and is a very restricted form of graph. Conversely, an approach like that of IBM’s well-known Watson is clearly limited, and leaves a great deal to be sifted, corrected, and reasoned over by the user, because it is primarily a matter of “a super search engine” rather than inferring from an intricate lacework of probabilistic interactions. Importantly, even if it might be argued that some areas of science and industry can for the most part avoid such subtleties relating to probability, it is certainly not true in medicine, as the above diabetes example illustrates. From the earliest days of clinical decision support it clearly made no sense to pick, for example, “a most true diagnosis” from a set of possible diagnoses each registered only, on the evidence available so far, as true or false. What is vitally important to medicine is a semantic system that the real world merits, one capable of handling degrees of truth and uncertainty in a quantitative way. Our larger approach, additionally building on semantic and linguistic theory, can reasonably be called probabilistic semantics. By knowledge in an HDN we also mean semantic knowledge in general, including that expressed by statements with relationships that are verbs of action. In order to be able also to draw upon the preexisting Semantic Web and other efforts that contain such statements, however, the HDN approach is capable of making use of knowledge represented as certain[2].
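To make the asymmetry in the diabetes example concrete, here is a minimal worked sketch in Python; the prevalence figures are hypothetical, chosen only so that the two conditional probabilities land in the ranges quoted above.

# Bayes' rule illustration: P(A|B) and P(B|A) are different quantities.
# The prevalences below are hypothetical, chosen only for illustration.

p_type2 = 0.09                 # assumed prevalence of type 2 diabetes
p_obese = 0.35                 # assumed prevalence of obesity
p_obese_given_type2 = 0.90     # "type 2 patients are obese" (~90%, per the text)

# Bayes' rule: P(type2 | obese) = P(obese | type2) * P(type2) / P(obese)
p_type2_given_obese = p_obese_given_type2 * p_type2 / p_obese

print(f"P(obese | type2) = {p_obese_given_type2:.2f}")   # 0.90
print(f"P(type2 | obese) = {p_type2_given_obese:.2f}")   # 0.23, i.e. in the 20-30% range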

Knowledge and reasoning from it do not stand alone from the rest of information management in the domain that generates and uses them, and this is a matter to be seriously attended to when, in comparison to many other industries such as finance, interoperability and universally accepted standards are lacking. Importantly, the application of our approach, and our strategy for healthcare and biomedicine, covers a variety of areas in healthcare information technology that we have addressed as proofs-of-concept in software development, welded into a single focus by a unification made possible through the above theoretical and methodological principles. These areas include digital patient records, privacy and consent mechanisms, clinical decision support, and translational research (i.e. getting the results of relevant biomedical research, such as new genomics findings, to physicians faster). All of these are obviously required to provide information for actions taken by physicians and other medical workers, but the broad sweep is also essential because no aspect stands alone: there has been a need for new semantic principles, based on the core features of the AI approach, to achieve interoperability and universal exchange.

  1. There are various terms for such a knowledge store. “Knowledge Representation Store” is actually our term emphasizing that it is (in our view) analogous to human memory as enabled and utilized by human thought and language, but now in a representation that computers can readily read directly and use efficiently (while in our case also remaining readable directly by humans in a natural way).
  2. In such cases, probability one (P=1) is the obvious assignment, but strictly speaking in our approach this technically means that it is an assertion that awaits refutation, in the manner of the philosophy of Karl Popper, and consistent with information theory in which the information content I of any statement of probability P is I = -ln(P), i.e. we find information I=0 when probability P=1. A definition such as “cats are mammals” seems an exception, but then, as long as it stands as a definition, it will not be refuted.
  3. These are the rise of medical IT (and AI in general) as the next “Toffler wave of industry”, the urgent need to greatly reduce inefficiency and the high rate of medical error, especially considering the strain placed on healthcare systems by the booming elderly population, the rise of genomics and personalized medicine, their impact on the pharmaceutical industry, belief systems and ethics, and their impact on the increased need for management of privacy and consent.


2004 to 2017 Convergence of Big Data, Machine Learning, Semantic Web, Graph Analytics, High Performance Computing – All These and Yet Big Data Analytics Sucks

2004 – Tim Berners-Lee

 

Semantic Web

OWL and RDF were introduced to address the Semantic Web and also knowledge representation. This really called for Big Data technology that was still not ready.

https://www.w3.org/2004/01/sws-pressrelease

 

2006 – Hadoop: Apache Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware.

https://opensource.com/life/14/8/intro-apache-hadoop-big-data

 

2008

Scientific Method Obsolete for Big Data

“The Data Deluge Makes the Scientific Method Obsolete”

 

2008 – MapReduce

Large Data Processing – classification

Google created the framework for MapReduce. MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, as shown in the paper.

• https://research.google.com/archive/mapreduce.html
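As a flavour of the programming model, here is a minimal local Python sketch of the classic word-count example (not Google’s or Hadoop’s actual implementation): the map function emits (word, 1) pairs and the reduce function merges all values sharing the same key.

from itertools import groupby
from operator import itemgetter

def map_fn(document):
    # Map: emit an intermediate (key, value) pair for every word.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_fn(key, values):
    # Reduce: merge all intermediate values associated with the same key.
    return (key, sum(values))

def mapreduce(documents):
    intermediate = [pair for doc in documents for pair in map_fn(doc)]   # map phase
    intermediate.sort(key=itemgetter(0))                                 # shuffle/sort phase
    return [reduce_fn(key, (v for _, v in group))                        # reduce phase
            for key, group in groupby(intermediate, key=itemgetter(0))]

print(mapreduce(["the data deluge", "the scientific method"]))
# [('data', 1), ('deluge', 1), ('method', 1), ('scientific', 1), ('the', 2)]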

 

2009 – Machine Learning: Emergence of Big Data machine learning frameworks and libraries

 

2009 – Apache Mahout: Machine learning on Big Data introduced. Apache Mahout is a linear algebra library that runs on top of any distributed engine for which bindings have been written.

https://www.ibm.com/developerworks/library/j-mahout/

Mahout ML is mostly restricted to set theory. Apache Mahout is a project of the Apache Software Foundation to produce free implementations of distributed or otherwise scalable machine learning algorithms, focused primarily on the areas of collaborative filtering, clustering and classification.

 

 

2012 – Apache Spark: Apache Spark introduced to deal with very large data and in-memory processing. It is an architecture for cluster computing that speeds up computation by as much as 100 times compared with slow, disk-based MapReduce and also better solves parallelization of algorithms. Apache Spark is an open-source cluster-computing framework, originally developed at the University of California, Berkeley’s AMPLab.

https://en.wikipedia.org/wiki/Apache_Spark

 

Mahout vs. Spark – The difference between Apache Mahout and Apache Spark

https://www.linkedin.com/pulse/choosing-machine-learning-frameworks-apache-mahout-vs-debajani

 

2012 – GraphX: GraphX is a distributed graph processing framework on top of Apache Spark. Because it is based on RDDs, which are immutable, graphs are immutable, and thus GraphX is unsuitable for graphs that need to be updated, let alone in a transactional manner like a graph database. GraphX can be viewed as the Spark in-memory version of Apache Giraph, which utilizes Hadoop’s disk-based MapReduce.

2013 – DARPA PPAML: https://www.darpa.mil/program/probabilistic-programming-for-advancing-machine-learning

 

Machine learning – the ability of computers to understand data, manage results and infer insights from uncertain information – is the force behind many recent revolutions in computing. Email spam filters, smartphone personal assistants and self-driving vehicles are all based on research advances in machine learning. Unfortunately, even as the demand for these capabilities is accelerating, every new application requires a Herculean effort. Teams of hard-to-find experts must build expensive, custom tools that are often painfully slow and can perform unpredictably against large, complex data sets.

The Probabilistic Programming for Advancing Machine Learning (PPAML) program aims to address these challenges. Probabilistic programming is a new programming paradigm for managing uncertain information.
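To illustrate the paradigm rather than any particular PPAML system, the toy Python sketch below describes a small generative model and then conditions on an observation with a generic rejection-sampling inference routine; the model and its numbers are invented for illustration, and the point is only that the model and the inference engine are written separately.

import random

def model():
    # Generative model: a patient is either high-risk or not, which influences
    # whether an abnormal test result is observed.
    high_risk = random.random() < 0.10            # prior: 10% are high-risk
    p_abnormal = 0.80 if high_risk else 0.15      # likelihood of an abnormal test
    abnormal_test = random.random() < p_abnormal
    return {"high_risk": high_risk, "abnormal_test": abnormal_test}

def infer(model, condition, query, n=100_000):
    # Generic inference by rejection sampling: keep only the samples consistent
    # with the observed evidence, then report the average value of the query.
    kept = [s for s in (model() for _ in range(n)) if condition(s)]
    return sum(query(s) for s in kept) / len(kept)

p = infer(model,
          condition=lambda s: s["abnormal_test"],     # observe: the test was abnormal
          query=lambda s: s["high_risk"])             # ask: was the patient high-risk?
print(f"P(high_risk | abnormal test) ~ {p:.2f}")      # roughly 0.37 under this toy model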

Ingine responded to DARPA’s RFQ with a detailed architecture based on Barry’s innovation in the algorithm, which to some extent solves the above ask. Importantly, it solves probabilistic ontology for knowledge extraction from uncertainty and semantic reasoning.

2017 – DARPA Graph Analytics https://graphchallenge.mit.edu/scenarios

 

In this era of big data, the rates at which these data sets grow continue to accelerate. The ability to manage and analyze the largest data sets is always severely taxed.  The most challenging of these data sets are those containing relational or network data. The HIVE challenge is envisioned to be an annual challenge that will advance the state of the art in graph analytics on extremely large data sets. The primary focus of the challenges will be on the expansion and acceleration of graph analytic algorithms through improvements to algorithms and their implementations, and especially importantly, through special purpose hardware such as distributed and grid computers, and GPUs. Potential approaches to accelerate graph analytic algorithms include such methods as massively parallel computation, improvements to memory utilization, more efficient communications, and optimized data processing units.

 

2013 – Other Large Graph Analytics Reference: An NSA Big Graph Experiment

http://www.pdl.cmu.edu/SDI/2013/slides/big_graph_nsa_rd_2013_56002v1.pdf

2017 – Data Science: Dealing with Large Data Still Sucks

 

Despite the emergence of Big Data, machine learning, graph techniques and the Semantic Web, the convergence is still far off. In particular, semantic / cognitive / knowledge extraction techniques are very poorly defined, and there does not exist a framework approach to knowledge engineering leading into machine learning and automation in knowledge extraction, representation, learning and reasoning. This is what Q-UEL and the HDN solve at the algorithmic level.

Data Mining against Healthcare Waste – $1.6 Trillion


Revolutionary Hyperbolic Dirac Net (HDN) based Data Mining Technique for Ferreting Out Rogue Claims – Dr. Barry Robson, Ingine, Inc.

DiracSmash, or just SMASH for short, is a Q-UEL application in the sense that it is compatible with Q-UEL. It extracts probabilistic knowledge from csv files and renders it in the form of Q-UEL tags. DiracSmash is a development of techniques developed in The BioIngine.com, DiracMiner, DiracBuilder and other Q-UEL applications, and is being progressively adapted to treat sporadic data, such as payment claims data, efficiently. Note that Q-UEL has a full set of tags enabling translation of codes for diseases, procedures, triggers, complications, management etc. to allow conversion from the codes to more readable forms. The typical and main purpose of DiracSmash is twofold, exemplified by the following use cases:

i. “data mining” and construction of potentially huge inference nets to obtain, e.g., the probability that a payment will normally be above a certain amount given the input data, when for example a particular patient has made a claim for that amount, and

ii. “pattern discovery”, e.g. to help explain this probability by discovering patterns that are associated with cases where the payment is above that amount.

For example, it may build an HDN inference network (analogous to a Bayes Net but not confined to a Directed Acyclic Graph) implying thousands or millions of conditional probabilities, though for special reasons discussed below (sporadic data), there are in this payment example merely 85 odds ratios as positive predictive odds and 85 as the corresponding odds likelihood ratios (analogous to relative risk), with two probabilities comprising each, i.e. just 85 × 2 × 2 = 340 probabilities.
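For orientation, the sketch below shows how one such pair of numbers, a positive predictive odds and an odds likelihood ratio for a high payment given a diagnosis code, might be estimated from a claims-style table by simple counting. The column names mirror the CMS-like fields in the report below, but the rows and the exact estimators are illustrative assumptions, not the internals of DiracSmash.

import pandas as pd

def odds_and_likelihood(df, target, factor):
    # Positive predictive odds and odds-likelihood ratio of the target given the
    # factor, estimated from simple counts (target and factor are boolean Series).
    p_t_given_f = (target & factor).sum() / factor.sum()
    predictive_odds = p_t_given_f / (1.0 - p_t_given_f)
    p_f_given_t = (target & factor).sum() / target.sum()
    p_f_given_not_t = (~target & factor).sum() / (~target).sum()
    return predictive_odds, p_f_given_t / p_f_given_not_t

# Hypothetical claims rows using the CMS-like column names seen in the report below.
df = pd.DataFrame({
    "CLM_PMT_AMT":    [250, 40, 120, 90, 300, 60, 150, 20],
    "ICD9_DGNS_CD_1": ["V5832", "V5832", "V5861", "V5832", "V5832", "V5869", "V5832", "V5861"],
})
odds, lr = odds_and_likelihood(df,
                               target=df["CLM_PMT_AMT"] >= 100,
                               factor=df["ICD9_DGNS_CD_1"] == "V5832")
print(f"predictive odds = {odds:.3f}, likelihood ratio = {lr:.3f}")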

################ NET of 85 odds ratios.

################ NETforward (predictive odds) = 2.038 ######################

################ NETbackward (likelihood ratio) = 16.477 ####################

################ NETassoc (ratio of association constants) = 9.098 #########

FORWARD PROBABILITY P('CLM_PMT_AMT':='ge100' | NET) = 0.110

Joint probability ratio forward = 2.03780243432802, which should ideally agree with the following.

Joint probability ratio backward = 2.03103720070852

Real part = 2.03441981751827 (existential, coherence, extent of agreement).

Imaginary part = 0.00338261680975416 (universal, incoherence, extent of disagreement).
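The real and imaginary parts reported above are simply the half-sum and half-difference of the forward and backward joint probability ratios, which is how a hyperbolic (h-complex) HDN value is assembled; assuming that convention, the arithmetic can be checked directly with a few lines of Python.

fwd = 2.03780243432802   # joint probability ratio forward, from the report above
bwd = 2.03103720070852   # joint probability ratio backward

real_part = (fwd + bwd) / 2   # existential / coherence part
imag_part = (fwd - bwd) / 2   # universal / incoherence part (coefficient of h, with h*h = +1)

print(real_part)   # ~2.03441981751827, as reported
print(imag_part)   # ~0.00338261680975, as reported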

It can seek to help explain this with many discovered patterns, such as

<Q-UEL-PATFACTORS-3 'HCPCS_CD_32':='97110' Pfwd:=0.00000529 | if:=count:=36 | 'CLM_PMT_AMT':='ge100' 'ICD9_DGNS_CD_1':='V5832' Q-UEL-PATFACTORS-3>

<Q-UEL-PATFACTORS-7 'ICD9_DGNS_CD_1':='V5832' 'ICD9_DGNS_CD_5':='78079' 'ICD9_PRCDR_CD_4':='40390' 'HCPCS_CD_33':='94762' 'HCPCS_CD_35':='94761' Pfwd:=0.00000029 | if:=count:=2 | 'CLM_PMT_AMT':='ge100' 'ICD9_DGNS_CD_1':='V5832' Q-UEL-PATFACTORS-7>
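Purely for illustration, a rough Python sketch of how such a tag string could be pulled apart into its attribute:=value pairs and its three segments follows; the real Q-UEL grammar is considerably richer than this regular expression.

import re

def parse_quel_tag(tag):
    # Rough, illustrative parser for a tag of the form
    # <NAME attr:=value ... | if:=... | condition attrs NAME>.
    # This is not the real Q-UEL grammar, just enough to extract the pairs above.
    body = tag.strip().lstrip("<").rstrip(">")
    segments = [s.strip() for s in body.split("|")]
    pairs = re.findall(r"('?[\w.]+'?):=('?[\w.]+'?)", body)
    return {"segments": segments, "pairs": pairs}

tag = ("<Q-UEL-PATFACTORS-3 'HCPCS_CD_32':='97110' Pfwd:=0.00000529 "
       "| if:=count:=36 | 'CLM_PMT_AMT':='ge100' 'ICD9_DGNS_CD_1':='V5832' "
       "Q-UEL-PATFACTORS-3>")
print(parse_quel_tag(tag)["pairs"])
# [("'HCPCS_CD_32'", "'97110'"), ('Pfwd', '0.00000529'), ('if', 'count'), ...]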

The principles are not confined to the above scenario, nor even to payment data at all. No questions may be asked at all, and mining can still be done. Conversely, there may also be an indefinitely large list of “cases” (“conditions”, “constraints”, “denominators”) such as, say, age, blood pressure 140, etc., and the data mining will apply to these cases considered collectively, i.e. to cases that satisfy all of them. The questions asked may also be of a different nature, such as equal or not equal to a name or code (see below). For example, the DiracSmash process produces a list of all those tags having a predictive risk over 0.1, along with other supporting evidence, but this level is adjustable, as is an optional minimum number of required observations and a test on significant information content. Although, as noted above, SMASH can be run without any guidance, it is almost always given a “hitlist” file. For example

'CLM_PMT_AMT':=>'100'

'ICD9_DGNS_CD_1':='V5832'

# 'ICD9_DGNS_CD_2':='V5861'

# 'ICD9_DGNS_CD_5':='V5869'

means: predict and calculate the probability for payment amounts greater than $100, considering only cases in which 'ICD9_DGNS_CD_1':='V5832'. Convenient input is X:=value, X:=>value, X:=<value, but a full range of logical comparators, eq, ne, gt, ge, lt, le, is available. Optionally the primary condition, the second line on the list, may also use the range notation. The two entries starting with ‘#’ are simply ignored, and use of this “comment out” feature familiar to programmers allows one to experiment with various conditions and constraints. The first line is special and called the “target”. Questions asked by the first line can be greater than (gt), less than (lt), greater than or equal to (ge), less than or equal to (le), equal to (eq) or not equal to (ne), for quantitative data, or equal to (eq, here meaning the same as) or not equal to (ne, here meaning different to) specified categorical data. The alternative and more usual input is to use the following, though it is converted to the above notation internally and in reports.

'CLM_PMT_AMT':=>'100' (ge, this value or higher as opposed to less than)

'CLM_PMT_AMT':=<'100' (le, this value or lower as opposed to higher)

'CLM_PMT_AMT':='100' (eq, this value or word as opposed to anything else)
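A minimal sketch of how a hitlist of this kind might be parsed and applied as a record filter is shown below; the comparator names follow the text above, while the parsing details and the helper functions are assumptions for illustration, not DiracSmash’s actual input handling.

import operator

# The comparators named in the text: eq, ne, gt, ge, lt, le.
OPS = {"eq": operator.eq, "ne": operator.ne, "gt": operator.gt,
       "ge": operator.ge, "lt": operator.lt, "le": operator.le}

def parse_hitlist_line(line):
    # Turn e.g. 'CLM_PMT_AMT':=>'100' into (field, op, value); '#' lines are ignored.
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    for token, op in ((":=>", "ge"), (":=<", "le"), (":=", "eq")):
        if token in line:
            field, value = line.split(token, 1)
            return field.strip("' "), op, value.strip("' ")
    return None

def satisfies(record, rule):
    # Apply one parsed rule to a record (a dict of field -> value).
    field, op, value = rule
    lhs, rhs = record[field], value
    if op not in ("eq", "ne"):           # numeric comparison for quantitative data
        lhs, rhs = float(lhs), float(rhs)
    return OPS[op](lhs, rhs)

rules = [r for r in map(parse_hitlist_line, [
    "'CLM_PMT_AMT':=>'100'",
    "'ICD9_DGNS_CD_1':='V5832'",
    "# 'ICD9_DGNS_CD_2':='V5861'",       # commented out, so ignored
]) if r]

record = {"CLM_PMT_AMT": "250", "ICD9_DGNS_CD_1": "V5832"}
print(all(satisfies(record, r) for r in rules))   # True: payment >= 100 and the code matches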

Relevant Definitions

Hyperbolic Dirac Net (HDN) – A probabilistic inference-based statistical reasoning algorithm and technology described in some detail below. An HDN may be considered as related to the Bayes Net (BN, see below), but the HDN does not have the severe and unrealistic graph-theoretic constraints that define the traditional BN, and it naturally extends to a more general inference using probabilistic semantics and exploiting natural language processing. The HDN approach was developed employing the following.

Q-UEL – Quantum Universal Exchange Language (Q-UEL) is an algebraic notational language derived from the Dirac Notation, the mathematical machinery that defines quantum mechanics and a long and widely accepted standard in physics. Q-UEL was originally proposed as an interoperability language in response [8-13] to a Federal report of the President’s Council of Advisors on Science and Technology for a Universal Exchange Language (UEL) for healthcare in December 2010 [14]. Q-UEL has from the outset been applied to electronic health records and biomedical data. Its concept endures as a powerful architectural principle, managing the problem of the interchange and merging of medical data and knowledge from a variety of formats and ontologies.

Dirac Notation – The HDN and Q-UEL are both based on the long-used standard in quantum mechanics (QM) called Dirac Notation [15]. “Notation” is generally understood to be an understatement, as it is also an algebra for expressing uncertainty in observations and measurements. The notational and algebraic aspects can also map to use in the everyday world, interpreting it as a probabilistic inference algorithm with semantic applications.
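For readers unfamiliar with the algebra, the distinctive ingredient is the hyperbolic imaginary unit h with h·h = +1 (in contrast to the ordinary imaginary i with i·i = -1), which lets a single bra-ket style value carry both a forward and a backward conditional probability at once. The Python sketch below is purely illustrative of that bookkeeping, not of the full Dirac Notation or Q-UEL machinery.

from dataclasses import dataclass

@dataclass
class HValue:
    # A split-complex ("hyperbolic") number a + b*h with h*h = +1.
    # Illustrative only: here it bundles a forward and a backward probability.
    a: float   # real (symmetric / coherence) part
    b: float   # hyperbolic imaginary (antisymmetric) part

    def __mul__(self, other):
        # (a1 + b1 h)(a2 + b2 h) = (a1 a2 + b1 b2) + (a1 b2 + a2 b1) h, since h*h = +1
        return HValue(self.a * other.a + self.b * other.b,
                      self.a * other.b + self.b * other.a)

    @classmethod
    def from_fwd_bwd(cls, fwd, bwd):
        # Pack a forward value (e.g. P(A|B)-like) and a backward value (P(B|A)-like).
        return cls((fwd + bwd) / 2.0, (fwd - bwd) / 2.0)

    def fwd_bwd(self):
        return self.a + self.b, self.a - self.b

x = HValue.from_fwd_bwd(0.30, 0.90)     # e.g. the two directions of the diabetes example
y = HValue.from_fwd_bwd(2.038, 2.031)   # e.g. the forward/backward ratios in the report
print(x.fwd_bwd())                      # approximately (0.3, 0.9) recovered
print((x * y).fwd_bwd())                # forward parts multiply together, backward parts likewise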