
Part B – Healthcare Interoperability, Standards and Data Science (Resolving the Problem)

Srinidhi Boray | Ingine, Inc | Bioingine.com


By way of introduction, Ingine, Inc. is a startup in the early stages of developing the BioIngine platform, which advances data science around interoperability, particularly healthcare data mining and analytics for medical knowledge extraction. Below are some lessons learned while dealing with healthcare transformation concerns, especially the ONC’s interoperability vision.

As an introduction, we want to include the following passage from the book:

The Engines of Hippocrates: From the Dawn of Medicine to Medical and Pharmaceutical Informatics

By Barry Robson, O. K. Baek

https://books.google.com/books?id=DVA0QouwC4YC&pg=PA8&lpg=PA8&dq=MEDICAL+FUTURE+SHOCK+barry+robson&source=bl&ots=Qv1cGRIY1L&sig=BgISEyThQS-8bXt-g7oIQ873cN4&hl=en&sa=X&ved=0CCQQ6AEwAWoVChMI0a2s-4zMyAIVSho-Ch216wrQ#v=onepage&q=MEDICAL%20FUTURE%20SHOCK%20barry%20robson&f=false

MEDICAL FUTURE SHOCK

Healthcare administration has often been viewed as one of the most conservative of institutions. This is not simply a matter of the inertia of any complex bureaucratic system. A serious body with an impressive history and profound responsibilities cannot risk unexpected disruptions to public service by changing with every fashionable new convenience, just for the sake of modernity. A strong motivation is needed to change a system on which lives depend and which, for all its faults, is still for the most part an improvement on anything that went before. However, this is also to be balanced against the obligation of healthcare, as an application of science and evolving human wisdom, to make appropriate use of the new findings and technologies available. This is doubly indicated when significant inefficiencies and accidents look as if they can be greatly relieved by upgrading the system. Sooner or later something has to give, and the pressure of many such accumulating factors can sometimes force a relatively entrenched system to change in a sudden way, just as geological pressures can precipitate an earthquake. An Executive Forum on Personalized Medicine organized by the American College of Surgeons in New York City in October 2002 similarly warned of the increasingly overwhelming accumulation of arguments demanding reform of the current healthcare system…if there is to be pain in making changes to an established system, then it makes sense to operate quickly, to incorporate all that needs to be incorporated and not spin out too much the phases of the transitions, and lay a basis for ultimately assimilating less painfully all that scientific vision can now foresee. But scientific vision is of course not known for its lack of imagination and courage, and is typically very far from conservative, still making an element of future shock inevitable in the healthcare industry.

  1. Complicated vs Complex

A) In characterizing a system there are generally two views: complicated and complex. Complicated problems concern system operations and population management, while complex problems concern the multi-variability of an individual patient’s diagnosis.

The links below provide better scenarios illustrating complicated vs complex:

http://www.beckershospitalreview.com/healthcare-blog/healthcare-is-complex-and-we-aren-t-helping-by-making-it-more-complicated.html

https://www.bcgperspectives.com/content/articles/organization_design_human_resources_leading_complex_world_conversations_leaders_thriving_amid_uncertainty/

Generally, management concerns around operations, payment models, healthcare ecosystem interactions, etc. deal with delivering systemic efficiencies. These are basically complicated problems residing in the system, which, when resolved, yield the hidden efficiencies.

Everything that affects the delivery of clinical efficacy has to deal with the complex problem, mostly owing to the high dimensionality (multi-variability) of longitudinal patient data.

When both complicated and complex concerns are addressed, healthcare as an overarching complex system will begin to yield the desired performance-driven outcomes.

B) Standards around Interoperability have generally dealt with the following three levels of health information technology interoperability:

Ref:-http://www.himss.org/library/interoperability-standards/what-is-interoperability

From the above link:-

1) Foundational; 2) Structural; and 3) Semantic.

1 – “Foundational” interoperability allows data exchange from one information technology system to be received by another and does not require the ability for the receiving information technology system to interpret the data.

2 – “Structural” interoperability is an intermediate level that defines the structure or format of data exchange (i.e., the message format standards) where there is uniform movement of healthcare data from one system to another such that the clinical or operational purpose and meaning of the data is preserved and unaltered. Structural interoperability defines the syntax of the data exchange. It ensures that data exchanges between information technology systems can be interpreted at the data field level.

3 – “Semantic” interoperability provides interoperability at the highest level, which is the ability of two or more systems or elements to exchange information and to use the information that has been exchanged. Semantic interoperability takes advantage of both the structuring of the data exchange and the codification of the data including vocabulary so that the receiving information technology systems can interpret the data. This level of interoperability supports the electronic exchange of patient summary information among caregivers and other authorized parties via potentially disparate electronic health record (EHR) systems and other systems to improve quality, safety, efficiency, and efficacy of healthcare delivery.

The above levels of interoperability only deal with achieving semantic compatibility in the data transacted among the large number of myriad systems (EHRs) as they converge into a heterogeneous architecture (HIE / IoT). This addresses only the complicated concerns within the system; it does not necessarily deal with the extraction and discernment of the knowledge hidden in the complex health ecosystem. To achieve this, for simplicity’s sake, let us define the need for a second-order semantic interoperability that concerns the data mining approaches required to represent systemic medical knowledge. It is this medical knowledge, implicit, explicit and tacit, that together forms the evidence based medicine much desired to facilitate any clinical decision support system.
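To make the distinction concrete, here is a minimal, hypothetical sketch in Python (the LOINC code 4548-4 for HbA1c, the ICD-10 code E11.9 and the 6.5% threshold are illustrative assumptions, not a prescribed rule set): first-order semantic interoperability lets coded records from disparate EHRs be pooled without re-mapping, while the second-order concern is mining that pooled data for candidate medical knowledge.

```python
# A minimal sketch (not a standard API): coded records as they might arrive
# from disparate EHRs once first-order semantic interoperability is in place.
# LOINC 4548-4 (HbA1c), ICD-10 E11.9 and the 6.5% cutoff are illustrative only.
from collections import Counter

records = [
    {"patient": "p1", "loinc": "4548-4", "value": 7.2, "dx": ["E11.9"]},
    {"patient": "p2", "loinc": "4548-4", "value": 5.4, "dx": []},
    {"patient": "p3", "loinc": "4548-4", "value": 8.1, "dx": ["E11.9"]},
    {"patient": "p4", "loinc": "4548-4", "value": 6.9, "dx": []},
]

# First-order semantic interoperability: every system agrees on what
# "4548-4" means, so the records can be pooled without re-mapping.

# "Second-order" use of the same data: mine the pooled records for a
# candidate rule, P(diabetes diagnosis | HbA1c >= 6.5), as raw counts.
counts = Counter()
for r in records:
    elevated = r["value"] >= 6.5
    diabetic = "E11.9" in r["dx"]
    counts[(elevated, diabetic)] += 1

n_elevated = counts[(True, True)] + counts[(True, False)]
p_dx_given_elevated = counts[(True, True)] / n_elevated if n_elevated else 0.0
print(f"P(E11.9 | HbA1c >= 6.5) ~ {p_dx_given_elevated:.2f} from {n_elevated} cases")
```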

C) The present efforts around Interoperability, which center mostly on data standards (HL7v2, HL7v3, FHIR, C-CDA, ICD-10, LOINC, SNOMED, etc.) and clinical quality measures (QRDA), have addressed only the complicated concerns and not necessarily the complex problems. This is the vexation in quality measures reporting. While this has advanced the adoption of EHRs by hospitals, it is still far from becoming an effective decision support tool for physicians.

It must be noted that the MU2 criteria suggest that besides achieving health information exchange, pivotal to the creation of Accountable Care Organizations (ACOs), at least five health-priority or critical health-risk conditions must be addressed employing a clinical decision support system. Deservedly, this creates a need to address clinical efficacy in addition to achieving the best possible system efficiencies leading to systemic performance-driven outcomes. This means a much deeper perspective must be included in the Interoperability efforts to better drive data science around data mining that can help engage physicians in realizing performance-driven outcomes, rather than leaving physicians encumbered by reimbursement-model-driven EHRs. Also, although most EHR vendors employ C-CDA to frame the longitudinal patient view, they do not necessarily send all the data to the Health Information Exchange; this truncates the full view of the longitudinal patient record available to physicians.

D) Physicians, Primary Care, Cost of Rendering Care, and the Physician Workforce Shortage

http://www.ncbi.nlm.nih.gov/pubmed/20439861

When dealing with primary care, it is desired that today’s over-burdened physicians work, going forward, as team leads engaging a variety of healthcare professionals, while also better enabling trained nurse practitioners. It is further desired that care be rendered in lower-cost environments, moving away from higher-cost settings such as hospitals and emergency care facilities. This also means that moving away from service-based models to performance-based payment models becomes imperative.

It must be noted that changing the way an organization works by reassigning responsibilities both horizontally and vertically addresses only the complicated concerns of the system, not the complex problem. Again, it must be emphasized that data mining related to evidence based medicine, which is in a way knowledge culled from the experiences of cohorts within the health ecosystem, will play a vital role in improving the much-desired clinical efficacy, ultimately leading to better health outcomes. This begins to address the complex systemic problems, while also better engaging physicians who find mere data entry into the EHR cumbersome and intrusive, and who are unable to derive any clinical decision support from the integration of the System of systems (SoS).

  2. Correlation vs Causation

A) While we make a case for better enabling evidence based medicine (EBM) driven by data mining as a high priority in the interoperability scheme of things, we would also like to point out the need for thorough systematic review aided by automation, which is vital to EBM. This also means dealing with Receiver-Operating Characteristic (ROC) curves: http://www.ncbi.nlm.nih.gov/pubmed/15222906
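As a concrete reminder of what an ROC analysis involves, below is a minimal Python sketch that sweeps a decision threshold over a hypothetical diagnostic score and computes the resulting ROC points and area under the curve; the scores and labels are made up for illustration only.

```python
# A minimal sketch of a Receiver-Operating Characteristic (ROC) curve for a
# hypothetical diagnostic score; the scores and labels are illustrative only.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]   # model/test output
labels = [1,   1,   0,   1,   0,    0,   1,   0]      # 1 = disease present

pos = sum(labels)
neg = len(labels) - pos

# Sweep the decision threshold over every observed score.
points = []
for thr in sorted(set(scores), reverse=True):
    tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
    points.append((fp / neg, tp / pos))          # (FPR, TPR)

points = [(0.0, 0.0)] + points + [(1.0, 1.0)]

# Area under the curve by the trapezoid rule.
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(points, points[1:]))
print("ROC points:", points)
print("AUC ~", round(auc, 3))
```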

https://www.sciencebasedmedicine.org/evidence-in-medicine-correlation-and-causation/

From the above link:-

“The consensus of expert opinion based upon systematic reviews can either result in a solid and confident unanimous opinion, a reliable opinion with serious minority objections, a genuine controversy with no objective resolution, or simply the conclusion that we currently lack sufficient evidence and do not know the answer.”

Also, another reference:

Reflections on the Nature and Future of Systematic Review in Healthcare, by Dr. Barry Robson

http://www.diracfoundation.com/?p=146

In recent times Bayesian statistics has emerged as a gold standard for developing curated EBM (http://www.ncbi.nlm.nih.gov/pubmed/10383350). In this context we would like to draw attention to the fact that while correlation, developed from the consensus of the cohorts in the medical community, is important as discussed in the article linked above, it is also important to ascertain causation. This demands a more holistic Bayesian statistics as proposed in new algorithms, including those built on proven ideas in physics advancing the scope of Bayesian statistics, as developed by Dr. Barry Robson. The approach and its impact on healthcare interoperability and analytics are discussed in the link provided below.

http://www.ncbi.nlm.nih.gov/pubmed/26386548

From the above link: –

“Abstract

We extend Q-UEL, our universal exchange language for interoperability and inference in healthcare and biomedicine, to the more traditional fields of public health surveys. These are the type associated with screening, epidemiological and cross-sectional studies, and cohort studies in some cases similar to clinical trials. There is the challenge that there is some degree of split between frequentist notions of probability as (a) classical measures based only on the idea of counting and proportion and on classical biostatistics as used in the above conservative disciplines, and (b) more subjectivist notions of uncertainty, belief, reliability, or confidence often used in automated inference and decision support systems. Samples in the above kind of public health survey are typically small compared with our earlier “Big Data” mining efforts. An issue addressed here is how much impact on decisions should sparse data have.”
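The sparse-data question raised in the abstract can be illustrated with a minimal Python sketch (the counts below are invented): a raw frequentist proportion and a Bayesian beta-binomial estimate with a weak prior diverge when observations are few and converge as data accumulate, which is one simple way to see why the choice of probability notion matters for small public health samples.

```python
# A minimal sketch of the sparse-data issue raised in the abstract: with very
# few observations, a raw frequentist proportion and a Bayesian estimate with
# a weak prior can differ markedly. Numbers below are illustrative only.
def frequentist(successes, n):
    return successes / n if n else float("nan")

def bayesian_beta(successes, n, alpha=1.0, beta=1.0):
    # Posterior mean of a Beta(alpha, beta) prior updated with binomial data;
    # alpha = beta = 1 is the uniform ("know nothing") prior.
    return (successes + alpha) / (n + alpha + beta)

for successes, n in [(2, 3), (20, 30), (200, 300)]:
    print(f"n={n:4d}: frequentist={frequentist(successes, n):.3f}  "
          f"bayes={bayesian_beta(successes, n):.3f}")
# As n grows the two converge; with n=3 the prior still tempers the estimate,
# which is the kind of behavior one wants when small surveys feed decisions.
```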

B) Biostatistics, Algebra, Healthcare Analytics and Cognitive Computing

Another interesting aspect that emerges is the need for biostatistics; indeed, many doctors with MD qualifications are additionally qualifying in Public Health Management, which also deals with biostatistics. Dealing with population health on one hand and clinical efficacy on the other, interoperability via biostatistics has to deliver both views: the macro view with respect to systemic outcomes and, at the micro level, clinical efficacy. Developing such capabilities means a much grander vision for Interoperability, as discussed in OSEHRA, the VA-sponsored open source effort making VistA available to the world market at a fraction of the cost. More discussion on the OSEHRA forum is at the link below.

https://www.osehra.org/content/joint-dod-va-virtual-patient-record-vpr-iehr-enterprise-information-architecture-eia-0

From the above link:-

Tom Munnecke, the original architect of VistA: “This move to a higher level of abstraction is a bit like thinking of things in terms of algebra, instead of arithmetic. Algebra gives us computational abilities far beyond what we can do with arithmetic. Yet, those who are entrenched in grinding through arithmetic problems have a disdain for the abstract facilities of algebra.”

An interesting point to note in the discussions at the above link is that a case is being made for data science (previously called knowledge engineering during the last three decades) driving better new algorithms, including those built on proven ideas in physics, in healthcare interoperability. This helps advance the next generation of EHR capabilities, eventually emerging as a medical-science-driven cognitive computing platform. The recommendation is to employ advances in data science to move the needle from developing a deterministic or predicated System of systems (SoS) based on schemas such as FHIM (http://www.fhims.org), which proves laborious to design and is outmoded, to harnessing the data locked in the heterogeneous systems by employing advanced Bayesian statistics, new algorithms (including those built on proven ideas in physics) and especially the exploitation of algebra. This approach, delivered on a Big Data architecture as a cognitive computing platform with schema-less approaches, has huge benefits in terms of cost, business capability and time to market, delivering medical reasoning from the healthcare ecosystem as realized by the interoperability architectures.
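As a toy illustration of the schema-less argument (not any vendor’s API; the field names and documents are invented), the sketch below ingests heterogeneous records as-is and resolves the same clinical fact from differently shaped documents at query time, rather than forcing every source into one predefined schema up front.

```python
# A minimal sketch of the schema-less ingestion argument: heterogeneous
# records from different EHRs are stored as documents without forcing them
# into a single predefined schema, and queries tolerate missing fields.
# Field names below are illustrative, not any particular standard's.
store = []   # stand-in for a NoSQL document store

def ingest(doc):
    store.append(doc)   # no schema validation; keep whatever the source sent

ingest({"src": "ehr_a", "patient": "p1", "hba1c": 7.2, "meds": ["metformin"]})
ingest({"src": "ehr_b", "patient": "p2", "labs": {"4548-4": 5.4}})
ingest({"src": "hie",   "patient": "p3", "hba1c": 8.1, "smoker": True})

def hba1c(doc):
    # Resolve the same clinical fact from differently shaped documents.
    if "hba1c" in doc:
        return doc["hba1c"]
    return doc.get("labs", {}).get("4548-4")

elevated = [d["patient"] for d in store if (hba1c(d) or 0) >= 6.5]
print("Patients with elevated HbA1c across heterogeneous sources:", elevated)
```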

Quantum Theory driven (QEXL Approach) Cognitive Computing Architecture resolving Healthcare Interoperability (Big Data – HIE / ACO)

http://www.BioIngine.com


Conquering Uncertainties Creating Infinite Possibilities

(Possible application: Achieving an Algorithm-Driven ACO)


Introduction

The QEXL Approach is a systems-thinking-driven technique designed with the intention of developing “go to market” solutions for healthcare Big Data applications requiring integration between payors, providers, health management (hospitals), pharma, etc., where the systemic complexities, teetering on the “edge of chaos,” pose enormous challenges in achieving interoperability owing to the plethora of healthcare system integration standards and the management of unstructured data in addition to structured data ingested from diverse sources. Additionally, the QEXL Approach targets the creation of tacit knowledge sets by inductive techniques and probabilistic inference from diverse sets of data characterized by volume, velocity and variability. In fact, the QEXL Approach facilitates algorithmically driven proactive public health management, while rendering business models for achieving an Accountable Care Organization most effective.

The QEXL Approach is an integrative, multivariate, declarative cognitive architecture proposition to develop Probabilistic Ontology driven Big Data applications creating interoperability among healthcare systems, where it is imperative to develop an architecture that enables systemic capabilities such as evidence based medicine, pharmacogenomics, biologics, etc., while also creating opportunities for studies such as of Complex Adaptive Systems (CAS). Such an approach is vital to developing an ecosystem response that mitigates the healthcare systemic complexities. In particular, CAS studies make it possible to integrate both macro aspects (such as epidemiology) related to efficient healthcare management outcomes and micro aspects (such as evidence based medicine and pharmacogenomics, which help achieve personalization of medicine) achieving efficacy in healthcare delivery, to help achieve systemic integrity. In the QEXL Approach, QEXL stands for “Quantum Exchange Language”, and Q-UEL is the initial proposed language. The QEXL Consortium embraces Quantal Semantics, Inc. (NC) and Ingine, Inc. (VA), and collaborates with The Dirac Foundation (UK), which has access to Professor Paul Dirac’s unpublished papers. The original consortium grew as a convergence of responses to four stimuli:

  1. The “re-emerging” interest in Artificial Intelligence (AI) as “computational thinking”, e.g. under the American Recovery Act;
  2. The President’s Council of Advisors on Science and Technology December 2010 call for an “XML-like” “Universal Exchange Language” (UEL) for healthcare;
  3. A desire to respond to the emerging third-generation World Wide Web (the Semantic Web) with an initiative based on generalized probability theory – the Thinking Web; and
  4. In the early course of these efforts, a greater understanding of what Paul Dirac meant in his Nobel Prize dinner speech, where he stated that quantum mechanics should be applicable to all aspects of human thought.

The QEXL Approach

The QEXL Approach is developed based on considerable experience in expert systems, linguistic theory, neurocognitive science, quantum mechanics, mathematical and physics-based approaches in enterprise architecture, Internet topology, filtering theory, the Semantic Web, knowledge lifecycle management, and principles of cloud organization and integration. The idea of a well-formed probabilistic programming and reasoning language is simple. Importantly, also, its more essential features for reasoning and prediction are correspondingly simple, such that the programmers are not necessarily humans but structured and unstructured (text-analytic) “data mining” software robots. We have constructed a research prototype Inference Engine (IE) network (and more generally a program) that “simply” represents a basic Dirac notation and algebra compiler, with the caveat that it extends to Clifford-Dirac algebra; notably, a Lorentz rotation of the imaginary number i (such that ii = -1) to the hyperbolic imaginary number h (such that hh = +1), corresponding to Dirac’s γ operators (γtime or γ5), is applied.

[Outside the work of Dr. Barry Robson, this approach has not been tried in the inference and AI fields, with one highly suggestive exception: since the late 1990s it has occasionally been used in the neural network field by T. Nitta and others to solve the XOR problem in a single “neuron” and to reduce the number of “neurons” generally. Also suggestively, in particle physics it may be seen as a generalization of the Wick rotation (time → i × time) used by Richard Feynman and others to render wave mechanics classical. It retains the mathematical machinery and philosophy of Schrödinger’s wave mechanics but, instead of probability amplitudes as wave amplitudes, it yields classical but complex probability amplitudes encoding two directions of effect: “A acts on B, and B differently on A”. It maps to natural language, where words relate to various types of real and imaginary scalar, vector, and matrix quantities. Dirac’s bra-ket ⟨A|B⟩ becomes the XML-like semantic triple ⟨A | relationship | B⟩.]
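For readers unfamiliar with the hyperbolic imaginary number h mentioned above, the short Python sketch below does nothing more than demonstrate its algebra (hh = +1, in contrast to ii = -1) using split-complex arithmetic; it does not attempt to reproduce how Q-UEL or RQSA actually employs these quantities.

```python
# A minimal sketch of the hyperbolic imaginary number h mentioned above
# (hh = +1), implemented as a split-complex number a + h*b. How Q-UEL/RQSA
# actually uses these quantities is not reproduced here; this only shows the
# algebra of h itself.
from dataclasses import dataclass

@dataclass
class Hyperbolic:
    a: float   # real part
    b: float   # coefficient of h

    def __mul__(self, other):
        # (a + h b)(c + h d) = (ac + bd) + h(ad + bc), since h*h = +1
        return Hyperbolic(self.a * other.a + self.b * other.b,
                          self.a * other.b + self.b * other.a)

    def __add__(self, other):
        return Hyperbolic(self.a + other.a, self.b + other.b)

h = Hyperbolic(0.0, 1.0)
print(h * h)                      # Hyperbolic(a=1.0, b=0.0)  ->  h*h = +1
print(complex(0, 1) ** 2)         # (-1+0j)                   ->  i*i = -1
```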

The QEXL Approach involves the following interdependent components.

  • Q-UEL (Probabilistic Inference + Phenomenon Of Interest): Addresses global issues that potentially pervade all human endeavors, and hence universal interoperability is of key importance
  • Kodaxil (Inference Engine + Semantic Inferencing): A project addressing the universal meaning underlying diverse natural languages on the Internet, and the use of that in knowledge representation
  • Fluxology (Inference Engine + Decentralized Infrastructure): A link infrastructure for intra- and inter-cloud interoperability and integration in a coherent high-level “metaware” environment. This component can also be explored to be replaced with simpler industry-ready solutions such as the MarkLogic® Enterprise NoSQL Database on the Hadoop Distributed File System.

In an endeavor of this kind the partitions of work are inevitably artificial; it is important that this does not impede the integrity of optimal solutions. The most important aspect of the QEXL Approach is, in essence, that the Probabilistic Inference (PI) and the data architecture for the Inference Engine (IE) are architecturally designed to be cooperative: software robots are created as PI and IE interact, and the inference knowledge gained by PI and IE provides rules for solvers (robots) to self-compile and conduct queries, etc. This is the grandeur of the scheme: the approach facilitates programming through compilers that make writing the inference network easy, yet the inference net need not be written as input code to compile, with the exception of reusable metarules expressed as Dirac expressions with variables, which process other rules by categorical and higher-order logic. The robots are designed and programmed to do the remaining coding required to perform as solvers, so the notion of a compiler disappears under the hood. The robots are provided with well-formed instructions as well-formed queries. Once inferences are formed, different “what-if” questions can be asked: given that probability, or that being the case, what is the chance of…, and so on. It is as if, having acquired knowledge, the Phenomenon Of Interest (POI) is in a better state to explore what it means.

Hyperbolic Dirac Networks (HDNs) are inference networks capable of overcoming the limitations imposed by Bayesian nets (and statistics) and creating generative models richly expressing the “Phenomenon Of Interest” (POI) by the action of expressions containing binding variables. This may be thought of as an expert system analogous to Prolog data and Prolog programs that act upon the data, albeit here a “probabilistic Prolog”. The advantages over Bayes nets as a commonly used inference method should be stated up front, but rather than competing with such methods the approach may be regarded as extending them. Indeed, a Bayes net, a static directed acyclic conditional probability graph, is a subset of the Dirac net, a static or dynamic general bidirectional graph with generalized logic and relationship operators, i.e. empowered by the mathematical machinery of Dirac’s quantum mechanics.
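The bidirectional bookkeeping described above (“A acts on B, and B differently on A”) can be caricatured in a few lines of Python. The sketch below is our illustrative assumption, not Robson’s exact formulation: each edge carries both a forward and a backward conditional probability, and a chain multiplies the two directions independently, whereas a conventional Bayes net propagates only one.

```python
# A sketch of the bidirectional idea behind a Hyperbolic Dirac Net, under the
# assumption (ours, for illustration) that each edge carries both P(A|B) and
# P(B|A) and that a chain multiplies the two directions separately. This is
# not Robson's exact formulation, only the "A acts on B, and B differently
# on A" bookkeeping described above.
from dataclasses import dataclass

@dataclass
class DualProb:
    fwd: float   # e.g. P(effect | cause)
    bwd: float   # e.g. P(cause | effect)

    def __mul__(self, other):
        # Chain the two directions independently.
        return DualProb(self.fwd * other.fwd, self.bwd * other.bwd)

# Illustrative numbers only: smoking -> chronic cough -> clinic visit.
edge_1 = DualProb(fwd=0.40, bwd=0.70)   # P(cough|smoker),  P(smoker|cough)
edge_2 = DualProb(fwd=0.60, bwd=0.30)   # P(visit|cough),   P(cough|visit)

chain = edge_1 * edge_2
print(f"forward product  {chain.fwd:.3f}   (Bayes-net style, one direction)")
print(f"backward product {chain.bwd:.3f}   (the direction a Bayes net drops)")
```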

The QEXL Approach Theory: Robson Quantitative Semantics Algebra (RQSA)

Developed by Dr. Barry Robson

Theory: The QEXL Approach is based on Robson Quantitative Semantics Algebra, RQSA (link to the development of the algorithm, overcoming the limitations of the gold-standard Bayesian network, to resolve uncertainty while developing a probabilistic ontology).

Impact Of The QEXL Approach

The impact of the QEXL Approach, creating a Probabilistic Ontology based on Clifford-Dirac algebra, presents immense opportunity for advancing architectures that tackle the large looming problems involving Systems of Systems, in which vast uncertain information emerges. Generally, such systems are designed and developed employing Cartesian methods, and so do not offer a viable way to deal with vast uncertain information when ridden with complexity, especially when the complexity of the context calls for multiple ontologies; such a system inherently defies Cartesian methods. The QEXL Approach develops into an ecosystem response: it overcomes the Cartesian dilemma (link to another example of the Cartesian dilemma) and allows generative models to emerge that richly express the POI. The models develop generatively such that the POI behavior, once sufficiently abstracted, lends the IE and the solvers to a variety of evidence-based studies, and also allows systemic studies pertaining to Complex Adaptive Systems and complex generative systems afflicted by multiple cognitive challenges. In particular, the QEXL Approach has the potential to address complex challenges such as evidence based medicine (EBM), a mission that DoD’s Military Health System envisions as it modernizes its electronic health record system, the Veterans Health Information Systems and Technology Architecture (VistA). Vast potential also exists in addressing the Veterans Administration’s (VA) Million Veteran Program (MVP), an effort by the VA to consolidate genetic, military exposure, health, and lifestyle information in one single database. By identifying gene-health connections, the program could consequentially advance disease screening, diagnosis, and prognosis and point the way toward more effective, personalized therapies.

Although the QEXL Approach currently targets the healthcare and pharmaceutical domains, where recognition of uncertainty is vital in observations, measurements and predictions and in the probabilities underlying a variety of medical metrics, the scope of application is much more general. The QEXL Approach is to create a generic multivariate architecture for complex systems, characterized by a Probabilistic Ontology, that employs generative order to model the “POI”, facilitating the creation of “communities of interest” by self-regulation in diverse domains of interest requiring the integration of disciplines to create complex studies. The metaphor of a “Cambrian explosion” may aptly represent the enormity of the possibilities that the QEXL Approach can stimulate in advancing studies that tackle large systemic concerns riddled with uncertain information and random events.


The inference engine can be realized with solutions such as MarkLogic NoSQL + Hadoop (HDFS). http://www.marklogic.com/resources/marklogic-and-hadoop/

It is interesting to note that, in the genesis of the various NoSQL solutions based on Hadoop, a few insights have emerged regarding the need to design the components with their cooperative existence in mind.

The Goal of The QEXL Approach: It Is All About Contextualization

The goal in employing the QEXL Approach is to realize a cognitive multivariate architecture for Probabilistic Ontology, advancing the Probabilistic Ontology based architecture for context-specific applications such as healthcare. Specifically, the QEXL Approach will develop PI that helps create generative models depicting the systemic behavior of a POI riddled with vast uncertain information. Generally, uncertainty in the vast information is introduced by the System of Systems complexity, which is required to resolve multiple ontologies, standards, etc.; these further introduce cognitive challenges. The further goal of the QEXL Approach is to overcome such challenges by addressing interoperability at all levels, including the ability to communicate data and knowledge in a way that recognizes uncertainty in the world, so that automated PI and decision-making are possible. The aim is semiotic portability, i.e. the management of signs and symbols that deals especially with their function and interactions in both artificially constructed and natural languages. Existing systems for managing semantics and language are mostly systems of symbolic, not quantitative, manipulation, with the primary exception of BayesOWL. RQSA, or Robson Quantitative Semantic Algebra (so named after its author, Dr. Barry Robson, to distinguish it from other analogous systems), underlies Q-UEL. It is the development of (a) details of a particular aspect of Dirac’s notation and algebra that is found to be of practical importance in generalizing and correctly normalizing Bayes nets according to Bayes’ theorem (i.e. controlling coherence, which, ironically, Bayes nets usually neglect, as they are unidirectional), (b) merged with the treatment of probabilities and information based on finite data using the Riemann zeta function that he has employed for many years in bioinformatics and data mining (http://en.wikipedia.org/wiki/GOR_method), and (c) the extension to more flavors of hyperbolic imaginary number to encode intrinsic “dimensions of meaning” under a revised Roget’s thesaurus system.
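The coherence point in (a) can be made concrete with a small Python sketch: Bayes’ theorem requires P(A|B)P(B) = P(B|A)P(A), and conditional probabilities mined from different data extracts can violate this. The repair shown (averaging the two implied joint probabilities and rescaling) is an illustrative assumption, not the RQSA normalization procedure.

```python
# A minimal sketch of the "coherence" constraint mentioned above: Bayes'
# theorem requires P(A|B)P(B) = P(B|A)P(A) = P(A,B). Estimates mined from
# different slices of data can violate this, and one simple repair (an
# assumption for illustration, not the RQSA procedure) is to average the two
# implied joints and rescale the conditionals.
def coherence_gap(p_a_given_b, p_b, p_b_given_a, p_a):
    return p_a_given_b * p_b - p_b_given_a * p_a

def make_coherent(p_a_given_b, p_b, p_b_given_a, p_a):
    joint = 0.5 * (p_a_given_b * p_b + p_b_given_a * p_a)   # compromise P(A,B)
    return joint / p_b, joint / p_a                          # new P(A|B), P(B|A)

# Illustrative, slightly inconsistent estimates from two different extracts:
p_a, p_b = 0.20, 0.50
p_a_given_b, p_b_given_a = 0.30, 0.80

print("gap before:", round(coherence_gap(p_a_given_b, p_b, p_b_given_a, p_a), 3))
p_a_given_b, p_b_given_a = make_coherent(p_a_given_b, p_b, p_b_given_a, p_a)
print("gap after: ", round(coherence_gap(p_a_given_b, p_b, p_b_given_a, p_a), 3))
```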

The Layers of the Architecture Created by The QEXL Approach

The QEXL Layered View

Layer 1 – Contextualization: Planning and Design Driven by Theories

A.    Probabilistic Ontology creating Inferencing leading into Evidence Based Medicine

i.     Aspects addressed by Q-UEL Tags and Kodaxil Inferencing

  1. Autonomy / Solidarity
  2. Inferencing (Kodaxil and Q – UEL)
  3. MetaData
  4. Security / Privacy
  5. Consented vs Un-consented Data
  6. Creating Incidence Rule (predicated – Q-UEL and Kodaxil)

ii.     Kodaxil: Enforcing semantics across data sources (global text and data interoperability) – the universal meaning underlying diverse natural languages on the Internet

iii.     Fluxology: Logical Meta Data Cloud (a link infrastructure for intra- and inter-cloud interoperability and integration in an international setting)

  1. Adaptive
  2. Emergent Data Usage Patterns (networks of networks – enabled by Probabilistic Ontology rules)
  3. Modeless Emergent Hierarchies
  4. Federation and Democratization Rule for Data (contract, trust, certificates, quality)

B.    Development of Probabilistic Model Representing Universal Abstraction of Phenomenon Of Interest

C.   Targeting Architecture to Application

  • Evidence Based Medicine
  • Genomics
  • Systemic Healthcare Studies
  • etc

Layer 2 – A: Operational Architecture (Logical)

A.    Reference Architecture

  1. Business Con Ops (Use cases)
  2. Conceptual Target Solution Architecture

Layer 2 – B: Data Management – Data Ingestion and Processing 

  1. The processing of entries in the source data into a form suitable for data mining
  2. The data mining of that processed data to obtain summary rules
  3. The capture of the appropriate released summary rules for inference (a minimal sketch of this pipeline follows the list)
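A minimal end-to-end sketch of these three steps, with invented field names and toy data, follows; the rule format and probability estimate are illustrative only, not the Q-UEL rule syntax.

```python
# A minimal end-to-end sketch of the three steps above, with made-up field
# names and data: (1) normalize raw source entries, (2) mine them into
# summary rules as conditional-probability counts, (3) release a rule and use
# it to answer a simple inference query.
from collections import defaultdict

raw = ["p1,smoker,copd", "p2,smoker,", "p3,nonsmoker,", "p4,smoker,copd"]

# 1. Processing: parse source entries into a uniform record shape.
records = []
for line in raw:
    pid, exposure, dx = line.split(",")
    records.append({"id": pid, "smoker": exposure == "smoker", "copd": dx == "copd"})

# 2. Data mining: summarize into rule counts for P(copd | smoker).
counts = defaultdict(int)
for r in records:
    counts[(r["smoker"], r["copd"])] += 1
n_smokers = counts[(True, True)] + counts[(True, False)]
rule = {"if": "smoker", "then": "copd",
        "p": counts[(True, True)] / n_smokers if n_smokers else 0.0,
        "support": n_smokers}

# 3. Capture the released rule for inference on a new case.
new_case = {"id": "p9", "smoker": True}
if new_case["smoker"]:
    print(f"Rule {rule['if']} -> {rule['then']}: "
          f"P = {rule['p']:.2f} (support n={rule['support']})")
```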

B.    Data Storage and Retrieval, Transactions

  1. Secure Storage and Retrieval
  2. Enable Secure Transactions
  3. Secure Data Exchange among several stake-holders and data owners

C.    Data Lifecycle, Data Organization Rules, Data Traceability to Events

  1. Security and privacy by encryption and disaggregation of the EHR in a manner that is balanced against authorized access for extraction of global clinical and biomedical knowledge.
  2. Mechanisms for fine-grained consent permitting sharing and data mining (see the sketch after this list).
  3. Mechanisms for secure alerting of patient or physician by backtrack when an authorized researcher or specialist notes that a patient is at risk.
  4. Structure and format that allows all meaningful use cases to be applied in reasonable time, including large-scale data mining.
  5. Assemblies across sources and data users forming contextual work patterns
  6. Hardened Security Framework
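For item 2 above, a minimal sketch of fine-grained consent filtering ahead of data mining is shown below; the consent scopes, purposes and record fields are invented for illustration.

```python
# A minimal sketch of fine-grained consent checking before data mining.
# Consent scopes, purposes and record fields are illustrative only.
consents = {
    "p1": {"research_mining", "care_team"},
    "p2": {"care_team"},                      # no consent for mining
    "p3": {"research_mining"},
}

records = [
    {"patient": "p1", "hba1c": 7.2},
    {"patient": "p2", "hba1c": 5.4},
    {"patient": "p3", "hba1c": 8.1},
]

def consented(record, purpose):
    return purpose in consents.get(record["patient"], set())

minable = [r for r in records if consented(r, "research_mining")]
print("Records released to the mining pipeline:", [r["patient"] for r in minable])
```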

D.    Large EHR repository scaling

E.    Data Mining Rules

F.     Extracting and creating Incidence Rules

G.    Experimenting, observing and creating Semantic Inferences

H.    Visualization 

The two layers below can be implemented on a variety of Big Data platforms such as Hortonworks, Pivotal, and Altiscale.

Layer 3 – Application Layer (Schema-less for structured and unstructured Knowledge Repository – KRS)

Layer 4 – Infrastructure Architecture (Physical) (Hadoop and MapReduce for Large Data File-management and Processing; and Distributed / Concurrent Computations)