
Part B – Healthcare Interoperability, Standards and Data Science (Resolving the Problem)

Srinidhi Boray | Ingine, Inc | Bioingine.com


Introducing Ingine, Inc., a startup in the incipient stages of developing the BioIngine platform, which brings advances in data science to interoperability, particularly healthcare data mining and analytics for medical knowledge extraction. Below are some of the lessons learned while dealing with healthcare transformation concerns, especially the ONC’s Interoperability vision.

As an introduction, I want to include the following passage from the book

The Engines of Hippocrates: From the Dawn of Medicine to Medical and Pharmaceutical Informatics

By Barry Robson, O. K. Baek

https://books.google.com/books?id=DVA0QouwC4YC&pg=PA8&lpg=PA8&dq=MEDICAL+FUTURE+SHOCK+barry+robson&source=bl&ots=Qv1cGRIY1L&sig=BgISEyThQS-8bXt-g7oIQ873cN4&hl=en&sa=X&ved=0CCQQ6AEwAWoVChMI0a2s-4zMyAIVSho-Ch216wrQ#v=onepage&q=MEDICAL%20FUTURE%20SHOCK%20barry%20robson&f=false

MEDICAL FUTURE SHOCK

Healthcare administration has often been viewed as one of the most conservative of institutions. This is not simply a matter of the inertia of any complex bureaucratic system. A serious body with an impressive history and profound responsibilities cannot risk unexpected disruptions to public service by changing with every fashionable new convenience, just for the sake of modernity. A strong motivation is needed to change a system on which lives depend and which, for all its faults, is still for the most part an improvement on anything that went before. However, this is also to be balanced against the obligation of healthcare, as an application of science and evolving human wisdom, to make appropriate use of the new findings and technologies available. This is doubly indicated when significant inefficiencies and accidents look as if they can be greatly relieved by upgrading the system. Sooner or later something has to give, and the pressure of many such accumulating factors can sometimes force a relatively entrenched system to change in a sudden way, just as geological pressures can precipitate an earthquake. An Executive Forum on Personalized Medicine organized by the American College of Surgeons in New York City in October 2002 similarly warned of the increasingly overwhelming accumulation of arguments demanding reform of the current healthcare system…if there is to be pain in making changes to an established system, then it makes sense to operate quickly, to incorporate all that needs to be incorporated and not spin out too much the phases of the transitions, and lay a basis for ultimately assimilating less painfully all that scientific vision can now foresee. But scientific vision is of course not known for its lack of imagination and courage, and is typically very far from conservative, still making an element of future shock inevitable in the healthcare industry.

  1. Complicated vs Complex

A) In characterizing a system, there are generally two views: the complicated and the complex. Complicated problems concern system operations and population management, while complex problems concern the multi-variability of an individual patient’s diagnosis.

The links below provide scenarios that illustrate complicated vs complex problems:

http://www.beckershospitalreview.com/healthcare-blog/healthcare-is-complex-and-we-aren-t-helping-by-making-it-more-complicated.html

https://www.bcgperspectives.com/content/articles/organization_design_human_resources_leading_complex_world_conversations_leaders_thriving_amid_uncertainty/

Generally, management concerns around operations, payment models, healthcare ecosystem interactions, etc., deal with delivering systemic efficiencies. These are basically complicated problems residing in the system, which, when resolved, yield the hidden efficiencies.

Everything that affects the delivery of clinical efficacy, by contrast, is a complex problem, mostly owing to the high dimensionality (multi-variability) of longitudinal patient data.

Only when both the complicated and the complex concerns are addressed will healthcare, as an overarching complex system, begin to yield the desired performance-driven outcomes.

B) Standards around interoperability have generally dealt with the following three levels of health information technology interoperability:

Ref:-http://www.himss.org/library/interoperability-standards/what-is-interoperability

From the above link:-

1) Foundational; 2) Structural; and 3) Semantic.

1 – “Foundational” interoperability allows data exchange from one information technology system to be received by another and does not require the ability for the receiving information technology system to interpret the data.

2 – “Structural” interoperability is an intermediate level that defines the structure or format of data exchange (i.e., the message format standards) where there is uniform movement of healthcare data from one system to another such that the clinical or operational purpose and meaning of the data is preserved and unaltered. Structural interoperability defines the syntax of the data exchange. It ensures that data exchanges between information technology systems can be interpreted at the data field level.

3 – “Semantic” interoperability provides interoperability at the highest level, which is the ability of two or more systems or elements to exchange information and to use the information that has been exchanged. Semantic interoperability takes advantage of both the structuring of the data exchange and the codification of the data including vocabulary so that the receiving information technology systems can interpret the data. This level of interoperability supports the electronic exchange of patient summary information among caregivers and other authorized parties via potentially disparate electronic health record (EHR) systems and other systems to improve quality, safety, efficiency, and efficacy of healthcare delivery.

The above levels of interoperability only deal with achieving semantic compatibility between systems for the data transacted among a large number of myriad systems (EHRs) as they converge into a heterogeneous architecture (HIE / IoT). This addresses only the complicated concerns within the system; it does not necessarily deal with the extraction and discernment of the knowledge hidden in the complex health ecosystem. To achieve this, for simplicity’s sake, let us define the need for a second-order semantic interoperability that concerns the data mining approaches required to represent systemic medical knowledge. It is this medical knowledge (implicit, explicit and tacit) that together forms the evidence based medicine needed to facilitate any clinical decision support system.
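To make the distinction concrete, here is a minimal sketch in Python (field names and codes are invented for illustration) of what the second-order layer would add: the records below are assumed to have already crossed a level-3, semantically coded exchange, and the small function then mines them into a summary rule of the kind a clinical decision support system could consume.

    # Minimal sketch (hypothetical field names, illustrative codes) contrasting
    # level-3 exchange (coded records) with "second-order" knowledge extraction.
    records = [
        {"diagnosis": "ICD10:E11", "procedure": "CPT:82947", "outcome": "controlled"},
        {"diagnosis": "ICD10:E11", "procedure": "CPT:82947", "outcome": "uncontrolled"},
        {"diagnosis": "ICD10:E11", "procedure": "CPT:99213", "outcome": "controlled"},
        {"diagnosis": "ICD10:I10", "procedure": "CPT:99213", "outcome": "controlled"},
    ]

    def mine_rule(records, diagnosis_code, outcome_value):
        """Estimate P(outcome | diagnosis) from coded records: a toy 'summary rule'."""
        with_dx = [r for r in records if r["diagnosis"] == diagnosis_code]
        if not with_dx:
            return None
        hits = sum(1 for r in with_dx if r["outcome"] == outcome_value)
        return hits / len(with_dx)

    # Level-3 interoperability moves the coded records intact between systems;
    # second-order interoperability mines them into knowledge usable by a CDSS.
    print(mine_rule(records, "ICD10:E11", "controlled"))  # -> 0.666...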

C) The present efforts around interoperability, which center mostly on data standards (HL7v2, HL7v3, FHIR, C-CDA, ICD-10, LOINC, SNOMED, etc.) and clinical quality measures (QRDA), address only the complicated concerns and not necessarily the complex problems. This is the vexation in quality measures reporting. While this has advanced EHR adoption by hospitals, it is still far from making the EHR an effective decision support tool for physicians.

It must be noted that the MU2 criteria suggest that, besides achieving the health information exchange pivotal to the creation of an Accountable Care Organization (ACO), at least five health-priority or critical health-risk conditions must be addressed employing a clinical decision support system. Deservedly, this requirement creates a need to address clinical efficacy in addition to achieving the best possible system efficiencies leading to systemic, performance-driven outcomes. This means a much deeper perspective is required in the interoperability efforts to better drive data science around data mining, which can help better engage physicians in realizing performance-driven outcomes, rather than leaving physicians encumbered by reimbursement-model-driven EHRs. Also, although most EHR vendors employ C-CDA to frame the longitudinal patient view, they do not necessarily send all the data to the Health Information Exchange; this truncates the physicians’ view of the full longitudinal patient record.

D) Physicians, Primary Care, Cost of Care Delivery and the Physician Workforce Shortage

http://www.ncbi.nlm.nih.gov/pubmed/20439861

In primary care, it is desirable that today’s over-burdened physicians work, going forward, as team leads engaging a variety of healthcare professionals, while also better enabling trained nurse practitioners. It is further desirable that care be rendered in lower-cost environments, moving away from higher-cost settings such as hospitals and emergency care facilities. This also means that moving away from service-based models to performance-based payment models becomes imperative.

It must be noted that changing the way an organization works by reassigning responsibilities, both horizontally and vertically, deals only with the complicated concerns of the system, not the complex problem. Again, it must be emphasized that data mining related to evidence based medicine, which is in a way knowledge culled from the experiences of cohorts within the health ecosystem, will play a vital role in improving the much-desired clinical efficacy, ultimately leading to better health outcomes. This begins to address the complex systemic problems, while also better engaging physicians who find mere data entry into the EHR cumbersome and intrusive, and who are unable to derive any clinical decision support from the integration of the System of systems (SoS).

  2. Correlation vs Causation

A) While we make a case for better enabling evidence based medicine (EBM) driven by data mining as a high priority in the interoperability scheme of things, we would also like to point out the need for thorough systematic review aided by automation, which is vital to EBM. This also means dealing with Receiver Operating Characteristic (ROC) curves: http://www.ncbi.nlm.nih.gov/pubmed/15222906
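Since ROC curves come up here, a small illustration (numbers are made up) of the two quantities an ROC curve plots, sensitivity against 1 minus specificity, at a single decision threshold:

    # One point on an ROC curve, from an illustrative confusion matrix.
    tp, fn = 80, 20    # diseased patients: correctly flagged vs missed
    fp, tn = 10, 90    # healthy patients: falsely flagged vs correctly cleared

    sensitivity = tp / (tp + fn)           # true positive rate (y-axis)
    specificity = tn / (tn + fp)           # true negative rate
    false_positive_rate = 1 - specificity  # x-axis of the ROC curve

    print(sensitivity, false_positive_rate)  # sweeping the threshold traces the curve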

https://www.sciencebasedmedicine.org/evidence-in-medicine-correlation-and-causation/

From the above link:-

"The consensus of expert opinion based upon systematic reviews can either result in a solid and confident unanimous opinion, a reliable opinion with serious minority objections, a genuine controversy with no objective resolution, or simply the conclusion that we currently lack sufficient evidence and do not know the answer."

Another reference:-

Reflections on the Nature and Future of Systematic Review in Healthcare, by Dr. Barry Robson

http://www.diracfoundation.com/?p=146

In recent times Bayesian statistics has emerged as the gold standard for developing curated EBM (http://www.ncbi.nlm.nih.gov/pubmed/10383350). In this context we would like to draw attention to the fact that while correlation, developed from the consensus of the cohorts in the medical community and discussed in the article linked above, is important, it is also important to ascertain causation. This demands a more holistic Bayesian statistics, as proposed in new algorithms, including those built on proven ideas in physics that advance the scope of Bayesian statistics, developed by Dr. Barry Robson. The approach and its impact on healthcare interoperability and analytics are discussed in the link provided below.

http://www.ncbi.nlm.nih.gov/pubmed/26386548

From the above link:-

"Abstract

We extend Q-UEL, our universal exchange language for interoperability and inference in healthcare and biomedicine, to the more traditional fields of public health surveys. These are the type associated with screening, epidemiological and cross-sectional studies, and cohort studies in some cases similar to clinical trials. There is the challenge that there is some degree of split between frequentist notions of probability as (a) classical measures based only on the idea of counting and proportion and on classical biostatistics as used in the above conservative disciplines, and (b) more subjectivist notions of uncertainty, belief, reliability, or confidence often used in automated inference and decision support systems. Samples in the above kind of public health survey are typically small compared with our earlier "Big Data" mining efforts. An issue addressed here is how much impact on decisions should sparse data have."
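For reference, the Bayesian inference referred to throughout this discussion rests on Bayes' theorem, stated here in LaTeX (H a hypothesis such as a diagnosis, E the observed evidence):

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

The frequentist-versus-subjectivist split mentioned in the abstract is essentially a disagreement over how the probabilities entering this formula are obtained, not over the formula itself.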

B) Biostatistics, Algebra, Healthcare Analytics and Cognitive Computing

Another interesting aspect that emerges is the need for biostatistics; indeed, many doctors with an MD qualification are getting additionally qualified in Public Health Management, which also deals with biostatistics. Dealing with population health on one hand and clinical efficacy on the other, interoperability via biostatistics has to deliver both views: the macro view with respect to systemic outcomes and, at the micro level, clinical efficacy. Developing such capabilities implies a much grander vision for interoperability, as discussed in OSEHRA, the VA-sponsored open source effort to make VistA available to the world market at a fraction of the cost. There is more discussion on the OSEHRA forum in the link below.

https://www.osehra.org/content/joint-dod-va-virtual-patient-record-vpr-iehr-enterprise-information-architecture-eia-0

From the above link:-

"Tom Munnecke – The Original Architect of VistA – This move to a higher level of abstraction is a bit like thinking of things in terms of algebra, instead of arithmetic. Algebra gives us computational abilities far beyond what we can do with arithmetic. Yet, those who are entrenched in grinding through arithmetic problems have a disdain for the abstract facilities of algebra."

An interesting point to note in the discussions at the above link is that a case is being made for the role of data science (called Knowledge Engineering over the last three decades) in driving better new algorithms, including those built on proven ideas in physics, into healthcare interoperability. This helps advance the next generation of EHR capabilities, eventually emerging as a medical-science-driven cognitive computing platform. The recommendation is to employ advances in data science to move the needle from developing a deterministic or predicated System of systems (SoS) based on schemas such as FHIM (http://www.fhims.org), a design approach that is laborious and outmoded, to harnessing the data locked in the heterogeneous systems through advanced Bayesian statistics, new algorithms (including those built on proven ideas in physics) and especially the exploitation of algebra. This approach, delivered on a Big Data architecture as a cognitive computing platform with schema-less approaches, has huge benefits in terms of cost, business capability and time to market, delivering medical reasoning from the healthcare ecosystem as realized by the interoperability architectures.
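As a rough sketch of the schema-less point (the records, field names and values below are invented for illustration), heterogeneous feeds can be ingested as-is and still be queried statistically, whereas a fixed schema would need redesigning every time a new source adds or omits a field:

    # Illustrative only: heterogeneous, schema-less records from different sources.
    records = [
        {"patient_id": 1, "dx": "E11", "a1c": 8.1},                  # EHR feed
        {"patient_id": 2, "dx": "E11", "a1c": 6.9, "smoker": True},  # survey feed
        {"patient_id": 3, "dx": "I10", "sbp": 152},                  # device feed
    ]

    # A schema-less store tolerates missing or extra fields; the analytics simply
    # skip what is absent instead of failing a schema-validation step.
    a1c_values = [r["a1c"] for r in records if "a1c" in r]
    print(sum(a1c_values) / len(a1c_values))  # mean A1c over whoever reported one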


Part A – Healthcare Interoperability Measures:- Cartesian Dilemma (Diagnosis)

Passages reproduced from the referenced links are quoted below.

Definition of the Cartesian Dilemma, per Christopher Alexander

(what the eye sees and what the mind sees are two different things)

Cartesian Dilemma

http://www.worldsystema.com/worldsystema/2011/10/christopher-alexander-templeto-1.html

From the above link:-

"Alexander has been inexorably led to the revolutionary necessity of revising our basic picture of the universe to include a conception of the personal nature of order and our belonging to the world in which the wholeness of space and the extent to which it is alive is perceived as rooted in the plenum behind the visible universe, "the luminous ground" that holds us all. This form of extended objective truth will ultimately resolve our Cartesian dilemma by teaching us a new view of order and a new cosmology in which objective reality "out there" and a personal reality "in here" are thoroughly connected and the bifurcation of nature healed."

"To Rene Descartes the "Method" (1638) was a convenient mental trick but its success has left us with a mindset that conceives of the universe as a machine without any intrinsic value: the realms of human experience and of feeling are simply absent from the Cartesian world. Whilst inspiring generations of architects and many others from all walks of life concerned with the fate of the earth, Alexander’s ultimately life changing work has understandably provoked powerful opposition from those invested within the establishment of the old paradigm. Social disorder, mental illness, ecological degradation, these and many other problems are due to a misunderstanding of the structure of matter and the nature of the universe and, until quite recently, there has been no coherent way of explaining the order that we respond to and love in nature."

———————————————————————-

The Affordable Care Act and the HITECH Act led to the EHR Incentive Program, under which CMS has already paid out more than $24 billion to eligible participants. Whether it has driven, or will drive, the envisioned healthcare interoperability remains a big question. Specifically, will it be possible to mine the millions of records and discover opportunities for improvement? Without emphasis on clinical decision support, will it be possible to achieve efficacy in healthcare delivery, while also advancing the opportunities for “pay for performance” outcomes?

To advance EHR adoption in the healthcare ecosystem, CMS proposed the formation of Accountable Care Organizations (ACOs).

https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2011-Fact-sheets-items/2011-12-19.html

From the above link:-

"The Pioneer ACO Model is designed for health care organizations and providers that are already experienced in coordinating care for patients across care settings. It will allow these provider groups to move more rapidly from a shared savings payment model to a population-based payment model on a track consistent with, but separate from, the Medicare Shared Services Program. And it is designed to work in coordination with private payers by aligning provider incentives, which will improve quality and health outcomes for patients across the ACO, and achieve cost savings for Medicare, employers and patients."

Importantly, CMS proposed a roadmap for EHR adoption based on three Meaningful Use (MU) stages, in the hope of advancing interoperability in the healthcare ecosystem and ultimately achieving a performance-driven model, where the payment model shifts from “pay for service” toward “pay for performance”. Looking at the healthcare ecosystem, one must note that achieving efficiency is a matter of healthcare management, while achieving efficacy is a matter of healthcare delivery.

You will see by the end of the discussion that the EHR Incentive Program somehow lays more emphasis on healthcare efficiency without paying the required attention to clinical efficacy. This leads to a systemic entropic discontinuity that can be described by analogy with Boltzmann’s entropy formula.

This results in a missed line of sight, where the established “objectives” at the IT / EHR level do not deliver all the required “business capabilities” (the output), and hence the desired “transformative outcomes” are not realized.

https://en.wikipedia.org/wiki/Boltzmann%27s_entropy_formula

From the above link:-

"In statistical mechanics, Boltzmann’s equation is a probability equation relating the entropy S of an ideal gas (or, for our purposes, the healthcare ecosystem) to the quantity W, which is the number of microstates corresponding to a given macrostate."
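The formula itself, in LaTeX, for reference (S is the entropy of the macrostate, k_B the Boltzmann constant, and W the count of microstates consistent with that macrostate):

    S = k_B \ln W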

Following are the EHR Adoption Meaningful Use Stages:-

MU Stage 1 :- Achieves electronic capture of the patient data (Data Capture and Sharing)

MU Stage 2 :- Achieves Health Information Exchanges (Advances co-ordinated clinical processes)

MU Stage 3:- Target Improved Outcomes ( achieved by moving the payment model from pay for service to pay for performance)

The eligible participants (physicians, hospitals and ACOs) have to demonstrate that they have met the MU criteria in stages. To do so, they must first show that the data being captured adhere to a prescribed format; this is ascertained by MU attestation.

Additionally, the eligible participants are required to submit quality measures reports to CMS.

https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Quality_Measures_Standards.html

From the above link:-

"Quality Measures and Performance Standards

Quality data reporting and collection support quality measurement, an important part of the Shared Savings Program. Before an ACO can share in any savings generated, it must demonstrate that it met the quality performance standard for that year. There are also interactions between ACO quality reporting and other CMS initiatives, particularly the Physician Quality Reporting System (PQRS) and meaningful use. The sections below provide resources related to the program’s 33 quality measures, which span four quality domains: Patient / Caregiver Experience, Care Coordination / Patient Safety, Preventive Health, and At-Risk Population. Of the 33 measures, 7 measures of patient / caregiver experience are collected via the CAHPS survey, 3 are calculated via claims, 1 is calculated from Medicare and Medicaid Electronic Health Record (EHR) Incentive Program data, and 22 are collected via the ACO Group Practice Reporting Option (GPRO) Web Interface."

https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityMeasures/index.html?redirect=/QUALITYMEASURES/

The National Quality Forum (NQF)-endorsed reporting programs for CMS are:

  • The Hospital Inpatient Quality Reporting (IQR) Program,
  • The Hospital Outpatient Quality Reporting (OQR) Program,
  • The Physician Quality Reporting System (PQRS), and
  • Others as directed by CMS, such as long-term care settings and ambulatory care settings

CMS quality reporting is based on a document standard derived from HL7, termed QRDA (Quality Reporting Document Architecture).

https://www.cms.gov/regulations-and-guidance/legislation/ehrincentiveprograms/downloads/qrda_ep_hqr_guide_2015.pdf

From the above link:-

Overview of QRDA

"The Health Level Seven International (HL7) QRDA is a standard document format for the exchange of electronic clinical quality measure (eCQM) data. QRDA reports contain data extracted from electronic health records (EHRs) and other information technology systems. QRDA reports are used for the exchange of eCQM data between systems for a variety of quality measurement and reporting initiatives, such as the Centers for Medicare & Medicaid Services (CMS) EHR Incentive Program: Meaningful Use Stage 2 (MU2).

The Office of the National Coordinator for Health Information Technology (ONC) adopted QRDA as the standard to support both QRDA Category I (individual patient) and QRDA Category III (aggregate) data submission approaches for MU2 through final rulemaking in September 2012. CMS and ONC subsequently released an interim final rule in December 2012 that replaced the QRDA Category III standard adopted in the September 2012 final rule with an updated version of the standard. QRDA Category I and III implementation guides (IGs) are Draft Standards for Trial Use (DSTUs). DSTUs are issued at a point in the standards development life cycle when many, but not all, of the guiding requirements have been clarified. A DSTU is tested and then taken back through the HL7 ballot process to be formalized into an American National Standards Institute (ANSI)-accredited normative standard.

QRDA is a subset of the HL7 CDA standard; QRDA is a constraint on the HL7 Clinical Document Architecture (CDA), a document markup standard that specifies the structure and semantics of clinical documents for the purpose of exchange. To streamline implementations, QRDA makes use of CDA templates, which are business rules for representing clinical data consistently. Many QRDA templates are reused from the HL7 Consolidated CDA (C-CDA) standard, which contains a library of commonly used templates that have been harmonized for MU2. Templates defined in the QRDA Category I and III IGs enable consistent representations of quality reporting data to streamline implementations and promote interoperability."

On the other hand, we have the Office of the National Coordinator for Health Information Technology (ONC) stipulating and regulating the standards to achieve healthcare interoperability.

The ONC Roadmap vision is in the link below:

https://www.healthit.gov/policy-researchers-implementers/interoperability

From the above link:-

Sadly, although evidence-based medicine is discussed, data mining and the concerns around algorithm development are missing.

"Connecting Health and Care for the Nation: A Shared Nationwide Interoperability Roadmap version 1.0 (Roadmap) [PDF – 3.7 MB] supports the vision that ONC outlined in Connecting Health and Care for the Nation: A 10 Year Vision to Achieve An Interoperable Health IT Infrastructure [PDF – 607 KB]. The Roadmap, shaped by stakeholder input, lays out a clear path to catalyze the collaboration of stakeholders who are going to build and use the health IT infrastructure. The collaborative efforts of stakeholders is crucial to achieving the vision of a learning health system where individuals are at the center of their care; providers have a seamless ability to securely access and use health information from different sources; an individual’s health information is not limited to what is stored in electronic health records (EHRs), but includes information from many different sources and portrays a longitudinal picture of their health, not just episodes of care; and where public health agencies and researchers can rapidly learn, develop, and deliver cutting edge treatments."

http://www.healthit.gov/buzz-blog/from-the-onc-desk/onc-interoperability-roadmap-update/

There is no doubt that ONC aspires to achieve true healthcare interoperability by bringing more clarity to Health Information Exchange (HIE), as discussed in the link below.

Interoperability vs Health Information Exchange: Setting the Record Straight

ONC has under its purview the Office of Standards and Technology, which drives the interoperability standards; and it acknowledges that there are numerous challenges in realizing the ONC roadmap, as discussed in the link below.

Interoperability Standards – Shades of Gray

ONC also specifies the roadmap for achieving the MU stages for physicians, hospitals and ACOs (HIE):
https://www.healthit.gov/providers-professionals/ehr-implementation-steps/step-5-achieve-meaningful-use

Specifically, for semantic interoperability it recommends the Consolidated Clinical Document Architecture (C-CDA).

https://www.healthit.gov/policy-researchers-implementers/consolidated-cda-overview

C-CDA helps represent a comprehensive view of the patient: a complete birth-to-death view, the longitudinal record.

The ONC interoperability specification also addresses the following three levels (not adequate to achieve an EBM-driven CDSS):-

There are three levels of health information technology interoperability: 1) Foundational; 2) Structural; and 3) Semantic. (The HIMSS definitions of these three levels are reproduced in full in Part B above.)

Desired or Recommended 2nd Order Semantic Interoperability

Probabilistic Ontology Driven Knowledge Engineering

Ref:- http://www.ncbi.nlm.nih.gov/pubmed/22269224

Chronically ill patients are complex health care cases that require the coordinated interaction of multiple professionals. A correct intervention of these sort of patients entails the accurate analysis of the conditions of each concrete patient and the adaptation of evidence-based standard intervention plans to these conditions. There are some other clinical circumstances such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases or prevention, whose detection depends on the capacities of deduction of the professionals involved.

< diagnosis > < procedures > < outcomes > [triple store]
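As a minimal sketch of the triple idea above (entirely illustrative; subjects, predicates and values are invented), patient facts can be held as subject-predicate-object triples and queried without a fixed schema:

    # Toy triple store: (subject, predicate, object) tuples, illustrative values only.
    triples = [
        ("patient:001", "diagnosis", "type-2-diabetes"),
        ("patient:001", "procedure", "metformin-therapy"),
        ("patient:001", "outcome",   "hba1c-improved"),
        ("patient:002", "diagnosis", "type-2-diabetes"),
        ("patient:002", "procedure", "lifestyle-only"),
        ("patient:002", "outcome",   "hba1c-unchanged"),
    ]

    def objects(subject, predicate):
        """All objects asserted for a given subject and predicate."""
        return [o for s, p, o in triples if s == subject and p == predicate]

    # e.g. what outcomes are recorded for patient 001?
    print(objects("patient:001", "outcome"))  # ['hba1c-improved']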

Conclusion:-

From the above points it must be noted that QRDA and C-CDA achieve different things. Unfortunately, CMS runs the EHR Incentive Program against the MU attestation and quality reports filed by the eligible participants (physicians, hospitals and ACOs), which are based on QRDA (especially PQRS). In the MU2 stage (per ONC), however, participants are also required to demonstrate that they have achieved interoperability within the ACO while implementing an HIE, and this requires C-CDA. This stage must demonstrate that coordinated clinical processes have been achieved.

MU2 also requires that a clinical decision support system (CDSS) be established addressing at least 5 critical or clinical priority areas. Unfortunately, this particular capability does not seem to be addressed adequately by the ACOs, who pursue only the demonstration that quality measures have been achieved, which does not necessarily mean that clinical efficacy has been addressed.

It seems an important architectural problem has been glossed over by the policy designers, who proposed the quality measures model with the motivation of capturing metrics that eventually demonstrate “pay for performance”, and who somehow assumed that the proposed QRDA-based metrics also demonstrate that clinical efficacy has been achieved. This leads to a systemic entropic discontinuity, where the effort at the macro state, which represents healthcare management leading to healthcare efficiency, is not necessarily a cumulative realization of the efforts at the micro states, which represent gaining clinical efficacy. This entropic discontinuity between the macro state and the micro states is, by analogy, what Boltzmann’s entropy formula quantifies.

The link below gives more discussion of micro states and macro states within a complex system. In essence: for a given complex system, entropy erodes the effort put into the input, so the output is produced at a loss. Here, this means the drive for systemic efficiency incurred losses and did not realize all the benefits that should arise from clinical efficacy. It is a modeling problem, in which the model inaccurately represents the “phenomenon of interest”.

https://books.google.com/books?id=dAhQBAAAQBAJ&pg=PT295&lpg=PT295&dq=boltzmann+constant+macro+state&source=bl&ots=ubpGEUymWc&sig=cQ4Nz9f6OA0ryDGEupOHDUAyiRc&hl=en&sa=X&ved=0CCwQ6AEwA2oVChMI0qeqv4G4yAIVCzo-Ch07WAkU#v=onepage&q=boltzmann%20constant%20macro%20state&f=false

To achieve a Clinical Decision Support System capability, which plays a very important role in enhancing clinical efficacy, developing a data-mining-driven evidence based medicine capability is imperative. This capability does not seem to be achieved, because most HIE / ACO implementations are being developed around QRDA. Although it is discussed in the ONC Connecting Health and Care for the Nation: A Shared Nationwide Interoperability Roadmap version 1.0 (Roadmap) [PDF – 3.7 MB], unless the data-mining-related algorithmic challenges are addressed, which means standards beyond the mere capture of the required data fields, the interoperability efforts will be in vain.

The role of EBM in achieving CDSS is discussed on the following sites:

CMS Site

https://www.healthit.gov/providers-professionals/achieve-meaningful-use/core-measures/clinical-decision-support-rule

NIH Site

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC130063/

It must also be noted that preventable clinical errors are among the highest risks, having become the No. 3 killer in the US.

http://www.healthcareitnews.com/news/deaths-by-medical-mistakes-hit-records

From the above link:-

"It’s a chilling reality – one often overlooked in annual mortality statistics: Preventable medical errors persist as the No. 3 killer in the U.S. – third only to heart disease and cancer – claiming the lives of some 400,000 people each year. At a Senate hearing Thursday, patient safety officials put their best ideas forward on how to solve the crisis, with IT often at the center of discussions."

P.S:-

BioIngine (www.bioingine.com) is a cognitive computing platform that transforms the patient information (millions of records) created by the HIE into an ecosystem knowledge landscape that is inherently evidence based, allowing for the study of tacit knowledge discovered from millions of patient records (large data sets) by mining and knowledge inference in an automated way, employing AI, machine learning and related techniques, and thereby creating a clinical decision support system.

Quantum Theory driven (QEXL Approach) Cognitive Computing Architecture resolving Healthcare Interoperability (BigData – HIE/ ACO )

http://www.BioIngine.com

[healthcare cognitive computing platform]

Conquering Uncertainties Creating Infinite Possibilities

(Possible application :- Achieving Algorithm Driven ACO)

[Figure: HDN Cognitive Computing]

Introduction

The QEXL Approach is a Systems Thinking driven technique designed with the intention of developing “go to market” solutions for healthcare Big Data applications requiring integration between payor, provider, health management (hospitals), pharma, etc., where systemic complexities teetering on the “edge of chaos” pose enormous challenges to achieving interoperability, owing to the existence of a plethora of healthcare system integration standards and the management of unstructured data in addition to the structured data ingested from diverse sources. Additionally, The QEXL Approach targets the creation of tacit knowledge sets by inductive techniques and probabilistic inference from diverse sets of data characterized by volume, velocity and variability. In fact, The QEXL Approach facilitates algorithm-driven proactive public health management, while rendering business models for achieving an Accountable Care Organization most effective.

The QEXL Approach is an integrative, multivariate, declarative cognitive architecture proposition for developing Probabilistic Ontology driven Big Data applications that create interoperability among healthcare systems, where it is imperative to develop an architecture that enables systemic capabilities such as evidence based medicine, pharmacogenomics, biologics, etc., while also creating opportunities for studies such as Complex Adaptive Systems (CAS). Such an approach is vital to developing an ecosystem response that mitigates the healthcare systemic complexities. CAS studies in particular make it possible to integrate both macro aspects (such as epidemiology), related to efficient healthcare management outcomes, and micro aspects (such as evidence based medicine and pharmacogenomics, which help achieve personalization of medicine), achieving efficacy in healthcare delivery, to help achieve systemic integrity. In The QEXL Approach, QEXL stands for “Quantum Exchange Language”, and Q-UEL is the initial proposed language. The QEXL Consortium embraces Quantal Semantics, Inc. (NC) and Ingine, Inc. (VA), and collaborates with The Dirac Foundation (UK), which has access to Professor Paul Dirac’s unpublished papers. The original consortium grew as a convergence of responses to four stimuli:

  1. The “re-emerging” interest in Artificial Intelligence (AI) as “computational thinking”, e.g. under the American Recovery Act;
  2. The President’s Council of Advisors on Science and Technology December 2010 call for an “XML-like” “Universal Exchange Language” (UEL) for healthcare;
  3. A desire to respond to the emerging third World Wide Web (the Semantic Web) with an initiative based on generalized probability theory – the Thinking Web; and
  4. In the early course of these efforts, a greater understanding of what Paul Dirac meant in his Nobel Prize dinner speech, where he stated that quantum mechanics should be applicable to all aspects of human thought.

The QEXL Approach

The QEXL Approach is developed from considerable experience in expert systems, linguistic theory, neurocognitive science, quantum mechanics, mathematical and physics-based approaches to enterprise architecture, Internet topology, filtering theory, the Semantic Web, knowledge lifecycle management, and principles of cloud organization and integration. The idea of a well-formed probabilistic programming and reasoning language is simple. Importantly, the more essential features of it for reasoning and prediction are correspondingly simple, such that the programmers are not necessarily humans but structured and unstructured (text-analytic) “data mining” software robots. We have constructed a research prototype Inference Engine (IE) network (and more generally a program) that “simply” represents a basic Dirac notation and algebra compiler, with the caveat that it extends to Clifford-Dirac algebra; notably, a Lorentz rotation of the imaginary number i (such that ii = -1) to the hyperbolic imaginary number h (such that hh = +1), corresponding to Dirac’s s operator (the time-like γtime or γ5), is applied.

[Outside the work of Dr. Barry Robson, this approach has not been tried in the inference and AI fields, with one highly suggestive exception: since the late 1990s it has occasionally been used in the neural network field by T. Nitta and others to solve the XOR problem in a single “neuron” and to reduce the number of “neurons” generally. Also suggestively, in particle physics it may be seen as a generalization of the Wick rotation (time → i × time) used by Richard Feynman and others to render wave mechanics classical. It retains the mathematical machinery and philosophy of Schrödinger’s wave mechanics but, instead of probability amplitudes as wave amplitudes, it yields classical but complex probability amplitudes encoding two directions of effect: “A acts on B, and B differently on A”. It maps to natural language where words relate to various types of real and imaginary scalar, vector, and matrix quantities. Dirac’s bra-operator-ket construct becomes the XML-like semantic triple (subject, verb, object).]
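For orientation, the “hyperbolic imaginary number” mentioned above is the split-complex (hyperbolic) unit of standard algebra. Its basic rules, written in LaTeX, are textbook material rather than Robson’s specific construction:

    i^2 = -1, \qquad h^2 = +1, \qquad (a + h\,b)(c + h\,d) = (ac + bd) + h\,(ad + bc)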

The QEXL Approach involves the following interdependent components:

  • Q-UEL (Probabilistic Inference + Phenomenon Of Interest): Addresses global issues that potentially pervade all human endeavors, and hence universal interoperability is of key importance
  • Kodaxil (Inference Engine + Semantic Inferencing): A project addressing the universal meaning underlying diverse natural languages on the Internet, and the use of that in knowledge representation
  • Fluxology (Inference Engine + Decentralized Infrastructure): A link infrastructure for intra- and inter-cloud interoperability and integration in a coherent high-level “metaware” environment. This component can also be explored to be replaced with simpler industry-ready solutions such as MarkLogic® Enterprise NoSQL Database on the Hadoop Distributed File System.

In an endeavor of this kind the partitions of work are inevitably artificial; it is important that this does not impede the integrity of optimal solutions. The most important aspect of The QEXL Approach is that, architecturally, Probabilistic Inference (PI) and the data architecture for the Inference Engine (IE) are designed to be cooperative: software robots are created while PI and IE interact, and the inference knowledge gained by the PI and IE provides rules for solvers (robots) to self-compile and conduct queries. This is the grandeur of the scheme: the approach facilitates programming by compilers so that writing the inference network is easy, yet it is not required to write the inference net as input code to compile, with the exception of reusable metarules, written as Dirac expressions with variables, that process other rules by categorical and higher-order logic. The robots are designed and programmed to do the remaining coding required to perform as solvers, so the notion of a compiler disappears under the hood. The robots are provided with well-formed instructions as well-formed queries. Once inferences are formed, different “what if” questions can be asked: given that probability, or that being the case, what is the chance of… and so on. It is as if, having acquired knowledge, the Phenomenon Of Interest (POI) is in a better state to explore what it means. Hyperbolic Dirac Networks (HDNs) are inference networks capable of overcoming the limitations imposed by Bayesian Nets (and statistics) and of creating generative models that richly express the “Phenomenon Of Interest” (POI) by the action of expressions containing binding variables. This may be thought of as an Expert System, but analogous to Prolog data and Prolog programs that act upon the data, albeit here a “probabilistic Prolog”. The advantages over Bayes Nets as a commonly used inference method should be stated up front, but rather than compete with such methods the approach may be regarded as extending them. Indeed, a Bayes Net, a static directed acyclic conditional probability graph, is a subset of the Dirac Net, a static or dynamic general bidirectional graph with generalized logic and relationship operators, i.e. empowered by the mathematical machinery of Dirac’s quantum mechanics.
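To give a feel for the bidirectionality claim, here is a toy sketch under my own simplifying assumptions (it is not Robson’s actual HDN algorithm): a single value carries both directions of a conditional relation, loosely in the spirit of a hyperbolic pair, and is checked against Bayes’ theorem, the coherence that a one-directional Bayes Net does not carry explicitly.

    # Toy illustration only: a "dual" carrying both P(B|A) and P(A|B).
    # This is NOT Robson's algorithm; it just shows the bidirectional bookkeeping.
    from dataclasses import dataclass

    @dataclass
    class Dual:
        forward: float   # P(B | A)  -- "A acts on B"
        backward: float  # P(A | B)  -- "B acts (differently) on A"

    def coherent(dual: Dual, p_a: float, p_b: float, tol: float = 1e-9) -> bool:
        """Bayes' theorem as a coherence check: P(B|A) P(A) == P(A|B) P(B)."""
        return abs(dual.forward * p_a - dual.backward * p_b) < tol

    # Example: P(A) = 0.2, P(B) = 0.5, P(B|A) = 0.75  =>  P(A|B) = 0.75 * 0.2 / 0.5 = 0.3
    p_a, p_b = 0.2, 0.5
    link = Dual(forward=0.75, backward=0.75 * p_a / p_b)
    print(coherent(link, p_a, p_b))  # True: both directions agree with Bayes' rule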

 The QEXL Approach Theory :- Robson Quantitative Semantics Algebra (RQSA)

Developed by Dr. Barry Robson

Theory:- The QEXL Approach is based on Robson Quantitative Semantics Algebra (RQSA) (link to the development of the algorithm, which overcomes limitations of the gold-standard Bayesian Network to resolve uncertainty while developing a probabilistic ontology).

Impact Of The QEXL Approach

The QEXL Approach to creating a Probabilistic Ontology based on Clifford-Dirac algebra has immense potential for advancing architectures that tackle large looming problems involving Systems of Systems, in which vast amounts of uncertain information emerge. Such systems are generally designed and developed employing Cartesian methods, and so offer no viable way to deal with vast uncertain information when ridden with complexity; especially when the complexity of the context calls for multiple ontologies, such a system inherently defies Cartesian methods. The QEXL Approach develops into an ecosystem response: it overcomes the Cartesian dilemma (link to another example of the Cartesian dilemma) and allows generative models to emerge that richly express the POI. The models develop generatively, such that the POI behavior is abstracted sufficiently for the IE and the solvers to support a variety of evidence-based studies, as well as systemic studies pertaining to Complex Adaptive Systems and complex generative systems afflicted by multiple cognitive challenges. In particular, The QEXL Approach has the potential to address complex challenges such as evidence based medicine (EBM), a mission that DoD’s Military Health System envisions while it modernizes its electronic health record system, the Veterans Health Information Systems and Technology Architecture (VistA). Vast potential also exists in addressing the Veterans Administration’s (VA) Million Veteran Program (MVP), an effort by the VA to consolidate genetic, military exposure, health, and lifestyle information together in one single database. By identifying gene-health connections, the program could consequentially advance disease screening, diagnosis, and prognosis and point the way toward more effective, personalized therapies.

Although The QEXL Approach is currently targeted at the healthcare and pharmaceutical domains, where recognition of uncertainty is vital in observations, measurements and predictions and in the probabilities underlying a variety of medical metrics, the scope of application is much more general. The QEXL Approach is to create a generic multivariate architecture for complex systems, characterized by a Probabilistic Ontology, that employs generative order to model the “POI”, facilitating the creation of “communities of interest” by self-regulation in diverse domains of interest that require the integration of disciplines to create complex studies. The metaphor of the “Cambrian Explosion” may aptly represent the enormity of the possibilities that The QEXL Approach can stimulate in advancing studies that tackle large systemic concerns riddled with uncertain information and random events.


The inference engine can be conceptualized into solutions such as MarkLogic NoSQL + Hadoop (HDFS). http://www.marklogic.com/resources/marklogic-and-hadoop/

It is interesting to note that, in the genesis of the various NoSQL solutions evolving on Hadoop, a few insights have emerged related to the need to design components with their cooperative existence in mind.

The Goal of The QEXL Approach: It Is All About Contextualization

The goal in employing The QEXL Approach is to enable the realization of a cognitive multivariate architecture for Probabilistic Ontology, advancing the Probabilistic Ontology based architecture for context-specific applications such as healthcare. Specifically, The QEXL Approach will develop PI that helps in the creation of generative models depicting the systemic behavior of a POI riddled with vast uncertain information. Generally, uncertainty in the vast information is introduced by the System of Systems complexity, which has to resolve multiple ontologies, standards, etc.; these further introduce cognitive challenges. The further goal of The QEXL Approach is to overcome such challenges by addressing interoperability at all levels, including the ability to communicate data and knowledge in a way that recognizes uncertainty in the world, so that automated PI and decision-making are possible. The aim is semiotic portability, i.e. the management of signs and symbols, dealing especially with their function and interactions in both artificially constructed and natural languages. Existing systems for managing semantics and language are mostly systems of symbolic, not quantitative, manipulation, with the primary exception of BayesOWL. RQSA, or Robson Quantitative Semantic Algebra, named after its author Dr. Barry Robson to distinguish it from other analogous systems, underlies Q-UEL. It is the development of (a) details of a particular aspect of Dirac’s notation and algebra that is found to be of practical importance in generalizing and correctly normalizing Bayes Nets according to Bayes’ Theorem (i.e. controlling coherence, which ironically Bayes Nets usually neglect, as they are unidirectional), (b) merged with the treatment of probabilities and information based on finite data using the Riemann Zeta function that he has employed for many years in bioinformatics and data mining (http://en.wikipedia.org/wiki/GOR_method), and (c) the extension to more flavors of hyperbolic imaginary number to encode intrinsic “dimensions of meaning” under a revised Roget’s thesaurus system.
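The “coherence” in point (a) can be made explicit; it is simply the bidirectional form of Bayes’ theorem that any jointly consistent pair of conditionals must satisfy, the same identity checked in the toy sketch earlier, whereas a unidirectional Bayes Net carries only one of the two factorizations:

    P(A \mid B)\,P(B) \;=\; P(A, B) \;=\; P(B \mid A)\,P(A)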

The Layers of the Architecture Created by The QEXL Approach

[Figure: The QEXL Layered View]

Layer 1- Contextualization: Planning, Designing driven by Theories 

A.    Probabilistic Ontology creating Inferencing leading into Evidence Based Medicine

i.     Aspects addressed by Q-UEL Tags and Kodaxil Inferencing

  1. Autonomy / Solidarity
  2. Inferencing (Kodaxil and Q – UEL)
  3. MetaData
  4. Security / Privacy
  5. Consented vs Un-consented Data
  6. Creating Incidence Rule (predicated – Q-UEL and Kodaxil)

ii.     Kodaxil:-  Enforcing Semantics across data sources (global text and data interoperability) – universal meaning underlying diverse natural languages on the Internet

iii.     Fluxology:-  Logical Meta Data Cloud (A link infrastructure for intra- and inter-cloud interoperability and integration in an international setting)

  1. Adaptive
  2. Emergent Data Usage Patterns (networks of networks – enabled by Probabilistic Ontology rules)
  3. Modeless Emergent Hierarchies
  4. Federation and Democratization Rule for Data (contract, trust, certificates, quality)

B.    Development of Probabilistic Model Representing Universal Abstraction of Phenomenon Of Interest

C.   Targeting Architecture to Application

  • Evidence Based Medicine
  • Genomics
  • Systemic Healthcare Studies
  • etc

Layer 2 – A: Operational Architecture (Logical )

A.    Reference Architecture

  1. Business Con Ops (Use cases)
  2. Conceptual Target Solution Architecture

Layer 2 – B: Data Management – Data Ingestion and Processing 

  1. The processing of entries in the source data into a form suitable for data mining
  2. The data mining of that processed data to obtain summary rules
  3. The capture of the appropriate released summary rules for inference

B.    Data Storage and Retrieval, Transactions

  1. Secure Storage and Retrieval
  2. Enable Secure Transactions
  3. Secure Data Exchange among several stake-holders and data owners

C.    Data Lifecycle, Data Organization Rules, Data Traceability to the Events, 

  1. Security and privacy by encryption and disaggregation of the EHR in a manner that is balanced against authorized access for extraction of global clinical and biomedical knowledge.
  2. Mechanisms for fine-grained consent permitting sharing and data mining.
  3. Mechanisms for secure alerting of patient or physician by backtrack when an authorized researcher or specialist notes that a patient is at risk.
  4. Structure and format that allows all meaningful use cases to be applied in reasonable time, including large-scale data mining.
  5. Assemblies across sources and data users forming contextual work patterns
  6. Hardened Security Framework

D.    Large EHR repository scaling

E.    Data Mining Rules

F.     Extracting and creating Incidence Rules

G.    Experimenting, observing and creating Semantic Inferences

H.    Visualization 

The two layers below can be implemented on a variety of Big Data platforms such as Hortonworks, Pivotal and Altiscale.

Layer 3 – Application Layer (Schema-less for structured and unstructured Knowledge Repository – KRS)

Layer 4 – Infrastructure Architecture (Physical) (Hadoop and MapReduce for Large Data File-management and Processing; and Distributed / Concurrent Computations)
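As a closing illustration of the Layer 4 pattern (a toy sketch that assumes nothing about the actual BioIngine deployment), the map/reduce idiom referenced above boils down to mapping records to key-value pairs and reducing by key; the same shape scales out on Hadoop:

    # Toy map/reduce over illustrative patient records (Layer 4 processing pattern).
    from collections import defaultdict

    records = [
        {"dx": "E11", "readmitted": True},
        {"dx": "E11", "readmitted": False},
        {"dx": "I10", "readmitted": True},
    ]

    # Map: emit (key, value) pairs.
    mapped = [(r["dx"], 1 if r["readmitted"] else 0) for r in records]

    # Shuffle + Reduce: aggregate values per key (readmission counts by diagnosis).
    reduced = defaultdict(int)
    for key, value in mapped:
        reduced[key] += value

    print(dict(reduced))  # {'E11': 1, 'I10': 1}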