CLINICAL QUALITY MEASURES

Hyperbolic Dirac Net (HDN) + Data Mining to Map Clinical Pathways (The Tacit Knowledge)

 

Bioingine.com employs an algorithmic approach based on the Hyperbolic Dirac Net (HDN) that allows inference nets that are general graphs, including cyclic paths, thus surpassing the limitation of the Bayes Net, which is traditionally a Directed Acyclic Graph (DAG) by definition.

The Bioingine.com approach thus more fundamentally reflects the nature of probabilistic knowledge in the real world, which has the potential to take account of the interactions between all things without limitation; ironically, it makes far more explicit use of Bayes' rule than does a Bayes Net.

It also allows more elaborate relationships than mere conditional dependencies, providing a probabilistic semantics analogous to natural human language but with a more detailed sense of probability. To identify the things and relationships that are important, and to provide the required probabilities, Bioingine.com scouts large, complex data of both structured and unstructured (textual) character.

It treats initial raw extracted knowledge rather in the manner of potentially erroneous or ambiguous prior knowledge, and validated and curated knowledge as posterior knowledge, and it enables the refinement of knowledge extracted from authoritative scientific texts into an intuitive canonical “deep structure” mental-algebraic form that Bioingine.com can more readily manipulate.

Discussion on employing HDN to map Clinical Pathways (The Tacit Knowledge)


In the articles referenced below on employing a Bayesian Net to model clinical pathways as a probabilistic inference net, replace the Bayesian Net with the stress-tested Hyperbolic Dirac Net (HDN), a non-acyclic Bayesian method that resolves both correlation and causation in both directions: etiology –> outcomes and outcomes –> etiology.

1. Elements of Q-UEL 

Q-UEL is based on the Dirac notation and its associated algebra. The notation was introduced into later editions of Dirac's book to facilitate understanding and use of quantum mechanics (QM), and it has been a standard notation in physics and theoretical chemistry since the 1940s.

a) Dirac Notation

In the early days of quantum theory, P. A. M. (Paul Adrien Maurice) Dirac created a powerful and concise formalism for it, which is now referred to as Dirac notation or bra-ket (bracket) notation:

<bra vector exprn* | operator exprn* | ket vector exprn*> 

[exprn* denotes an expression]

It is an algebra for observations and measurements, and for probabilistic inference from them: QM is a system for representing observations and measurements, and drawing probabilistic inference from them.

In Dirac’s notation what is known is put in a ket, “|>”. So, for example, “|p>” expresses the fact that a particle has momentum p. It could also be more explicit: |p = 2>, the particle has momentum equal to 2; |x = 1.23>, the particle has position 1.23. |Ψ> represents a system in the state Ψ and is therefore called the state vector.

The ket |> can also be interpreted as the initial state in some transition or event.

The bra <| represents the final state, or the language in which you wish to express the content of the ket.

The Hyperbolic Dirac Net has the ket |> as a row vector and the bra <| as a column vector.
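As a minimal sketch of this bra-ket pairing (illustrative only, not Q-UEL itself; the vectors and values below are invented), a ket and a bra can be modeled as NumPy arrays and contracted into a single scalar:

```python
import numpy as np

# Hypothetical sketch: the ket |> as a row vector and the bra <| as a column
# vector, per the HDN convention described above. All values are invented.
ket = np.array([[0.7, 0.3]])       # |initial state>
bra = np.array([[0.9], [0.1]])     # <final state|

# The bracket <bra|ket> contracts the two into one scalar "amplitude".
bracket = (ket @ bra).item()
print(bracket)                     # 0.66
```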

b) The hh = +1 Hyperbolic Imaginary Number

QM is a system for representing observations and measurements, and drawing probabilistic inference from them. The Q in Q-UEL refers to QM, but a simple mathematical transformation of QM gives classical everyday behavior. Q-UEL inherits the machinery of QM by replacing the more familiar imaginary number i (such that ii = -1), responsible for QM as wave mechanics, with the hyperbolic imaginary number h (such that hh = +1). Hence our inference net in general is called the Hyperbolic Dirac Net (HDN).
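The arithmetic implied by hh = +1 is that of the split-complex (hyperbolic) numbers. A minimal sketch of that arithmetic (the class name and API here are my own illustration, not Q-UEL's):

```python
from dataclasses import dataclass

@dataclass
class HComplex:
    """A hyperbolic (split-complex) number x + h*y, where h*h = +1."""
    x: float  # real part
    y: float  # h-part

    def __mul__(self, other: "HComplex") -> "HComplex":
        # (a + h*b)(c + h*d) = (ac + bd) + h*(ad + bc), because h*h = +1
        return HComplex(self.x * other.x + self.y * other.y,
                        self.x * other.y + self.y * other.x)

h = HComplex(0.0, 1.0)
print(h * h)   # HComplex(x=1.0, y=0.0): hh = +1, unlike ii = -1
```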

In probability theory A, B, C, etc. represent things, states, events, observations, measurements, qualities, etc. Here we mean medical factors, including demographic factors such as age and clinical factors such as systolic blood pressure value or history of diabetes.

They can also stand for expressions containing many factors; note that by, e.g., P(A|B) we would usually mean that it also applies to, say, P(A, B | C, D, E). In text, P(A, B, C, …) with ellipsis ‘…’ means all combinatorial possibilities: P(A), P(B), P(A, C), P(B, D, H), etc.

2) Employing Q-UEL, a preliminary inference net can be created as the query.

“Will my female patient age 50-59 taking diabetes medication and having a body mass index of 30-39 have very high cholesterol if the systolic BP is 130-139 mmHg and HDL is 50-59 mg/dL and non-HDL is 120-129 mg/dL?”.

This forms a preliminary inference net as the query, which may be refined and to which probabilities must be assigned.

The real answers of interest here are not qualitative statements, but the final probabilities. The protocols involved map to what data miners often seem to see as two main options in mining, although we see them as the two ends of a continuum.

Method (A) may be recognized as unsupervised (or unrestricted) data mining and post-filtering, and is the method mainly used here. In this approach we (1) mine data (“observe”); (2) compute a very large number of the more significant probabilities, render them as tags, and maintain them in a Knowledge Representation Store (KRS) or Semantic Lake (“evaluate”); (3) use a proposed inference net as a query to search amongst the probabilities represented by those tags, looking only for those relevant to completing the net and assigning probabilities to it, assessing what is available and seeing what can be substituted (“interpret”); and (4) compute the overall probability of the final inference net in order to make a decision (“decide”). Unsupervised data mining is preferred because it generates many tags for an SW-like approach, and may uncover new, unexpected relationships that could be included in the net.
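As a caricature of steps (1)-(4), and assuming the simplest case where the completed net's probability is the ordinary BN-style chain product (a real HDN would use h-complex dual probabilities instead), here is a sketch in which every tag and value is invented:

```python
# Step (2) "evaluate": mined probability tags held in a Knowledge
# Representation Store (KRS) / Semantic Lake. All entries are invented.
krs = {
    ("very_high_cholesterol", ("diabetes_meds", "bmi_30_39")): 0.18,  # P(A|B,C)
    ("diabetes_meds", ("age_50_59",)): 0.25,                          # P(B|D)
    ("age_50_59", ()): 0.20,                                          # P(D)
}

# Step (3) "interpret": the proposed inference net, as the list of
# conditional factors to be completed from the store.
query_net = [
    ("very_high_cholesterol", ("diabetes_meds", "bmi_30_39")),
    ("diabetes_meds", ("age_50_59",)),
    ("age_50_59", ()),
]

# Step (4) "decide": combine whatever factors the KRS can supply.
prob = 1.0
for factor in query_net:
    prob *= krs.get(factor, 1.0)   # an absent tag contributes no information
print(prob)                        # ~ 0.009
```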

Method (B) uses supervised (or restricted) data mining and pre-filtering. Data mining considers only what appears in the net. The downstream user interested in inference always accesses the raw database, while in (A) he or she may never see it.

The advantage of (B) is that mining is far less computationally demanding in terms of both processing and memory. It is useful for computing an HDN for a specified hypothesis.

The Popular Bayes Net (BN) Compared with Our Hyperbolic Dirac Net (HDN)

Probabilities of any kind can also be manipulated for inference in a variety of ways, according to philosophy (which is a matter of concern). The BN is probably the most popular method, perhaps because it does seem to be based on traditional, conservative principles of probability. However, the BN is traditionally (and, strictly speaking, by definition) confined to a probability network that is a directed acyclic graph (DAG).

In general, reversibility, cyclic paths and feedback abound in the real world, and we need probabilistic knowledge networks that are general graphs, or even more diffuse fields of influence, not DAGs. In our response as the Hyperbolic Dirac Net (HDN), “Dirac” relates to use of Paul A. M. Dirac’s view of quantum mechanics (QM).

QM is not only a standard system for representing probabilistic observation and inference from it in physics, but also it manages and even promotes concepts like reversibility and cycles. The significance of “hyperbolic” is that it relates to a particular type of imaginary number rediscovered by Dirac. Dirac notation entities, Q-UEL tags, and the analogous building blocks of an HDN all have complex probabilities better described as probability amplitudes. This means that they have the form x + jy where x and y are real numbers and j is an imaginary number, though they can also be vectors or matrices with such forms as elements.

Q-UEL is seen as a Lorentz rotation i → h of QM as wave mechanics. The imaginary number involved is now no longer the familiar i such that ii = -1, but the hyperbolic imaginary number, called h in Q-UEL, such that hh = +1.

This makes the HDN behave classically. A basic HDN is an h-complex BN.

Both the BN and the basic HDN may use predictive odds, in which conditional probabilities (or the HDN's comparable h-complex notions) are replaced by ratios of these.
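In the ordinary real-valued case, one standard odds form replaces each conditional probability by a ratio such as

$$O(A \mid B) = \frac{P(A \mid B)}{P(\lnot A \mid B)}$$

with the HDN substituting its comparable h-complex notion for each P; the exact ratio used can vary with the formulation, so this is one plausible reading rather than the definitive HDN form.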

Discussions on Employing a Bayesian Net to Model Clinical Pathways (Replace the BN by an HDN to achieve a Hyperbolic BN)

Development of a Clinical Pathways Analysis System with Adaptive Bayesian Nets and Data Mining Techniques 

D. Kopec, G. Shagas, D. Reinharth, S. Tamang

 

Pathway analysis of high-throughput biological data within a Bayesian network framework

Senol Isci, Cengizhan Ozturk, Jon Jones and Hasan H. Otu

Are Standardized Clinical Pathways Stymying Drug Innovation?

HDN :- Need for Agile Clinical Pathways that do not impede Drug Innovation

Oncologists Say Clinical Pathways Are Too Confining

Creating fixed plans for treating common malignancies promises to make the work of nurses and other staff more predictable and practiced, increasing efficiency and reducing errors that could lead to poor outcomes and hospitalization. For payers, pathways also give another way to insert awareness of costs directly into the examining room.

“The way the pathways are constructed does promote consideration of value-driven practice, which is to say that the pathways vendors all take into account cost of care, but only after considering efficacy and toxicity,” said Michael Kolodziej, MD, national medical director of oncology solutions at Aetna, and a former medical director at US Oncology. “So there is an element here of reduction of cost of care, by trying to encourage physicians to consider the relative value of various treatment options. This has now become the mantra in oncology.”

Studies found that using pathways can indeed cut costs substantially without hurting outcomes.

Clinical Data Analytics – Loss of Innocence (Predictive Analytics) in a Large High Dimensional Semantic Data Lake


From Dr. Barry Robson’s notes:-

Is Data Analysis Particularly Difficult in Biomedicine?

Looking for a single strand of evidence in billions of possible semantic multiple combinations by Machine Learning

Of all disciplines, it almost seems that it is clinical genomics, proteomics, and their kin, which are particularly hard on the data-analytic part of science. Is modern molecular medicine really so unlucky? Certainly, the recent explosion of biological and medical data of high dimensionality (many parameters) has challenged available data analytic methods.

In principle, one might point out that a recurring theme in the investigation of bottlenecks to development of 21st century information technology relates to the same issues of complexity and very high dimensionality of the data to be transformed into knowledge, whether for scientific, business, governmental, or military decision support. After all, the mathematical difficulties are general, and absolutely any kind of record or statistical spreadsheet of many parameters (e.g., in medicine: age, height, weight, blood pressure, polymorphism at locus Y649B, etc.) could, a priori, imply many patterns, associations, correlations, or eigensolutions to multivariate analysis, expert system statements, or rules, such as |Height:=6ft, Weight:=210 lbs> or more obviously |Gender:=male, Pregnant:=no>. The notation |observation> is the physicists’ ket notation that forms part of a more elaborate “calculus” of observation. It is mainly used here for all such rule-like entities and they will generally be referred to as “rules”.

As discussed, there are systems which are particularly complex, so that there are many complicated rules not reducible to, and not deducible from, simpler rules (at least, not until the future time when we can run a lavish simulation based on physical first principles).

Medicine seems, on the whole, to be such a system. It is an applied area of biology, which is itself classically notorious as a nonreducible discipline.

In other words, nonreducibility may be intrinsically a more common problem for complex interacting systems of which human life is one of our more extreme examples. Certainly there is no guarantee that all aspects of complex diseases such as cardiovascular disease are reducible into independently acting components that we can simply “add up” or deduce from pairwise metrics of distance or similarity.

At the end of the day, however, it may be that such arguments are an illusion and that there is no special scientific case for a mathematical difficulty in biomedicine. Data from many other fields may be similarly intrinsically difficult to data mine. It may simply be that healthcare is peppered with everyday personal impact, life and death situations, public outcries, fevered electoral debates, trillion dollar expenditures, and epidemiological concerns that push society to ask deeper and more challenging questions within the biomedical domain than routinely happen in other domains.

 Large Number of Possible Rules Extractable a Priori from All Types of High-Dimensional Data

For discovery of relationships between N parameters, there are almost always x^N potential basic rules, where x is some positive constant greater than unity, characteristic of the method of data representation and study. For a typical rectangular data input like a spreadsheet of N columns, the number of tag rules from which evidence must be established is

2^N − N − 1

so a record with 100 variables implies

2^100 − 100 − 1 = 1,267,650,600,228,229,401,496,703,205,275 ≈ 1.27 × 10^30
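The arithmetic is easy to check:

```python
# Tag rules implied by N spreadsheet columns: every subset of the N
# parameters except the N singletons and the empty set.
N = 100
rules = 2**N - N - 1
print(rules)   # 1267650600228229401496703205275, about 1.27e30
```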

Part A – Healthcare Interoperability Measures:- Cartesian Dilemma (Diagnosis)

Those in blue in the content below are reproduced from the referenced links.

Definition of the Cartesian Dilemma, per Christopher Alexander

(what the eye sees and what the mind sees are two different things)

Cartesian Dilemma

http://www.worldsystema.com/worldsystema/2011/10/christopher-alexander-templeto-1.html

From above link

“Alexander has been inexorably led to the revolutionary necessity of revising our basic picture of the universe to include a conception of the personal nature of order and our belonging to the world in which the wholeness of space and the extent to which it is alive is perceived as rooted in the plenum behind the visible universe, “the luminous ground” that holds us all. This form of extended objective truth will ultimately resolve our Cartesian dilemma by teaching us a new view of order and a new cosmology in which objective reality “out there” and a personal reality “in here” are thoroughly connected and the bifurcation of nature healed.”

“To Rene Descartes the ‘Method’ (1638) was a convenient mental trick but its success has left us with a mindset that conceives of the universe as a machine without any intrinsic value: the realms of human experience and of feeling are simply absent from the Cartesian world. Whilst inspiring generations of architects and many others from all walks of life concerned with the fate of the earth, Alexander’s ultimately life changing work has understandably provoked powerful opposition from those invested within the establishment of the old paradigm. Social disorder, mental illness, ecological degradation, these and many other problems are due to a misunderstanding of the structure of matter and the nature of the universe and, until quite recently, there has been no coherent way of explaining the order that we respond to and love in nature.”

———————————————————————-

The Affordable Care Act and the HITECH Act led to the EHR Incentive Program, under which CMS has already paid out more than $24 billion to eligible participants. Whether it has driven, or will drive, the envisioned healthcare interoperability remains a big question. Specifically, will it be possible to mine the millions of records and discover opportunities for improvement? Without emphasis on clinical decision support, will it be possible to achieve efficacy in healthcare delivery, while also advancing the opportunities for “pay for performance” outcomes?

To advance EHR adoption in the healthcare ecosystem, CMS proposed the formation of Accountable Care Organizations (ACOs):

https://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2011-Fact-sheets-items/2011-12-19.html

From the above link

“The Pioneer ACO Model is designed for health care organizations and providers that are already experienced in coordinating care for patients across care settings. It will allow these provider groups to move more rapidly from a shared savings payment model to a population-based payment model on a track consistent with, but separate from, the Medicare Shared Savings Program. And it is designed to work in coordination with private payers by aligning provider incentives, which will improve quality and health outcomes for patients across the ACO, and achieve cost savings for Medicare, employers and patients.”

Importantly, CMS proposed a roadmap for EHR adoption based on the three Meaningful Use (MU) stages, in the hope of advancing interoperability in the healthcare ecosystem and ultimately achieving a performance-driven model, where the payment model shifts from “pay for service” towards “pay for performance”. Looking at the healthcare ecosystem, one must take note that achieving efficiency lies in healthcare management, while achieving efficacy lies in healthcare delivery.

You will see at the end of the discussion that the EHR Incentive Program somehow lays more emphasis on healthcare efficiency without paying the required attention to clinical efficacy. This leads to a systemic entropic discontinuity that can be described via the Boltzmann constant.

This results in a missed line of sight, where the established “objectives” at the IT / EHR level do not deliver all the required “business capabilities” (the output), and hence the desired “transformative outcomes” are not realized.

https://en.wikipedia.org/wiki/Boltzmann%27s_entropy_formula

From the above link:-

“In statistical mechanics, Boltzmann’s equation is a probability equation relating the entropy S of an ideal gas (or, here, consider the healthcare ecosystem) to the quantity W, which is the number of microstates corresponding to a given macrostate.”
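For reference, the formula from the linked article is

$$S = k_B \ln W$$

where $k_B$ is the Boltzmann constant and $W$ is the number of microstates consistent with the macrostate.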

Following are the EHR Adoption Meaningful Use Stages:-

MU Stage 1:- Achieves electronic capture of patient data (Data Capture and Sharing)

MU Stage 2:- Achieves Health Information Exchanges (Advances coordinated clinical processes)

MU Stage 3:- Targets Improved Outcomes (achieved by moving the payment model from pay for service to pay for performance)

The eligible participants (physicians, hospitals and ACOs) have to demonstrate that they have met the MU criteria in stages. To do so, it is first required to demonstrate that the data being captured adhere to a prescribed format; this is ascertained by MU attestation.

Additionally, the eligible participants are required to submit quality measures reports to CMS:

https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Quality_Measures_Standards.html

From the above link

“Quality Measures and Performance Standards

Quality data reporting and collection support quality measurement, an important part of the Shared Savings Program. Before an ACO can share in any savings generated, it must demonstrate that it met the quality performance standard for that year. There are also interactions between ACO quality reporting and other CMS initiatives, particularly the Physician Quality Reporting System (PQRS) and meaningful use. The sections below provide resources related to the program’s 33 quality measures, which span four quality domains: Patient / Caregiver Experience, Care Coordination / Patient Safety, Preventive Health, and At-Risk Population. Of the 33 measures, 7 measures of patient / caregiver experience are collected via the CAHPS survey, 3 are calculated via claims, 1 is calculated from Medicare and Medicaid Electronic Health Record (EHR) Incentive Program data, and 22 are collected via the ACO Group Practice Reporting Option (GPRO) Web Interface.”

https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityMeasures/index.html?redirect=/QUALITYMEASURES/

The National Quality Forum (NQF)-endorsed reports for CMS are:

  • The Hospital Inpatient Quality Reporting (IQR) Program,
  • The Hospital Outpatient Quality Reporting (OQR) Program,
  • The Physician Quality Reporting System (PQRS), and
  • Others as directed by CMS, such as long-term care settings and ambulatory care settings

CMS quality reporting is based on a schema derived from HL7, termed QRDA (Quality Reporting Document Architecture):

https://www.cms.gov/regulations-and-guidance/legislation/ehrincentiveprograms/downloads/qrda_ep_hqr_guide_2015.pdf

From the above link

Overview of QRDA

“The Health Level Seven International (HL7) QRDA is a standard document format for the exchange of electronic clinical quality measure (eCQM) data. QRDA reports contain data extracted from electronic health records (EHRs) and other information technology systems. QRDA reports are used for the exchange of eCQM data between systems for a variety of quality measurement and reporting initiatives, such as the Centers for Medicare & Medicaid Services (CMS) EHR Incentive Program: Meaningful Use Stage 2 (MU2).

The Office of the National Coordinator for Health Information Technology (ONC) adopted QRDA as the standard to support both QRDA Category I (individual patient) and QRDA Category III (aggregate) data submission approaches for MU2 through final rulemaking in September 2012. CMS and ONC subsequently released an interim final rule in December 2012 that replaced the QRDA Category III standard adopted in the September 2012 final rule with an updated version of the standard. QRDA Category I and III implementation guides (IGs) are Draft Standards for Trial Use (DSTUs). DSTUs are issued at a point in the standards development life cycle when many, but not all, of the guiding requirements have been clarified. A DSTU is tested and then taken back through the HL7 ballot process to be formalized into an American National Standards Institute (ANSI)-accredited normative standard.

QRDA is a subset of the HL7 CDA standard; QRDA is a constraint on the HL7 Clinical Document Architecture (CDA), a document markup standard that specifies the structure and semantics of clinical documents for the purpose of exchange. To streamline implementations, QRDA makes use of CDA templates, which are business rules for representing clinical data consistently. Many QRDA templates are reused from the HL7 Consolidated CDA (C-CDA) standard, which contains a library of commonly used templates that have been harmonized for MU2. Templates defined in the QRDA Category I and III IGs enable consistent representations of quality reporting data to streamline implementations and promote interoperability.”

In contrast, we have the Office of the National Coordinator (ONC) stipulating and regulating standards to achieve healthcare interoperability.

The ONC Roadmap vision is given in the link below:

https://www.healthit.gov/policy-researchers-implementers/interoperability

From above link:-

Sadly, although evidence-based medicine is discussed, data mining and concerns around algorithm development are missing.

“Connecting Health and Care for the Nation: A Shared Nationwide Interoperability Roadmap version 1.0 (Roadmap) [PDF – 3.7 MB] supports the vision that ONC outlined in Connecting Health and Care for the Nation: A 10 Year Vision to Achieve An Interoperable Health IT Infrastructure [PDF – 607 KB]. The Roadmap, shaped by stakeholder input, lays out a clear path to catalyze the collaboration of stakeholders who are going to build and use the health IT infrastructure. The collaborative efforts of stakeholders is crucial to achieving the vision of a learning health system where individuals are at the center of their care; providers have a seamless ability to securely access and use health information from different sources; an individual’s health information is not limited to what is stored in electronic health records (EHRs), but includes information from many different sources and portrays a longitudinal picture of their health, not just episodes of care; and where public health agencies and researchers can rapidly learn, develop, and deliver cutting edge treatments.”

http://www.healthit.gov/buzz-blog/from-the-onc-desk/onc-interoperability-roadmap-update/

There is no doubt that ONC aspires to achieve true Healthcare Interoperability, by bringing more clarity to the Health Information Exchange (HIE) as discussed in the below link.

Interoperability vs Health Information Exchange: Setting the Record Straight

ONC has under its purview the Office of Standards and Technology, which drives the interoperability standards; it acknowledges that there are numerous challenges in realizing the ONC roadmap, as discussed in the link below:

Interoperability Standards – Shades of Gray

Also, ONC specifies a roadmap for achieving the MU stages for physicians, hospitals and ACOs (HIE):

https://www.healthit.gov/providers-professionals/ehr-implementation-steps/step-5-achieve-meaningful-use

Specifically, for semantic interoperability it recommends the Consolidated Clinical Document Architecture (C-CDA):

https://www.healthit.gov/policy-researchers-implementers/consolidated-cda-overview

CDA helps in representing a comprehensive view of the patient: a complete birth-to-death view, the Longitudinal Record.

Also, the ONC interoperability specification addresses the following three levels (not adequate to achieve EBM-driven CDSS):

There are three levels of health information technology interoperability:  1) Foundational; 2) Structural; and 3) Semantic.

1 – “Foundational” interoperability allows data exchange from one information technology system to be received by another and does not require the ability for the receiving information technology system to interpret the data.

2 – “Structural” interoperability is an intermediate level that defines the structure or format of data exchange (i.e., the message format standards) where there is uniform movement of healthcare data from one system to another such that the clinical or operational purpose and meaning of the data is preserved and unaltered. Structural interoperability defines the syntax of the data exchange. It ensures that data exchanges between information technology systems can be interpreted at the data field level.

3 – “Semantic” interoperability provides interoperability at the highest level, which is the ability of two or more systems or elements to exchange information and to use the information that has been exchanged. Semantic interoperability takes advantage of both the structuring of the data exchange and the codification of the data including vocabulary so that the receiving information technology systems can interpret the data. This level of interoperability supports the electronic exchange of patient summary information among caregivers and other authorized parties via potentially disparate electronic health record (EHR) systems and other systems to improve quality, safety, efficiency, and efficacy of healthcare delivery.
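A toy sketch of the three levels for a single blood-pressure value (not real HL7 parsing; the message layout is invented, though 8480-6 is the actual LOINC code for systolic blood pressure):

```python
# Foundational: the bytes simply arrive at the receiving system.
payload = "OBX|PID123|8480-6|132|mm[Hg]"

# Structural: an agreed field layout lets the receiver parse the message.
fields = payload.split("|")
code, value, unit = fields[2], float(fields[3]), fields[4]

# Semantic: a shared vocabulary lets the receiver actually *use* the data.
LOINC = {"8480-6": "Systolic blood pressure"}
print(f"{LOINC[code]}: {value} {unit}")   # Systolic blood pressure: 132.0 mm[Hg]
```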

Desired or Recommended 2nd Order Semantic Interoperability

Probabilistic Ontology Driven Knowledge Engineering

Ref:- http://www.ncbi.nlm.nih.gov/pubmed/22269224

Chronically ill patients are complex health care cases that require the coordinated interaction of multiple professionals. Correct intervention for these sorts of patients entails the accurate analysis of the conditions of each concrete patient and the adaptation of evidence-based standard intervention plans to those conditions. There are other clinical circumstances, such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases or prevention, whose detection depends on the deductive capacities of the professionals involved.

<diagnosis> <procedures> <outcomes> [triple store]
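A minimal sketch of such a probabilistic triple store (the schema and every entry below are illustrative assumptions, not a published Bioingine format):

```python
# (diagnosis, procedure, outcome, probability) triples; all values invented.
triples = [
    ("type_2_diabetes", "metformin_therapy", "hba1c_reduced", 0.62),
    ("type_2_diabetes", "lifestyle_program", "hba1c_reduced", 0.41),
]

# Query: outcomes on record for a diagnosis, ranked by probability.
dx = "type_2_diabetes"
for d, proc, out, p in sorted(triples, key=lambda t: -t[3]):
    if d == dx:
        print(f"{proc} -> {out}: {p}")
```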

Conclusion:-

From the above points it must be noted that QRDA and C-CDA achieve different things. Unfortunately, CMS runs the EHR Incentive Program against the MU attestation and quality reports that are filed by the eligible participants (physicians, hospitals and ACOs) based on QRDA (especially PQRS). Whereas in the MU2 stage (as per ONC), the participants are also required to demonstrate that they have achieved interoperability within the ACO while implementing HIE, and this requires C-CDA. This stage must demonstrate that coordinated clinical processes have been achieved.

Also, a clinical decision support system (CDSS) must be established addressing at least 5 critical or clinical priority areas. Unfortunately, this particular capability does not seem to be addressed adequately by the ACOs, who pursue only demonstrating that quality measures have been achieved, which does not necessarily mean clinical efficacy has been addressed.

It seems an important architectural problem has been glossed over by the policy designers, who proposed the quality measures model with the motivation of capturing metrics that eventually demonstrate “pay for performance”, and somehow assumed that the proposed metrics based on QRDA also demonstrate that clinical efficacy has been achieved. This leads to a systemic entropic discontinuity, where the effort at the macrostate, which represents healthcare management leading to healthcare efficiency, is not necessarily a cumulative realization of the efforts at the microstates, which represent gaining clinical efficacy. This entropic discontinuity between the macrostate and the microstates is measured via the Boltzmann constant.

The link below gives more discussion on microstates and macrostates within a complex system. Basically, for a given complex system, entropy creates losses between the efforts toward the input and the output actually realized, so the output is produced while incurring losses. This means the system incurred efficiency losses and did not realize all the benefits that should arise from clinical efficacy. This is a model problem that inaccurately represents the “phenomenon of interest”.

https://books.google.com/books?id=dAhQBAAAQBAJ&pg=PT295&lpg=PT295&dq=boltzmann+constant+macro+state&source=bl&ots=ubpGEUymWc&sig=cQ4Nz9f6OA0ryDGEupOHDUAyiRc&hl=en&sa=X&ved=0CCwQ6AEwA2oVChMI0qeqv4G4yAIVCzo-Ch07WAkU#v=onepage&q=boltzmann%20constant%20macro%20state&f=false

To achieve the Clinical Decision Support System capability, which plays a very important role in enhancing clinical efficacy, developing a data-mining-driven Evidence Based Medicine capability is imperative. This capability does not seem to be achieved, because most HIE / ACO implementations are being developed around QRDA, although it is discussed in the ONC Connecting Health and Care for the Nation: A Shared Nationwide Interoperability Roadmap version 1.0 (Roadmap) [PDF – 3.7 MB]. Unless the data-mining-related algorithmic challenges are addressed, which means standards beyond mere capture of the required data fields, interoperability efforts will be in vain.

The role of EBM in achieving CDSS is discussed on the following sites:

HealthIT.gov Site

https://www.healthit.gov/providers-professionals/achieve-meaningful-use/core-measures/clinical-decision-support-rule

NIH Site

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC130063/

As such, it must be noted that clinical errors are among the highest risks, having become the No. 3 killer in the US:

http://www.healthcareitnews.com/news/deaths-by-medical-mistakes-hit-records

From above link

“It’s a chilling reality – one often overlooked in annual mortality statistics: Preventable medical errors persist as the No. 3 killer in the U.S. – third only to heart disease and cancer – claiming the lives of some 400,000 people each year. At a Senate hearing Thursday, patient safety officials put their best ideas forward on how to solve the crisis, with IT often at the center of discussions.”

P.S:-

Bioingine (www.bioingine.com), a Cognitive Computing Platform, transforms the patient information (millions of records) created by the HIE into an Ecosystem Knowledge Landscape that is inherently evidence based, allowing for study of the tacit knowledge discovered from the millions of patient records (large data sets) by mining and knowledge inference in an automated way. This is achieved employing AI, Machine Learning and related techniques, thereby creating a Clinical Decision Support System.