Month: November 2015

Data Science On Non-Locality, Hidden Problems and Lack of Information

A discourse with Dr. Barry Robson about something quite bizarre: solving an unknown problem against uncertainty.


Srinidhi Boray (SB) – Hey Barry, a question! “Graphing should pave the way for creating tacit knowledge by context for a chosen hypothesis; and infinite varieties of the hypothesis are technically possible in an ecosystem.”

Dr. Barry Robson (BR) – Your question does not make sense; it is like John Nash’s lecture while he was having a schizophrenic breakdown in “A Beautiful Mind”:

“And so we see that if the zeroes of the Riemann Zeta Function correspond to singularities in the space-time then conventional number theory breaks down in the face of relativistic exploration… Sometimes, our expectations are betrayed by the numbers… And variables are impossible to assign any rational value…”

SB – What I am struggling with is this:

High dimensionality creates billions of probabilities. From those billions of probabilities, knowledge is inferred by working through the morass, applying probabilistic ontology and looking for weighted evidence.

As such, knowledge is spread over the system topology – so it is non-local. This means the ecosystem’s consciousness is all of the non-local knowledge in a continuum, both as experience and as evidence.

With the above proposition, how can one solve a problem that is yet to surface, since non-local knowledge supposes that a problem is lurking around the corner of which I am not aware? Is this a bizarre question?

BR – “But if you meant that we should emphasize the vast spaces and entropy that we need to overcome, then yes, that is the essence of the problem.”

On Non-Locality, Hidden Problems and Lack of Information

Dr. Barry Robson

We sometimes speak of “non-locality” to mean some business to do with hidden problems, when one does not even know how to get the information needed to achieve a solution, even assuming that an exact solution exists at all. This can seem odd, because location would not seem to have much to do with the solvability of problems, except perhaps in a mathematical or simulated space. In some sense, of course, that may indeed be exactly what we do mean, and we might mean more precisely that we cannot locate something in a descriptive space.

We might have meant “non-locatability”, i.e. the needle-in-a-haystack, and essentially entropic, problem.

But equally and alternatively, we might mean that the shape of a space, real or mathematical, changes so locally, so suddenly, as to have the characteristics of a singularity, making behavior in those regions impossible to integrate and requiring infinite information to locate and describe it precisely, and with complete certainty. We might speak better of “non-localizability”, or perhaps of “a limit to localizability”.

Alternatively, because we cannot dissect it further, “non-separability” might be a better word; as discussed briefly below, Schrödinger used that term, and it later transformed into “non-locality”.

We can reach many concepts that seem to have rather little to do with each other and even less to do with our starting point, although there also seems to be a thread of connection. Many, and perhaps ultimately all, of these may in fact be manifestations of the same core ideas, as follows.

For a long time after Newton, it was envisaged that we could predict the past and future of everything if we had the required information. We speak of not knowing the details of the motions of molecules, only certain important statistics like temperature and pressure, simply because we cannot look at each particle and its properties and plot the effects of Newton’s laws of motion backward and forward in time.

There are two modern views that say that these things are usually unknowable, that we cannot get all the required information: the uncertainty principle of quantum mechanics, and Chaos Theory.

Looking at the first of these, Albert Einstein, the most famous objector to a fundamentally probabilistic nature for quantum mechanics, nonetheless felt that there had to be hidden variables from which the seemingly probabilistic nature of things emerged. This was rather like particle properties and Newton’s laws, so in that sense he was Newtonian, and certainly a Newtonian revisionist, applying corrections to Newton’s view of the world to allow for the effect of the finite velocity of light and the bending of space-time.

Bohm’s famous interpretation is also all about hidden variables, but it seems somewhat different: it constitutes an implicate order (hidden order) which organizes a particle, and which may itself be the result of yet a higher meta- or super-implicate order which organizes a field.

The flavor of the above seems closer to what we can quite readily understand in the modern world of widespread computer technology. We are very familiar with worlds of people and things that are not really in the places and moments of time that they seem to be, nor are they really doing things with probabilities that can only be understood and quantified retrospectively by observation, counting and statistical analysis; instead, they are pre-computed to occur with those probabilities. This is of course true of movies and plays, and of stories and myths and legends in general, and perhaps even of the ongoing narrative in our brains that we call consciousness, but it is certainly most clearly linked to precise concepts about information in the world of simulation and computer games.

From that perspective, quantum mechanics is like the processor architecture and machine language, and as in Plato’s “Allegory of the Cave”, what is simulated and perceived has little to do with what is “really” the case. Importantly, like a real computer system, it has to forge reality with a limited information capacity, so that we cannot have infinite velocities, precise singularities, or total separation of the behavior of particles. Even in the wave description, the behavior of distinct particles, as opposed to the evolution of the wave equation, cannot always be separated into finer-grained parts.

“Non-separability” in that kind of sense was originally, and perhaps still should be, used in place of “non-locality”. As in our computer simulation, we may not have the information available to separate out events, so that they appear entangled. As in our brains, only a few bits per second might in fact reach central working memory, and, like the blind spot in our retina, we fill the gaps with the illusion of assumption. We may not have enough information to describe every little bump and crack in a mountain, but instead resort to an equation to simulate it as a fractal.
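
To make the fractal point concrete: a handful of parameters plus a simple recursive rule can stand in for an explicit description of every bump and crack. The sketch below is a generic midpoint-displacement routine in Python, offered only as an illustration of that general idea, not as anything specific to this article; the parameter names are my own.

import random

def midpoint_displacement(left, right, roughness=0.6, depth=8, seed=42):
    """Generate a jagged 1-D 'mountain profile' by recursive midpoint displacement.
    A few parameters take the place of the huge amount of information an
    explicit point-by-point description of the terrain would require."""
    random.seed(seed)
    heights = [left, right]
    spread = 1.0
    for _ in range(depth):
        new_heights = []
        for a, b in zip(heights[:-1], heights[1:]):
            # Displace each midpoint by a random amount that shrinks at finer scales
            new_heights.extend([a, (a + b) / 2 + random.uniform(-spread, spread)])
        new_heights.append(heights[-1])
        heights = new_heights
        spread *= roughness
    return heights

profile = midpoint_displacement(0.0, 0.0)
print(len(profile), "terrain points generated from a handful of parameters")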

Singularities like those we discuss inside black holes stand out as requiring infinite precision for their description and represent a seeming breach of the uncertainty principles: they are just too darned sharp. The singularity of a black hole cannot be observed because there seems to be a principle of cosmic censorship, consistent with a lack of availability of any information about a singularity. Since anything can classically become a black hole if it has some mass and is small and localized enough, i.e. can reach a certain critical density, black holes for this reason (and presumably other related reasons) meet up with the quantum world.

The information about the “action” of a quantum system, at least as an i-complex quantity as is required for Schrödinger’s wave mechanics, cannot be finer grained than about 1/(4 x pi), nor exceed a change of 2 x pi without becoming redundant (the phase problem).
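
One way to read those two bounds, assuming the action is being measured in units of Planck’s constant h (that unit convention is my assumption; the article does not state it), is through the standard uncertainty and phase relations:

% Granularity: conjugate quantities whose product has the dimension of action obey
\[
  \Delta E \, \Delta t \;\ge\; \frac{\hbar}{2} \;=\; \frac{h}{4\pi},
\]
% so the action cannot be resolved more finely than about h/(4 pi).
% Redundancy: the wave function carries the action only as a phase,
\[
  \psi \;\propto\; e^{\,iS/\hbar},
  \qquad
  e^{\,i(S + 2\pi\hbar)/\hbar} \;=\; e^{\,iS/\hbar},
\]
% so once that phase has advanced by 2 pi (an action change of h), nothing new
% is conveyed -- hence the phase problem.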

But as noted above, there are two modern views that say how we cannot get all the required information: not just the uncertainty principle of quantum mechanics, but also Chaos Theory. We may expect that they are not distinct. Indeed, quantum chaos is a branch of physics which studies chaotic classical dynamical systems in terms of quantum theory, to try and understand the relationship between quantum mechanics and classical Chaos Theory. The correspondence principle sees classical mechanics as the classical limit of quantum mechanics. If so, there must be quantum mechanisms underlying classical chaos. A chaotic system is often taken as synonymous with a non-linear one. In a non-linear system, output is unpredictable because it is not proportional in any sense to input; there is a deficiency of information between the independent (input) and dependent (output) variables.

There are fundamental connections here with Riemann, who worked on number theory and on the curvature of spaces that underlay Einstein’s theories of relativity, though as far as we know these two topics were distinct, at least as much as anything is distinct from anything else, for Riemann. In the 1950s, though, Atle Selberg was studying the number-theoretic implications of the analytic structure of certain curved spaces, and derived an equation that has the eigenvalues of a differential operator on one side and the lengths of closed curves in the space on the other side. This equation, known as the Selberg Trace Formula, encodes the number-theoretic properties that underlie the structure of the curved space. Dennis Hejhal later computed eigenvalues and closed curves for various curved spaces. In one of his early calculations, Hejhal thought he had solved an important conjecture by Riemann about the location of certain non-trivial zeros in the i-complex space of the Riemann zeta function that control the location of the prime numbers in the natural number series. These zeros turned out to really be the zeta zeros, but their presence turned out to be an error, in the sense that the curved space had a singularity that required an adjustment in the differential operator to handle it, and once that was done, the zeta zeros disappeared. However, there remains the well-known interest of mathematical physicists in connecting quantum chaos with the mysteries of number theory through Riemann’s zeta function, and there may remain considerable mileage in attempting to understand the curvature of spaces in terms of limitations in the amount of information available to describe them.

For example, intriguingly, in 1977 Berry and Tabor made a still unproven mathematical conjecture that for the quantum dynamics of flow on a compact Riemann surface, the spectrum of quantum energy eigenvalues behaves like a sequence of independent random variables, provided that the underlying classical dynamics contain enough information to be completely integrable.


ICD-10: Medical Knowledge Extraction with Increasing Variables Means Dealing with Billions of Probabilities

ICD-10 vs ICD-9

Note: The Bioingine Cognitive Computing Platform employs the Hyperbolic Dirac Net, an advanced Bayesian approach that overcomes the acyclic constraint; the resulting statistical method thereby delivers coherent results when dealing with a messy system in which both hypothesis and data have become random, adding to the entropy.
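
For readers unfamiliar with the “acyclic constraint”: a classical Bayesian network factorizes a joint distribution over a directed acyclic graph, so a feedback loop among variables leaves it with no valid factorization order at all. The Python sketch below is a generic topological-sort check that makes this constraint concrete; it is not the Hyperbolic Dirac Net itself (whose mathematics is not reproduced here), and the clinical variable names are hypothetical.

from collections import defaultdict, deque

def factorization_order(edges):
    """Return an ordering of the variables usable for a Bayesian-network
    factorization P(X1) P(X2 | parents) ..., or None if the graph has a cycle."""
    nodes = {n for edge in edges for n in edge}
    indegree = {n: 0 for n in nodes}
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)
        indegree[child] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return order if len(order) == len(nodes) else None

# Hypothetical clinical dependencies:
acyclic = [("age", "hypertension"), ("hypertension", "stroke")]
cyclic = [("obesity", "diabetes"), ("diabetes", "inactivity"), ("inactivity", "obesity")]
print(factorization_order(acyclic))  # ['age', 'hypertension', 'stroke']
print(factorization_order(cyclic))   # None: no acyclic ordering exists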

By Dr. Barry Robson

Bioingine.com (Ingine, Inc.)

The hunger and thirst for more fine-grained and powerful descriptions of our nature, health and treatment has meant that the latest International Classification of Diseases takes us from ICD-9, with 3,824 procedure codes and 14,025 diagnosis codes, to ICD-10, with 71,924 procedure codes and 69,823 diagnosis codes. It is important to contemplate what this really means for extracting knowledge, such as statements with associated probabilities, from clinical data that will help in public health analysis, biomedical research, and clinical decision making.

At first glance, it appears to be the problem of sitting down before an enormous feast at which new courses arrive faster and faster, more than we can ever consume. In liquid terms, it is of course the proverbial “drinking from the fire hose”. Medical information technology already abounds in entities that are in some sense elaborate statements, in the sense of being statements about many demographic and clinical factors at a time. For example, 30 to 40 factors, including age, gender, ethnicity, weight, systolic blood pressure, fasting glucose and so on, are routinely recorded in public health studies. Even if we merely recorded these in a binary way, with values yes/no, good/bad, normal/atypical and so on, there are a great many statements that we would like to explore and use, and they are represented by the many probabilities that we would ideally like to know.

For just 30 factors there are mathematically potentially 1,152,921,504,606,846,975 different probabilities, e.g. P(gender=male), P(Hispanic=no and ‘type 2 diabetes’=no), P(‘age greater than 60’=yes and ‘systolic blood pressure’=’not high’), P(male=no and smoker=yes and ‘heart attack’=yes), and so on, up to the one probability that involves all 30 factors.
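
As a check on the arithmetic: the quoted figure equals 4^30 − 1, which suggests a counting convention in which each of the 30 binary factors independently contributes four options to a statement (for example: omitted, =yes, =no, or left as “either”), with the single empty statement excluded. That reading of the convention is my assumption, since the article does not spell it out; the Python sketch below simply reproduces the number and shows that a more conservative three-options-per-factor convention still explodes.

N_FACTORS = 30
OPTIONS_PER_FACTOR = 4   # assumed convention: omitted, =yes, =no, or 'either'
count = OPTIONS_PER_FACTOR ** N_FACTORS - 1   # exclude the empty statement
print(f"{count:,}")                 # 1,152,921,504,606,846,975

# A stricter convention (each factor omitted, =yes, or =no) is still enormous:
print(f"{3 ** N_FACTORS - 1:,}")    # 205,891,132,094,648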

For a basic health record of N=100 factors, we have at least 1,000,000,000,000,000,000,000,000,000,000 such statements. Being more realistic, for the order of X possible values for each factor we are speaking of the order of X to the power N, and for a typical electronic health record we begin to exceed the number of quarks in the known universe.

But then, we already began to appreciate this as we addressed the problem of seeking meaning in DNA after the human genome projects, where the factors of interest number three billion per patient, the approximate number of base pairs in our DNA. For 4 kinds of base (G, C, A or T) to the power 3 billion, your calculator will most likely just say “invalid input for function”.
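
Both of these order-of-magnitude claims can be verified with logarithms rather than raw exponentiation, which is exactly where a pocket calculator gives up. The figures below use the rough values quoted in the text (N = 100 binary factors, roughly 10^80 quarks usually cited for the known universe, about three billion base pairs); they are illustrative, not precise.

import math

def decimal_digits(base, exponent):
    """Number of decimal digits in base**exponent, computed via logarithms so the
    astronomically large value itself never has to be materialized."""
    return math.floor(exponent * math.log10(base)) + 1

# N = 100 binary factors already gives 2**100 statements, about 1.3 x 10**30:
print(f"2^100 = {2**100:,}")

# With X ~ 10 possible values per factor, X**N has 101 digits -- already past
# the roughly 10**80 quarks usually quoted for the known universe:
print("10^100 has", decimal_digits(10, 100), "digits")

# 4 DNA bases to the power of ~3 billion base pairs:
print("4^(3 billion) has about", f"{decimal_digits(4, 3_000_000_000):,}", "digits")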

The above multifactor problem is often called “the curse of high dimensionality”, but it also implies a saving grace, which is an ultimate irony.

The ultimate irony is that the problem is one of sparse data. There is a terrible famine amongst the feast of possibilities, where it is mostly only crumbs that we can reach. Our fire hose mostly issues just a tiny drop at a time of the most interesting and precious liquids. Even the most powerful computers appearing over the next twenty years will not run out of steam, because they will run out of data first. There aren’t actually 1,152,921,504,606,846,975 (roughly a billion billion) or 1,000,000,000,000,000,000,000,000,000,000 patients on planet Earth, let alone a number so large as to be “invalid input for function”. We won’t find the cases to count to obtain the probabilities of interest. High dimensionality means lots and lots of sparse data, in which we see combinations occurring just once or twice, and far more never occurring at all.
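
The sparsity claim is easy to demonstrate by simulation. The Python sketch below (with purely illustrative cohort sizes and random binary factors, not real data) tallies how many of the possible factor combinations in a simulated cohort are never seen at all, and how many of the observed ones occur only once or twice; adding factors drives the “never seen” share toward one hundred percent.

import random
from collections import Counter

def sparsity_report(n_patients=10_000, n_factors=20, seed=7):
    """Simulate a cohort of binary-factor records and report how sparsely the
    space of possible factor combinations is actually populated."""
    random.seed(seed)
    records = Counter(
        tuple(random.randint(0, 1) for _ in range(n_factors))
        for _ in range(n_patients)
    )
    possible = 2 ** n_factors
    never_seen = possible - len(records)
    seen_once_or_twice = sum(1 for c in records.values() if c <= 2)
    print(f"{n_factors} factors: {possible:,} possible combinations, "
          f"{never_seen:,} never observed, "
          f"{seen_once_or_twice:,} observed only once or twice")

sparsity_report(n_factors=10)   # most of the 1,024 combinations do occur
sparsity_report(n_factors=20)   # the overwhelming majority never occur at all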

The problem is that if we take a “classical” statistical approach, simply say that we have too little data in all that, and simply ignore it, we miss the fact that lots of weak evidence can add up to overthrow a decision made without it. Neglect of sparse data is a systematic error. Millions of crumbs can prevent someone somewhere from dying of hunger; millions of drops can stop someone somewhere from dying of thirst.

It is in these and many related areas that we must look for improvements in our handling of data and knowledge.