machine learning

2004 to 2017 Convergence of Big Data, Machine Learning, Semantic Web, Graph Analytics, High Performance Computing – All These and Yet Big Data Analytics Sucks

2004 – Tim Berners-Lee

 

Semantic Web

OWL and RDF are introduced to address the Semantic Web and also Knowledge Representation. This really calls for Big Data technology, which was still not ready.

https://www.w3.org/2004/01/sws-pressrelease

 

2006 – Hadoop. Apache Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware.

https://opensource.com/life/14/8/intro-apache-hadoop-big-data

 

2008 – Scientific Method Obsolete for BigData

 The Data Deluge Makes the Scientific Method Obsolete

 

2008 – MapReduce

Large Data Processing – classification

Google created the framework for MapReduce – MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper.

•        https://research.google.com/archive/mapreduce.html
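
To make the programming model concrete, here is a tiny self-contained Python sketch of the map and reduce steps for word counting; it illustrates only the model described above, not the Google or Hadoop implementations, and all names in it are our own.

```python
# Toy illustration of the MapReduce programming model (word count), not a
# distributed implementation: map emits intermediate key/value pairs, the
# shuffle groups values by key, and reduce merges each group.

from collections import defaultdict

def map_fn(document):
    # map: document -> list of intermediate (key, value) pairs
    return [(word, 1) for word in document.split()]

def reduce_fn(key, values):
    # reduce: merge all intermediate values associated with the same key
    return key, sum(values)

documents = ["big data big graphs", "big semantic data"]

groups = defaultdict(list)          # shuffle/group phase
for doc in documents:
    for key, value in map_fn(doc):
        groups[key].append(value)

print(dict(reduce_fn(k, vs) for k, vs in groups.items()))
# {'big': 3, 'data': 2, 'graphs': 1, 'semantic': 1}
```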

 

2009 – Machine Learning. Emergence of Big Data machine learning frameworks and libraries.

 

2009 – Apache Mahout. Machine Learning on Big Data introduced. Apache Mahout is a linear algebra library that runs on top of any distributed engine for which bindings have been written.

https://www.ibm.com/developerworks/library/j-mahout/

Mahout ML is mostly restricted to set theory. Apache Mahout is a project of the Apache Software Foundation to produce free implementations of distributed or otherwise scalable machine learning algorithms focused primarily on the areas of collaborative filtering, clustering and classification.

 

 

2012 – Apache Spark. Apache Spark introduced to deal with very large data and in-memory processing. It is an architecture for cluster computing that can speed up computation by up to 100 times compared with slow, disk-based MapReduce and that better addresses parallelization of algorithms. Apache Spark is an open-source cluster-computing framework, originally developed at the University of California, Berkeley's AMPLab.

https://en.wikipedia.org/wiki/Apache_Spark
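
For comparison with the MapReduce sketch above, the same word count written against Spark keeps the intermediate data in memory across chained transformations. A minimal PySpark version (a sketch assuming a local pyspark installation; the input strings are made up) might look like this:

```python
# Minimal PySpark word count: the RDD stays in memory between transformations,
# which is a large part of Spark's speed advantage over disk-based MapReduce.

from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount-sketch")

counts = (sc.parallelize(["big data big graphs", "big semantic data"])
            .flatMap(lambda line: line.split())   # split lines into words
            .map(lambda word: (word, 1))          # intermediate key/value pairs
            .reduceByKey(lambda a, b: a + b))     # merge counts per word

print(counts.collect())
sc.stop()
```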

 

Mahout vs Spark. The difference between Mahout and Spark:

https://www.linkedin.com/pulse/choosing-machine-learning-frameworks-apache-mahout-vs-debajani

 

2012 – GraphX. GraphX is a distributed graph processing framework on top of Apache Spark. Because it is based on RDDs, which are immutable, graphs are immutable and thus GraphX is unsuitable for graphs that need to be updated, let alone in a transactional manner like a graph database. GraphX can be viewed as the Spark in-memory version of Apache Giraph, which utilizes Hadoop disk-based MapReduce.
2013 – DARPA PPAML https://www.darpa.mil/program/probabilistic-programming-for-advancing-machine-learning

 

Machine learning – the ability of computers to understand data, manage results and infer insights from uncertain information – is the force behind many recent revolutions in computing. Email spam filters, smartphone personal assistants and self-driving vehicles are all based on research advances in machine learning. Unfortunately, even as the demand for these capabilities is accelerating, every new application requires a Herculean effort. Teams of hard-to-find experts must build expensive, custom tools that are often painfully slow and can perform unpredictably against large, complex data sets.

The Probabilistic Programming for Advancing Machine Learning (PPAML) program aims to address these challenges. Probabilistic programming is a new programming paradigm for managing uncertain information.

Ingine responded to DARPA's RFQ with a detailed architecture based on Barry's innovation in the algorithm that basically addresses the above ask to some extent. Importantly, it solves Probabilistic Ontology for Knowledge Extraction from Uncertainty and Semantic Reasoning.

2017 – DARPA Graph Analytics https://graphchallenge.mit.edu/scenarios

 

In this era of big data, the rates at which these data sets grow continue to accelerate. The ability to manage and analyze the largest data sets is always severely taxed.  The most challenging of these data sets are those containing relational or network data. The HIVE challenge is envisioned to be an annual challenge that will advance the state of the art in graph analytics on extremely large data sets. The primary focus of the challenges will be on the expansion and acceleration of graph analytic algorithms through improvements to algorithms and their implementations, and especially importantly, through special purpose hardware such as distributed and grid computers, and GPUs. Potential approaches to accelerate graph analytic algorithms include such methods as massively parallel computation, improvements to memory utilization, more efficient communications, and optimized data processing units.

 

2013 – Other Large Graph Analytics Reference: An NSA Big Graph experiment

http://www.pdl.cmu.edu/SDI/2013/slides/big_graph_nsa_rd_2013_56002v1.pdf

2017 – Data Science: Dealing with Large Data Still Sucks

 

Despite the emergence of Big Data, Machine Learning, graphing techniques and the Semantic Web, the convergence is still elusive. In particular, Semantic / Cognitive / Knowledge Extraction techniques are very poorly defined, and there does not exist a framework approach to knowledge engineering leading into Machine Learning and automation in Knowledge Extraction, Representation, Learning and Reasoning. This is what Q-UEL and HDN solve at the algorithmic level.

2nd Order Semantic Web and A.I.-driven Reasoning – 300-Plus Years of Crusade

Bioingine.com | Ingine Inc


Chronology of Development of Hyperbolic Dirac Net (HDN) Inference. 

https://en.wikipedia.org/wiki/Thomas_Bayes

From Above Link:-

1. 1763. Thomas Bayes was an English statistician, philosopher and Presbyterian minister who is known for having formulated a specific case of the theorem that bears his name: Bayes’ theorem.

 Bayes’s solution to a problem of inverse probability was presented in “An Essay towards solving a Problem in the Doctrine of Chances” which was read to the Royal Society in 1763 after Bayes’ death

https://en.wikipedia.org/wiki/Bayes%27_theorem

From Above Link:-

In probability theory and statistics, Bayes’ theorem (alternatively Bayes’ law or Bayes’ rule) describes the probability of an event, based on conditions that might be related to the event.

When applied, the probabilities involved in Bayes’ theorem may have different probability interpretations. In one of these interpretations, the theorem is used directly as part of a particular approach to statistical inference. With the Bayesian probability interpretation the theorem expresses how a subjective degree of belief should rationally change to account for evidence: this is Bayesian inference, which is fundamental to Bayesian statistics. However, Bayes’ theorem has applications in a wide range of calculations involving probabilities, not just in Bayesian inference.
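
For reference, the theorem itself, in its standard form, relates the two directions of conditioning:

```latex
P(A \mid B) \;=\; \frac{P(B \mid A)\, P(A)}{P(B)}
```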

https://en.wikipedia.org/wiki/Bayesian_inference

From Above Link:-

Bayesian inference is a method of statistical inference in which Bayes’ theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called “Bayesian probability“.
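
A small worked example of such updating, with made-up numbers purely for illustration: a test with 90% sensitivity and 95% specificity applied to a condition with 1% prior prevalence.

```python
# Bayesian updating with illustrative, made-up numbers:
# prior P(D) = 0.01, sensitivity P(+|D) = 0.90, specificity P(-|not D) = 0.95.

prior = 0.01
p_pos_given_d = 0.90
p_pos_given_not_d = 1 - 0.95                 # false-positive rate

# Total probability of a positive test result
p_pos = p_pos_given_d * prior + p_pos_given_not_d * (1 - prior)

# Bayes' theorem: posterior probability of the condition given a positive test
posterior = p_pos_given_d * prior / p_pos
print(round(posterior, 3))                   # ~0.154: the 1% prior rises to about 15%
```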

2. 1859. Georg Friedrich Bernhard Riemann proposed the Riemann zeta function, a function useful in number theory for investigating properties of prime numbers. Written as ζ(x), it was originally defined as the infinite series

ζ(x) = 1 + 2^(−x) + 3^(−x) + 4^(−x) + ⋯

The theory should perhaps be distinguished from an existing purely number-theoretic area sometimes also known as Zeta Theory, which focuses on the Riemann Zeta Function and the ways in which it governs the distribution of prime numbers.

http://mathworld.wolfram.com/RiemannZetaFunction.html

The Riemann zeta function is an extremely important special function of mathematics and physics that arises in definite integration and is intimately related with very deep results surrounding the prime number theorem. While many of the properties of this function have been investigated, there remain important fundamental conjectures (most notably the Riemann hypothesis) that remain unproved to this day. The Riemann zeta function is defined over the complex plane for one complex variable, which is conventionally denoted s (instead of the usual z) in deference to the notation used by Riemann in his 1859 paper that founded the study of this function (Riemann 1859). It is implemented in the Wolfram Language as Zeta[s].
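
A quick numerical illustration of the partial sums of the series above (written in Python; the limit ζ(2) = π²/6 is a standard result used here only as a check):

```python
# Partial sums of the Riemann zeta series zeta(x) = 1 + 2^(-x) + 3^(-x) + ...
# converge for x > 1; for example zeta(2) = pi^2 / 6.

import math

def zeta_partial(x, n_terms):
    return sum(k ** -x for k in range(1, n_terms + 1))

for n in (10, 100, 10_000):
    print(n, zeta_partial(2.0, n))           # approaches ~1.6449 as n grows

print("pi^2/6 =", math.pi ** 2 / 6)          # the limiting value
```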

3. 1900. Ramanujan’s mathematical work was primarily in the areas of number theory and classical analysis. In particular, he worked extensively with infinite series, integrals, continued fractions, modular forms, q-series, theta functions, elliptic functions, the Riemann Zeta-Function, and other special functions.

Hardy wrote in Ramanujan’s obituary [14]:

There is always more in one of Ramanujan’s formulae than meets the eye, as anyone who sets to work to verify those which look the easiest will soon discover. In some the interest lies very deep, in others comparatively near the surface; but there is not one which is not curious and entertaining.

http://www.integralworld.net/collins18.html

From above link :-

Now there is a famous account of the gifted Indian mathematician Ramanujan who when writing to Hardy at Cambridge regarding his early findings included the seemingly nonsensical result,

1 + 2 + 3 + 4 + ……(to infinity) = – 1/12.

Initially Hardy was inclined to think that he was dealing with a fraud, but on further reflection realized that Ramanujan was in fact describing the Riemann Zeta Function (for s = – 1). He could then appreciate his brilliance as one, who though considerably isolated and without any formal training, had independently covered much of the same ground as Riemann.

However it still begs the question as to what the actual meaning of such a result can be, for in the standard conventional manner of mathematical interpretation, the sum of the series of natural numbers clearly diverges.

The startling fact is that this result – though indirectly expressed in a quantitative manner – actually expresses a qualitative type relationship (pertaining to holistic mathematical interpretation).

Uncovering Ramanujan’s “Lost” Notebook: An Oral History

http://arxiv.org/pdf/1208.2694.pdf

ROBERT P. SCHNEIDER

From above link :-

Whereas Ramanujan’s earlier work dealt largely with classical number-theoretic objects such as q-series, theta functions, partitions and prime numbers—exotic, startling, breathtaking identities built up from infinite series, integrals and continued fractions—in these newfound papers, Andrews found never-before-seen work on the mysterious “mock theta functions” hinted at in a letter written to Hardy in Ramanujan’s final months, pointing to realms at the very edge of the mathematical landscape. The content of Ramanujan’s lost notebook is too rich, too ornate, too strange to be developed within the scope of the present article. We provide a handful of stunning examples below, intended only to tantalize—perhaps mystify—the reader, who is encouraged to let his or her eyes wander across the page, picking patterns like spring flowers from the wild field of symbols.

The following are two fantastic q-series identities found in the lost notebook, published by Andrews soon after his discovery, in which q is taken to be a complex number with |q| < 1.

Another surprising expression involves an example of a mock theta function provided by Ramanujan in the final letter he sent to Hardy

In the words of mathematician Ken Ono, a contemporary trailblazer in the field of mock theta functions, “Obviously Ramanujan knew much more than he revealed [14].” Indeed, Ramanujan then “miraculously claimed” that the coefficients of this mock theta function obey the asymptotic relation

The new realms pointed to by the work of Ramanujan’s final year are now understood to be ruled by bizarre mathematical structures known as harmonic Maass forms. This broader perspective was only achieved in the last ten years, and has led to cutting-edge science, ranging from cancer research to the physics of black holes to the completion of group theory. 

Yet details of George Andrews’s unearthing of Ramanujan’s notes are only sparsely sketched in the literature; one can detect but an outline of the tale surrounding one of the most fruitful mathematical discoveries of our era. In hopes of contributing to a more complete picture of this momentous event and its significance, here we weave together excerpts from interviews we conducted with Andrews documenting the memories of his trip to Trinity College, as well as from separate interviews with mathematicians Bruce Berndt and Ken Ono, who have both collaborated with Andrews in proving and extending the contents of Ramanujan’s famous lost notebook.

4. 1913. Élie Joseph Cartan developed the theory of spinors.

https://archive.org/details/TheTheoryOfSpinors

https://en.wikipedia.org/wiki/Spinor

From above link:-

In geometry and physics, spinors are elements of a (complex) vector space that can be associated with Euclidean space. Like geometric vectors and more general tensors, spinors transform linearly when the Euclidean space is subjected to a slight (infinitesimal) rotation. When a sequence of such small rotations is composed (integrated) to form an overall final rotation, however, the resulting spinor transformation depends on which sequence of small rotations was used, unlike for vectors and tensors. A spinor transforms to its negative when the space is rotated through a complete turn from 0° to 360°, and it is this property that characterizes spinors. It is also possible to associate a substantially similar notion of spinor to Minkowski space, in which case the Lorentz transformations of special relativity play the role of rotations. Spinors were introduced in geometry by Élie Cartan in 1913. In the 1920s physicists discovered that spinors are essential to describe the intrinsic angular momentum, or “spin”, of the electron and other subatomic particles.

5. 1928. Paul A. M. Dirac derived the Dirac equation, which, in particle physics, is a relativistic wave equation.

http://www.mathpages.com/home/kmath654/kmath654.htm

http://mathworld.wolfram.com/DiracEquation.html

From above link:-

The quantum electrodynamical law which applies to spin-1/2 particles and is the relativistic generalization of the Schrödinger equation. In 3+1 dimensions (three space dimensions and one time dimension), it is given by

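The equation appears in the original post only as an image; its standard covariant form in natural units (a textbook statement, not necessarily the exact rendering used there) is:

```latex
\left( i\,\gamma^{\mu}\,\partial_{\mu} - m \right)\psi = 0
```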

6. 1930. Dirac publishes his book on his pivotal view of quantum mechanics, including his earliest mentions of an operator with the properties of the hyperbolic number h such that hh = +1. It extends the theory of wave mechanics to particle mechanics.
P. A. M. Dirac, The Principles of Quantum Mechanics, First Edition, Oxford University Press, Oxford (1930).

7. 1933. In his Nobel Prize Dinner speech, Dirac states that mechanical methods are applicable to all forms of human thought where numbers are involved. http://www.nobelprize.org/nobel_prizes/physics/laureates/1933/dirac-speech.html

8. 1939. Dirac publishes his bra-ket notation. It is incorporated into the third edition of his book.

P.A.M. Dirac (1939). A new notation for quantum mechanics, Mathematical Proceedings of the Cambridge Philosophical Society 35 (3): 416–418

9. 1974. Robson develops his Expected Information approach that preempts the Bayes Net method.

B. Robson, Analysis of the Code Relating Sequence to Conformation in Globular Proteins: Theory and Application of Expected Information, Biochem. J. 141, 853-867 (1974).

10. 1978. The Expected Information approach crystallizes as the GOR method widely used in bioinformatics.

J. Garnier, D. J. Osguthorpe, and B. Robson, Analysis of the Accuracy and Implications of Simple Methods for Predicting the Secondary Structure of Globular Proteins, J. Mol. Biol. 120, 97-120 (1978).


11. 1982. Buchanan and Shortliffe describe the first medical Expert System. It is based on probabilistic statements, but sets a tradition of innovation and diverse controversial methods in automated medical inference.

B. G. Buchanan and E. H. Shortliffe (1982), Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison-Wesley: Reading, Massachusetts.

12. 1985. Pearl gives a full account of the Bayes Net method.

J. Pearl, Probabilistic Reasoning in Intelligent Systems, San Francisco, CA: Morgan Kaufmann (1985).

13. March 1989. Sir Tim Berners-Lee invented the WWW: introduced non-linear linking of information across systems.

Tim laid out his vision for what would become the Web in a document called “Information Management: A Proposal”. Believe it or not, Tim’s initial proposal was not immediately accepted. In fact, his boss at the time, Mike Sendall, noted the words “Vague but exciting” on the cover. The Web was never an official CERN project, but Mike managed to give Tim time to work on it in September 1990. He began work using a NeXT computer, one of Steve Jobs’ early products.

14. 1997. Clifford Algebra becomes more widely recognized as a tool for engineers as well as scientists and physicists.

K. Gürlebeck and W. Sprössig, Quaternionic and Clifford Calculus for Physicists and Engineers, Wiley, Chichester (1997).

15. 1999. Tim Berners-Lee described the Semantic Web vision in the following terms

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web, the content, links, and transactions between people and computers. A Semantic Web, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The intelligent agents people have touted for ages will finally materialize. (1999)

16. 2000. Khrennikov gives a description of a primarily h-complex quantum mechanics.

A. Khrennikov, Hyperbolic quantum mechanics, Cornell University Library, arXiv:quant-ph/0101002v1 (2000).

17. 2000. Buchholz and Sommer refine work showing that neural networks as inference systems modeled on the brain can usefully use the hypercomplex imaginary number h.

S. Buchholz and G. Sommer, A hyperbolic multilayer perceptron, International Joint Conference on Neural Networks (IJCNN 2000), Como, Italy, Vol. 2, pp. 129-133, S.-I. Amari, C. L. Giles, M. Gori and V. Piuri, Eds., IEEE Computer Society Press (2000).

18. 2003. Robson points out that the Expected Information method in bioinformatics is really the use of the partially summated Riemann Zeta function, and is a best choice for treatment of sparse data in data mining in general.

B. Robson (2003) “Clinical and Pharmacogenomic Data Mining: 1. The generalized theory of expected information and application to the development of tools”, J. Proteome Res. (Am. Chem. Soc.) 2, 283-301.

19. 2003. Nitta shows that the power of the h-complex approach in neural nets is primarily due to its ability to solve the notorious exclusive-or (XOR) logical problem in a single neuron.

T. Nitta, Solving the XOR problem and the detection of symmetry using a single complex-valued neuron, Neural Networks 16(8), 1101-1105 (2003).

20. 2003. Khrennikov consolidates the notion of an extensively h-complex quantum mechanics, but feels that i-complex, h-complex, and real world mechanics are three separate systems.

A. Khrennikov, Hyperbolic quantum mechanics, Adv. in Applied Clifford Algebras, Vol. 13, 1 (2003).

21. 2004. Khrennikov notes a possible relation between h-complex quantum mechanics and mental function.

Khrennikov, On Quantum-Like Probabilistic Structure of Mental Information, Open Systems Information Dynamics, Vol. 11, 3, 267-275 (2004).

22. 2004. Rochon shows that the full Riemann Zeta function is both i-complex and h-complex.

Rochon, A Bicomplex Riemann Zeta Function, Tokyo J. of Math.

23. 2004. Robson argues that zeta theory is a solution to high dimensionality problems in data mining.

Robson, The Dragon on the Gold: Myths and Realities for Data Mining in Biotechnology using Digital and Molecular Libraries, J. Proteome Res. (Am. Chem. Soc.) 3 (6), 1113 – 9 (2004).

24. 2005. Robson argues that all statements in zeta theory and in prime number theory are really statements relevant to data and data mining, and describes a first link to Dirac’s quantum mechanics and Dirac’s bra-ket notation.

Robson, Clinical and Pharmacogenomic Data Mining: 3. Zeta Theory As a General Tactic for Clinical Bioinformatics, J. Proteome Res. (Am. Chem. Soc.) 4(2); 445-455 (2005) 


25. 2005. Code CliniMiner/Fano, based on Zeta Theory and prime number theory, is used in a first pioneering effort in data mining a large number of patient records.

Mullins, I. M., M. S. Siadaty, J. Lyman, K. Scully, G. T. Garrett, G. Miller, R. Muller, B. Robson, C. Apte, S. Weiss, I. Rigoutsos, D. Platt, and S. Cohen, Data mining and clinical data repositories: Insights from a 667,000 patient data set, Computers in Biology and Medicine, 36(12) 1351 (2006).


26. 2007. Robson recognizes that the imaginary number required to reconcile zeta theory with quantum mechanics and to allow Dirac notation to be used in inference is the hyperbolic imaginary number h, not the imaginary number i. Unaware of the work of Khrennikov, he makes no Khrennikov-like distinction between h-complex quantum mechanics and the everyday world.


Robson, The New Physician as Unwitting Quantum Mechanic: Is Adapting Dirac’s Inference System Best Practice for Personalized Medicine, Genomics and Proteomics, J. Proteome Res. (A. Chem. Soc.), Vol. 6, No. 8: 3114 – 3126, (2007). 


Robson, B. (2007) “Data Mining and Inference Systems for Physician Decision Support in Personalized Medicine” Lecture and Circulated Report at the 1st Annual Total Cancer Care Summit, Bahamas 2007. 


28. 2008. Data Mining techniques using the full i-complex and h-complex zeta function are developed.

Robson, Clinical and Pharmacogenomic Data Mining: 4. The FANO Program and Command Set as an Example of Tools for Biomedical Discovery and Evidence Based Medicine” J. Proteome Res., 7 (9), pp 3922–3947 (2008). 


29. 2008. Nitta and Buchholz explore decision boundaries of h-complex neural nets.

T. Nitta and S. Buchholz, On the Decision Boundaries of Hyperbolic Neurons, in 2008 International Joint Conference on Neural Networks (IJCNN).


30. 2009. The Semantic Web starts to emerge but runs into a bottleneck regarding the best approach for probabilistic treatment.

L. Predoiu and H. Stuckenschmidt, Probabilistic Models for the Semantic Web – A Survey. http://ki.informatik.unimannheim.de/fileadmin/publication/Predoiu08Survey.pdf (last accessed 4/29/2010)


31. 2009. Baek and Robson propose that, for reasons of bandwidth limitations and security, the Internet should consist of data-centric computing by smart software robots. Robson indicates that they could be based on h-complex inference systems and link to semantic theory.

Robson, B. and Baek, O. K., The Engines of Hippocrates: From the Dawn of Medicine to Medical and Pharmaceutical Informatics, Wiley, 2009.

Robson B. (2009) “Towards Intelligent Internet-Roaming Agents for Mining and Inference from Medical Data”, Future of Health Technology Congress, Technology and Informatics, Vol. 149, 157-177 IOS Press 

Robson, B. (2009) “Links Between Quantum Physics and Thought” (A. I. Applications in Medicine) , Future of Health Technology Congress, Technology and Informatics, Vol. 149, 157-177 IOS Press. 

32. 2009. Savitha et al. develop new learning algorithms for complex-valued networks.

S. Savitha, S. Suresh, S. Sundararajan, and P. Saratchandran, A new learning algorithm with logarithmic performance index for complex-valued neural networks, Neurocomputing 72 (16-18), 3771-3781 (2009).

33. 2009. Khrennikov argues for the h-complex Hilbert space as providing the “contextual” basis (underlying rationale, hidden variables, etc.) for all quantum mechanics.

Khrennikov, Contextual Approach to Quantum Formalism, Springer (2009) 

34. 2010. Robson and Vaithiligam describe how zeta theory and h-complex probabilistic algebra can resolve challenges in data mining by the pharmaceutical industry.

Robson and A. Vaithiligam, Drug Gold and Data Dragons: Myths and Realities of Data Mining in the Pharmaceutical Industry, pp. 25-85 in Pharmaceutical Data Mining, Ed. K. V. Balakin, John Wiley & Sons (2010).

35. 2010. PCAST. December report by the US President’s Council of Advisors on Science and Technology calls for an XML-like Universal Exchange Language for medicine, including disaggregation of the patient record on the Internet for patient access, security, and privacy.

http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-health-it-report.pdf

36. 2011. First description of Q-UEL in response to PCAST 2010.

Robson, B., Balis, U. G. J. and Caruso, T. P. (2011) “Considerations for a Universal Exchange Language for Healthcare.” In Proceedings of 2011 IEEE 13th International Conference on e-Health Networking, Applications and Services (Healthcom 2011), 173–176. Columbus, MO: IEEE, 2011.

37. 2011. Robson and colleagues develop the method of match-and-edit instructions for extracting information from large text sources such as patents.

Robson, B., Li, J., Dettinger, R., Peters, A., and Boyer, S.K. (2011), Drug discovery using very large numbers of patents. General strategy with extensive use of match and edit operations. Journal of Computer-Aided Molecular Design 25(5): 427-441 

38. 2011. Kuroe et al. consolidate the theory of h-complex neural nets.

Kuroe, T. Shinpei, and H. Iima, Models of Hopfield-Type Clifford Neural Networks and Their Energy Functions – Hyperbolic and Dual Valued Networks, Lecture Notes in Computer Science, 7062, 560 (2011).

39. 2012. Robson argues that h-complex algebra is an appropriate basis for Artificial Intelligence in the Pharmaceutical Industry.

Robson, B. (2012) “Towards Automated Reasoning for Drug Discovery and Pharmaceutical Business Intelligence”, Pharmaceutical Technology and Drug Research, 2012 1: 3 ( 27 March 2012 ) 


40. 2013. Goodman and Lassiter attempt to reconcile and restore interest in probabilistic semantics after a long period of domination by classical logic. 
N. D. Goodman and D. Lassiter, Probabilistic Semantics and Pragmatics: Uncertainty in Language and Thought,

https://web.stanford.edu/~ngoodman/papers/Goodman-HCS-final.pdf

41. 2013. Robson argues for the importance of the h-complex approach for measures in epidemiology.

Robson, B. (2013) “Towards New Tools for Pharmacoepidemiology”, Advances in Pharmacoepidemiology and Drug Safety, 1:6,

http://www.omicsgroup.org/journals/towards-new-tools-for-pharmacoepidemiology-2167-1052.1000123.pdf

42. 2013 Robson promotes Q-UEL from a public health perspective.
B. Robson, Rethinking Global Interoperability in Healthcare. Reflections and Experiments of an e-Epidemiologist from Clinical Record to Smart Medical Semantic Web Johns Hopkins Grand Rounds Lectures (last accessed 3/14/2013).


http://dhsi.med.jhmi.edu/GrandRoundsVideo/Feb15-2013/SilverlightLoader.html

43. 2013. Robson and Caruso describe the first version of Q-UEL in greater detail.

Robson, B, and TP Caruso (2013) “A Universal Exchange Language for Healthcare” MedInfo ’13: Proceedings of the 14th World Congress on Medical and Health Informatics, Copenhagen, Denmark, Edited by CU Lehmann, E Ammenwerth, and C Nohr. IOS Press, Washington, DC, USA. http://quantalsemantics.com/documents/MedInfo13-RobsonCaruso_V6.pdf; http://ebooks.iospress.nl/publication/34165

44. 2014. Robson et al. release formal description of consolidated second version of Q-UEL.

Robson, T. P. Caruso and U. G. J. Balis, Suggestions for a Web Based Universal Exchange and Inference Language for Medicine, Computers in Biology and Medicine, 43(12) 2297 (2013).

45. 2013. Moldoveanu expresses the view that hyperbolic quantum mechanics can’t also include wave mechanics. Possible attack on Khrennikov’s idea that hyperbolic quantum mechanics can show interference as for waves. Signs of a growing sense that hyperbolic quantum mechanics is simply the everyday world described in terms of the machinery of traditional quantum mechanics.

Moldoveanu, Non viability of hyperbolic quantum mechanics as a theory of Nature, Cornell University Library, arXiv:1311.6461v2 [quant-ph] (2013).

46. 2013. First full description of the Hyperbolic Dirac Net and its relation to Q-UEL and to Bayes Nets.

Robson, Hyperbolic Dirac Nets for Medical Decision Support. Theory, Methods, and Comparison with Bayes Nets, Computers in Biology and Medicine, 51, 183 (2013).

http://www.sciencedirect.com/science/article/pii/S0010482514000778

47. 2014. Kunegis et al. develop h-complex algorithms for dating recommender systems.

J. Kunegis, G. Gröner, and T. Gottron, On-Line Dating Recommender Systems: the Split-Complex Number Approach (Like/Dislike, Similar/Dissimilar), http://userpages.uni-koblenz.de/~kunegis/paper/kunegis-online-dating-recommender-systems-the-split-complex-number-approach.pdf (last accessed 6/1/2014).

48. 2015. Robson describes the extension of the Hyperbolic Dirac Net to semantic reasoning and probabilistic linguistics.


Robson, B. “POPPER, a Simple Programming Language for Probabilistic Semantic Inference in Medicine”, Computers in Biology and Medicine, (in press), DOI: 10.1016/j.compbiomed.2014.10.011 (2015).


http://www.ncbi.nlm.nih.gov/pubmed/25464353

49. 2014. Yosemite Manifesto – a response to PCAST 2010 that the Semantic Web should provide healthcare IT, although preempted by Q-UEL.

http://yosemitemanifesto.org/ (last accessed 7/5/2014). 

50. 2015. Robson et al. describe medical records in Q-UEL format and PCAST disaggregation for patient security and privacy.

Robson, B., Caruso, T, and Balis, U. G. J. (2015) “Suggestions for a Web Based Universal Exchange and Inference Language for Medicine. Continuity of Patient Care with PCAST Disaggregation.” Computers in Biology and Medicine (in press) 01/2015; 56:51. DOI: 10.1016/j.compbiomed.2014.10.022 

51. 2015. Mathematician Steve Deckelman of U. Wisconsin-Stout and Berkeley validates the theoretical principles of the Hyperbolic Dirac Net.

Deckelman, S. and Robson, B. (2015) “Split-Complex Numbers and Dirac Bra-Kets”, Communications in Information and Systems (CIS), in press.

http://www.diracfoundation.com/?p=148

From Above Link:-

The inference net on which this dualization is performed is defined as an estimate of a probability as an expression comprising simpler probabilities and/or association measures, i.e. each with fewer attributes (arguments, events, states, observations or measurements) than the joint probability estimated, where each attribute corresponds to a node of a general graph and the probabilities or association measures represent their interdependencies as edges. It is not required that the inference net be an acyclic directed graph, but the widely used BN, which satisfies that description by definition, is a useful starting point for making use of the given probabilities to address the same or similar problems.

Specifically for the estimation of a joint probability, an HDN properly constructed with prior probabilities, whether or not it contains cyclic paths, is purely real valued, and its construction principles represent a generalization of Bayes’ Theorem. Any imaginary part indicates the degree of departure from Bayes’ Theorem over the net as a whole, and the direction of conditionality in which the departure occurs; thus the HDN provides an excellent book-keeping tool for checking that Bayes’ Theorem is satisfied overall.

Specifically for the estimation of a conditional probability, it follows conversely from the above that any expression for a joint probability validated by the above means can serve as the generator of an HDN for the estimation of a conditional probability, simply by dividing it by the HDN counterparts of prior probabilities, whence the resulting net is not purely real save by coincidence of probability values.
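
As a concrete illustration of that book-keeping role, here is a minimal Python sketch of estimating a joint probability in both directions of conditionality and encoding the pair as a hyperbolic (split-complex) value whose imaginary part measures the departure from Bayes’ Theorem. The (mean, half-difference) encoding and the function names are our own illustrative assumptions, not the published HDN formulation.

```python
# Illustrative sketch (not the published HDN formulation): estimate P(A,B) in
# both directions of conditionality and encode the pair as real + h*imag,
# where h is the hyperbolic imaginary number with hh = +1.

def dual_joint(p_a_given_b, p_b, p_b_given_a, p_a):
    forward = p_a_given_b * p_b          # P(A|B) P(B)
    backward = p_b_given_a * p_a         # P(B|A) P(A)
    real = 0.5 * (forward + backward)
    imag = 0.5 * (forward - backward)    # exactly 0 when Bayes' Theorem holds
    return real, imag

# Consistent probabilities: P(A)=0.30, P(B)=0.20, P(A|B)=0.50 implies P(B|A)=1/3
print(dual_joint(0.50, 0.20, 1.0 / 3.0, 0.30))   # imaginary part ~ 0.0

# Mis-estimated probabilities leave a nonzero imaginary part, flagging the
# degree and direction of the departure from Bayes' Theorem.
print(dual_joint(0.50, 0.20, 0.40, 0.30))        # imaginary part -0.01
```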

52. 2015. Implementation of a web based universal exchange and inference language for medicine: Sparse data, probabilities and inference in data mining of clinical data repositories

Barry Robson and Srinidhi Boray

http://www.computersinbiologyandmedicine.com/article/S0010-4825(15)00257-7/abstract

53. 2015. Robson, B., and S. Boray, The Structure of Reasoning in Answering Multiple Choice Medical Licensing Examination Questions. Computer Studies towards Formal Theories of Clinical Decision Support and Setting and Answering Medical Licensing Examinations, Workshop Lecture presentation, Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine, 9th-11th November, Washington DC (2015)

https://www.osehra.org/sites/default/files/Computer_Exams_V10.pdf

https://cci.drexel.edu/ieeebibm/bibm2015/BIBM2015Program.pdf


Bioingine :- Multivariate Cognitive Computing Platform – Distributed Concurrent Computing by Dockerized Microservices


Employing Dockerized apps opens a vista of possibilities with the Hadoop architecture, where Hadoop's traditional data management architecture is extended beyond data processing and management into Distributed Concurrent Computing.

 

Data Management (Storage, Security,  MapReduce based Pre-processing) and Data Science (Algorithms) Decoupled.

Microservices driven Concurrent Computing :- Complex Distributed Architecture made Affordable

Conceptual View of YARN-driven Dockerized Session Management of Multiple Hypotheses over a Semantic Lake

Notes on HDN (Advanced Bayesian), Clinical Semantic Data Lake and Deep Learning / Knowledge Mining and Inference 

 

 

Quantum Theory driven (QEXL Approach) Cognitive Computing Architecture resolving Healthcare Interoperability (BigData – HIE/ ACO )

http://www.BioIngine.com

[healthcare cognitive computing platform]

Conquering Uncertainties Creating Infinite Possibilities

(Possible application :- Achieving Algorithm Driven ACO)


Introduction

The QEXL Approach is a Systems Thinking driven technique designed with the intention of developing “Go To Market” solutions for Healthcare Big Data applications requiring integration between Payor, Provider, Health Management (Hospitals), Pharma, etc., where the systemic complexities teetering on the “edge of chaos” pose enormous challenges in achieving interoperability, owing to the existence of a plethora of healthcare system integration standards and to the management of unstructured data in addition to structured data ingested from diverse sources. Additionally, The QEXL Approach targets the creation of Tacit Knowledge Sets by inductive techniques and probabilistic inference from diverse sets of data characterized by volume, velocity and variability. In fact, The QEXL Approach facilitates algorithm-driven Proactive Public Health Management, while rendering business models for achieving an Accountable Care Organization most effective.

The QEXL Approach is an integrative multivariate declarative cognitive architecture proposition to develop Probabilistic Ontology driven Big Data applications creating interoperability among Healthcare systems. It is imperative to develop an architecture that enables systemic capabilities such as Evidence Based Medicine, Pharmacogenomics, biologics, etc., while also creating opportunities for studies such as Complex Adaptive Systems (CAS). Such an approach is vital to develop an ecosystem as a response to mitigate the Healthcare systemic complexities. CAS studies in particular make it possible to integrate both macro aspects (such as epidemiology) related to efficient Healthcare Management Outcomes, and micro aspects (such as Evidence Based Medicine and Pharmacogenomics, which help achieve medicine personalization) achieving efficacy in Healthcare delivery, to help achieve systemic integrity. In The QEXL Approach, QEXL stands for “Quantum Exchange Language”, and Q-UEL is the initial proposed language. The QEXL Consortium embraces Quantal Semantics, Inc. (NC) and Ingine, Inc. (VA), and collaborates with The Dirac Foundation (UK), which has access to Professor Paul Dirac’s unpublished papers. The original consortium grew as a convergence of responses to four stimuli:

  1. The “re-emerging” interest in Artificial Intelligence (AI) as “computational thinking”, e.g. under the American Recovery Act;
  2. The President’s Council of Advisors on Science and Technology December 2010 call for an “XML-like” “Universal Exchange Language” (UEL) for healthcare;
  3. A desire to respond to the emerging Third World Wide Web (Semantic Web) by an initiative based on generalized probability theory  – the Thinking Web; and
  4. In the early course of these efforts, a greater understanding of what Paul Dirac meant in his Nobel Prize dinner speech, where he stated that quantum mechanics should be applicable to all aspects of human thought.

The QEXL Approach

The QEXL Approach is developed based on considerable experience in Expert Systems, linguistic theory, neurocognitive science, quantum mechanics, mathematical and physics-based approaches in Enterprise Architecture, Internet Topology, Filtering Theory, Semantic Web, Knowledge Lifecycle Management, and principles of Cloud Organization and Integration. The idea of a well-formed probabilistic programming reasoning language is simple. Importantly, also, the more essential features of it for reasoning and prediction are correspondingly simple, such that the programmers are not necessarily humans but structured and unstructured (text-analytic) “data mining” software robots. We have constructed a research prototype Inference Engine (IE) network (and more generally a program) that “simply” represents a basic Dirac notation and algebra compiler, with the caveat that it extends to Clifford-Dirac algebra; notably, a Lorentz rotation of the imaginary number i (such that ii = -1) to the hyperbolic imaginary number h (such that hh = +1), corresponding to Dirac's s and gtime or g5, is applied.

[Outside the work of Dr. Barry Robson, this approach has not been tried in the inference and AI fields, with one highly suggestive exception: since the late 1990s it has occasionally been used in the neural network field by T. Nitta and others to solve the XOR problem in a single “neuron” and to reduce the number of “neurons” generally. Also suggestively, in particle physics it may be seen as a generalization of the Wick rotation time → i × time used by Richard Feynman and others to render wave mechanics classical. It retains the mathematical machinery and philosophy of Schrödinger’s wave mechanics but, instead of probability amplitudes as wave amplitudes, it yields classical but complex probability amplitudes encoding two directions of effect: “A acts on B, and B differently on A”. It maps to natural language where words relate to various types of real and imaginary scalar, vector, and matrix quantities. Dirac’s <bra|operator|ket> becomes the XML-like semantic triple <subject|relationship|object>.]
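
The following is a small self-contained Python sketch of split-complex (hyperbolic) arithmetic with hh = +1, showing how a single number can carry a pair of ordinary values, one for each direction of effect, that multiply independently when factors are chained. The class name and the (forward, backward) reading are our own illustration, not part of Q-UEL.

```python
# Illustrative sketch of split-complex (hyperbolic) arithmetic: h*h = +1.
# A value a + h*b carries two ordinary numbers at once, read here as a
# "forward" and a "backward" strength of effect between two things.

class SplitComplex:
    def __init__(self, re, im):
        self.re, self.im = re, im            # value = re + h*im

    def __mul__(self, other):
        # (a + h*b)(c + h*d) = (ac + bd) + h*(ad + bc), because h*h = +1
        return SplitComplex(self.re * other.re + self.im * other.im,
                            self.re * other.im + self.im * other.re)

    def components(self):
        # Idempotent basis: forward and backward parts multiply independently
        return self.re + self.im, self.re - self.im

    def __repr__(self):
        return f"{self.re} + h*{self.im}"

h = SplitComplex(0.0, 1.0)
print(h * h)                   # 1.0 + h*0.0, i.e. hh = +1

x = SplitComplex(0.5, 0.1)     # forward 0.6, backward 0.4
y = SplitComplex(0.7, 0.2)     # forward 0.9, backward 0.5
print((x * y).components())    # ~(0.54, 0.20) = (0.6*0.9, 0.4*0.5), up to rounding
```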

The QEXL Approach involves the following interdependent components.

  • Q-UEL (Probabilistic Inference + Phenomenon Of Interest): Addresses global issues that potentially pervade all human endeavors, and hence universal interoperability is of key importance
  • Kodaxil (Inference Engine + Semantic Inferencing): Project addressing universal meaning underlying diverse natural languages on the Internet, and the use of that in knowledge representation
  • Fluxology (Inference Engine + Decentralized Infra): A link infrastructure for intra- and inter-cloud interoperability and integration in a coherent high-level “metaware” environment. This component can also be explored to be replaced with simpler industry-ready solutions such as MarkLogic® Enterprise NoSQL Database on the Hadoop Distributed File System.

In an endeavor of this kind the partitions-of-work are inevitably artificial; it is important that this does not impede the integrity of optimal solutions. The most important aspect of The QEXL Approach is, in essence, that architecturally the Probabilistic Inference (PI) and the Data Architecture for the Inference Engine (IE) are designed to be cooperative: software robots are created while PI and IE interact, and the inference knowledge gained by the PI and IE provides rules for solvers (robots) to self-compile and conduct queries, etc. This is therefore the grandeur of the scheme: the approach will have facilitated programming by nice compilers so that writing the inference network is easy, but it is not required to write the inference net as input code to compile, with the exception of reusable metarules as Dirac expressions with variables to process other rules by categorical and higher-order logic. The robots are designed and programmed to do the remaining coding required to perform as solvers, so the notion of a compiler disappears under the hood. The robots are provided with well-formed instructions as well-formed queries. Once inferences are formed, different “what-if” questions can be asked: given that probability, or that being the case, what is the chance of… and so on. It is as if, having acquired knowledge, the Phenomenon Of Interest (POI) is in a better state to explore what it means. Hyperbolic Dirac Networks (HDNs) are inference networks capable of overcoming the limitations imposed by Bayesian Nets (and statistics) and creating generative models richly expressing the “Phenomenon Of Interest” (POI) by the action of expressions containing binding variables. This may be thought of as an Expert System, but analogous to Prolog data and Prolog programs that act upon the data, albeit here a “probabilistic Prolog”. Upfront should be stated the advantages over Bayes Nets as a commonly used inference method; but rather than compete with such methods, the approach may be regarded as extending them. Indeed a Bayes Net, as a static directed acyclic conditional probability graph, is a subset of the Dirac Net as a static or dynamic general bidirectional graph with generalized logic and relationship operators, i.e. empowered by the mathematical machinery of Dirac’s quantum mechanics.

 The QEXL Approach Theory :- Robson Quantitative Semantics Algebra (RQSA)

Developed by Dr. Barry Robson

Theory :- The QEXL Approach is based on Robson Quantitative Semantics Algebra – RQSA (link to the development of the algorithm – overcoming limitations of the gold-standard Bayesian Network – to solve uncertainty while developing a probabilistic ontology)

Impact Of The QEXL Approach

The QEXL Approach, creating a Probabilistic Ontology based on Clifford-Dirac algebra, has immense opportunity in advancing the architecture to tackle large looming problems involving Systems of Systems, in which vast uncertain information emerges. Generally, as such systems are designed and developed employing Cartesian methods, they do not offer a viable opportunity to deal with vast uncertain information when ridden with complexity, especially when the context complexity poses the need for multiple ontologies; such a system inherently defies Cartesian methods. The QEXL Approach develops into an ecosystem response while it overcomes the Cartesian dilemma (link to another example of the Cartesian Dilemma) and allows generative models to emerge, richly expressing the POI. The models develop generatively such that the POI behavior, abstracted sufficiently, lends the IE and the Solvers to varieties of studies based on evidence, and also allows for developing systemic studies pertaining to Complex Adaptive Systems and Complex Generative Systems afflicted by multiple cognitive challenges. In particular, The QEXL Approach has the potential to address complex challenges such as evidence based medicine (EBM), a mission that DoD's Military Health System envisions while it modernizes its Electronic Health Record System – the Veterans Health Information Systems and Technology Architecture (VistA). Vast potential also exists in addressing the Veterans Administration's (VA) Million Veteran Program (MVP), an effort by the VA to consolidate genetic, military exposure, health, and lifestyle information together in one single database. By identifying gene-health connections, the program could consequentially advance disease screening, diagnosis, and prognosis and point the way toward more effective, personalized therapies.

Although The QEXL Approach is currently targeted at the healthcare and pharmaceutical domains, where recognition of uncertainty is vital in observations, measurements and predictions, and in the probabilities underlying a variety of medical metrics, the scope of application is much more general. The QEXL Approach is to create a generic multivariate architecture for complex systems, characterized by Probabilistic Ontology, that by employing generative order will model the “POI”, facilitating the creation of “communities of interest” by self-regulation in diverse domains of interest that require the integration of disciplines to create complex studies. The metaphor of the “Cambrian Explosion” may aptly represent the enormity of the immense possibilities in advancing studies that tackle large systemic concerns riddled with uncertain information and random events that The QEXL Approach can stimulate.


The inference engine can be conceptualized into solutions such as MarkLogic NoSQL + Hadoop (HDFS). http://www.marklogic.com/resources/marklogic-and-hadoop/

It is interesting to note that in the genesis of evolving various NoSQL solutions based on Hadoop, a few insights have emerged related to the need for designing the components recognizing their cooperative existence.

The Goal of The QEXL Approach: Is all about Contextualization 

The goal of employing The QEXL Approach is to enable the realization of a cognitive multivariate architecture for Probabilistic Ontology, advancing the Probabilistic Ontology based architecture for context-specific applications such as Healthcare. Specifically, The QEXL Approach will develop PI that helps in the creation of generative models depicting the systemic behavior of the POI riddled with vast uncertain information. Generally, uncertainty in the vast information is introduced by the System of Systems complexity that is required to resolve multiple ontologies, standards, etc., which further introduce cognitive challenges. The further goal of The QEXL Approach is to overcome such challenges by addressing interoperability at all levels, including the ability to communicate data and knowledge in a way that recognizes uncertainty in the world, so that automated PI and decision-making are possible. The aim is semiotic portability, i.e. the management of signs and symbols that deals especially with their function and interactions in both artificially constructed and natural languages. Existing systems for managing semantics and language are mostly systems of symbolic, not quantitative, manipulation, with the primary exception of BayesOWL. RQSA, or Robson Quantitative Semantic Algebra, so named by its author Dr. Barry Robson to distinguish it from other analogous systems, underlies Q-UEL. It is the development of (a) details of a particular aspect of Dirac's notation and algebra that is found to be of practical importance in generalizing and correctly normalizing Bayes Nets according to Bayes' Theorem (i.e. controlling coherence, which ironically Bayes Nets usually neglect, as they are unidirectional), (b) merged with the treatment of probabilities and information based on finite data using the Riemann Zeta function that he has employed for many years in bioinformatics and data mining (http://en.wikipedia.org/wiki/GOR_method), and (c) the extension to more flavors of hyperbolic imaginary number to encode intrinsic “dimensions of meaning” under a revised Roget's thesaurus system.
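
As a rough sketch of point (b), the following Python fragment shows how an expected-information score for sparse counts can be built from partial sums of the Riemann zeta series at s = 1 (harmonic sums), which stay finite at zero counts and approach the log-ratio as counts grow. This is our own illustration of the flavor of the approach; the exact estimator used in the cited work may differ.

```python
# Illustrative sketch: an expected-information score for sparse counts built
# from partially summed Riemann zeta terms at s = 1 (harmonic numbers). It is
# finite even for zero counts and tends to log(n_for / n_against) for large counts.

import math

def zeta_partial(n, s=1.0):
    """Partial sum 1 + 2^-s + ... + n^-s of the Riemann zeta series."""
    return sum(k ** -s for k in range(1, n + 1))

def expected_information(n_for, n_against):
    return zeta_partial(n_for) - zeta_partial(n_against)

print(expected_information(3, 1))        # modest evidence from tiny counts
print(expected_information(300, 100))    # ~1.095, close to log(3) below
print(math.log(3))                       # ~1.099
```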

The Layers of the Architecture Created by The QEXL Approach

The QEXL Layered View

Layer 1- Contextualization: Planning, Designing driven by Theories 

A.    Probabilistic Ontology creating Inferencing leading into Evidence Based Medicine

i.     Aspects addressed by Q-UEL Tags and Kodaxil Inferencing

  1. Autonomy / Solidarity
  2. Inferencing (Kodaxil and Q – UEL)
  3. MetaData
  4. Security / Privacy
  5. Consented vs Un-consented Data
  6. Creating Incidence Rule (predicated – Q-UEL and Kodaxil)

ii.     Kodaxil:-  Enforcing Semantics across data sources (global text and data interoperability) – universal meaning underlying diverse natural languages on the Internet

iii.     Fluxology:- Logical Meta Data Cloud (A link infrastructure for intra- and inter-cloud interoperability and integration in an international setting)

  1. Adaptive
  2. Emergent Data Usage Patterns (networks of networks – enabled by Probabilistic Ontology rules)
  3. Modeless Emergent Hierarchies
  4. Federation and Democratization Rule for Data (contract, trust, certificates, quality)

B.    Development of Probabilistic Model Representing Universal Abstraction of Phenomenon Of Interest

C.   Targeting Architecture to Application

  • Evidence Based Medicine
  • Genomics
  • Systemic Healthcare Studies
  • etc

Layer 2 – A: Operational Architecture (Logical )

A.    Reference Architecture

  1. Business Con Ops (Use cases)
  2. Conceptual Target Solution Architecture

Layer 2 – B: Data Management – Data Ingestion and Processing 

  1.  The processing of entries in the source data into form suitable for data mining
  2. The data mining of that processed data to obtain summary rules
  3. The capture of the appropriate released summary rules for inference

B.    Data Storage and Retrieval, Transactions

  1. Secure Storage and Retrieval
  2. Enable Secure Transactions
  3. Secure Data Exchange among several stake-holders and data owners

C.    Data Lifecycle, Data Organization Rules, Data Traceability to the Events, 

  1. Security and privacy by encryption and disaggregation of the EHR in a manner that is balanced against authorized access for extraction of global clinical and biomedical knowledge.
  2. Mechanisms for fine-grained consent permitting sharing and data mining.
  3. Mechanisms for secure alerting of patient or physician by backtrack when an authorized researcher or specialist notes that a patient is at risk.
  4. Structure and format that allows all meaningful use cases to be applied in reasonable time, including large-scale data mining.
  5. Assemblies across sources and data users forming contextual work patterns
  6. Hardened Security Framework

D.    Large EHR repository scaling

E.    Data Mining Rules

F.     Extracting and creating Incidence Rules

G.    Experimenting, observing and creating Semantic Inferences

H.    Visualization 

The two layers below can be implemented on a variety of Big Data platforms such as Hortonworks, Pivotal, and Altiscale.

Layer 3 – Application Layer (Schema-less for structured and unstructured Knowledge Repository – KRS)

Layer 4 – Infrastructure Architecture (Physical) (Hadoop and MapReduce for Large Data File-management and Processing; and Distributed / Concurrent Computations)