Interview with Professor Leonid Perlovsky, Visiting Scholar at Harvard University, Principal Research Physicist and Technical Advisor at the United States Air Force Research Laboratory, Hanscom Air Force Base. (Interviewed by Dr. David Fogel, co-chair, 2017 IEEE Symposium Series on Computational Intelligence.)
Prof. Leonid Perlovsky is a long-standing member of the computational intelligence community. He has authored or co-authored eight books, including Neural Networks and Intellect (Oxford, 2001), Emotional Cognitive Neural Algorithms with Engineering Applications (Springer, 2011), and Music, Passion, and Cognitive Function (Academic Press, 2017). Prof. Perlovsky has devoted considerable attention to the physics of the mind and to how we can create algorithms that model cognitive processing. You can read more about his background at www.leonid-perlovsky.com. I had the opportunity to ask him some questions about the 2017 IEEE Symposium on Computational Intelligence in Cognitive Algorithms, Mind, and Brain, which will be held in Honolulu, HI, at the Hilton Hawaiian Village Resort as part of the 2017 IEEE Symposium Series on Computational Intelligence (Nov. 27-Dec. 1).
DF: Let’s start by asking: what are cognitive algorithms?
LP: Cognitive algorithms are algorithms based on models of the mind. It is generally accepted that the mind is more powerful than computers at solving many complex problems. At the same time, computers can potentially use more memory and perform more operations per second than the brain. Cognitive algorithms aim to use powerful models of the mind to design algorithms that can be implemented on computers.
DF: I’d like to address what these algorithms can do in terms of problem-solving, but let me first ask something more esoteric. Many people would say the “mind” is amorphous. How are researchers in your field applying computational intelligence to study or emulate the mind?
LP: Is the mind really amorphous? It’s possible to construct a theoretical physics of the mind based on the same methodology as found in all other areas of physics. Here, I have to introduce some terminology that may not be familiar to your audience, but they can read more in published papers to better understand the foundations of this field. Researchers in our field (including me) have created a theory that describes perception, cognition, language-cognition interactions, and the aesthetic emotions that satisfy what I call our “knowledge instinct.” Together, these allow for the improvement of conceptual knowledge, the mental hierarchy, and even the contents of the “top” mental concept-models. In this theory, these top contents unify our entire life experience and are perceived as the highest meaning, or “the meaning of life.” Aesthetic emotions satisfying the knowledge instinct at the top of the hierarchy are “emotions of the beautiful.” These unexpected relations between the “beautiful” and the “meaning” have been confirmed experimentally. Another unexpected model prediction is the cognitive function of musical emotions, which could not be explained by Aristotle, Kant, or Darwin, who called music the greatest mystery. The fundamental cognitive function of music is to help overcome cognitive dissonances. I assert that understanding the cognitive functions of the “beautiful” and of music is necessary in order to eventually construct robots with human abilities.
DF: Can you point to examples where algorithms based on this theoretical framework have been used in practice?
LP: Yes, let me first discuss engineering applications and then switch to modeling the mind. Cognitive algorithms have solved many engineering problems that could not be solved before. These include clustering in the presence of noise while separating noise from clusters of interest, and computing regression models in noise while separating noise from signals of interest. Other examples are tracking moving objects in noise; fusing and integrating signals from multiple sensors or databases; cyber-defense, by separating malware packets from benign ones; correlating genes with diseases when multiple genes might be at fault; and helping doctors treat patients by predicting the progress of treatment. The theory has also explained mechanisms of the mind that had not been understood. Even “simple” object recognition has caused problems for pattern recognition techniques and, more broadly, for artificial intelligence for decades. We discovered the fundamental reason for these difficulties. For a while they were attributed to computational complexity: an exorbitant number of computations was often required even for relatively simple problems. Recently, this computational complexity has been understood, surprisingly, as a consequence of classical logic. Computational complexity is as fundamental as Gödelian incompleteness: when classical logic is used in a finite system, such as a computer or a brain, Gödelian arguments lead to computational complexity. A new area of mathematics, dynamic logic, has been developed to overcome this difficulty. This enables modeling mechanisms of the mind that have not been previously understood.
DF: Such as?
LP: One fundamental mechanism of the mind is a process “from vague representations to crisp representations.” This makes it possible to understand our highest cognitions and emotions (for example, emotions of the beautiful, the purpose of life, and musical emotions and their role in cognition). These innovations will stimulate significant changes in many areas of psychology, both theoretical and experimental. Such fundamental changes will require a new generation of psychologists with a much better understanding of mathematics, capable of employing dynamic logic. It is not really that difficult mathematically, but it requires a new intuition.
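For readers who think in code, the following sketch is one way to illustrate the vague-to-crisp process. It is not code from Prof. Perlovsky’s publications; the variable names, the annealing schedule, and the uniform “clutter” model are assumptions made purely for illustration. Two cluster models start deliberately vague (wide), data points are softly associated with them, and the model width shrinks as the parameter estimates improve, so the representations become crisp while the uniform model absorbs the noise.

```python
import numpy as np

# A minimal "vague-to-crisp" clustering sketch (illustration only): models start
# wide (vague), data-to-model associations are soft, and model width shrinks as
# estimates improve, while a uniform "clutter" model absorbs background noise.

rng = np.random.default_rng(0)

# Synthetic 1-D data: two clusters plus uniform noise on [0, 10].
data = np.concatenate([
    rng.normal(3.0, 0.3, 100),
    rng.normal(7.0, 0.3, 100),
    rng.uniform(0.0, 10.0, 100),
])

means = np.array([1.0, 9.0])   # deliberately rough initial guesses
sigma = 3.0                    # start with very vague (wide) cluster models
noise_density = 1.0 / 10.0     # uniform clutter model over the data range

for step in range(50):
    # Likelihood of each point under each cluster model (vague at first).
    lik = np.exp(-0.5 * ((data[:, None] - means[None, :]) / sigma) ** 2)
    lik /= np.sqrt(2.0 * np.pi) * sigma
    # Soft association of each point with each model, with clutter competing.
    assoc = lik / (lik.sum(axis=1, keepdims=True) + noise_density)
    # Update cluster centers from their softly associated points.
    means = (assoc * data[:, None]).sum(axis=0) / assoc.sum(axis=0)
    # Let the models become crisper (less vague) as the estimates improve.
    sigma = max(0.3, sigma * 0.9)

print("estimated cluster centers:", np.round(np.sort(means), 2))
```

Readers interested in the exact formulation should consult Prof. Perlovsky’s books and papers on dynamic logic rather than this sketch.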
DF: Based on your experience, how close do you think we are to creating artificial brains that can be compared to, say, any mammal’s brain? Or should I say even any invertebrate’s?
LP: Let me aim higher: can we create an artificial brain with capabilities similar to a human’s? That is really the question. I believe we are potentially close to creating artificial brains with human capabilities. The difficulty, possibly, is not that we have no ideas about how to proceed, but that there are many ideas. The majority of researchers do not know which research directions would lead to human brain capabilities and which would instead solve complicated but narrow engineering problems.
DF: What have been the most interesting developments in applying computational intelligence to the mind and brain in the past 5 years?
LP: For me, the most interesting recent developments in applying computational intelligence to the mind and brain come in the physics of the mind. On the one hand, this progress is well recognized: for example, the journal Physics of Life Reviews has achieved an impact factor of 9.5. On the other hand, relatively few researchers developing artificial minds or working on their theory know about this direction, despite the high impact factor. For example, dynamic logic describes a fundamental mechanism of the mind as I’ve mentioned: the evolution from vague representations to crisp ones. This is helpful in many engineering problems, from perception to language and cognition. When I give a lecture on this, it takes only a few minutes for me to write the equations, but practical understanding requires solving problems and developing the necessary intuition.
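Since the interview does not reproduce those equations, a schematic form is sketched below for orientation. The notation is condensed, and the published accounts (for example, Neural Networks and Intellect and the Physics of Life Reviews papers) remain the authoritative statement, so treat this as an approximate summary rather than the exact formulation.

```latex
% Schematic dynamic logic equations (condensed; see the publications for the exact form).
% Data signals X(n), n = 1..N, are matched to concept-models M_m(S_m, n) with
% parameters S_m; r(m) are model priors/rates.

% Total similarity between data and models, to be maximized:
\[
  L = \prod_{n=1}^{N} \sum_{m=1}^{M} r(m)\, l\bigl(X(n) \mid M_m(S_m, n)\bigr)
\]

% Fuzzy association variables between signal n and model m:
\[
  f(m \mid n) = \frac{r(m)\, l\bigl(X(n) \mid M_m\bigr)}
                     {\sum_{m'} r(m')\, l\bigl(X(n) \mid M_{m'}\bigr)}
\]

% Model parameters evolve along the gradient of the log-similarity:
\[
  \frac{dS_m}{dt} = \sum_{n=1}^{N} f(m \mid n)\,
                    \frac{\partial \ln l\bigl(X(n) \mid M_m\bigr)}{\partial S_m}
\]

% The vagueness of the likelihoods l (e.g., their variances) starts large and is
% reduced as the estimates improve: the "vague-to-crisp" process.
```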
DF: What are you working on in this area?
LP: Recently I have been working on aesthetic emotions, the emotions that motivate learning. A central part of the theory is the knowledge instinct, which drives learning. We may perceive aesthetic emotions as those that satisfy this instinct. Many of my recent publications discuss related topics, and they are openly available, so typing my name into a web search will bring you to them. A recent one is a book, Music, Passion, and Cognitive Function. It explains why music has such a hold over us, and it goes well beyond music. It discusses the theory and the experiments testing its predictions.
DF: What led you to this work?
LP: I was led in this direction by an attempt to understand music. Why did such a strange ability appear in evolution? It took me several years to solve this problem. The first step was to understand emotions of “the beautiful.”
DF: What do you think someone not working directly in this area of computational intelligence would gain by attending your symposium?
LP: Cognitive algorithms provide a general approach to artificial intelligence and machine learning, so any area of computational intelligence can benefit. My book, Music, Passion, and Cognitive Function, discusses why music affects us so strongly, and the book actually goes well beyond music. A previous book that I co-authored, Emotional Cognitive Neural Algorithms with Engineering Applications: Dynamic Logic: From Vague to Crisp, discusses algorithms for many applications. At the conference, I’ll discuss the main ideas and intuitions of everything we’ve addressed here, so that the general theory can be applied to solving many problems. I’ll also discuss the equations of dynamic logic, how to solve them, and how to apply them to typical problems.
Contact information:
Leonid Perlovsky: Visiting Scholar, School of Engineering and Applied Sciences, Harvard University, 336 Maxwell Dworkin, 33 Oxford St., Cambridge, MA 02138; (617) 495-7871; Fax: (617) 496-6404; lperl@rcn.com.
David Fogel: Natural Selection, Inc., 6480 Weathers Pl., Suite 350, San Diego, CA 92121; (858) 455-6449; dfogel@natural-selection.com; www.natural-selection.com.
The URL for the 2017 IEEE Symposium Series on Computational Intelligence is forthcoming.
To read more:
L. Perlovsky and R. Kozma (eds.), Neurodynamics of Cognition and Consciousness, Springer, 2007.
L. Perlovsky, Music, Passion, and Cognitive Function, Academic Press, 2017.
(C) David Fogel, 2017