I recently had the opportunity to interview my friend and colleague Prof. Kalyanmoy Deb, and I’m pleased to share it with you here.

Prof. Deb is the Koenig Endowed Chair Professor in the Department of Electrical and Computer Engineering at Michigan State University, USA. Prof. Deb received his bachelor’s degree from the Indian Institute of Technology Kharagpur in 1985 and his master’s and doctoral degrees from the University of Alabama, Tuscaloosa, USA in 1989 and 1991, respectively. Kalyanmoy’s research interests are in evolutionary optimization and its application to multi-criteria optimization, modeling, and machine learning. He has been a visiting professor at various universities across the world and has received numerous awards, including the Infosys Prize, the TWAS Prize in Engineering Sciences, the CajAstur Mamdani Prize, the Distinguished Alumni Award from IIT Kharagpur, and the Edgeworth-Pareto award. He’s a Fellow of IEEE, ASME, and three Indian science and engineering academies. He has published two textbooks on evolutionary algorithms and over 470 research papers, with Google Scholar citations now exceeding 100,000 and an h-index of 102. He also serves on the editorial boards of 20 major international journals. More information about his research can be found at http://www.egr.msu.edu/~kdeb.

DF: First, thanks very much for your time with this interview. I want to say congratulations on going over 100,000 citations on Google Scholar.

KD: Thank you, Dave, for your wishes and for giving me this unique opportunity. I would also like to take this opportunity to acknowledge and appreciate your many years of leadership and active research in promoting the evolutionary computation field.

DF: Thanks very much. I remember when we were each starting out in evolutionary algorithms back in the 1980s that even getting 1 citation was pretty nice. One hundred thousand is quite an achievement.

KD: Thinking back, my first journal paper on applying a genetic algorithm (GA) to an engineering problem took more than a year to come back from review, with questions like “Is there any proof of the GA’s convergence to an optimum solution?” The paper was eventually published, but only after being reduced to a three-page short paper emphasizing the results I obtained through the GA procedure, rather than allowing me to discuss much about the algorithm. I realized then that it would be a difficult journey to build a career on the subject, but I was too engrossed in, and optimistic about, evolutionary computation’s prospects in search and optimization.

DF: I’ve found that particularly for people who got started in evolutionary algorithms at an early stage, there was a specific motivation or a moment of epiphany. Obviously for me, that came from having my dad be a pioneer in the field so I’m not sure I had any choice in it. [Laughing] But, what motivated you to start researching evolutionary algorithms?

KD: It goes back to my University of Alabama days in 1987, where I went to do my master’s degree. In the very first semester, I was supposed to choose one elective course in addition to three compulsory courses. I met several professors in search of a suitable course, and then for the first time I heard “genetic” and “algorithms” put together, in a course offered by Dave Goldberg. I was extremely curious to know what genetics had to do with engineering, as from my childhood days I had been fascinated with nature’s evolutionary process. Dave was extremely motivating and instantly gave me a few of his papers demonstrating the use of GAs in engineering problems. As the course progressed, I grew more fascinated by the GA’s appeal, scope, and breadth. Pretty quickly, I signed up with Dave to work on my thesis.

DF: Before we move to the present day, what are your recollections of doing evolutionary algorithms at first with computers that were at the speed of, say, a Mac II? (For those who remember what a Mac II is.)

KD: My first GA code was in Pascal, supplied to us by Dave Goldberg for our GA class. We were using a personal computer (a PC-386) then, and its memory limitations allowed a maximum of 100 population members, each having about 100 bits. I remember feeling proud when I modified the code to run larger problems by saving and reading additional population members in and out of a file, instead of storing all population members in the PC’s RAM. Every little achievement was fun and seemed like a breakthrough then.

DF: [Laughing] Oh my, I did that too! Also, that helped when a machine crashed because you had a backup of whatever the most recently written generation was. Good times! OK, let’s jump forward to the area that you’re best known for, which is multi-objective optimization. For readers who aren’t familiar with the area of multi-objective optimization, how does this differ from standard optimization over multiple variables?

KD: Most of the optimization literature is built on the idea of single-objective optimization, in which the sole purpose is to minimize or maximize a single goal or a single objective function. This is great if an application demands a sole focus on minimizing weight or cost, or maximizing a single performance metric. When such an optimization task is performed, it usually results in a very specialized solution that is best for the chosen objective. However, the solution is almost useless for other equally important goals or objectives. Most pragmatic engineering problem-solving tasks demand simultaneous optimization of more than one conflicting objective, such as minimizing weight while simultaneously maximizing a performance metric.

Clearly, in these multi-objective optimization problems, no single solution is an absolute optimum for all objectives. These problems give rise to a set of optimal solutions, known as Pareto-optimal solutions. The important point is that every Pareto-optimal solution is, by definition, an optimal solution of the problem, making it best for a specific aggregation of all objectives. Thus, the goal in solving these problems is to find as many distinct Pareto-optimal solutions as possible, so that a good idea of their range and trade-off information can be gathered before choosing a single preferred solution. Evolutionary multi-objective optimization (EMO) methods use the concept of non-dominance – a partial-ordering concept from mathematics – together with a diversity-preservation technique for finding multiple well-converged and well-diversified Pareto-optimal solutions over generations.
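To make the non-dominance concept concrete, here is a minimal sketch in Python (an editorial illustration, not code from any NSGA implementation), assuming every objective is to be minimized:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the solutions not dominated by any other solution."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example with two objectives, e.g. (weight, cost):
# (1, 5), (2, 3), and (4, 1) trade off against each other,
# while (3, 4) is dominated by (2, 3) and is filtered out.
front = non_dominated([(1, 5), (2, 3), (4, 1), (3, 4)])  # → [(1, 5), (2, 3), (4, 1)]
```

Each of the three surviving solutions is best under some weighting of the two objectives, which is exactly why a multi-objective method aims to return the whole set rather than a single point.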

DF: Do you remember the moment or moments that led to your coming up with your approaches for multi-objective optimization? How did that go?

KD: Back in the early nineties, when I started my academic career and was visiting industries for potential applications, I realized that the existing evolutionary algorithms needed to be made more efficient in a few key areas: (i) mixed real-integer parameter handling, (ii) constraint handling, and (iii) handling multiple objectives. Although I proposed some ideas on the first two topics, and they have become quite popular over the years, let me talk about a moment that motivated me to focus more on multi-objective optimization. It was an incident at a steel-making company that I visited in 1993 to discuss a possible optimization-related project. After a tour of the plant that housed the object of our proposed project, I asked my hosts a simple question: what was the criterion that I was supposed to optimize to come up with an improved design? Each of my two hosts had a different viewpoint on what made for an improved design. It then dawned on me that it is not a straightforward matter to come up with a single criterion that’s satisfactory to everyone. There are multiple views, even within a single industry, on what makes a design better. This led me to look at the evolutionary computation (EC) and optimization literature for prior studies on multi-objective optimization. While the classical literature had many years of studies with “generative methods,” in which one Pareto-optimal solution was targeted at a time in a sequential manner, the population approach of EC enables it to find multiple Pareto-optimal solutions in a single simulation run. After a few early studies and suggestions, I proposed the “Non-dominated Sorting GA” or NSGA in 1995 with my student N. Srinivas.

DF: And that started to get a lot of attention.

KD: It did. After the publication of the NSGA paper, I began getting increasing attention from interested researchers and practitioners. However, NSGA had one issue that I wasn’t very happy about: it required users to set a diversity parameter for every problem. After some experimentation, in 2000, my students and I finally came up with the next version – NSGA-II – which eliminated the diversity parameter completely. I was fortunate to have three talented undergraduate students at IIT Kanpur – Sameer, Amrit, and Meyrivan – to do this study.
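For readers curious about what replaced that per-problem diversity parameter: NSGA-II uses a parameter-free crowding-distance measure. The sketch below is an editorial, simplified rendering of that idea (not the original NSGA-II code), again assuming minimization objectives stored as tuples:

```python
def crowding_distance(front):
    """Crowding distance for a non-dominated front (list of objective tuples):
    for each solution, sum over objectives the normalized gap between its
    nearest neighbours; boundary solutions get infinite distance so they are
    always retained, spreading the population with no user-set parameter."""
    n = len(front)
    dist = [0.0] * n
    for m in range(len(front[0])):  # loop over objectives
        order = sorted(range(n), key=lambda i: front[i][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if f_max == f_min:
            continue  # degenerate objective: all values equal
        for k in range(1, n - 1):
            gap = front[order[k + 1]][m] - front[order[k - 1]][m]
            dist[order[k]] += gap / (f_max - f_min)
    return dist

# The two boundary solutions get inf; the middle one gets a finite value.
d = crowding_distance([(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)])  # → [inf, 2.0, inf]
```

Selection then prefers solutions with larger crowding distance among equally non-dominated ones, which pushes the population toward an even spread along the front.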

DF: Was there a moment where you hit upon the key step from NSGA to NSGA-II?

KD: While we had a rough idea of our proposed approach, the four of us met on a Saturday morning in my computer lab. While I narrated a version of the idea for Sameer to code and produce some results, I moved to the next seat to discuss a slightly different idea with Amrit. Then I discussed another related idea with Meyrivan. The students were so quick and adept with their coding skills that by the time I finished with Meyrivan, Sameer wanted me to check some results he was getting on test problems. The parallel processing was very effective, and their skill and zeal enabled us to come up with the NSGA-II procedure by noon!

DF: What is the state of the art in multi-objective optimization now? What is the limitation of that state of the art?

KD: NSGA-III is our latest approach, which is capable of solving many-objective problems involving up to 20 objectives. I’d say the current state of the art in evolutionary multi-objective optimization (EMO) is (i) many-objective optimization using EC (involving more than three objectives), (ii) surrogate modeling approaches in EMO, (iii) set-based performance metrics, (iv) visualization of many-dimensional Pareto-optimal sets, (v) interactive EMO and decision-making approaches, (vi) multi-objectivization methods for aiding single-objective optimization through additional helper objectives, and (vii) handling practicalities such as uncertainties in decision variables, noise, dynamically changing objectives, and others. A few years ago, I proposed an “innovization” approach (creating innovation through optimization), which extracts features associated with decision variables and objectives that commonly appear in multiple Pareto-optimal solutions. These features bring out vital knowledge about high-performing solutions, often leading to innovative solution principles.

DF: For me, I’ve found thinking about how to visualize solutions to a problem to be very helpful when it’s possible. Of course, that’s easier in two or three dimensions. What are your thoughts on this, particularly as the number of dimensions in a problem increases?

KD: Yes, extending the EMO idea to address a large number of objectives brings a number of apparent difficulties – effective visualization of the Pareto-optimal set, and the need to find an exponentially larger number of solutions to represent the Pareto-optimal hypersurface. Also, with many objectives, most solutions become non-dominated with respect to each other, making it difficult for algorithms to progress towards the Pareto-optimal region. While the optimization aspect can be improved and made faster with parallel computing and smart algorithms, effective visualization for choosing a preferred trade-off solution remains an immediate, challenging task.

DF: When moving from a Pareto set of solutions to a particular engineering implementation, we often have to choose a single solution. What rationale do you use to help select a solution, presumably from the Pareto front?

KD: This is an area EMO researchers have not paid much attention to yet. Every time I hear an EMO application presentation, I am always interested in this aspect, but in almost all cases the presentation ends with a visual representation of the obtained trade-off solutions, and there is no mention of how a single preferred solution might be chosen for the specific problem.

DF: That might be suggestive of a lack of real-world experience since it’s a natural thought for someone who’s worked on a lot of real-world problems.

KD: I guess the practitioners are still grappling with having not one but multiple trade-off solutions, and we may have to give them a bit more time before they usher in an algorithmic path to decision making. However, multi-criterion decision making (MCDM) researchers have traditionally focused on this aspect, and I have tried to mingle with them, borrow some MCDM ideas, and integrate some of them with EMO. These go by the names of reference-point-based NSGA-II, light-beam-based NSGA-II, epsilon-MOEA, and cone-domination-based EMO, if someone would like to look them up. But I believe there can’t be a single approach that’s ideal for choosing a preferred Pareto-optimal solution across various problems. This is an area where we have to do a lot more work, quickly and conclusively.

DF: Probably some extension of the no free lunch theorem there, Kalyan.

KD: You are right! There is no free lunch for a specific MCDM method. Preference information and preference elicitation take different forms in different problems. Simple preference information, such as “the cost objective is twice as important as the emission objective,” is too naïve to be applied in an agreeable manner. Moreover, there are usually multiple decision-makers in practice, requiring flexible group decision-making approaches to be developed. After more than 23 years of work in this area, I now believe that more emphasis must be placed on developing interactive EMO-MCDM methods in which, instead of a serial application of EMO followed by a decision-making approach to choose a preferred solution, the decision-making task is performed during the EMO run. Decision-makers may intervene and analyze the current-best non-dominated solutions while EMO iterations are underway and elicit their preferences. The EMO methodology is then modified by the suggested preferences, and improved solutions in the preferred subregions can be targeted. For many-objective optimization problems, such interactive methods appear viable.

Nevertheless, EMO’s demonstrated ability to find and maintain multiple trade-off solutions for a multi-objective optimization problem has already attracted non-EC researchers and practitioners to the EC field. A few commercial software companies (Red Cedar Technology and Esteco, for example) are thriving on EMO methodologies, providing industries with a platform to solve their multi-objective optimization problems. Major EC conferences usually have packed programs of EMO sessions and tutorials, as with your upcoming IEEE SSCI event in Honolulu. All the excitement and developments underway in EMO are also raising more interesting questions, opening up avenues for further research, and proliferating into other domains. One aspect is pretty clear to me – EMO and its popularity and applicability are not likely to fade away anytime soon, at least not in our lifetime.

More information about Kalyan’s work can be found at his website at MSU: http://www.egr.msu.edu/~kdeb and from his COIN lab site: https://www.coin-laboratory.com.

© David Fogel, 2017