Leaders Shaping the Digital Landscape
Aug. 1, 2023

The Power of AI

In a recent episode of Tech Leaders Unplugged, host Tullio Siragusa sat down with Irv Lustig, Optimization Principal of Princeton Consultants, to discuss Artificial Intelligence and how to use it to master optimization in your toolkit.

#ai #artificialintelligence #optimization #optimizationstrategies #liveinterview #podcast #techleadersunplugged

Transcript

Carlos Ponce (00:00):

Good morning everyone. Welcome to another episode of Tech Leaders Unplugged. This is your host, Carlos Ponce, and today we're hosting our guest. And of course, I'm joined... I'm sorry, I almost forgot that we couldn't have this episode without my fellow teammate, Wade.

Wade Erickson (00:38):

Nice to meet you.

Carlos Ponce (00:41):

And let me just frame all of us here so that everyone can watch us. So, we have Wade Erickson here with us as co-host, and today we're going to be speaking with our guest, Irv Lustig. He's the Optimization Principal at Princeton Consultants. So, Irv, welcome to the show.

Irv Lustig (01:01):

Oh, thank you for having me.

Carlos Ponce (01:02):

Absolutely. Our pleasure. So, wait, I'm sorry. Irv, let's start with you first. Tell us a little bit about you, you know, a little bit about your background, where you're coming from, and then tell us about the company, and we'll go from there. Thank you.

Irv Lustig (01:19):

All right. Yeah, sure. So, I've had a long career. The PhD I received in 1987 from Stanford was in an area called operations research, and my thesis advisor was a famous guy by the name of George Dantzig. He's what's known as the father of linear programming, which started out the optimization area back in the late 1940s. After Stanford, I went on to teach at Princeton University for six years, developed some new algorithms, and then joined a small software company called CPLEX, which got acquired by a bigger software company called ILOG, which got acquired by an even bigger company called IBM that everybody's familiar with. I spent time at IBM, and my career there ranged from development of software products to marketing to sales, where we were integrating the sales organization. In my last few years at IBM, I was in IBM Research, where I got back to my technical roots, looking at areas where we could marry what I do in optimization, along with other colleagues, with the then-emerging area of predictive analytics. And then I left IBM just over eight years ago and joined Princeton Consultants. Here I lead our optimization sales efforts as well as the architecture and implementation of our optimization-based projects for customers across a wide variety of industries.

Carlos Ponce (02:55):

Great. Thank you so much, Irv. And what about Princeton Consultants? What can you tell us about the company? How did it all start?

Irv Lustig (03:04):

Sure. Yeah. So, Princeton Consultants has actually been in business for over 40 years. The founders were a couple of Princeton University grads. It started off as a management consulting company, and as it evolved, our tagline became management consulting and information technology. So, we do a lot of work that has nothing to do with AI or advanced analytics, a lot of work in the transportation industry; all the major US railroads are our clients. We end up building decision support systems, based on software, to help them operate their businesses. And as part of that, we'll also do optimization work and other areas of analytics and AI that we apply to help them make better decisions. We have a special focus in transportation, rail as well as trucking, but in optimization we do everything from finance to healthcare to government to supply chain; I'm probably leaving out a few. It's all over the place. And one reason I came here is really the management consulting side of the house. Having been involved with optimization since I left grad school, I would often run into people who said, I've got this great optimization thing and it can save my company millions of dollars, but it wasn't getting used. And the reason was that when you're implementing these projects, and this is true for optimization or for any type of AI project, you have to think about the effects on the organization as a whole. How are their operations going to change? How are people going to change the way they do their daily work? How will the customers interact differently? That's a very soft skill. So, we have a number of people here who are really skilled in these areas, things related to change management that are important when you're out there actually deploying these solutions into the business. I've learned from them, and I'm getting better at it myself. We've been doing that kind of work for 40-plus years; I've only been here eight and a half, and over that time the firm has done something on the order of 1,800 different projects for a large number of different companies.

Carlos Ponce (05:19):

Well, certainly 1,800 different projects say a lot about your experience in this particular field, especially now with AI. It's been around for a long time, but in this day and age there's a lot of hustle and bustle about AI, so we're all looking into what's going on, right? I look forward to this conversation and to jumping into the topic you chose for today: "The Power of AI: Mastering Optimization in Your Toolkit." So, let's start with the topic itself. Why did you choose this particular topic, and why did you feel it was relevant for this day and age? Can you help us understand that part, please?

Irv Lustig (06:08):

Yeah, for sure. So, optimization has been around for a long time; as I mentioned, Dantzig, the founder of the field, kind of started out in the forties. And it's one of these technologies that is actually used to make better decisions with limited resources. It's around you every day and you don't even realize it. As examples, the airlines all use optimization to determine which planes go on which flights. You don't want big planes on flights with low demand, and you don't want small planes on flights with high demand, but then the flights have to connect, right? I live in Miami; if a plane goes from Miami to Dallas on American Airlines, where is that plane going to go next? How do you schedule the maintenance? And not only that, how do you schedule the crews, the pilots and the flight attendants? Another example: take a can of your favorite beverage. How did that can get to your store? There was transportation involved, there are distribution centers involved; it's all part of the supply chain. We all heard during Covid about a lot of supply chain disruptions. The planning of what those supply chains look like is actually all done with optimization. Speaking to some of the kinds of projects I've worked on here at Princeton, we did a project in coordination with the US Census Bureau for the 2020 census, and I'm going to use this project as an illustration of the kind of impact optimization can have. So, the census has two parts; we all just went through it a few years ago. For the first part, traditionally they would send out a form that you would fill out at your house. In 2020, they changed that to give you a postcard and an internet link; you would go to the internet, fill out your form, and give your responses for the census. About 60 to 65% of households respond that way. But the US Constitution mandates that every single person in the US gets counted. So, the next step is the Census Bureau hires enumerators; in the past it was around 500,000, and in 2020 it was 300,000, basically going around and knocking on the doors of all the people who didn't respond to get them to fill out their responses. In 2010, the way this was done is a supervisor of these enumerators would meet them at, like, a McDonald's or a Starbucks, give them a bunch of cases to work on, and the enumerators would then work the cases until completion. Generally, they'd visit each household up to six times, and then say, I couldn't figure it out. The enumerators work each case on their own time, they do the cases in any order they want, and they're paid by time and mileage. This was changed in 2020. They used optimization, and also some other types of machine learning techniques, to give each enumerator, on a cell phone, the cases they were going to work and the order they were going to do them in. What this did is take about two and a half billion dollars of cost out of the US census operations. They didn't need as many enumerators, and the enumerators were twice as productive; they were basically completing twice as many cases per hour as they were under the previous system. And this was all driven by the fact that optimization was able to give them the best routes and also assign them the work.
Because within an area, say a county, you've got multiple enumerators. You want to give them work close to their house, because they're paid from when they drive from their house to the first place they visit. So, you don't want to send an enumerator from South Miami up to North Miami; it doesn't make any sense. Doing that efficiently took out this cost and made the census a lot more efficient. And then it turned out this was all planned for April 1st, 2020, but we all remember where we were on April 1st, 2020: we were all locked down, as were the enumerators. So, the whole enumeration process was delayed until August to October of that year. And the optimization algorithms that were used to do this basically made the census work; they could not have done it were it not for this type of automation, driven by optimization techniques, delivering that kind of value. And I can go on and on with example after example. So, the reason I chose this topic is that a lot of the focus in AI today is on machine learning. And what optimization is really about is making decisions about your resources and how to allocate them efficiently. Going back to my census example: I've got enumerators; how do I make them more efficient? How do I make them more productive? It's kind of the next level after machine learning. With machine learning, in a simple way, I can take an image and say, that's a cat and that's a dog. Well, okay, but once I know it's a cat and a dog, what do I want to do about it? So, it's really about driving the decisions that come out and being able to automate those decision processes, whether it's planning, allocation, or scheduling tasks. All these types of things are done with optimization.
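To make the core idea concrete, here is a minimal sketch of matching workers to nearby work as a small assignment problem in Python. This is not the Census Bureau's actual system, and all the enumerators, clusters, and distances are invented for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = miles from enumerator i's home to case cluster j (hypothetical)
cost = np.array([
    [ 4, 12, 30, 25],   # enumerator living in South Miami
    [11,  3, 22, 18],   # enumerator living in Coral Gables
    [28, 20,  5,  9],   # enumerator living in North Miami
    [26, 17,  8,  4],   # enumerator living in Miami Beach
])

# Find the one-to-one assignment that minimizes total driving miles
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"enumerator {i} -> cluster {j} ({cost[i, j]} miles)")
print("total miles:", cost[rows, cols].sum())
```

The assignment solver used here is one of the classic tools of operations research; the real census problem also had to sequence each enumerator's cases into a route, which makes it considerably harder.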

Carlos Ponce (11:29):

Yeah. Well, what I'm hearing, and then I'm going to let Wade talk, is that the role of optimization is kind of supercharging AI performance and AI systems, that kind of thing, right? So, it's kind of AI on steroids. Okay, Wade, I've got a bunch of questions popping up in my head, but please, be my guest.

Wade Erickson (11:52):

Yeah. So obviously a lot of the listeners who watch this show come from the tech space, and we've seen quite a frenzy around AI. AI's been around a long time, but really in the last while, ChatGPT and all that stuff has exploded the interest in AI and in inserting it into the software products that are available through SaaS and those kinds of things. Can you talk a little bit about the process that's involved when you're presented with a project: how optimization goes into the design thinking and the evaluation of the current workflows in the application? Where do you insert your background and intellect into that process? And how does that show up in the software development teams that insert the new algorithms, models, and methods to optimize? Can you talk a little bit about that process, and how you leverage your in-house talent and maybe look for outside talent? There are developers out there who want to get into AI; what do they need to do to prepare themselves to be part of a team like that?

Irv Lustig (13:10):

Oh, that's a big, long question there, Wade. So, what we first really look for are business processes where typically people are making decisions in spreadsheets or on whiteboards about allocating resources. Often in the scheduling area, they're doing things like figuring out a schedule in a spreadsheet and trying to understand how they can juggle things around, et cetera. It's often a manual process. So, we first look at how they are making decisions today and what data they are using to inform those decisions. The first step of the process is to say, okay, you're making these decisions, but what information and data are you using? Is the data available? Sometimes the data's sitting in people's heads, right? It's, well, we've always done it this way; I just know that this customer needs to be serviced by this engineer on a route, or something like that. So, we first look at the decision problem, and then we go out and find the data that's going to drive the decisions, figure out where the data is located, and get it organized. We write documentation that says, here's all the data, these are the tables and how they interrelate, et cetera. The next step is to write a mathematical model, and that's kind of the secret sauce of optimization: you have to have the talent and skill to write those kinds of models. The area it's traditionally been taught in is where my degree is, operations research. Now you'll find optimization courses in many business analytics and data science programs, mainly because a lot of the folks like me in the academic world have gone on and said, hey, we've got to reorient our programs; we want to teach people optimization. There's an education you have to get; you can read some books, but for the real hard stuff, there's an art and a science to writing a good math model. What you're basically doing is saying, I have all this data, and I'm going to mathematically define my decisions as variables. Then you also write out constraints: how are these decisions constrained? This is all part of understanding the business process. If they're allocating resources, well, I've only got 10 people to schedule; okay, that's a limit. There are only 10 people to work with, or whatever other types of limits might exist. And then you also look at what's typically called an objective function. This is the key part of optimizing: you define what it is, maybe minimizing costs, or minimizing the amount of time it takes to get a job done, or maximizing profits; some quantification of KPIs such that if we change our decisions, we know how those KPIs will change. And we write it all out mathematically. Once we have all that math documentation done, we get to the coding part, where we use different tools. There are a number of good commercial tools, typically not in the open-source world, that allow you to represent these mathematical models and marry them with data. I like to describe it as a big square root button on a calculator: you throw a bunch of numbers in, and you get some numbers out that give you that optimal decision.
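To illustrate what such a model looks like in code, here is a minimal sketch of a toy staffing problem with the three ingredients Irv names: decision variables, constraints, and an objective function. He notes the strong modeling tools are typically commercial; this sketch instead uses the open-source Python library PuLP, and every number in it is invented:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

shifts = ["morning", "afternoon", "night"]
demand = {"morning": 6, "afternoon": 8, "night": 4}      # workers needed (hypothetical)
cost = {"morning": 100, "afternoon": 100, "night": 140}  # cost per worker per shift

model = LpProblem("staff_scheduling", LpMinimize)

# Decision variables: how many workers to assign to each shift
x = {s: LpVariable(f"workers_{s}", lowBound=0, cat="Integer") for s in shifts}

# Objective function: minimize total labor cost
model += lpSum(cost[s] * x[s] for s in shifts)

# Constraints: meet demand on every shift...
for s in shifts:
    model += x[s] >= demand[s]
# ...and there are only 10 people available at any one time
for s in shifts:
    model += x[s] <= 10

model.solve()
print(LpStatus[model.status])
for s in shifts:
    print(s, int(x[s].value()))
print("total cost:", model.objective.value())
```

Swap in real demand, cost, and capacity data and the same three ingredients carry over unchanged; that is the sense in which the solver behaves like a big square root button.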
And the answer is provably optimal mathematically: you can say, given the way you've set up your problem, there is no better way of making those decisions, as measured by those objective functions and KPIs. Then you take those numbers and use them in some kind of application. We typically start by building an application with a user interface that allows the user to understand the answer and see why it is making those decisions, and sometimes they want to change the data. We're doing a project right now where we're scheduling a bunch of different tasks, and there's a constraint on how many tasks can be done by different people over time. You can imagine it as building a big, gigantic Gantt chart where you're not exceeding the number of resources. Well, the value of how many simultaneous things can be done is user input. So, we have a screen that lets them edit that number. They've given us some default numbers, but they can override those, change them, and see the resulting new schedule that comes out. So, the path really starts with understanding the business problem first, seeing if it is amenable to optimization, and trying to get an estimate of what will improve if we optimize. Sometimes the improvements are, as I said, minimizing costs, like in the case of the census, or maximizing profits, or more efficiency in terms of time. Sometimes the improvement is that you've automated a process that was taking people a week and can now be done in a couple of minutes. That's going to change the way they work, right? They're going to start working and thinking differently and looking at the world in a very different way, because now they can make decisions faster. So, we try to elucidate what those benefits will be upfront, then go do the steps of defining the data, defining a model, implementing it, and building it into a software application that typically has a database to store the data needed for optimization, a user interface, et cetera. And then, once people are comfortable with the user-interface type of application, we may get to put it in a black box, where it becomes an automated decision process. As an example, we worked with one e-commerce company that gets a lot of orders 24 hours a day, seven days a week. They have about 15, I think, fulfillment warehouses, and when they get an order, they have to say which warehouse should fulfill it. Customers typically order more than one product, and you want to put it all in one box, maybe two boxes, from the same place, so the customer gets the shipment on the same day. But you've got to allocate the work. You also have to figure out, well, what happens if this one warehouse doesn't have all five products the customer ordered? Is there another warehouse that does, or do I have to split the shipment? And what are the costs of shipping? If I'm in Miami and the warehouse that has all my products is in Seattle, maybe it's better to split the shipment between two different warehouses in Tennessee and Texas and get the product to me that way, because I'll get it faster. So, all of this is done with optimization.
They solve this problem every five minutes. They take a batch of 500 orders, allocate them across all their warehouses to send to the customers, and five minutes later they get another batch of 500 orders and repeat the process, all black-boxed. But they didn't get to that black box without first doing the experimentation with a user interface and looking at how the KPIs were working over time. So, I hope that's a long-winded answer to your pretty long question there, Wade.
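Here is a stylized sketch of that order-routing decision, again in Python with PuLP rather than the company's real system. The warehouses, stock levels, and costs are all made up, and a fixed per-box cost stands in for the real tradeoff between split shipments and delivery speed:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

items = ["shirt", "mug", "lamp"]
warehouses = ["TN", "TX", "WA"]
# stock[w, i] = 1 if warehouse w has item i on hand (hypothetical data)
stock = {("TN", "shirt"): 1, ("TN", "mug"): 1, ("TN", "lamp"): 0,
         ("TX", "shirt"): 1, ("TX", "mug"): 0, ("TX", "lamp"): 1,
         ("WA", "shirt"): 1, ("WA", "mug"): 1, ("WA", "lamp"): 1}
ship_cost = {"TN": 4.0, "TX": 5.0, "WA": 9.0}  # per-item cost to this customer
box_cost = 6.0                                  # fixed cost for each box opened

model = LpProblem("order_routing", LpMinimize)
assign = {(w, i): LpVariable(f"assign_{w}_{i}", cat="Binary")
          for w in warehouses for i in items}
use = {w: LpVariable(f"use_{w}", cat="Binary") for w in warehouses}

# Objective: item shipping costs plus a fixed cost per box, which
# discourages splitting the shipment unless the split is worth it
model += (lpSum(ship_cost[w] * assign[w, i] for w in warehouses for i in items)
          + lpSum(box_cost * use[w] for w in warehouses))

for i in items:
    # every item ships from exactly one warehouse that actually stocks it
    model += lpSum(assign[w, i] for w in warehouses if stock[w, i]) == 1
for w in warehouses:
    for i in items:
        model += assign[w, i] <= stock[w, i]  # can't ship what isn't in stock
        model += assign[w, i] <= use[w]       # shipping from w means opening a box there

model.solve()
for (w, i), var in assign.items():
    if var.value() > 0.5:
        print(f"{i} ships from {w}")
```

With these invented numbers, the model chooses to split the order between the two cheaper warehouses rather than ship everything from the one warehouse that stocks it all, which is exactly the tradeoff described above.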

Wade Erickson (20:15):

Yeah, exactly, that fits it well. Tell me a little bit about the testing and validation of these models and the math. I know you said the mathematics kind of proves that this is the optimum, but obviously there's some validation and testing that goes into that. What's the process there? Because a lot of times with AI, it's coming up with solutions that are not necessarily <inaudible>. There are so many different inputs that the mathematical model has to crunch. How do you validate and test for that?

Irv Lustig (20:52):

This is actually a real challenge. There are a couple of different things that we do. One thing is, when I write one of these mathematical models and define the constraints, one way we test it is we'll have somebody else on our team, who knows nothing about optimization, write what we call a solution validator. What the solution validator does is take the same set of data inputs, plus the solution that I've generated, and ask: does it satisfy all the constraints? And is the objective value, the value of the KPIs, exactly what I was getting from the optimization? This is a way of checking that one of us hasn't messed up or misunderstood the requirements. The second part, which is more difficult, is: are we solving the right business problem? Are we actually giving an answer that makes sense in the context of the business? That is typically done through multiple cycles with the experts, the people who were making the decisions before, having them evaluate the quality of the solution. So as an example, with this company we're working with right now on scheduling activities, we schedule a year's worth of activities, and they want to make sure all of them start by the end of the year. But our solution says, I can't do that; I'm going to have some that start in January. They say, well, that can't happen. Then I come back, I do some analysis, and I say to them, well, you know what, you were at capacity; there's no way I could schedule something for December, because the people who needed to do that work were at capacity. So, you have to do this analysis back and forth. It's very difficult to do automated testing the way we typically do in software, because the values can change; the numbers can change. We make sure we're doing testing by building out a big test harness with a lot of different data sets, testing all of them, and making sure no failures occur, that the answers make sense, and that the answer coming out of the model is equal to what comes from the solution validator. But doing the formal testing we typically like to do in the software development process is a real challenge, because, as I said before, it's a big square root button on a calculator: there are a lot of numbers, and changing just one or two of them can drastically change the solution that comes out. So, we do our best here to put in guardrails so that there won't be a failure. But then again, that's also why we tend to work with applications where the users are involved, and we visualize lots of parts of the solution. When they see these visuals, they're able to say, yeah, that makes sense, or, hey, why did this happen? And then we do some analysis to get an understanding of that.
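As an illustration of the validator idea, here is a minimal sketch in Python, assuming a simple version of the Gantt-chart scheduling problem described earlier; the data shapes and numbers are hypothetical. It re-checks a proposed schedule against the raw inputs without reusing any of the optimizer's code:

```python
def validate(tasks, schedule, capacity, reported_makespan):
    """tasks: {name: duration in days}; schedule: {name: start day};
    capacity: max tasks allowed to run simultaneously on any day."""
    errors = []

    # Constraint 1: every task is scheduled
    missing = set(tasks) - set(schedule)
    if missing:
        errors.append(f"unscheduled tasks: {sorted(missing)}")

    # Constraint 2: the simultaneity limit holds on every day
    horizon = max(schedule[t] + tasks[t] for t in schedule)
    for day in range(horizon):
        running = [t for t in schedule if schedule[t] <= day < schedule[t] + tasks[t]]
        if len(running) > capacity:
            errors.append(f"day {day}: {len(running)} tasks running, capacity is {capacity}")

    # KPI check: recompute the makespan and compare it with the optimizer's claim
    if horizon != reported_makespan:
        errors.append(f"reported makespan {reported_makespan}, recomputed {horizon}")

    return errors

# Example: three tasks, at most two running at once
tasks = {"A": 3, "B": 2, "C": 2}
schedule = {"A": 0, "B": 0, "C": 2}   # the optimizer's claimed solution
print(validate(tasks, schedule, capacity=2, reported_makespan=4) or "solution checks out")
```

Because the validator is written independently, a disagreement between it and the optimizer flags a misunderstanding of the requirements by one of the two authors.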

Carlos Ponce (23:52):

Great. We're coming up on time, Irv, but I have one more question, Wade, if I may, really quickly. For many of us, for most of us, if not all of us, AI is fascinating, and more so given the widespread knowledge that's out there for us to absorb, right? I'm assuming that, from the layman's perspective, it's not all milk and honey; it poses some challenges for a lot of people out there. It could be anything: it could be computational, it could be trade-offs, whatever it is. So, my question to you, Irv, is this: what sort of common challenges do AI practitioners face when incorporating optimization into their toolkit?

Irv Lustig (24:50):

Well, I think part of it is that, to some extent, one of the issues with AI is that we see movies like the movie AI, about robots controlling our lives, and we think of AI as something that's becoming human, with emotions and things like that, and it's a scary thing. But the kind of AI I'm talking about in optimization is about trying to automate a decision process: taking something that humans are doing today, or that is really very difficult for a human to do, and letting the computer do the work. It's intelligent in the sense that it's doing what a human would do, but it's doing it with really just a bunch of math behind it. So, in terms of getting adoption, what we find is really important is that I don't come out and say, I've got optimization, let me find the optimization problem and I'll solve it. I try to look at it in reverse. I try to say, let me understand your business problems, let me talk about the decisions you're making, and then see if I can apply this technology. A lot of times with AI we hear about new things; ChatGPT has become the hot buzzword, and generative AI over the last half a year. But technology is a means to an end, and I think the important thing is that we have to find that end: what is the value we're going to deliver, and where are some of the risks, right? I think a lot of what's happening in the machine learning and generative AI space presents risks that are creating a lot of fear. The nice thing in optimization is that when we work with our clients and they see the potential benefits, because they're quantifiable in terms of dollars saved or profits increased or time saved, et cetera, it's an easy argument to make. And the risks may still come in. I like to tell a story: say we're doing a schedule, and we're scheduling a whole bunch of people, and we figure out that out of a thousand people, you don't need 50 of them anymore and you can still operate your business. Well, we'll have a conversation with the HR department about what you're going to do about those 50 people, right? So, we try to really understand what the downstream impacts are. When we do our projects, we actually do a risk assessment along what we call the Princeton 20; there's information on our website and in our blogs about this. We evaluate 10 enterprise risks on the business side and 10 technical risks, factors where, if you don't mitigate the risks associated with them, you're going to fail. This is based on our 40 years of experience doing these kinds of projects. So, we try to do these risk assessments way upfront, so we can identify ahead of time the things that might occur as we get downstream, and that's why we use the Princeton 20 to help us. We think of it as developing a solution to a business problem, and for the technology, whether AI, optimization, machine learning, or ChatGPT (we're starting to investigate applications of that as well), the question is: what problem is it going to solve? By identifying the right problem and the benefits of solving it, it becomes much easier to sell the projects and make our clients comfortable with the solutions we deliver.

Carlos Ponce (28:24):

Thank you so much, Irv. Well, as we're about to wrap up, Wade, do you have any more questions for Irv?

Wade Erickson (28:33):

No, I appreciate your time. I think this is a topic that's on a lot of people's minds; they get a flurry of media around AI. And this is obviously an area, with logistics and shipping... I mean, the military has had to do all of this for a long time, because they have to move a lot of machinery and a lot of people and a lot of food in a very short period of time, so I know they have been a major funder of this kind of modeling and analysis. And then, of course, there are the big retail outfits that now have centralized warehousing while the customers are distributed. So obviously this is an area that is very mature and stable but still very much needed. We appreciate your time, and thank you for your insights for this community that watches our show.

Irv Lustig (29:35):

All right, well thank you for having me.

Carlos Ponce (29:37):

Absolutely. Thank you. And before we go, let me just make a quick announcement, please. Tomorrow we're going to be talking right here on Tech Leaders Unplugged with, let's see whom we have... no, sorry about that, we have the wrong graphic, I apologize. We're going to be speaking with Harry Michaelson, the co-founder and CTO of Halla.io, and that's tomorrow right here on Tech Leaders Unplugged. So, join us tomorrow, right here at 9:30 Pacific, and we'll see you then and there. Thank you, Irv, and thank you, Wade. Talk to you soon. Bye-bye.

 


Irv Lustig

Optimization Principal

Irv Lustig is passionate about the application of advanced analytics and data science to drive better decisions. He has been involved in the areas of sales, marketing, development, consulting, and deployment of analytics and data science-based solutions. With a deep understanding of the complete analytics and data science project life cycle, he is interested in applying his leadership, business, and communication skills to bring the benefits of analytics and data science to organizations.