AI promises to revolutionize how we interact with technology, businesses, and each other, reshaping industries, economies, and societal norms in profound ways. Join us in this insightful episode as Doctor Anand Rao, renowned AI expert from Carnegie Mellon University, delves into the vast applications of artificial intelligence across various sectors, shedding light on its transformative impact. Dr. Rao dissects the intricate facets of AI, emphasizing its pivotal role in sensing, thinking, and acting, and discusses the evolutionary journey towards autonomous intelligence. Tune in to gain invaluable insights about finance and AI.


The Future of Finance and AI – A Conversation with Anand Rao

Hi, everybody. Welcome back to the Futurist Society, where, as always, I am your host, Doctor Awesome. And as always, we will be talking in the present, but talking about the future with my very special guest, Doctor Anand Rao, distinguished professor at Carnegie Mellon University at the Center for Applied Data Science and Artificial Intelligence. And as always, I will be asking him what he thinks the future holds. So thank you so much for joining us, Anand.

I want to ask you specifically what it is that you’re doing at Carnegie Mellon and what effect you think AI is going to have in regards to the future.

Thank you for having me. I'm a distinguished professor of applied data science and artificial intelligence at Carnegie Mellon. I joined there ten months back, and I'm teaching a few courses, all focused around operationalizing AI, responsible AI, and the applications of generative AI and large language models. I'm also looking a little bit into the near future, I would say, in terms of agent-based modeling, agent architectures, and how the notion of AI is going to proceed further across all the areas where we have seen it applied: manual labor, cognitive work, creative work, and a number of other facets. I also teach executive education. I have a 25-to-30-year history in the commercial sector as the global AI leader for PwC and, before that, at the company they acquired. So a lot of industry expertise. And one of the reasons I joined CMU was to bring that industry expertise to the students and the executive education, and also to be more closely involved with public policy and where we set the policies for the country and for the globe as well.

Financial space

So I just wanted to double-click on the commercial aspect, because you said you were applying artificial intelligence at PricewaterhouseCoopers. For those who are not familiar, it's a big accounting firm, one of the Big Four. And if you're doing taxes as a business, you're probably using one of these Big Four companies. When I think of artificial intelligence, and what I'm sure most of our listeners think about, it's large language models: the ability to communicate with something that has a more human interface than what we use on Google, Yahoo, Bing, or any of these search engines. But certainly it's much more than that. So I just want to know, specifically, how is it being used in the financial space?

Yeah, great question. As you just said, artificial intelligence is much more than what we are used to today. Even before the current adoption of all these chat interfaces, whether it's ChatGPT or Gemini or any number of the models out there, AI has been around and has been used by people.

I know most people use photos on their cell phones. And over the past, I would say, five years, the cameras or the phones can recognize your face, can recognize people in your family, right? So if you start tagging a particular image with a name, then it starts going and grabbing those images and naming them. And not only that, it can actually go back to even a five-year-old photograph where you may not look exactly like you do now, but slightly different, younger. It would still capture that. All of that is artificial intelligence as well. Even if we go a little earlier, we had all different kinds of search techniques, and those were also using artificial intelligence in the background. And even before that, expert systems, knowledge-based systems, and a number of other areas like recognizing the handwriting on a check. That's been commercial since around the nineties, and it also has AI technology behind it. So AI is very pervasive, but it has not really touched users directly, I would say, other than through applications. Here we are seeing it much more directly connected.

Now, to answer your question around financial services: it's essentially being used across the spectrum. Let me start from the consumer side, the customer side. You look for, "hey, I want advice around how much to save for retirement" or "how much to save for my kids' education." You can have a financial advisor do that, but now the financial advisors themselves can use AI, or consumers can directly start using AI to try and understand, "hey, what are the different options here?" And again, we talk about the future, what it would look like five years from now, ten years from now. We can say no one can predict whether the stock market will go up or down, but there is a very rich history of the stock market; for at least 100 years we have the data. So we can start putting forward scenarios. What if there was a recession in the next ten years? How will your pot of money fare? What if there is a continuous period of growth over those ten years? How will it grow? All of those scenarios are something that consumers can explore.

Not just broad market economic scenarios; you can also look at your very specific scenarios. I have two kids and I want to put them through private college. What would it look like in 20 years' time when they go to college? What will inflation do, and how much do I need to save? All of those are things that consumers can explore, with AI essentially powering it.
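The kind of "what if" exploration described here can be sketched as a small Monte Carlo simulation. This is only a toy illustration under stated assumptions (normally distributed annual returns, a fixed yearly contribution), not how any particular advisory product actually works:

```python
import random

def simulate_savings(years, annual_contribution, mean_return, volatility,
                     trials=10_000, seed=42):
    """Monte Carlo sketch: project a savings pot under uncertain yearly returns."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        balance = 0.0
        for _ in range(years):
            # Draw one year's return from a normal distribution (a common simplification).
            r = rng.gauss(mean_return, volatility)
            balance = (balance + annual_contribution) * (1 + r)
        outcomes.append(balance)
    outcomes.sort()
    return {
        "pessimistic_10th_pct": outcomes[trials // 10],
        "median": outcomes[trials // 2],
        "optimistic_90th_pct": outcomes[9 * trials // 10],
    }

# Two "what if" scenarios: a steady-growth decade vs. a recession-prone one.
growth = simulate_savings(years=10, annual_contribution=10_000,
                          mean_return=0.07, volatility=0.15)
recession = simulate_savings(years=10, annual_contribution=10_000,
                             mean_return=0.01, volatility=0.20)
```

Instead of one prediction, each scenario yields a range of outcomes (pessimistic, median, optimistic), which is exactly the shift from pure prediction to scenario exploration that the conversation describes.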

But then you go inside the organization, whether the organization is trying to develop the financial products or trying to market them, trying to find out who is the right kind of customer. Again, AI comes in there. AI comes into the operations of companies: as you buy and sell stocks, everything behind it is being managed by AI connecting to the various systems. And currently it's software, and even the software is now being written, or at least its initial versions are being written, by AI. Earlier we used to have programmers doing it. We still have programmers, but now the programmers are using these kinds of tools to make it easier and faster for them to do the development. And then of course you go into the back end, which most people ignore. We call them boring problems, but there are cool solutions for boring problems. You can still use AI for much of the administration and the accounting: opening of accounts, closing of accounts, managing the customer relationships. In all of those things AI comes in. So it's pretty much pervasive everywhere in the financial services industry.

We still have programmers, but now the programmers are using these kinds of tools to make it easier and faster for them to do the development.

I can tell that there's a difference even in my banking experience now, like having a chatbot that greets me as soon as I enter the Bank of America website. I've seen that change. It's just like what you're talking about: the small little advancements in society, like Apple sending me a video of all of the pictures of my daughter, from when she was a baby to now. Just to get some sentimental value from me using the phone, it creates this stuff, and I just want to use the phone more, right? And when I look at that, it's pattern recognition, right? Just trying to wrap my head around it as someone who's not in the industry, I know what pattern recognition is, in all of its different facets, and how it helps our lives.

Doing a deeper dive into it: the pure raw information that the computer is processing is these CSV files, right? These comma-separated value files, just raw information separated by commas. And so the idea is that the software, looking through all this data, recognizes a pattern and creates some sort of solution or calculation or something from that data, right?
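As a toy version of that idea, here is a minimal sketch. The data and field names are made up for illustration; the "pattern" found is the simplest one possible, a linear trend fitted by least squares:

```python
import csv
import io

# A toy CSV: monthly spend data the software might scan for a pattern.
raw = """month,spend
1,100
2,110
3,120
4,130
5,140
"""

rows = list(csv.DictReader(io.StringIO(raw)))
xs = [float(r["month"]) for r in rows]
ys = [float(r["spend"]) for r in rows]

# Simplest possible "pattern recognition": fit a linear trend by least squares.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"detected trend: spend grows by {slope:.1f} per month")  # → 10.0
```

Real pattern recognition (faces in photos, handwriting on checks) uses far richer models, but the pipeline is the same shape: raw delimited data in, a learned regularity out.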


Sensing, thinking and acting

To me, I feel like that is really the lowest level of intelligence, right? Calculation is maybe one step lower; on a calculator you can do nine times nine. And when I look at someone who has a developmental disability like autism, whether really high functioning or less so, they can do calculations very fast. The next step up is pattern recognition, right? So that person with a developmental disability can still recognize, "okay, it's morning time, time to brush our teeth." They associate that time with brushing their teeth. What is the next step up from that? Is it judgment? Is it probability? Where is the next layer of artificial intelligence going to go?

Yeah, a great description there in terms of the different levels. I would just add one more layer to it. So the way, at least I think about AI, again in very simple terms, is just equated to a human. We pretty much do three things at the very, very top level. As you said, it’s sensing, thinking and acting. 

So sensing, obviously we have our sense organs. We see, we hear, we smell, we feel the touch, we speak, all of those senses. Now, computer science, or AI, has been very much, “No, no, you need to give it to me in a very specific format. You need to type in maybe in a programming language or maybe the data in some kind of a defined format. Then I’ll be able to understand.” So we are going from that type of “Give me the code the way I want it” to “I’ll talk to you. I’ll just send you a camera feed of our conversation. Go figure out the facial movement, the body movement of the two of us. Transcribe what we are saying. Even though my accent may not be a purely American accent, go translate it or transcribe it.” So all of those things it’s now performing. So that’s some of the advancements. So the sensing part, it is increasingly being able to come to where we are as opposed to us going the way it wants. 

I think what you mentioned was very much around the thinking aspects, right? In the thinking aspects, as we all know, we are very good at certain types of computation: addition, multiplication, a basic set of math. We as people are also good at certain types of logical thinking. A implies B, B implies C, therefore A implies C. So as you just said, it's sunny outside, so I don't need an umbrella. Or if it is cloudy, then I might say, if I go out and it rains, then I'll get wet. And I don't want to get wet, therefore I should take an umbrella, right? That type of very simple reasoning. Of course, it can get very complex.

Then, as you said, pattern recognition. Looking at the sky and saying, "it's not going to rain," or "it's going to rain and I'd better take my umbrella." That's recognizing at a somewhat deeper level, where we have our gut feel or our experience over time, looking at the clouds, the darkness. So that's more the perception you were talking about. But then there's also learning. One of the areas where AI has obviously seen quite a few advances over the past couple of years is learning. And the way we learn is from previous data, but we also learn from experimenting with the world, right? We do certain things, we see what happens, and then based on that, we change our behavior, which is what some people call reinforcement learning. Or we can learn just by seeing a large number of those examples.

Now, where we are going is this: in the relative past, 5-10 years back, most of the technology needed good historical data and would essentially project forward based on that historical information. In other words, based on the past, I'm projecting the future. And yes, that is an important technique and an important skill to have. So the AI was doing more of that, what are called predictive models; you wanted to make sure your prediction holds, based on the history. Where we are going now is not just predicting, but imagining different scenarios. Being able to do what-if analysis. Being able to say, "today it's like this, but when I take action, something else will happen and I need to take that into account." Again, there are various types of causality that we are bringing in. We can say, hey, this is caused by that. Let's look at the root causes and keep going down to really identify how this chain of reasoning works, and then essentially develop those causal loops.

Now, further on, with ChatGPT and all the bots we are hearing about, there is this capability of generating something new. Everyone is excited about this generation: generation of text, speech, images. Because that is something we have always viewed as, "hey, that's more human, to create something." The AI can always come back and, based on a history, do something, but generating is something more creative than normal. So now that excitement is a little bit around that creative part.

Of course, AI is all of these; it can essentially combine all of these different elements. So that's where we are seeing the thinking actually evolve, from very simple rule-based systems, to prediction based on history, to scenario modeling and what-if analysis based around causality. The thinking itself is expanding into very complex systems, as they call it.

And then the last one I don't want to miss out on is action, right? On the action part, we have always been somewhat more hesitant to let the AI go and act on its own. We have always been in a "you tell me, and I'll go and do the transaction" mode. In terms of advice, for example: you tell me what stocks to buy or sell, and then I'll go and push the trade button. The buying or selling, I don't want you to do it. But of course, now there is automated trading, where we are feeling more and more comfortable and confident in having some of those transactions go through. Not just in trading, but also: let the AI send the email, let the AI take those actions. So we are devolving some of the actions to AI, with autonomous vehicles being a great example. Even though we don't have full-fledged autonomy, we have all experienced it in our cars. We can take our hands off the steering wheel and it just follows the road. So we are able to trust enough, at least, to give that level of control back to the machine in terms of directly acting, rather than us being the mediators.

Honestly, I don’t think that everybody trusts their vehicle the same way that you might. And I think that the action part, just like what you said, there is this tendency for much of the population to think, I want to be in between the action and the reasoning. When I think about that, I think of that as like, that’s judgment, right? The human has to judge whether this action is beneficial or not beneficial. Whether it’s autonomous vehicles, whether it’s trading, whatever it is.

Some of those things that you're talking about outsource that judgment to the AI. Just like full self-driving: you're really just letting the AI take the wheel, so the judgment is completely on the car. And I think that scares a lot of people, right? I don't have a Tesla, but I rented one when I went on vacation, and we tried full self-driving. It was just unsettling to my wife to really let go like that.

What would you say to those people? Because personally, I don't have a problem with it. For me, it was a really nice experience to let go, and I think it would be more beneficial for society if we could be more comfortable with AI making those judgments. But beyond what you would say to that person, when is our ChatGPT moment going to come for the judgment portion? When are we going to say, okay, actually, it is safer to drive with everybody on full self-driving? When is that cultural change going to happen?

Four types of AI

I think the types of tasks it is handling vary by the level of interaction, if you like, between the human and the machine. We actually have four types of AI. Think of it as a quadrant: on one side is automated intelligence, next is assisted intelligence, then below that augmented intelligence, and then autonomous intelligence. Let me try and explain all four.

Now, when the task is very simple, very repetitive, one-off, so mundane that it is more of a chore for the human to keep doing it... I'll give you an example. Even a few years back, you would fill in your insurance application or your DMV application with your name and your Social Security number, all of those fields, and then someone would be keying it into whatever other system. That's a mundane kind of job. Of course, we do employ people and pay them, but it's a boring task.

Now most of those things are being automated, copying from one system to the other, because there is a way of verifying the ground truth. Someone can look at the application form, look at what has been entered in the system, and say, "yes, that is correct."

Now, the more that happens, there is no human actually testing it, checking it, because the system is automatically doing it. That's automated intelligence, and you do that only for very specific, very simple tasks. And that also happened over time; it didn't happen immediately. We didn't just go and install the first system. Over a period of time, we have perfected it. Those are the things that go into automated intelligence. The level of human involvement is pretty close to zero; the AI is basically doing most of it. And for very simple tasks, it is automated (note the distinction between automated and autonomous).

Now, most of the way we have used AI until a few years back, at least until a decade back, has been what I would call assisted intelligence. In other words, there is a lot of data there. In the medical profession, for example, there's a great body of knowledge, a great body of data around the individual, around the drugs, and so on. And essentially, the system is analyzing it and, as you said, gives it to the physician or the radiographer, who then looks at it, says "yes," uses their judgment, and takes whatever actions they need to take, whether in the medical or the financial domain. That is assisted intelligence: you are being assisted. The AI is, if you like, more like a slave for you. It is doing the task, but you are in complete control of what it is doing. So that is assisted intelligence.

Augmented intelligence

To answer your question, we are now delving deeper into augmented intelligence. Augmented intelligence, as we mean it, involves AI learning from us. It’s not merely static, consuming all the data and then producing an output, right? It’s akin to the previous model, but now it’s also attempting to make judgments. It tries to infer how you make judgments, so it can actually recommend things to you, and you still take the actions. Now, it’s actively calculating: Hey, how many times did humans accept my recommendation? And how many times did they override my recommendation? Why did they override it? Am I making errors? Can I adjust? So it actively learns from humans. So in that sense, we’re transitioning from AI being a slave to potentially being a coworker. Now, I can consult with it; I can brainstorm. I’m not sure what exactly to do here. Provide me with three alternatives for how I should approach this particular problem. It comes up with suggestions for you, right? So we’re all thinking, and in that sense, it’s assisting us, and indeed, we are assisting it as well, right? 

Based on the decisions made, it learns from us, improving itself, and we learn from it, improving ourselves as well, right? That’s the phase of augmented intelligence we’re in, a much closer collaboration between humans and AI. So now, both working together, we move into autonomous agents, where some tasks can be entirely autonomous, fitting into the category where we’re comfortable enough with AI interaction to trust it. This trust develops at various levels, both individually and technologically. Something very new, few would trust, including researchers, I’d say. We want to thoroughly test and validate the model before deploying it. 
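The accept-or-override feedback loop described here can be sketched in a few lines. Everything in this sketch (the class name, the thresholds, the adjustment rule) is an illustrative assumption of my own, not any real product's API:

```python
class AugmentedAssistant:
    """Toy sketch of the feedback loop: track how often a human accepts or
    overrides recommendations, and adjust how boldly the system recommends."""

    def __init__(self, confidence_threshold=0.5):
        self.confidence_threshold = confidence_threshold
        self.accepted = 0
        self.overridden = 0

    def record_feedback(self, accepted: bool):
        if accepted:
            self.accepted += 1
        else:
            self.overridden += 1
        self._adjust()

    def _adjust(self):
        total = self.accepted + self.overridden
        if total < 5:
            return  # not enough evidence yet
        override_rate = self.overridden / total
        # If humans keep overriding, only surface higher-confidence recommendations.
        if override_rate > 0.5:
            self.confidence_threshold = min(0.95, self.confidence_threshold + 0.05)
        elif override_rate < 0.2:
            self.confidence_threshold = max(0.3, self.confidence_threshold - 0.05)

    def should_recommend(self, confidence: float) -> bool:
        return confidence >= self.confidence_threshold

assistant = AugmentedAssistant()
for accepted in [False, False, False, True, False, False]:
    assistant.record_feedback(accepted)
# After repeated overrides, the bar for making a recommendation has gone up.
```

The point of the sketch is only the loop itself: the human's accept/override decisions become training signal, so the system and the human shape each other's behavior over time.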

There’s a whole range of things we need to ensure the AI or the machine can do, but over time, we gain confidence. Again, this confidence might not develop simultaneously for everyone, but as you just mentioned, I’ve had a similar experience with my Tesla, and my wife is quite apprehensive. She insists on not using the auto parking feature. It maneuvers quickly, swiveling the wheel back and forth, squeezing into tight spaces much better than I could manually. But it’s still a bit nerve-wracking.

So what happens if it hits? It gets so close to the other vehicle, but it still manages it. Again, the first time, I was scared too. But I decided to try a couple more times, and then it worked. So I still use it, but she still finds it very scary. Now, that's a process that needs to occur. As more people see others doing it, they gain more confidence. I talk to you, and you've been doing it: "oh, it's no big deal, I do it all the time." Then you find a little more courage to try it once. And of course, the crucial thing here is that the AI shouldn't fail at any point. If it fails, if it gets too close, or if it hits the other vehicle, then I'm not going to use it. So there's a whole host of things around that trust between us and the machine that need to be managed. And then, based on that, we might move towards autonomy. My own view is that this should be left to individuals and to society. There are things that AI can do; maybe there are things that AI could potentially do in the future. But just because we can do it doesn't mean that it has to be done by everyone, or that it has to be integrated into products where there's no choice, right? Because people are very different, and they shouldn't be compelled to relinquish that judgment. Some might still prefer to exercise their judgment, doing it themselves rather than relying on the machine. So just because we can do it technologically doesn't mean that we should make it happen or that it will be socially accepted. Social acceptance is very different from technological capability, so we should try to distinguish the two.

There are things that AI can do; maybe there are things that AI could potentially do in the future.

Yeah, I mean, I do think that social acceptance is different, but I think back to when ChatGPT first came out, right? A friend of mine texted it to me, and it just blew my mind. The idea that I could say, "hey, write an email to my staff to remind them to clean out the fridge every week," and it writes this beautiful email that would have taken me ten minutes, and it took me 10 seconds, right? I was like, "oh my God, I can think of so many use cases for this." And because it was low-stakes, it wasn't really that big of a deal, I think it spread throughout the entire world very quickly. Within the span of three months, everybody knew what ChatGPT was. Before that, they had no idea.

And I think that was a really significant moment. I look back at other technological revolutions, like when the iPod came out, right? Steve Jobs unveils the iPod, everybody thinks it's amazing, and very quickly everybody's on digital audio rather than traditional CDs. What do you think that moment is going to look like for artificial intelligence when it comes to trust? When do you think it will happen, or how is it going to come about? Do you think it's going to come from robots in the household? Do you think it's going to come from full self-driving? Based on your experience, where do you think it's going to come from?

Cognitive and creative

So I would say that will probably come more in areas where there is no physical action, areas that are much more cognitive and creative, just as you said, the ChatGPT side of things, being able to see something new. And I think some of those moments are already happening. This is what happens with the human mind: the first time you see something, there's a wow element, but then there are significant improvements that happen along the way. It looks similar, and that same wow moment is not there.

So, I'll give you an example. When chat first came out, people said, "oh, this is great, but hey, you can't really do images." Then images were there, but they said, "oh, you can't do any videos and animation." Now it's possible, right? There are so many realistic videos being shot by AI. You just have to describe a scene, and it does it. Or you describe something in text, and it actually creates a game, which is mind-blowing. These are the kinds of moments where it's advancing. Programming, for example, is not yet at the level where it can fully put together a complete system, but if you look at websites being automated, it's happening. PowerPoint slides are still in the early days, but being able to converse with something and say, "hey, put together a presentation for me. I want it in this format. Cover these areas. Show me three alternative diagrams on how I can illustrate this," is not too far away. It comes down to being able to articulate your thoughts through conversation, interactively. I don't think the AI will perfectly know what we want, and we may not even articulate perfectly what we want. As we've all seen when interacting with other humans, like consultants who write a lot of presentations, partners can articulate the gist of it, but unless you really start seeing the material, adjustments are needed. Making it more visual happens interactively. I think AI is getting there, becoming very interactive, and even presenting entire movies. Some people have started doing that with 60-second and 90-second clips. That's where we're headed. These are the areas where I think trust will come in.

Looking at the negative side, what if it doesn't produce what we desired? There's always a fallback where we can say, "no, that's not right, change it to this." There's more collaboration; we'll start trusting the machine, rather than a complete devolution of our judgment, which will come only after these steps. So, to answer your question, it may not be so much in action-oriented things like autonomous vehicles or flying drones, but more in cognitive and creative tasks where we'll start developing trust. That trust will transfer to some extent to other areas, but there are other ways of testing physical things. The physical harm they can cause is seen as less socially acceptable than psychological harm, although psychological harm is also significant.


I think that the physical is going to come first. Personally, I think that the last vestige of humanity is social interaction, interacting with other people. There are really good chatbots out there that can help me at the bank, but I prefer to talk to somebody; that's just me. Or when I'm booking an airline ticket, I'm sure the technology is capable right now, but I prefer to call a person on the phone. Even having a conversation like this, realistically, I could be having it with a chatbot trained on all of the previous lectures you've given at Carnegie Mellon, on all of your audio and video, one that sounds like you, talks like you, and looks like you. And I'd still prefer to talk to an actual human. For whatever reason, I think that's something humanity has deeply ingrained in it.

But for the common mundane tasks that people hate to do, like laundry or washing the dishes, if we can get a consumer robot in the house with low-stakes tasks like watering flowers, I think that would be a game-changer, and it would change people's lives. And I think that's really what people are hoping for from technology. When I'm in the market for a new car now, I'm thinking I want something with full self-driving, because then I can watch a movie on the way to work, or I have that time back for myself. That technology, I don't think, is there now. Or maybe it is, and I just don't know about it as well as I should. But I think that once we get something like that, it will change all of us.

Yeah. So the other way to look at it is this: if you take something that a human is doing and get to 90, 95% of our capability, I wouldn't say it's easy, but it is relatively easier to get the AI or a robot to do it. It is those last 3-5% that take a long time. It's a curve; it rises quickly but doesn't quite reach a hundred. It asymptotes there for a long time. So even for physical things, as you just said, a physical robot or an autonomous vehicle or a dishwasher, what is our tolerance for those 2-3% errors? If we are willing to accept that, yeah, sometimes it might get it wrong, that's fine, then I think you can get to that 93, 95%. Getting to those last few percentage points is going to be much harder for AI, robotics, and others. Again, as you said, there are things that are coming, but it's the last mile. And consider physical harm: a dishwasher is not moving around, so it is unlikely to cause physical injury. But if a robot is moving around, even if it is only 2% of the time that it might bump against a cat or a baby, you don't want that to happen. So that's where I think the hesitation is, and the trust again: whether it could cause physical harm or not in those 2-3% of cases.

Would you trust a robot in your house? 

I would trust a robot only for very mechanical things. I got an iRobot once, a Roomba. But then again, it was not up to the mark. It can do certain things well and certain things not so well. But obviously, it's not dangerous or anything, so safety-wise it was good.

But the Roomba, for example, cleans 98% of the floor. And I think it's nice that it turns on by itself when I'm not home and comes back to its dock. I think that's a great product, and it really was eye-opening to a lot of people about having that in the household. And I feel like when that kind of service is available for something a little more difficult, people will realize, "okay, if it can wash the dishes, it can water my plants. If it can water my plants, it can do my laundry. And if it can do my laundry, it can do this." And that's when we start getting into that exponential.

Watering plants or feeding my cat: those are things that matter if you're going out. Planning to go on vacation is one of the big problems: who's going to water the plants? And that, as you say, is fairly trivial. Even if you don't want to overwater, you can easily have a sensor that dips into the soil, senses the wetness of the soil, and then dispenses water accordingly. So one would think it's easy to do, but still, we don't have the scale of adoption of some of the other technologies.

So with your family members who maybe don’t trust AI and might feel a little bit hesitant, what do you say to them?

What I try to do is use it myself a couple of times and then try to get them comfortable, so that you can remove that fear. So that's one. And then I think every individual is somewhat different too, right? Some people naturally adapt to certain things, and some people don't. And I also think sometimes it is generational. To give you an example, I've got my cell phone here, and still today, I go in and type, right? Of course, my kids, they're not into AI, but they would say, "Dad, you're supposed to be into AI; why are you typing instead of speaking? Can't you just speak, and it'll become text?"

I type too, I’m guilty of that.

So it's not that I'm fearful; it's just that you get used to it, and it becomes a habit that you don't want to change, or don't change, even though you know it can do everything perfectly. Earlier, at least, it might have made mistakes, but I've tried it a few times, and it's always come out right. The natural inclination just isn't there. So that's where I think adoption stands: we need to let the adoption happen, not really force it, right? We can encourage adoption by doing various things, such as removing fear, educating people, and showing them how to do it. But at the end of the day, again, we are humans, and rightly or wrongly, we are at the apex of the pyramid there.

I totally agree with the typing comment, because I'm sure that if I were to talk into my phone, the voice-to-text would be more accurate than the way that I would type it. Because if I'm typing and I misspell something, I have to delete a couple of letters, and that whole process probably takes longer. But for whatever reason, even though I know it's more efficient, I haven't made the jump yet. It's just a matter of habit, right? And I'm an early adopter; I love technology. It's just that, for whatever reason, I haven't made the jump yet, even though I should. But I think you're right that it's going to happen organically, on each person's individual timetable. I just want AI to not be feared as much. Everybody thinks of AI as this existential threat, and to a certain extent it is. But I feel like it's also similar to, have you seen the movie Oppenheimer? It's similar to the atomic bomb. They were considering the idea of the bomb, and they said that there's a really small percentage chance that it could ignite the entire atmosphere.

And they still went ahead with it. I think it's similar to that. The likelihood of an existential threat is so low, but we should still be cognizant of it, because any existential threat is something we should take very seriously. But I would really like to have a robot that washes my dishes, and I think that's not going to happen unless society gets on board and investment goes in the right direction. The last thing I wanted to talk about with you is that so much of what we discussed was commercial applications, and I don't think people see enough of the non-commercial, public-sector side. When we were talking earlier, you said that you personally have used artificial intelligence for nonprofits in India. Tell me a little bit about that, about the beneficent side of artificial intelligence.

AI in India

I think AI is making inroads in pretty much every area, including, as you said, the public sector and the social sector. Again, I'm part of an institution called the Indian Strategic Development Management Center, which focuses on building leadership qualities in not-for-profits, or what they call social purpose organizations (SPOs). As part of that, we have created a center for decision sciences for social impact and data science for social impact. The data science for social impact question is: can we take all the things that we are doing in analytics and AI but target them much more at the common people, at the social sector?

I'll give you a couple of examples. One example: a few people have joined together; they are themselves blind, and they want to serve blind people by providing educational material. Now, those of us who can see don't realize this: there are so many books, websites, and materials that we can see and digest, but only a limited quantity is in Braille or has been transcribed. So what this group is doing, literally just a few people, is planning to take all of that material, various types of benefits material, job prospects, as well as very specific courses, and enable it through voice. And not only through voice, but through multiple languages in India. That's the other thing: it can now be translated into any Indian language, and everyone has a phone, so it can be delivered through the phone. Think of the scale of that transformation across so many languages.

That's for differently abled people. So that's just one example. Another is where people are looking at migrant workers: how can we help them get jobs and learn the right skills? AI is helping to do the matching between the learners and the earners, and then determining what they should be learning so that they can get placed. Agriculture is another. Some farms are piloting certain natural farming and organic farming techniques, but they do it just within their own little plot. Now, can you take that and spread it across the country, or even across the world?

So again, there are various ways in which you can do that with AI. Part of it is the new large language models, and part of it is more traditional technology. Combining all of those things, I think there's immense potential. And AI is also being used for the planet, right? Trying to monitor species biodiversity, for example. If you look at what are called the SDGs, the United Nations' Sustainable Development Goals, each one of them is being addressed. It's looking at climate change; it's looking at poaching, for example. Images of poachers are being tracked everywhere in real time, and enforcement officers can go in and prevent them before they actually kill these elephants for their tusks. This kind of technology is permeating everywhere, largely because it has been made easier to use, so more common people can use it; you don't need technical people. That's the democratization. Of course, the technology and the devices are there, and with cell phones we are pushing intelligence into all of these things. In that sense, it's really great to see all of this coming together for the upliftment of more than just mankind, but the planet as a whole: going beyond the parochial aims of the human species to other species, and making it a better planet.

So how did those people in India, especially if they're in these kinds of edge cases, create these apps? Because when I look at artificial intelligence, I feel like there's a really high barrier to entry. I don't know enough about it to just build it myself, and I don't think the technology is user-friendly enough yet that a layperson can make something like that. So how did the people working for these nonprofits in India partner with your organization? Do you have developers?

That's right. So again, these things are still happening; I don't want to give the impression that it's all there. Yes, you're absolutely right. These social purpose organizations are not-for-profit. They don't understand data, let alone AI; they're fearful of data. They just want to go and do their things on their farm or with the community that they are serving. So they don't really use gadgets or data-driven decision-making; all of those buzzwords are only for people like us.

At the same time, in a country like India, there is so much pent-up demand and eagerness to help out in some of these organizations. India has a huge number of programmers serving Western countries, doing all the coding and business process management, but they also feel they need to give something back to the country. So there's this demand: "I want to do something." And then, of course, there are people with funding, both here in the US and around the globe, saying, "Hey, I want to use my money for something useful."

So now what we are doing at the center is trying to bring all these groups together. There are people with money, there are people with problems, and there are people who can potentially develop the solution. Can we essentially bring the three together and be the facilitators of all of this? Let the funding come from whatever philanthropic sources, but let's match the problems with the people who can create the solutions and bring them together. It's essentially like funding a startup, except this is funding a social purpose organization to get started.

And once it gets started, there'll be more momentum, and some of them could become commercial. There's nothing to say it always needs to be a social purpose organization; maybe there are things that other people will fund. So can you provide that initial seed funding and have other organizations jump in as well, to make it much more self-sustaining? We're just in the early days of that journey, trying to bring all of those things together and create a platform. Again, there are questions: Where do I do it? Where do I get the data? Data is always an issue for AI, and then, "I don't have the skills to build all of these models." Can you open-source it? Can we have people develop those things in an open manner, a kind of Hugging Face for social purpose organizations? Anyone who wants to can share their code and share their data, and over time things will get built up by the community, and it becomes a community resource. So that's the vision for it. It's just early days: around seven organizations have been funded in this way for the first year, and it will grow from there.

AI to benefit society

I just wanted to highlight the fact that there are lots of use cases for AI to benefit society. When people think about AI, they just think about it in the commercial aspect; they don't think about it in the pure computational aspect. There simply aren't enough people to convert all of the Internet into Braille; it's too much information. So you have to have automated processes that do that for these people, because otherwise they're not going to have the same kind of access that you and I do. That's a very significant use case that you brought up that I hadn't even thought about. And I think a lot of people don't realize how AI can lift up humanity in different ways; they just look at it as a commercial product, similar to having a Roomba. So I wonder if that would be a good opportunity for people to gain more trust in artificial intelligence, because, as we highlighted earlier, that trust is lacking. But the last thing I wanted to ask you before we get into our final questions is: Who do you think is doing it the best? You have all of these different companies: Meta, which just released Llama; OpenAI, which is closely tied to Microsoft now; Google has their own engine; even full self-driving with Tesla, Grok, or whatever. There are all these different companies doing it. You're not in the commercial sector anymore, so you probably have a little bit more freedom to speak your mind. Who do you think is doing it the best?

Yeah, so I don't think there is one best answer. What I think is encouraging, and this has happened over the past year and a half, is that initially there was a scare that the whole technology might belong to only three or four companies, more of an oligopoly of a few major players.

But with open source taking much more prominence now, I think we do see that there is a rich array of different providers all working on it. And then we are also moving on from just one type of architecture, the transformer architecture, to a couple of other types of architecture as well. There is more diversity coming.

Yes, there are still challenges, and I don't want to brush them off. Many of the large language models are still highly compute-intensive, right? It's still not within the reach of a common person to just pull something together and go get the computation. It is still very expensive at the scale that we are talking about.

Now, that expense means only commercial players can do it. Then the question arises: what about academic institutions and other sectors that are not really commercial, like the public sector, the NSF, DARPA, and so on? And that's where I think more effort is being put into creating these national centers with more of that compute capability, which can be much more open, not under the typical shareholder purview.

So in that sense, instead of thinking about who really is the best, I think of it more as creating the right ecosystem for players to operate in different ways. Different kinds of technologies and different types of solutions can come through, and the market will decide what is best. Again, there is no single best.

And you actually see this happening now: startups aren't building everything from scratch. If you want to be fast, you're better off going with one of the commercial versions, whichever one it is. Of course, you'll be paying per API call, which can turn out to be quite expensive depending on what you do. But once you have shown the proof of concept, if your compute spend is still a big number, you shift to an open-source model later on, once you have proven it. And again, venture capitalists are willing to fund that, saying, "Hey, now we can go and make it much more proprietary."

So I think people will find the solution. People are looking at the large language models of 50, 70, or 100 billion parameters, or even a trillion parameters, but also at bringing that down in size. Yes, it's not perfect; it has 80-90% accuracy, but in a much smaller footprint. Can I run it on my cell phone? Yeah, I don't mind the 90%; it depends on the task, right? If it's not something critical, I can do it.

So instead of one single best, we have a collection of different players and a collection of different use cases emerging, for which different things are better. That's a much healthier ecosystem, I think.

AI in industry

I totally agree. I really worry when a lot of the technology is concentrated in just a handful of companies. Do you think that the open-source environment and these smaller companies are able to compete with companies like Facebook and OpenAI? Or are the larger companies going to be the industry standard, with the smaller companies handling use cases for smaller projects?

Yeah. So the way I think about it is somewhat like the telco industry, the telecommunications industry, right? Now, of course, we think these big LLM providers are behemoths, that they are so great. But I think it'll turn out to be just a utility layer. Multimodal models, maybe multimodal models for very specific industry sectors, will almost become the base, just as we have our data pipe. Not that there is no advancement there; as we all know, there was 2G, then 3G, then 4G, and now you have 5G. So there will be advances. But if you look at the overall ecosystem, there are so many manufacturers and many apps on top of the various phones, and so many other devices. It's not just phones; it's sensors, the IoT, the industrial IoT, and a whole host of other things, all relying on the underlying telecom infrastructure. Of course, there is wireless as well, competing with the wireline, and the telco operators are players in all of them. So you'd start thinking about the LLM operators, if you like, as that base-level utility on which people are going to build very specific things. For my autonomous vehicle, I need this; for my medical diagnosis, I need this; but it'll sit on top of an LLM, and there could be multiple LLMs. And as the LLMs improve under the hood, everything built on them gets better.

So large language models are being used in industries like full self-driving? That's completely... okay, I didn't know that. That's interesting.

So for various things, there are, again, large language models. But for full self-driving, is it a ChatGPT-type large language model that is allowing it to sense the environment?

It may not be the technology running it directly. But the reason I'm saying that the large language model could play a role there is that the software engineering world is completely changing. Programmers are now working with these large language or code-generation models that are actually writing the code. So even your autonomous vehicle would, at some point, have code written by software engineers working with these kinds of code generators. It doesn't matter where: if there is software, if there's a program, then it's probably going to be powered sometime in the future by these code models.

So it's nice to hear that it's not going to be concentrated in just a few big companies. That's something people also fear, this idea of industry having an outsized role in their daily lives, and it's nice to hear that it won't. Thank you so much for speaking with us. We're coming toward the end of our time. I do want to ask you the same three questions that I ask all of my guests, just to gain a better understanding of people like yourself who are doing these amazing things: where they're coming from, what they think the future holds, that kind of stuff. Specifically, I always talk about my inspiration, which is science fiction. When I think about the future, I think back to utopian science fiction visions like Isaac Asimov or even Star Trek. Where do you get your inspiration from?

Unfortunately, it's the same one, so it's not very original.

Not unfortunately, fortunately! I'm glad we share it.

I've always been a big fan of Asimov and the whole notion of psychohistory, being able to look at the future of civilizations. I think that's fascinating. My area of interest is actually simulation modeling, or what we call agent-based modeling: modeling the behaviors of people. And that's even more challenging than just the cognitive side. How do you model behavior? What's fascinating with respect to the psychohistory comment is that, while we are not projecting future civilizations, we got a taste of some of that during the pandemic. We did a lot of modeling around the pandemic. Yes, it was a very difficult situation for the entire world over those two or three years, but it was fascinating to watch the different behaviors and try to project some of them based on individual preferences, government interventions, and, of course, the virus itself mutating and morphing, all combined together, trying to see where the outbreak would be and how we would handle it depending on our own social behavior. It was a fascinating study, and I think we might get there eventually. We potentially might be able to do what Asimov thought of.
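To give a flavor of what agent-based modeling means here, the sketch below is a toy, uncalibrated epidemic simulation (every parameter is invented for illustration, not drawn from Dr. Rao's actual pandemic work): each agent has its own "caution" level that cuts its contacts, and the aggregate outbreak emerges from those individual behaviors.

```python
# Toy agent-based epidemic model: individual behavior (each agent's
# caution level) shapes the aggregate outbreak. All parameters are
# illustrative, not calibrated to any real disease.
import random

def simulate(n_agents=500, steps=60, base_contacts=8,
             p_transmit=0.05, p_recover=0.1, seed=1):
    random.seed(seed)
    # Each agent's caution: the fraction by which it reduces contacts.
    caution = [random.random() for _ in range(n_agents)]
    state = ["S"] * n_agents  # S = susceptible, I = infected, R = recovered
    state[0] = "I"            # one initial infection
    for _ in range(steps):
        infected = [i for i, s in enumerate(state) if s == "I"]
        for i in infected:
            contacts = int(base_contacts * (1 - caution[i]))
            for _ in range(contacts):
                j = random.randrange(n_agents)
                if state[j] == "S" and random.random() < p_transmit:
                    state[j] = "I"
            if random.random() < p_recover:
                state[i] = "R"
    return state.count("S"), state.count("I"), state.count("R")
```

Raising the average caution level flattens the outbreak, which is the kind of intervention-versus-behavior question pandemic modelers actually studied, just at vastly larger scale and with far richer agent rules.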

I look at companies like Cambridge Analytica that study human behavior, and all these other companies, and I think they like to treat us as just a huge mass of animals. We have patterns, whether we like to admit it or not. We like to think of ourselves as individuals, but there is a certain component of humanity that is humanity at large. So I do like a lot of the science fiction that we talked about, because if we grow and continue to expand throughout the universe, there has to be a lot more help from artificial intelligence, and protecting us as a species would be a really good use case for it. So, next question: I know that you are very well versed in artificial intelligence and all its software capabilities, but outside of your field, what kind of technology really excites you when you're reading about it? You're reading through the paper and you see something coming up about genetics, or who knows what, something that you can't get enough of, a side interest. I can tell you, for me, it's artificial intelligence and where robotics are going. Specifically, I can't wait to have a humanoid robot that can do my laundry for me, and every time I see a new robot video on YouTube, I have to click on it. So what about you? What do you like to learn about?

If I were about to start school again, I would probably delve into CRISPR-Cas9, genetic engineering, the whole area around the convergence of biology, physics, and chemistry, and the study of molecules as life. The distinction between what is a mere chemical and when it becomes truly biological, something living, is a fascinating area to me. This also ties into intelligence. I'm also very interested in consciousness. Part of the reason for this interest, along with my fascination with AI, is the concept of consciousness itself. From a philosophical perspective, Indian philosophy offers a unique way of viewing consciousness as pervasive, with the belief that there is just one consciousness from which we all originate. This principle brings everything together: not just humans, but the planet and the universe, all arising from consciousness. So the notion of consciousness, along with genetic engineering, or even the creation of life, holds great appeal for me.

Right. We're literally doing that right now. I mean, I'm in Cambridge, Massachusetts, which is the center of biotech; Moderna's headquarters are literally two blocks away. What they're doing is amazing. I mean, we literally cured sickle cell disease, a genetic disorder that plagues so many people. Those kinds of genetic diseases could be a thing of the past, which would be really interesting to see.

So last question. Where do you see AI in ten years? What does the future look like for humanity and for artificial intelligence? Let’s just say in the home or, at large, whichever you would prefer in regard to whatever environment. What is artificial intelligence going to look like in ten years for your environment of choice?

Yeah, I think we will have... I see the current excitement as similar to the 1998-2000 excitement around the Internet. Hey, we can do things; we can do commerce; we can do all of that. Then obviously we had the boom and the bust, but after that, it still essentially became a mainstay. It didn't completely disappear, as some people predicted. I was obviously there at that time, and people were saying, "No, no, now the intermediaries will be completely taken out," whether it's insurance or banking; the Internet will take over. It didn't happen. The advisors are still there. As you just said, we still want to talk to people, especially about health and finance. In both cases, especially if it's a lot of money, or something very critical, or something about health, we want to talk to people. But it's not that we don't have the Internet; we still have the Internet. I can go and do my transactions at midnight, so I don't have to call anyone. In that sense, it has become part of our lives.

So ten years from now, what do I think AI will be? We wouldn't be talking about AI as such. Maybe we'll be talking about some very specific advances within AI. Maybe we will not even be calling it generative AI; we might call it something else, artificial life or intelligence in an artificial environment of some kind. But I think AI will become more and more a part of what we are doing, both on the consumer side and on the enterprise side. People will take it for granted. Yes, there are some AIs working there; just make sure they're working properly by testing them. I know people get very worried about the hallucinations, the wrong information, or the lack of references. All of those are challenges, yes, and I don't want to deny them, but they will all be fixed. Various solutions are already emerging; these are just initial teething problems. We'll get over them using various mechanisms, not always large language models: you might combine them with old technology, some with new technology, and have humans validate the output. So very quickly we'll move on from there, and that might become, as I said, the utility layer, and we'll probably be focusing on more important things. You talked about existential threats; I don't quite believe in this singularity notion or an AI superintelligence that is outside of our control. I don't believe that kind of future is what everyone is working toward. We would be assimilating AI, using more and more of it. Would we lose some of the skills for doing math or some of that reasoning? Probably, just as some of us have lost the multiplication tables and need to go look them up. So yes, we might lose some of those skills, but we'll be internalizing and growing with the AI, as opposed to AI being somewhat separate and then challenging us.
So I don’t see that kind of future scenario.

Yeah. More of a symbiotic relationship. 

A symbiotic relationship, yes.


Yeah. Cool. Well, I hope that we get there. Thank you so much for painting that picture for us, as well as for sharing your wisdom and your insight. I really appreciate having you on the show, and I’m sure that all of our listeners do as well. And to those listeners, please like and subscribe. As always, we really appreciate your engagement as well as your downloads. So those of you guys who listen on a regular basis, we will see you in the future. Have a great one, everybody.

Thank you very much. Thanks for having me.

About Anand Rao

Anand Rao, a Distinguished Services Professor at Carnegie Mellon University, specializes in applied data science and AI, with a focus on their business and societal applications over his extensive career spanning consulting, industry, and academia. With prior roles as Global AI Leader at PwC and Chief Research Scientist at the Australian Artificial Intelligence Institute, Anand brings a wealth of expertise in AI, management consulting, and research. His current research interests include operationalizing AI, responsible AI, systems thinking, and behavioral economics. Anand holds a PhD from the University of Sydney and an MBA from Melbourne Business School. He has received numerous awards and recognitions for his contributions to consulting and AI research and serves on several advisory boards, including those of Oxford University’s Institute for Ethics in AI and the World Economic Forum’s Global AI Council.


By: The Futurist Society