The continuous rise of disruptive technology is changing the world as we know it. As artificial intelligence, robotics, blockchain, and more weave themselves deeper into our daily lives, we must learn how to live with them. But are they truly beneficial to humans, or do they bring more harm than good? Doctor Awesome explores this question with Dr. Subodha Kumar, Founding Director of the Fox School of Business’ Center for Business Analytics and Disruptive Technologies. Together, they look into the impact of these technologies on society, the regulations needed to keep them in check, and how to ensure accountability and transparency when using them. Dr. Subodha also talks about consumer robotics and its promise to take over the menial tasks that consume a huge chunk of our time.

Watch the episode here

 

Listen to the podcast here

 

The Future Of Everything – A Conversation With Dr. Subodha Kumar

We are broadcasting from the present and talking about the future with Dr. Subodha Kumar, a distinguished professor in Philadelphia with a long and wide history of building the future. Dr. Kumar, why don’t you introduce yourself and tell us a little bit about the work that you’ve done? The most important pieces for us are your work in artificial intelligence and robotics. I’ll let you fill that in. Tell us a little bit about yourself.

I’m Subodha Kumar. I’m a faculty member at the Fox School of Business at Temple University. I’m the Paul R. Anderson Distinguished Chair Professor. I’m also the Founding Director of the Center for Business Analytics and Disruptive Technologies. Through this center, we work with a variety of companies in different domains, a lot in healthcare and finance but also retailing, aerospace manufacturing, and so on. We try to see where the pain points are and how AI-based solutions can help solve them. That could be improving efficiency in healthcare, addressing biases in results, or creating better solutions for financial customers.

We also look at how AI applications can be improved to create more unbiased solutions, to make sure the solutions are explainable, and to build in responsibility. We also try to look at what the impact of AI will be on, for example, the workforce and labor market, and what policies could be designed accordingly that would work better. That’s all about me. I work in a variety of domains, not just AI. AI is one part of my work, but I also work on social media, digital commerce, and blockchain.

The idea of disruptive technology is something wide and vast. I like how you have an interest in all these different fields. When people think about disruptive technology, there are two camps. People like myself see the glass half-full. Others see the glass half-empty; they worry about all the downstream effects of disruptive technology. I wonder: from someone like yourself who has watched all of these things happen, is it overwhelmingly positive, or is it most often negative?

It’s always a mixed bag, as with any other technology. When we think of all the disruptive technologies and AI, there are going to be a lot of good things, but not everything is good. There’s potential for bad, but it’s mostly good. How we can increase the good and reduce the bad lies with us. When I say us, that includes the companies developing the solutions and how they’re using them, policymakers, and society, and what we are ready to do. It’s mostly good. Some things are bad, but if we don’t have a controlled way to look at it, the bad part may grow. Otherwise, I’m mostly optimistic, given what we are able to do now that we could not do earlier.

 

FSP - DFY 16 | Disruptive Technology

 

I would like to think that it’s always going to be a mixed bag but that it tends to err on the side of good. I like to think about Martin Luther King’s quote, “The arc of the moral universe is long, but it bends toward justice.” Especially in regard to technology, the arc of history is long, but it always tends to end up better for the human race. When you think about the printing press, you could point to all of the negative books that were written, but overwhelmingly, it has been a phenomenal thing for humanity to have that technology.

That’s something that I want people to think about when they think about AI and all these things that are coming down the pipeline. When you are evaluating all the different people that have an interest in guiding us toward the beneficial effect of technology, who do you feel has the most responsibility for that? Is it policymakers? Is it companies? Is it the individuals? Who are the people that you say, “Those people need to take action here.”

The responsibility lies with everybody, but more of it lies with the companies creating these solutions, first of all because these companies know how they’re building the solutions and what the impact could be. Even for them, it is hard to foresee the real implications until the solutions start arriving, but they have a lot of responsibility to think about it carefully.

Second, the biggest piece is the policymakers. For policymakers, the challenge is that they don’t even know what is going to work and what is not. What will the solution look like? How do you regulate it? Too much regulation is also very bad because it will curb growth. These are the two stakeholders with whom the most responsibility lies. The impact on individuals will be large, but they can’t do much at this point to control what should and should not be done. In terms of control, more responsibility sits with the solution developers and the policymakers.

The challenge for policymakers right now is that they don’t even know what will and will not work in disruptive technology. If they impose too many regulations, they might curb its growth.

Is there a precedent for this? Technology is always advancing; new inventions and new disruptions are always happening. Is there a precedent for policymakers and businesses steering a technology in the right direction? That’s one of the things a lot of people worry about. We’re entering a period of change that’s happening so rapidly that it’s exponential. It’s hard to keep up with that pace of change.

For example, you mentioned social media. Social media was a thing that was introduced to us as a technology. A lot of people thought it was great, and it’s only now that we realize that there are some negative effects of it. That’s when the policymakers are stepping in. Other than social media, is there a precedent for these actors to intervene in these things?

It has been happening for quite some time. When bicycles were first introduced, there were protests because bicycles were disturbing the horses pulling carts. The horse cart was the key mode of transportation. A lot of people felt that this newer technology was not going to last very long anyway: “They are fads. They’re disturbing our main mode of transportation, which is going to stay around for quite a while.” People were not happy.

Fast forward to when calculators came along: there was a big protest from teachers. They said, “They should be completely banned by the policymakers because they will make our students dumber. There will be a lot of bad uses for calculators. They will impact the learning process.” Even when the World Wide Web came, there were a lot of concerns about how it should be used: “It should be regulated. Not just anybody should be able to have a webpage,” and so on, for very similar reasons, because there would be chat rooms and the like.

When social media started, users had even more concerns than policymakers. Users were not sure why somebody would post their pictures for the general public. It was a very foreign concept. Nobody thought it was ever going to work, but a lot of questions were raised for policymakers as well about how to make sure it was not misused, especially with children.

Those debates keep going on. Do we see regulations? Yes. The European Union came out with more privacy regulations than any other jurisdiction. The US, China, and India all have their own sets of regulations; all large economies do. What is different here, though, is that there have been petitions from very prominent people like Elon Musk and one of Apple’s cofounders. They appealed to the government to take action. In less than a month, the White House came out and said, “We are thinking of doing something.”

The reaction has been a lot stronger from both sides: the industry and many experts, as well as the policymakers. This is different. We have plenty of precedents, but this is different because the impacts may be a lot bigger. Once we start relying more on these AI-based solutions, they will start becoming more like black boxes. Many years down the line, we will be using these systems without knowing what they are doing, how they are doing it, or why.

That is one concern people have. The second biggest concern is that it may lead society in the wrong direction, in the sense that it can give wrong results, which can cost lives or create a lot of social problems, and we will not know at that point how to fix it. People are worried about that. These are not invalid concerns. They’re quite valid, though some are sometimes blown out of proportion. We need to be careful in both directions: not letting it run unchecked, but at the same time not making knee-jerk reactions.

 

 

AI is the most popular topic when it comes to futurism these days. It’s something that has the most potential for disruption. Let’s focus on that for a second. When you’re talking about the regulations that are happening from the government and things like that, what are those regulations looking like? When you said that the European Union has put out some regulations for privacy, have they already come up with regulations for AI? If so, what does that mean for the technology?

The European Union has the AI Act; they already have a draft of the newer version. The biggest piece in these regulations is, number one, transparency. They’re asking companies to tell their users how the results are created, what data was used, and what analysis was done to come up with these solutions, so that they can clearly articulate, “Your results may be off, and these could be the implications.” They want companies to be very clear in giving the complete details of how they came up with a result and what the implications of a wrong result could be, which is currently missing.

For example, ChatGPT carries a disclaimer, “Our results may be wrong,” but if you go and ask, “Could I have cancer?” it can say, “I believe you have cancer.” It can give you those results, and that can have harmful implications for health decisions or financial investments. Regulators want companies to be very careful. ChatGPT has at least reacted to some of this; it has stopped giving certain results and will say, “I cannot give you an answer because I don’t know enough.” That is a problem too. People anticipated that one reaction would be to stop giving results at all. We don’t want that to happen, but we also want these regulations to ensure there is transparency.

The second piece is accountability. That is a tricky one. What if it gives me the wrong result, and I make the wrong decision based on it? Who’s responsible for that? Regulations are asking all these companies to take more accountability. Accountability and transparency have always been the biggest pieces in how these regulations are written. All of these regulations say, “We want to protect our digital citizens and the users of these technologies.” How granular they will be is a very challenging question. Whatever regulations we write, there will be pluses and minuses. There’s no perfect regulation for these things.

Every country will go its own way. My hunch at this point is that the European Union, as in previous situations, will go harsher on the regulations. They will have more restrictions, which will create some limitations on what can be done in Europe. On the other hand, China always works more on making sure a technology is good for the country, not necessarily for the technology itself. The US will be somewhere in the middle, but all of us are struggling with what it will look like. Transparency and accountability will be the two key pieces in any of these regulations.

I like the fact that there are intelligent people like you, Elon Musk, and the other titans of the industry tuned in enough to call attention to this, because neither side can handle it working independently. If a business alone were responsible, its interest would be to make money. We have seen that social media can lead to issues with downstream effects because the incentive is to make money rather than to protect the citizens or the users.

On the other hand, you have government actors who are not as well-versed in technology; it takes them a little time to catch up. We need people like you and the titans of the industry chiming in on this, not just for AI but for a lot of these different disruptive technologies. There are so many different things happening. We need experts in the field to call attention to them.

I wanted to talk about one component of AI for a second. You’ve highlighted medicine, which is my field. We’re not going to get rid of the idea of humans having an influence on the way patients are treated. AI might be a tool in that, but it’s not going to replace an expert anytime soon, at least from what I’ve heard from others. It might provide a list of five diagnoses, and an expert chooses one, but ultimately, the accountability is on the medical provider.

I worry more about consumer technology like full self-driving, artificial intelligence that drives for you. It would be so easy to let the machine take over. That’s when we start getting into more accountability issues. If I’m in the car letting it drive and we get into an accident, the accountability is on me, but a certain amount of accountability is on the AI as well. Where do you see that shaking out? Do you think the responsibility will ultimately rest with the driver, or will there be some culpability for the AI?

You have raised a very important point. First of all, we are not replacing physicians or many other jobs in our lifetime. I don’t see that happening, or only much later, which is good. At the same time, there are a lot of tasks that are either repetitive or that can serve as good advisory input to physicians. You give a good example of providing multiple diagnoses, but even there, the accountability piece comes into the picture.

There was a recent study that wanted to see if generative AI provides good treatment recommendations for breast cancer. What happens when an expert looks at the output? That expert can easily say, “These results are nonsense,” or, “This is great; I will follow up on that.” But if general users look at those results and are told, “This has 94% accuracy,” that can be extremely misleading.

As long as the results of this AI are analyzed by an expert before a decision is made, we are mostly okay. The problem comes when the output is pushed directly to users. Even in healthcare, that is a serious problem. When an AI-based system creates solutions, how do we restrict them to the experts and not the general users? It’s like going on Google and searching to decide whether you have cancer. Everybody’s a doctor these days, or a “Google doctor” as we might call them, but that’s a problem. The consequences can be huge.
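To make that gating idea concrete, here is a minimal sketch, purely illustrative, of how a diagnostic tool might show its ranked differential only to clinicians while giving general users a referral message instead. The roles, conditions, and wording are hypothetical, not from any real system.

```python
# Hypothetical ranked output from a diagnostic model: (condition, probability).
RANKED_DIAGNOSES = [
    ("condition A", 0.62),
    ("condition B", 0.21),
    ("condition C", 0.09),
]

def present_results(role: str, ranked=RANKED_DIAGNOSES) -> dict:
    """Gate raw model output by user role.

    Clinicians can triage model suggestions and discard nonsense, so they see
    the full differential. General users get guidance, not probabilities they
    may misread as a verdict.
    """
    if role == "clinician":
        return {"differential": ranked}
    return {"message": "These symptoms need review by a licensed clinician."}

print(present_results("clinician"))  # full ranked list for the expert
print(present_results("patient"))    # referral message only
```

The design choice here is that the restriction lives in the presentation layer, not the model: the same predictions exist either way, but who gets to see them raw is a product and policy decision.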

In every domain, a lot of accountability falls on these companies for how they restrict this. Even if you have 94% accuracy and tell that to users, I don’t think that’s enough. If the system says, “With 94% accuracy, you have cancer; you’re going to die in six months,” and it is wrong, the patient will still be terrified. We don’t want that to happen.

On the other hand, if you give this result to an oncologist, they can say, “This doesn’t make any sense,” or, “I will follow up on that.” That’s the first part. Second, you asked about driverless vehicles. If you look at the analyses from the last few years, given how fast the technology was growing, we would have expected a fully self-driving car by 2020, but we don’t have one yet. There are several technical issues around it, but one of the big issues is accountability.

Humans also make mistakes. Even a conservative study can clearly show that the percentage of accidents would go down substantially if we moved to fully autonomous vehicles, but here is the real problem. Most AI systems lack two things: common sense and intuition. Some of these decisions can be quite intuitive.

For example, you’re going down the road and see another car about to cross, but as humans, we can gauge that this driver will edge across a little, so we slow down and keep going. That is extremely difficult for machines. They will brake hard because they cannot judge what will happen. Sometimes humans are wrong in making that call, but machines cannot make common-sense decisions at all. What will happen? Let’s take the car example. Autonomous vehicles will cause accidents in cases where a human could easily have avoided them, but they will avoid accidents in many cases where humans, even expert drivers, could not have done much.

One can always argue that overall, it will be good, but we are less forgiving of machines than of ourselves. Even those one or two mistakes that humans could easily have avoided can set the technology back. That’s exactly what is happening now. One of the challenges we face is the trolley problem. MIT ran a well-known version of this experiment, the Moral Machine. You might have heard about it.

Autonomous vehicles face a big version of this challenge. If you’re driving and have to choose between saving your own life or six people walking on the road, what would you do? What if there are kids in the car? What if there’s an older person on the road? If you ask this question of humans, not everybody answers the same way. Some people say one thing; some say another.

How do you now encode that in the algorithm? One solution could be to give the option to the users; they can tweak the setting. In a Tesla, you can decide how much distance to keep from the car in front of you. They give you a range; all options are in the legal range, but they still give you the choice. They can also let you drive, say, five miles per hour above the speed limit on a local road. They give you that option, so you can think about bending the rules there. Arguably they should not, but if they allow no flexibility at all, that creates its own problem. How much tweaking should users be allowed to do? That is the question.
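As an illustrative sketch, not Tesla’s actual implementation, the pattern being described is user-tunable preferences clamped to an envelope the manufacturer (or regulator) enforces. All parameter names and ranges below are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class DrivingPreferences:
    follow_distance_s: float  # desired time gap to the car ahead, in seconds
    speed_offset_mph: float   # desired offset above the posted limit, in mph

# Hypothetical bounds the vendor enforces regardless of user input.
LIMITS = {
    "follow_distance_s": (1.0, 3.0),  # never closer than 1 s to the car ahead
    "speed_offset_mph": (0.0, 5.0),   # at most 5 mph over, local roads only
}

def clamp(value: float, lo: float, hi: float) -> float:
    """Restrict a requested value to the allowed envelope."""
    return max(lo, min(hi, value))

def apply_preferences(prefs: DrivingPreferences) -> DrivingPreferences:
    """Return the user's preferences coerced into the permitted range."""
    return DrivingPreferences(
        follow_distance_s=clamp(prefs.follow_distance_s,
                                *LIMITS["follow_distance_s"]),
        speed_offset_mph=clamp(prefs.speed_offset_mph,
                               *LIMITS["speed_offset_mph"]),
    )

# An aggressive driver's request gets pulled back into the envelope.
requested = DrivingPreferences(follow_distance_s=0.5, speed_offset_mph=12.0)
applied = apply_preferences(requested)
print(applied)  # follow_distance_s=1.0, speed_offset_mph=5.0
```

The accountability question in the next paragraph maps directly onto this split: anything inside `LIMITS` is the vendor’s design choice, while the value the user dialed in is theirs.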

Second, once that happens, there’s accountability. If the accident happened because of that tweaking, it falls on the driver, but if it happened because of the algorithm, it falls on the car. We will have a lot of lawsuits over defining accountability: did it happen because of a user-level setting or a machine-level decision? Even reaching that point seems very challenging because we cannot yet define the law. The technology is not good enough to take complete control. Drivers still have to be in control. How many years it will take is unclear at this point.

It may be okay on highways, where we may have a dedicated lane for autonomous vehicles. If all cars are driverless, we may be quite okay, because the algorithms can talk to each other and they will be fine. The problem is the transition phase. We don’t know how to get through the transition phase. In the long run, more accountability will shift toward the algorithms and cars, but in the short run, almost everything will be on the users. You cannot get off the hook by saying the algorithm did it, because you are supposed to be in control.

 


 

That’s where we are. That’s why Tesla had to change what they call their system; they don’t call it fully autonomous, and other companies had to do the same. This whole accountability question rests, for now, more with individuals, and it will keep shifting toward the machine. Hopefully, in this lifetime, we will reach a point where it is fully on the machine. Let’s hope so.

I always say that’s the technology coming down the pipeline that I’m most excited about, along with consumer robotics, which I do want to talk with you about because I know you have a background in it. Specifically, full self-driving: I have a 30- to 45-minute commute every single day. That would free me up to do something I’m interested in for those 30 to 45 minutes. You live in the Northeast as well; it’s almost ubiquitous around here that you have to drive a certain amount of time to get to work. That in itself is going to be almost like a ChatGPT moment. When ChatGPT spread through the national consciousness, it was like wildfire.

Everybody was like, “Have you seen this thing?” The same thing is going to happen with full self-driving because it’s something everybody experiences. Everybody has a commute, or at least most of the people I know do. Maybe it’s a biased sample, but most people would love the ability to free up that time in their day. Let’s talk a little bit about consumer robotics, because I know you have a background in robotics as well. How far are we from having robots do menial tasks like folding our clothes or washing our dishes? What are some technologies that give you hope for that?

We have made very good progress on that. We have had robotic systems for a long time; since at least the ’50s, robots have been used heavily in manufacturing and logistics companies, but their tasks were pretty well-defined. They can do the same task repeatedly and well. You can write algorithms where they can even be dynamic, but only in the sense that in place of going from 1 to 2, they go from 1 to 3. That’s what my patent is about.

They have done all that very well for quite some time. There’s no problem there. What we are trying to do now is things like folding clothes and fetching water: “Can I have humanoid robots that can give me a massage, bring me water, and possibly solve a problem for me?” There are some good examples. At some of its sorting facilities, FedEx has started using sorting or picker robots. It’s nice; the videos are online. They pick up items and put them on the conveyor belt.

This is a common task for any eCommerce company. Amazon has it. Walmart has it. It’s mostly done by humans and is considered a very tough task to automate. The problem is that different items can have different shapes, and they can be spread around at different locations rather than all in the same place. Traditional robots expected everything to be exactly the same way. Sometimes items end up in a corner; the robot has to find their dimensions and check for them.

We implemented this for one of the large manufacturing companies, in the cement industry. They had to carry cement, and the place where they pour it has to be inspected to confirm everything inside is good before the cement goes in. It was done manually, and it’s a very difficult task; somebody has to go in. We created an AI-based solution that goes inside. First, it measures where the holes are; that is also done by the machine. It determines how many holes there are and what the dimensions of each hole are.

The way these things work is that they first measure the dimensions and then decide how to act accordingly. That’s how the human brain mostly works anyway; we replicate that. We are seeing more of these solutions. The challenge is how to generalize them to do multiple things, but for individual complex tasks, we are there. A robot can fold clothes for us. It can bring water for us.
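The measure-first, then-act pattern described above can be sketched in a few lines. This is a toy illustration, not the cement project’s actual code; the field names, units, and threshold are invented.

```python
def measure_holes(scan):
    """Stand-in for the sensing step: extract (diameter_cm, depth_cm) per hole."""
    return [(h["diameter_cm"], h["depth_cm"]) for h in scan]

def plan_actions(holes, max_diameter_cm=10.0):
    """Planning step: per hole, decide whether the machine can handle it
    automatically or whether a human inspector should be flagged."""
    actions = []
    for diameter, depth in holes:
        if diameter <= max_diameter_cm:
            actions.append(("fill", diameter, depth))
        else:
            actions.append(("flag_for_human", diameter, depth))
    return actions

# Hypothetical scan output from the inspection robot.
scan = [
    {"diameter_cm": 4.2, "depth_cm": 12.0},
    {"diameter_cm": 15.0, "depth_cm": 30.0},
]
actions = plan_actions(measure_holes(scan))
print(actions)  # small hole is filled, oversized one is flagged
```

The point of the two-stage split is exactly what Dr. Kumar describes: sensing (measure) and planning (decide) are separate steps, so the same planner can be reused as the sensing improves.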

The challenging part is this: can we make it like a human brain, which can take on tasks like bringing water very easily without much training? Today, training has to be done for each individual task. The humanoid robots we hear about are trained for certain tasks, and they do those tasks nicely. We keep increasing their range of tasks. That is already happening.

Most robots these days are trained for individual tasks. They can be improved by increasing their range of tasks little by little.

I don’t think individual complex tasks are a big problem. We saw a revolution with the Roomba and similar devices years back; they could sense walls and so on. We have come very far from that and have pretty advanced solutions, but the challenge still lies in how we can achieve general-purpose intelligence. There’s a lot of research going on in that area. We are not there yet, but we have had some good breakthroughs. In the next 5 to 10 years, we will start seeing more general-purpose robots; if not ones that can do everything, then at least ones that can do a lot of things at the same time.

Most consumers, myself included, would be okay with a lot of their mundane tasks being taken care of by a single robot, similar to the Roomba. I was an early adopter of the Roomba. I hate vacuuming; it’s not interesting to me, and it’s something very easily done by the Roomba. Why not have this robot around? It’s a daily need, especially if you have a kid like I do. Stuff accumulates. You turn the Roomba on, and every day you come home to a clean floor. That’s a great opportunity for a company to make a difference.

A lot of these mundane tasks, like folding clothes and washing dishes, things we do on a daily basis but that are menial, are ripe for innovation. I wonder which will come first. For example, washing dishes is a complicated task because there are different types of dishes and different things you have to do. Will a single robot that does that come first, or a general robot that does everything? I’m not sure. Do you have any insight into that?

We will see a lot of single-use, single-task robots come first, and those will handle simpler tasks first. You will have two parallel tracks of growth. One will be multiple-task robots doing a lot of simpler things. In parallel, we will have the development of robots that can do one complex task. The first is a lot easier: once we know a robot can do one simple task, converting it to five simple tasks is not that difficult, because you just keep adding skills. That will come faster than the complex ones, but both are going on in parallel. People are looking at combining more simple tasks in one robot.

A similar thing happened with the Roomba: at first, it could only vacuum. Now it can do more, like mopping and so on. We will see more of that, but in parallel, people are trying to crack the complex tasks. Another important thing to understand is this: which complex tasks actually need such a solution? Consumers are not ready for all solutions. There has to be a market for them as well.

Another line of research is going on: which tasks do consumers most want to get rid of? It’s still very complex. “Consumers” here could be businesses, such as logistics companies and warehouses, or individual consumers. We will see both. We will start seeing a lot of such solutions come to market, and some will have higher adoption than others. Once that becomes clear to companies, they will invest more. We will see both in parallel, but more combining of multiple simpler tasks than doing one complex task. That complex task is always the holy grail of this AI industry. How do you do it?

Reducing menial tasks and making our lives easier and more productive is the holy grail. Which companies, specifically in the robotics industry, do you feel are the leaders? The only one I’m familiar with is Boston Dynamics, but for consumer robotics, is there anything else out there that you think is exciting?

This industry is so interesting. On one side, there are a lot of startups coming up with great algorithms. On the other are the companies that manufacture the devices. They’re not necessarily the same companies, and what we are seeing is a lot of these companies getting together. Companies like Nvidia are very good on the algorithm side; they have the chips for it, and they are getting into even the hardware side.

Nvidia is emerging as one of the big players that can do both, but there are other companies. iRobot, the Roomba maker, is trying to create more solutions; that’s another big player in the robotics industry. The Zurich-based company ABB is one of the big players in the market. There’s also a Chinese company called Midea Group, which is getting into both the algorithm and hardware sides. There are a few more. Yamaha has also jumped into this market and is one of the big players. In the manufacturing industry, there’s a company called FANUC; they make a lot of numerical control machines and are another big player in the market.

What we will see is a lot of collaborations between the startups developing good algorithms and some of these hardware manufacturers. You will see newer companies coming into the domain. Don’t be surprised if we start seeing a lot of companies we have never heard of. A lot of vertical-specific solutions will also come.

Some companies like ABB, Rockwell, or FANUC will focus more on industrial robots; that’s what they’re trying to do. Companies like iRobot are trying to focus more on consumer-level robots. In addition, some large companies like Amazon are also considering homegrown solutions because they have a large warehouse problem, so they will work on that too. It is coming from many different directions. That will happen in the next few years.

Do you see any obstacles or hurdles for consumer electronics, specifically in the robotics space?

In consumer electronics, there are two types. One is general-purpose use, like what we saw with Roomba-style solutions; we are getting solutions for basic tasks, like bringing us a glass of water or giving a massage. I don’t see much problem with regulatory issues there. More regulatory issues arise where systems make decisions on our behalf that can have bigger implications, like driverless cars. That’s where we see regulatory hurdles. Otherwise, the consumer side is more about technology and adoption. Are people ready to pay that kind of money? That’s another critical question.

Companies are in a very capital-intensive market. Unless they know there will be buyers, many companies are hesitant to invest that kind of money. The biggest challenge is adoption, even more so than technology. What kinds of consumer solutions are customers ready to pay for? Once that is resolved, we will be fine. Regulation will be manageable for many of these consumer-based solutions.

The biggest challenge in disruptive technologies is adoption. Once people know what they are ready to pay for, regulations will be okay with consumer-based solutions.

It was so nice talking with you. I have three questions that I always do at the end with all of my guests. They’re not always the same. As we’re talking, we provide enough information that I’m very curious about your thoughts on a couple of different things. I would love to talk to you forever. It sounds like you have a great grasp on where we’re headed as a species but unfortunately, we’re close to the end of our time. What I always want to know is where people like yourself get inspiration from.

For me, it’s science fiction. I always love reading about robots from Isaac Asimov. I like the utopian vision of Star Trek. I like all the interesting technologies that are coming out of comic books. When I think of artificial intelligence, I always think about Vision in The Avengers and how he was a force for good as opposed to bad. What are some things that you draw inspiration from? It doesn’t need to be science fiction. What gives you passion for what you’re doing?

Mine is a little different. Most of my inspiration comes from when I see problems and feel that they should not exist. That makes me very annoyed, and that's when I think of solutions. This could be in healthcare or manufacturing. I can see that there's no reason for us to be this inefficient, and that's when I ask what we can do. We got an NIH grant with some doctors in Philadelphia. We are working on the pulse oximeter, which I'm sure you know of. When we did a study, we found that the readings are inaccurate for certain skin colors because all the testing was done on one skin tone.

The question is how to fix the problem. You cannot have different machines for all skin colors, but we can use AI for that. What we are doing in that grant is we have a sensor that reads the skin color, runs the algorithm, and sees how the reading should be adjusted. Those things fascinate me and motivate me to find solutions. I'm also working on this with an MIT team. You may know the Jameel Clinic at MIT. They have given us a grant to study personalized cancer treatment.

Where the motivation came from is that when I was talking to a lot of oncologists, they said there are set rules for how patients are treated. Given that we have made so much advancement in personalized recommendations and customized solutions, why can't we do that in the healthcare domain? That motivates me a lot. I can give you similar examples in manufacturing or retailing where I see inefficiencies or inaccuracies in the system and think, "Why don't I use my AI-based knowledge for that?" That has always been my inspiration and motivation to do all these things.

Leading into that, where do you see us in 5 to 10 years? What are you most excited about for the future for you and your family? We had talked about the advent of artificial intelligence and robotics making my life easier. I look at it as almost like a concierge service of being picked up by my artificial intelligence car and being driven to my work. Artificial intelligence helps me with all of my notes and everything that I need to do that’s a menial task. When I come home, I don’t need to clean anything because the robots are doing it. That to me is the future that I get excited about. Where do you see us in 5 to 10 years? What gets you excited about that?

First is the car thing. We are all fascinated with that. We want to be driven by these cars. I drive partially autonomously now, and I find it very useful even with this; when I can drive for 70 to 80 miles on the highway, it is very good. Like you, I'm looking forward to that, and I hope it happens sooner rather than later. That's number one. Another thing I'm looking forward to is that when I go to see a doctor or take my car to the dealer for repair, a lot of those things can be done by machines. I shouldn't be doing them, and we can make the process a lot faster. They don't need to be fancy solutions. Third, a lot of my home chores can be done by these machines. I'm looking forward to that too, and I hope it happens soon.

The last question is a general question. You’re an expert in disruptive technology in general. We are very heavy on AI and robotics, which are two of your areas of expertise. What other disruptive technologies have you come across that you’re excited about? For example, I spoke with a futurist. The two that he’s excited about are longevity or the idea of living longer and then the food revolution. He thinks that we’re going to be having more nutritious and exciting food available to us in the future. My point is this. As a blank slate, what are some other things that might not be as well-known but are still things that interest you that you think are going to be disruptive?

From the technology perspective, I will name three and then talk about some of the solutions. The three are blockchain, the metaverse, and virtual reality. These three will change the way we operate. What solutions will they provide? For example, think of the food supply chain. You mentioned healthier food. One of the challenges we have is that we don't know how the food that reaches our hands is created, how it is processed, and how it is done.

The three things that will change the way we operate are blockchain, metaverse, and virtual reality.

We take a lot of things for granted. The best we get is organic or not organic, but blockchains in the food supply chain have potential. They can show us a video of each step. That can change consumer behavior for good in many ways. It can force many of these companies to act in a more ethical as well as more efficient way. That's one thing that will happen that we are not seeing yet. It has the potential to do that.

Second, things like the metaverse have the potential to provide real-time, in-person-like solutions for things we currently need to fly or drive for, which is very inefficient for many of us. We are going to see a lot of those kinds of solutions, which are a mix of mixed reality and the metaverse. Both have the potential to provide a different kind of solution than what we see today. The reason I say "a different kind" without specifying is that when the World Wide Web was created in the '90s, nobody thought we would have something called social media. We had no clue about that.

Even leaders like Bill Gates were not sure. If you watch his interviews, he didn't know what would happen. We don't even know what solutions we will create, but this will create something that changes our lives, mostly for good, with some challenges, in very different ways. We have to look forward to these technologies. As businesses, we have to see what they can do. Individuals and society as a whole have to see how they can leverage them.

I will end with two things. We need to look at two sides of AI: the ethics side and the bias side. They're not the same thing. Ethics mainly deals with things that are not illegal but that I still should not be doing. How can we create AI systems that work in an ethical manner? On the bias side, how do we make sure that the data is dynamic enough and that unbiased algorithms train these systems so that we get unbiased results going forward? If we don't control that now, we will be in trouble. As long as we do, we will see a lot of nice disruptions happening.


I like the term nice disruptions because there’s a lot to be excited about. If we can look at those things, then we can have an optimistic version of the future. Thank you so much, Dr. Kumar, for joining us. To all the readers, thanks for reading. We will see you again in the future. Have a good one, everybody.

 


About Subodha Kumar

Subodha Kumar is the Paul R. Anderson Distinguished Chair Professor of Statistics, Operations, and Data Science and the Founding Director of the Center for Business Analytics and Disruptive Technologies at Temple University's Fox School of Business. He has a secondary appointment in Information Systems and also serves as the Concentration Director for the Ph.D. Program in Operations and Supply Chain Management. He is a board member for many organizations. China's Ministry of Education has awarded him a Changjiang Scholars Chair Professorship, and he is a Visiting Professor at the Indian School of Business (ISB). He has served on the faculties of the University of Washington and Texas A&M University. He has been a keynote speaker and track/cluster chair at leading conferences. He was elected a Production and Operations Management Society (POMS) Fellow in 2019 and has received numerous other research and teaching awards. He has published more than 230 papers in reputed journals and refereed conferences and was ranked #1 worldwide for publishing in Information Systems Research. In addition, he has authored two books, book chapters, Harvard Business School cases, and Ivey Business School cases, and holds a robotics patent. He is routinely cited in media outlets including NBC, CBS, Fox, Business Week, and the New York Post. He is the Deputy Editor of the Production and Operations Management journal and the Founding Executive Editor of Management and Business Review (MBR), and serves on other editorial boards. He was the conference chair for the POMS Annual Conference 2018 and the Decision Sciences Institute (DSI) Annual Conference 2018 and has co-chaired several other conferences. More details are available at: https://sites.temple.edu/subodha/


By: The Futurist Society