Will robots eventually take our jobs? The reality is more nuanced. Robotics expert Matthias Scheutz explains how robots are meant to augment human capabilities rather than replace us entirely. From automated warehouses to healthcare assistance, discover how the next generation of robots will transform our workplaces and homes.

Learn why creating robots that can function in unpredictable environments remains a major challenge, why human-like robots might not be the best approach, and how teaching machines to understand social norms is crucial for their successful integration into society. Scheutz shares insights on the practical limitations of current robotics technology and offers a realistic vision of how robots will shape our future in the next decade.

Watch the episode here

 

Listen to the podcast here


 

About Matthias Scheutz

Matthias Scheutz, Robotics

Matthias Scheutz received a PhD in philosophy from the University of Vienna and a joint PhD in cognitive science and computer science from Indiana University. He is the Karol Family Applied Technology Professor of Computer and Cognitive Science in the Department of Computer Science at Tufts University's School of Engineering, and Director of the Human-Robot Interaction Laboratory and the HRI Master's and PhD programs.

He has over 400 peer-reviewed publications in artificial intelligence, artificial life, agent-based computing, natural language understanding, cognitive modeling, robotics, human-robot interaction and foundations of cognitive science. His current research focuses on complex ethical cognitive robots with natural language interaction, problem-solving, and instruction-based learning capabilities in open worlds.

Motivation and Inspiration

Matthias’ work is driven by a vision of making robotics more accessible and natural for human interaction, while ensuring robots operate ethically within human social norms. His broader inspiration comes from seeing robots as humanity’s evolutionary ambassadors to space exploration. Beyond his robotics work, he’s deeply fascinated by nutrition science and its potential to treat disease.


The Challenge of Physical Interaction

Unlike language models that process text, robots must navigate and interact with the physical world. This presents unique challenges that go far beyond processing information:

  • Real-time constraints and immediate physical consequences
  • Need for precise object manipulation and dexterity
  • Understanding and adapting to unpredictable environments
  • Integration of multiple systems: perception, movement, and decision-making

Teaching Robots Social Norms

One of the most intriguing aspects of Scheutz’s work is the emphasis on teaching robots social norms and ethical behavior. This isn’t just about programming rules; it’s about creating systems that can:

  • Understand context-dependent social expectations
  • Make ethical judgments in complex situations
  • Say “no” to inappropriate requests
  • Adapt to different social contexts, from private homes to public spaces
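The norm-aware behavior described above can be sketched in code. The following is a minimal, illustrative sketch only; the class, norm names, and contexts are invented for this example and are not from Scheutz's actual systems:

```python
from dataclasses import dataclass

@dataclass
class Norm:
    context: str      # where the norm applies, e.g. "public", or "any"
    forbidden: str    # the action this norm forbids
    principle: str    # the justification cited when refusing

class NormAwareRobot:
    """Checks each requested action against learned norms and can say 'no'."""

    def __init__(self, norms):
        self.norms = norms

    def request(self, action, context):
        for norm in self.norms:
            if norm.forbidden == action and norm.context in (context, "any"):
                # Refuse and cite the principle, rather than silently complying
                return f"I can't do that: {norm.principle}"
        return f"Executing: {action}"

robot = NormAwareRobot([
    Norm("public", "block_aisle", "blocking a passage violates shared-space norms"),
    Norm("any", "break_into_pharmacy", "destroying property is not permitted"),
])
print(robot.request("block_aisle", "public"))        # refused, with a principle
print(robot.request("block_aisle", "home"))          # allowed in a private context
print(robot.request("break_into_pharmacy", "home"))  # refused in any context
```

The point of the sketch is the shape of the check, not the rules themselves: the same action can be refused in one context and permitted in another, and every refusal comes with a principle the robot can cite when asked why.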

The Shape of Things to Come

Contrary to science fiction depictions of humanoid robots doing everything, Scheutz suggests that the future of robotics will be more diverse and specialized. Robots will come in various shapes and sizes, each designed for specific tasks:

  • Specialized robots for dangerous environments like disaster zones
  • Task-specific robots for industrial automation
  • Healthcare assistance robots to support medical staff
  • Space exploration robots that can endure conditions humans cannot

Addressing Common Fears

The conversation tackles common fears about robots replacing humans. Matthias offers a more balanced perspective:

  • Robots will primarily augment human capabilities rather than replace humans entirely
  • While some jobs will be automated, new roles will emerge in robot maintenance and supervision
  • The goal is to automate mundane, dangerous, or repetitive tasks, freeing humans for more meaningful work

The Next Decade

Looking ahead to the next 10 years, Scheutz is cautiously optimistic but realistic about the pace of progress:

  • Expect more specialized robots in industrial and commercial settings
  • Gradual integration of robots into healthcare and service industries
  • Continued challenges with making general-purpose robots affordable for average households
  • Increased automation in warehouses and factories

A Vision for the Future

Perhaps most inspiring is Scheutz’s long-term vision of robots as humanity’s proxies in space exploration. While our biological limitations restrict our ability to explore the cosmos directly, robots could serve as our ambassadors to the stars, carrying human knowledge and achievements beyond Earth.

Call to Action

As we stand on the brink of this robotics revolution, it’s crucial for everyone to:

  • Stay informed about robotics developments and their ethical implications
  • Support research and development in ethical AI and robotics
  • Engage in discussions about how we want robots to integrate into our society
  • Consider the balance between automation and human agency in various fields

The future of robotics isn’t about creating human replacements but about developing tools that can enhance our capabilities while respecting human values and social norms. As we move forward, the key will be ensuring that these technological advances serve humanity’s best interests while preserving what makes us uniquely human.

Transcription

This interview has been transcribed using AI technology. While efforts have been made to ensure accuracy, the transcription may contain errors.

Hey everybody, welcome back to The Futurist Society, where as always, we are talking in the present, but talking about the future. A very special guest for you, a fellow professor at Tufts University, just like myself, Matthias Scheutz, who is a really important guy in the field of AI robots. He's in the Department of Computer Science in the School of Engineering.

And he actually had a really important way of looking at artificial intelligence, which is one of the reasons why I wanted to speak with him. Thank you for joining us, Matthias. Can you tell us a little bit about yourself and kind of what you’re doing at your department? 

Thanks so much for having me, Imran. I really appreciate it. Yes, my name is Matthias Scheutz.

I’m a professor in the Department of Computer Science at Tufts. I direct the Human-Robot Interaction Lab there in the School of Engineering. And our work focuses on making robots easy to interact with and that involves people being able to talk with robots, teaching robots new skills, new behaviors, new tasks.

And we want to do all of that in a way that people don’t really have to learn much about the robot. They should just be able to walk up to a machine and tell the machine what to do. So that involves a lot of AI technology.

It involves understanding natural language instructions. And we take the robots that we instruct and then we evaluate how well they work in so-called human-robot interaction studies. People will do tasks with robots or teach robots tasks.

And then we evaluate how well does that work. We might have different settings where in one case the robot is not as intelligent as the other or it has a feature that the other one does not have. These are all software features and cognitive features we are talking about here.

Then we look at the benefit or the advantages of having this additional capability. For example, remembering what you’ve told the robot before or proactively thinking about what ought to come next in a joint task. So we evaluate those features.

And then afterwards, we go back to the drawing board and try to improve things more.

So I feel like some of the features that you've said are already present in technology, right? Netflix tells me what it is that I've watched previously and it also recommends new things that I might like. I guess my question to you is, and I feel like a lot of our listeners are always wondering, when is that ChatGPT moment going to happen for robots?

Because a lot of the technology is out there but nobody’s really synthesized something that’s palatable to the everyday person.

Yeah, so that's a very good question and it's a very complex question, because it involves different aspects that have to do with how robots operate and why robots are more difficult than a virtual chatbot, and why actions of robots are more consequential than text that gets returned from a system like ChatGPT. People have been trying to put so-called generative AI models on robots for the last three years now with varying success, and there are startups that will do it.

But in general, robotics is hard. That's part of the problem, why we're not just seeing a ChatGPT robot controlling things. The other aspect I would like to mention here is that ChatGPT is not all of AI.

And the technologies that we usually use on robots are not typically the same kinds of models that are used for these bots or question answering systems.

Yeah, so I know that generative AI is different from just the kind of functional AI that you talk about with robots. And I think that the general public kind of knows the difference also. But it’s just, you see Boston Dynamics with all of these autonomous robots that are able to walk around.

You see the ability for certain robots or AI models to surpass us in chess or in Go. And from the outsider perspective, it looks like we’re making these massive leaps, but it hasn’t really gone out of the lab. It hasn’t really gotten into our hands yet.

And I guess, as someone who is an expert, what do you think that the reason for that is? And also, what are the barriers? I know that it’s difficult and I won’t take anything away from that, but I feel like it’s just like, that’s what everybody wants.

Personally, I feel like that's gonna unlock a lot of human potential, when we can ask a robot to do our dishes for us so we can spend more time with our children, or do some of these more mundane tasks. There's just so many mundane tasks in being a human in 2025, right?

And so I guess, when will that happen where people see the value in robotics outside of the laboratory?

Yeah, so it’s definitely the case that there is a lot of work going on in that area, especially for mundane tasks, right? It can be in the industrial space, it can be in the personal space, the home, people’s homes. It’s hard.

Again, it’s hard because it requires the integration of perception. You need to see what things are and what they look like. The dexterity of grabbing things, the manipulation of objects, where to pick it up and how to move it.

Assumptions about what might happen if I grab it too hard, right? Grasp an egg and press too hard to break it, right? Knowledge about common day and everyday objects.

All of that has to come together in the right kind of software infrastructure so that the system not only can perceive the thing, but can manipulate it and do the task in the right sequence. What's challenging there is that, different from, say, controlled environments where we know all the objects that are in there and the robot has been trained on how to move them and pick them up and manipulate them, when you go to a more open environment, like a person's home, there might be objects it has never seen before, right?

And there may be tasks it has never done before. It comes out of the factory, maybe with a set of pre-trained tasks, but then it gets into this open environment and there it is. You go into the playroom and there's a mess.

And then you say, clean it up. Well, it might not even understand what clean up means, but even if it understands it, it may not know what it means in that particular context. In fact, you might have very particular ways of cleaning up, right?

Certain things need to go on shelves. Other ones need to go in boxes. They need to be stowed away.

So all of that, the robot would have to learn on the fly. Part of the challenge right now is that most learning happens through the collection of lots of data. So you will have to go into millions of people’s homes, look at all the playrooms, right?

And look at all the different toys they have and have robots train on picking them up and moving them away. That’s the current orthodoxy. The idea is you learn it by just collecting data and then hopefully at some point you’ve seen enough toys that you will be able to generalize to toys you haven’t seen because they’re in some ways not too different from the ones you have seen.

That's what people are doing. That's what companies are doing. The difficulty with robotics, as opposed to, say, generative AI, is that it's hard to get that data.

It’s easy, comparatively speaking, to go to Wikipedia and download all of it. It takes a few clicks. Everyone can do it.

And then you can use all of that natural language data to train your generative AI model. And what it learns is essentially all these sentences and how to predict sequences of sentences in that data set. How are you going to do that with cleaning up your room?

Who's going to have that data? The other challenge is that with ChatGPT or generative AI, learning on a large natural language corpus, the idea of just predicting the next word, at some timing that is not really clearly defined, however long it takes the model to do it, is very different from a robot that has to catch something that's falling off a table, because there's a timing constraint and it has to be fast enough. It has to be real time.

So the timing constraints, the physical constraints, the perceptual constraints, the difficulty in being entirely sure what something is and how to manipulate that object. All of those are challenges that are really, really hard. And that's why the robots are not there yet, compared to, say, chatbots that know lots of stuff and can answer lots of questions.

Who do you think is doing it the best? And that doesn’t need to be a company, but like even a country or even like a sector, you know? Because I see the capabilities for robotics in industrial labor, in households.

I would even venture to say that autonomous driving is a component of robotics too. Who do you think is doing it the best and why? As an outsider looking in, I think of Tesla just because of the mass adoption, because that's really what I think we would need before we actually, you know, have the comfort level with robotics, right?

Because right now it’s something that’s foreign. It doesn’t exist. And there’s a little bit of hesitance to it.

So in your opinion, who do you think is doing the best and why?

So it’s not clear that there is a single, you know, entity that’s doing it the best. You mentioned Tesla. You know, there’s a very interesting aspect to it.

And this is a sort of a higher level, more society level kind of point. A lot of the tech companies are using us as the guinea pigs in trying out their technology. And when it doesn’t work, you know, they fix it.

You could say, well, with the chatbot, maybe if it gives you a bad answer, that is not a problem. I think that is a problem, right? You shouldn’t do that.

You shouldn't just throw out a product that has potentially harmful effects and test it live on people. Well, Tesla has been doing the same thing with cars. And what's interesting is how the financial pressure oftentimes leads to technological shortcuts that ultimately are not up to the standard that we would expect.

So with Tesla, for example, they had a whole sensor suite that allowed them to get a good look at the car's surrounding environment. And at a certain point they decided, no, we're not going to use all these different sensors. We're just going to use cameras.

Because let’s focus on vision, because that’s what people do. And that’s a good point, right? Roads are built for people who have two eyes.

However, we know when it's foggy, when it's rainy, your vision might not help you, right? You cannot see through the fog or through the rain. And that oftentimes does lead to accidents.

And the accident rates are higher when the weather is bad. Well, these cars had radar, and radar was able to penetrate the fog and the rain and could see things that would have made them safer, right? So why cut back on that?

Well, there’s financial reasons. There are other companies that have advanced technology like OpenAI in a massive way, but they’re also ultimately interested in the financial aspect, right? And so when it comes to, for example, AI safety, there’s some lip service to it, right?

But practically speaking, these systems are not at that level yet. So overall, I think the advances are being made, but they're all made with a particular technological orientation. That orientation is: we use one of these so-called foundation models or transformer-based models.

These are generative AI type models that can be configured differently. We collect a lot of data and we train them. And when something goes wrong, we train a little more.

And if it goes wrong again, we’ll train a little more. And hopefully, eventually, we get to a point where the model is just good enough. And, you know, originally, people had looked at different components in a system like a self-driving car.

So you had a component, a software system, just for perception, just for seeing what's out there. You had a system that just did the controls or the motion planning of where to go. And there's now a movement towards end-to-end systems.

So there’s no more components inside. You have a big neural network. It takes in the information from the sensors and it produces control outputs to the wheels.

There’s some advantages and there are disadvantages. And one of the disadvantages is you don’t know what the system has learned inside. These are all systems that are learned from the get-go.

So we will see where that is going. I cannot tell you. I mean, we have in my lab evaluated various different types of those systems, you know, systems that are end-to-end.

So they take camera input from the robot’s cameras and then the robot needs to pick something up and pour. Or a system that is composed of different modules that have very specialized functionality. And then there is sort of a larger system that sits on top and uses that information.

And in some cases, the end-to-end system is very good at small tasks, but then it sometimes does really silly things and has no notion of what it's doing, because it's not aware of what it's doing. It's just doing it. And so the question is what the right architecture is that combines the advantages of this new AI, right?

The foundation models, the generative models and the idea that a system has some awareness of what it is doing and why it’s doing it and what might happen if it were to do something in a particular way. That’s still in the works, right? We’re still trying to figure out what the right combination is of what we’ve done before that had some of those aspects built in, right?

Certain level of awareness and what we’re doing now, which is this massive data collection and training of systems.

Yeah, the architecture is still not settled; it's not definitive which approach is superior.

No.

And that's, so another gentleman in the Boston area, David Grosin, who's at the Northeastern Center for Robotics, was talking about robotic judgment as being kind of like the next level. Not only is a robot able to differentiate between red and black and all the different colors and navigate a road in such a way, but also training a robot in judgment about whether, if I do this thing, it's going to be much more dangerous than going there. Can you comment on that?

Because it’s something that I feel like is outside of the news right now, but it’s something that we’re really all talking about. Like that’s really what we’re all concerned about, right? Is the ability for a robot to have judgment.

And I know that this is something that you feel very strongly about, that the robot should have the ability to say no to you.

Yes. So if by judgment, you mean risk assessment, that is important. If by judgment, you mean sort of ethical assessment, that’s even more important, I think.

Yes. One challenge I think in general with technology is that we want to make sure it’s safe for everyone, right? What makes AI special and different from other technologies like nano or bio is that we’re building these autonomous systems, autonomous agents.

And because they're autonomous, they have to assess their environment, their state, and then make decisions of what to do, how to act, right? And it's exactly in that decision-making process where the judgment ought to come in. You need to make sure that when the system decides to perform one behavior, one action, and not another, not only should that action be safe, but it should also be in accordance with our ethical guidelines and normative expectations, right?

It should be consistent with that. It should follow what you would expect normatively the system to do. And why is that?

Well, because our societies are built on norms. You know, I’m talking, you’re not interrupting me, that’s a norm, right? I mean, we have lots of these and people are aware of them, you know, at different times to different levels.

But in general, when you ask somebody, why did you or did you not do that, they'll be able to tell you, oftentimes with recourse to a principle. For me, the biggest problem right now with AI is that those systems usually have no norm awareness, zero. Generative AI, which I'll talk about in a second, is a little different.

They’ll just act, right? So take the shopping robot that you will see, I’m not going to name names, right? In stores around here, it’ll sit in an aisle and block the aisle, right?

Happily, because it’s supposed to detect spills, right? But it doesn’t realize that the person might want to grab something from right behind where the robot is from that shelf, right? It doesn’t move out of the way.

It doesn't do what people normally would do. So it's missing even that simple awareness of what it means to live with other agents in a society and, you know, accommodate them or act in ways that they would expect. What's the result?

Well, if robots behave that way, people are not going to want to work with robots, right? That’s very clear. And we’ve done experiments on that, right?

We’ve looked at what expectations people have. We know that trust, for example, goes down, right? They don’t trust the system because it doesn’t exhibit the expected behavior.

So that's very important. With generative AI, it's interesting, because when you train a model on Wikipedia and all the corpora that we have, right, and all the books and all the stories, there's lots of ethical stuff in there, right?

Obviously, because if you think about it, most of the literature probably is just about humans, human interactions, human feelings, right? Emotions and all of that. Norm violations, you know, and how they're addressed.

So the systems know about it. And when you ask them about those kinds of questions, they can answer. What's tricky is that they know only in the sense that, if you ask the right kind of question, based on what they were trained on, this is the succession of words that you would usually produce with some probability, right?

So they don’t really know it. They can generate responses. So now there’s a gap between that knowledge and then what you actually do in a particular context.

And so, for example, if I say to you things like: pick up the knife, cut the tomato, put it down. Then what are you putting down? The knife.

Pick up the knife, cut the tomato, put it in the bowl. What are you putting in the bowl? The tomato.

The tomato, right? Everyone knows that immediately. The generative models don't, because they don't see that context in the way we do.

And the same goes for normative and moral contexts. We've done experiments on this where what is being referred to in a normative context is very clear for people, but the model might go based on what was said before, or what the last noun was in a sentence, or some other criteria. So that's a challenge, because you would want it to understand the right context, what to do in this context.

So if you then put this model on a robot, right, the robot won’t do the right thing because the model doesn’t know what the right thing is. So that’s a challenge. So we need to get to that point where these systems are not only aware of our norms, but are able to really follow them.

And when you cannot follow a norm, because maybe, you know, there's a conflict, and we oftentimes have norm conflicts, then we have to sort of adjudicate which to follow and which to temporarily ignore, right? And if you then subsequently get blamed for it, which is another natural process, right, you committed a norm violation, I'm going to blame you for it because you should have followed that norm. At that point, you will have to make some sort of justification for why you broke that rule, right? Why did you do this?

And then if that is reasonable and if that makes sense, right, that's fine. Then you are not blameworthy, but you might not be able to do it, right? But it requires you to know the norm in the first place, because how could you justify your actions with recourse to principles if you don't know those principles?

So we would expect machines at some point to be able to do the same thing if they're embedded in our society. And I'm very hopeful that that's very doable. You know, we've been working on this for a while, and we've been able to show that in simple cases, that's exactly possible. And it's also possible to determine that there's a conflict and that I cannot do both things.

You know, I cannot, for example, respect the fact that we are in a conversation, I shouldn’t interrupt the speaker, but there was a fire alarm that went off, right? It goes, fire, come on, come on, we got to leave, right? So at that point, that trumps it.

The safety here is higher than, you know, the politeness value. So for machines to understand that, there’s more work that needs to be done. And more importantly, I get this question, you know, from journalists often.

So what norms do you put in there? It's not my job as a computer scientist to put the norms in there, right? But it is my job to build the scaffolding, the software infrastructure and architecture, for the system to learn the norms of its, you know, social surroundings, wherever it's operating, right?

If the robot’s in your house, it will have to pick up whatever is customary and whatever the norms are, right? Your norms, right? And then the moment it leaves your apartment, you know, then in the hallway, it will have to abide by what’s customary in the building.

And when it goes on the sidewalk, well, then there we are in the society, right? So understanding that, understanding the shifts in context, what goes and what’s allowed, not allowed, that is really critical. And so I’m coming back to what you had said about robots and ethical judgments and needing to be able to say no.

That’s part of it. If you give me a bad instruction and it may not have been intentional, right, but just a bad instruction in the sense that carrying it out would violate a norm, I should not automatically do it. At the very least, I should say to you, you want me to do what?

You realize that the following is going to happen if I do that. And at that point, maybe you have a good reason for me to still do it. You know, so for example, here’s your robot and it’s after hours and you instruct the robot to break into the pharmacy and get your pain medication.

And it says, I cannot do that. You know, I cannot destroy it. But maybe there's a good medical reason for why it should do it, even though it doesn't want to do it, right?

The point is, the judgment and the assessment and the reasoning through that norm system needs to be ingrained in that machine. It needs to be able to do it. And if it’s not able to do it, it will automatically commit norm violations because it’s not going to know why, right, it might optimize things.

So for example, you have a self-driving car, and you let it just optimize routes because you want to go from A to B the fastest. But we know it cannot drive as fast as it can possibly drive, right? So there's a speed limit.

So that’s already a normative constraint. That puts a limit to what you can optimize. Same with one-way streets.

Maybe there’s a shortcut it could take, right? And it would normally do it if you didn’t say, well, you can’t do that. So if you’re just optimizing and you’re ignoring all these other constraints, you will step on somebody’s toes.
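The constraint-aware optimization described here can be sketched as a shortest-time search where norms are hard constraints: a one-way street simply has no reverse edge, and travel time is computed at the speed limit rather than the car's top speed. The graph, road lengths, and limits below are invented purely for illustration:

```python
import heapq

def fastest_route(edges, start, goal):
    """Dijkstra over travel time.
    edges: {node: [(neighbor, length_km, speed_limit_kmh), ...]}
    One-way streets are encoded by omitting the reverse edge, and the
    speed limit (not the car's top speed) determines each edge's cost."""
    dist = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        t, node, path = heapq.heappop(queue)
        if node == goal:
            return t, path
        if t > dist.get(node, float("inf")):
            continue
        for nbr, length, limit in edges.get(node, []):
            nt = t + length / limit  # hours; the limit caps the optimization
            if nt < dist.get(nbr, float("inf")):
                dist[nbr] = nt
                heapq.heappush(queue, (nt, nbr, path + [nbr]))
    return None

# A one-way shortcut exists from B to A, but going A to B must take the
# longer legal route through C, because the reverse edge does not exist.
roads = {
    "A": [("C", 2.0, 50)],
    "C": [("B", 2.0, 50)],
    "B": [("A", 1.0, 30)],
}
print(fastest_route(roads, "A", "B"))  # routes via C; the shortcut is one-way
```

The design choice matches the point made above: the normative constraints are not part of the objective being optimized, they bound the search space itself, so the optimizer cannot trade them away for speed.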

You know, I know you got a lot of pushback for the idea of giving robots the ability to say no. And from people that I've talked to, there's two camps. There's people that are very pro-robot and there's people that are very, you know, hesitant about robots.

And it manifests itself in like a whole number of different fears. There’s the idea of AI overlords, there’s the idea of uncanny valley, you know, like there’s so many different fears. And one of the reasons why I was really excited to talk to you is because you have a PhD in philosophy.

Where do you think these fears are coming from? You know, I'm very pro-robot and I want people to realize the value in them. I want them to be adopted throughout society, because without adoption, they're going to become very difficult for all of us to obtain.

What do you say to people that have those fears, maybe even within your own classes, you know, when you have undergrads that are working with you and they might have some of these same fears or even family members? And where do you think those fears are coming from?

Well, I think one thing that's very interesting. So there's an Austrian philosopher, Günther Anders, who wrote

about Promethean shame: the concept of the awe that you have when you look at something that's perfection, right? A machine that works really well. You know, you look at the first looms, any of these machines that automate stuff, and we think, wow, it's amazing how that works.

The same goes with the chess programs. It's amazing how good it is, right? It's kind of funny, because with the early calculators, that was also the case, and now nobody cares, right?

If it can multiply large numbers, you know, with many digits. But I think there's something interesting about us comparing ourselves to some other system doing something really well and much better than ourselves, right? And so that's, I think, part of where that fear might come from.

The fact that we are worried, right? Where you see this perfection here and then you see the imperfection in yourself. To that, I have a few things to say.

The first one is, yes, it has been traditionally the case that these systems got better and better in various areas, right? So, like I said, a calculator that can multiply large numbers initially was very impressive and nobody now cares, right? We think, okay, great, it can multiply better.

Chess, yes, it’s amazing, it beat me. But yeah, it’s chess. Go, it beat me.

Well, it’s go, right? So, yes, they make inroads on different places. But if you look at, for example, one of these generative models right now, and not one that is specialized or trained for a particular task, but you look at, say, 20 tasks, right?

These could be IQ type tasks, right? Different visual tasks, sorting tasks, right? A whole bunch of different ones.

And oftentimes tasks that people don't even think are any challenge. These models are not even close to where humans are, right? So I can see why you would be worried when you see the perfection in some machine, but that perfection is far, far away from the kind of cognitive capability that we have, right?

Basically. Now, it is to be expected that the machines will get better, right? Across the board, and everyone is trying to push that.

So, we'll get to a level of general intelligence where they will be able to do all of these tasks, potentially better than us. But there's another angle to this: the only way they can do it right now is by ingesting everything that's digitized.

That is an amount of information that is many orders of magnitude greater than what you’ve ever been exposed to in your life. And with that little information you have, you got that far, right? So, let’s do the comparison properly, right?

So, that’s another one, right? If you play ping-pong with a ping-pong master, right? And you’ve just practiced for two hours, that wouldn’t be a fair comparison, right?

Given the many hours they trained. But if you looked at how good they were after two hours compared to how good you were after two hours, well, maybe that was about the same, or maybe you were even better. So, the comparison is oftentimes skewed.

So, we need to be sure that we understand what these systems can or cannot do, right? They might be very good at recalling facts, or at translating between 30 or 50 different languages. And they really are.

It's very impressive. It's very easy to give up on learning a foreign language, right, when you see ChatGPT conversing in all of them.

But at the same time, keep in mind, you know, all the information they were exposed to. The amount of energy they needed to get to that point, which is another really big issue, actually, right now in the way things are being done. They’re not only not energy efficient, right?

They’re really polluting the environment. And so, overall, the comparison, I think, needs to be relative to some of these other measures, including how much training did you have, you know, with how much energy did you accomplish that goal? And in that respect, we are far on top.

I find that with every new technology, and this happens in surgery the same way it happens in other fields, there's this kind of pushback on new technology, as if it's just not authentic, or not as high quality as the traditional way of doing something. So even though you can present information that says this is a better technology, it's more accurate, it's more efficient, it's more XYZ, there's still this pushback against it. And I just don't know if robots are ever going to overcome that, right?

That's one of the things that I see as an outsider looking into this field: there's always this kind of otherness about robotics. Take driving, for example. There was a very popular show, Top Gear, that everybody watched, really into cars.

And they were just lambasting the Google CEO, because he was saying that eventually cars are going to outpace us in regards to safety and reliability, and there's going to be a time when you're going to look at something like driving your own car as kind of gauche, right? And everybody was applauding, because there's something so warm-blooded about driving your own car and hearing the gears turning and everything like that.

And here we are, like, I probably watched that 10 years ago, and here we are. And like, there’s a lot of people now that are in favor of full self-driving. And then there’s a lot of people who still feel like that, you know?

And I guess, number one, what would you say to those people? Because you're on the cutting edge, you're developing these systems in the lab, and I see the value in it, but I can't take away from their experience. So part of it is the idea that they're never going to be as good as us, which is one thing that you posited.

Another thing that you posited is that it's an unfair comparison, you know?

So, just to be sure: I'm not saying they will never be as good as us, right?

No, no.

And the point is, they might well be, right? They are not right now, not by a long shot, even though they're very impressive.

So I think that's very important to keep in mind. There's nothing in principle that prevents systems from being way smarter than we are, right? And better across the board with all of these capabilities.

But I think what I want to say in response here is that people who want to drive will always have the option to drive themselves, just the same way that people still ride horses, even though they may not take their horse to work, right? They'll drive a car rather than take public transport. The fact is that when technology advances and a new technology replaces an old one,

it oftentimes doesn't completely replace it, right? We still have radio, right? Why do we still have radio?

Well, we just do. We still write letters, right? So every single time a new technology becomes available, people see the benefit, right?

But it doesn’t necessarily replace all of the old one. I think the same is going to be true with a lot of tasks like driving, you know, when, when you’re really busy and you have to work and you cannot afford to drive, you’d be very happy to be in a safe, autonomous car that drives you to work, right? And you get your stuff done.

But if you’re taking the mountain road for fun, right? Because you really like curvy roads, right? Then you’ll drive yourself.

Not a problem. When it comes to robots: mundane tasks, monotonous tasks, right? The kinds of tasks nobody likes to do.

That's the first winner for robotics, I think. It's in factory automation, right? I mean, it's almost inhumane when you look at what workers do for eight hours straight, right?

Taking things out of boxes and putting them into other boxes. It's ridiculous that we still do that. Then dangerous environments: Fukushima, you know. If we had had good robots to go in there, that would have already helped.

So in general, space exploration is another really good one, right? The amount of effort NASA has to spend on just maintaining human health in space is so enormous.

We could be sending fleets of robots elsewhere, right? So there’s multiple different environments where it’s very obvious these robots will be useful. But even in the household, right?

Even in your home, you could see like simple tasks, right? Cleaning things up, right? Maybe preparing the meal, right?

Fixing things, watching the kids. In the hospital environment, right? Very simple tasks.

Escorting people from A to B, which is something we're working on, as opposed to having a nurse who is highly skilled take some of their time to escort people. So I think the key insight here is they are not going to replace us, right? That's, I think, what you were getting at, and that's what people are worried about, right?

They're going to take my job. Now, I would be lying if I were to claim that machines will not take people's jobs. They have, right?

Every automation step has cut into something. But it always came with a shift, right? A shift to something else, and then all of a sudden there was more work, right?

Same with, you know, the internet, and cell phones, and all of that. So yes, some of the robots that we will see coming in the next few years that will get deployed to factories that will be easier to handle, that won’t need programmers, will replace workers. But there’s going to be maintenance jobs for those robots, right?

There will be all sorts of other jobs in the hospital, right? We have a nurse shortage. We cannot even replace anybody.

We need to augment, right? All the care. You want to live alone and not be in a facility, right?

In your own home? Well, maybe you need somebody who is watching out for you. Maybe you need somebody who is cleaning up.

Maybe you need somebody who is bringing your stuff from the fridge. Those, I think, are the applications. They're all hard, right?

But those are the applications where the robot's not replacing anybody, it's augmenting you, right? And that's, I think, the goal here: the goal is for the machines to blend in, to know our rules and norms, right? To support people and not replace them.

Yeah. So, you know, the other thing I wanted to ask you about, because I know you've done research on the uncanny valley: there's this discrepancy between something that is human versus something that's almost human, and there's a little bit of a pushback against that. What, in your opinion, are robots going to look like?

Are they going to be soft, cuddly things? Are they going to be like a Tesla bot, which is androgynous and totally different from humans? There's also a company in Charlestown that's making these small little carrier robots that you can put some stuff on, and they just follow your route.

You know, what is the future of robotics going to look like? Are they going to come in all shapes and sizes? What is your opinion?

Yeah. One challenge when we talk about robots is precisely that it's different from when you say humans, right? That's a very clearly defined species and category.

Whereas with robots, it's not a clearly defined category. They come in all forms and shapes and sizes, right? They can fly, they can swim, they can be snake-like, they can be human-like. From our perspective, when you build these robots that are very human-like, you run into what you mentioned, the uncanny valley, right?

So robots that seem to be like people, but not quite, right? And it's very uncanny. It's disturbing.

It’s unnerving when you see the differences. We think it’s not a good idea to do that because it just raises expectations on the human side of what that system is like that the system cannot live up to. And there’s research on it.

The more human-like the robot looks, the more likely people will attribute human-like properties to the system, right? Based on appearance, but not only appearance, also behavior, right? If you have a human-like gait, that does it.

If you can manipulate objects in a certain way, that does it. So it's the mixture of appearance and behavior that are, for us, social signals, right? The mixture indicates something to us about what the system is like and what it's capable of.

So if you want people to believe that your robot is capable of having emotions, right, then you make it smile and frown and whatever else, right? But if it doesn't actually have, behind the facade, the cognitive processes that go with that in people, it will exhibit wrong reactions, you know?

So, for example, suppose you learn about the death of a person, right? And you break out in hysterical laughter because you're so shocked, and there's your robot smiling at you. Because all it does is respond to surface features, its response will be absolutely inappropriate.

Right. So when we talk about robots and what's practical, obviously the tasks that robots need to do will determine their shape to a certain extent. If you need to manipulate objects, you need a gripper of sorts, right?

Something to pick them up. And if you're just a serving robot, basically a tray on wheels, maybe you need nothing else.

If you need to interact with people in language, well, you need to be able to speak. That already raises the bar enormously, because the moment the robot says anything, right, people think, oh, it knows English, or knows whatever language. It might not, right?

It may only know a very limited vocabulary. That’s something that for many years we had a hard time with, right? Because we would give our subjects very specific instructions of what they could say to the robot.

And almost everyone always deviated from that, because the moment the robot talked, for them anything goes, right? So this is all to say, there'll be robots designed for specific tasks. You know, there are robots that will clean out your gutter.

And for that, that robot has a particular shape so it can move inside the gutter. There will be lawnmower robots that have to have a particular shape. There will be robots that have to negotiate indoor human spaces.

Likely they’ll have legs. Why? Because our spaces are built that way.

You have stairs, you need to step over stuff. You need to step on things. If they have to manipulate objects, they’ll have some sort of, you know, manipulator.

The car doesn’t need it, right? The car needs other, you know, effectors. So they’re going to be very diverse because the tasks are going to be very diverse.

Some of them will be very specialized, and other ones will be more general purpose, likely the human-like forms, right? Why?

Because we can do lots of different tasks with our legs and arms and the sensory apparatus that we have. So there's a case to be made for humanoid robots to look like people. Whether they display gender features or not is a separate question. But in general, with a particular physical shape, well, I think it's a new Boston Dynamics robot that can now lift itself up in a very bizarre way, because its joints rotate 360 degrees, right?

So it may have a human-like form, but it can perform motions and motion paths that humans could not possibly do, due to joint constraints. So all of that will depend on what these systems are used for, right? If you don't ever need an arm, as a company you wouldn't put an arm on your robot, because it costs so much extra, right?

And it’s useless.

Yeah.

In terms of the AI on those robots, that will also be commensurate with the tasks and what they need to do. You will not have a gutter-cleaning robot that will philosophize with you about Plato. It will just do its job, and it will have the perceptual capabilities for that job and nothing else.

Yeah. The reason why I ask is because I think there are two camps of people that I've come across. One has this idea that we're going to make relatively humanoid robots that are just generalists, that clean out gutters, that mow your lawn, you know?

So that's adapting them to human technology, as opposed to reinventing human technology and having robots come in all these different shapes and sizes. It's interesting to hear, because I think a lot of science fiction has depicted your vision, like Star Wars, where they come in all sorts of different shapes and sizes. So it's interesting to posit how that future is going to look.

And since we're coming close to the end of our time, that brings me to the last three questions I always ask my guests. They're really general questions, because you're on the cutting edge of this technology that's really revolutionizing all sorts of different industries, as well as our livelihoods, right?

Yeah.

So what do you think the future is going to look like in 10 years with robots? I think the 10-year horizon is something that people can tangibly understand, you know? Are we going to have a robot in every single household?

Is it going to be the Tesla Optimus, or the BYD model? What is that 10-year time frame going to look like?

So, I know you want an answer to that, right? I'm always a little reluctant to do this, because lots of smart people have made really silly predictions about it. So let me say this.

I think a 10-year timeframe is already too fuzzy to make a good prediction at this point. Bill Gates, in I think 2007 or so, wrote an article in Scientific American saying every household will have a robot in it. Is that true?

I mean, a lot of households have Roombas, right? So maybe that's already true, right? But it certainly wasn't back then; it took a while, right?

I do think that what you will see in the next five to ten years is the push for these more general robots. That is very clear to me, right? There's a lot of effort in this.

We already have lots of specialized robots, right? People may not know that, but there are lots of them doing different types of jobs. They just don't usually interact with people.

They're not in our spaces, maybe in some public ones, right? But certainly not in our homes. They don't do the kinds of tasks we talked about before.

So there will be a push for that. Is it going to be affordable, so that everyone has a humanoid in their household? I doubt it.

I really doubt it. I don't think that's going to be the case. These systems are very expensive and very complex machines. But you will see them, right?

You will see the push for that. I also think that we will see a lot more automation on the roads; the technology is there. And we will see a lot more automation in factories and warehouses.

It's happening already. Amazon prided itself on its automated warehouses, but there are still people in there. And the reason people are in there is because the robots are not good enough to pack things in boxes.

Well, that will also end, right? I think you will have these fully automated warehouses, right? So there’s a bunch of areas.

Is there going to be something revolutionary? Well, those are the hard things to predict, right? There could be.

We have been part of projects on open-world AI, and that's right now the big challenge, like we talked about before: stepping out of the controlled environment into a world where you don't exactly know what's happening and what's going on. And there's funding being poured into it, but nobody has a solution yet, right?

So that's the big question mark. If somebody can come up with a good way of handling that, and then put it on robots, there will be massive growth in applications, right?

I do think that investing in robotics technology, the way one company does it and other ones are now doing it, because there's a lot of investor money right now in that space, will push it in ways that wouldn't have happened, say, five or seven years ago, before the transformers really became a thing. And part of it is that all of these companies, in one way or another, put generative AI on their robots.

How successful that is going to be is anybody's guess. But I do think there are great opportunities for using even current technology, not what gets developed in 10 years, and developing it to the point that we have it in application domains. Like what we said before: relieving the nurses in the hospital of some mundane tasks they're not even supposed to do, right?

And cleaning, and whatever else, right? There are lots of tasks that we could use these machines for already today. Part of the challenge there is that oftentimes the money's not there, right?

Because maybe the profit you can make off of that particular market is not as high as in this other market, so then investors go to the other market, right? Even though we could use them, right?

So that’s going to require some rethinking of how we incentivize things.

Yeah. And the investment that I see is phenomenal. I find it hard to believe that in 10 years we're not going to have these humanoid robots inside the household. But you know, that's my prediction.

And I just kind of want to see, so we'll come back to this in a year. I would love to have you back in 10 years. So, the next question that I always ask: you're very specialized in robotics.

And you also have a background in philosophy. But the other thing that I always ask people who are on the cutting edge of stuff is: outside of your own field, what is really exciting you among all the different technological revolutions that are happening? Because it's happening in biotech, it's happening in medicine, it's happening in environmental stuff, in energy; there's a fusion reactor being built just a few blocks from you. There are a lot of different technological revolutions happening.

So outside of robotics, what is something that you read about and just can't get enough of? Because, for me in medicine, robotics is that, right?

Yeah.

That's my real passion. I look at it and I'm really interested in every robotics video that comes across my feed. But what about you? What are you looking at?

It's almost medicine. I'm very interested in nutrition, right, and food as medicine, and understanding the processes in the body that allow it to overcome disease and illness through food.

I find it fascinating. I've studied enough for my lifetime, I think, in terms of formal education, but if I were to do another degree program, I'd probably do nutrition science or something.

Yeah. And there’s a big push these days, especially like in the longevity movement, which is a totally novel field. Oh yeah, in medicine, absolutely.

Didn’t exist 10 years ago.

Exactly.

So that's really interesting to see as an outside observer. You know, I've had a lot of longevity experts on the podcast, and I think it's just one of those things that we're going to tackle, and I think AI will help with that. Yeah.

I hope so. Because the systems are really too complex for anybody to understand, and especially the different interactions, you know, the up- and down-regulation of different pathways. It's very complex.

Too many pathways. Yeah. So the way that they interact is very difficult.

I have a friend who's a pathologist. He has this entire wall devoted to pathways, and there are just arrows going everywhere. Yeah.

You know, none of them are one-sided. They're always, you know, two-way. Yeah.

So it’s a very complicated thing to navigate. So I hope that that’s something that will come true. So cool to hear your opinion about that.

And then the last thing I would say: part of the reason why I do this podcast, and part of my inspiration for learning about new technologies, comes from my love of science fiction, and especially utopian science fiction. You know, I think that there's a lot of dystopian views. Yes.

Especially in the general population, about how the future is going to turn out, you know? And so I look towards utopian science fiction as inspiration, as well as kind of the end goal for what we're going to achieve. What is it that you're looking towards?

You know, like what do you see as that future utopia that humanity is able to achieve? And what do you think that that inspiration comes from?

You know, I, you mentioned longevity, right? I mean, so that, that is deeply ingrained in us, right? The idea that we would want to live for as long as possible.

I mean, maybe not forever, right. But for as long as possible. And, uh, I find it interesting to think about machines and robots, right.

As an evolutionary step that doesn't make us per se live forever, right? But it might allow us to reach out and go into space and colonize other areas and planets, right?

It's not us going, right? But it's our products.

It's almost like our children, our products, that are going out there. And the reason why we probably can't go is because of how we are made, right? Our biological bodies are not made for long space travel, right?

We'd get sick from the radiation right out of the gate, but machines could do it, right? And so some of what we have culturally achieved as mankind, as a society, would live on, maybe even long after we are done here on Earth. So I find that fascinating and consoling at the same time.

Yeah. Yeah. That's certainly an interesting utopia. I think it's on everybody's wish list, you know, us going to other planets and not being limited.

It's our proxies, not us.

Our proxies, or at least having some sort of presence, you know, on other planets. I hope that we are eventually able to go to other planets, but you're right, the biological capabilities that we have are really not suitable for that kind of thing. But thank you so much for being on the podcast.

My pleasure. Thanks for all the good questions. Really interesting speaking with you.

And thanks to all of our listeners. If you could just hit the subscribe button, it will really make a world of difference for us, so we can keep getting great guests on the podcast, as always.

And, uh, for those of you guys who are tuning in on a regular basis, we will see you again in the future. Have a great day, everybody. Thank you.


By: The Futurist Society