In this episode, Doctor Awesome interviews Doctor David Rosen, a professor at Northeastern University, about his work in artificial intelligence and robotics. They discuss the challenges of developing algorithms for thinking robots that can perform tasks efficiently and safely. Dr. Rosen explains the advancements in certifiable and verifiable systems and the growing research efforts in this field. Join us as Dr. Rosen underscores the importance of moving towards provably correct systems and the role of optimization techniques in this insightful conversation about cutting-edge progress in AI and robotics. Listen now and be part of the future!

The Future Of Thinking Robots – A Conversation With David Rosen

Hey, everybody. Welcome back to the Futurist Society, where, as always, we are talking in the present but about the future. I have with me today Doctor David Rosen, a professor at Northeastern University in the Department of Electrical Engineering and Mathematics. More importantly, he is doing a lot of interesting and amazing work in the fields of artificial intelligence, robotics, and machine learning, which are on everybody’s mind right now. Specifically, I wanted to talk with him about a program he’s heading up called NEURAL, which is doing really cool stuff in robotics. So, tell us a little bit about NEURAL and how you got into the position you’re in. I think you started out in math for a bit and now are doing a lot of cool stuff with robotics. Let’s learn a little bit about what NEURAL is doing for us.

What is NEURAL?

Okay, cool. Thanks very much for the invitation. NEURAL is the Northeastern University Robust Autonomy Laboratory, the research group that I run. The motivation is this: if you think about the kinds of robots we want to have in the future, these are going to be things that operate ubiquitously in society, outside the lab, doing useful stuff in the world. Many of the areas where autonomous systems have the most potential to do good involve robots controlling potentially dangerous or expensive machinery, like autonomous vehicles, aviation, or home robots operating close to people. These high-impact areas are also ones where poor decisions can be very costly. The real goal of my research group is to design perception, planning, and control systems for autonomous systems that achieve a level of safety and reliability that makes their deployment in these high-impact, safety-critical applications acceptable.

Yeah, I think safety is on everybody’s mind when they consider giving up some control to a robot or algorithm, especially with things around humans, like full self-driving cars or consumer robots. What strides have you made that people might be looking at right now?

Okay, yeah. So maybe by way of answering that question, I might start by explaining how our approach to research differs a bit from how a lot of robotics or AI work is commonly done and what we’re adding to the discussion. As you mentioned, my background is in mathematics, and that’s the lens we adopt when looking at problems in autonomy. From that perspective, the core technology we use as the basis for a lot of our work is optimization. If you think about wanting to have an artificially intelligent agent, what does that really mean? Roughly speaking, it means you want an agent that understands how to take the best course of action in a given set of circumstances. This could be finding the shortest path, minimizing cost, or other such goals. Anytime you’re talking about making a decision involving superlatives—best, fastest, etc.—you’re really describing an optimization problem.

What this means is that almost every problem in artificial intelligence can be naturally formulated as an optimization problem. From an engineering standpoint, much of the work involves formulating what that problem is supposed to look like. For example, if I’m trying to build an autonomous car, I would expect variables like fuel and speed to appear in the decisions it makes. But that’s really just a description of the problem. Once you’ve boiled it down to the problem you’re trying to solve, the next question is, how do you actually solve it? That’s essentially a mathematical question.
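To make the formulation step concrete, here is a toy sketch of the kind of problem Dr. Rosen describes: choosing a cruising speed that trades off fuel burn against travel time. The cost model and constants are hypothetical illustrations, not anything from the conversation.

```python
import numpy as np

def trip_cost(speed_mph, distance=100.0, fuel_coeff=0.01, time_value=1.0):
    """Hypothetical trip cost: fuel burn grows roughly quadratically with
    speed, while travel time (distance / speed) shrinks as speed rises."""
    fuel = fuel_coeff * speed_mph ** 2
    time = time_value * distance / speed_mph
    return fuel + time

# "Best" speed means the speed minimizing total cost: an optimization problem.
speeds = np.linspace(1.0, 80.0, 800)
best = speeds[np.argmin([trip_cost(v) for v in speeds])]
print(f"optimal cruising speed: about {best:.1f} mph")
```

Everything after the cost function is the "how do you solve it" question; here a brute-force grid search suffices, but for the high-dimensional problems robots face, that is exactly where the hard mathematics begins.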

Where our work starts to come in is looking at the properties of these optimization problems, especially those in what’s called physically embodied AI, like robotics. It turns out many of these optimization problems are fundamentally hard to solve in terms of computational theory. This is a significant barrier to implementing and deploying these systems because robots tend to be very lightweight platforms with limited computing power. They also need to operate in real-time, making decisions quickly without a lot of processing time.

So, there’s a tension between needing to solve these computationally hard problems and having to make quick decisions with limited computational resources. Traditionally, many classical robotic systems have used heuristics to approximate solutions, simplifying problems by, for instance, assuming linearity when the underlying system is not linear. This approach has allowed us to build robots that can operate, but it becomes difficult to predict under what conditions these systems will produce reliable behaviors, because the approximations introduce uncertainties.

Translating ethical decisions into a mathematical algorithm

Yeah, I feel like that’s such a complex problem. From a layperson’s perspective, when thinking about performance, the question is: performing better for what purpose? For instance, if the goal is to get from point A to point B as quickly as possible, that’s one algorithm. But if the goal is to get from point A to point B without causing harm, that’s a different algorithm. How do you juggle these different objectives? It reminds me of the trolley problem, where a trolley can either be diverted and hurt one person or continue on and hurt many. How do you translate such ethical decisions into a mathematical algorithm that a computer can understand?

Yeah. So, first, I should say that we’re looking at much lower-level problems than the moral reasoning problem here. We could, in principle, come up with a solution to such a problem, but the catch is that for the stuff we do, we’re primarily looking at how to solve a problem that is already presented to us. It’s not 100% clear how you necessarily…

Yeah. So I guess moral reasoning is different from safety, but I feel like they’re so close. As someone not in the field, I may not understand it as well as I should. Could you give me some examples of safety measures that might make people who are hesitant about robotics more comfortable with the idea?

Sure. I can give you some examples of what we’ve worked on in the past. One area we spend a lot of time on is robotic perception. As I mentioned before, we do a lot of work in embodied AI—problems related to physically embodied agents. Geometry is really prominent here because the whole point of having a robot is that it can interact with the physical world.

If you think about the problems you want robotics to solve, many involve planning, perception, control, and navigation, which require an accurate model of the environment to avoid, for instance, driving into walls. The spatial perception or mapping problem—collecting data from onboard sensors and reconstructing a geometrically accurate model of the world—falls into this very difficult problem class where there are no fast general-purpose algorithms for solving it.

Before much of the work my group has done, these problems were often solved using heuristics.


Heuristics are just trial and error, right? Or what do you mean by heuristics?

Sure. Yes. I can make that a bit more precise. As I mentioned, these are usually formulated as optimization problems. The sharp distinction between what was done before and what we’re doing now is that, prior to our work, these were solved using local search. This means, if you imagine an objective that describes how well a potential solution is doing, you can think of this as a large landscape of potential answers. Each of those points has a cost associated with it, measuring its goodness. You can imagine this as a large mountain range. Local optimization attempts to improve a starting point by making small adjustments.

Using the mountain range analogy, you’re starting at some random spot in the mountains and going downhill. Whatever local valley you end up in is where you stay. That’s local optimization. What we want to do with global optimization is find the best answer, which is the lowest point everywhere. Similar to the mountain range analogy, if you’re a hiker in one of these valleys, you have no idea where the actual lowest point is.

So, roughly speaking, what was done in the past was based on local optimization. This often works and has the advantage of being fast, using classic algorithms like gradient descent. These are well-known in fields like machine learning and can be run quickly, making them suitable for lightweight mobile platforms. The catch is that you can’t guarantee the quality of the solution. These spatial estimation problems have the property that if you find a local minimum instead of the global minimum, it can be completely nonsensical.
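The initialization dependence described above is easy to see on a one-dimensional "double well" landscape. The cost function below is a made-up illustration, not one of the lab's actual estimation objectives.

```python
import numpy as np

def f(x):
    # Hypothetical double-well landscape: two valleys, only one is global.
    return (x ** 2 - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x ** 2 - 1) + 0.3

def gradient_descent(x0, lr=0.01, steps=2000):
    """Plain local search: repeatedly step downhill from x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Where you end up depends on where you start.
from_right = gradient_descent(0.8)   # settles in the shallow right valley
from_left = gradient_descent(-0.8)   # settles in the deeper left valley

# Brute-force "global" search over the whole landscape, for comparison.
xs = np.linspace(-2, 2, 4001)
x_global = xs[np.argmin(f(xs))]
print(from_right, from_left, x_global)
```

Started from the right, gradient descent converges to the local valley near +1 even though the global minimum sits near -1; in one dimension a grid search rescues us, but that escape hatch does not scale to the high-dimensional problems robots solve.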

For example, if you’re in an autonomous car driving on the autobahn at 90 miles an hour, and the map you’re using looks like a crazy, Inception-esque hairball, which is what happens when it finds the wrong answer, you’ll have a very bad day. Our work focuses on solving these complex optimization problems quickly enough for real-time systems while ensuring a higher quality of solutions.

Yeah, so from what I’m hearing, and correct me if I’m wrong, a local system is based on everything around you right then and there. In contrast, a more global system can draw on past experiences and other information, using more data. Similar to how a child might not remember what their room looks like yesterday, but you and I can navigate our bedrooms with our eyes closed. Is that what you’re talking about, or is it more complex than that?

It could be something like, going back to the landscape example: if you were to ask me, “What’s the highest point around you can see right now?” you might say something like, in the city of Boston, maybe the top of the Hancock building. That would be a notion of a local maximum. Whereas the global solution to this problem would be the top of Mount Everest. So, maybe one way of saying this is that the methods commonly used right now are limited by the notion that you have to pick a starting point and then make local adjustments or improvements to your solution. The issue with this approach is that where you end up depends on where you start.

For instance, if you’re trying to find the tallest building locally, in Boston that might be the Hancock Tower. If you ask this question in Toronto, you might get the CN Tower. But if you’re asking what the globally highest point on Earth is, there’s only one: Mount Everest. That’s where we always want to end up. We don’t want to find these local suboptimal solutions; we want to find the global solutions. However, you can probably tell from that analogy that it’s a much harder problem. You’re talking about something that requires global knowledge of the state space.

Yeah, right, right. I hear that a lot. It seems like the promise of robotics has always been 20 years away. You see cartoons like The Jetsons suggesting it’s just around the corner, with automatic dog walkers and all this stuff. The promise of robots is something that has been so ingrained in our psyche. And the more I talk to people like yourself who are doing it on the ground, the more I realize how much more complex it is and how far we have to go. But then, by the same token, I see a robotic team sorting different items based on just a speech algorithm, separating apples, for example, and humanoid robots making the rounds on the internet. If you guys haven’t seen them, go check them out. Or Boston Dynamics’ Atlas humanoid robots that can traverse all these different environments. I feel like there’s a discrepancy between what I see that’s technologically available and, when I talk to you or someone like you, this idea of complexity that feels almost insurmountable. So, where do you see it from your standpoint? You are on the bleeding edge of robotics. Do you feel like this stuff is going to be here within our lifetimes? And if so, is it something we can reliably trust around our kids, something safe enough for all of us? How do you see it?

Yeah, so I think the issue you mentioned is really exactly the motivation behind a lot of the work we’re doing: a lot of these robots are built on heuristic techniques of some sort. That’s very encouraging, in the sense that we can demonstrate at least the inklings of the kinds of abilities we want to have. But then you ask, as you point out, “Okay, that’s great, but why don’t these things exist out in the world?” Oftentimes, what you find is that you get these very long-tail effects, which goes back to what I was mentioning at the opening: if these things really were operating outside the lab in the real world, the scale of that deployment is so vast that even really rare corner cases are going to be a problem. Especially when you’re talking about the control of safety-critical or life-critical systems.

You can’t take that chance.

And so, I think what a lot of robotic systems researchers, certainly myself, believe is that it’s these really long-tail, rare but pernicious failure modes that are the limitation preventing us from having ubiquitous, Jetsons-like deployment right now. The motivation behind much of the optimization and computational complexity work we’ve been doing is to produce decision-making control algorithms that don’t suffer from those long-tail failures.

In particular, the main distinction between our approach and what’s been done in the past is that we can actually prove we’re going to get the right answer, at least in certain cases. Moreover, when we get a solution from one of these methods, we don’t just get a solution; we get what’s called a computational certificate of correctness. This means not only do I have the answer, but I know I have the right answer, and I can prove it to someone else.
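A minimal sketch of what such a certificate can look like, using a classic toy problem rather than anything from Dr. Rosen's actual pipeline: minimizing a quadratic form over the unit sphere. A candidate solution x_hat with value lam = x_hat' Q x_hat is globally optimal exactly when Q - lam*I is positive semidefinite, and checking that is cheap even though the original problem is nonconvex.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q = (A + A.T) / 2  # a random symmetric cost matrix

def certify(Q, x_hat, tol=1e-8):
    """Certificate check for: minimize x' Q x subject to ||x|| = 1.
    x_hat is globally optimal iff Q - (x_hat' Q x_hat) * I is PSD."""
    lam = x_hat @ Q @ x_hat
    min_eig = np.linalg.eigvalsh(Q - lam * np.eye(len(x_hat))).min()
    return min_eig >= -tol

# Pretend some local solver handed us this candidate (here we cheat and
# construct the true optimum: the eigenvector of the smallest eigenvalue).
eigvals, eigvecs = np.linalg.eigh(Q)
x_good = eigvecs[:, 0]

# A plausible-looking but suboptimal unit vector.
x_bad = np.ones(5) / np.sqrt(5)

print(certify(Q, x_good), certify(Q, x_bad))
```

The point is the asymmetry: finding the optimum may be hard, but verifying a claimed optimum reduces to one eigenvalue computation, so the system can know, not merely hope, that its answer is right.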

So, if we go back to our autonomous driving scenario, the distinction is that with an existing heuristic-based approach, if an autonomous car gets the wrong answer, it doesn’t even know it has the wrong answer. It will happily execute a path planned against a nonsensical map and drive off course. Using these certifiable techniques, it may occasionally get a wrong answer under very adverse circumstances, but it will recognize that and take proactive steps, like asking for help, before making a terrible decision.

Introspective talk

Yeah. It’s almost like what they talk about with artificial general intelligence, the idea of not only having reason but also judgment. 

Yeah. So the term we’ve started to use around this, at least in my group, is “introspective talk”: the ability to actually understand that you’ve got the right answer or that you don’t, and to recognize when you might be about to make a mistake.

Interesting. Wow. That’s awesome that you’re doing that. I think it’s going to be really interesting to see. I feel like we’re in the early stages of artificial general intelligence. Who do you think is doing it best? Is it academics, like what you’re doing in your lab, or is it private industry?

That’s a very broad question. 

Here’s why I think I like your approach. First off, it’s because of the democratization. If you create a free model that everyone else can use, then a young kid interested in making an in-house robot might have the ability to catch up with some of these larger companies. From everything you’re saying, it seems like the moat is so deep that nobody’s ever going to be able to overcome it. If that’s the case, it’s going to be decades before this actually gets into our houses. Am I wrong about that?

Academics versus private industry

I would say that academia and industry have complementary roles; they’re better at different kinds of things. Industry really excels at anything that requires the scope and scale it can provide in terms of manpower and data collection.

Surely get it done faster.

Yeah. We’re seeing these super large machine learning models, the so-called foundation models, built on incredible amounts of data, often including large fractions of the internet. That’s logistically way beyond the capabilities of academic research organizations, and it’s great that industry can do it. What we’ve seen recently with large language models is that extremely large corpora can enable amazing things. But that’s just one aspect of building intelligent agents. One of the things I’m a little apprehensive about, and the motivation behind our work, is that while LLMs can do seemingly incredible things, they are also well known to be susceptible to hallucination. I think of this as another instance of safety and reliability. So, private industry can produce these incredibly large foundation models. But if you’re going to ask somebody to get inside an aircraft flown by one of these models, which is a total black box, would they feel comfortable? I personally wouldn’t.

So, I think academia has a big role to play. We’re not going to compete in terms of scope and scale, but there are many fundamental scientific questions that need to be addressed to understand how these systems work and how to make them safe and reliable. These questions don’t require large resources to answer. That’s the kind of stuff my group focuses on.

Yeah, that’s really, really interesting. Do you follow science fiction? Are you a fan? Which fictional artificial intelligence do you think is the most benign? What are you trying to achieve? Because I look at Isaac Asimov’s seminal works and his three laws, and I think they are so profound. I’m sure there’s a certain amount of elegance to the math you’re doing. What is the promised land that you’re trying to reach?

Okay. So maybe there are two answers to that question. In terms of is there a particular character in Sci-Fi that I especially identify with in terms of what I’m trying to do? I grew up watching a lot of Star Trek.

Yeah, me too.

So, I mean, I would definitely say that Commander Data is like…

Yeah, he’s absolutely one of the best characters. His whole grappling with what it means to be human was, I thought, the best of all the Star Trek story arcs. There are so many different things. But he’s an artificial intelligence with reasoning very similar to a human’s. We’re a long way from that right now, but is there anything you’re looking at where, if we can achieve it, it would be good for humanity today?

Designing autonomous systems

Oh, well, yeah. Like I said, the real motivation behind our work is to get these devices out into the world. The ability to design autonomous systems with guaranteed safety would immediately have tremendous social impact. To take one example, one of the application areas we’re especially excited about is autonomous driving. To give some concrete numbers, the NHTSA tracks this stuff all the time, and the last report I read from them indicated there are about 40,000 fatal car accidents in the US every year. If you look at accident investigations to determine the cause, unsurprisingly, it’s predominantly people just doing super dumb stuff, like driving drunk, driving…

Texting while driving, not paying attention.

Robots don’t get distracted or tired. So, we’re very interested in applying some of this technology in the near term to things like active driver safety assistance. Imagine having a vehicle built on top of these algorithms that deeply understands the underlying physical model and knows how to take appropriate and provably safe actions, given those models, and as a result, is effectively not crashable, at least up to the limits of physics. So, you’re no longer limited by a driver’s skill or lack thereof; you’re just limited by the physical capabilities of the underlying vehicle. If it is possible to avoid an accident, this system will find a way to do it. Not only that, but because this software is infinitely replicable, training a professional driver, which is done one at a time right now, would be scalable. In a large-scale deployment, like what we’re discussing here, as soon as one car learns how to do something, all the cars know how to do it. These systems will continuously improve over time.

And that’s just one example. There are also applications in using robots as scientific instruments. One of the reasons I was excited to come to Northeastern is that they have a very active field robotics program, and a lot of the work they do is targeted at scientific applications, such as environmental monitoring for climate change mitigation. Similar to the driving example, the spatial and temporal scales required for environmental monitoring to get the data needed to answer these pressing societal questions necessitate automation at a very large scale. Deploying large numbers of autonomous systems that are smart enough to understand what they need to do and where to get the data of scientific interest is a very open problem, but it looks exactly like the sort of problem amenable to the techniques we’re developing.

Okay, I just want to double-click on the full self-driving component for just a second, because I feel like, first off, Tesla just released their full self-driving beta. What do you think about that? What are your thoughts on that?

Okay, well, at the risk of potentially, hopefully not, alienating some of your viewers, I take a dim view of Tesla’s approach.

Because it’s a heuristic model, right?

Yeah. I mean, effectively, what Tesla’s doing is running a gigantic unsanctioned human subject experiment on everyone on the road, which I certainly never signed up for. I’m not a huge fan of it. Even calling it full self-driving is both intentionally misleading and, therefore, unethical. If you read the fine print, it is absolutely not full self-driving. Calling it that is a marketing gimmick, but it lulls people into a false sense of security because folks who may not be experts don’t realize the limitations of these existing systems and the fact that they are not yet safe in the lab.

Yeah. What about the auto taxis that are in Arizona? I forgot the name of them, but they pick you up from the Tucson airport and then they take you to your hotel. Have you seen that at all?

I’ve seen it, yeah.

Is that just because it’s a fixed route, so it’s basically like driving on rails? Who’s closest to achieving full self-driving?

So, okay, I’m probably a little biased because I used to work there, but I actually like Google’s approach to this. In part because they are, I think, being much more cautious and modest in both what they are attempting to achieve in the short term and in the claims they are making about what they are capable of. I think they are pursuing a more methodical, scientific approach, not trying to go too far, too fast, and thereby endangering people.

What are they claiming that’s different from Tesla? As someone who doesn’t follow this on a day-to-day basis, this is something you’re very close to. How can someone like me, who is not involved in your field, differentiate between one full self-driving system and another?

Well, I don’t know of anybody who actually has what I would call self-driving right now. Again, it’s because of these very difficult, long-tail scenarios. Most driving is not that interesting in the sense that, as you say, you’re on the road, driving forward, and staying in your lane, which is fine. But what makes driving difficult is the fact that occasionally weird stuff happens, and you have to avoid crashing.

So, things like entering a construction zone, for instance, when there’s roadwork being done and lane shifts are happening with cones and so on. You have to adapt to an unforeseen circumstance that may not look like anything you’ve seen before, because it’s new. Understanding how to make the correct decision when confronted with novel scenarios is something that principled reasoning algorithms do much better.

I don’t want to paint with too broad a brush, but the whole idea of machine learning is based on having a data set representative of what you’re going to see in the future. That’s great when you can do it, but one of the big problems in machine learning writ large is what’s called domain shift: suddenly something in the world changes, and your training data is no longer representative of what you’re seeing now. Unexpected changes in the environment happen all the time, especially in driving scenarios, and some classical ways of building these systems may not handle them well. Not only may they not handle them well, but, as I mentioned before, they may not even realize they’re not handling them well, which is when you can really get into trouble.

Social acceptance and broad-scale deployment

So is it really only going to be safe when we have artificial intelligence similar to, obviously not Commander Data, but something a couple of shades below that, such that it can identify things like humans? Are we close enough that it will happen within our lifetimes? Or are we going to keep waiting for artificial intelligence to be as close to us as possible before it’s accepted by everyone?

Okay. Yeah. In terms of social acceptance and broad-scale deployment, what I’m really aiming at is getting away from AI as a black box that you just have to hope will do the right thing. If I were going to draw an honest block diagram for most existing robotics or AI systems, there would be a giant box in the middle of that diagram labeled “hope,” and I find that very unsatisfying. What we’re attempting to do is take the guesswork out of this. We want to produce methods that you don’t have to take on faith, that are correct by construction, that provably do the right thing, at least within a restricted regime of operation that you can quantify, so that you know if you’re about to enter an unsafe state.

We’re looking at that from the perspective of optimization techniques, but similar kinds of approaches can be applied to machine learning systems as well, which is another area we’re very interested in. For example, if you train some deep net model, how do you know it’s going to do the right thing when presented with data of a certain type? Can you say anything about the reliability of that system? Verification is one term that’s often used to describe this. One of the applications we’re very interested in is applying the technology we’re building to the verification of learned perceptual models and learned controllers: being able to say that, at least within the following set of circumstances, I can guarantee this model will operate safely.

Predicting the future

That’s awesome. I can see what you’re talking about. I just always want this stuff to happen very soon, and the more you talk about it, the more I realize it’s not as close as I once thought. Maybe Tesla or Google or somebody will put out a heuristic model where the risk of something adverse happening is so incredibly low that they feel comfortable putting it out there. But the way you’re talking about it, the most logical way to go about this is for the computer to have some sort of judgment, as opposed to just following a set of orders derived by trial and error from an algorithm that you have no access to and no ability to inspect. So, I don’t know. You’re on the cutting edge. How far away do you feel we are from this stuff?

You know the pithy remark: “Predicting the future is hard.” So I hesitate to make any concrete predictions, because the space I’m describing is evolving really fast, which I’ve been very excited about. Algorithms of the sort I was describing didn’t exist even five or seven years ago; that’s about when these things started to emerge on the scene, and there’s been tremendous development in that space since. I think we’ve really hit an inflection point with respect to these certifiable or verifiable systems in just the past two or three years, in terms of the number of people who are, number one, excited about this, and number two, devoting a lot of serious research effort to it. I’m a little bit of an optimist myself in that sense, so I don’t think it would be unreasonable to predict that we might start to see a lot of these verifiable systems coming out in the next five years or so. Not that far off.

So there’s a component of the car itself being smart, smart enough to navigate all of these edge cases, but from what I understand there’s also the idea of the cars talking to each other, to make sure that, en masse, they’re able to prevent any sort of error that would harm somebody. Is that something that’s also being pursued at the level you’re at? I feel like when I watch I, Robot, everything is going in one path; it’s almost like an artery where all the blood vessels are lining up in the same way. Is that something all these different companies are looking into, or is that going to be a lot more difficult considering there are different companies involved?

Certainly within a company, companies are definitely aware that the ability to coordinate the actions of multiple autonomous systems is hugely beneficial. Autonomous vehicles are a good example because I mentioned construction zones earlier. You could imagine one car saying, “Oh, it turns out there’s construction on Broadway now,” and that data allows other cars to take it into account when they’re doing route planning and so on. So that’s definitely of interest and being actively worked on. Interestingly enough, it turns out you can cast that problem in the same sort of optimization-based framework we were discussing before. The catch is that you have to worry not just about whether you can solve the problem in the abstract, but also about things like communication constraints and data locality, because not all your agents may have access to the same data at the same time. At least if you’re talking about deployments of mobile devices, they may or may not be within Wi-Fi range; they may only be able to send messages infrequently, and those messages may have to be small because they’re being transmitted via satellite or cellular links. So it can be cast as the same problem, but with many more operational constraints around communication and data transfer.
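One of the simplest instances of this kind of limited-communication coordination is average consensus: each agent holds a local measurement and repeatedly exchanges values only with its immediate neighbors, yet the whole network converges to the global average. The ring topology and measurement values below are illustrative choices, not anything from the discussion.

```python
import numpy as np

# Five agents in a ring; each can talk only to its two neighbors.
values = np.array([2.0, 8.0, 4.0, 10.0, 6.0])  # local measurements
n = len(values)

# Doubly stochastic mixing weights: equal weight on self and both neighbors.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0

x = values.copy()
for _ in range(200):   # each round is one exchange with immediate neighbors
    x = W @ x          # no agent ever sees the full data set

print(x)               # every agent approaches the global mean, 6.0
```

Each message here is a single number sent to adjacent agents only, which is what makes schemes like this attractive when bandwidth is scarce and agents drop in and out of range.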

Scientific computation

One of the things that you had mentioned when we were talking before we started the podcast is there are some fundamental principles that someone like yourself would want to know to get into this field. I feel like it’s such an interesting field. You’re saying there’s a lot more investment and interest in robotics in general, which is something that I definitely see. There are more newspaper articles about it, more people who are not in the field are talking about it. If somebody was a young kid that wanted to get into just the ideas that you’re presenting here, I think you had said data theory was one of them, like linear algebra. What are the fundamentals that that kid should learn?

Okay. Yeah. So I can speak maybe sort of on a broad, high level first, and then maybe a little more specifically about some of our stuff.

So, for anybody who’s listening who wants to get into scientific computation, or anything to do with computing on data at all, you have got to know linear algebra. Linear algebra is absolutely the foundation of all scientific computation, and numerical linear algebra in particular. I always want to call that out, because there’s the linear algebra that you might study as an undergraduate, which is all about linear systems and how to solve linear equations and so on. Numerical linear algebra is really the study of, “Okay, I want to solve a linear system, but I want to actually do this using computers, which have finite memory, and where computation time in terms of number of operations is also a consideration because we want to do stuff fast.” So numerical linear algebra is the study not just of linear algebra in the abstract, but specifically of how you implement linear algebra operations on finite-memory computers. That’s a whole deeper level of study, but it’s absolutely the foundation. Pretty much all of applied mathematics is really just a giant machine for taking broad classes of problems and reducing them to sequences of linear algebra problems, because those are basically the only kinds of general problems that we really know how to solve. So that’s definitely the foundation. And then if you want to get more into data science or AI-type stuff, as I mentioned before, most of the problems that you’re going to be tackling, whether it be training neural networks or trying to come up with algorithms that make good decisions, can be formulated as optimization problems. And so really, the core of almost all scientific computation is a combination of numerical linear algebra and optimization.
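As a small illustration of that pipeline (a sketch added here, not something from the episode): a least-squares line fit is an optimization problem that reduces directly to a numerical linear algebra solve.

```python
import numpy as np

# Fit y ~ a*x + b by least squares: an optimization problem
# (minimize the sum of squared errors) that reduces to a
# numerical linear algebra solve.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])  # exactly y = 2x + 1

A = np.column_stack([x, np.ones_like(x)])  # design matrix [x | 1]
# np.linalg.lstsq minimizes ||A @ p - y||^2 using a numerically
# stable factorization -- the "finite-memory computer" part.
p, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = p
print(a, b)  # approximately 2.0 and 1.0
```

The optimization problem (minimize a sum of squares) never appears explicitly in the code; it has been reduced to a matrix factorization, which is the reduction-to-linear-algebra pattern described above.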

Getting started with robotics

Interesting. I just feel like even for someone like myself to understand what is the basis for how these things are working, it would give me some sort of idea for when I would feel comfortable with bringing them into my household. Like, for example, a consumer robot that would be able to fold my laundry. Like, I want to know where we’re at with that so that it’s safe enough to have around my kids. Right. And just maybe, I don’t know, like, learning a little bit of, kind of like where we were at with the topics that you mentioned. Maybe somebody that might be interested might be able to make the decision a little bit better. But I feel like those are two different things. Like, what you’re talking about, like, the actual nuts and bolts of how you’re building it. Is there any value in learning them just as, like, a layperson when we’re talking about some of these more general concepts?

Totally. One of the things I tell my students is that you can never over-invest in the fundamentals. And when people ask me, “Suppose that I study math, what can I do with that?” Usually, I answer that literally whatever you want.

Yeah, that’s what I heard, too.

You know, I actually began life originally as a physicist and subsequently I switched to math. But, like, I never set out with the intention of doing robotics. I kind of just fell into it as a hobby.  But I was very well positioned to do it when I actually decided to change my professional focus. Because what math really teaches you how to do is it teaches you how to think rigorously. Like, to be able to understand, okay, when I’m looking at a problem, what is the structure of that problem that’s important for understanding how to solve it? Like, how do I separate out the really core ideas involved in this problem from the details that don’t really matter that much? How do I take an ill-formed sort of natural language idea and sharpen it into a statement that is precise so that I can then make forward progress on it?

I’ve been recruiting in academia and, before that, for a couple of years in industry. When we were trying to hire people, that was the number one skill that we were looking for: can you make your ideas precise so that you can actually start to attack them in a principled and deliberate way, one that enables you to make actual forward progress when trying to solve difficult technical problems?

Yeah, that’s interesting and certainly important. I feel like math is something that is the substructure of everything that we have and so I could see somebody making a transition into another technical field. But specifically robotics, I think you had talked about you were doing first robotics competitions for a while and then made the transition because you enjoyed it so much.

I actually started in high school. As it turns out, I was a giant nerd. I had kind of taken all the available AP classes by my senior year, so I had a free period. And my buddy just said, “Hey, I’m starting this robotics team, join.” I said, “Sure, it sounds fun.”

And I just kind of kept doing that when I got to college, just as a hobby, on the side with my student groups. Kind of just kept doing it throughout college. And by the time I got into graduate school, what I realized was that a lot of the math that I had been studying in “my professional capacity” during that time actually mapped really well onto a lot of the engineering and sort of physical problems that arise in robotics applications. And it turns out that a lot of traditional engineering programs, like mechanical engineering or electrical engineering, don’t necessarily cover some of the mathematics that is really useful, I would say really required, for modeling robotic problems. 

So a good example of this would be something like, you know, it turns out there are mathematical objects called manifolds that basically you can think of as being smooth objects in a sense. Right? So think of, like, a sphere. It turns out that kind of language, the language of smooth manifolds, is really useful for modeling geometric properties. So things like, you know, I want to be able to model the position and orientation of the robot in space. Like, that’s like, the most basic question you’d want to ask about a physical, embodied agent is literally, “Where am I in space?” And it turns out that the actual set of answers that you could have to that question is actually a smooth manifold.

I use this example, usually on the first day of class. I’ve got a robot. Let’s say it’s in the plane, so it’s driving around on level ground, and I might be interested in asking, “Okay, where is that robot right now, and where is it facing?” It’s the facing part of that question that makes this interesting. Think back to high school trigonometry and ask, “Okay, if I wanted to model the orientations of a robot, what does that look like?” You might say, “Well, let’s model it as a rotation angle; an angle in the plane would give me a notion of heading.” If you do that, you are trying to assign a real number to model this orientation angle. And so you might say, “The space of answers is just the real line, right? It’s the real number line.” That’s actually not true, and it’s a common mistake. The way to see that is to think about turning a knob. Maybe zero degrees is here, and as I turn this knob up, I start rotating my robot. That’s fine until I get to 360 degrees, because in the plane, what that means is the robot gets back to where it started. But on the number line, I started here and I’ve been moving in the same direction. So if I try to do that mapping, I end up with two points that on the number line are distinct, but that in terms of robot heading should be the same. What that tells you is what this configuration space actually looks like: intuitively, I need to take those two points and glue them together so that they line up. And if I do that, I get a circle. So the space of orientations in the plane is actually a circle, which, if you think back to when you studied trigonometry, is the very first thing you study on day one: the unit circle. And that’s not a coincidence, because that actually is what the space of orientations looks like.
So I mentioned this because, you know, talking about manifolds and stuff may seem like a kind of very esoteric type of study, but it actually turns out to be incredibly useful because these actually are the objects in question that underpin the problems that you’re trying to solve. And so if you’re not aware of how to handle them in the right way, you’re not going to be able to write algorithms that operate reliably and produce the correct answers. 
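To see why treating headings as points on the real line, rather than on the circle, causes real algorithmic trouble, here is a small sketch (added as an illustration, not from the episode): computing a heading error naively versus wrapping it onto the circle.

```python
import math

def naive_heading_error(target, current):
    # Treats headings as points on the real line -- the wrong model.
    return target - current

def wrapped_heading_error(target, current):
    # Treats headings as points on the circle: atan2 of the
    # sine and cosine wraps the difference into (-pi, pi].
    return math.atan2(math.sin(target - current),
                      math.cos(target - current))

# Robot facing 350 degrees, target heading 10 degrees:
# the robot is really only 20 degrees away from its goal.
current = math.radians(350.0)
target = math.radians(10.0)
print(math.degrees(naive_heading_error(target, current)))    # about -340
print(math.degrees(wrapped_heading_error(target, current)))  # about 20
```

A controller driven by the naive error would spin the robot almost a full turn the wrong way; respecting the circle’s topology gives the short 20-degree correction instead.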

And maybe the last thing I’ll say about this is that the mathematicians who originally developed this weren’t just sitting around abstrusely cooking up stuff to study… I mean, some of them were. But one of the things that I think is kind of interesting about geometry is that although it can be a very abstract field, it was originally developed to solve very, very practical questions. The early Riemannian geometers were actually trying to understand how you can make maps that are accurate, because you’re trying to represent a three-dimensional world, which sits on a sphere, on a flat piece of paper. So, what are the kinds of maps that one could write down? What limitations might exist in trying to take a curved piece of space and represent it on a flat surface? And so even though these can become somewhat abstract concepts, they’re also the natural objects that one needs to use to solve these very hard-nosed engineering questions.

That sounds super complex; even just hearing the whole hearkening back to high school, I think you were taking some much more advanced classes than I was. But I’ve noticed that even though it seems very complex, people are interested in it. I was a judge for FIRST Robotics for a number of years, and each year there were more and more people joining and more kids interested in robotics, period. Have you seen that firsthand in your classes? Is that accurate for people who are getting into the higher levels of academics?

Oh, yeah. Both interest in robotics generally and interest in FIRST specifically are definitely increasing, for sure. I agree with you. I advised some of the FIRST Robotics teams when I was in college for a period of time. The advancement in terms of (i) the capabilities and (ii) the things the teams were being asked to do, between when I was in high school and when I started mentoring or advising some of these groups, even over that span of four or five years, it had gone way up. It’s really exciting, actually. And as you say, students are coming in with a lot more exposure to these things. And now, with the development of things like the Robot Operating System and so on, a lot of the tool chains have evolved to a point that actually…

Robot operating system

What is the Robot Operating System? Is it just like a DOS, you know?

It’s a kind of middleware layer that allows you to construct robotic systems that accomplish whatever you want by stringing together individual processing modules. The programming paradigm is that you’re building a graph of interconnected nodes that perform certain kinds of operations. So there are robotic systems that generate certain kinds of behaviors by taking these off-the-shelf individual processing nodes that do basic tasks and wiring them up in the correct way to get the behavior that you want.
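As a rough sketch of that paradigm, here is a minimal publish/subscribe graph in plain Python. (This is not the actual ROS API; the `Bus` class, the topic name, and the node function are invented purely to illustrate the idea of wiring processing nodes together through topics.)

```python
# Minimal publish/subscribe sketch of the "graph of nodes" idea.
from collections import defaultdict

class Bus:
    """Toy message bus standing in for the middleware layer."""
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)
    def publish(self, topic, msg):
        for cb in self.subs[topic]:
            cb(msg)

bus = Bus()
log = []

# Two small "nodes" wired together through a topic:
# a sensor node publishes range readings; a planner node reacts.
def planner_node(range_m):
    log.append("stop" if range_m < 0.5 else "go")

bus.subscribe("laser/range", planner_node)
bus.publish("laser/range", 2.0)   # clear ahead -> "go"
bus.publish("laser/range", 0.2)   # obstacle    -> "stop"
print(log)  # ['go', 'stop']
```

The point of the real middleware is that the planner node never needs to know which sensor produced the message; the graph wiring, not the node code, determines the system’s behavior.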

I was talking with another artificial intelligence thought leader on the podcast the other day, and they were saying that what’s really interesting about what’s going to happen with artificial intelligence is that a lot of these much more complicated coding processes will become off-the-shelf types of apps, or what have you, so that somebody who might not be as technically knowledgeable might be able to do something with them. Is that accurate? Like, what would have been done in six months with a team of 30 is now being done in two weeks with a team of three. Is that accurate or not?

Yeah, absolutely. One of the big contributors to that ease of involvement is the proliferation of software design and implementation libraries that have emerged to support this kind of activity. Actually, maybe I should shout out the Open Source Robotics Foundation, which has been a major driver in democratizing access to and promoting the use of these kinds of toolkits, making it easier for people to get involved and significantly reducing the barrier to entry.

Consumer robotics

That’s something that I feel will only accelerate breakthroughs. Right? I mean, I hope that consumer robotics will come a bit sooner rather than decades from now. I want to see fully self-driving cars and humanoid robots that can do my laundry within the next 5 to 10 years, just as you mentioned. But I also have a particularly high level of comfort with robotics. I have a lot of trust, you know, whether it’s inside the black box or outside of it; that’s my comfort level. What’s your comfort level? Because you’re in the industry itself. You have a Roomba at home, right? Yeah.

I would say the main distinction between what we’re targeting in my group and the broader consumer space you’re talking about is the stakes involved. The worst thing that could happen with consumer robotics is, you know, your Roomba might go over a part of the carpet that it shouldn’t.

Right, or bump into something and it shouldn’t.

Yeah, which is, you know, annoying, but it’s not catastrophic. My strong suspicion is that as we see these robotic agents start to enter the consumer market, they’ll initially tackle tasks that would otherwise be tedious, where, frankly, if the robot does something less than intelligent, it won’t be a huge deal. So it’s okay to apply perhaps-not-completely-reliable autonomy in those areas. Where we might initially see a boundary is with certain kinds of home assistance. For example, you mentioned having a robot do your dishes earlier.

Oh, my God. Laundry specifically would be a game-changer. I hate doing laundry. But anyway, go ahead.

To stick with the dishes: as you can imagine, if a robot makes a mistake with the dishes, you’ll open the dishwasher when you come home and it’s just a pile of broken pieces.

Right, yeah, yeah. No, I think the stakes are certainly important. I like how the Roomba has low stakes. Honestly, a lot of those low-stakes tasks, like a robot watering my plants, I would be happy with. You know, I would pay money for that. But at the same time, I just don’t know how far away they are in the pipeline right now. I see a lot of these humanoid robots, and I’m like, wow, they can walk, they can manipulate objects, I feel like everyone’s talking about them, and I want them to happen now. But that’s just something that, as a layperson, I’m excited about. I was wondering how you see it coming from your angle and how long that pipeline truly is.

Yeah, I think it probably breaks down based on what the right application is for the existing state of technology, and how you can use the experience gained from these near-term applications to improve the capabilities of these agents and extend them into the future. Things like Roombas are a good example where you can start deploying systems that are doing useful things right now. Another application along those lines, where I think we might actually see things sooner rather than later, would be aerial delivery. In fact, there’s one company right now doing amazing work called Zipline. They’re using fixed-wing drones for long-range aerial delivery of things like medical supplies in areas with poor road infrastructure, supplies that otherwise wouldn’t be transportable over long distances without road networks because they have to be refrigerated. They’re flying on a regular cadence right now, covering hundreds of thousands of kilometers for medical delivery.

Along the same vein, I’ve seen similar technology from Doordash and Uber with food delivery. What’s your comfort level with that? Would that be something you would use? Because it’s low stakes, you know. God forbid something gets lost, it’s not the end of the world, you just order another one. Is that something you have used or will use?

Personally, I would use it if it were available. I suspect several companies are doing this right now. I was in Berkeley a couple of years ago, and I saw them actually deploying these, though I don’t remember the name of the company.

Yeah, I think Doordash or one of the big delivery organizations is just rolling it out. I know Papa John’s has been talking about it for a while.

But to your point, when I was out, I saw these all over the city. They were like little four or six-wheel truck-looking things, about the size of a medium-sized cooler, with big flags on them to make them visible. You’d see them puttering along the sidewalks.

That’s awesome. I can’t wait to see that in Kenmore Square. I mean, I feel like if anybody should have it other than, you know, maybe Berkeley or something like that, it would be someplace around here, because we’re building the future just a few blocks away. But for whatever reason, we still haven’t had that. I’m not sure why.

New England winters are challenging.

Yeah, that’s probably true; it’s a totally different story in California. What about fully automated self-driving? When do you think that will be ready? I feel like we’ve talked a lot about how the models themselves don’t lend themselves to something you’re comfortable with.

Yeah, so that’s the sort of thing that I would tend to be a lot more skeptical of. In fact, ironically enough, I would trust something in the aviation space well before I would trust something for autonomous driving, because the environments you’re operating in are so much less complex. Driving is actually quite a complicated problem. If you’re flying a plane, you know how the plane works, and there are no pedestrians throwing themselves in front of your aircraft. The amount of stuff you have to keep track of, the situational awareness required to do self-driving well, and the kinds of environmental changes that can happen from one moment to the next are so much richer and so much more complicated than in applications like aviation or space technology. There, you’re trying to solve difficult physical problems, but in terms of understanding what the environment and the vehicle are doing, it’s a much simpler modeling challenge than having to keep track of what’s going on around you on a busy city street, like in New York City.

Autonomous technologies

Yeah, interesting. Do you think that any of these autonomous technologies will influence the capabilities of telerobotics? Like, will the autonomous stuff make telerobotics better? For example, people who are missing limbs or even exosuits that you can use such that they feel natural. From my perspective right now, when we talk about robots in surgery, it’s really telerobotics. It’s like you go in, you have this joystick, and you’re doing something through the lens that gives you stereoscopic vision, and it’s really just a video game. But it takes a lot longer to set up. It’s still not natural, you know. Will one breakthrough help the other? Is technological progress in one thing going to affect the other thing?

I think so, yeah. There’s definitely a direct line between a lot of what we’ve been talking about and advances in the capabilities of teleoperated agents. Actually, a great example of this that’s been going on for a decade already is the Mars rovers. I don’t remember exactly what the time delay is between Earth and Mars, but I think it’s something on the order of eight minutes. They have these Martian rovers driving around doing a bunch of science on Mars, but it’s actually not possible to directly teleoperate them because of the delay in transmission and reception.
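For a sense of scale, here is a quick back-of-the-envelope calculation (an illustration added here, not from the episode; the distances are approximate published figures for Mars’s closest, average, and farthest separation from Earth, so the one-way delay actually varies over the orbital cycle):

```python
# Back-of-the-envelope one-way light delay between Earth and Mars.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km):
    return distance_km / C_KM_PER_S / 60.0

print(one_way_delay_minutes(54.6e6))   # closest approach: ~3 minutes
print(one_way_delay_minutes(225.0e6))  # average distance: ~12.5 minutes
print(one_way_delay_minutes(401.0e6))  # farthest:         ~22 minutes
```

With even the best-case round trip exceeding six minutes, joystick-style teleoperation is hopeless, which is exactly why the onboard autonomy described next matters.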

So one of the big themes that the Jet Propulsion Laboratory has been working on over the last several decades is how to make these systems more capable by not having humans issue super-low-level individual motor commands, but instead building up the reliability and the capability for the robot to make its own local decisions on-site, letting the human do more supervisory-type tasks, like being able to say, “I want you to look at that rock over there,” versus giving the exact path.

Right, yeah, that’s obviously ideal. I just don’t know how much of a crossover there is. I could see something in that application, but when I think about robots, for whatever reason, I have to liken it back to my own experience, which is the surgical robotics component, which to me doesn’t seem robotic at all. It just seems like a video game that they’re presenting to us and using this word as a marketing gimmick, you know?

But I think you could imagine something like, you know, suppose that you are doing surgery… I haven’t tried this myself, but I imagine that probably one thing that’s difficult about that is maintaining situational awareness. It sounds like you’re describing it like you’re looking through kind of binocular headsets, so you don’t have the ability to easily look around…

So the thing about positional awareness in the body is that, in fact, I think that’s one of the benefits of telerobotics, because the instruments themselves are positioned in such a way that they’re not movable. The actual arms of the robotic apparatus, the instruments, are at a known distance, and they can be registered against the three-dimensional CAT scan or the MRI so that you know exactly where you are from a positional standpoint, because the system always knows what space it’s in. Does that make sense? There are markers that give you a three-dimensional spatial location. So for us, I think that’s actually one of the benefits. Even though I’m looking through the goggles, as soon as I look up, I know exactly where I am, because it’s displayed against a three-dimensional representation made from imaging that was done before the patient was asleep.

You have to sort of take your head out of the headset and then look elsewhere.

Yeah, right now. I mean, you know, there are people that are toying around with a lot of these AR setups, like, you know, the Apple Vision Pro and things like that, which, you know, I think there was a lot of buzz about Google Glass when they came out, you know, being like the next big thing.

That actually might be a good example. AR would be, I would imagine, a much more natural interface than having a fixed, restricted field of vision. But in order to actually implement an AR interface like that, you would have to render a composite or synthesized view from the sensor data that your robot is collecting.


This is exactly the problem. For instance, if I want to use that view as a basis for making decisions, I really want to know that I’ve got the right reconstruction, so that I’m cutting where I think I am, not somewhere else.

Yeah, a lot of interesting stuff. I think that, in general, technology is increasing at such a rate that it’s nice to just be a spectator. And I think that makes a lot of people very scared and pessimistic about the future because they don’t understand what’s coming down the pipeline. And people naturally fear what it is that they don’t understand. But on the same token, I do think that, all things being equal, we tend to go towards progress. Right? I think we’re much more likely to end up in a Star Trek scenario than, you know, a Mad Max scenario. At least that’s just my understanding of it. I mean, I don’t feel like artificial intelligence is this boogeyman.

The last thing I want to talk to you about is your view of artificial general intelligence. Everybody has this idea that it’s an existential threat, that it’s something that, you know, they’re really scared about. How do you feel about it? And if you feel one way or another, what do you tell your family members about it?

Don’t fear AI systems

Yes. At my core, I’m innately an optimist, which is why I do what I do. So I don’t think these kinds of AI systems are things we should necessarily be afraid of. Certainly not. There’s tremendous opportunity for them to do really, really great stuff. In terms of the high-level vision, what makes me really excited about the stuff I’m working on is what these things are ultimately going to do. One way that we can use them is as assistive devices. Imagine a car that doesn’t let you crash, or assistive surgical robots that don’t let you make mistakes. It’s not hard to imagine how that could be tremendously valuable.


On the more cautionary side, it’s not that AGI is going to become sentient and try to destroy humans or something.

Yeah. Like Terminator.

Yeah. I don’t think that’s a problem. I think the problem is much more that if we try to deploy AI systems before they are sufficiently safe and mature, they may make bad decisions, not out of malice but out of incompetence. And so that’s the kind of thing that I think is the concern. So that’s not a reason we shouldn’t do it. I think the future is pretty positive. We’re just not quite there yet.

Yeah, I appreciate that. I think one of the things that I see more than anything else is that people don’t realize how incompetent these machines are right now. I look at a lot of the stuff that’s happening, and my sisters, or some other person who’s not following things, think we’re far more advanced than we are. I look at the Optimus robot from Tesla, and I’m like, “That has a decade to go before it’s going to be at my house.” But it’s nice to hear that that’s how you feel, too, because that’s just my own opinion, and it’s nice to have it validated by somebody like yourself who’s on the cutting edge. But, man, this was so interesting. I feel like we really got into some deep dives, and my understanding of robotics has been expanded exponentially just by talking to you. So I really appreciate you coming and speaking with us.

Inspiration to do what you do

We are getting close to the end of our time, though, and I did want to ask you some of the general things that I ask all of my guests, everything from where they gain inspiration to other questions about the future. Specifically, since we talked about our mutual love for science fiction: other than Star Trek, what are some other optimistic views of the future that may have inspired you to do what you’re doing right now? A lot of people who come onto the show don’t even mention science fiction, so it’s nice to hear from somebody like yourself who enjoys it. There’s so much dystopian science fiction, and what you mentioned with Star Trek, which is utopian science fiction, is something that I love and that I really want other people to get involved with, because if you can see the future as having all of this promise, it makes for a really different reframing of all the problems that we’re experiencing now. So tell me a little bit about where you get the inspiration to do what you do, and also some utopian science fiction that you hearken back to when you’re in the lab building the future.

Well, so in terms of, like, science fiction, I mean, Star Trek is definitely the GOAT, right?

Yeah. I mean, that vision of the future is something that we all kind of want. It’s heaven on earth. Like, no more money. You know, food is not an issue anymore. You know, we’re all allowed to pursue our own pursuits. And I think one of the best things about artificial intelligence is that it is something that can expand our lives rather than take things away. I think everybody’s so worried about the taking away, you know, but I don’t know. How do you feel about it?

I’m 100% in agreement with that. I think, more specifically with respect to robots, Asimov, of course, is the sort of obvious answer. These are things that are intentionally really trying to help us make our lives better. So I think those two are sort of the obvious things that come to mind. More broadly, beyond just wanting the utopia, of course, what are some other things that I personally kind of like? With respect to intersecting mathematics and robotics, I think the reason I really like that is because I just like math. I like studying mathematics for its own sake because I find it just incredibly beautiful. I think one of the reasons I like mathematics so much is that it is the only scientific discipline in which it’s possible to actually get access to, like, capital T, Truth. Everything else is contingent. Right?

And not only that, but it’s universally accessible, because all you need in order to do this is the ability to think rigorously and, like, a pencil to write something down. There’s something really fascinating about the fact that you can just sit in a room and think critically about what should be true given a logical setup and actually discover these incredibly deep, very beautiful statements about the laws that govern the behavior of the universe. Which is kind of incredible when you think about it. But not only that, the connection between that and robotics is that the thing that makes a robot a robot is the ability to make independent decisions, and those are necessarily expressed at the level of algorithms. So to me, a robot is really a physical embodiment of mathematics. If you sit down in a room and come up with a new algorithm, you can put that into a machine, which is something inanimate, and make it seemingly come to life. It’s the closest thing to magic that I think I’ve come across.

Yeah, that’s amazing. I used to like math a lot in high school, but as you go through medical school, you use it less and less, and it’s something that I don’t really have access to the same way that you do. I feel like it provides a framework of understanding for your daily life, and that’s really nice to hear. You know, when I saw “A Beautiful Mind” and this idea of the Nash equilibrium, where everybody’s kind of dancing and doing their own thing, I was like, “Oh, my God, that’s so elegant. It’s so beautiful.” And as somebody who doesn’t use math on a daily basis, it’s something that I think everybody kind of understands. Right? I think what is really interesting to hear, coming from your perspective, is that this is something that really frames your whole understanding. And it’s not only something that needs to be understood by math majors; that same kind of truth that you’re talking about holds up whether it’s somebody like me looking at it or somebody that’s a layperson looking at it. So that’s cool. That’s cool that you’re on the forefront of that, especially with something that’s going to affect our lives the way that robots are.

So the follow-up question to that is, where do you see this industry in ten years? Are we going to have full self-driving? Is it going to be ingrained in our society the way that we all think it is? Are we going to have virtual assistants that we talk to more than we talk to our spouses? What do you think it’s going to look like in ten years?

After ten years

Yes, I mean, like I said, predicting the future is hard. We're really at an inflection point in the deployment of robotic technology because we're actually starting to see these things move out of the lab into commercially available consumer products. The Roomba is one example of this; a second example that is, I think, a little bit more technically elaborate would be Skydio, for instance.

What is that?

Well, Skydio is a consumer drone company. You can actually buy drones now that can very, very reliably fly around. I think these were originally sold for photography, so if you want to have a drone follow you around and take videos of you doing sports outside, it's able to do that.

But from a technical standpoint, what's incredible about these is the level of sophistication and reliability of the onboard processing they've achieved. There are videos you can look at online of Skydio drones following mountain bikers through forests and stuff, where they have to avoid tree limbs. They're doing all this, you know, trying to get the shot of this guy.

Yeah, interesting.

And they're doing all this in real-time, which is an incredibly challenging technical problem to solve, both in terms of getting the flight software to work correctly and getting enough situational awareness to plan these maneuvers: getting the right shot while also avoiding tree branches, wires, thin structures, and things of that sort. It's an incredible technological achievement. So in terms of the level of sophistication in some of the systems being produced now, it's very, very impressive.

Yeah. I think drones in general are going to be much more ubiquitous than any other technology. I took my kid to this dinosaur amusement park in Atlanta the other day, and they had a drone show, which is something that didn't really exist a decade ago. The elaborate way those drones move through space: they were able to make pictures of dinosaurs and, like, bubbles and things like that. I was interested in drones for a while, and having seen them in their infancy, I know the amount of technological progress it took to produce something like that. And now those drone shows are everywhere. Plus the delivery capabilities that we talked about, and who knows what else is next? For just moving through space, I don't think there's another model that's as good. Even with the humanoid model, it's going to be years before they catch up to drones.

Yeah. I think, as a roboticist, what's really exciting about what's happening right now is that these are the first systems that have achieved a level of reliability and technical capability that actually makes them economically viable as mass-market consumer products. The earliest robots, the sort we were describing that have to make real onboard autonomous decisions, go back to the 1960s and 1970s. And we've just now reached a point where our algorithmic understanding and the computational hardware have advanced far enough that these are no longer just academic research projects, but are actually starting to get into the real world and do useful stuff in society.

What kind of robot do you want to see in the next ten years? What do you want, just to make your life better?

Oh, if I’m talking about a totally blue sky? I mean, I would love to have a robot in my house just to do my chores.

Yeah, of course. Everybody wants that. That’s low-hanging fruit.

Okay. So the thing that gets me, you know, you're talking about annoying tasks that I don't want to have to do: I don't want to mow my lawn. I just don't want to do it.

Yeah. I feel like that’s such low-hanging fruit for Roomba, you know? Like, other than the fact that it’s up and down, you know? I agree, mowing the lawn is really annoying. Yard work in general. God love anybody who’s doing that.

Last question, aside from AI, robotics, AR, and VR—all of these industries that you’ve had direct influence on—what is something else that’s coming down the technological pipeline that you just can’t get enough of? For me, it’s longevity. That’s something that I’m looking at because I feel like the idea of treating age as a disease and not an inevitability is something that’s revolutionary. But what about you? Like, what, other than the stuff that you’re working on day-to-day, like, what other technological breakthrough is so exciting to you?

Just in general?

Yeah, just in general. Blue sky.

So the stuff that I'm really excited about is things having to do with long-term sustainability and climate change, because that's clearly a pressing question. It's probably going to be one of the most important societal questions, if not the most important, over the next century or so.

Now, you know, honestly, I don’t know too much about that. If you’re following it, have we made any progress at all, or is it just an inevitability?

I think, given the social importance of it, if it's not currently a top-line priority, it's necessarily going to become one in the not-too-distant future, because we're starting to see the impact of it already in a lot of places. Like, I'm originally from California, and wildfire season is now here.

I know that’s crazy.

So, I mean, we’re seeing the effects of it right now, actually. I think you mentioned it.

Yes, same thing there.

Listen, I hear you, and I think that it’s going to be devastating, but, man, like, the winters here in Boston have been pretty nice.

Yeah, that was definitely a plus, I suppose.

Yeah. I mean, it's obviously a net negative, but, you know, from my perspective, it hasn't been uniformly negative. But, like, have we made any progress at all? Are you following it enough to know, like, oh, okay, there's some hope out there?

I'm not an expert in it myself; I follow it off and on. But there has actually been, I think, some recent significant progress toward fusion energy, which I'm very, very excited about.

Yeah, just blocks away from here they were building a reactor. Apparently they made some breakthroughs, and now it can be made at scale, which is cool.

Yeah. I think there's actually a local company; I think it's Commonwealth Fusion.

I think so, too. Yeah.

They were a startup that I think spun out of MIT. They're actually now building experimental plants that are small but full-scale, meant to be proper fusion plants. So I think that's tremendously exciting.

Yeah, absolutely. The fusion stuff, I can't get enough of that. Every time I read an article about it, even though I'm not a math guy like you and don't have a physics background, I'm excited about the prospect of it.

Yeah, definitely. So on the production side, that's one thing I'm very excited about. The other thing I'm excited about is that there's been increasing awareness of the need to even be able to understand what is going on with climate change. In terms of ecological and climate monitoring: how are we going to collect the data and do the data processing at the scale necessary to, number one, understand what is happening, and, number two, make intelligent decisions in response to that? I'm personally maybe a little more excited about that side. Not that fusion isn't tremendously important; I just happen to be attending to that specific aspect a bit more, because some of the stuff we're working on in terms of autonomy actually has a direct role to play there.

Right. Even if you were to get all that data, you would need to process it, and a breakthrough in processing capabilities would be that much more beneficial to climate work. I hear you. I feel like there's so much more data now than there's ever been, and it's going to be really interesting to see what insights come once we have the artificial intelligence models to guide it or predict it, or who knows what.

Yeah. And also just the fact that even the data collection aspect of that really needs large-scale, ubiquitous autonomous systems. So I'll give you two examples. One example: 70% of the Earth's surface is covered by water, and the ocean is a gigantic driver of the Earth's climate. Remote sensing doesn't work underwater; weather satellites can't tell you what's going on below the surface. So one of the most important elements of this is an area that we struggle to access. Underwater robotics has a tremendous role to play in understanding what is going on with the oceans, and what that tells us about climate change impacts and what we should do to correct them.

Similarly, even in areas that are on land, you can make remote measurements via satellite, but sometimes those measurements may not be sufficient to tell you the full story. I'm given to understand from some colleagues that one of the big questions climate scientists are very interested in is: what are the processes that drive glacial melt? That's an interaction between ambient sunlight, the physical shape of the glacier, the air, water, and ice interfaces, and so on. And the level of resolution required to answer those questions is much, much finer than what you can get from remote sensing capabilities like space-based satellites. So answering those questions requires that we actually be able to place sensors close to those interfaces. And again, the spatial and temporal scales there are such that it has to be automated.

Absolutely. Yeah. That’s interesting. Well, I hope we live in the automated future that you’re talking about. I think that’s something that I’m really looking forward to. You know, I don’t know if we’re going to get to the level of Star Trek anytime soon, but I like how we’re progressing in that direction. Thanks so much for joining us, David, honestly. 

Thank you to the listeners who are enjoying this in the comfort of their own homes. As always, please like and subscribe. And if you want to follow David and all the stuff he's doing in robotics at Northeastern University, feel free to look him up on LinkedIn and check out the links located just below our video. Thanks, everybody. As always, I will see you in the future. Doctor Awesome is signing off. Thanks again, David.

Thanks for the invitation. 

Yeah, nice speaking with you.

About Dr. David Rosen

David M. Rosen is an Assistant Professor in the Departments of Electrical & Computer Engineering and Mathematics and the Khoury College of Computer Sciences (by courtesy) at Northeastern University, where he leads the Northeastern University Robust Autonomy Laboratory (NEURAL). His research focuses on developing trustworthy algorithms for machine perception, learning, and control in robotics. Previously, he was at Oculus Research and MIT's Laboratory for Information and Decision Systems. He holds a B.S. in Mathematics from the California Institute of Technology (2008), an M.A. in Mathematics from the University of Texas at Austin (2010), and an Sc.D. in Computer Science from the Massachusetts Institute of Technology (2016).


By: The Futurist Society