Many of us take pride in the fact that no matter how AI advances, humans will always have the upper hand in some aspects of intelligence – and emotional intelligence is one of the areas we feel most sure is our exclusive domain. That remains largely true, but it doesn’t mean AI will leave this large space untapped. In this episode, we dive into the intersection of AI and emotion as our guest, Rana Gujral of Behavioral Signals, takes us on a tour of AI’s evolving relationship with human emotions. Explore the groundbreaking work of a deep AI company delving into the core brain functions of emotion, behavior, and cognition, pushing the boundaries of AI performance to surpass human assessments. This conversation invites us to envision a future where AI understands and responds to human emotions, potentially becoming an indispensable companion in many facets of our lives. Join us as we navigate the exciting and transformative landscape at the intersection of technology and human emotion. Tune in!

Watch the episode here

 

Listen to the podcast here

The Future Of AI And Emotion – A Conversation With Rana Gujral

We have Rana Gujral, who is the CEO of Behavioral Signals, a leading AI company. Rana, thanks for joining us. I know that you’re a thought leader in the artificial intelligence space. I have a lot of great questions to ask you, but first off, let’s get started with how you got to this place and how you got interested in artificial intelligence.

First off, thank you for having me – a real pleasure. I’d say this is one of the most interesting shows I’ve come across, and I’m eager to see what’s ahead of us in this conversation. I’ve had an interesting journey. I’ve been in tech pretty much my entire career. Most of the early years were spent building products – hardware, software, and everything down to the metal – for large companies: various hardware and software product lines for Logitech, including their smart home line. I built a Google TV device, which was the precursor of the Chromecast.

Then I had this amazing opportunity to do something different with this iconic product company out of Utah, which had shrunk from about $500 million in sales to $30 million. It was technically bankrupt, with $300 million of debt and a negative $100 million EBITDA. We had this opportunity to do a turnaround. I’d never done a turnaround. It was a Ross Perot holding, and I had an opportunity to meet Ross Perot Senior and Junior. We decided to go take a stab at it.

It was like, how do you bring something back to life that has gone past its glory days? We worked on a lot of interesting things – design, various aspects of innovation, and the structural side. Long story short, it was a lot of fun, and thankfully also a success. We took it to an IPO, which was a good outcome for everyone.

Right after that, I had this desire to go build something from scratch, because I hadn’t done that. I’d built products, I’d grown businesses, and now I’d done a turnaround, but the satisfaction of bringing something to life from just an idea – from a paper napkin, so to speak – I hadn’t experienced that before. I had an idea and I wanted to go after it. That’s how I founded TiZE, which was a vertical SaaS focused on innovating in an archaic industry – specialty chemicals. We were going to build this beautiful software that would streamline all operations.

FSP – DFY Rana Gujral | Emotion AI

At that time, I was hearing about AI and machine learning. I was like, “This is interesting. This is cool. I need to get behind what’s happening in this space.” I hired a bunch of very young machine learning engineers right out of school and put them on a team. I was like, “Let’s see what you guys can do.” We had no plans. Anyhow, we were building the software and getting good traction. I remember one day, one of the guys came to my office and told me, “I don’t think we’re doing much here. I think we’re all just going to leave. You don’t have a plan for how to use us.” I said, “You tell me – what do you guys do? I don’t understand.”

They said, “We can build these models and train them for certain outcomes. We train them, and then cool things can happen.” What cool things could they do? One thing that was relevant for our product experience was predicting how the prices of commodities would change in the future, because you have to bake that into quotations, etc. That was one aspect of the business process. So I gave them that goal: “Build me a model that can predict the price of a commodity a little bit out in the future. How far out can you do it? Can you do it 3 or 6 weeks out? Twelve weeks would be amazing.”

After a few months, these guys came back and said, “We have a model. It’s working.” We took a look at it, and it was predicting prices 2 to 3 weeks out in the future. We started testing it. It was amazing. We started to push the bounds on that, and we got it to a point where it was accurately predicting the price of a commodity – say, titanium dioxide – anywhere from 6 to 12 months out in the future with very high levels of accuracy. It’s like having a crystal ball. Imagine what you could do just from a trading perspective.
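The episode doesn’t describe how the TiZE model actually worked, but the basic idea – predict a commodity’s price some number of steps ahead from its own recent history – can be sketched in a few lines. This is a deliberately minimal illustration (a plain linear trend fit; a real system would use lagged features, exogenous signals, and a learned model), not the team’s actual approach:

```python
# Illustrative sketch only: fit a simple linear trend to recent prices
# and extrapolate it `horizon` steps ahead. The real TiZE model is not
# described in the episode.

def fit_trend(prices):
    """Closed-form least squares for price = a + b * t."""
    n = len(prices)
    ts = range(n)
    t_mean = sum(ts) / n
    p_mean = sum(prices) / n
    num = sum((t - t_mean) * (p - p_mean) for t, p in zip(ts, prices))
    den = sum((t - t_mean) ** 2 for t in ts)
    b = num / den          # slope: average price change per step
    a = p_mean - b * t_mean  # intercept
    return a, b

def forecast(prices, horizon):
    """Predict the price `horizon` steps after the last observation."""
    a, b = fit_trend(prices)
    return a + b * (len(prices) - 1 + horizon)
```

In practice a forecaster like this would be validated against held-out history before anyone baked its output into quotations or trades.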

We built that into the product. Long story short, within six months of building that feature, we got acquired because of that capability. You have this whole plan for what to do with the company, and then the thing these guys in the basement were tinkering with is what gets you acquired. I realized, “AI is a force to be reckoned with. You’ve got to pay attention to this.” This was in 2016, so it was a while ago – ages, if you look at it from the current vantage point. I briefly went back to doing some other things, but I quickly wanted to come back to AI.

That was my first very hands-on experience with how powerful this technology is. Now we’re doing some really interesting things – things I feel are very differentiated and unique in many ways; we’ll talk about that. I feel one must understand what’s happening in our ecosystem and how our lives, our world, and all the experiences we’re used to are going to change. It’s not just a fad. This is probably bigger than the internet, many times over, in terms of how impactful it’s going to be. That’s how I got into AI.

AI is not just a passing fad. It’s bigger than the internet, many times over.

When did you realize this was going to be a big deal? Was it back in 2016 when you had that model?

Even before that. We got acquired in 2016, but it was when these guys walked in and showed me this model predicting those prices. We took a look at how the model was built on certain assumptions and had learned on its own. Machine learning works in very similar ways to how a human brain – a human – learns. You have an assumption; it could be formed by your worldview, or you could be taught, like in school or university, certain formulas and cadences. You test out those assumptions, and sometimes you challenge them, but then you have to work on some data. You can’t just work on those assumptions in an isolated manner. It’s a lot of trial and error.

You have a thought, an idea, a decision based on certain factors, and sometimes you’re right, sometimes you’re wrong. When you’re right, you double down; when you’re wrong, you recalibrate and throw it out. You work on more data, and then you get better and smarter. When you see that coming together in software, you realize how powerful it is – and yet we don’t entirely understand how the human brain functions. Some of the substrate aspects are very mysterious to us. You’re a physician; you probably understand this better than me.

Sometimes you’re right, sometimes you’re wrong. When you’re right, you double down. When you’re wrong, you recalibrate and you throw it out, and then you work on more data, and then you get better and better and smarter and smarter.

At the end of the day, we have certain assumptions and ideas about how the human brain learns, and you replicate that in a software system and it starts to perform according to what you’d expect. That’s very magical. These were early days. We didn’t have transformers back then, and certainly nothing like today’s neural nets. These were very preliminary ML models built in Python files. We’ve come a long way. Some of the things we’re doing now surprise me every day. Every day we look at something and we’re just like, “We didn’t expect that.”

Someone said this – it’s not mine, so I’m not going to take credit for it: the entire hype around AI is incorrect, because it’s grossly underhyped. That struck me. The guy who said it was an insider who’s in the thick of it, and I related to that. I was like, “You’re right.” If you’re looking at it from an outsider’s perspective, it’s like, “Here’s another hyped tech, like blockchain – the next big thing.” But when you see some of these things come together, you take a step back and it’s pretty amazing. That was the moment, and we never looked back. Things have changed drastically in the past few years.

Do you think the threats are overhyped – the existential ones?

That question needs to be answered with some perspective, or we could go into the details of this. Is there a chance we could build something that gets out of control and potentially poses an existential risk to the human race? The answer is yes. The probability of that is more than zero. With just those two assumptions, you can’t say the risk is overhyped, because if there’s any situation where the entire human race could be wiped out, you have to take it seriously.

Now, it is a very low probability. If you’re a probability guy, you’re like, “Guys, calm down, relax. I don’t think it’s going to happen, not in any reasonable timeframe.” There are a lot of things that we could do, and will do as an industry, to make sure that doesn’t happen. There are a lot of positives that would advance us toward a much higher quality of life, with many problems being solved. That doesn’t mean there’s no short-term pain – the societal upheaval, the job losses, all those things will happen. So I feel that yes and no are both answers, in the sense that yes, it is somewhat overhyped; it’s not a concern that I have, or that I’m fearful of, even for my kids. But it is a possibility. Is it something you have to take seriously? Yes – you can’t just brush it off.

I look at it from a probabilistic perspective too. It’s extremely unlikely, and I see the overwhelming benefit of this. To have your own assistant that can do all of the admin work of society in 2023, soon to be 2024 – I feel the overwhelming benefit of that. I want that now. I want somebody who’s able to water my plants without me having to do it, or remind me that I need to go to the doctor the next day. Some of that is available in rudimentary form right now. Having a personalized artificial intelligence that knows you inside and out and can maximize your benefit in life is an exciting prospect to me. One of the things I learned about your company is that it’s able to pick up on emotions, which I thought was interesting. Can you tell us a little more about that?


We’re a deep AI company, and like most other deep AI companies, you’re building essential building blocks of AGI to some extent, but for the most part you’re taking a core brain function and replicating it.

Can you explain those acronyms for people who don’t know them? I know Artificial General Intelligence is AGI.

Artificial Intelligence is AI. There are various facets of AI; it comes in many different flavors and many different levels of complexity. When you’re focused on deep AI, you’re essentially looking at understanding core human brain functions and unraveling the mysteries behind them. There’s a realm of human physiology, psychology, and other aspects required to understand that. You’re hypothesizing certain models of how those things function, and then you’re replicating them – rebuilding those models in a software system to reproduce similar capabilities. That’s what creates the AI.

For us, the core focus has been on the emotion, behavior, and cognition dynamics of the human brain. We were the first company to extract these advanced signals from the tone of voice – intonation, pitch and tone variance, prosody – which a human brain does effortlessly. We’re born with that capability. As you and I speak with each other, our brains are not just doing NLP and NLU – Natural Language Processing and Natural Language Understanding; you understand the language, you understand me – but also, almost effortlessly, another part of your brain is assessing my emotional and behavioral cues: how I respond to your questions, whether I’m engaging or disengaging, how I feel in the moment.

Those are the elements we focused on. To briefly describe our technology: in real time, we can extract these signals. They can roughly be put in three buckets. The first is the bucket of emotions – anger, happiness, sadness, and many more. Then there’s a bucket of behaviors – engagement, empathy, politeness, etc. These are all individual models; each signal is a specific AI model. The last bucket, probably the most interesting, is a collection of advanced classifiers. Classifiers are more specialized machine learning implementations focused on macro-level assessments built on various lower-level signals.

This is where you’re looking at more advanced mental states, such as experience – measuring your experience with me right now, live in the moment; think of it as a live NPS score, a level of satisfaction. Also other things such as stress – measuring stress and distress – and control, which is interesting: maybe you’re saying something but you don’t want to say it, someone’s making you say it, like there’s a gun to your head. I don’t see it, but I can detect it. There are also aspects of intent modeling, which is predicting an intent to do an action, or not to do it, etc. That’s all in the last bucket.

You could assess these capabilities through various modalities, because as a human you express them in various modalities: your words, your facial expressions, your body language, your tone. We’ve focused on the tone. Tone has been the most elusive. The vast majority of the industry has focused on sentiment; those AI models have focused on the words – take the audio, use transcription to convert the audio into words, and then parse the words for meaning. It’s not very effective, and also very inaccurate, for obvious reasons: you could say something and mean five different things, and if you don’t understand how it was said, you’re going to be wrong many times – 4 times out of 5. That doesn’t work very well.

So we started to look at the tone – at intonations and prosody. A lot of this was research done inside Stanford and USC before we spun the company out. Once it was spun out, we still had to cross the threshold of getting the F scores – one way of measuring how good these models are – to a level where they’re good enough. The bar is always, “What is the F score of a human? How well does a human do it? Can AI get close to it?”

That’s a score between 0 and 1, with 1 being the best and 0 the worst. An average human’s F score is about 0.8, which means about 80% of the time the brain is making the right assessment and 20% of the time the wrong one. Some of us are better at it than others, but that’s the average score. Also, females are generally better at it than men – so we’ve been told, and it’s true.

Certainly true in my own life in my observations.

The engines were at 0.5 for a long time. Then there was a point of inflection where it went from 0.5 to 0.6, then 0.6 to 0.7, and 0.7 to 0.8. All of that happened within a year and a half, and we crossed the magical threshold of being better than a human. The engines are now performing at 0.9. So you have this very small baby AI – a small building block of AGI – that is performing better than a human. In some ways it has crossed the threshold of being a true contributor to AGI. Of course, we need other pieces, but the emotional, behavioral, and cognition pieces are very important ones. They’re also very complex pieces to get right.
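For readers unfamiliar with the metric: the “F score” discussed here is most commonly the F1 score, the harmonic mean of precision and recall on a classification task. A minimal sketch of scoring one such signal – an “angry” detector, with made-up label names for illustration:

```python
# Sketch: F1 score of a single emotion classifier against human labels.
# Label values ("angry", "calm") are illustrative, not Behavioral
# Signals' actual label set.
def f1_score(y_true, y_pred, positive="angry"):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # how many flagged were right
    recall = tp / (tp + fn) if tp + fn else 0.0     # how many real cases were caught
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Scoring each signal this way against human annotations is what makes claims like “0.8 for an average human, 0.9 for the engines” comparable.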

The reason I was so interested to talk to you is that I always thought this was an area where we would always outperform artificial intelligence, because it seems like such a human characteristic. It’s such a subjective, ethereal plane – understanding emotion in and of itself is difficult for most humans, so I thought it would be more difficult for AI. Now that I know it’s something that can be measured, and something where, as you’re saying, we can be outperformed by artificial intelligence, it’s very interesting – and I’m sure very important for the user interface of AI.

Right now you use ChatGPT, and it’s this amorphous machine that you put text into and get an answer out of. If we were able to interact with it the way you and I are interacting, that would be a much more interesting experience. The user interface for whatever artificial intelligence uses some of the components you’re building would be much more human-like.

One qualifier, though. We’re not as good – certainly far from being better than a human – in many aspects of emotions, behaviors, and cognition. There are wins in specific areas: the engines perform very well on certain signals, such as anger, happiness, or engagement. Then there are complex states like sarcasm that are very hard to do – including for a human. Sometimes human brains struggle with that.

We have a long way to go before we can say we have an AGI that performs holistically better than a human on emotional, behavioral, and cognitive skills. We’re not there yet, but there are these sparks of innovation that look very promising. To your other point, the possibilities of what you could do with it are daunting – they’re endless. You could apply it not only to the human-to-human interaction paradigm but to the human-to-machine interaction paradigm: experiences and interactions with machines that are very human-like, because one big aspect of a human-like experience is the emotional and behavioral assessment and recognition piece, which has been missing.

Once you plug that in, you have a whole different level of experience, which is going to be a reality – in some ways it already is. Imagine calling a call center and being greeted by someone on the other end of the line with no wait. The person is right there, knows you, understands you, empathizes with you, tempers themselves to your current state of mind, and helps guide you through various problems. The only thing is, it’s not a human. Once you have that capability, you can scale it infinitely, so you don’t need any wait times. Those experiences will change the game in many ways. That’s a sliver of the experience one can shoot for. There are a lot of other things.

What are you most excited about regarding its applications? What are you most looking forward to?

Like most startups, we have a focus on traction and the commercialization side of the business. Certain areas are a no-brainer from that standpoint, and I’m excited about those: we’re applying the technology to solve real problems where it makes a difference, and you can build a business around that. That being said, we have a very strong research DNA, and we try to hold ourselves true to it every day. We put a sizable portion of our resources into continuing research – pushing the boundaries of where else we can apply it and what else we can do with it. We do that in various ways, including collaborations with academia and other organizations.

Sometimes something new comes up. On the commercial side, we have two focus areas. We’ve built a few very unique, differentiated product offerings for businesses, mostly in a call center setting. I’ll describe one at a very high level. We’ve built a product that creates a conversational bioprint based on a short recent interaction. A conversational bioprint is unique to every human; it’s an amalgam of about 75 to 100 attributes, ranging from how fast someone speaks to the various emotional and behavioral states they express in a conversation. Put together, it uniquely represents how that person interacts with other humans.

A few things are interesting about that. One, it’s unique: yours is unique to you, mine is unique to me. No two bioprints are identical, just like fingerprints. Two, it’s a very macro assessment of who you are right now as a person, in terms of how you interact. As you change as a person, the bioprint also changes, but it transcends the mood of the moment and the particular interaction. It’s who you are now. Three, your bioprint is either matching or clashing with someone’s in every conversation you have, and you see proof of that every day. Sometimes you’re having an intense conversation – maybe you’re not winning the dialogue, but you’re still loving it, still having fun. In other situations, you’re talking to someone in a casual interaction and you can’t stand it; you want to get out of it.

We’re using that discovery – the bioprint we create with the AI models we’ve built – to do matchmaking between agents and clients in a call center. It’s very popular with financial institutions and banks, but also collection houses and general customer service call centers. That’s one of the core offerings. We also apply some of our capabilities to national security and law enforcement use cases.
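As a rough illustration of the matchmaking idea: treat each bioprint as a numeric vector of conversational attributes and route a caller to the most compatible agent. The attribute layout, the three-attribute vectors, and the cosine-similarity “match” rule are all assumptions for this sketch – the episode doesn’t describe Behavioral Signals’ actual matching method:

```python
import math

# Hypothetical sketch: a bioprint as a vector of conversational
# attributes (e.g. speaking rate, engagement, politeness). A real
# bioprint combines ~75-100 attributes; three are shown for brevity.

def cosine_similarity(a, b):
    """Similarity in [-1, 1] between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_agent(client_print, agent_prints):
    """Route the client to the agent whose bioprint is most similar."""
    return max(agent_prints,
               key=lambda name: cosine_similarity(client_print, agent_prints[name]))
```

For example, a fast, formal caller (`[0.85, 0.25, 0.75]`) would be routed to a fast, formal agent rather than a slow, warm one under this rule.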

I feel like that’s the immediate commercialization – that’s how you make this profitable. But take it out many years from now. You have kids; I have kids. If your son or daughter, or my daughter, had an interaction with an AI that had some emotional component to it, based on the deep AI you’re building right now – what are you most excited that they’re going to have access to that you and I probably won’t?

For me, I imagine they’re going to have an authentic friendship with a robot toy unlike any friendship you and I will have access to. That toy is going to know exactly when she’s mad, exactly when she’s sad, and give her what’s needed to get her out of it. I look forward to that day, but you’re closer to this than I am – you’re building an emotional capacity for artificial intelligence that didn’t exist before. Let’s put it out 10, 20 years. What are you most excited that they will have access to?

Some of those things are coming together now. When we were first researching this technology, one of the use cases we went after was building something to help kids with autism. They’re brilliant kids with a lot of capabilities, but sometimes they struggle with reading the emotional and behavioral cues of the person they’re interacting with. What if we could build a tool that acts as an aid and guides them through that conversation, that interaction?

That was one of the early use cases that guided the research into this technology when we started looking at it; of course, we’ve since gone in a slightly different direction. But the technology has been implemented elsewhere – someone used our APIs to augment a software system that monitors real conversations between caregivers and patients who suffer from anxiety and mental health issues. The goal was to use one of our intent models. We’ve built a few intent models that predict whether someone will buy or not, will pay a debt or not, etc. They modified those to listen to these conversations and assess whether there’s a propensity for suicidal behavior before the caregiver can assess it. The models are performing well there.

Some of those experiences are in line with what you’re saying, so we’d have those tools. That could be a companion – a toy, or more than a toy. It could be a real companion that cares for you in your home when you’re alone. With the elderly, you need not just interaction but someone to watch over you and look out for your wellbeing. Those use cases become very real. Aside from that, there’s also the ability to help build these new AGI experiences. There’s a very interesting quote from Marvin Minsky, one of the founding fathers of AI and one of the people behind the term, when he was asked this question – more from a moral, virtue-signaling standpoint.

Someone asked, “Should intelligent machines have emotions? To your point, is this a very human thing? Is it right? Should you do it?” He answered, in effect, “The question is not whether intelligent machines should have emotions. The question is whether a machine can be considered intelligent without emotions.” It’s a core, fundamental part of intelligence: if you can’t understand emotions, behaviors, and the rest, are you even intelligent? So to your point, when you have these intelligent entities that can not just do math better than you but can also hold conversations and be a mental companion in many ways – those are some interesting experiences.

That’s exciting. The caveat I’d put on what you said is not so much “Is it intelligent if it has emotions?” – it’s going to be very interesting when they can respond to our emotions. Not just detect them – detect when I’m angry – but, like I was saying, if my daughter had a companion that could calm her down when she was angry and walk her through it. Or even if I had that: an assistant AI managing all the components of my life that I could talk to, and that could give me the conversation I needed at that moment. There are so many conversations in my life that I needed at that exact moment to get me over something, to get me into a better mood, or even just as a pep talk.

That’s low-hanging fruit for emotional responsiveness. I feel it’s something that will be available to us once you figure out a bit more about how we work emotionally, and I look forward to it. Beyond that, artificial intelligence is so poorly understood that it’s getting a bad rap. Even in the worst-case scenario – if this thing gets out of control – the probability is so low, and this is very beneficial technology. Then there’s the question of how we’re going to regulate it. Is this something we can regulate from a government perspective, or even from a business perspective? Where do you stand on regulation? Do you think it’s going to be a benefit, or is it unlikely to work?

First off, there’s a lot of conversation about regulation right now, and it doesn’t have to be a binary view. Regulation is very important. Anything that can potentially cause harm, even if it’s extremely beneficial, needs to be regulated – and is regulated now. I read about this somewhere: if you have to build a car now, a car that is street legal, there are around 16,000 checks and regulatory requirements you have to pass to put it out on the road. That’s crazy – and it’s just a car.

We regulate everything. We regulate our media, what we see on TV, and so AI, just like that, needs to be regulated. The question is how you do that. The problem right now is that there’s a bit of a knee-jerk reaction happening in the industry, largely because things are moving way too fast. Every few weeks, every couple of months, it seems we move the needle and there’s something new. That’s exciting to some folks and scaring the heck out of others. That’s what’s happening. The general audience wants to understand what the government is doing about it.

AI needs to be regulated. But the question is how do you do that? The problem right now is that there’s a little bit of a knee-jerk reaction happening in the industry, largely because things are moving too fast.

Are we moving the needle, or is that all hype?

We’re moving the needle.

The needle is moving for sure. I feel like there are so many breakthroughs that are happening. It can’t all be real. I feel like there’s a new AI company in every news article that I see.

There’s a lot of fluff out there. There’s an AI hype bubble from a startup-ecosystem standpoint, and the vast majority of the AI companies you see out there will be worthless in a few years. As an investor, you have to watch out for that. All of those things are true; there’s a lot of hype that isn’t justified. But I’m talking about progress toward the goal of AGI – how we’re tracking on that index and what’s happening with the tools we have.

Certain things are valid questions. Are these the right tools? Are they going to get us where we want to go, or do we need a completely different approach? There’s a lot of interesting debate right now – the question of whether neural nets and transformers are really how the human brain works, and whether we need to go back to biology, reconstruct everything, and look at it from that perspective. That said, I let those folks ideate on that part. I’m looking at very tangible things.

I’m looking at some of the newer breakthroughs in how these language models are evolving, but also, outside of that, various aspects of unsupervised learning, of synthetic data creation and how models learning from synthetic data are performing, and the multimodal aspects of how these engines can interact and react – a more human-like experience where it can hear you, see things and describe them visually, and talk to you.

There are also sparks of reasoning. It’s debatable whether they’re really there yet, but there are aspects that aren’t just pattern matching or statistical modeling – an aspect of reasoning, where the model seems to understand the idea or the concept. You can’t prove that; you can’t say whether it understands or not, but the results are really good. Some of those things are moving quite rapidly in many different forms. Relatively new is the Gemini launch. I don’t know if you’ve seen it, but Gemini is Google’s answer to ChatGPT, and they’ve launched a preview of their language model. It looks very exciting, very promising. Soon to be followed by GPT-4.5, which I expect should come out very soon and is expected to be multimodal. These things are happening fast. You can see how things are progressing.

Maybe I’m just the end user who doesn’t have my eyes on this stuff all the time, but I feel like ChatGPT, that moment where everybody knew about it and everybody was like, “Have you seen this? I can type in something and it’ll spit out anything that I want.” That happened in 2022. I feel like it hasn’t trickled down to anything for me specifically. That sounds so self-centered when I say it out loud. What I mean is that if I’m the common man, ChatGPT is not in my life every day. When is that going to trickle down to an end-user experience that’s completely different from what we have now?

In the next couple of months, you’ll have an experience where you don’t have to be an expert prompter or work in a terminal or console and type things out. You could just say, “I have this idea,” or “I want to show you something.” Some of these things you can do today. For example, if you’re trying to change the seat post of a bike – there’s a video floating around the internet that shows this example – you’d take a picture of the seat post, upload it, and say, “Tell me what to do.”

In the next six months, you’d have an experience where you don’t have to be an expert prompter.

It walks you through a series of steps: understanding the bike, understanding the tools that you have, the spatial dynamics of what’s going on there, and then the step-by-step instructions of what to do. That’s a very simple example. When those barriers of interaction are no longer techy, and you can talk to it, or just snap a picture and say, “I want to do this,” versus having to technically describe it, that’s the whole multimodal aspect of these language models, which is already here.

Gemini is natively multimodal, which means it’s built on multimodal engines. 4.5, which will come out soon, is very much going to be multimodal. Here’s a thought for you. A certain portion of 4.5’s code, and I’m not going to debate what the percentage is, will be written by ChatGPT-4, because ChatGPT-4 can write really good code. Let’s say it’s 1% or 2% of the code for 4.5. There’ll be a percentage of 5 written by 4, and of 6 written by 5, and that number keeps growing. At some point there’ll be a version, it doesn’t have to be ChatGPT, just a language model in that progression, where an existing LLM can completely rewrite the next version.

From there on, that’s when you see the point of inflection, where you don’t need to wait six months: you have a new release coming out every few weeks, and each is incrementally, even exponentially, better. You can see the signs of that now. Some of those things are happening right now, though we’re not quite there yet. Hypothetically, when you get to that point of singularity, so to speak, you’d have Nobel Prize-worthy discoveries every few weeks. It might take ten years to get there, but ten years is not that long.

Everybody’s feeling that. Everybody I talk to, whether it’s somebody like you in the tech sector or somebody like me in medicine, feels that the pace of change is exponential for humanity right now. Everything’s happening so fast; we have to adapt so quickly. I can see that. It’s always difficult to separate the wheat from the chaff, and it’s something I still have trouble with in fields other than my own. Especially with tech, there’s so much happening right now that it’s tough to keep track of it all. I want to come back to the regulation portion, because I know that’s something I interrupted you about. What does that look like? How are we, as human beings, going to say, “No, artificial general intelligence, do this, don’t do that”? I don’t understand it enough to know what that is going to look like.

One needs to understand the practicalities of what you’re trying to regulate if you have to regulate it effectively. The initial thought around regulating AI is to control the resources, primarily compute, the GPUs, for these companies. Right now the core AI expertise sits in a handful of companies; you can count them on your fingertips, and that’s not a good thing. That’s why the industry has taken a deliberate moment to check that and say, “Let’s have an open-source movement.” There’s a lot of open-source evangelism, a lot of open-source models, and that’s a good thing.

The bad thing there is that once you go the open-source route, how do you even regulate? The technology is progressing in a way where, not too far out in the future, you may not necessarily need GPUs for inference. You could do in-memory models; you could use other new techniques. As a result, you can’t even control the resources. You could have somebody building this with very little resources in their basement, and they could be building something bad.

That’s also true today. For example, you could use chemistry to create bombs, but you’re not going to start regulating chemistry. There are two practical aspects to it, and this is broadly oversimplifying; I’ll get into the specifics of what I’ve seen in the industry. One is to have the right type of laws, enforce them, and make sure there are extreme penalties if somebody does something bad, just as you have with bomb-making or anything else. You’d be able to do it, but if we catch you, you’re done. Most of those laws already exist. Some of them you have to revise based on where the technology is moving. You do that, you enforce it, and you enforce it consistently.

The other aspect is this: look at the internet. How were we able to regulate the internet? Not very well, but the internet industry regulated itself. The industry regulates it. The open ecosystem of humanity comes together to do it, in a way, because they realize it’s good for everybody else. We regulate access to the deep web across countries, across the world. You could still do it, but if you get caught, you’re done, and we make sure you never do it again.

Those things work. Those are the practical aspects of it. What’s happening instead is that I see a lot of platitudes. I see a lot of these regulations coming out where the czars making the decisions don’t necessarily understand the technology. There should be a diverse body of people involved in this. I’m surprised to see many of the folks who know what’s going on not being part of that process, either because they don’t care or because they’re not being invited or given a seat at the table.

Some of those things need to change. Like everything else, though, I feel that with most of these regulations, the heart is in the right place. They’re headed in the right direction. In due course, things will settle down as the industry regulates itself, and we’ll figure out what needs to happen when. The practicality of regulation is penalties and self-regulation; those are the two ways you do it. Some of what you see these days is game-playing. You have the incumbents taking advantage of the situation. You have established players winning and the smaller, newer players being squeezed out, because regulations help the leaders of the market prevent competition from coming in.

That is unfortunate, and it’s a distraction for the most part. I also see what’s happening in the EU. We do a lot of work in the EU; we have teams there, and I spend a lot of time out there. I see a big contrast between the mindset there and in the US. They’re very heavy on regulation; they feel that’s what they want to lead on. The fact of the matter is that things are moving very fast and the EU is incredibly behind, and that’s not good for EU citizens. You’d be in a perpetual dependency model with no indigenous innovation. Some of those decisions are just brain-dead in terms of how they’re being made. We’re monitoring it; at this point there’s no clear path.

I saw that the EU just put out some guidelines for the regulation of AI. I did a brief overview of them but, to be quite honest with you, I’m not sure, just as you were saying, that those people know exactly where this technology is headed. I look forward to a future where we’re working on it together. This has been a great conversation, but we are heading to the end of our time together. I do want to ask you three questions that I ask all my guests. One of them I’m going to hijack, because it’s something I ask everybody in the artificial intelligence space, since for whatever reason a lot of people in this space have been on the show.

I ask them something that you alluded to, which is that the big players are working on this stuff. When I think about the last technological revolution, which I would say is the internet, it was incredibly democratized. It gave a lot of people who were not big players the ability to be very successful, and it empowered a lot of people. It brought a lot of value to everybody, not just big companies or oligarchs or whatever. Do you think AI is going to be like that? Is it going to be a technology that is not only available to everybody but empowers everybody? Or is it just going to enrich big players like Amazon, Google, etc.?

That’s a great insight, Imran. The answer is a very obvious yes. Not just AI specifically; technology in general enables that. Let’s take an example. We live in a very interesting time. There has been no other time in history when a person with a $300 billion net worth and a college student use the same actual phone, day in and day out, for all of their utility and production. It’s a great equalizer. It’s not like the rich person’s phone is made of gold or diamonds. Maybe some are, but it’s the same product. It’s the same tool.

Technology in general has a tendency to be an equalizer.

You have the same abilities; in that sense it’s a great equalizer. You have the same amount of resources whether you’re wealthy or not, whether you’re in a poor village in Africa or a rich city in California. That is going to be accentuated many times over by AI, because it’s going to bring access. There is going to be an initial period of inequity, though. That will be a very interesting situation, and we’ll have to monitor it.

For example, there are the brain chips that we’re now looking at. You could implant one and get certain superpowers: better cognition, more memory, or an embedded smartphone in your brain that makes you more superhuman than the person who doesn’t have the chip, and they won’t be accessible to everyone. The people who have one will have an unfair advantage, or there will be an expensive chip and a cheap version of the chip. That’s the transition, but over time the price of those technologies comes down and their accessibility goes up. In general, it brings abilities, resources, and opportunities to everyone. There’s a lot to be hopeful about. Those inequities existed 5,000 years ago too, but there are fewer inequities now.

We’re much more democratic now than we were 5,000 years ago. I agree with you. Overall, the arc of history is long, but it bends toward justice, as Martin Luther King Jr. used to say. I hope you’re right. I look forward to this technology, but I also want it to empower people the same way the internet empowered my generation. The second question is about inspiration. You can see from my background that science fiction is a big inspiration for me. What drives me in my field is learning about cutting-edge technology and interesting things. What about you? What do you gain inspiration from?

From many things. I’d say I’m deeply intrigued by history and philosophy and their intersections with technology. If you’re paying attention, you see that there’s a pattern, and those patterns become very obvious. You can learn from them, benefit from them, and also get inspired by them. That’s been a big source of inspiration for me, especially now that we’re getting deeper into the AI spectrum. Philosophy is a guiding light; you’re trying to go back to the roots, and it’s a lost science, if you can call it a science. We’ll get to AGI by marrying philosophy with computer science, and that’s super exciting.

Now that we’re getting deeper into the AI spectrum, philosophy is a guiding light.

I agree. Especially when it comes to these fuzzy areas like, what is intelligence, what is consciousness? Philosophy is going to be a real guiding light for us. I appreciate that. Last question. I know that we alluded a little bit to this, but ten years from now, artificial intelligence is in everybody’s pocket. What does that future look like from your perspective? What do you think the field is going to look like in ten years?

I’m very optimistic. I feel it’s going to be a good future. It’s going to be a future, for the most part, of abundance, where we’ll have most of our core problems solved around food, water, health, and the other resources we depend on.

Do you think AI is going to fix all those problems?

Not all, but it’ll fix a lot of problems. For example, AI will help come up with cures for diseases that will help us live better lives. It will also help us improve our yields, which means we produce a lot more than we consume. That solves hunger; food becomes incredibly cheap. It could figure out how to desalinate without damaging the ecosystem, in a more balanced way, and that solves for water. It could even address other human needs, like companionship. We could potentially go back to doing what we want to do and have tools that do the things we don’t want to do.

That’s not to say that all the problems will go away. Of course, there are bad actors who will use it. There will still be wars. There will still be inequity. But if you look back over the last 5,000 years, you’d have to say that we live in the best of times. One can’t say, “I wish I was born 200 years ago; that was the best time.” Or even 50, or 30. If you’re truthful, you wouldn’t want to be born at any other time; you’d want to be born now. It’s a great time to be alive, probably the most incredible. There’s a point of transition where, even though there’s abundance, maybe there’s less excitement. We’re living through this transition from one way of life to a different way of life, and that’s incredibly beautiful to witness. There’s a lot to be hopeful about. I’d love to think about those things versus a potentially sentient AI coming to kill us all, which I think is unlikely.

I hope for the future that you’re espousing. I agree with you: it’s going to provide a lot of different benefits for humanity. I don’t know if it’s going to fix all those problems. I hope you’re right, but I think it’s going to make for a much more meaningful life and hopefully get rid of all of this admin work that we have to do: responding to emails, filling out forms, and all that stuff. I can’t wait until all of that is gone and an AI personal assistant can take care of it. Thank you so much for doing this. You painted a hopeful future for all of us with all the different things that you’re doing. For those of you who are tuning in regularly, feel free to like and subscribe. I will see the rest of you in the future. Have a great day, everybody.

Thank you.

About Rana Gujral

Rana Gujral is an entrepreneur and CEO at Behavioral Signals, a Cognitive AI company building essential building blocks of AGI with its acoustics-based deep learning technology. As a thought leader in AI, Rana often leads keynote sessions at industry events such as Leap, DeepFest, World Government Summit, VOICE Summit, TNW, Collision, and The Web Summit. His bylines are featured in publications such as FastCompany, Inc., and Entrepreneur, and he is a contributing columnist at TechCrunch and Forbes. Rana has been recognized among the ‘Top 40 Voice AI Influencers to Follow’ by SoundHound, as ‘Entrepreneur of the Month’ by CIO Magazine, among the ‘Top 10 Entrepreneurs to Follow’ by Huffington Post, and as an ‘AI Entrepreneur to Watch’ by Inc Magazine. In 2022, CEO Monthly listed him among the “Most Influential CEOs.”

Rana holds management, finance, and engineering degrees from the Massachusetts Institute of Technology, Sloan, the University of Massachusetts, and MJP Rohilkhand University. Rana founded TiZE, a vertical cloud software that used ML models to predict commodity prices, and held the CEO role until its acquisition in 2016. Before TiZE, Rana led the turnaround of Cricut Inc. from bankruptcy to profitability and its eventual IPO. Previously, Rana held leadership positions at Logitech and Kronos Inc., where he was responsible for developing best-in-class products.

By: The Futurist Society