Dr. Awesome and Nisha Talagala embark on a captivating discussion about AI’s transformative potential and ethical considerations. They unveil Navigator, a platform democratizing AI education, and explore its role in empowering individuals without coding expertise. Join the conversation, ponder the future of responsible AI development, and take action to shape a technology-driven world that serves us all. Tune in and be inspired!

Watch the episode here


Listen to the podcast here


The Future of Learning About AI – A Conversation with Nisha Talagala

Hey, everybody. Welcome back to the Futurist Society. As always, I’m your host, Doctor Awesome, and as always, we are talking in the present, but about the future. Today’s guest is Nisha Talagala, who is the co-founder of AIClub. They’re trying to promote AI literacy, and I know that’s something that is very much in vogue right now, very much in the news: artificial intelligence. All of us are trying to understand it in the best way possible. So, Nisha, welcome. Tell us a little bit about what you’re doing at AIClub.

What AIClub does

Sure. Thank you for having me. What we do at AIClub is what we posit in our mission as AI literacy from 8 to 80. Now, you can unpack that in a number of ways. Really, what we are saying is that everybody in the world has to develop some level of AI literacy. Some people used to think of this as a branch of computer science intended only for people with PhDs. But the fact is, it’s everywhere now. You don’t all have to know it in the same way, but depending on your life, your job, your role, and your goals, you will want to know something about AI. That’s what we mean by AI literacy. And when we say 8 to 80, we mean it really has to be for everyone. This is a technology that’s going to be in all of our lives, and it’s important that we all have some knowledge about it and some say in how it develops as well.

Yeah, I really appreciate how you’re able to break down AI to really any type of skill level, because I still feel like it’s this amorphous thing that people don’t really understand very well. So, how are you explaining this to an 80-year-old? And how are you explaining this to an eight-year-old?

The 4 C’s

We’ve developed this approach called the 4 C’s, and I’ll explain shortly what that entails. However, it’s crucial to understand that this technology is both extensive and profound. It permeates not only academia and research labs but also everyday settings, likely on your phone, in your kitchen, or even in your car. In that sense, it’s omnipresent. So, to approach it, we refer to the 4 C’s: concepts, context, capability, and creativity.

Concepts encompass understanding essential aspects of AI beyond its mathematical or computer science intricacies—fundamental traits that define it. For instance, a notable concept is that all AIs require some form of learning; they are learning entities.

Moving on to context, once you grasp its functioning, consider how the technology integrates into your daily interactions. How do these applications utilize the concepts, and how does your knowledge facilitate your engagement with them? For example, when conversing with your digital assistant, such as Alexa, it’s crucial to realize it’s constantly learning from your interactions.

Capability encourages active engagement with AIs, from building them to leveraging advanced models effectively. By interacting, you swiftly discern their strengths and limitations. Even young children quickly grasp this concept; they understand their role in fostering responsible AI development.

Creativity entails exploring what tasks AI can enhance, allowing us to focus on broader problem-solving. Rather than competing with machines, we should leverage them to tackle humanity’s unresolved challenges. Combining these four C’s enables individuals to attain a level of mastery that enhances daily life and fosters informed discussions about the technology’s trajectory. Thus, by embracing this framework, individuals can approach AI with a practical mindset, focusing on application rather than mere comprehension of equations.

So, where do you think this technology is going? What do you think is going to be the best-case scenario for us? You wrote an article, What Should We Expect from AI in 2024?, for Forbes. Give us a roadmap of how you think this technology is going to affect our daily lives in 2024.

Technology shaping our lives in 2024

Firstly, I think the initial thing you should anticipate is its pervasive presence in every facet of your life, a reality already evident, frankly, and one that will likely become even more prominent in 2024 and beyond. Sometimes, what escapes our notice is actually more significant than what we currently observe. While we notice instances like a digital assistant conversing with us or a car suggesting directions in the morning, there are also aspects we overlook. It’s important to contemplate the unseen influences. AI algorithms determine factors such as insurance premiums, credit card interest rates, and allocations in local police budgets. These decisions, made by AI, impact individuals, whether they are aware of it or not.

So, I mean, I feel like those are relatively simple calculations. For example, how much my car insurance is going to be? There’s a set of variables, but they’ve been making those calculations for decades. How is it going to be different with artificial intelligence on board?

In the past, something you typed on social media that you may not have given a second thought to would not affect your insurance rate. Oh, now it will.

And that’s happening now. Do people know about that?

So you should be prepared that everything you ever put out there on the Internet is being used in decisions about you. It is happening now.

Uh huh. Can you give me an example of that?

It’s hard to point to a single case, but effectively, things are becoming increasingly personalized. Your risk profile will be determined by everything that anyone can ever find out about you.

I’m aware that similar occurrences took place during the last election cycle, such as Cambridge Analytica analyzing social media posts to predict voting behavior. However, I view this primarily as analysis, a task computers have excelled at for some time, albeit under human guidance. Now, it seems computers are assuming more of this analytical workload. Therefore, I’m uncertain if this qualifies as true intelligence.

When discussing artificial intelligence, most laypeople, including myself, typically envision artificial general intelligence—tasks so complex they seem uniquely human, like full self-driving cars. It’s challenging to comprehend a computer performing such tasks competently.

Despite this, I’m curious about the higher-level advancements we can anticipate in artificial intelligence becoming more prevalent by 2024. While I understand that much of this progress is already underway, I’m eager to explore the potential outcomes and implications of these advancements.

The potential outcomes

Firstly, concerning the topic of artificial general intelligence, it’s important to recognize the varying opinions surrounding it. Partly due to its ambiguous definition, some claim it’s already achieved, while others believe it’s five years away, or may never occur. This diversity in perspectives is expected given the term’s nebulous nature. However, consider the example of self-driving cars. Though they haven’t achieved full autonomy yet, they already exhibit advanced capabilities, such as freeway navigation. This functionality will undoubtedly improve, extending to regular streets. Whether this qualifies as AGI is subjective; regardless, progress in this direction is inevitable.

Other aspects, like dynamic pricing in services such as Uber, illustrate AI’s personalized capabilities. The price you’re quoted may differ from someone standing beside you due to AI’s understanding of your preferences and behaviors, adjusting prices accordingly. This personalization stems from AI’s enhanced ability to analyze diverse sets of information, incorporating factors like individual history, circumstances, time of day, and weather. While analytics have always existed, AI algorithms have significantly improved, enabling more sophisticated and personalized analyses.

Okay, regardless of where the line is, and I agree with you that it’s always changing, what should people get excited about? Because, realistically, a lot of the stuff that you’re saying makes me feel a little uncomfortable, right? I do feel like there’s a lot of benefit to AI, but so much of the conversation is based around pessimism. What can people be optimistic about? What are you optimistic about?

For instance, I personally admire AI’s impact on medicine and astronomy. Take the James Webb telescope, for example, which gathers vast amounts of data about our universe daily, with AI aiding in the discovery of habitable planets and furthering our understanding of the solar system. AI algorithms played a crucial role in the successful Mars landing—an achievement in itself.

In the medical realm, AI has made significant contributions, particularly during the height of COVID-19. At Mount Sinai Hospital, for instance, researchers found that smartwatches could detect early signs of COVID-19, showcasing the power of data analysis. Such advancements stem from AI’s ability to process immense datasets beyond human capacity.

Consider a recent project where a student identified mosquito species solely by analyzing their wingbeats. This innovative approach eliminates the need for traditional methods involving potential exposure to mosquito bites or capturing specimens. Similarly, AI in medicine facilitates non-invasive diagnostic techniques, replacing invasive procedures like biopsies with scans. These examples highlight AI’s transformative potential in various fields.

Yeah, that’s going to be really interesting to see, like, the amount of data that’s out there. I just don’t think there are enough human beings to process it all. And, you know, I think that’s going to be interesting to see, really, what the data holds. Like, if the creation of the Internet led to this giant treasure trove of data, I think we’ve only scratched the surface in regards to what we can really glean from that data. But, I hear the optimism, and someone like you is very close to this, and it’s totally understandable for an eight-year-old or a young person to be optimistic about this because the future has yet to be written. They have so much to look forward to because of the technological progress that’s happened in just a few years. I always think about what you said about the 80-year-old. What are they optimistic about? When you’re teaching older people about AI, what are they excited about?

I think, honestly, that what you get excited about depends not just on your age but also on who you are. There are things that you can do with AI now that you could not do before: you can draw pictures, you can create art, it can help with your writing. There are so many things you can do for personal engagement, and I certainly find a lot of joy in that. I don’t know that it’s fair for me to speak for all 80-year-olds and say they are all exactly the same kind of person. But if you find joy in art, there’s a lot there; if you find joy in music, likewise. Maybe you don’t get as excited about where the future will be 50 years from now, but at the same time AI might be really materially helping your health. You may not be able to actively engage with that concept, but you can know that it exists. And to speak to your comment earlier about fear versus excitement: ultimately, if you look at the history of any technology humans have developed, the fear and the excitement are two halves of the same thing. You achieve the excitement by being responsible, and you avoid the fear by being responsible.

You achieve excitement by being responsible.

Yeah, I get that. I’m not teaching 80-year-olds about AI, so I don’t really know what it is that they’re excited about, because I feel like that demographic in general is very hesitant about change. By the same token, there are certain things they do get excited about. I can tell you that my older family members are very interested in humanoid robots that are able to help them around the house. I think universally, that’s something people are excited about. I’m excited about that. I can’t wait to have a robot that’s able to wash my dishes or do my laundry. That’s going to be a huge game changer for a lot of different people. But right now, when people think about AI, it’s this really ill-defined thing. It’s interesting to hear from your experience about what different people are excited about, because I don’t know if I necessarily have the insight into what an 80-year-old or even an eight-year-old would be excited about.

What I can tell you generally is that a lot of people start from the point of, I don’t know what this thing is, but I keep hearing about it. Honestly, the first reaction is usually one of fear and confusion. And for them, it’s really more about: first, let’s remove the fear and confusion, and let’s try to see it for what it is. You may decide you’re not excited, but at the very least, you won’t be scared. Then, when we focus on creativity, we explicitly ask the question, how do you see this being able to help you? Usually the excitement comes from being able to identify something, and it’s a different thing for everyone, but it’s identifying some place in which this is exciting for you. So it is a very personal exercise. And to whatever extent we can help them realize in a tangible way that thing they care about, they come away with a positive perspective.

AI as an existential threat?

A lot of people in the AI space have talked about AI as an existential threat. There’s this idea that it’s a technology that could be used for good or for bad, and the bad is sufficiently scary that they think it should be regulated and so on, which I think is a whole other topic in itself. But my question is, when you’re introducing new people to artificial intelligence, has that trickled down into the public consciousness? Is that something people are worried about? Are they talking to you about this? And also, are you worried about that?

I think that AI’s potential damage to humans and to society is something we should all be worried about. I don’t personally attach that worry to a particular level of AI. What I mean by that is, I am not afraid that tomorrow, on any given day, something equivalent to the Terminator will rise. I don’t worry about that, and part of the reason is that I know the potential damage that can be created by the AIs that exist today. So I think the worry should exist. I think a serious sense of responsibility needs to exist. I am just not afraid of it as some singularity.

Do you think AI today has the potential to be an existential threat? Like, I don’t think ChatGPT is that.

AI today is not an existential threat. AI today is a threat.

I personally don’t see it. I don’t see it having the ability to do that right now. Right now, it’s in its infancy. These language models, the worst thing they can do is make a picture that could be inflammatory, and full self-driving is not available unless you turn it on. Right? So there are checks and balances between that and something like Terminator. I’m just not seeing how you’re saying that AI is so negative right now. I’m not following that.

No, I’m not suggesting that AI is so negative. I’m suggesting that, as with any technology, there will always be people who will use it, for example, for the creation of weapons. A researcher demonstrated about two years ago that an AI could propose 40,000 new chemical weapon candidates in six hours. He demonstrated it as a cautionary tale, and one of the weapons it proposed was a known nerve agent. Now, it did not know that the nerve agent existed. It did not know that it was a weapon. It was simply asked to create molecules that could cause very specific kinds of damage to human cells.

It crunched and crunched and crunched, and six hours later, it spat out 40,000 candidates, one of which happened to be a known nerve agent. So now the question is, is this a threat to the general population? No. The average person is not going to try to create a nerve agent. But on our planet, there will be some government, some people, some private entities who will do this, and it’s only a matter of time before someone does. So that’s what I mean by threats.

I feel like the big underlying concern is that outside of humanity’s hands, the sense of morality is gone, right? The sense of judgment is gone. A government may not be incentivized to do something like that. There have been terrible people around for as long as people have existed, right? But I am personally an optimist, and I think that given human involvement, things tend to go well for human beings. Like the idea that MLK presented, “the arc of the moral universe is long, but it bends toward justice.” The way I think of it, it bends towards positive experiences for human beings. It’s so much better to live today than it was 100 years ago.

My average lifestyle today is better than that of a king in 1900. I would never trade places; I would never want to live in a time without air conditioning, right? So I think that technology in and of itself bends towards beneficence for human beings, because human beings are the ones driving that change. What people worry about is when AIs become so autonomous that they can make those decisions by themselves, and we haven’t made that switch as of yet.

I agree with you that there’s always the potential for a bad actor to use AI negatively. But I feel that because humanity is involved, the likelihood is extremely low. Personally, I think the existential threat scenario is a very low probability. But aside from that, I just wanted to know: when people regularly come to you to learn about AI, do they have those fears about existential threats? Or is it mainly that they just don’t understand it and aren’t comfortable with software or technology they don’t understand? Is it more fear of the unknown, or fear that this could potentially end humanity?

I don’t think a lot of people honestly worry about whether it could potentially end humanity, mostly because ultimately they can’t do anything about that, if it’s true. I mean, you don’t spend your time on things you can’t control. I have had people ask me if AI is going to destroy the planet in their lifetime. People have asked me that; it’s a fair question. I personally don’t spend a lot of time worrying about it because, so far, all of history has shown that humans are the ones who cause damage to other humans.

And to your point, so far, and I hope it continues, we will continue to ultimately end up better off. That’s why most of what I talk about is responsibility, because along the path, the goal is to end up better off and to limit the amount of trouble we create between now and then. One of the reasons I feel AI literacy is so important for everyone is that ultimately you want to have a say in how we get to be better off. The more people who have a say, the more likely we’ll end up better off. I like to use nuclear technology as an example. Nuclear power supplies a good chunk of the homes in this country, but along the way we had some notable issues. The good news is that the notable issues are few, and there has been a massive worldwide effort, ongoing ever since the creation of those weapons, to keep it that way. That’s pretty much what AI needs: if you have it, you have to navigate it and manage it toward a good outcome. And the good outcome is amazing. We just need to make sure we get there, which we absolutely can.

The goal is to end up better off.

Now, going back to what people are afraid of, to be honest, I think they don’t know what they’re afraid of. And that’s okay. My goal is to get them to a point where they have an opinion.

Yeah. I think that public sentiment around new technology really depends so much on how it affects that person individually. For example, TikTok, right? Just recently, they passed this bill mandating that it be sold or banned, because of concern about its association with the Chinese Communist Party, at least here in the US. And I remember specifically when TikTok came out; now people love it, right? There are 170 million people on TikTok. It’s something my family uses and my friends use, and they gain a lot of enjoyment out of it. People don’t necessarily realize the negative aspects until after the fact. With AI, I feel like it’s the reverse. When ChatGPT first came out, I thought it was amazing. My mind was blown that I could write a letter in five minutes as opposed to maybe an hour before, and it’s a really nice, beautiful letter; it doesn’t take a lot of time to type a ChatGPT prompt. But almost immediately, you had this fear, right? And I wonder if that’s due to all of the dystopian science fiction we’ve had around new technology. When you’re teaching people, you have a unique insight into what people outside this field feel, because the AI researchers I read about in the news, like Elon Musk or Sam Altman, feel like you do: a significant amount of their concern is about the responsible use of this technology. I don’t know if that’s necessarily the same for people who are not as exposed to it.

Insights about AI

It varies. For example, I spend a lot of time working with teachers, and there’s a lot of balance in their views. They see potential, including personal potential for themselves in making their day more efficient, but they worry about what it is teaching the children. As a very simple example, they might observe differences in responses generated by AI based on the gender of the user, prompting questions about its influence on students. Teachers aim to strike a balance between leveraging AI as a learning tool and addressing privacy concerns. Some adopt innovative approaches, encouraging students to use AI tools while teaching them critical thinking skills. Overall, it’s about finding the right balance: being enthusiastic yet cautious. I’ve noticed that most people I interact with aren’t consumed by fear or unbridled excitement; rather, they maintain a balanced perspective, cautiously optimistic about the potential outcomes.

What kinds of tools are you introducing to first-time users of artificial intelligence, people who really have no skill with it? What are you introducing them to?

So we’ve actually built a platform we call Navigator. It enables anyone to build an AI in about five minutes. Anybody we teach, we have them build a simple AI to detect sentiment in text. Almost everybody we teach builds that AI in the first hour of our interactions with them. Then we encourage them to interact with the AI, find its mistakes, give the AI more information to fix the mistakes, and watch the AI respond to the new information. They examine the data, they examine the AI’s behavior, and they examine ways to improve it. It gives them a very good perspective on the process.

What kind of data are you using to create these? The way I think about AI, I look at it as a CSV file that’s filled with information, and the AI is able to sift through that information. Are you giving them data? Are they creating it themselves? How are you introducing, say, a young kid to this? Do you have a data set that they’re already using?

We have a data set that was crowdsourced by teenagers. So it’s kid-friendly; it has no profanity, nothing inappropriate for any age group, yet it has the complexities of sentiment. You can have a conversation like, Oh, chocolate makes me happy. But this other person is allergic to chocolate, so chocolate is not a uniquely happy concept. It’s a question of association. We have a starter data set, and then we encourage them to build on it. We also have about 400 other curated data sets. Some are imagery, and some are text. This particular one is text: a CSV file with a lot of free-form text in it. Others are images; some are time series measurements of temperature over time, data gathered from sensors, things like that. So they can explore lots of different kinds of data and what it tells them.
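Navigator’s internals aren’t described here, but the loop the students follow (train a sentiment model on a small CSV of labeled text, probe its mistakes, add examples, retrain) can be mimicked with off-the-shelf tools. Below is a minimal sketch in Python using scikit-learn; the tiny dataset is invented for illustration and stands in for the crowdsourced starter set.

```python
# Illustrative sketch of the workflow described above: a small labeled
# text dataset, a trained sentiment model, then an iterate-on-mistakes
# loop. The example sentences here are invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "chocolate makes me happy",
    "I love sunny days at the beach",
    "my best friend gave me a wonderful gift",
    "I am allergic to chocolate and it makes me sick",
    "I lost my homework and feel terrible",
    "the rain ruined our picnic",
]
labels = ["positive", "positive", "positive",
          "negative", "negative", "negative"]

# Bag-of-words features plus logistic regression: about the simplest
# text-sentiment model there is.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# "Interact with the AI, find its mistakes": probe with new sentences,
# then append corrected examples to texts/labels and call fit() again.
print(model.predict(["the gift made me so happy"])[0])
```

On the real platform the data sets are larger and the model choice is automated, but the feedback loop, examine the data, examine the behavior, improve, is the same idea.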

I’ve heard from other guests on the show that the biggest barrier for a novice getting into the AI space is computational power. When you have a data set like that, are you using just a regular laptop? Are you outsourcing the computation? How are you getting that computing power?

So our platform is available to anyone via a browser interface. We provide the accounts for free, but we charge for other things that are more for teachers. You can basically go in and sign in with your Google account. The platform is on the Internet, backed by Amazon Web Services, where we run extremely efficient Python-based programs as serverless functions. And so the average AI costs, I want to say, about a hundredth of a cent to train.

We have had young people try to spend a dollar; they tried hard, and they couldn’t do it.

Interesting. And are any of these young people, like, using this technology to develop stuff for themselves?

Yeah. We’ve had many students build things for themselves. We run a research symposium every year where children from grades six through twelve speak about their projects. They’ve built everything from violin tutors to research in bioinformatics. Many of our more advanced students are published. Some of them have patents, all before college.

Because of their innovations. In fact, I can probably show you our research symposium, since this will be on video. Let me just quickly show you. Can you enable screen sharing, please?

Sure. For those of you joining just via the audio, Nisha is showing me the symposium. Actually, why don’t we send it in the chat, and I’ll include it with the media for anybody who’s interested? I would love to check it out, but we only have a limited amount of time, and I’d love to tail off this conversation with that kind of hope and optimism. The opportunity that you’re providing for these kids, I think, is really something cool. When I look at artificial intelligence, I see these big companies pioneering models like Llama and Grok, and I look at it as just another oligarchy, right? Another concentration of power. The Internet was very democratized for a long time, and then all of a sudden you had these big Internet companies, like Facebook, Google, and Amazon, that showed up as large organizations making it difficult for smaller players to enter the space. So do you feel like AI is still democratized, or is it more the latter, where these large companies are doing most of the movement?

Large companies with AI

There’s evidence of both. There are definitely a few large companies, not many, that have a strong hold on the compute, the data, the skills, and the talent it requires to generate the next round of AI. There is a real concern among a lot of nations. I do a lot of work with UNESCO, for example, and I work with some national AI strategy teams, and there is a serious concern that the majority of the AI built in the world is built by seven countries, and those AIs are the ones being used by other countries. But the other countries do not have the ability to influence it, right? Sometimes they don’t even have the ability to influence what it is teaching their children. So that’s one side of the argument.

The United States is definitely one. China and India, right? A couple of countries in Europe. We may not all agree on exactly seven; the figure comes from UNESCO, but you can guess. There is another side to it, however.

One reason AI has rapidly expanded is its frequent use of open-source platforms. Many initiatives promote the development of advanced AI through open-source collaboration, allowing individuals who have invested in upskilling to create cutting-edge AI models. Similarly, efforts to democratize access to data play a crucial role in AI development. It’s not a matter of one versus the other; both concerns and solutions are actively addressed.

On the flip side, there’s a notable quote I recently shared in a Forbes article. I had a discussion with a former colleague who heads one of the leading computer science departments in the country. His introductory AI course boasts a staggering 10,000 students, highlighting the demand for AI education. However, a major concern is whether universities will have the necessary computing resources to teach students the latest techniques. Alternatively, will students need to join large companies to access the massive computing power required?

While access to high-grade computing power may seem limited, examples abound of young students achieving remarkable feats with minimal resources. Despite lacking access to top-tier computational tools, high school students have produced groundbreaking work, demonstrating the potential for innovation at any level. Thus, the situation is nuanced; there are challenges, but also significant opportunities. The key question is how we navigate this landscape.

Yeah, it’s nice to hear that there’s an open-source community where people who are not associated with larger companies, who are independent of that ecosystem, have the ability to get into this field. I guess my question is, for people who are not computer science majors, has AI gotten to the point where it can be accessible to them? And I don’t mean accessible just by using it. If a kid has a great idea and he’s not a computer science major, is he able to create something of value with minimal computer science training?

And how does one do that? Because I’ve heard that with new AI models, they do the coding for you, correct?

That’s right. This is why, when we teach, we separate concepts from capability. There are a lot of advanced AI tools out there, very sophisticated ones, that will build AIs for you without your writing a single line of code. However, you still need to understand enough about how it works to know what to ask for.


If you don’t know what to ask, you are obviously not going to get what you want. That’s why the understanding is important; the coding itself is helpful, but not necessarily essential.

Okay, so can you give me some examples of those types of programs? If I were interested in using it, where would I go?

So, for example, we have the platform I mentioned, which we built, called Navigator. You can access it from our AIClub website, and it starts off with no code. You give it a dataset, you say what you want to predict, and it will run a bunch of AIs for you, help you improve those AIs, and so on. And if you want, at some point it’ll show you the code, and you can take the code, go off with it, and run it. That’s the way we introduce children: we start them off without code, teach them the concepts, show them how to build, and let them create. Then, as they get older, a good fraction of them express an interest in the code, but not all, and that’s okay. So that’s an example of how we create that path. It’s absolutely possible now to do that. And that’s one of the reasons I liked the article I wrote this week for Forbes. What I’m effectively saying is, jump on the moving train. Do it right now. The trains are moving, and they’re getting faster every day. Don’t sit on the sidelines trying to study how the engine is built. Jump on the train, and while you’re riding it, you’ll learn how it works.
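To make the "give a dataset, say what you want to predict" workflow concrete, here is a toy, from-scratch illustration of the idea behind such no-code platforms. This is not Navigator's actual code or API, and the fruit dataset and column meanings are invented for the example; it is just a minimal 1-nearest-neighbor classifier showing what "the code behind the prediction" can eventually look like to a student:

```python
# Toy sketch of the "dataset in, prediction out" idea behind
# no-code AI platforms. NOT Navigator's code; a from-scratch
# 1-nearest-neighbor classifier on a made-up fruit dataset.
import math

# 1. Give a dataset: (feature vector, label) pairs, here
#    (weight in grams, diameter in cm) with a fruit label.
data = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((110, 5.8), "orange"),
]

# 2. Say what you want to predict: the label for a new measurement.
def predict(features):
    # 3. The "AI" finds the most similar known example (1-NN)
    #    by Euclidean distance and reuses its label.
    nearest = min(data, key=lambda row: math.dist(row[0], features))
    return nearest[1]

print(predict((160, 7.2)))  # -> apple
print(predict((115, 5.9)))  # -> orange
```

Real platforms automate far more (trying many model types, tuning, and evaluation), but the shape of the task a student describes is the same: examples in, a prediction rule out.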

Yeah, cool. Well, thank you so much for joining us today. We’re getting close to the end of our time, and I wanted to ask you the questions that I ask all my guests, basically general questions, to see where you’re coming from and where you think the future is going. Personally, a lot of my inspiration for my interest in AI, genetics, and any sort of cutting-edge technology comes from science fiction. I really want to see the kind of utopian future that I see in authors like Isaac Asimov or shows like Star Trek. There’s a lot of dystopian science fiction out there. Like we mentioned, Terminator, I think that’s a really pessimistic view of the future, but there are so many optimistic views of the future, and that is where I gain my inspiration. What about you, Nisha? Where do you get your inspiration from?

To be honest, I get a lot of inspiration from the creativity of the people I teach. The one thing I have noticed in teaching, particularly with K–12 students, is that if you put the technology in their hands, the ideas they come up with are frequently cooler than anything I could have come up with. The other thing I’ve noticed is that, like you mentioned, ChatGPT creates a lot of fear among people. So when ChatGPT first came out, I made it a point to ask every student and every child I interact with, “What do you think?” And what I got was a collective shrug: basically, what’s the big deal? Have they used it? Absolutely. They are simply not impressed, because their imagination is so far greater than what this tool delivers. Whereas every adult is frightened or threatened at some level, these children are not threatened. And that, for me, is the most optimistic thing: their brains and imaginations are so far ahead that this tool, which blew all of our minds, yours, mine, even the minds of people who already knew about AI, simply did not blow theirs.

I respectfully disagree on one point: I don’t think many people were afraid of ChatGPT. I think a lot of people were excited about it. I think there’s a different component of artificial intelligence that people are afraid of: the existential-threat component. I don’t think people look at ChatGPT and think it’s going to take their jobs, even workers doing tasks that ChatGPT could very easily replace. I have a secretary. I think that eventually AI could probably do her job, but I don’t think I would want to trade my secretary for an AI anytime soon. And I think AI will make her life much easier, because it’ll allow her to do everything she needs to do in a fraction of the time and gain more human experience: the touchy-feely stuff, like talking on the phone with an actual person, being there for someone, being present, rather than thinking about the hundred other tasks she has to do. So I guess I have a more optimistic view of it. But I totally agree with you about youth inspiring people. I teach over at Tufts, and I see these college kids. They’re just so full of hope and so energetic. It makes me feel young, and it inspires me to do more as well. So I really appreciate that, and I totally agree about the imagination of youth being an inspirational force. What about the other technologies having breakthroughs at the same time as artificial intelligence? I always ask my guests: aside from your own field of interest, which for you is obviously artificial intelligence and computer science, what are some other technologies you’re reading about in the news that really excite you? For me, I’m in medicine, and I can’t wait to hear about the next robotics article, the next AI article, or the next genetics article.
There are a lot of different technologies that interest me. For example, consumer robotics. I can’t wait until I get that consumer robot that’s able to fold my laundry for me. Every time Atlas or, you know, Tesla puts out a robot video, I’m one of the first to click on it. What about yourself, Nisha? What are you seeing in the news that really excites you, that you can’t get enough of, outside of your field of expertise?

I personally find space exploration very interesting. Yeah, I think that’s definitely one of the things.

Yeah, I agree. SpaceX is doing some pretty amazing stuff, and hopefully we can become a multi-planetary species very soon. It’s certainly inspirational; it’s something that everybody gets inspired by. So I totally appreciate that. Obviously, as you can see from my background, I’m a big fan of space exploration as well. So, last question. Where do you see AI in ten years? What do you think the future is going to look like with artificial intelligence?

I think it’s going to dramatically change the way we live our daily lives, and it’s going to dramatically change the way we think about work and what it means.

What do you mean by that?

So, GPT-4, as an example, outperformed most high school students on all the standard Advanced Placement exams. It passed the bar exam and did better on the USMLE than most of us doctors. Now, does that mean that humans will be replaced? It should not. Some people will think it will, but ultimately, it will change what it means to be in each of these professions.


And since I come from the education space, fundamentally, what it tells us is that the techniques we’ve been using to judge human competence can no longer be used.


And so, that’s what I mean. It changes what it means to learn, and it changes what it means to work, and that is what the next decade will bring. One of the reasons I focus so much on creativity in children is that, ultimately, I am more interested in what they can create with these technologies than in how well they do on an AP exam that the technology has already mastered.

Right. Yeah.

That is the change of the next decade. So, I mean, if you’re a parent (I’m personally the parent of a 15-year-old), why would I want her to spend the next three years of her life studying for an exam that the AIs of today can already pass?


It is not a good use of her brain. So those are the changes that will occur.

Yeah. So I have a two-year-old. Any thoughts on how AI will affect her personally from an educational standpoint?

I think the education system will change a lot around her, and by the time she is in high school, hopefully it will not look anything like it does today.


And that has to happen. But these things take time. It’s not because anyone is doing anything wrong; it’s just that change takes time. So I would say encourage creativity, encourage immersion, and so forth. Basically, help her become a human who can thrive.

Yeah, that’s fair. It’s a nice place to end it. With the ideas that you’ve presented, I’m sure I’ll be thinking about them for my own family. So I really appreciate your insight and your wisdom. For those of you who are listening in, please, as always, like and subscribe at the bottom. And, as always, I will see you in the future. Have a great day, everybody.


About Nisha Talagala

Nisha Talagala, CEO and founder of AIClub.World, is passionate about fostering AI literacy among K-12 students and individuals globally. With a wealth of experience in introducing Artificial Intelligence to learners of all levels, Nisha has played a pivotal role in advancing the field.

As a co-founder of ParallelM, a pioneering force in MLOps acquired by DataRobot, Nisha has been a driving force in operational machine learning. Her leadership extends to organizing the USENIX Operational ML Conference, the premier event bridging industry and academia on production AI/ML. With over 20 years of expertise spanning enterprise software development, distributed systems, and technical strategy, Nisha’s contributions are underscored by a PhD from UC Berkeley, 75 patents, over 25 research publications, and frequent speaking engagements at industry and academic events. Additionally, she contributes as a writer for Forbes and other esteemed publications.



By: The Futurist Society