People are worried about the impact of AI on the future of work. Will technology make human involvement obsolete? What will happen to us as AI continues to grow? This episode’s guest, Dan Turchin, is here to alleviate your worries and shed light on the positive future of AI in the workplace. Dan is the CEO and Founder of PeopleReign, a leading AI platform for IT and HR employee service automation that is reshaping the future of work. With over 25 years of experience in applying AI and machine learning to enhance employee work life, Dan shares valuable insights into PeopleReign’s mission of positively impacting the work life of a billion employees by eliminating friction at work. He offers an optimistic perspective on AI’s role in eliminating mundane tasks, allowing humans to focus on more fulfilling and creative aspects of work and life. They also talk about the collaboration between humans and machines, the impact of AI on jobs, and the need for reskilling in the face of automation. Tune in now and gain a brighter view of the amazing possibilities AI technology holds for our future.

The Future Of AI And Work – A Conversation With Dan Turchin

We have Dan Turchin. He is the CEO of PeopleReign, an AI company, and a thought leader in the space of AI and the future of work. He has a podcast called AI and the Future of Work, which is doing cool things and talking to cool people. Dan, thanks for being with us. I appreciate you being here. Tell us about what you’re doing at PeopleReign and how you got into the AI space.

I’ve been in and around the space of applying AI and machine learning to improve work life for employees for several years. It’s the seventh company that I’ve started in and around pursuing that mission. I firmly believe that the reason I was put on this planet is to positively impact the work life of a billion employees. I’m not quite there, but I will get there.

The team at PeopleReign is incredibly motivated and passionate. We wake up every morning excited to create value and help employees bring their best selves to work. PeopleReign is an AI-first platform that automates the delivery of employee service. We firmly believe that when you take out all the friction at work and the things you hate doing, do you know what’s left? It’s the things you love doing.

There’s a reason that everyone signed up to do their job. It’s not because they like the mundane parts of the job, like submitting PTO requests or checking the balance of their 401(k). They signed up to do something that they’re passionate about. Technology can bring out the humanity in everyone. When we do that times a billion, we’ve put a dent in the universe that will create a lot of positive energy for the planet.

I’ve also invested in about 30 AI-first companies. I’m a firm believer that there are a lot of approaches. I’m always inspired by the entrepreneurs whose journeys I have a chance to be a part of. My passion project is AI and the Future of Work, where we’ve published about 250 conversations with amazing thought leaders, technologists, futurists, academics, and entrepreneurs. I’m proud of that corpus of conversations. When I look back over the arc of those conversations, I see that they shaped a lot of what I think. It’s why I am an AI optimist and on Team Human. I am happy to share any of those perspectives with you and the audience.


The Futurist Society Podcast | Dan Turchin | AI And Work


When people think about the future of work, the thing they’re most worried about is that AI is going to replace their jobs. I look at it in a much more optimistic light, similar to what you’re saying. It’s going to take away all the mundane aspects so we can get back to enjoying what we do and have robots or machines handle all the stuff that we don’t want to do. You’re close to the ground with this stuff. What would you say to people who feel that way? What are some indications that make you feel like this is going in that direction, a more beneficial direction?

I believe that anything that can be predicted is better left to machines. Anything that requires rational thinking, judgment, or empathy is and always will be better left to humans. That part of the work can’t be outsourced to a bot. AI and machine learning, in physical or digital bots, are well suited to tasks that fall into three canonical categories: they’re dull, dirty, or dangerous.

Those kinds of tasks weren’t well designed for humans. I want us to rapidly approach a point where no human is ever sent out to drill for oil on a rig in the middle of the ocean, and no human is ever sent into a minefield to detect where the mines are. For lack of good technology, humans have been doing those jobs for decades, centuries, perhaps even millennia, maybe not the mine detonation part. We need to shift the paradigm and realize that a tight collaboration between humans and machines is what’s required to bring out the best in humans.


We can dedicate tasks, jobs, and careers to a combination of machines and humans without it ever feeling like some creepy dystopian world. That’s a beautiful world, where humans are doing the stuff that makes us most human. When we think about job elimination, I’ll be the first one to say that jobs that fit in those three categories I mentioned are good candidates for elimination. Then think about the other kinds of careers and opportunities they’ll spawn, for technicians who manage what the robots do on oil rigs, or analysts who look at the data and tune the algorithms. Massive numbers of more interesting, more fulfilling jobs will be created in the wake of replacing the ones that are dull, dirty, and dangerous with machines.

I don’t have a good answer for people who are worried about the negative aspects of how AI will affect their day-to-day jobs. Are they going to be replaced by a robot? What would you say to people like that? I agree with you that some of the more dangerous jobs should be replaced, but what about a desk clerk? That is somebody who could be replaced once we have artificial general intelligence, and that person is worried about their job security. I don’t think it’s going to be disruptive, and if it is disruptive, it’s going to present new opportunities. But that’s my belief, and I don’t have the same access that you do. What would you say to somebody like that?

I speak all over the world to audiences of executives and leaders, but also frontline workers. My advice to anyone in a role like that, maybe a traditional desk job, a dangerous job, a factory job, or the others I mentioned, is that this is a good opportunity and a good time in history to step back and say, “If my job is that easily automated by a machine because what I’m doing is pattern matching at scale or something that’s highly repetitive, it might be a good time for me to think about reskilling and upskilling, not in a way that should make me fear the future, but in a way that should make me optimistic about the fact that I can reinvent myself and potentially do something more exciting.”

I guarantee you, no kid grew up being passionate about wanting to screw caps on tubes of toothpaste all day or fasten the bolts on the chassis of an automobile on an assembly line. This is a good opportunity to think about that passion and skill that you always wanted to take up. Now is a good time to be thinking about reskilling and upskilling. That’s more of a blue-collar thing, but with respect to white-collar jobs, AI is well suited for doing tasks, not for doing jobs. In a corporate setting, there always needs to be a human orchestrating the tasks that machines may do.

As I alluded to, empathy, coaching, and rational thinking can’t be outsourced. As a society, we’ve learned that much from the past year of hyperventilating and experimenting with LLMs: we can’t rely on their output. That pattern of next-word prediction is great at generating credible nonsense, but I don’t want a bunch of human employees replaced by anything I’ve ever seen come out of an LLM. I’m firmly on Team Human because I understand what the capabilities are, but also what the boundaries are of what the technology is capable of doing.

It’s going to be an adjunct to humanity, and the work is going to have some quality control when it comes to its output. Still, the fears that people have are real, and people like us who are optimistic about the future have to address them and give them some credible answers. The way that I look at it is that the smartphone you have is a powerful tool, and AI is going to be something similar.

It’s going to be something just as powerful, but much further reaching. When you frame it like that, people tend to identify with the benefits of artificial intelligence. How far away are we from this stuff? I would love to have a personal assistant who manages all the administrative work that I have to do as an adult in 2024. Fusion technology is always ten years away, but I see a huge leap like ChatGPT, and it got a lot of people excited. For people like us who are excited about it, when can we start expecting this technology to trickle down to actual use cases?

I’ll give you three time horizons. Within the next five years, all of the current limitations associated with LLMs will go away. Some of those are pragmatic limitations: the cost, the uncertain regulatory future, copyright protections, the generation of content that can be dangerous, and the guardrails applied to LLMs, both technical and cultural, that are holding back broader adoption. In 2023, it’s still nascent. Within five years, we’ll have answered a lot of the questions about how the technology can be exercised responsibly, and it will become part of the connective tissue of work and life.

In the next ten years, I firmly believe that every person on the planet will have a handful, no more than a dozen, of digital co-pilots (to use the Microsoft term) to guide their daily life. There’ll be a digital you that will be accessible, not through a traditional mobile phone but through a wearable, whether a pin or something embedded in your clothing.

What you’ll find is that the environment, whether it’s your home, your place of work, or your commute, will have intelligence built into it. It’ll be ambient. These sensor networks will be enabled by great advances in compute throughput and network bandwidth, which we’re on the cusp of. Within a decade, we’ll see things like quantum computing mature to the point where a lot of those fundamental technological limitations go away, enabling this world where you get the best advice from these co-pilots. It won’t be creepy.

You’ll give it a goal like, “I want to be healthier. I want to add life to my years. I want to make sure I’m on time. I want to spend an extra hour a week with my kids or my spouse.” These are tangible goals that could be better achieved with the benefit of a digital you, a version of a co-pilot, helping you make better decisions throughout your day. That’s within a decade.

I’ll go as far out as 30 years; beyond that, it’s fuzzy. We’re going to start to think about a different version of our species. Not to come across as being in the realm of science fiction, but this notion of what it means to be a human will change in the next several decades, call it 30 years. I firmly believe that the traditional limitations of what constitutes a carbon-based life form will merge with a silicon-based life form, and we’ll naturally think of ourselves as a combination of the two.

The trend that will happen in the next ten years will be amplified over the next couple of decades, and there’s even some foreshadowing now. To take it out of the realm of science fiction, look at the Olympics; we’ve got the next summer Olympics coming up. In the pool, the swimsuits are made of new materials: amazing developments in materials science that have reduced the friction against your skin in the water.

Over the past few decades, we’ve seen things like that change what it means to be human, because we’re able to test traditional limits. Or take prosthetic limbs that are smart enough to mold themselves to your body. Anyone with a disability, anyone who feels like they’ve been disadvantaged or held back, is going to benefit: AI-driven prosthetics and other things that augment traditional human capabilities are going to empower the most underserved and underprivileged.

The traditional digital divide will go away. New technologies are emerging that feel like science fiction and that will open doors for anyone on the planet. A kid in an impoverished town in Zimbabwe will have all the access to a modern education that a kid in an affluent area of Boston has now. That’s where we’re headed in a couple of decades, and that’s a world I’m excited to be living in. Over the arc of history, calling it 30 years away makes me optimistic that it’s going to happen in our lifetimes.

That’s an exciting future to live in. What sets us back when we’re talking to people who are less optimistic is that they don’t have that optimistic vision of the future to look towards. There’s so much pessimism associated with the future; every new technology is associated with the worst-case scenario. I can understand that after the social media explosion and seeing all the negative aspects of it. Everybody realizes that technology might not be this amazing thing that we should always be interested in getting more of. Since we are talking about extrapolations, let me ask what I ask everybody in artificial intelligence: do you think that this is an existential threat to humanity? How do you feel about that?

I am an AI optimist, and I’m sitting in the cradle of Silicon Valley, where a lot of deep attention is being paid to how technology is exercised responsibly. I had an interesting debate on my podcast with a famous AI ethicist named Merve Hickok, who runs a well-known AI ethics website. She was taking more of the doomer side of the debate, and I was taking the accelerationist, or optimistic, side.

When it comes to that topic, at a philosophical level, humans who are brilliant enough to design and deploy these technologies are also brilliant enough to make sure that they don’t lead to the demise of humanity. There are a lot of us in that camp who feel like we are thinking through these implications. We are embedding kill switches and things that should prevent the bots from going rogue.

Humans, who are brilliant enough to design and deploy these technologies, are also brilliant enough to make sure that they don’t lead to the demise of humanity.

There’s always the canonical example of what we don’t want to happen, often a reference to the medical field: if you train an AI and tell it that its job is to eradicate cancer, it might conclude that the best way to eradicate cancer is to destroy humanity, because then there won’t be cancer anymore. I don’t believe that’s likely to happen because, as a technology community, we are thinking through the implications of things like that. It’s incredibly important that we get regulation right. Nowadays, the state of regulation is immature, even what’s happening with the EU AI Act.

How do you feel about the EU AI Act?

It’s certainly a lot further along than the United States. We have the AI Bill of Rights that was proposed, which was, politics aside, a political charade. My concern is that if we as a tech community rely on the Federal government to enforce these guardrails, we give big tech a ten-year window to grade its own homework. That tends not to work well.

This is happening, and I don’t want to take the doomer side. Organizations are being responsible. They’re thinking about what it means to embed some of these controls into the AI and to have a gold-standard set of tests for what would happen in a worst-case scenario, to prevent the bots from going wild. There is an appetite for self-regulation. But looking at every piece of legislation I’ve seen, whether in the EU or what’s pending in the United States, China, the rest of Asia, and other places, we’re naive if we think that the pace of regulation can keep up with the pace of technology. It’s the most important issue that we face.

I had mentioned a timeframe of five years to make sure that we enforce the right guardrails. Effects on mental health are among the things that could go wrong. We have to make sure that bad actors, and there always will be bad actors, aren’t allowed to accelerate their nefarious behaviors: hacking bank accounts, bringing down planes, disrupting the energy grid, the things we were always afraid of but that could now be more accessible with AI. We need to pre-answer those questions. But I want to make the point that there’s an appetite and there’s a lot of good work going on. I’m going to give us the benefit of the doubt that we’re maybe five years away. These threats and dangers are well acknowledged.

It’s going to be more of a symbiotic relationship. In that symbiotic relationship, AI is going to depend on us, and we’re going to depend on AI, so there remains a real need for humans to be around. I hope that’s the case, and it’s nice to hear that there are people who are thinking about these things. That might make people feel better about it.

What I wanted to do is transition and ask, “What are you most excited about?” There are many applications for artificial intelligence, so it’s hard to pin down specific things people can get excited about. I get most excited about having a robot butler, somebody who can water my plants, fold my laundry, and do it in a way that is seamless, where I could just tell them what to do. That’s not a difficult ask, but you have a different perspective. I feel like you know things our readers might not realize or think about. What are you most excited about regarding artificial intelligence?

I’ll give you three different answers to that question. My life’s passion is improving the work experience. I want work to no longer be a four-letter word. We’re on the cusp of a work culture, aided by technology, where everybody gets back 4 to 6 hours per week of productive time. With those 4 to 6 extra hours per week, we’ll have a decision to make about whether we want to pursue another project at work.

Or our vision for ourselves may be to get out of the office an hour earlier every day, get to the kid’s soccer game, be at the recital, be a better spouse, parent, and friend, and pursue a hobby. I get excited about a global community where everybody feels like the best version of themselves because they get back those 4 to 6 hours per week to do the thing they feel like they were put on the planet to do. I’m optimistic. Based on what I know we’re doing at PeopleReign, that’s close.

I’ll give you two other examples. My kids are 13 and 15. I think about the world of work, but also the global society, that they’re going to be graduating into in a couple of years when they enter the workforce, and I think about all the opportunities they’ll have. We started this conversation with the ethics of AI, but consider every aspect of our lives: from education to the legal profession, law enforcement, medicine and healthcare, transportation, logistics, and hospitality. We are on the cusp of amazing opportunities to reinvent everything we thought we knew about every one of these fields.

As a parent, I’m excited that my kids have an opportunity to change the world, because it’s going to take a bunch of smart, ambitious people to make this transformation happen. That’s number two, which makes me optimistic. The third is that, as you know better than anyone, the population is aging in the United States and around the world. I think about taking care of my mom, who’s aging. I want nothing more than for her to have more life in her years.

I believe in some of these advances in technology: a home assistant that can keep her mind stimulated, make sure she’s cared for, and be a wise best friend that’s always at her side and gets to know her. Not just to stimulate her mind, but to keep her safe, check in on her, and do, far better than I can, all the things that, left undone, can lead to premature aging.

These are logical applications of the technology. When I think about taking care of the most vulnerable in our population, there are many obvious advantages that we’ll have in a world where some of these AI technologies are mature. That’s a third aspect of the future that makes me optimistic about why we have to keep building.

Who do you think is doing some of those things the best? Are there any companies that you’re like, “If they get this right, it’s going to be like another ChatGPT moment?”

I’ll give you a couple of examples. I’m biased because of the guests I’ve had on my podcast and the companies I’ve invested in; by no means is this exhaustive, so call these illustrations of what else is out there. I had a guest on the podcast, Dr. Shiv Rao, who’s the Founder of a company called Abridge, which is reimagining the patient-doctor experience. It’s predicated on the fact that a lot of the interactions patients and doctors have are driven by compliance obligations and regulations, so patients often feel like they’re ignored or that the healthcare system lacks humanness.

What Abridge is doing is using transcription technology and AI to make sure that doctors can be more present, having the AI write up the notes, summarize the calls, and handle the things that create artificial barriers between patients and doctors. That’s one healthcare-specific example; the company is called Abridge, and there are others like it out there. A company I invested in is in another domain. It’s called Apprentice, and it develops an AI-first adaptive tutoring system for K-8 science and math.

That is going to be a big deal. I can’t wait until that comes to fruition.

It’s changing the world. It’s unbelievable to see the future of education when you see how much more engaged students are when they have a personalized tutor that learns with them. It’s amazing what the impact is in the classroom setting, where teachers get a real-time view of where students are struggling and can spend their time one-on-one coaching, mentoring, and doing the things they love with the students who are falling behind.

Everybody is made better. That’s why I like to take the example of an underprivileged student in a remote part of the world who is perpetually left behind those who have the advantages of modern technology in the Western world. When I think of democratizing access to the best educational resources using AI-based adaptive tutoring, like what Apprentice does, I’m like, “How could you not be optimistic about what’s ahead, knowing that?”

We’ve got a set of ethical questions. We’ve got to make sure it’s not done in a way that disadvantages anyone, could lead to harm, or leads us down weird dark alleys. We’ll solve those problems, because the upside and the positive impact on humanity are significant. It’s a problem we must solve. Sitting in the cradle of where this technology is developed, I’m seeing some of these problems being solved. Those are my two examples.

The positive impact of AI on humanity is so significant.

I have kids. I have a two-year-old, and I look at her, and it’s almost like a plant: you give it the right amount of sunlight, fertilizer, and soil, and it’s going to blossom to its full potential. But each plant isn’t the same; each needs a different amount of soil, fertilizer, sunlight, and water. If you have a kid who’s advanced, you could have this tutor feed their brain all the fertilizer it needs to become the best version of itself. Sometimes school is limiting because you’re held back by the other people in your class. Having an artificial tutor that’s able to feed you information at the pace you’re able to take it in is going to be interesting to watch.

It’s not only from an educational standpoint; I also think from an emotional standpoint. There are times when, as a parent, I’m preoccupied or feeling bad myself and not able to give that kid the amount of emotional support she needs when it’s necessary, and who knows what the long-term ramifications of that are. What if you did have some artificial best friend that’s able to give you the emotional security you need? It would make for a much more fulfilling life for these kids.

It’s going to be interesting to see how the kids growing up now interact with AI. Frequently, at dinner parties with friends who have children the same age, we talk about how our kids will have a more significant relationship with a robot or an artificial intelligence that can meet their needs than any other kids in the history of humanity. Who knows what the implications of that are? I think it’s going to have good implications, because it’s going to give the kid exactly what it needs, whether emotionally or educationally, when it’s needed. How do you feel about that? You have kids. Are you excited about that?

I want to have a version of this conversation with you in several years when your daughter is in kindergarten; we will have learned so much between now and then. I believe that your daughter is going to have a vastly different experience in kindergarten than you or I had, in a positive way. From birth, kids will have been enriched by systems that get to know them and assist the parents. It’s not a replacement for parenting; that parenting journey doesn’t change. But it might mean that your daughter starts kindergarten with a massive advantage versus what you and I had throughout our educational system.

I have two daughters. My daughters have to memorize arcane chemical formulas in O-chem just as I did. That was a waste of time. I’m not a doctor, and memorizing chemical formulas is not something that has benefited me in my professional life. We should peel away all the wasted time in the educational system and introduce things that are more relevant for whatever profession your daughter and my daughters choose. That’s why I’m a big believer in the benefits of pursuing creative skills and thinking about ethics, the things that a bot will never automate.

I love the idea of redesigning the educational system, stripping out all the stuff that’s mundane, unnecessary, and antiquated. That whole thought about what it means to have a copilot as part of your parenting journey resonates with me. As a parent, I’ll feel much better when the school system is completely overhauled. Higher ed is broken.

The pressure will be off for parents to respond to every single need. There’s parental pressure to fulfill every little facet of the needs that your kids have, whether financial, emotional, or educational. Having a copilot, even from a parental perspective, is going to be interesting if we frame it as something that takes the pressure off of some of these mundane but, I would argue, important tasks. I want to be there for every recital and every time she falls and hurts herself.

If I can’t be there, it would be nice to have something there that is able to help her in my stead. That framing of co-piloting is going to be much more palatable for people who are less optimistic about artificial general intelligence. I don’t know if there’s anything out there like that, but I think it would be interesting.

A lot of these assistive technologies aren’t just for your kids and my kids, but also for kids who have learning disabilities or other challenges. The next generation of parents and kids will have a better relationship because, as a parent, their safety and what happens to them when they’re in school won’t be something you’re as concerned about. It’ll be easier to supplement what they’re getting when they’re not under your direct supervision and to feel good that the digital version of them is delivered through apps, services, or bots that you trust. You have access to the guardrails and the recommendations they’re making, and you control the targets, what you want those digital companions to do.

Assuming we get all those pieces right, which is critical, we enter a world where, as parents, we can focus on doing the things that we as human parents do best: loving our kids, treating them with respect, and being excited about all the enrichment we bring to them, without feeling like something could go wrong when they’re out of our direct supervision. It may sound Pollyannaish to say, but that’s what’s ahead.

We enter a world where, as parents, we can focus on doing the things that we do best, which is to love our kids, treat them with respect, and be excited about all the enrichment that we bring to them.

Do you worry about incentives? When you say it’s Pollyannaish, you have this idea of what’s coming down the pipeline. But for a lot of people, especially given all the negatives of social media technology, or of capitalism in general, the incentives are such that companies are meant to use human beings like commodities and capture attention in the interest of financial profit. Do you worry that’s something that might get in the way of some of these more humanistic philosophies you’re talking about?

We have a saying in Silicon Valley: “If you’re not paying, you’re the product.” It is incumbent on parents to understand the capitalistic motives behind the services they use. We’re in need of a reset; social media caught a lot of us off guard. And this is coming up, not just in the United States, but globally, because 2024 is a big year for elections around the world.

It is incumbent on parents to understand what the capitalistic motives are behind the services they use.

One of the challenges is that we need to make sure we understand what the source is and who was motivated to produce the content that we see. We’re going to be having this conversation over the next several months, if not the next decades, about what truth is. Can you believe the thing that the celebrity said? Was it synthetically generated?

Was it a deep fake or not?

There is going to be a level of responsibility that we can’t outsource. We certainly can’t forget that publicly traded big tech has a profit motive, so we always need to be careful. It may mean that these services won’t be cheap or free. We may choose to pay for them, because when you pay for them, you can unlock the capabilities that will allow us to trust them.

An important theme, and this is the more dystopian term, is what it means to be living in a post-truth world. How can we get more comfortable asking questions like: what was the motive behind this content I’m consuming, which could potentially have significant ramifications for me or the people I love?

AI is going to be beneficial there, because the amount of time it takes to fact-check things is too much for most people. Having somebody you could easily ask, “Is this real?” and have the AI tell you, “No, it’s not. It’s a deep fake,” is going to be much more helpful. We don’t have a lot of time left, but my production team told me you’re big into science fiction, especially utopian science fiction. That’s something I always love talking about, because I don’t know if enough people are exposed to it enough to realize how many beneficial possibilities other people have thought about.

What do you feel is the shining star for you, where it’s like, “If we could get there, that would be awesome”? For me, it’s Star Trek. It’s a post-scarcity society where technology is a real help. Most of the admin stuff is gone, and people can focus on being the best versions of themselves. They’re not working a three-day work week; they’re working hard at bettering themselves, exploring, and making new, interesting advancements for society. What about yourself?


The Futurist Society Podcast | Dan Turchin | AI And Work


I have two book recommendations. One is Life 3.0 by Max Tegmark, a professor at MIT. It gets slightly dystopian. It was written close to a decade ago, but it’s still relevant, and it takes place several years out. I share a lot of his vision, or some of his reality. Another good book is by Dr. Kai-Fu Lee. He was the first head of Google China, became a venture capitalist, and runs one of the most successful AI companies in China. He co-authored a book of tales from around the world and different cultures, set about a decade into the future, asking what life will be like.

A lot of what I believe is consistent with what Dr. Kai-Fu Lee and Max Tegmark describe in their books. I believe in a future that’s not too far off, where a day in the life is different from what it is now, and technology is so deeply integrated into our lives that we no longer even think of ourselves as cyborgs. It doesn’t feel like dystopian science fiction. It feels utopian.

Imagine giving one of a dozen copilots a directive. I don’t know about you, but I don’t feel like I make the optimal decisions based on my values or priorities every day. As a techno-optimist, I would love to be guided through my day, whether it’s a daily budget or how much time I should leave to get to the airport. Even mundane things: you walk into the office and the conference room is booked for you; your new hire has a laptop sitting on her desk. It’s the things that gum up the works. Going through your day, you or I could come up with 60 examples of where life would be better if we had a wise friend who was always there and cared about us.

I got you, Dan. I’m going to order that laptop and put it in that conference room. It’s ready to go. That’s something I look forward to. The administrative work of being a human being in 2024 is overwhelming: the taxes, the insurance, is the cable bill paid? I have to automate all this stuff because otherwise I couldn’t keep up, but I would love to have a wise friend who’s checking up on it for me. I can’t wait until that happens.

If you look at science fiction, in all utopian societies, whether it’s Star Trek or Isaac Asimov’s series, where human beings have this amazing experience of life, AI is the common denominator. We have to have this technology to get to the next level. For a lot of problems like admin work, labor, and scarcity in general, for us to move past them, we have to have some help. Whether we like it or not, this is coming, and we have to address it. A lot of the fear says we need to pump the brakes. We can’t pump the brakes. We have to address this issue, because it holds the most benefit for our society. How do you feel about that? Do you agree?

Roll back the clock to the early 19th century and ask the Luddites in England how it worked out for them to be on the wrong side of innovation. There are countless other examples, like the printing press or the automobile, where those who feared technology were left behind. This is another case where the dangers AI could pose, which are real, are used as an argument for slowing down.

Slowing down is much more dangerous than speeding up because, as we think about the ethical challenges and potential downsides, we can’t accelerate our ability to react to them until we first agree as a community to continue innovating. That’s what has always advanced society. This time, the stakes are higher, the technology is better, and the pace of innovation has never been faster.

As a community, we need to keep embracing all the positives that have already resulted from these massive advances in technology, keep our eyes focused on the big global problems they will solve, and trust that we are smart enough, and care enough about the future of the human race, that the shared vision we’ve been talking about is motivation enough to continue innovating.

Last question before we get into the three questions I ask all my guests: how do you feel about working from home? Artificial intelligence and its intersection with work is a big area for you, and opinions vary from person to person. I wanted to hear your thoughts because my wife is a big proponent of it and I’m not, so it’s a discussion we have. How do you feel about it?

I’ll caveat this comment with an important nuance about working from home: it doesn’t apply to everyone. I realize that people working multiple jobs, frontline workers, and those in certain professions don’t have the luxury of even having this conversation. I respect that. It’s easy for you and me, with all the benefits and privileges that we have.

I don’t have the ability to do that. I have to go to the operating room. There’s no operating room in my house yet.

I’ve got a cushy desk job. I spend a lot of time on airplanes, but otherwise I’m always respectful of all the employees who make the world work. I’m on team human. I believe that using technology to make us the best versions of ourselves creates the planet I want my kids to inherit. For employees who are also caregivers at home, whether for elderly parents who are disabled or less mobile, or because they have to balance work with childcare, I don’t want to stigmatize working from home, and I’m glad the pandemic forced us to have this conversation.

I also believe in the power of teams and collaboration. Some of the most innovative corporate cultures have figured out how to accommodate hybrid work without disadvantaging employees or creating cultural bias. I believe that what’s right for the employee is also what’s right for the employer. I’m firmly in the camp that there’s no one right answer other than what’s best for the human doing the work.

Some of the most innovative corporate cultures have figured out a way to accommodate hybrid work without disadvantaging employees.

The way I look at it, I like the culture of work. You leave your house, go into the office, and have a cup of coffee with your friends from work. That creates a sense of camaraderie with the people you see on a daily basis. That’s something I would miss. To be quite honest with you, I’ve never had the luxury of working from home. I always have to go to the hospital and put my time in.

It’s something my wife and I have talked about, and with somebody like you who has a keen interest in AI and its intersection with work, I wondered what your opinion was. It’s been interesting to talk to you. We do have to close soon, but I wanted to end the conversation with the same three questions I ask most of the people who come on the show. The first is inspiration. A lot of my inspiration for the things I do comes from science fiction and cutting-edge technology. For whatever reason, it gets me excited, and it’s different for different people. What inspires you? Where do you look for your inspiration?

Given the why that I described at the top of the conversation, it should come as no surprise that I derive inspiration from my customers. At all my companies, and certainly at PeopleReign, they are why we do it. We have this vision: we’re out to impact a billion lives at work. I encourage everyone on my team to spend as much time as they can learning about what inspires our customers, because that has to be what inspires us.

I don’t like traveling for work, but I’m always happy to be on a plane anywhere, anytime, to learn from our customers. The greatest source of inspiration is when it stops being academic. A lot of what we’ve been talking about seems academic. It becomes real when you sit in a call center in Manila, or you talk to a kid just out of college who says, “This product is giving me back six hours a week.” With respect to my vision and my why, that’s always been my source of inspiration.

Let’s fast forward ten years. What do you think work is going to look like, specifically with AI as a huge component of it? Are we going to be working three days a week? Are we going to be working from home? Is everything going to be telerobotics? What do you think work will look like?

The future of jobs is such that we will not have one business card with one logo on it, where you punch a clock and go to work every day. What we talk about is a “work net” versus a workforce. I give credit to Dr. Gary Bolles, a great guest on my podcast who is also passionate about this. The idea is that your career will be stitched together from the passion projects you have.

The work net will consist of ad hoc teams of specialists brought together to create some output, product, or service. Around the table are people who are there because they love what they’re doing. When you extract all the friction, like paying your taxes and whatever craziness comes up in the course of your day, and all the time at work is spent doing the thing you love, it’ll be easy to have this job 2.0 where you’re doing different things with different ad hoc teams, potentially for different companies. It’s an extension of today’s gig work that liberates you from a lot of the tedium of work. Within a decade, we’ll be talking about that work net versus the workforce.

There are many passion projects I’m bumbling around with that I don’t have time to expand upon. The last question I ask a lot of the artificial intelligence people who come on this show is: what do you think is going to be the next big ChatGPT moment? Is it going to be full self-driving? Robot butlers? What is the next big thing that will shift our mentality the way ChatGPT did?

The dirty little secret of LLMs is that they are large language models. We think of them almost as if they’re sentient, which they’re not, and as capable of performing a broad range of tasks that they can’t perform. They’re great at generating credible nonsense, at programming tasks, and at pattern recognition. They’re getting better at math, though they haven’t always been great at it. One thing that is overlooked is that large language models are text-based. Increasingly, we’re incorporating some rudimentary video and image capabilities into them, but what we need to appreciate is that 99% of the world’s data is not text.

The next big innovation will happen when we can incorporate spatial data: knowledge about the world we live in, what it means to feel the wind on your face, to hear the sound of traffic and know when to cross the street, the sound of a jet engine taking off from the runway, or the feeling of being in a concert hall and hearing the crash of the drums. Things like that represent the totality of human experience, and LLMs don’t capture any of it. The next real breakthrough will come when AI-first large language models get extended to understand the spatial world.

The next real breakthrough in innovation is when large language models get extended to understand the spatial world.

Is full self-driving a component of that, or is it something separate? I feel like it’s related. What they’re struggling with is these edge cases. I don’t know if that’s related to what you’re saying.

You made a reference earlier, and we use the same words to describe self-driving: it’s perpetually a decade away. That’s in part because it’s a problem that requires spatial awareness of the world, and we’re trying to solve it using traditional data and machine learning techniques. The technology is mature enough today if you’re talking about driving grandma around in a golf cart in a retirement community: there’s little traffic, there are no bad actors, and it’s a safe place. That’s a great example of where fully autonomous self-driving is ready for use.

For the real world of traffic, unexpected events, and bad weather, it’s hard to design AI-first systems that take all of that into account, and that’s what holds back full self-driving. I do believe that when AI has spatial awareness, we’ll rapidly turn a corner, and self-driving cars will no longer be perpetually a decade away.

How far away are we from the spatial awareness that you’re talking about?

With the technology that exists in labs now, we’ll commercialize it within 5 to 7 years.

I can’t wait until that happens, I can tell you that now. I’m going to be first in line for whichever company figures that out. It’s been nice speaking with you, and I wish you all the best with what you’re doing at PeopleReign. For those of you reading on a regular basis, please feel free to like and subscribe. Dan, thanks for being with us. Thank you all for reading. Have a great day, everybody.

Thanks, Dr. Awesome.



About Dan Turchin

Dan Turchin is the CEO and Founder of PeopleReign, the leading AI platform for IT and HR employee service automation. He is a member of the Forbes Technology Council and has hosted over 200 episodes of the popular “AI and the Future of Work” podcast.

Prior to PeopleReign, Dan was the CEO of AIOps leader InsightFinder. Previously, he co-founded Astound, an AI-first enterprise platform for HR and IT, and was Vice President of Product at DevOps leader BigPanda, Chief Product Officer at security analytics company AccelOps (now Fortinet), and a Senior Director of Product Strategy at ServiceNow.

He also served as a founding board member at Rhomobile prior to its acquisition by Motorola and currently serves on the board of Auger, the open source automated machine learning framework. In 2000, Dan co-founded Aeroprise and served as CEO until it was acquired in 2010 by BMC Software. He is an active angel investor and startup advisor with a portfolio of over 30 companies.

Dan’s passionate about building great teams that build great products that solve hard problems that change lives. He’s a big fan of Asimov, Dr. Seuss, youth soccer, adventure sports, and Tynker. Dan has BS and BA degrees from Stanford University. Follow him on Twitter.


By: The Futurist Society