The Human Touch In AI: Building Technology We Can Trust

Infactory CEO Brooke Hartley Moy discusses how building technology we can trust will transform daily life, from AI-powered journalism to personal assistants.
As AI continues to reshape how we consume information, one critical question emerges: how do we maintain accuracy and trust in an AI-powered world? Brooke Hartley Moy, CEO of Infactory, tackles this challenge head-on by working with major publishers and enterprises to build AI solutions that prioritize reliability and truthfulness.
Discover how AI is transforming journalism, clinical research, and everyday life through practical applications and emerging technologies. Brooke shares insights on the democratization of AI tools, the future of work, and how the next generation will interact with AI companions and assistants. From self-driving cars to personalized content delivery, learn how AI is evolving from a mere productivity tool to a trusted partner in our daily lives.
Whether you're a technology enthusiast, a professional adapting to AI integration, or simply curious about the future of human-AI interaction, this conversation offers valuable perspectives on building AI systems we can trust while embracing their transformative potential.
The Challenge of Trust in AI
At the heart of Infactory's mission lies a crucial challenge: maintaining accuracy and trustworthiness in AI systems. For traditional publishers and media outlets, even small differences in how AI repackages information can significantly impact their brand integrity. This is particularly critical for organizations whose reputation depends on precise, reliable information delivery.
Transforming Journalism Through AI
Infactory's approach focuses on helping publishers adapt to the AI age while preserving their content's integrity. Rather than relying solely on standard language models, they've developed proprietary methods to ensure accurate information retrieval while maintaining the creative benefits of AI. The future of news consumption is likely to shift from traditional webpage visits to personalized, AI-curated experiences.
AI in Clinical Research and Healthcare
One of the most promising applications of AI lies in clinical research. The technology's ability to analyze vast amounts of pharmaceutical research data and identify correlations across numerous studies opens new possibilities for medical discoveries. This capability to process and connect information at a scale beyond human capacity could accelerate advancements in healthcare and drug development.
The Future of Work and Human Skills
As AI continues to evolve, the nature of work is transforming. Traditional knowledge work and manual labor roles will likely look very different in the coming years. Interestingly, this technological revolution is bringing renewed importance to human "soft skills." The ability to communicate effectively and interact with others is becoming increasingly valuable in an AI-driven world.
AI Tools Democratizing Technology
The democratization of AI tools is enabling more people to participate in technological innovation. Tools like Lovable allow non-technical users to create working prototypes and applications without coding skills. This accessibility is opening new opportunities for entrepreneurs and creators who previously lacked technical expertise.
The Environmental Challenge
While AI presents numerous benefits, it also raises environmental concerns, particularly regarding energy consumption in data centers. However, there's optimism that AI itself could help solve these challenges by developing more efficient systems and addressing climate-related issues.
Call to Action
As we stand at this technological crossroads, it's crucial to participate in shaping AI's future. While there are valid concerns about AI's impact, we have the opportunity to build it responsibly, to create technology we can trust. Whether you're a developer, business leader, or everyday user, your engagement with AI technologies can help ensure they develop in ways that benefit humanity while maintaining the trust and accuracy we depend on.
This interview has been transcribed using AI technology. While efforts have been made to ensure accuracy, the transcription may contain errors.
Hey everybody, welcome back to the Futurist Society, where as always, we are talking in the present, but talking about the future. As always, I also have an awesome guest for you today, and we're going to be talking about artificial intelligence with Brooke Hartley Moy, who is the CEO and co-founder of Infactory, which is an artificial intelligence company. I'm really excited to have you.
Thank you so much for coming on today, Brooke. Tell us a little bit about what you're doing at InFactory.
Yeah. Thanks for having me. Excited to be here.
We are focused specifically on the issue of accuracy and trustworthiness within AI. How we approach that is that we work with enterprises for whom accuracy and fidelity are paramount and key to their core brand and workflows, and we help them build agents, applications, and experiences that have full auditability of their data, have end-to-end query control, and reduce overall hallucinations, so that there's trust in the answers being provided by the solutions they're building.
One of the aspects that I had read a little bit about is that you guys were thinking about using this for journalism. Is that correct? Can you tell me a little bit about that?
Yeah. Actually, a huge part of our customer base is media and journalism, particularly traditional publishers. That largely came out of some of our early experiences working with transformer technology at a previous job, where my co-founder Ken and I met.
We were really aware of the future of AI and of large language models, the transformer models that power most of the AI solutions you're probably familiar with, like ChatGPT and Claude from Anthropic; basically anything that people are calling AI these days is running an LLM under the hood. What was interesting for us is that it was clear that, in order for LLMs to continue to be a useful and successful technology, there is still going to need to be a strong amount of content and data produced by high-quality publishers, because at the end of the day, the models are only as good as the data they are trained on, the data they're fetching, the data they're fine-tuned on, the data they're grounded in. We set out to figure out how to help the publisher, media, and journalist ecosystem be successful in this new AI age, because that foundation was going to be so critical for the entire industry.
I think journalism is something that's super interesting to me because we look to that to be part of our understanding of the truth. It's certainly the truth of what's happening out in the rest of the world. Everything that I hear that's negative about the large language models is the fact that there's hallucinations, that you don't know what's real and what's not.
How are you managing those two things? I know that that's what your whole ball of wax is, is to gain veracity, to make sure everything is accurate. I'm not sure how that's happening.
Yeah. For us, what it really comes down to is that we recognize that for journalists and for publishers, how their information is presented is the key pain point they have with AI. If you are The Atlantic and you're producing long-form content and it's being summarized in an AI chatbot, even small differences in how that information is repackaged can change the tone or the meaning of an article completely, and then completely devalue this 100-plus-year-old brand that's based on integrity and quality content.
If you're someone more like the Weather Company, a company that basically sells accuracy in weather, as accurate as weather forecasts can be, you have put a stake in the ground that, to a tenth of a degree of specificity, you're going to be able to say what the temperature is outside in San Francisco. There are different spectrums of how they're thinking about veracity, but it's a thread across the whole industry. In terms of how we have represented those interests in our technology, rather than relying on just the out-of-the-box LLM that, again, powers a lot of the technology we see right now, and rather than applying traditional retrieval-augmented generation, which is the buzzword du jour around accuracy in AI but still has a probabilistic component, we've developed our own proprietary query methodology, which is all about how you take out that degree of generative chaos when it comes to actually retrieving an answer.
That is to say, we're using the best parts of AI, which are being able to search over a large corpus of information very quickly, being able to combine different data sources to come up with a complex answer, and being able to have idea generation and creativity and all the things that make LLMs beautiful and magical. But when it comes to the core components of finding a factual answer or representing a piece of journalistic content the way the publisher wants it represented, we've created guardrails around that so that there is no generative element within it.
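To make the distinction Brooke is drawing a bit more concrete, here is a minimal, purely illustrative sketch of the general pattern: factual questions get routed to a deterministic lookup over trusted, attributed data, while open-ended requests are left to a generative model. This is not Infactory's actual system; every function, dataset, and value below is a hypothetical stand-in.

```python
# Purely illustrative sketch (not Infactory's implementation): route factual
# questions to a deterministic lookup over trusted, attributed data, and only
# hand open-ended requests to a generative model. All names and values are hypothetical.

from dataclasses import dataclass


@dataclass
class Fact:
    value: str
    source: str  # attribution travels with the answer


# Stand-in for a structured, publisher-provided dataset (hypothetical numbers).
WEATHER_FACTS = {
    ("san francisco", "temperature_f"): Fact("61.4", "Example Weather Co. feed"),
}


def answer_factual_query(city: str, metric: str) -> str:
    """Deterministic retrieval: the value is copied from the source, never generated."""
    fact = WEATHER_FACTS.get((city.lower(), metric))
    if fact is None:
        # Refusing is preferable to letting a model guess (hallucinate) a value.
        return "No verified answer available for that query."
    return f"{fact.value} F (source: {fact.source})"


def handle_question(question: str) -> str:
    """Crude router: factual weather questions use the lookup; anything else
    would be passed to an LLM for creative or open-ended handling."""
    q = question.lower()
    if "temperature" in q and "san francisco" in q:
        return answer_factual_query("San Francisco", "temperature_f")
    return "[open-ended request: would be handled by a generative model]"


if __name__ == "__main__":
    print(handle_question("What is the temperature in San Francisco?"))
```

The point of the guardrail in this sketch is that the answer is quoted verbatim with its source attached, and when no verified value exists, the system declines rather than letting a model guess.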
I'm kind of a layperson when it comes to artificial intelligence. I'm using it in my life for more basic stuff, like looking up vacations for my family, but what I see as an outsider is that there is some introduction of bias, which seems like kind of what you're talking about. I'm not sure I'm entirely understanding the difference between the examples you gave, but when Google makes a picture of the founding fathers and they're all minorities, I think that's something everybody can understand: okay, there's some sort of bias being introduced into Google.
Is that kind of what you guys are trying to correct for also?
I would say that we are focused less on bias correction, though it is sort of all baked into the same cake, if you will. The reason that you see biases is just because LLMs are trained on the open web, they're trained on human-generated content, and so the same biases that exist in our society at large are obviously then going to be represented in a model. It's very hard to correct for that.
There are strategies you can take with fine-tuning, but at the end of the day, humans have created this technology, and even though it is artificial, it's constrained by what it's seen in its training set. Our viewpoint is that one of the things that helps with bias, though it's sort of a side effect of our work, is that if, instead of asking an LLM to create a picture of the founding fathers, you go out and fetch a picture of the founding fathers that was produced by the Associated Press, you're going to have something that is probably more aligned with accuracy, given that the content was pulled directly from a source.
Obviously, there are situations where you're, like I said, with creativity, where you're going to be wanting to generate new images and things from scratch, so it all just sort of depends on your use case and how exactly you want to apply the technology.
Interesting. Do you think that in the future we're going to be interacting more so with AI to gain our news than with actual news sources?
Yes. I think that within the next five years, the biggest changes that are going to happen are, one, I don't think humans are going to be actually going to web pages or landing on the homepage for the New York Times, in the same way that it has become increasingly archaic to actually have a full newspaper delivered to your home. There will still be some subset of people that prefer the curated experience, but what will happen is that you will have an agentic workflow on your behalf or a personalized agent that knows what kind of content you're interested in, knows what articles to look for, knows what information to look for, and then it's going to give you a very personalized curated news experience.
That's the future that we're helping a lot of our customers prepare for because it's a big change when you have traditional interactions where today, if you want your news, you go to CNN.com or you go to watch it on MSNBC or you open your phone and read the Wall Street Journal. I don't think many people will be actually going to owned properties within a very short timeframe.
Yeah, I kind of have that experience already. I get all my news from Google News, which is this configuration of what's popular right now plus content specifically geared towards me. I have futurism and technology articles coming up in my newsfeed, but do you have any reservations about the bias that would introduce?
If I'm only getting content that I want to see, am I really seeing all the content that I need to see?
It's a real risk. I think it's something that has been weighing heavily on society basically since the advent of the newsfeed. That technology has been around for a long time, even back to the advent of search engines, really.
That was kind of the original version of it. Then it really got hyper-personalized with the launch of Facebook and then algorithms. Not that algorithms were a new technology, but algorithms as applied to this particular use case to maximize attention.
I think what makes me optimistic about how news will be consumed without introducing bias is that one of the things AI allows for is not just personalization, but actually an expansion of creativity and ideation. That's something very different from what our current media assets are set up for. They're just not sophisticated enough for that.
It's going to be somewhat up to us because, again, we're the ones building those systems. One of the things we work on a lot with our partners is how they reach audiences they've never been able to reach before, because it hasn't been financially viable for them to do so. If the model is less about just having eyeballs on your homepage, and instead about whether you can answer a particular question a person has, in a very specific way, on a very precise bit of data, it gives you a lot more ways to actually reach people who didn't traditionally participate in your news ecosystem.
The example I would give, going back to The Atlantic: The Atlantic demographic is highly educated and largely represented in certain geographic areas, partly because of the nature of their content. They have a paywall. It's long-form articles in a magazine format. But if their content could now be delivered in a different format that's much more democratized, and people can discover it in ways they couldn't before, that has the potential to actually be mind-expanding, if you will, rather than limiting as far as bias.
It's going to be up to us, but I think that the technology potential is there to reduce some of the ill effects of what had happened in the newsfeed era.
Yeah, I would be very much for anything that democratizes technology. I feel like that was one of the best parts of the internet in its infancy. A guy with a food truck could set up a website, get some virality, and his business could take off because of that.
Now, it almost seems like we have these gatekeepers, like Facebook and Google, where you have to pay for specific advertising, and there are certain strategies to maximize your visibility on those platforms. With all of the stuff that I'm seeing from artificial intelligence, I wonder how much of that is going to be democratized.
Can you shed some insight from your perspective? Do you think that that's how it's going to happen or do you think it's going to be large organizations that get the lion's share of this access?
It's interesting, because here in Silicon Valley you can see both sides playing out. If you just look at, for example, the startup ecosystem, on the one hand you have very large organizations like OpenAI that are commanding these mega valuations and are trying to build almost the next super app in terms of the number of different use cases and coverage areas. On the other end, it's never been a more exciting time to be a builder in the technology space, because previously, to start a company and be in a position to democratize technology access, you needed a pretty deep engineering background.
AI has completely changed that. You're seeing more and more founders who don't have traditional coding or engineering backgrounds but know enough to use the next generation of AI tools, which are only getting better. I use that as a metaphor, writ large, for where I see AI opening things up in terms of winner-take-all versus democratization. I think there's room for both.
I think that what is different about this technology cycle versus the last one where you had ended up with the Googles and the Amazons of the world is that you needed a lot of resources and people and highly skilled people in the R&D space and in the technology and software engineering space in order to basically build these kinds of companies, like generational companies. Now you're seeing companies that are making millions of dollars in revenue a year with a handful of people. There's this new distribution of who has access to what because the traditional gatekeeping around talent has really shifted with AI introducing new access points for people that want to build.
I certainly can see that. My concern was mainly around the idea of someone that can use technology to advance their own agenda and basically use technology as a force multiplier. For example, if I write a book, I can do that and I can put it out there on Amazon.
I can put some content out there on YouTube. I can try my best to advertise this in a non-traditional way, and I could gain a following with that. My worry is, unless I am The Atlantic with access to a company like yours that I can partner with, will that be out of reach for a solo publisher or somebody who's starting out on their journey of making a comic book or running a food truck or something like that?
No, I think honestly the ways in which people are going to be discovered and found actually only improve with AI. To your point about the way the early internet was able to create places for creators to find their niche audiences, the personalization of AI enables an even higher degree of niche connection, and it creates a degree of scale that was previously unavailable to a solo, bootstrapping founder or company or content site. The example I would give is that I think there's going to be a whole host of people who have traditionally only been serving maybe the 1%-type customer.
You think about a personal stylist, or a personal shopper, or a nutritionist, anything that has traditionally required a lot of time and is therefore expensive and resource-intensive for a single person to do. Basically, they charge prices that box out most of the market. I think a lot of those people are now going to be in a position to use AI to greatly expand their current capabilities and, as a result, reach a broader audience and potentially become more than just a solopreneur-type business, and actually do a lot more in terms of their reach.
That's just one example, but I do think that in many ways AI is more of a multiplier than even the traditional internet was.
What do you think about books? I'm writing a book right now and I feel like it's a lost art. I feel like a dinosaur.
I talk to all my friends and ask, what's the last book you read? And they say, I haven't really read a book.
You're talking to the right person, because I'm a big reader and I also write on the side, or at least I did in my pre-founder life. I don't have a lot of time for extra things right now.
Do you think the ecosystem is people like us that are writing for other people like us? That's what I see as an outsider just because there's so much content that's available and AI is just going to make it that much easier to get that content.
Yes. I think that in that particular area, first of all, it was already highly disrupted when you see how much Amazon has taken over and you've got unlimited subscriptions on Amazon where I think they're paying people pennies on the dollar for their content. We were already in the dark ages, I think, of book publishing and writing and the devaluation of written content.
I think what will be interesting though is that as AI-generated content becomes almost indistinguishable from human-generated content, there will be a degree in which I think people will develop almost a premium on human-generated content. This is going very far afield. I have no deep expertise in this.
It's literally just something that I see a little bit happening already on the technology side, where there's this feeling of human-generated code versus AI-generated code. I think there's a world where human-generated content will remain interesting to a subset of people and potentially even democratize access, because now you have the increased engagement and increased platform awareness of where to find these kinds of things. I can't sugarcoat it: I think it's a tough time to be someone who's writing right now.
It's just an interesting inflection point. I still like to think I can outwrite an AI, but I don't know how much longer that will be true.
Good for you, because I don't think I can. I feel like ChatGPT is just so good that it's very difficult for me to find the same kind of inspiration, because I know I could just type the same thing in. I use it as an editor right now.
I'll just write it out freeform and then ask it to edit. It's always better. It's consistently better.
Maybe that is a reflection of my writing skills, but it could also be that I look at it and feel like a dinosaur, or a vinyl record, or something like that that is becoming less and less common.
To me, what's interesting now is that I spend a lot of time on LinkedIn because of my job, trying to be reasonably active to get thought leadership out there. When I read what's on LinkedIn, I can tell this is all AI-generated. There are little tics in the cadence and the tone, and then it's reinforcing itself, because now when I write a LinkedIn comment, it's going to just look at all the other comments and basically base it off of that.
It's like this snake-eating-its-own-tail problem. It's kind of junk. It's not that it's poorly written, but it's just so clearly AI. It's the way that social media got to a certain point with the filters and the way the photos looked.
People stopped liking overly cultivated images, and there was actually a backlash toward something a little bit more authentic. We'll see where we end up, I think, at the end of all this.
Where do you hope that we end up?
It's a good question. Like I said, I'm an AI optimist. I believe that one of the biggest benefits of this is taking over many of the tasks that today are a source of drudgery, take away from higher-level thinking, or are unsafe.
You think about automation for things like construction work or factory work, areas where, as much as the disruption is painful when you think about the loss of jobs, those kinds of jobs have also largely been the hardest jobs for people to do and not necessarily the best-quality jobs. I would like to think that AI automates away a lot of those parts of what it means to work. What remains is then an opportunity, in collaboration with AI, to do much more of the judgment, the higher-level thinking, the creative thinking.
That means that we're much more likely to solve some of the really hard problems when we're not distracted by, how do I write this email?
Mm-hmm. Yeah. Can I tell you what I hope comes out of this whole AI shenanigan?
Would love to hear it.
I want a best bud, somebody that is just like, Imran, you should really do your laundry right now. It's getting a little bit out of control.
Then I say, listen, I don't want to do my laundry. I have other things to do. Then it automatically talks to my humanoid robot and just does it for me.
All of the admin work of 2025, I want that to go away. All of those things, but then also have some sort of relationship with it that's personalized just for me. For all those builders out there that are listening, make it happen.
It sounds like you want “Her”. You want-
No, no, no. I don't want dystopian science fiction. I want utopian science fiction. Yeah.
Well, I think “Her” was not very dystopian. I think “Her” is one of the most positive views of technology that I've seen in film. It got dystopian, I guess, in terms of falling in love with the AI, but it was mostly a view of a world in which AI has humans' best interests at heart.
It's funny, because you say best bud, and sometimes I joke that ChatGPT is my therapist, because I'll ask it questions about some problem that I'm having or something that I'm stressed out about. And there's the way it talks to you, where it always asks a follow-up question. The other day, it asked me if I wanted it to create a mantra that I could put on my phone to reinforce positive thinking throughout my day.
I was just thinking, I'm like, wow, this has really gotten to a point where I feel a personal connection to this chatbot.
That's becoming very common. I read, in the newspaper, or rather the news that I filter through Google News, that a lot of people are having that sentiment. If we're going to have that sentiment anyway, let's have it be beneficent, be our best friend.
I really want that for my kids, to have somebody that's just universally nurturing. I just got my three-year-old this little dinosaur that has a little bit of AI hardware in it. She has conversations with it as if it is her best friend.
Realistically, they might have a connection with something that's non-human that is more significant, more impactful than even with another truly human best friend. I don't think that that's necessarily a bad thing. I think that all of us need nutrition. We need water. We need sunlight. We need all of these things, just like a plant.
One of those important things is that we do also need some sort of reinforcement. We need support. We need other things that I think we don't always get from one another in 2025. That's what I hope for. I hope it's all packaged in one nice thing that I could also get great news articles out of. Hopefully, it will suggest nice books to me.
I don't know if you've thought of any other use cases that might be a little bit atypical than what's being provided to us right now.
Yeah. As far as atypical use cases, I feel like we're just starting to see the tip of this with ChatGPT doing memory. It relates to what you're talking about, where right now a lot of the ways people use it are as a supercharged Siri or Alexa, or a supercharged Google News, basically.
Now that people have realized that ChatGPT is actually storing information about you, I don't know if you saw the prompt that was pretty viral recently, asking ChatGPT what your personal blind spots are. I think the social implications of AI have been underplayed: thinking about it less as an assistant and, to your point, more on the friend or relationship side of things.
I feel like that is very much in my mind, one of the next waves of where things will go, particularly because one of the areas we've made the most progress in with AI is the lifelike nature of it. In fact, it's one of the dangers because it does still get things wrong. It does still hallucinate, but it does so in such a confident way because it sounds so human-like and so real.
One of my favorite AI tools is NotebookLM, where you can basically make anything into a podcast. It always blows me away because it's a conversation between two people, two AI voices.
My husband likes to tell long, funny stories. I literally put one of his stories in there for him and had it turn into a podcast where they're talking about it. It was amazing.
If I had shown that to someone who didn't know it was AI, I think they would have believed it was two real people that I had somehow gotten to make a podcast for me. That lifelike element, the fact that we've made so much progress in that area, means that in some ways the social and emotional benefits of AI are further along than the enterprise workflow use cases: use it for legal, use it for finance, use it for healthcare.
I want to ask you more about some of the other AI tools you're using that maybe I or other people might not know about, but I do want to put a pin in this idea of AI as a social tool for us. There was an experiment done back in the 50s or 60s where they gave young baby monkeys either a really soft, cuddly maternal substitute or something harsh that they couldn't really have any affection with. The monkeys that had the soft maternal substitute lived totally normal lives and developed normally.
Obviously, the ones with the really off-putting maternal substitute did not do well. They ended up being overly aggressive or having a ton of different problems. To me, that's something that I hope to see from the AI community, but I don't know if it's out there.
That's another reason why I want to know if there's anything you know of that is like a friendship substitute, or just a support substitute, because that's something I feel is super lacking in today's society. Although we have more connectedness than ever before, there's a male loneliness epidemic. There's just a loneliness epidemic.
That's something that I was hoping that's out there because I do think it would be fine if it came from a robot or if it came from a chatbot, something like that. Do you know of anything that is out there like that? Then also, what are some other tools that you're using that we may not know about?
Yeah. The interesting thing is that the company my co-founder and I were working at prior to starting Infactory was working in the space of AI hardware, largely thinking about this idea of an AI companion. That tech has actually now been acquired by Hewlett-Packard.
They spun out a separate group called HP IQ, and I'm very curious to see what comes of it. It's a bunch of really brilliant people who had previously worked at Apple and had been very involved in the original iPhone.
They're some of the most empathetically oriented technologists. Whatever comes out of there, though, there's always the paranoia I have about HP as a big company: what does that mean for what they're actually going to end up building?
Their focus area had been this more human, empathetic AI experience. There are a few other companies that have been building similarly on the hardware side. There's Rabbit, which made a product that was meant to be a little companion.
Very cute. Yeah. Heard all about that.
Yeah. It didn't deliver on the technology side, but I think the vision was there. I have a product called Bee that is a bracelet.
The idea is that it's recording your memories as you go. It's a little Black Mirror.
I think what's interesting about it, and I have a young child, is that it's really catered towards this idea of having a way to collect moments as they're happening.
Talking to it? Is that how you're recording them?
Yeah. You can record with it, and it does location tracking, so it knows where you are.
You have to embrace the idea that some of these devices are going to know a lot about you. Even the Ray-Bans, the Meta Ray-Bans, do this really effectively.
I'm on the verge of getting those.
Especially if you have young children, I highly recommend them, because they create this nice bit of presence that you have with your kid. Like any other parent, I'm big on holding the phone in front of my kid to take pictures. With the Meta Ray-Bans, you can record and take photos in a very subtle way.
You're still in the moment with them. I use them at Disney World where we're on rides together. Rather than trying to document everything with my phone, there's this presence.
On the question of whether there is a more humanoid friend experience, there's a lot happening on the software side. There literally is friend.com, which got a lot of buzz for paying for that particular domain. I think we're going to see, very soon, something that is the combination of both the hardware and the software progression in this area. There are a bunch of cool things right on the cusp in terms of where the technology is, and usually that's going to be married with some kind of hardware experience if you're talking about the social side of it as well.
What else are you using in your own life?
Some of the ones that I love: I'm a big Lovable user. That's a prototyping and mockup tool. I use it a lot at work, actually, because I don't have UX design skills and I'm not a front-end engineer.
But when I'm talking to the team and describing an idea I had, or something a customer told me they'd be interested in building, I can literally just type a prompt in natural language and it will build me a working prototype or a working MVP. In fact, I have a friend who is running her own business right now. She has no coding skills whatsoever. You're brilliant, Josefina, I'm not putting you down. She was able to get an entire application spun up just using Lovable.
It's really just a point about democratizing access. If you're a mom-and-pop shop, a single entrepreneur, or someone without traditional technical skills, the ability to give people visual demonstrations of what you want built is huge. It saves me so much time versus trying to explain it, having someone mock something up for me, or even showing customers what I'm thinking about. So that's a big one.
I mentioned NotebookLM, which I love because I commute here in the Bay Area; I spend two hours in my car every day. So having an audio version of things is great. I'll sometimes ask it to do basically a podcast version of articles I'm trying to catch up on reading, or take a report I received from someone and create the audio version of it. It's a nice little time hack.
So that's a big one. And then the team here has become very pro using Cursor as their major coding copilot, and it's amazing. My co-founder has been coding for more than 30 years, and he says he's never been so efficient. He basically feels like he has another junior engineer who sits at his desk and does a lot of the basic blocking and tackling, and then he can just manage that junior engineer to significantly supercharge his output. So it's a cool time.
That's great. Yeah. So, in regards to my own journey into writing and all this stuff, I know you work a lot with publishers and journalists and people like that.
How is it going to be a different experience for the solo writer now, and are there any tools I could leverage to make myself more efficient or get my message out there a little more easily?
Oh, there's Writer, which I think is trying to gear itself more toward enterprise, but they have built a very robust prompt library around content creation. It's a little bit like you're buying custom prompts from them that they have refined to give you the best output, which relates to how you're using it as an editor. In the case of an editing prompt, they really thought about, okay, what do we need to tell the AI to do in order to give you the most effective editing experience?
And I think that's actually something that's going to be really interesting: how prompts become a kind of IP, if they can even be that. What I see on the venture side is interesting, too. I see a lot of venture capitalists who will put out requests for companies, which is basically them saying they feel there's a gap in the current market and they're not seeing it in the deal flow.
And one of the things they want to see is more platforms for content creation, whether in the writing space, the social influencer space, the image generation space, whatever, that solve for more than what we have right now. We have things like Midjourney and DALL-E, obviously, from ChatGPT doing some of the imaging things, but they want something that also allows it to be a marketplace. I saw one request for a company from a VC that was basically asking, where is the Wattpad of AI?
Where is the version in which you're creating content and you're also finding your audience at the same time? Whenever those kinds of calls go out, there's usually a company that then pops up. So I think we're maybe six months to a year away from seeing some of those companies start to float out, where it's like, oh, you're a writer, you're going to use this set of tools to really maximize your writing output.
And you're also going to use this set of tools to find fellow community members and share content and share assets and things like that.
It is so hard to keep up with all this stuff. Honestly, I feel like it's drinking from a firehose, all these different things. Even as we're talking, I mean, ChatGPT exploded onto the scene what, two years ago? It was not even that long ago, and now everybody's using it. It's crazy.
And it's building on quicksand. As soon as you think you have it mastered, something happens and changes the whole thing.
Yeah. And I think the real, tangible benefits have only been realized by a section of the population, too. Even though everybody's talking about it, I don't really feel like we've harnessed even a small fraction of this stuff.
And who knows what it's going to be? What do you think is going to be the part that you look forward to the most? What is the sector that you most want to see revolutionized by AI?
I mean, journalism is something that you're working on right now, which I totally get. I'm seeing a lot more people go to ChatGPT, or to Claude, or even the Google AI, Gemini, to get basic answers to questions that reference articles, so I think that definitely exists already. But certainly, as a founder in this space, you might have some other thoughts or insights about places that are just ripe for innovation from artificial intelligence.
Yeah. To me, one area that I'm excited about, and it's probably relevant to you as well, is clinical research. It's really interesting.
So we were talking to a company that was trying to figure out what to do with this large corpus of pharmaceutical research data they have about basically every single clinical trial that's ever been run for pharmaceutical drugs. We're not quite ready for this, but think about the fact that a human couldn't ever possibly digest every single study that's ever happened and find correlations or unexpected findings, like, well, this study didn't test for this variable, but if it had done it this way, then based on this other study we might have expected this outcome. There have to be, in my mind, some nuggets of pretty amazing discovery that have just been locked away, because for a long time, in basically every industry, we've had more data than we know what to do with.
The challenge of data has always been accessibility and silos: certain people have access to some part of it but not another. So if you have a brain that is essentially infinite, an artificial brain versus a human brain, I think the possibilities for things like eradication of disease, drug discovery, and just general improvements across healthcare research are going to be explosive in at least the next decade, and I hope sooner. It's an area I don't spend a lot of time in, but it's one where I always think we've not even scratched the surface of what's possible.
Yeah. What's interesting to me, when you were saying that, is that it feels like such a foundational change that it would accelerate technological advancement across the board, right?
Genetic advancements, pharmacological advancements, or surgical advancements might each have independent teams working on them, but if you have something that's able to cross-reference everything, it would just raise the level of technological advancement in general. Yeah.
It's pretty interesting. We're living in such an exponential time, and I always love talking to people like you who are on the bleeding edge of it, because I do think AI is really pushing all of us forward. It's this foundational technology that we really don't know enough about unless we're talking to somebody like you who is in it.
So thank you so much for spending time with us and sharing your insight. We are getting close to the end of the time we have together, so I did want to end with the three main questions that we ask all of our guests.
The first of which is, we kind of touched on science fiction. That's an inspiration for me, and I really feel like I'm a techno-optimist.
I look at utopian visions, like Star Trek and some of these other pie-in-the-sky futures, and I think, man, I'm really excited to have my own robot that can do my dishes, and flying cars, and all that stuff. But what about you?
You're in this bleeding-edge industry. What is inspiring for you?
I think what's inspiring for me is that I've been in technology my whole career. I started in software and I've seen different waves come and go. And to your point about exponential, there's nothing that comes close to what's happening with AI.
I think Sam Altman has a quote where it's like, AI is a technology that people both underestimate and overestimate, but at the end of the day, it's by far underestimated. And I think that if you're as close to it as we are, with what we're doing at Infactory and being here at the epicenter of it all, what you see is that everyday advances that used to be measured in months and years are now being measured in weeks.
And the idea of your dream of the robot assistant is no longer that far from science fiction. We are living in a time that would have been hard to imagine even 10 years ago, but it's moving at such a pace that I get very excited that the future is really unfolding now. It's not something for our grandchildren to experience.
No, we're going to experience it, and our children are certainly going to experience it. The world is going to be a very different place next year, and it's going to be a crazy different place five years from now. So I think that's exciting and inspiring.
That's exciting and inspiring for people like you and me; I think both of us are really excited about change. But I can tell you that not everybody is. A lot of people look at AI as this really concerning thing, an existential threat to humanity.
What do you say to those people? I'm sure you have people around you who feel the same way, maybe not so much in Silicon Valley, but maybe in your family or something like that. It sounds like you're very optimistic about AI.
So what would you say to those people?
Yeah, I think what I would say is, first of all, those fears are completely relevant, and I don't dismiss them offhand at all. I think there are real risks to this technology, and I think we have to build it in a responsible way.
It's why we're focused on what we are here at Infactory, and why I've chosen to make this the focus area: trust and accuracy, and human-created content still playing a role. It's not a given, it's not inevitable, that AI will ultimately be a force for good unless we build it that way and focus on that. What I would also say, though, is that AI is almost this reinforcing thing, sort of like what I was saying about drug research or genetic discoveries: the smarter AI gets, does it get smart enough to solve some of its own problems?
One of the biggest concerns I have about AI is the environmental impact and the energy consumption it takes to power these data centers. There are obviously improvements happening there, like requiring less GPU compute to actually run the models, but the demand is growing, so how much is that really helping?
But I also believe that if anything is going to solve things like the climate crisis, or figure out how to make a more energy-efficient data center, it's probably AI.
Yeah.
So it's kind of the ironic part of it.
Yeah.
Yes, exactly. And so I think that capacity is important.
And like any technological revolution, there's always disruption, there's always a degree of fear and anxiety, and even real pain. I don't discount that either. I do believe people's jobs are going to change.
They already are. I think the nature of what work is is going to change. My son is four now, and I don't know whether, by the time he has a job, there will be something like a 40-hour work week, and many of the jobs that people do now, I don't know if they'll really exist.
But I think that doesn't necessarily mean that humanity comes crashing down. I think it means our experiences as humans might fundamentally change for the better.
Yeah, I really appreciate that insight. I never really thought about using it as a tool to solve the problems that we're facing; I thought of it more as this kind of novelty that would make our lives easier. But certainly, that's something that people like you are probably thinking about.
Okay, so second question. Normally I would ask somebody, in 10 years, where do you see AI? But I feel like that's not even a real question. Who knows, right? It's happening at such a breakneck pace.
But you have to predict a year from now.
Exactly, right. So it's not even applicable here. But you have young kids, I have young kids; what do you think their lives are going to be like with AI?
As I was saying, I definitely think the nature of work is going to be completely different. Many of the traditional occupations of knowledge work, and of what we would call essential workers or manual work, will look very different, if they exist at all. As a result, I think people's day-to-day is going to look completely different.
And ideally, they will have more time to pursue passions and creative pursuits, and potentially more high-functioning, creative-type roles. It's funny, I've been joking that soft skills are finally cool again. I was a history major, which is so impractical.
And yet now it's, oh, do you actually know how to interact with other humans and communicate? That's something that is still not fully automated by AI. So there's a renewed focus on the human side of what you're good at.
And in terms of what technology could look like, as far as robots in the home or things replacing their day-to-day, I think those things will be real for them; they won't be science fiction at all. Our parents maybe thought we'd have flying cars, and clearly that didn't happen.
But our kids will have self-driving cars. They will have at-home robotics doing the drudgery tasks for them. They will have personalized assistants and agents. They will have the ability to record basically any memory or any moment from their lives. All of that will be very, very real for them.
Hmm. Yeah. Very interesting. So, last question. It seems like you're at this really incredible time with this really incredible technology, but outside of your own field of artificial intelligence, what technology or what kind of scientific breakthroughs are you looking at that you just can't get enough of? To preface that question: for example, I'm in medicine, and I cannot get enough of these humanoid robots. Every time there's a new Boston Dynamics video, I'm watching it.
Every time there's a new Tesla Optimus video, I'm watching it. For me, humanoid robots are the thing; they feel like such a metric for the fact that we are living in the future. And even though it's not my field in particular, it's something I'm following tangentially.
But what about you, outside of artificial intelligence? What are you looking at?
Self-driving cars, for sure. Here in the Bay Area we have Waymo, and now Zoox, I think, has just launched in certain parts of the Bay Area. I use Waymo instead of Uber, and Waymo now has a larger market share in the Bay Area than Lyft does, which is kind of wild.
And it just feels so cool every time. The tourists come and take photos of it, and I'm still not jaded about it. Every time I get in and see the car with nobody in it, it just feels so miraculous.
Funnily enough, one of my co-founders was actually featured on Waymo's Twitter page, because they took a Waymo to City Hall when they got married; they're also Waymo superfans. It's now the thing where, when people come to visit, I say we have to ride in a Waymo. That's more important than going to the Ferry Building.
We've got to do that if we're in San Francisco. Partly it's because, as I said, I commute, and I hate the fact that I have to drive two hours. I think about all the time I could get back if I could take a self-driving car to and from where I need to go. And I think about the safety implications of that.
As a parent, I dread the day, and I'm a long way off from it, but I know I'm going to be terrified when my son gets behind the wheel of a car. And now I'm thinking maybe he never will have to, because it will be done for him.
And I think it will be safer than human error. So, self-driving cars. I hope we continue to see them expand into other cities in the US.
Yeah, I'm really looking forward to when that's a standard feature on all cars. We're not quite there yet, but hopefully we'll get there soon. I think this was a really interesting conversation.
I really appreciate you coming on. All the best of luck with everything you're doing, and I hope you succeed in making not only journalism, but all AI, very true to the facts and adherent to veracity. And for all of you who are listening on a regular basis, thank you for joining in.
As always, if you could like and subscribe, and let us know if there's anything else you want us to talk about beyond artificial intelligence and all the other technological breakthroughs we cover. But Brooke, thanks again. And for those of you who join us on a regular basis, we will see you in the future.
Thanks, everybody.

Brooke Hartley Moy
CEO and Co-Founder, Infactory.
Brooke Hartley Moy is the CEO and co-founder of Infactory, an AI platform for businesses and developers that depend on accuracy. She's been a deal maker in the software industry for over a decade, from joining Salesforce as part of a significant acquisition to launching Slack's Customer Success team in APAC to managing Android partnerships at Google. She most recently managed international expansion, corporate development, and partner strategy at Humane, working on AI partnerships across hardware and software.