How To Use Generative AI To Power Faster Innovation

In this panel at Imagination In Action's 'Forging the Future of Business with AI' Summit, Katie Trauth Taylor, Paqui Lizana, Glenn Coppersmith, Michelle Fang and Ted Bailey discuss four principles for using generative AI for good.

Subscribe to FORBES: https://www.youtube.com/user/Forbes?sub_confirmation=1

Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:

https://account.forbes.com/membership/?utm_source=youtube&utm_medium=display&utm_campaign=growth_non-sub_paid_subscribe_ytdescript

Stay Connected
Forbes newsletters: https://newsletters.editorial.forbes.com
Forbes on Facebook: http://fb.com/forbes
Forbes Video on Twitter: http://www.twitter.com/forbes
Forbes Video on Instagram: http://instagram.com/forbes
More From Forbes: http://forbes.com

Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.

Category: Tech
Transcript
00:00 Hello, welcome. This discussion, I am so honored and thrilled to be able to facilitate and
00:08 have these luminaries in the room. The topic is Gen AI for Innovation. Hi, everyone. I'm
00:13 Katie Trauth Taylor, CEO and co-founder of Narratize. And we have with us today Ted Bailey,
00:20 CEO and founder of Dataminr; Glenn Coppersmith, Proactive Health at ARPA-H, who is here today
00:26 more as an ML/AI founder over the last decade, here as a person, not
00:33 as a guppy. We have Michelle Fang, who leads strategy in the office of the CEO at Cerebras,
00:39 and Paqui Lizana, who leads tech strategy at IKEA. We have four principles we're going
00:44 to drill into and sort of go lightning rounds and hopefully every voice is heard in each
00:49 part. Okay. Really for us, we met together, we identified what we see as four emerging
00:56 best practices for leveraging Gen AI to innovate faster and better and in more human led ways.
01:04 The number one that we want to start with, embrace approaches that democratize innovation.
01:09 Paqui, you want to kick us off? Yeah. And first of all, I would like to say, I'm super
01:15 excited to be with you today. It has been for me always a dream to be at MIT. So finally
01:21 I made it. But yeah, democratize innovation. That's a big word. I think we are talking
01:26 about this, there is this transition of, from humans adapting to technology, to technology
01:34 adapting to humans. And I think generative AI, together with other technologies like spatial
01:40 computing, is offering new, intuitive ways to connect to technology.
01:48 And that is, I think, what is opening this door of democratizing the access to technology
01:54 for innovation. At IKEA, I've been so proud that we were proactive enough to start
02:03 experimenting with Gen AI more than a year ago, starting with a small step, you know;
02:10 every revolution starts with a small step. We gathered together with a couple of
02:15 colleagues to found the first community of interest to work with generative AI. So we
02:22 had a sandbox to test different ideas. And in this community, we started small, but it suddenly
02:29 became one of the biggest communities at IKEA, with more than 600 people working together
02:36 for AI. And so we had people coming from the stores to HR or even, yeah, working in
02:45 operations and everyone was dreaming about different use cases. And together with that,
02:52 we were of course working with our digital ethics team. So we were able to include from
02:57 the very beginning ethical frameworks into the system. That ended up in founding
03:07 an AI lab, and now our AI lab focuses specifically on the supply chain. We are using,
03:16 for example, together with spatial computing, different use cases, for example
03:24 minimizing damages. And we are working on much more that I hope I will be able to share
03:31 one day. But back to the democratization, I think that also ended up involving the customer
03:39 into this change of innovation. So I don't know if this is an IKEA dream, but at least
03:44 it's my dream that what about if every single customer can be their own interior designer?
03:51 So I believe AI is part of that story. I love it. It's incredible to hear the sort
03:57 of wildfire that catches. And I think that's a story we hear across so many enterprises
04:03 that we have so many use cases and so much interest. How do we manage the pipeline and
04:10 the excitement here? And how do we decide and prioritize Gen AI projects has been a
04:14 major topic among enterprises over the last couple of quarters, especially as things ramp
04:18 up more. I think this idea to me, it's about harnessing ideas from everyone, everywhere,
04:25 and helping them tell those stories in more effective ways faster. And I'll share one
04:29 quick story around that. Last year, Narratize partnered with the United Nations and the
04:34 World Food Forum. As part of their transformation and research challenges, global teams from
04:39 around the world apply to get research funds. And so we integrated Narratize into that
04:44 process. These are teams that are literally working on world hunger, agroecosystems, and food
04:49 insecurity. And it turned out that 87% of the research fund winners
04:56 had used Narratize to write their pitch. And we thought, you know, wow, what all of
05:01 us in the room are probably like, this is a great, great case study. It's a great ROI
05:05 story around Gen AI. Yes, it is. But what's more exciting is that's a great use case for
05:12 humanity. How might we use this technology to leverage and pull and enable people to
05:19 share their stories, to share their concepts, and create different types of systems. And
05:25 so I love what you're building. And I'm so enamored by the culture at IKEA to facilitate
05:30 something like that. Glenn, would you speak a bit as an AI/ML founder who's, you know,
05:36 exited and then went into the big scene. Share your thoughts on like public intellectualism
05:42 and research and how that creates access. Yeah, thanks, Katie. And wonderful to be here.
05:48 MIT rejected me when I applied. So I'm happy to be back here on stage. You made it. There
05:58 As profound as it is to be able to tell the stories, the thing that I'm seeing
06:02 in my corner of the world is that access to information, to scientific and academic publications,
06:09 has become easier. It's not that they weren't always there, but they're written by
06:14 academics. So the fact that Gen AI has found ways to adapt the grade level, to synthesize
06:22 material, to help you find that darn citation that you know exists somewhere
06:27 but you don't know where. Even rudimentary RAG
06:32 implementations over the academic literature have democratized access to all
06:36 that publicly funded research. And so there are a couple of really neat things going
06:40 on here. There's some neat meta-analysis work: how could you automatically run a meta-analysis
06:45 today across whatever topic you like? That one has always intrigued me. And then
06:51 there's the adjustment of the grade level of information, this ability to change register,
06:57 to meet you where you are and give you the information that US
07:01 taxpayers have already paid for.
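To make that concrete, here is a minimal sketch of the kind of rudimentary RAG pipeline Glenn alludes to: retrieve the chunks of publicly funded papers most relevant to a question, then ask a model to answer at a requested grade level. The corpus is a toy, and `call_llm` is a hypothetical stand-in for whatever model API you use, not any particular product.

```python
# Minimal RAG sketch, assuming a toy corpus of paper abstracts and a
# hypothetical call_llm() stand-in for a real model API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Abstract: Mediterranean diet and cardiovascular outcomes ...",
    "Abstract: Wearable sensors for early sepsis detection ...",
    "Abstract: Meta-analysis of sleep duration and mortality ...",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; wire this to your model provider."""
    return f"[model response to: {prompt[:60]}...]"

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus chunks most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [corpus[i] for i in scores.argsort()[::-1][:k]]

def answer(query: str, grade_level: int = 8) -> str:
    """Answer from retrieved excerpts, adapted to a reading level."""
    context = "\n\n".join(retrieve(query))
    prompt = (
        f"Using only the excerpts below, answer at a grade-{grade_level} "
        f"reading level and say which excerpt each claim comes from.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

print(answer("Does diet affect heart disease risk?"))
```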
07:06 Incredible. And Ted, you wanted to share about real-time insights. How is the speed of information sharing changing access? Yeah. So as background,
07:15 I mean, Gen AI has really changed a lot of what my company, Dataminr, has done in the
07:20 last few years. And as background, Dataminr detects events in public
07:26 data. As you can imagine, the type of data fragments that you can discover events in
07:31 are very complicated and hard to understand. And Gen AI for us has been able to actually
07:36 describe those patterns in very succinct and tangible ways. And recently we've been able
07:42 to take Gen AI and combine event detection with ongoing dynamic event description, and
07:49 use Gen AI to bring to life ongoing, dynamic situations.
07:55 So I think that, you know, Gen AI is up to its billing in its game changing nature. And
08:02 for our company, it's really, really brought us to the next level. Incredible. And Michelle,
08:09 let's hear your perspective in terms of compute, the chip side of the house, even
08:13 government, if you can sort of speak, because I know you have this incredible journey where
08:16 you sort of have helped drive policy change. Yeah. Yeah. Can everyone hear me? Okay, great.
08:23 So I think when we talk about Gen AI, there are three components to it. There's compute,
08:28 there's data, and then there's the algorithms. So at Cerebras, we build wafer-scale chips. What
08:33 that means is we build this huge chip that's purpose-built for AI. Then
08:38 we put these chips inside of systems and we build models on top of those chips. And
08:41 so the perspective I want to bring today is: to democratize AI, you need compute; you
08:46 need to have accessible compute for people who want to build models, who want to leverage
08:50 this technology. But in addition to that, you need to have data and you need to have
08:55 the right team of people, machine learning engineers who actually know how to utilize
08:58 the compute and the data. And so when we talk about a new technology that has emerged, the
09:03 benefits usually accrue to the first group of people, perhaps concentrated on one
09:08 coast or the other. But then when you talk about, like, how do we actually democratize
09:11 this innovation? You also need to democratize, not just GPT or ChatGPT, but also you want
09:17 to have people who want to own their data, their proprietary data, but also leverage
09:21 AI within their enterprise. And so where Cerebras comes in is we provide the compute, wafer-scale
09:27 systems and then we also provide the machine learning expertise and the people to help
09:31 you utilize your data to build models that are yours. And so I think that is more of
09:35 the enterprise focus where I think right now people know they want to use AI, but they're
09:40 trying to figure out what metrics, what outcomes do I want with AI? And then after you figure
09:44 that out, you want to get the different components of the recipe in place so you can actually
09:48 own and build a model that's yours. And then I think the second component to that is like
09:52 AI and chips are both technologies that are very important to the United States. And so I think
09:58 with that, appropriate governance is also very important in terms of who has access to this
10:02 compute and to these algorithms. And so without diving too much into that, I think
10:07 governance is also a point to be discussed as well.
10:09 Okay. So number one, takeaway, embrace approaches that democratize innovation. All right. Ready
10:16 for lightning round number two? We're speeding on. Responsible AI. So ensuring responsible
10:22 adoption of AI. What are the adoption barriers and risks? What are some of the best practices
10:27 around tackling them? I know at Narratize the very first critical hurdle that we had
10:32 to overcome for enterprise customers was security, privacy, and from there explainability, transparency,
10:38 DEI, and so on. And so Michelle, let's get in a little deeper. Can you share
10:45 more about your Senate experience, what it taught you, and what regulations
10:53 do you see coming around the corner, perhaps?
10:56 Yeah. Yeah. So I'll start off talking a little bit about the Senate, but I think a practical
11:00 use case might be more helpful. So in the Senate, I worked on AI policy last year for
11:05 Senator Blumenthal from Connecticut. As part of that, we had a series of hearings where
11:10 Altman, Dario came in and the Senators got to ask some questions. I think regulation
11:15 takes a long time. The first part is fact finding. And so these Senators and people
11:19 on the Hill figure out what is AI. And then afterwards, as technology develops very quickly,
11:24 they develop appropriate regulation to make sure that the people who are actually using
11:28 and owning this technology are doing so safely. And so in terms of what's coming around the
11:33 corner, I cannot predict or speak on behalf of the government, but I think a really important
11:37 debate right now is closed versus open source AI. I think a lot of the very interesting
11:41 work happens at research labs, but also the open source community is growing very quickly.
11:46 So how the government decides to regulate closed AI companies versus promoting an open
11:53 source community that may carry risks in cybersecurity and national security, and seeing
11:58 how this debate plays out and preparing adequately, whether you're an open source AI company or
12:03 you're working at one of the large research labs, I think is one point to focus on. But
12:07 secondly, and really quickly, I wanted to talk about the risks in terms of like data
12:12 privacy. So one of Cerebras' partners is the Mayo Clinic. And so they came to us, they
12:17 have a lot of unstructured data, patient data, genetic data, and they want to leverage AI
12:22 within their enterprise. And so when we partnered with them, we basically went in and our researchers
12:26 helped them identify two project areas where we could help them use the data
12:30 they have and equip their cardiologists with better interpretations of ECGs
12:35 and better identification of rheumatoid arthritis, because we were able to use their data
12:40 and build models for that. And with that, we have to be HIPAA compliant. Our systems
12:44 are all secure, on their own cluster in the data center. So if you actually want to work
12:48 with proprietary data and unlock the value in it, there are a lot of steps of preparation
12:53 you have to go through before you can actually get to that value. And I think if people want
12:58 to get there, I think preparing early to get through HIPAA to be compliant for data privacy,
13:02 I think is a very important first step.
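As one small, concrete slice of that preparation, here is a toy sketch (the general pattern, not Mayo Clinic's or Cerebras' actual process): obvious identifiers have to be scrubbed from clinical text before any training. Real HIPAA de-identification covers 18 identifier categories and needs far more than regexes.

```python
# A toy identifier scrubber; real HIPAA de-identification is far more
# involved (named entities, locations, free-text names, audits).
import re

PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(note: str) -> str:
    """Replace matched identifiers with category placeholders."""
    for label, pattern in PATTERNS.items():
        note = pattern.sub(f"[{label}]", note)
    return note

print(scrub("Seen 03/14/2024; call 555-123-4567 about the ECG follow-up."))
# -> "Seen [DATE]; call [PHONE] about the ECG follow-up."
```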
13:08 Amazing. Thank you so much. And Ted, we'd love to hear your perspective too: how is Dataminr navigating and supporting
13:14 enterprises especially, and navigating the checks and balances? Yeah, at Dataminr, we focus our
13:20 AI on a very particular task of detecting events in the physical world. So in some regards,
13:27 the task itself is constrained in a way. I think the customers that use our signals that
13:35 we deliver have put a number of guardrails around their use cases to ensure ultimately
13:42 that knowing about something in the world is used in ways that they would support. But
13:48 I think for us, detecting events fast and delivering that knowledge to our customers
13:56 is sort of a baseline on top of which a lot of the use cases they want to take happen.
14:04 So for us, responsible AI is a bit of a different question because we particularly focus on
14:10 one task. I want to say one thing that your comments made me think of. I mean, open
14:15 source is such a fascinating area. At Dataminr, we use open source models. And
14:22 I think that ultimately there are security risks, but companies that can adopt open
14:29 source models as they come out, and adapt the core AI platform they've
14:35 built to keep up to speed with the latest innovations, benefit. From Dataminr's perspective,
14:41 our platform has thrived taking that approach. And I think with all of the regulations that
14:46 come out and all of the guardrails people want to put on the open source community,
14:52 it's such an incredible catalyst of innovation. And I think so critical to AI really being
14:59 able to power all the applications that are possible.
15:02 Yeah, as they say, constraints are the best way to innovate. We all need constraints.
15:08 If you could share the slide. Yes. Okay, perfect. Okay, let's dig deeper into research, though.
15:16 And I think open source, open access, I heard you kind of pivot. Number three is lean into
15:24 rapid access to research findings. Research is changing. And the question I posed for
15:30 all of us to answer is what does the future of science, tech, and medicine look like now
15:36 in six months, in two years? And I think it's such an exciting question to be sort of on
15:43 the forefront of getting to answer. At Narratize in particular, we support high-reliability
15:47 industries from aerospace to medicine. And it's critical to achieve the highest levels
15:53 of accuracy possible. And so working with some of the exciting aspects of everything
16:00 from, you know, RAG to GraphRAG to all the ways to put guardrails in place, I think
16:07 the challenges that still remain or the ways that research is going to change is something
16:12 that we all want to dig into. And one of the biggest challenges is peer review, which is,
16:17 of course, where all validated research goes to live outside of the IP-protected insights
16:24 within corporations. Only 30% of peer-reviewed research is open access, for all kinds of business
16:30 model reasons related to publishing. But that truly limits what we're able to build in terms
16:36 of the more fine tuned models or the ways in which we help retrieve validated evidence
16:44 based knowledge for users. And so, Glenn, could you speak a little bit to some of those
16:49 challenges specific to healthcare?
16:51 >> Yeah. There are certainly challenges within that piece of this. To get to the larger opportunities
17:01 that I see there, we're going to have to overcome a bunch of those things. But if you look forward
17:05 to how are we able to adapt to each individual person, we need access to all of the research
17:12 that has come before this. A lot of this is locked up in various places. But if you think
17:17 about the dream of what becomes possible there, so where might we be able to go in a couple
17:21 of years? We heard some talks this morning about personalized medicine.
17:25 I don't know how many times I've heard it here at this conference today. And that's
17:30 one very specific piece of this that's going to depend on us finding ways
17:35 to share this data. And so, there are great questions around things like rare diseases,
17:40 where if you have a piece of the puzzle on a rare disease, how are you going to figure
17:43 out where the rest of those pieces are? And we don't really have an answer for it right
17:47 now. And this comes down to like that proprietary data that's locked up somewhere is actually,
17:54 you know, it is perhaps even more valuable if you're finding ways to combine it with
17:58 other proprietary data. So, I don't have the full answer for those things. But when I look
18:01 forward over a couple of years, if we're able to solve some of those things, we get to look
18:05 at personalization, we get to look at what does this person right now need in their medical
18:09 and their psychological and their well-being journey.
18:11 Okay, and I think it's fair to say research doesn't have to be relegated only to
18:16 peer review. So, Glenn, you created one of the first ways in which we could scan social
18:23 media data to predict suicide risk. Share your story a little bit. And I just gave away the
18:28 punchline. I'm sorry. It's such an incredible story.
18:31 That's fair. I'll say very briefly. And this was ultimately a case where the technology didn't
18:36 get adopted, right? Five years ago, we had built a bunch of technology around using social media
18:40 data. Pretty useful, isn't it?
18:44 There's something in that.
18:45 For us, we were trying to predict mental health and well-being, trying to provide what was
18:49 missing in that space, which was the ability to quantify what was happening. What is happening
18:55 with this person today? And this just doesn't exist in mental health in a really meaningful
18:58 way. So, how do you use wearables? Now they talk about digital phenotyping. All of these
19:02 sorts of interesting technologies were nascent versions of -- I mean, I guess we would call
19:07 them SLMs, maybe. We used small language models back then. But that gave us the ability to
19:13 predict with 11 times the accuracy of clinicians if someone was at risk for suicide. And we
19:18 could do that six months before an attempt. And that's sitting on a shelf right now. Because
19:23 the world wasn't ready for AI. The world wasn't ready for any of this stuff.
19:27 And so, I don't actually know the path to solving that problem. That actually becomes
19:31 much, much, much harder, it seems like, convincing clinicians to do something different. Despite
19:35 that fact, right? Now we're all sitting here. This doesn't seem crazy to you. If I had said
19:39 this five years ago, it would have been. And so, it's an interesting story. And ultimately,
19:44 the core of the adoption challenges we're facing are the human ones. The technology
19:48 here is easy, relatively speaking. Humans, so hard.
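For readers curious what "small language models" for this task might look like in practice, here is a toy sketch of the general pattern, with invented data, and emphatically not Glenn's actual system: a small supervised text classifier that scores language for risk signals, feeding a human clinician rather than any automated action.

```python
# Toy sketch of a small supervised risk-signal classifier.
# The posts and labels below are invented for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "I can't see a way forward anymore",
    "Had a great run this morning",
    "Everything feels hopeless lately",
    "Excited about the new project at work",
]
labels = [1, 0, 1, 0]  # 1 = risk signal present, 0 = absent

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(posts, labels)

# Score new text; in practice this would surface to a clinician,
# never trigger an automated intervention on its own.
risk = model.predict_proba(["I don't think I can keep going"])[0][1]
print(f"risk score: {risk:.2f}")
```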
19:51 Yeah. Okay. Ted, I'd like you to share your story as well around information briefs and
19:56 some of the exciting work that your team is doing in that space.
20:00 Yeah. So, at Dataminr, we've built an AI platform that takes in a million different public data
20:06 sources across text, image, video, audio, and sensor streams. And we detect events faster than
20:13 any other source. Our customers range from the Department of Defense to two-thirds of
20:18 the Fortune 50, half of the Fortune 100, using these early signals to essentially protect
20:24 their people, their assets, their digital physical infrastructure, and their communities.
20:29 We've always used a hybrid of predictive AI for detecting these events. And we pioneered
20:35 a category called multimodal fusion AI, which stitches together all these different formats
20:40 to detect events. But we recently integrated generative AI, as I was mentioning before,
20:45 to describe those events in real time, aside the detection.
20:50 Today we launched a new technology called RegenAI, which actually is a real-time regenerative
20:57 description that uses predictive AI not just to detect an event, but to detect ongoing
21:03 developments as an event evolves and update a dynamic event brief iteratively, letting text
21:12 formulate and reformulate itself dynamically as the world evolves. And I think that, you
21:18 know, for a number of use cases that our customers, and we did have a slide that showed it, but
21:23 you're just going to have to look at the Dataminr tagline incessantly, apparently. But,
21:28 you know, what we've learned is, you know, a marriage of predictive AI and generative
21:32 AI can unlock a lot of things that both fields, you know, independently can't. And I think,
21:38 you know, RegenAI, our announcement today is one of those types of examples.
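As an illustration of the underlying idea (a sketch under assumptions, not Dataminr's implementation; `call_llm` is a hypothetical stand-in for a generative model call): each newly detected signal is appended to the evidence, and the brief is regenerated from the full set.

```python
# Sketch of a dynamically regenerated event brief; call_llm() is a
# hypothetical stand-in for a real generative model call.
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in; wire this to your model provider."""
    return f"[regenerated brief from {prompt.count('- ')} signals]"

@dataclass
class EventBrief:
    signals: list[str] = field(default_factory=list)
    text: str = "(no brief yet)"

    def ingest(self, signal: str) -> None:
        """Append a newly detected signal and regenerate the brief."""
        self.signals.append(signal)
        evidence = "\n".join(f"- {s}" for s in self.signals)
        self.text = call_llm(
            "Rewrite the event brief so it reflects ALL evidence, "
            "noting what changed most recently.\n"
            f"Current brief: {self.text}\nEvidence:\n{evidence}"
        )

brief = EventBrief()
brief.ingest("Smoke reported near Main St warehouse, 14:02")
brief.ingest("Fire department dispatched, 14:06")  # brief is rewritten
print(brief.text)
```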
21:44 So exciting. Okay, Michelle, I'm going to hand it over to you before we close this lightning
21:48 round.
21:49 Yeah. What I wanted to add on top of what Glenn and Ted said is, especially for science
21:55 and perhaps when we're dealing with proprietary data that an organization has, and that data
21:59 is siloed and they don't want to give it out, it's very hard to actually get the research
22:03 done and published in public that actually leverages that data. And this isn't an almighty
22:09 solution, but one way to do it that Cerebras helps with is this: Cerebras
22:13 has the compute systems, and there are specific forms of models and architectures and problems
22:18 that we are better at solving. And so our wafer scale system has high memory bandwidth.
22:23 And so that allows us to look at seismic imaging data, to look at genetic
22:28 data, and to process it much more quickly than GPUs. And so what we do is we allow the companies
22:34 that have proprietary data to keep that data. We run their data on our secure systems.
22:40 And then after that, we build a model with them and give that model back to them.
22:44 And so it allows these organizations that have data that is proprietary and siloed to
22:48 still leverage and use AI to find new research findings from that. And so that is my Cerebras
22:54 answer in terms of how do we actually unlock research when there's data that's more sensitive
22:59 and proprietary and how do we actually get the right technology into the hands of those
23:03 people. But yeah. Amazing. Incredible. Okay. So just
23:09 to recap point number three: lean in, as much as possible, to rapid access to evidence-based
23:15 research. Okay. Number four, embrace empathetic human-AI collaboration. Paqui, I
23:21 would love to, you shared a story with us about memory making. Can you share a bit about
23:28 that and the ways that we innovate through? Yeah. Well, my personal experience working
23:37 with generative AI, it's, yeah, it has opened up new dimensions of creativity, connection
23:45 and communication. I started exploring generative AI with my grandma one year and a half ago.
23:53 She is 90 years old and she has tremendous stories. So I was calling her every Friday
24:02 and we were creating, recreating her memories. We were visualizing memories where we were
24:09 together and also playing with creating moments of life that she was not able to do, like
24:19 swimming in the middle of the ocean. So that experience working with her, first of all,
24:27 getting us closer and also helping to explain to my grandmother what I was doing, which
24:33 was super interesting for me. And now I have this digital gallery with the memories of
24:41 my grandma and I see how the technology is evolving. And yeah, now it's like very static,
24:49 but maybe in a few days I will be able to enter into the memories of my grandma, like
24:55 navigate them, maybe one day even smell the memories, and have some conversations as well
25:01 with those memories. So it has opened up a completely new world. So what I wanted to
25:09 say is that beyond my personal story, I believe in using AI to go beyond and
25:19 see things that we cannot see today. And those that are using it, never afraid to fail,
25:30 willing to learn together with the machine, will really unlock a lot of potential. And
25:37 companies that are empowering those values, like curiosity and critical thinking, I think
25:45 they are the ones that are going to have the best potential and the best competitive advantage.
25:51 So for me, the question of how we ensure empathic AI while we are adopting the
26:00 technology is a question about values. So it's back to this fundamental
26:08 point that we need to invest the same or even more in values than in AI.
26:18 And the values around human computer interaction, where do you see a balance? Where do you see
26:26 synergy there for you? Are the human and the machine living symbiotically, or is one
26:32 leading versus the other? What are your perspectives around that?
26:36 Well, I don't know, but I believe in a future where we can use the three different intelligences
26:43 that we have: the natural, the planet's intelligence; the human, our intelligence; and of course
26:51 the machines'. So I think we need to evolve innovation towards a new value system where we can integrate
26:57 those three intelligences.
27:00 And it's going to take so much partnership and global thinking to get there. I think
27:06 too, what I love about the way that Paqui leads at IKEA is to begin with the culture,
27:12 to begin with the value system and think of the human. And I think we share this sort
27:17 of approach to innovation and I'll share quickly at Narratize, we have just launched the world's
27:24 first story infuser with generative AI, excited to announce this. The idea began back in my
27:30 days at the U.S. Department of Veterans Affairs. I was part of a team called My VA, My Story,
27:36 where we sent story collectors to the patient bedside to listen to the veteran's personal
27:41 experience and then we did something crazy. We put that paragraph into the EHR and we
27:47 asked caregivers and providers through control groups and test groups to read that story.
27:53 And those who read it, their provider empathy levels increased and the patient treatment
27:58 adherence increased. So we saw perhaps for the first time in evidence actually a connection
28:05 between story, empathy and a patient outcome. And inside the Narratize platform now we launched
28:11 this story infuser which can be used for so many other use cases besides the one I just
28:16 described. But the idea is how might we capture and ping one another to share those stories,
28:21 to collate them, to give you the chance to be prompted rather than you having to prompt
28:26 the chatbot, and ultimately be able to collect and share and then collate all of our insights
28:33 into one. And I think if we try to remember things like we are human because we tell stories
28:40 and we remember these types of values, we'll innovate in the right ways.
28:44 Glenn, I know there's some exciting aspects of this, too, that you wanted to share around,
28:50 you know, AI being more accurate in some ways. And so how might that change the future of
28:56 empathy-led medicine?
28:57 Yeah, when you told your story infuser story, the thing that immediately hit me was -- it's
29:05 amazing what a little bit of empathy does. And so the thing that struck me looking at
29:10 -- this is very, very early on when I was still a bit of a skeptic having lived through
29:14 this for a good number of years, that colleagues of mine, John Ayers and Mark Dredze, did some
29:23 work very, very early on with this boom of resurgence of interest in AI and took medical
29:31 questions and asked a model -- this is probably GPT-3 -- to answer medical
29:39 questions and then asked their doctor friends to do the same thing. And so they were able
29:44 to compare what is a human response to a medical question to an AI-generated response to a
29:49 medical question. And the accuracy, that's the first thing everyone should think about.
29:52 It was pretty good. Okay. And we could get better with RAG. Great. But the thing that
29:56 really got me was that the -- when they were asked to rate the empathy, the AI outscored
30:03 the humans by quite a bit. And so this really does change the experience of medicine if
30:10 we do this right. If we're -- this is a -- I hadn't thought that this would have been
30:14 even on the table. But the fact that the machines can take a bit more time, use a couple more
30:19 words, probably adapt more to the people that are there, personalize the message a bit,
30:23 perhaps, that could really change, fundamentally take stress out of what is a very stressful
30:29 experience for many of us. And so there's no greater empathy that might be possible
30:33 from my perspective than fixing some of that experience. And so you're -- the story infuser
30:39 thing, right, one of the things that is present there for many of those veterans is they don't
30:42 want to tell that story again. And the fact that we're able to provide some empathy by
30:47 capturing some of these things and by providing this as part of the medical record by any
30:52 number of other interactions, this is just one interaction, but what are the other ones
30:55 in which this kind of an idea can be present and infused, as some might say?
31:01 >> And I would love for the data powerhouses up here to share your insight
31:06 on this. Because really what's happened is a revolution in what we can do with data.
31:13 And I don't mean numerical data, right? I mean story, qualitative data, what's possible even
31:18 if it's unstructured. Your founding story is one that strikes me, if you can share.
31:25 >> Yeah, no, absolutely. So when I was in college, 9/11 happened, and I was specifically
31:33 studying the dissemination of realtime information and how gaps existed that, you know, prevented
31:42 people from essentially getting out of harm's way during that event. When public social
31:50 media came about ten years later, I thought, well, this is the type of data, public eyewitness
31:56 accounts of the world at large, that if you could combine with a machine learning system,
32:03 could actually solve that gap and actually deliver realtime information in an event like
32:07 9/11, where those that were in the tower didn't know to leave the buildings, and ultimately
32:13 lives were lost. You know, I think that there are many uses of AI that can truly save lives.
32:21 And, you know, ultimately today Dataminr is used by the United Nations in 100 different
32:28 countries, and we've been able to really systematically bring this realtime indication warning capability
32:35 across the world. >> Amazing. Okay. Maybe like one key takeaway
32:42 for each of us up here that we'd like to leave the room with today. Who would like to start?
32:50 Key takeaway and/or ask of the audience. How about that?
32:53 >> Okay. Okay. Well, I'm improvising. But I heard one day that if you repeat one
33:09 concept a lot and a lot, the meaning of the concept disappears. So, today the main
33:18 big word is AI. AI, AI, AI. So, let's ensure that AI doesn't lose its meaning, and
33:26 that we work for the good of society and people on the planet, to put this technology in service of
33:33 our needs. I think this is my end takeaway. >> Amazing. I'm happy to follow. I think, inspired
33:42 a lot by Glenn and Katie, your story, I think. AI is a sociotechnological innovation. We
33:51 cannot think purely about the technical: we're able to achieve this accuracy, we can pass
33:56 the U.S. medical licensing exam at better rates than doctors, perhaps. But
34:00 what makes AI uniquely powerful is I think technically it can get there, but also it
34:06 infuses the empathy and the human side in ways that we fall short in society today.
34:12 And so, I think one line that I would want to think more about, I think, is when we think
34:17 about innovation and AI, these are all buzzwords and there's a lot of hype today, but I think
34:21 thinking about it as a coherent system between society, between technology, and also between
34:26 government, which also plays a role. And so, I think making sure the balanced perspective
34:30 is there and that we push this innovation forward, thinking about it as a sociotechnological
34:35 system. >> I would offer that all of you are stressed.
34:40 I'm stressed over some of the AI. The stuff just keeps moving. And at some point, we are
34:46 going to turn that corner and we're going to be able to use this to reduce our
34:50 stress. And so, I'd offer that for those of you thinking in this space, what are the ways
34:54 in which you can reduce the stress of the people around you, your users, your academic
34:59 colleagues, whatever those might be? There is an incredible amount of stress and anxiety
35:03 in this world and this sure feels like it's part of the solution. So, please, and also,
35:09 everyone take a breath and take some water. It'll be good for everybody.
35:11 >> Yeah. >> I'll be quick because it sounds like we
35:14 have to end. I mean, I think just thinking about human interaction and empathy, I think
35:19 there's this misconception that something's either AI or it's human, but ultimately, the
35:24 two working together are what makes the outcomes unique and special. At Dataminr, we use human
35:30 and AI feedback loops extensively and I think it really is a combination of the best in
35:36 AI that could be brought out by human interactions and vice versa. And I think that can get lost
35:43 in the general conception of it. >> Thank you so much. We're so excited to
35:49 continue the conversation after the panel. We're hiring principal engineers and data
35:54 scientists. That's my take home. Talk to us. We're growing.
35:58 [ Laughter ]
