Transcript - AI That Doesn't Hallucinate: How Enterprises Can Safely Deploy AI Agents - Botco.ai

AI That Doesn't Hallucinate: How Enterprises Can Safely Deploy AI Agents

 

1

00:00:03.440 –> 00:00:16.579

Rebecca Clyde: All right, hello, everyone, and thank you for joining us here today. My name is Rebecca Clyde; I am the Co-founder and CEO of Botco.ai. Thank you for taking some time with us this morning to join us.

 

2

00:00:16.700 –> 00:00:30.380

Rebecca Clyde: I see a few folks are coming online, and I appreciate everyone for being part of this conversation today. As we wait for folks to join, I wanted to share a couple of housekeeping items.

 

3

00:00:30.390 –> 00:00:58.069

Rebecca Clyde: We have a Q&A channel here that's available for you to use to ask your questions. You can add your questions throughout the session; you don't need to wait for the end. I'm going to be watching it very carefully, and if something comes up as we're discussing it, we'll try to address it in real time; otherwise I'll be sure to leave some time at the end to cover those questions. We'll be together for about 45 minutes,

 

4

00:00:58.070 –> 00:01:27.369

Rebecca Clyde: and all of you who registered today will receive a link as well to view this webinar on demand, so you're welcome to share it with others. Today I'm joined by my esteemed colleague and CTO of Botco.ai, Dr. Chris Maeda. So, Chris, I'd love for you to give a quick rundown on your background, and then I'll share with folks a little bit more about our company, Botco.ai, and why we're talking about this important topic.

 

5

00:01:28.320 –> 00:01:34.399

Chris Maeda: Sure. I come from an academic computer science background. I was actually

 

6

00:01:34.510 –> 00:01:45.626

Chris Maeda: around for the first AI wave in the eighties with symbolic AI, and then lost my religion and went off into CRM and marketing technology. And

 

7

00:01:46.230 –> 00:02:04.329

Chris Maeda: when the deep learning stuff started to give impressive results, that's right about the time we started Botco.ai, and we've been busy leveraging that technology into our platform here. It's been a very exciting time, because it keeps getting better and better and making it easier to

 

8

00:02:04.490 –> 00:02:06.710

Chris Maeda: deliver solutions to our target markets.

 

9

00:02:07.310 –> 00:02:32.260

Rebecca Clyde: Wonderful. Thank you, Chris. And just to give folks a little bit of a rundown: we have implemented AI solutions for, I would say, over a hundred organizations, so we've gotten a pretty good base of experience to share with you here today. Some of the primary industries that we serve here at Botco.ai are regulated industries such as healthcare; we've done a lot in the pharmaceutical sector

 

10

00:02:32.440 –> 00:02:49.179

Rebecca Clyde: and in government agencies as well, both state and local. So if you are coming from one of those industries and you have questions that are specific to those categories, it's very possible that we have experience there, and we would be happy to cover your questions specifically.

 

11

00:02:49.180 –> 00:03:00.810

Rebecca Clyde: So today, we’re going to be talking about the importance of safely deploying AI agents because we know hallucinations can be a challenge. We’ve heard about those now for some time.

 

12

00:03:00.810 –> 00:03:24.049

Rebecca Clyde: And as we increase the capacity and capabilities of these AI agents, maintaining adherence to protocols, policies, and procedures is especially important. Everybody's talking about agents right now, so if that's what you're here for, you've come to the right place, and we'll be sure to cover some of your questions.

 

13

00:03:24.100 –> 00:03:35.780

Rebecca Clyde: Of course, I'd love to make sure that you're aware of what we have going on here at Botco.ai, and why some of the leading brands in healthcare and pharma work with us.

 

14

00:03:35.780 –> 00:04:00.759

Rebecca Clyde: It's because, through our platform, they're able to produce no-code agents, so they don't have to be engineers or PhDs like Chris in order to use the product, because ours have already done that work. We train these agents using secure methods and strong privacy protocols, especially because we do handle a lot of sensitive information, like patient information or financial

 

15

00:04:00.760 –> 00:04:20.970

Rebecca Clyde: information, within some of the workflows that we support. We have been able to develop a methodology that provides AI-assisted answers with well over 98% accuracy, and we have methods to provide audit controls over what the AI is trained on and the answers and responses it gives.

 

16

00:04:20.970 –> 00:04:43.109

Rebecca Clyde: And we pride ourselves on delivering a solution that is non-hallucinating. We'll talk a little bit more about some use cases and some examples as we go through. But I just wanted to start off with a couple of examples of how Botco.ai is supporting several industries. So to get going, I wanted to start with my first question to Chris.

 

17

00:04:43.110 –> 00:04:55.369

Rebecca Clyde: Chris, a lot of people have come to us at Botco.ai because they come from regulated industries where maybe in the past it would have been considered difficult to implement AI. And

 

18

00:04:55.710 –> 00:05:16.099

Rebecca Clyde: there are certainly some hesitations that some of these industries have presented. From what you are seeing, why is this the case? While there's a lot of promise and eagerness around AI, what are some of the reasons or hesitations that you're seeing out in the market when we're talking to customers in those sectors?

 

19

00:05:17.130 –> 00:05:19.030

Chris Maeda: Sure. I think

 

20

00:05:19.410 –> 00:05:24.790

Chris Maeda: the AI technology is so great because it can produce language that sounds right.

 

21

00:05:24.830 –> 00:05:39.979

Chris Maeda: But it isn't always correct. So for applications where you can have human review, it's kind of a slam dunk. That's things like having the AI write drafts of essays or notes, and then you have a human review it.

 

22

00:05:40.306 –> 00:06:09.700

Chris Maeda: You can have the AI write code, and then you can run it or have a human review it. So that's great. It's tougher in a business setting where you want the AI to speak on behalf of the business, because you want 100% accuracy there, and that's not where the AIs are as good. So we've built lots of scaffolding and guardrails around the AIs. It's basically things like tailoring prompts, making sure they're not going off topic,

 

23

00:06:11.130 –> 00:06:25.379

Chris Maeda: and then regression testing on the models to make sure that they're always giving you the right answer. There are a lot of things you have to do to leverage this game-changing technology to get the 100% accuracy that's needed for businesses.
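
To make Chris's "tailoring prompts" point concrete, here is a minimal sketch of that kind of guardrail, assuming the OpenAI Python SDK; the brand name, policy wording, and refusal message are illustrative placeholders, not Botco.ai's actual prompts.

```python
# A minimal guardrail sketch: constrain the model to supplied reference
# material and give it an explicit refusal path for off-topic questions.
# Model choice and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You are a support assistant for Acme Health (a hypothetical brand).
Answer ONLY from the reference material provided in each request.
If the question is off topic, or the material does not contain the answer,
reply exactly: "I'm sorry, I can't help with that. Let me connect you with
a team member." Never guess and never invent policy."""

def answer(question: str, reference_material: str) -> str:
    """Ask the model to answer strictly from the supplied reference text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model; pinned versions help regression testing
        temperature=0,        # deterministic output makes answers comparable run to run
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Reference material:\n{reference_material}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Regression testing then re-runs a fixed question set through answer() and diffs the results; a fuller sketch of that loop appears later in the session.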

 

24

00:06:25.910 –> 00:06:42.149

Rebecca Clyde: Exactly. And one of the things that I hear a lot about is the need for transparency: where did the answer come from, audit trails, being able to understand what content might have been referenced or sourced in order to generate a particular response.

 

25

00:06:42.190 –> 00:07:08.059

Rebecca Clyde: Because, as we all know, if the content is not up to date or correct and it's being used to train a model or an LLM, we could potentially end up giving out incorrect information. So that level of explainability and transparency is very important, particularly in a lot of the industries that we're talking to.

 

26

00:07:08.200 –> 00:07:19.149

Rebecca Clyde: So if you're in the audience today, and this is an area that you have had to address, or you have a desire to implement AI but your leadership or management is questioning it or

 

27

00:07:19.150 –> 00:07:25.580

Rebecca Clyde: bringing up some concerns, there's always that one person, right, who's saying, but wait a minute, what about A, B, or C?

 

28

00:07:25.580 –> 00:07:50.450

Rebecca Clyde: Feel free to ask your questions specifically, because it's quite possible that we might have already addressed them with one of our existing customers. So, moving on to our next question: a lot of the organizations that are here with us today are probably publicly traded companies or large enterprise organizations, whether they're in healthcare or pharma or in government.

 

29

00:07:51.335 –> 00:08:11.959

Rebecca Clyde: What is some of that scaffolding, or some of the pillars, that are required in more of an enterprise-grade AI solution? And how should organizations go about thinking about all the elements that need to be part of the AI solution that they pursue?

 

30

00:08:14.320 –> 00:08:35.510

Chris Maeda: The way I think about it is, it's kind of like the business process reengineering stuff that was happening in the eighties and nineties, where businesses mapped out their processes and said: where are we waiting on people? Where are steps taking too long? And I think, for the agentic world, businesses may have to

 

31

00:08:35.909 –> 00:08:44.974

Chris Maeda: revisit those process flows and understand: where do you have an analysis step? Where do you have a bunch of

 

32

00:08:46.148 –> 00:08:51.861

Chris Maeda: steps that could be automated? That's where you could inject agentic technology.

 

33

00:08:52.490 –> 00:09:13.599

Chris Maeda: If you look at what AIs are doing right now, they're very good at analysis. You could think of that as one possible thing that an agent can do, and then there are additional steps that can be done as well. So anytime you can automate a set of steps according to some rules, that's where you could inject some AI to speed the process up

 

34

00:09:13.850 –> 00:09:18.079

Chris Maeda: and automate those steps for you.

 

35

00:09:19.400 –> 00:09:42.910

Rebecca Clyde: Right. And in a lot of those steps, one of the things that often happens is they require access or authentication to a system of record. That's something I'm constantly seeing: I need to be able to access this CRM or this EHR system in order to extract a piece of information and then determine what the next best action is. That's something we see a lot in these agentic workflows.

 

36

00:09:43.328 –> 00:09:54.170

Rebecca Clyde: So how do we make sure that, as these agents are accessing these other systems of record, there is an element of security and authentication as part of that process?

 

37

00:09:55.920 –> 00:10:10.379

Chris Maeda: Yeah, that's almost the biggest sticking point right now. The agents could do a lot of these things, but we don't really have the infrastructure to authenticate the programmatic access from the agents. So I think, as

 

38

00:10:10.974 –> 00:10:32.730

Chris Maeda: more and more systems of record get the kinds of APIs that can be used by AI agents, you'll see more and more of these scenarios realized. But right now, websites are designed for humans to log in. Sometimes there's an API, sometimes there's not, but there really isn't

 

39

00:10:33.930 –> 00:10:49.479

Chris Maeda: the kind of API access that agents might want. Maybe you'd want a read-only mode, certain kinds of limits, etc. So I think there are some infrastructure pieces that need to be built to enable the agents to take this kind of action.

 

40

00:10:50.177 –> 00:11:16.042

Rebecca Clyde: Right. I think we'll have to see role-based access control also handed to these agents, so they have, like you said, maybe read-only access, or access to only limited aspects of the data, not the entire data set, so that you can really control how they're being utilized and what they have access to. That becomes really important.
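
As a concrete illustration of the read-only, role-based access both speakers describe, here is a minimal sketch using only the Python standard library; the scope names, credential shape, and in-memory record store are invented for the example, where a real deployment would sit behind OAuth scopes or an API gateway.

```python
# Read-only, scoped credentials for an agent "tool": the write path exists,
# but no agent credential is ever granted the write scope.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCredential:
    agent_id: str
    scopes: frozenset  # e.g. {"patients:read"} -- deliberately no write scope

RECORDS = {"patient-42": {"name": "Jane Doe", "eligibility": "active"}}

def read_record(cred: AgentCredential, record_id: str) -> dict:
    """Read-only tool: refuses unless the credential carries the read scope."""
    if "patients:read" not in cred.scopes:
        raise PermissionError(f"{cred.agent_id} lacks patients:read")
    return dict(RECORDS[record_id])  # return a copy, never a live handle

def write_record(cred: AgentCredential, record_id: str, data: dict) -> None:
    """Write tool: no agent credential in this sketch ever receives this scope."""
    if "patients:write" not in cred.scopes:
        raise PermissionError(f"{cred.agent_id} lacks patients:write")
    RECORDS[record_id].update(data)

agent = AgentCredential("eligibility-bot", frozenset({"patients:read"}))
print(read_record(agent, "patient-42"))   # allowed
# write_record(agent, "patient-42", {})   # would raise PermissionError
```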

 

41

00:11:16.920 –> 00:11:45.339

Rebecca Clyde: So, looking at that level of detail in some of the agents that we are seeing out there, can you give me an example? Everybody loves to see what some of the tasks are that we have seen agents do, even from our own standpoint with the customers we've supported, because I think a lot of folks here on the line would like to hear what some of those examples might be, maybe what some of the low-hanging fruit could be, particularly in pharma or in healthcare.

 

42

00:11:48.090 –> 00:11:50.596

Chris Maeda: So yeah, in pharma and healthcare,

 

43

00:11:51.370 –> 00:12:09.979

Chris Maeda: we have some examples. I think you're going to talk about adverse event detection and reporting. There, you want to have a model that's trained to detect adverse events, and then, if one is detected, there are certain government databases where you might file these reports. So that's one

 

44

00:12:10.403 –> 00:12:17.850

Chris Maeda: example of an agent. I guess the kind of thing that's getting talked about a lot

 

45

00:12:17.960 –> 00:12:23.300

Chris Maeda: this month is browser-based agents, where you might automate different kinds of

 

46

00:12:23.763 –> 00:12:31.996

Chris Maeda: interactions with different websites. It's still an emerging technology, but you could see a lot of

 

47

00:12:32.550 –> 00:12:36.713

Chris Maeda: very simple automation that gets driven out of these browsers.

 

48

00:12:37.560 –> 00:12:43.759

Chris Maeda: And then another example that we've talked about: say

 

49

00:12:43.920 –> 00:12:50.110

Chris Maeda: one of your business functions is to make introductions and

 

50

00:12:50.270 –> 00:12:58.069

Chris Maeda: walk potential clients through a qualification process. The agents can do that as well.

 

51

00:12:58.590 –> 00:13:04.922

Chris Maeda: And in the case of pharma or healthcare, that might be clinical trial enrollment. It might be

 

52

00:13:06.090 –> 00:13:19.020

Chris Maeda: getting patients scheduled with specialists, etc. There are lots of things that today require lots of phone calls and phone tag and scheduling, which the agents could probably automate very easily.

 

53

00:13:19.320 –> 00:13:29.999

Rebecca Clyde: Yeah, I'm thinking of things like insurance eligibility and pre-authorizations. Those tend to be really big bottlenecks

 

54

00:13:30.110 –> 00:13:48.690

Rebecca Clyde: in the system that are often caused simply because there are people involved in those steps: there are manual lookups taking place to check those things, there are phone calls being made. I hear that some of our customers have gone from having to wait on hold with

 

55

00:13:48.690 –> 00:14:05.239

Rebecca Clyde: UnitedHealthcare or Blue Cross to verify somebody's eligibility to being able to do that in a matter of seconds using AI agents. So they're no longer having to delay some of these treatments or some of these

 

56

00:14:05.240 –> 00:14:15.899

Rebecca Clyde: scheduling steps, because they can now handle those things almost in real time. I'm seeing, like you said, another big opportunity

 

57

00:14:16.560 –> 00:14:44.531

Rebecca Clyde: in the clinical trial space, just because one of the big challenges has always been assessing someone's eligibility. In order to do so, you might have to have access to a patient record; maybe there are patient biomarkers required for someone to be eligible; and then, of course, there are real-time trial slots. Are they still taking patients,

 

58

00:14:44.950 –> 00:15:00.329

Rebecca Clyde: and if so, from where? All of these different pieces of information sometimes take a lot of time to come together. A good friend of mine was just telling me that it took him almost three months to find a clinical trial for his mom. She was dealing with late-stage cancer,

 

59

00:15:00.330 –> 00:15:21.630

Rebecca Clyde: and the family really wanted her to participate in a trial, and it took several months. The goal here would be that with agents we could reduce that down to a matter of moments: within a single day or less, determine someone's eligibility and get them into those trials much faster. So there are some real-life implications here.

 

60

00:15:21.660 –> 00:15:31.310

Rebecca Clyde: So, for those of you on the line, as you're thinking about where to apply some of these agents, I'd suggest

 

61

00:15:31.310 –> 00:15:43.370

Rebecca Clyde: thinking about where the bottlenecks are, where there's a lot of manual process, but also where the biggest impact could possibly be, especially when it comes to delivering care or delivering life-changing solutions.

 

62

00:15:43.370 –> 00:16:04.340

Rebecca Clyde: And then, if some of you folks here are from the government sector, maybe it's not so life-threatening a scenario, but it could certainly go a long way toward smoothing out what has typically been a high-friction environment. I'm feeling this a lot right now, with my 16-year-old getting her driver's license, having to go through the DMV process

 

63

00:16:04.783 –> 00:16:19.506

Rebecca Clyde: once again. It just reminded me: wow, we could do so much of this with agents. So if somebody here on the line works in that world, please create a solution for the DMV. I think we would all be very grateful.

 

64

00:16:19.820 –> 00:16:22.020

Chris Maeda: Just to

 

65

00:16:22.310 –> 00:16:34.530

Chris Maeda: generalize that: the place where the AIs are great is analysis. It's essentially a matching problem, taking all these different criteria

 

66

00:16:34.660 –> 00:16:47.369

Chris Maeda: and netting them out into whether you meet them or not, for a lot of things at once. It might be the eligibility criteria for clinical trials: look at a lot of them at once, and

 

67

00:16:47.470 –> 00:17:01.101

Chris Maeda: maybe come up with: here are the questions I'd ask you, and then, based on that, I can tell you what you're eligible for. Or indications and contraindications for pharmaceuticals, or

 

68

00:17:02.530 –> 00:17:05.419

Chris Maeda: what are the criteria for different government programs

 

69

00:17:05.560 –> 00:17:19.783

Chris Maeda: etc. And then, if you match one of those, it can also execute the steps to enroll you in the program, trial, doctor visit, etc. So that's sort of

 

70

00:17:21.050 –> 00:17:26.460

Chris Maeda: maybe the most general description of what an agent can do, and what agentic technology could do.

 

71

00:17:27.030 –> 00:17:49.369

Rebecca Clyde: Yeah, especially as we think about customer-facing interactions, those certainly come to mind. There might even be some others that are more internally facing. I hear a lot of people wanting to implement agents to consume a lot of data. We're getting requests for things like: hey, we have all of this data coming in from reports, from industry studies,

 

72

00:17:49.737 –> 00:18:01.539

Rebecca Clyde: and it's really hard for our team to parse out what's important and what's not, and then to help us make decisions based on that. So, very similar opportunities there.

 

73

00:18:02.048 –> 00:18:06.979

Rebecca Clyde: Let's switch gears a little bit. I mentioned hallucinations at the beginning,

 

74

00:18:07.608 –> 00:18:14.460

Rebecca Clyde: but it's always important to go back to this point, because I find myself,

 

75

00:18:15.260 –> 00:18:16.300

Rebecca Clyde: you know,

 

76

00:18:17.350 –> 00:18:35.099

Rebecca Clyde: having to make sure that in my own use of AI, I'm double-checking and verifying things. What is the issue around hallucinations, just to recap? And then how do we make sure to avoid them? What are some steps that we can take there, Chris?

 

77

00:18:35.770 –> 00:18:45.019

Chris Maeda: Sure. Well, like I said, with the basic AI technology, you train a neural net to produce language that sounds correct, but

 

78

00:18:45.400 –> 00:18:59.600

Chris Maeda: you have no way of knowing that it really is correct. It just produces something that sounds good, and it has no way of knowing, because it's just a bunch of numbers producing more numbers. So,

 

79

00:19:00.470 –> 00:19:06.023

Chris Maeda: to actually build what's called a retrieval-augmented generation system, you have to

 

80

00:19:06.880 –> 00:19:21.788

Chris Maeda: fence in what the AIs can produce. You control what they can do using prompts; you give them relevant information; you tell them to only use that relevant information to produce the answer. And then you have to build

 

81

00:19:22.510 –> 00:19:44.423

Chris Maeda: evaluation or testing infrastructure to make sure that they're producing the correct answers. And then it's kind of give and take between the model creators and the model consumers. On the model creator side, they're working against benchmarks that will reduce the incidence of hallucinations and

 

82

00:19:45.770 –> 00:19:57.369

Chris Maeda: test how the models focus in on the most appropriate information. As an example, one of the benchmarks for the models is this thing called the needle-in-a-haystack benchmark, where

 

83

00:19:57.500 –> 00:20:15.720

Chris Maeda: they give the model a bunch of data with the relevant data buried in the middle, and you test how well the model can pick out the relevant data and produce the correct answer. And there are other benchmarks that ensure the models stay on target. A lot of the hallucination stuff was

 

84

00:20:16.070 –> 00:20:43.100

Chris Maeda: from two years ago, and the models have become much better at reducing the incidence of hallucinations because of benchmarks and training like this. So both the models and the systems that consume the models have gotten better at this. And those are the standard things that you need to be doing to ensure that your AIs are producing accurate information every time.
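
Here is a minimal sketch of the retrieval-augmented generation loop Chris outlines, assuming OpenAI embeddings and chat models and NumPy for the similarity math; the document snippets are invented, and chunking, vector storage, and re-ranking are deliberately elided.

```python
# Minimal RAG: embed documents once, retrieve the most similar ones for each
# question, and instruct the model to answer only from that context.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Refunds are available within 30 days of purchase with a receipt.",
    "Shipping takes 5-7 business days within the continental US.",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

DOC_VECS = embed(DOCS)

def rag_answer(question: str, k: int = 1) -> str:
    qv = embed([question])[0]
    # cosine similarity of the question against every document; keep the top k
    sims = DOC_VECS @ qv / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(qv))
    context = "\n".join(DOCS[i] for i in np.argsort(sims)[::-1][:k])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Answer only from the context. If the context does not "
                        "contain the answer, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```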

 

85

00:20:43.887 –> 00:20:52.569

Rebecca Clyde: Yeah, I appreciate that. And somebody here asked a question, so I'm referring to the chat questions from the audience.

 

86

00:20:53.207 –> 00:21:06.759

Rebecca Clyde: Somebody asked: have you seen any real-world examples where AI hallucinations caused damage, or could have? And how was that mitigated? Or, if it did cause damage, how should it have been mitigated?

 

87

00:21:07.800 –> 00:21:08.600

Chris Maeda: Yeah. Do you have any?

 

88

00:21:08.600 –> 00:21:10.040

Chris Maeda: We did have an example where,

 

89

00:21:11.063 –> 00:21:21.139

Chris Maeda: an AI misquoted a refund policy for an airline, and the person sued and won in court.

 

90

00:21:21.998 –> 00:21:24.389

Chris Maeda: And they said, well,

 

91

00:21:24.540 –> 00:21:30.589

Chris Maeda: it doesn't matter if the AI is wrong; the AI was representing the company, so you have to stand by that statement.

 

92

00:21:32.140 –> 00:21:33.660

Chris Maeda: So that's one.

 

93

00:21:33.660 –> 00:21:39.160

Rebecca Clyde: What should they have done? Where did that go awry?

 

94

00:21:39.970 –> 00:21:41.029

Rebecca Clyde: What was the...

 

95

00:21:43.460 –> 00:22:01.633

Chris Maeda: Well, I guess there wasn't sufficient testing. The person asked a question about a refund, and the AI stitched together a couple of phrases from two different refund policy documents and created a brand new policy.

 

96

00:22:02.100 –> 00:22:03.339

Chris Maeda: So that's what happened.

 

97

00:22:03.690 –> 00:22:12.355

Chris Maeda: And then they said, well, that's not our policy, and the customer said, well, that's what your AI said it was. And so they sued, and they won.

 

98

00:22:13.830 –> 00:22:21.059

Chris Maeda: So that's where you'd want to test very carefully with a bunch of scenarios and make sure that it's not giving you made-up answers.

 

99

00:22:21.700 –> 00:22:44.449

Rebecca Clyde: Right, absolutely. So in this case, it's about having a system to validate. One of the things that we have done in response to this, and I can share my screen really quickly here, is create this AI playground, where customers who have access to our platform can test all of their content against several different prompts

 

100

00:22:44.450 –> 00:23:03.749

Rebecca Clyde: and against different questions. We will show the retrieved source and the answer, and we can show them history around those questions. This just becomes another way to visually see whether those potential questions are going to be answered correctly or not.

 

101

00:23:04.100 –> 00:23:20.210

Rebecca Clyde: What's also helpful is being able to provide a relevance score around the content, because one of the things we often find is that there's a gap: some of the content sources may not be super clear or current. And so the AI may score them

 

102

00:23:20.662 –> 00:23:35.099

Rebecca Clyde: lower and not use them. So we provide that environment for that testing to take place. That's just one of the things that you would want to look for and make sure happens carefully before going live.
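
A minimal sketch of the relevance-score gate Rebecca describes follows; the threshold value and the shapes of the retrieve and generate callables are assumptions, since the playground's internals aren't shown.

```python
# Gate answers on retrieval relevance: if no source clears the threshold,
# decline rather than let the model improvise from stale or missing content.
RELEVANCE_THRESHOLD = 0.75  # illustrative; tune against your own eval data

def gated_answer(question, retrieve, generate):
    """retrieve(q) -> list of (score, passage); generate(q, passages) -> str."""
    hits = [(score, passage) for score, passage in retrieve(question)
            if score >= RELEVANCE_THRESHOLD]
    if not hits:
        # Low scores often signal a content gap (unclear or out-of-date
        # sources), which is a cue to fix the content, not the prompt.
        return "I don't have a reliable source for that yet."
    return generate(question, [passage for _, passage in hits])
```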

 

103

00:23:35.290 –> 00:23:49.190

Rebecca Clyde: There's another question here from the audience: if a healthcare or pharma org is just getting started with AI agents, what is the safest and most impactful place to begin?

 

104

00:23:50.230 –> 00:23:52.150

Rebecca Clyde: What are your thoughts on that, Chris?

 

105

00:23:54.705 –> 00:24:04.869

Chris Maeda: Well, safest would certainly be non-clinical environments. So, topics like...

 

106

00:24:05.860 –> 00:24:07.800

Rebecca Clyde: More administrative functions, perhaps.

 

107

00:24:08.130 –> 00:24:13.709

Chris Maeda: Yeah, administrative functions. Marketing to doctors,

 

108

00:24:15.530 –> 00:24:22.028

Chris Maeda: answering questions about a single pharmaceutical for consumers,

 

109

00:24:22.760 –> 00:24:25.871

Chris Maeda: but definitely not clinical decision-making.

 

110

00:24:28.360 –> 00:24:40.240

Rebecca Clyde: Exactly. Start maybe with some lower-risk opportunities, like you said; really work out the kinks first, and then move into some of those more clinical decisions.

 

111

00:24:40.240 –> 00:24:45.674

Chris Maeda: If you want to talk about this pharmaceutical, great, it can do that all day long. If you want to talk about,

 

112

00:24:46.700 –> 00:24:47.746

Chris Maeda: you know,

 

113

00:24:49.670 –> 00:24:58.269

Chris Maeda: the criteria for a trial, great. Do you want to decide whether to administer a drug or make a diagnosis? That's a lot more risky.

 

114

00:24:58.810 –> 00:25:13.790

Rebecca Clyde: Right. I think that's a good way to think about it: really get comfortable as an organization with some of those components, so that you have more experience organizationally as you move to some of those more advanced uses.

 

115

00:25:13.790 –> 00:25:25.849

Chris Maeda: And we've had clients who said, you can talk about the pharmaceutical all day long, but do not do any diagnosis, and make sure you don't stray into that. And here are examples of straying; make sure you don't do that.

 

116

00:25:25.960 –> 00:25:26.953

Chris Maeda: So that's...

 

117

00:25:29.290 –> 00:25:45.469

Rebecca Clyde: Yeah. And what you just shared is something important that a lot of people forget. When we're training these AI agents, sometimes we think about all the positive examples: we want you to say these things, or we want you to reference this content.

 

118

00:25:45.610 –> 00:25:53.050

Rebecca Clyde: But talk a little bit about negative examples, Chris, and what that means and why they're important, because it plays into this as well.

 

119

00:25:53.440 –> 00:26:00.550

Chris Maeda: Yeah. Well, like we said, clinical stuff is dangerous, or, well, not dangerous, but

 

120

00:26:00.650 –> 00:26:07.389

Chris Maeda: has more risk. So we've had pharma clients who said,

 

121

00:26:07.750 –> 00:26:16.189

Chris Maeda: make sure you don't do any sort of diagnostic activity, and here are examples of diagnostic activity, so that you can

 

122

00:26:16.850 –> 00:26:23.340

Chris Maeda: show the model what not to do. So in addition to telling the model what's correct, you have to tell it things like,

 

123

00:26:23.500 –> 00:26:32.560

Chris Maeda: don't cross this line into doing diagnoses, or don't cross this line into a forbidden

 

124

00:26:32.560 –> 00:26:33.010

Rebecca Clyde: Activity.

 

125

00:26:33.010 –> 00:26:45.379

Chris Maeda: Or something like that. And again, the models will do their best to do what you tell them, so if you give them positive examples and negative examples, they'll try to stay within the lines.
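
To illustrate the positive-plus-negative-example pattern Chris describes, here is a minimal system-prompt sketch; the drug name and every example line are invented, not taken from any client's deployment.

```python
# Pairing positive examples (in scope) with negative examples (out of scope)
# so the model can recognize, and refuse, diagnostic questions.
SYSTEM_PROMPT = """You answer questions about Examplamab (a hypothetical drug)
using only the approved label text supplied with each question.

DO answer questions like these (positive examples):
- "What is the recommended dosage of Examplamab?"
- "How should Examplamab be stored?"

DO NOT answer questions like these (negative examples -- this is diagnosis):
- "I have a rash and a fever. Do I have measles?"
- "Should I take Examplamab for my symptoms?"

If a question crosses into diagnosis, reply exactly:
"I can't offer medical advice. Please talk to your doctor or pharmacist."
"""
```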

 

126

00:26:45.980 –> 00:27:09.219

Rebecca Clyde: Yeah, thank you. That's a trick of the trade that I think we have learned very well, and our system supports that kind of infrastructure. But a lot of people who are new at this are not aware of the importance of those kinds of negative examples in addition to the positive examples. So hopefully that's something new that you learned here today that you can take back to your organization.

 

127

00:27:09.831 –> 00:27:28.500

Rebecca Clyde: All right. Along those lines, we have a follow-up question about continuous testing and improvement. Once the agent has been deployed, what are some ways to do continuous improvement and testing on an ongoing basis?

 

128

00:27:30.510 –> 00:27:34.914

Chris Maeda: Yeah. You showed the example of the AI playground. So

 

129

00:27:35.320 –> 00:27:52.330

Chris Maeda: one thing that our platform does is log every question, every interaction with the model, and we have tooling to take those logs and turn them into evaluation data sets. So if you first

 

130

00:27:52.730 –> 00:27:55.510

Chris Maeda: verify that those are correct answers, then you can

 

131

00:27:56.890 –> 00:28:11.649

Chris Maeda: run those questions through the model and make sure that the answers are stable and not changing. And there are different ways it could change: the underlying data might change, or sometimes the model changes and it gives you

 

132

00:28:12.054 –> 00:28:20.885

Chris Maeda: different answers. If a model provider releases a new revision of the model, you might get different answers from it. So it's important to be able to detect that.

 

133

00:28:21.170 –> 00:28:23.810

Rebecca Clyde: We’ve experienced that. Yep. Yeah.

 

134

00:28:23.810 –> 00:28:24.385

Chris Maeda: Yeah,

 

135

00:28:25.080 –> 00:28:26.110

Chris Maeda: So

 

136

00:28:27.570 –> 00:28:47.449

Chris Maeda: Like I said, our tooling allows this sort of natural capture of evaluation data. You can have people testing it informally, and that all gets captured into our logs and turned into formal evaluation data sets, and that allows us to do regression testing on the models anytime the model changes, anytime the data changes, etc.
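
Here is a minimal sketch of the log-to-eval-set loop Chris describes; the JSONL format and the exact-match comparison are simplifying assumptions, as production evals typically use a grader model or embedding similarity rather than string equality.

```python
# Turn human-reviewed logs into a regression suite and re-run it whenever the
# model, the prompt, or the underlying content changes.
import json

def load_eval_set(path="approved_answers.jsonl"):
    """Each line: {"question": ..., "approved_answer": ...} from reviewed logs."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def run_regression(answer_fn, eval_set):
    """Re-ask every approved question and report answers that drifted."""
    failures = []
    for case in eval_set:
        got = answer_fn(case["question"])
        if got.strip() != case["approved_answer"].strip():
            failures.append({"question": case["question"],
                             "expected": case["approved_answer"],
                             "got": got})
    print(f"{len(eval_set) - len(failures)}/{len(eval_set)} answers stable")
    return failures
```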

 

137

00:28:48.040 –> 00:29:06.020

Rebecca Clyde: Yeah, and just keep in mind that this is a very dynamic environment. Two things, like you said, are always changing. Your content may be changing, because there's always new research, new studies being released, new standards of care, new protocols. So

 

138

00:29:06.020 –> 00:29:35.390

Rebecca Clyde: the models have to keep up with those changes, and you want to make sure that anytime a content change is introduced, the model has been updated appropriately. But then, like you said, there might even be changes on the model side itself. What gave a perfect answer yesterday may not today. So staying up to date with that becomes very important. And this is, I guess, the fun part of implementing AI: keeping up with the rapid rate of change,

 

139

00:29:35.440 –> 00:29:40.060

Rebecca Clyde: because it really is pretty significant and exciting at the same time.

 

140

00:29:40.515 –> 00:29:47.309

Rebecca Clyde: But you can't just build it and forget about it. I definitely don't recommend that. Keep up with the changes.

 

141

00:29:48.165 –> 00:29:48.630

Rebecca Clyde: Alright.

 

142

00:29:48.630 –> 00:29:55.029

Chris Maeda: We've also seen the trade-offs between fine-tuning and not fine-tuning change. Earlier, we had

 

143

00:29:55.220 –> 00:30:00.429

Chris Maeda: applications where the model needed to be fine-tuned to answer correctly, and then

 

144

00:30:00.880 –> 00:30:16.899

Chris Maeda: we compared the fine-tuned model against newer revisions of the base model, and the newer revision just outperformed the fine-tuned model. So you have to continually evaluate the models that you're using, because the performance keeps changing, and mostly improving.

 

145

00:30:17.380 –> 00:30:30.940

Rebecca Clyde: Right, which is the encouraging part. All right, there's another good question here: what is your perspective on having a qualified human in the loop and using AI in a more supervised mode?

 

146

00:30:34.590 –> 00:30:53.099

Chris Maeda: It's great if your application allows it. Like I said, the early applications where the AI would write a first draft for you and then humans could review it, that was great, because it was a very safe way of using AI.

 

147

00:30:53.550 –> 00:31:05.999

Chris Maeda: It's not feasible in all applications, though. So what we've had to do in our tooling is allow humans to review it after the fact.

 

148

00:31:07.300 –> 00:31:10.490

Chris Maeda: And in some cases you can have

 

149

00:31:11.014 –> 00:31:27.879

Chris Maeda: another model reviewing the answers from the first model to try to detect incorrect answers more quickly. But yeah, if your application allows a human in the loop, it's great. It's just that

 

150

00:31:28.020 –> 00:31:31.320

Chris Maeda: it's not always feasible to have a human in the loop.

 

151

00:31:32.030 –> 00:31:48.850

Rebecca Clyde: Yeah, there might be too much data for a human to be able to consume. I think that's the biggest challenge that we find: there's just too much. So one of the things that we've also done is create a hierarchy, where, let's say, the fact-checker AI

 

152

00:31:48.850 –> 00:32:02.240

Rebecca Clyde: comes in and does its job; it can flag a few things that need human review, and then just those exceptions go over to a human. There are ways to make that more feasible, because, again, if you're just creating another bottleneck,

 

153

00:32:02.789 –> 00:32:24.980

Rebecca Clyde: that doesn't help either. The whole point was to streamline. But in some cases there could be a very important step, and I am seeing that, especially on the diagnostic side, where the AIs may make recommendations, but it's still up to the clinical staff to make the final determination and make a diagnosis. So that's a place where I do see a lot of human in the loop

 

154

00:32:25.560 –> 00:32:27.899

Rebecca Clyde: as necessary, at least at this time.
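
As a concrete sketch of the fact-checker hierarchy just described, here a second model grades each answer against its sources so only the flagged exceptions reach the human queue; the model name and the SUPPORTED/UNSUPPORTED rubric are assumptions for illustration.

```python
# LLM-as-judge: a second model checks each answer against its sources and
# flags unsupported claims for human review.
from openai import OpenAI

client = OpenAI()

def needs_human_review(question: str, answer: str, sources: str) -> bool:
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                "Does the ANSWER make any claim not supported by SOURCES? "
                "Reply with exactly one word: SUPPORTED or UNSUPPORTED.\n\n"
                f"QUESTION: {question}\nANSWER: {answer}\nSOURCES: {sources}"
            ),
        }],
    ).choices[0].message.content
    return "UNSUPPORTED" in verdict  # only these exceptions go to a human
```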

 

155

00:32:28.483 –> 00:32:37.769

Rebecca Clyde: Gosh, we're getting great questions from the audience. Thanks, everyone. There's another question here about RAG, retrieval-augmented generation.

 

156

00:32:38.040 –> 00:32:46.179

Rebecca Clyde: Can you talk about why it's essential? I know you touched on it briefly. And then, how do you know if it's implemented correctly?

 

157

00:32:49.030 –> 00:32:51.970

Chris Maeda: Yeah, I mean, it’s essential because

 

158

00:32:52.170 –> 00:33:03.600

Chris Maeda: of the hallucination problem. The models are trained on a corpus of data, and if you just ask them

 

159

00:33:04.210 –> 00:33:09.560

Chris Maeda: questions without providing relevant information, they'll answer anyway.

 

160

00:33:09.720 –> 00:33:20.550

Chris Maeda: They'll cobble together an answer from the training data, which may or may not be correct, may or may not be current, etc. So RAG was developed to say:

 

161

00:33:20.660 –> 00:33:31.780

Chris Maeda: let's identify the up-to-the-minute factual information and provide it to the models, so that the models can produce their answer from the correct data. So,

 

162

00:33:33.100 –> 00:33:45.450

Chris Maeda: like I said, if your application depends on accuracy out of the model, then it's important to have that data front and center so that the model can use it to create an answer.

 

163

00:33:46.960 –> 00:33:49.399

Chris Maeda: Sorry, what was that? There was a second part of the question?

 

164

00:33:49.649 –> 00:33:53.140

Rebecca Clyde: How do you make sure the RAG is implemented correctly? What's the...

 

165

00:33:53.140 –> 00:34:01.300

Chris Maeda: Right. So I think that's the testing part.

 

166

00:34:01.580 –> 00:34:07.519

Chris Maeda: Essentially, when we roll out a new chatbot,

 

167

00:34:08.009 –> 00:34:22.740

Chris Maeda: we have people test it out, we make sure that it's producing correct answers, and we turn that testing into evaluation data sets that can be run automatically on the model in the future. So you have to

 

168

00:34:23.300 –> 00:34:28.609

Chris Maeda: make sure it's giving you the correct answers, and then continually test to make sure it keeps giving you correct answers.

 

169

00:34:29.330 –> 00:34:43.100

Rebecca Clyde: Exactly. All right, thank you, Chris, great response. Wow, another one. All right: what is a misconception that we often see around the use of AI in regulated industries?

 

170

00:34:47.920 –> 00:34:48.730

Chris Maeda: I, I...

 

171

00:34:49.070 –> 00:34:52.090

Chris Maeda: I don’t know. I’m too close.

 

172

00:34:52.090 –> 00:34:57.969

Rebecca Clyde: Yeah, I'm happy to. It's not really a technical question, so I can take this one.

 

173

00:34:58.090 –> 00:34:59.330

Chris Maeda: No.

 

174

00:34:59.870 –> 00:35:03.439

Rebecca Clyde: You know, one of the things I have found is that

 

175

00:35:03.926 –> 00:35:32.960

Rebecca Clyde: I think people fall into a couple of camps. You have the fear camp, right, that says, no, this is problematic because of these reasons: it'll take our jobs, or it'll go off the rails, it will take over the world. So you have that group of folks that might be chiming in, and a lot of times there's a lot of concern or hesitancy about applying AI in these more regulated markets: oh, it'll put us out of compliance.

 

176

00:35:34.310 –> 00:35:37.891

Rebecca Clyde: That's what I hear a lot. Or there's simply

 

177

00:35:38.350 –> 00:35:58.249

Rebecca Clyde: hesitancy around, or a misconception around, how easy it might be. So you have the people who think it's too hard, almost impossible, and then there's another group of people who think, oh, this is so easy, we can just knock something out using ChatGPT and we'll be fine, without really understanding the complexity and interdependence

 

178

00:35:58.250 –> 00:36:16.320

Rebecca Clyde: of AI and how it might work through your system. That's what I usually see as a challenge. So we spend a lot of time, like we're doing today with Chris, just talking to customers about what those misconceptions might be, or what some of the challenges are, and then helping them think through

 

179

00:36:16.320 –> 00:36:32.980

Rebecca Clyde: how we approach all of those different components. And I think once people have a better understanding, they can see: okay, where is this going to fit into my organization? How do I staff this initiative correctly? Who are the right people to bring in? So it becomes more of an organizational

 

180

00:36:32.980 –> 00:36:56.549

Rebecca Clyde: consideration more than anything. But one of the things I'm seeing a lot is the use of AI to help drive compliance. Even though there's a concern that it would create compliance problems, we're actually seeing it applied to driving compliance. And I was going to share an example here, since we have a few minutes.

 

181

00:36:57.226 –> 00:36:59.050

Rebecca Clyde: So this is a

 

182

00:36:59.190 –> 00:37:11.500

Rebecca Clyde: real, live example. Let me see if it's sharing. This is a chatbot that a customer had put in place; they were launching a new treatment.

 

183

00:37:11.700 –> 00:37:13.469

Rebecca Clyde: It was a topical gel

 

184

00:37:13.920 –> 00:37:35.310

Rebecca Clyde: for a particular condition, and they started getting some questions about the product, maybe people getting rashes or something like that. So what we did as part of the process was create an agent that automatically identified if somebody mentioned any kind of adverse event, and then we

 

185

00:37:35.400 –> 00:37:44.940

Rebecca Clyde: created a scoring mechanism to score that event and then trigger all of these notifications and reports to all the appropriate people. So the compliance team

 

186

00:37:44.940 –> 00:38:06.240

Rebecca Clyde: could have a log of all these issues, everybody that needed to know about it would know, and then ultimately we provided them with a dashboard, a single pane of glass, where they could see any adverse event or product quality issues that had been reported by consumers or by any of the retailers that were carrying that product.

 

187

00:38:06.240 –> 00:38:27.310

Rebecca Clyde: So what would have been considered a compliance nightmare before really got streamlined through the use of this agentic system to track adverse event and product quality issues during the rollout of a new product. I just wanted to share that one example.
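
Here is a minimal sketch of the detect-score-notify flow Rebecca walks through; the severity scale, threshold, and the classify and notify hooks are invented for illustration rather than drawn from the actual deployment.

```python
# Adverse-event pipeline: classify each inbound message, keep an audit log
# for the compliance dashboard, and notify on sufficiently severe hits.
SEVERITY_THRESHOLD = 3  # illustrative cutoff on an assumed 0-10 scale

def handle_message(message: str, classify, notify_compliance, audit_log):
    """classify(text) -> {"adverse_event": bool, "severity": int, "summary": str}
    (e.g. a fine-tuned classifier or a prompted LLM behind this interface)."""
    result = classify(message)
    audit_log.append({"message": message, **result})  # every message is logged
    if result["adverse_event"] and result["severity"] >= SEVERITY_THRESHOLD:
        # Fan out to the compliance team; a real system might also prepare
        # the regulatory filing Chris mentioned earlier.
        notify_compliance(result)
    return result
```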

 

188

00:38:28.683 –> 00:38:31.950

Rebecca Clyde: Let’s see, we have another question here.

 

189

00:38:33.760 –> 00:38:34.970

Rebecca Clyde: About

 

190

00:38:36.410 –> 00:38:44.930

Rebecca Clyde: how leaders are thinking about the role of AI agents: are they replacing staff tasks or augmenting them in a more strategic way? What do you think, Chris?

 

191

00:38:48.550 –> 00:38:53.249

Chris Maeda: It probably depends on the business. I mean, I

 

192

00:38:53.370 –> 00:38:57.080

Chris Maeda: always assume that when you can automate

 

193

00:38:57.870 –> 00:39:15.079

Chris Maeda: these labor-intensive steps, that frees your humans to work on higher-value-added things. If someone knows your business really well, you wouldn't want to lose that person. You want to let them do things that make you more money, and allow the AIs to do

 

194

00:39:15.360 –> 00:39:17.989

Chris Maeda: the simple things that can be automated.

 

195

00:39:19.513 –> 00:39:21.080

Rebecca Clyde: Yeah, I think that’s the general way.

 

196

00:39:21.080 –> 00:39:22.260

Chris Maeda: Every business is different.

 

197

00:39:22.450 –> 00:39:28.326

Rebecca Clyde: Yeah, exactly. So really, it's up to you and your organization how you want to use it.

 

198

00:39:30.020 –> 00:39:40.521

Rebecca Clyde: Another question just came in: how are government policies keeping up with the use of AI? What do you think, Chris? These are more existential questions about AI. I love it.

 

199

00:39:41.236 –> 00:39:46.280

Chris Maeda: I think so. I think the latest...

 

200

00:39:47.040 –> 00:39:50.479

Chris Maeda: I mean, the latest stuff out of the US government is,

 

201

00:39:50.860 –> 00:39:58.100

Chris Maeda: they seem to realize that the field is moving a little bit too fast for them to really understand how to regulate it. So,

 

202

00:39:59.650 –> 00:40:00.630

Chris Maeda: you know,

 

203

00:40:00.990 –> 00:40:14.480

Chris Maeda: I think we're still waiting to see what the regulations are going to be. The good thing is that the federal regulations are going to preempt all the state regulations and the like. In the United States, having 50 sets of state regulations on AI was going to be very,

 

204

00:40:15.280 –> 00:40:16.160

Chris Maeda: I think.

 

205

00:40:16.160 –> 00:40:16.510

Rebecca Clyde: Yeah.

 

206

00:40:16.860 –> 00:40:19.260

Chris Maeda: Anti-competitive for the United States.

 

207

00:40:19.590 –> 00:40:44.809

Chris Maeda: I think the other thing is that existing data privacy laws and things like that also apply to AI. As for whether those kinds of regulations need to be updated to take the AI technology into account: in some cases, maybe they do; in some cases, maybe the existing regulations are perfectly adequate to deal with this new technology.

 

208

00:40:45.778 –> 00:40:47.331

Rebecca Clyde: Great, yeah, I would

 

209

00:40:47.880 –> 00:40:58.639

Rebecca Clyde: say spot on. Thank you, Chris. All right, we're coming to the end of our session together, so thank you, everyone. If you have any burning questions, now's the time, as we start to wrap up.

 

210

00:40:58.760 –> 00:41:09.709

Rebecca Clyde: Where do you think all of this is headed, Chris? What are some of the next-evolution advancements that you're seeing when it comes to AI agents?

 

211

00:41:12.500 –> 00:41:36.699

Chris Maeda: I think a lot of labor-intensive stuff all over the economy is about to get automated, and I think that's going to drive big improvements in productivity. It's hard to say what those things are going to be, because the economy is so big and so diverse. But I think, to

 

212

00:41:38.170 –> 00:41:40.449

Chris Maeda: look at the commonality, it's going to be:

 

213

00:41:40.880 –> 00:41:51.010

Chris Maeda: anytime you're doing something labor-intensive with computers or websites or data, the AI and agentic technology should be able to automate that and,

 

214

00:41:51.590 –> 00:41:53.259

Chris Maeda: hopefully, make it easier.

 

215

00:41:54.100 –> 00:42:17.549

Rebecca Clyde: Yeah, I think about this idea of networks of agents, too, that are going to be collaborating and working together. My background was in marketing, and I remember there was always a very distinct split between marketing, sales, and customer support, because organizationally most companies are set up that way.

 

216

00:42:17.550 –> 00:42:46.329

Rebecca Clyde: Well, I think in an agentic world those don't have to be separate functions anymore, because these agents can, in a sense, collaborate much more quickly. Imagine that you have a chat interaction going with a customer that was maybe educational, so think marketing. But then, all of a sudden, they convert into a sales opportunity right then and there, and then maybe they even start using the product, perhaps through a demo, and now they have a support question.

 

217

00:42:46.330 –> 00:43:05.610

Rebecca Clyde: So those three things that used to be distinct functions now get blurred into agents that can support all of those interactions seamlessly and more fluidly. I like this idea of being more consumer-centric and creating experiences that cross all these different silos.

 

218

00:43:06.070 –> 00:43:07.869

Chris Maeda: Yeah, I mean, part of that was that

 

219

00:43:08.640 –> 00:43:18.179

Chris Maeda: these functions had different touch points. You had an 800 number for support, you had an 800 number for sales, and marketing had their own

 

220

00:43:18.550 –> 00:43:20.462

Chris Maeda: emails and stuff

 

221

00:43:21.370 –> 00:43:29.119

Chris Maeda: With chatbots, you can't get away with that anymore. You've got one interface, which is the interface to the company, and

 

222

00:43:29.310 –> 00:43:40.250

Chris Maeda: it has to orchestrate all the different business functions based on what the customer wants. So I think the fact that you could have these silos is an

 

223

00:43:40.480 –> 00:43:48.200

Chris Maeda: artifact of the old interaction technology, and the new interaction technology doesn't allow you to have these silos anymore.

 

224

00:43:49.590 –> 00:44:06.982

Rebecca Clyde: Right. And so I think, as organizations, the real challenge will be: how do we operate in a world where we have to think more customer-centrically and create systems that provide that experience in a much more fluid fashion?

 

225

00:44:07.330 –> 00:44:13.459

Rebecca Clyde: I've seen that with our chatbots. It's like, I don't want to hear your marketing message until you solve my support problem. Oh,

 

226

00:44:13.460 –> 00:44:38.439

Rebecca Clyde: yeah. And then we even get interactions where, with these patient-facing systems, all of a sudden employees start using them. So now you have to think about how they also handle employee questions about payroll or vacation time. The agent needs to know: oh, I've switched over, this person is on an HR topic; let me access the HR system and the HR

 

227

00:44:38.440 –> 00:44:43.680

Rebecca Clyde: database, authenticate this person, and then give them the information. So,

 

228

00:44:43.900 –> 00:44:54.631

Rebecca Clyde: yeah, it changes things. Whereas before, remember, it used to be: oh, now you have to hang up and call a different 800 number, or let me transfer you, and then they would never transfer you, and you'd have to call again.
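
A minimal sketch of that topic hand-off might look like the following; the intent labels, the authentication check, and the backend map are all assumptions, since the point is only the shape of the routing.

```python
# Route each message by detected topic, requiring authentication before the
# agent will touch the HR backend.
def route(message: str, user: dict, detect_intent, backends):
    """detect_intent(text) -> a label such as "patient" or "hr";
    backends maps each label to a handler(user, message) -> str."""
    intent = detect_intent(message)
    if intent == "hr" and not user.get("authenticated_employee"):
        return "Please verify your employee ID before I can discuss HR topics."
    return backends[intent](user, message)
```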

 

229

00:44:54.870 –> 00:44:57.635

Chris Maeda: Yep, "write down that number in case we're disconnected."

 

230

00:44:57.890 –> 00:45:04.190

Rebecca Clyde: They're definitely going to disconnect you. They always did. All right.

 

231

00:45:05.000 –> 00:45:21.809

Rebecca Clyde: So, to wrap up: we always like to make sure you were able to use your time wisely with us today, and we've shared a couple of key takeaways. Feel free to grab this as a screenshot if it's helpful for you in any way to reference

 

232

00:45:21.810 –> 00:45:35.739

Rebecca Clyde: later. Thank you for spending some time today with us. We talked about how it's not just about getting answers; it's also about organizing the organizational and technical architecture to deliver an experience.

 

233

00:45:35.790 –> 00:46:03.800

Rebecca Clyde: We talked about testing being really important, and not being something that you just do at the beginning when you're getting ready to launch: you should provide ongoing testing and updating of your system, because, as we mentioned, this is a dynamic environment. The models are always changing, your content is changing, so it's important to keep that as part of your discipline throughout the life of your AI experience.

 

234

00:46:05.280 –> 00:46:32.729

Rebecca Clyde: Let's not think of compliance as an afterthought, but actually use it to our advantage: let's use AI agents to help us be more compliant and tackle some of those compliance challenges that are sometimes such a big headache. And then finally, just think about using AI agents across all the different workflows where it could make sense, whether it's making a customer experience more fluid

 

235

00:46:32.780 –> 00:46:53.609

Rebecca Clyde: or more streamlined, or maybe bringing together some of those silos that existed before, reducing friction from any kind of process, whether it's a customer engagement process, an onboarding process, or an intake process. Those are some places where we see a lot of opportunity for agents.

 

236

00:46:53.820 –> 00:47:01.720

Rebecca Clyde: All right, I don't see any more questions. Thank you, everyone, for joining us; those were some good ones. Oh, yeah. And, Chris, what are your closing thoughts?

 

237

00:47:01.720 –> 00:47:24.640

Chris Maeda: Just a closing thought: I think we're in the early days of imagining what the agents can do. We've talked about some of the scenarios that we've worked through for our customers, and if you've got other scenarios that you're interested in, we're very interested in talking to you. That's what keeps us up at night, because it's so exciting.

 

238

00:47:25.240 –> 00:47:43.800

Rebecca Clyde: It is. Yeah, thank you, Chris. We love working at the forefront of what's possible. And, just as a little plug, we're actually working with a big academic medical center that will be testing how well AI helps with clinical trial dissemination.

 

239

00:47:43.800 –> 00:48:00.180

Rebecca Clyde: So we will have some clinical results soon, in the next year or so, once the data starts to show up. These are some of the things that we love to partner on, so it's not just us saying it makes things better, but also having some actual proof,

 

240

00:48:00.180 –> 00:48:04.009

Rebecca Clyde: some scientific evidence, that it is also improving

 

241

00:48:04.110 –> 00:48:24.339

Rebecca Clyde: both clinical trial participation and retention, and ultimately outcomes. So if you have any of those kinds of opportunities, please reach out to us; we would love to hear from you. And thank you again for joining us today. Oh, I need to share the QR code. We do have a QR code that you can scan

 

242

00:48:24.480 –> 00:48:50.769

Rebecca Clyde: to schedule a free consultation with us. Bring any ideas you have forward and brainstorm; we love doing that, and if necessary, we can bring our PhD team online as well to help you think through some of these different ideas. All right, everyone, with that, have a great rest of your day. Thank you for joining us here today, and we hope to hear from you soon. Take care, everyone, and thank you, Chris, for your time today.

 

243

00:48:51.210 –> 00:48:51.929

Chris Maeda: Thank you.