Matt Lewis: Yep. Inizio is a purpose-built communications and consulting firm that works with life sciences organizations to help them commercialize novel science and bring it to market. That includes all the groups out there in the ecosystem that have medical science: pharmaceutical companies, biotechnology groups, medical device companies, and those offering digital therapeutics as a medical device, solutions that help improve the way people manage their health conditions and improve their lives.
We partner with them when they have science coming out of clinical research studies, clinical trials, and the like, to determine the value of that science and then ensure it can be communicated to different stakeholders, like doctors, patients, researchers, the government, insurers, and other groups such as employers, so that they can make decisions from that information to hopefully improve their lives.
The role that I'm in, which is focused on artificial intelligence, is really to help the people who are making the decisions, interpreting the evidence, and working with all the science to better understand the information they're working with, so that they can determine what's of interest and what really doesn't matter, work with the other humans in our environment in a more effective way, and really speed time to decision. Because there's so much medical science, so much new information coming out every day, it can sometimes be challenging to know where the signal is and where the noise is.
So AI is one of many tools that can be used to separate the wheat from the chaff, if you will, and help people make better decisions.
And in terms of the team, our group is about 3,000 people, and I work with most of them within and across our organization: our data science team, our product team, our teams that do consulting and analytics, as well as all our teams that are doing medical writing and supporting our clients across every aspect of the continuum. And that's from a global perspective, in Europe, in the US, and in other regions as well, which means I have some long days sometimes, but it's good work.
Felicia Shakiba: How has your role evolved and changed since you first assumed the position? Perhaps you could share why this position was created. What was the goal? And how has it evolved?
Matt Lewis: Sure. So I've been in the role for just about eight months now. Before this, I was Global Chief Data and Analytics Officer; I was in that role for about six years after starting our data analytics division back in 2016. My boss came to me earlier this year and asked me to take on this role as Head of AI. At the time, a number of our clients and organizations around the ecosystem were recognizing the value that artificial intelligence could contribute to their work.
But there really hadn't been a dedicated focus on going all in, if you will, with AI. The consideration was that having a real emphasis there might be helpful, both in deciding and helping to contribute to what the standards could be, what the recommended best practices or leading thinking in the space might be, and how to actually operationalize AI within guardrails, frameworks, and ways of working...
as well as thinking about how teams could approach things like upskilling and reskilling their staff, what competencies and areas of importance might be necessary as we evolve into the future, and how we learn what we need to do before actually implementing it across our internal organization and the organizations with whom we partner.
So when I first started back in the spring, it was kind of a hodgepodge of lots of different considerations. Over the last couple of months, I've spent a lot of time with other organizations, professional societies, and groups that are trying to help define what the gold standard of the space looks like, like professional medical organizations, as well as with others that are innovating and experimenting to determine: if you use a particular piece of tech, and you use it in a way that's a little different from the way we've always used it, how does it work, and how do we think about the way we work as a result, as an aspect of the overall implementation?
And that's really important, because the way that we think about AI is around something we call augmented intelligence: AI only works when the humans working alongside it are able to make better decisions, or it enhances their ability to think in a more helpful way and to operate. So it's not meant to replace people, to shift people aside, or to take their jobs, so to speak, but rather to allow us to be more effective, more engaging, and more helpful in the work that we do.
So if people react in a way that's not what you expect, that's something we can learn from, and a lot of my work is around helping to explicate, or make people aware of, what they're learning, so that we can grow from that and hopefully contribute more meaningfully to the environment, if you will.
Felicia Shakiba: It sounds like the work has exponential opportunities, if you will, in what you're doing not just for your own business, but for the partnerships that you have.
Could you take what you've shared, and maybe think about one or two examples of the impact your work is having on your organization's approach to AI?
Matt Lewis (07:14): Sure, yeah, let me just share an example of one of the pilot projects that we have ongoing within the organization. We're probably running over 300 individual pilots across the organization of about 3,000 people. And when I say pilots, I mean there are different ways of experimenting with AI.
One way could be that someone in the business recognizes that they could potentially do something faster, or perhaps more effectively, by using something they're aware of, like, you know, ChatGPT or Bard or Pi, or something else, and they just want to try it out.
And that's great. We want to encourage that type of curiosity and see how it goes. If it works well, then we can tell others about it; if it doesn't go well, then we can learn why that's the case and tell people not to do that, because we don't want 50 or 100 or 600 people doing the same bad thing if it didn't work the first time. So we have a lot of that kind of organic experimentation going on.
But what we also do is look at the strategic drivers of our organization, the things that really create a lot of value within and across the value chain for the company. And we ask, of those things, where are the real pain points we could solve if we were to introduce a solution that is purpose-built, if you will, and deliberately considered by the organization. And one of those, within the work we do, happens to deal with published research.
So when scientific data gets to its final point and it's out there for the world to see, it gets into a scientific journal, and then doctors and payers and governments and others can see it in its final resting form, essentially as a paper. Teams then get access to the paper and use it as the reference for all sorts of different materials that organizations use, like slide decks and presentations at medical meetings. For that, you need the actual paper to be understood
and the key points to be summarized, and that takes a lot of time, a lot of human time. The traditional way of doing it is very manual, very routine, and it's not the most fun thing in the world. But it's essential, because if you don't have that evidence from the study, then you can't really build a narrative; you can't build a story.
Matt Lewis: So we've taken a new approach to some of that, one that brings more of an artificial intelligence consideration to it, where we apply machine learning, deep learning, NLP, and generative AI. It takes away some of the drudgery, and it makes the work a little bit different from how people would have approached it.
And the AI suggests a way of viewing that same evidence that is very different from how people would have viewed it. It's not better, necessarily; it's just different from how people would have approached the same task. When we first started, the teams we talked to were like, 'What is this? This is not how I would have done this if I was given this task.' And they had to almost relearn how to do the same type of thing, because the way people have done it (I've been doing this for 26 years) is a very straightforward approach.
You're trained how to do this as a medical person for your whole career, and you only approach it that way. When the AI gets involved, it doesn't approach it that way at all; it approaches it in a completely unexpected way. And again, it's not wrong, it's just completely different.
And the teams had to almost relearn how to approach the same task by incorporating the AI's perspective into their work stream. When you do it that way, the task gets done two to three times faster, with about the same level of quality as when it's only human-led, if you will.
And it's really about opening up your mindset, opening up your consideration of what's possible: not thinking that the only way to do it is the way you've always done it, but also thinking of other possible considerations, other paths of possibility, other adjacencies.
And when you think that way, it opens up lots of choices, potentially, that the business, and in this case the science, has to offer. Because the AI, while it's trained on a lot of human data and a lot of other content that allows it to exist, doesn't process or analyze that content the same way that people do.
And as a result, it comes up with suggestions and recommendations and offers ways to progress forward that are often quite different from what we expect, and that creates a lot of value for us to offer, both internally and to our teams.
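To make that concrete, here is a minimal sketch of what AI-assisted key-point extraction from a paper could look like. It is illustrative only, assuming an OpenAI-style chat API; the model name, prompt, and section-by-section splitting are hypothetical, not a description of Inizio's actual pipeline.

```python
# Minimal illustrative sketch: draft slide-ready key points from each section
# of a published paper with an LLM, keeping a human reviewer in the loop.
# The client, model name, and prompt are assumptions, not Inizio's pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_section(section_text: str) -> str:
    """Ask the model for concise, evidence-grounded bullets for one section."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Summarize medical journal text into concise bullet "
                        "points suitable for a slide deck. Do not add claims "
                        "that are not present in the text."},
            {"role": "user", "content": section_text},
        ],
    )
    return response.choices[0].message.content

def summarize_paper(sections: dict[str, str]) -> dict[str, str]:
    """Summarize each section (e.g. Methods, Results) independently so a
    medical writer can verify every bullet against its source section."""
    return {name: summarize_section(text) for name, text in sections.items()}
```

The human verification step is the point of the augmented-intelligence framing: the model drafts the bullets, and the medical writer checks each one against the paper, which is where a speedup without a loss of quality would have to come from.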
Felicia Shakiba: And how long is a typical paper, or what's the range, would you say?
Matt Lewis (11:55): Yeah, so a paper is typically about 16 pages, and it has a lot of very heavy medical jargon and text in it. It might have, you know, between 100 and 200 references or citations in the back. And within the type of work that we do, where we're taking a lot of those papers and putting them into final formats, like slides or other deliverables that teams utilize, you might see a lot of those things included as part of the actual asset that someone engages with.
The teams are very into their work, day in, day out. They're used to it and expect it to be done a certain way, so when they see it done in this new fashion, if you will, it almost feels wrong; it doesn't feel like it can possibly be done any other way than the way it's always been done. So a lot of our training, a lot of our skill and competency development, is on what might be called lateral thinking, or, you know, an aspect of systems thinking.
And it's not so much about teaching or training people to be coders or digital experts, if you will; we don't need more people to be data scientists. It's about how people think, how they recognize patterns and adjacencies, and how they come to see that what's unexpected isn't necessarily wrong, but is an opportunity to consider and reflect on what might be.
And that's not necessarily a default pattern; for many people, it's usually the opposite. People are trained to think that the things that happen routinely and often are what should be, what is to be expected, and what should usually be done. So we're almost having to retrain a lot of established professionals to think more broadly about how to approach that work, so they can think a little more from a growth mindset, if you will.
Felicia Shakiba: I just want to recap what you shared. So you're taking a large set of jargon-heavy, qualitative medical data, quickly digesting it, and making the output more suitable to present to various audiences, so they could in turn leverage this qualitative data in a way that makes more sense for whichever audience is being presented to, right?
And then, in order to do so, for the humans working with this type of technology, you're saying that lateral thinking is a competency that perhaps needs to be developed.
Could you dig in a little bit more on that? I know you've already shared how you might do it, but what type of training has worked? Have you experimented with that type of competency development yet?
Matt Lewis (15:01): Yeah, it's still early days, but we have done a little bit of that. We have a group of individuals within and across our organization who are essentially early adopters. They're people who have raised their hand and indicated that they want to be deeply engaged in the tech, they want to be involved in pilots; they're already doing a lot of this, but they haven't been officially recognized as such.
And we've raised them up from within the groups where they sit and designated them as champions, really, as early adopters, so that they can participate in pilot projects, get involved in trainings, and get involved in some of the early content we're building, to see if it passes the test, so to speak, and really rings true for them and their audiences.
And because we are a global company, we have people on the West Coast, who have different expectations based on where they are geographically, people here in New York, where I am, people in the United Kingdom, and people in the Asia-Pacific region.
And it's important that we reflect both the geographic and cultural differences across the organization. So this champions group is made up of people who represent different cultures, different regions, and different time zones, and who serve different client organizations as well, so hopefully some of that comes into the mix as we're helping them go through it.
We've tried to think about this initial bolus of training as more of a catalyst, to help us think about what might be needed when we actually have to go out and train the full organization in '24 and the years that follow, rather than asking how we can effectively train the smaller group now and then just be done with it, as it were. That's really not our thinking.
It's more about how we learn from them to think about what will really be our remit in the years after. And so there's definitely a content piece to that, in terms of the things and competencies they need to learn.
We've done a bit of a needs assessment to understand, from a role and task perspective, what would be helpful for them to up-level and develop further as they progress along their maturity curves, and to build some of that into their roles.
But we've also started thinking, from an affective perspective, about what they might need to be exposed to, both as managers and responsible subject matter experts in the business, and when they come into contact with their clients, their customers, and other people within the group, like their direct reports and their managers, who may have a less expert familiarity with the tech. They might get asked questions that are a bit uncomfortable, or that happen to be a little emotional, if you will.
And how do they prepare for those conversations, which are less about the content and more about some of the affective or emotional qualities people ask questions about? 'Is this something that is likely to take my job?' Or, you know, 'What does it mean for AI to have a role as a colleague versus a human as a colleague?' and things of that nature. So we're trying to give people a little more preparation around things like sensitivity training: how to be comfortable in ambiguity, and how to work through situations where it's not so much about what they know, but about how they express themselves in a situation that is highly variable and still progressing forward.
So the bigger picture around strategic enablement and training and all the rest definitely has a competency core to it, but we're also making sure that the cognitive piece is complemented by an affective component.
Felicia Shakiba: I think that there's so much to do around the soft skills and understanding how to work with AI technology.
In that realm, looking into the future and as a leader, how do you envision the role of Chief AI Officer evolving as AI technology continues to advance?
Matt Lewis: Sure. I know there's been a lot of attention on the role this year, and there will be continued discussion as the years progress. I've seen my own role evolve somewhat significantly over the last eight months. When I first started talking to folks about what I did back in the spring, I had maybe two or three work streams, around education, internal and external.
I had this experimentation work stream, and then a bit around actually building generative models and deploying them in the field, that is, in cloud environments, so that folks could understand how their local environment could be influenced by something that was bespoke to them.
And then as the summer progressed into the fall, it started growing, and now I have five or maybe six work streams. It's getting more complex, but hopefully also having more of an impact on the enterprise. Into next year, and in the years following, I hope the role will become less of a figurehead, less of a recognition that the organization feels it needs to initiate, if you will, and more of a strategic catalyst for the business to begin to transform.
Because I think we're starting to see, as governments both in this country and abroad begin to stand up policies, laws, and regulation around artificial intelligence, that a lot of organizations are starting to say the way in which they work, and the products, services, and solutions they offer, will likely be transformed through a lens of artificial intelligence, and the Chief AI Officer can help kickstart or initiate a lot of those conversations.
It's not possible, and I don't think it should be desired, that I or anyone in that type of role could be in all those conversations, or could even know what all those conversations are about or where they're taking place. It wouldn't be possible to be everywhere all at once, so to speak. But I think if the role can evolve into being a thought starter, someone who helps initiate what's possible, then it will be really useful for the enterprises and entities that are so aligned.
Felicia Shakiba: Matt, how do you create a successful project within the business? How do you pilot that?
Matt Lewis (21:08): Yeah, it's a great question. I actually had a post on this on LinkedIn just yesterday, where I suggested that a lot of people are throwing generative AI at all the wrong things, which, you know, is a natural thing for many people to do, because it's out there, and it's robust, and it's sexy, and it's so cool.
'I have this neat technology, let me see how it works on X.' But in the enterprise, in a large organization, or in any organization, that's probably not the best thing to do, because in using generative AI on anything, there's an opportunity cost, first of all, of your time, of doing something else.
And you also have to think about the fact that if the project doesn't go well, you're creating negative perceptions of the technology that are then out in the water, if you will, for everyone to parse and interpret. People may think it's not as robust as it could be, and that will forever color their perceptions moving forward.
Everything does have an implication, and we need to be mindful of the choices we make, whether they're aligned with the right objective, and what the consequences might be.
In our environment, and the ones we support, it's a very pragmatic consideration: everything starts with the outcomes at the forefront. So we're really beginning with the end in mind, starting by thinking about what KPIs or strategic imperatives the organization is looking to accomplish first.
You have to be able to quickly describe why this matters to the business, and that could be different for every company. It could be that they really want to improve efficiency or productivity, or show that something is more effective; or it could be that they're bleeding talent to one or three direct competitors, and they want to focus much more on engagement, on keeping people engaged and enjoying the work that they do.
But if they can't describe what the KPIs or strategic drivers of the business are, then they should probably spend a little more time thinking about those aspects first, before they come to the table thinking about experiments and projects using generative AI. Generative AI extends, enhances, amplifies, and complements the ability of humans to do our work better; it can't replace that work, and it can't create the work.
But once you identify what a driver should be, the first thing is a good understanding of what a reasonable expectation could be. Hopefully, if you're with a team that knows their work well, they can put together a baseline that says: if we didn't have AI, the way we would normally do this would look like this. Step one would be as follows, step two is here, step three is there, and maybe it goes all the way to step eight or 10 or 40, whatever it is. And in a normal environment, without AI, we could expect to achieve a 20% savings, or a 40% improvement, or 90%, whatever the margin is.
With AI, when we juxtapose that on top, we expect those numbers to look like this. And then there's a reasonable follow-up over the course of the pilot project to see whether that's possible, and some training for all involved prior to implementation, during implementation, and after, to see what actually worked.
It might sound simple, and it gets a little messy in the details sometimes, but it really should be that pragmatic: we align on the KPI, we test against a baseline, we understand whether it's delivering against the KPI, and if it meets that expectation, our plan is to scale it across the organization.
If it works in one team, we scale across many. If it doesn't work, then we try to learn why it didn't work. Perhaps a team found that there was one aspect of the platform they really hated, that was just distasteful to them, and they only used a small fraction of what was possible. They didn't use the whole thing, and as a result, the pilot didn't work.
So we had to go back to the drawing board and figure out how to make it work for everyone. Or maybe it just didn't work at all: a lot of these things sell a great story, but when it comes time to implement them, they fall flat in the real world. So we ended up deprecating those and killing them. But I think it's about exercising a measure of discipline on top of some good business sense, and then seeing what works.
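As a rough illustration of the pilot gate described above (align on a KPI, baseline the human-only process, measure the AI-assisted pilot, and scale only if it meets the pre-agreed expectation), here is a minimal sketch; the KPI, numbers, and threshold are all hypothetical, not Inizio's actual metrics.

```python
# Minimal sketch of the pilot gate: measure a human-only baseline, measure
# the AI-assisted pilot on the same KPI, and scale only if the improvement
# meets the expectation agreed before the pilot. Numbers are illustrative.
from dataclasses import dataclass

@dataclass
class PilotResult:
    kpi: str                     # e.g. "hours to summarize one paper"
    baseline: float              # human-only measurement, taken pre-pilot
    with_ai: float               # same KPI measured during the pilot
    expected_improvement: float  # e.g. 0.40 means "we expected 40% better"

    def improvement(self) -> float:
        """Fractional improvement over baseline (lower KPI is better here)."""
        return (self.baseline - self.with_ai) / self.baseline

    def should_scale(self) -> bool:
        """Scale across the organization only if the pilot met expectations."""
        return self.improvement() >= self.expected_improvement

# Example: summarization took 6 hours per paper before, 2.5 hours with AI.
pilot = PilotResult("hours per paper", baseline=6.0, with_ai=2.5,
                    expected_improvement=0.40)
print(f"improvement: {pilot.improvement():.0%}, scale: {pilot.should_scale()}")
# improvement: 58%, scale: True
```

The discipline lives in agreeing on expected_improvement before the pilot starts, so the scale-or-kill decision afterwards is mechanical rather than a matter of enthusiasm.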
Felicia Shakiba: Absolutely. I agree. Matt, I think we have certainly started a conversation that we will have to continue, and I'm excited to see how your role evolves over the next few months. Thank you for being here.