Summary
- Explores human-centered AI and data strategies in modern organizations.
- Introduces the five Cs: competence, collaboration, communication, creativity, and conscience.
- Addresses AI’s impact on jobs, society, and ethical responsibility.
- Highlights balancing AI with human roles and proactive regulation.
Tiankai, welcome to Data Explored. Thank you to everyone listening in and watching. I'm the chief analyst here, and I tell the story of the data intelligence platform we are engineering. But we will not be talking about that in this webinar; this is not about our technology at all.
This series, Data Explored, is really about exploring new trends and ideas happening in the data and AI community, and we do that by inviting guests: authors, practitioners, thought leaders. Today I have with me Tiankai Feng. Tiankai, you are a data and AI strategy director with ThoughtWorks.
Do you care to share a little about yourself? Then I'll go more into the topic at hand.
Yeah, absolutely. It's great to be here. A quick self-introduction: I work, as you mentioned, as a director for data and AI strategy at ThoughtWorks, which is a consultancy focused on bespoke data and tech solutions. I'm very passionate about the human side of data and AI, which is why I wrote two books about it, called Humanizing Data Strategy and Humanizing AI Strategy, topics that we will definitely talk about in this webinar today.
In the meantime, I'm also a big fan of trying to make data and AI more fun and approachable, so I use methods like my musical talents, jokes, and memes to make the topic less scary, less intimidating, and more inclusive and approachable, so everyone can join the conversation.
Thank you, Tiankai.
Absolutely. I enjoy your music a lot, and obviously your books too. The reason I wanted to do this webinar with you is that, like yourself, I've been following everything that's going on in the evolution of AI. We can see a lot of people who have concerns, who are even scared, and everyone is wondering: will it replace my job? Will it take away the tasks I'm performing? Will society at large become more difficult, more scary? What will happen? As I see it, a lot of negative thoughts, but also very understandable ones.
However, reading your second book, Humanizing AI Strategy (let me just hold it up in front of my face), I couldn't help but think that it flips this perspective a little. I saw a lot of positivity and a lot of potential in this book, and that's really what I want to unfold in this conversation with you: the potential for humans to take the initiative and define what we want to do with AI, instead of letting AI dictate the evolution of society while we just feel like bystanders to something inevitable. I feel that myself, obviously, but I think we can also take a moment and try to turn the perspective around. I have prepared a lot of questions, but before we dive into those, let me read a short passage from the conclusion. And it's not a plot spoiler.
I promise. So it goes like this: "Which decisions must stay human, no matter how advanced AI becomes? I'm not talking about regulatory requirements or technical limitations. I'm referring to the choices that shape who we are as a species: the decision to bring a new life into the world, the choice to end or extend care for someone we love, the judgment of guilt or innocence when freedom is at stake, the spark of inspiration that creates something genuinely new. These aren't just tasks to be optimized; they're expressions of humanity itself."
You write really well, Tiankai. This is a great passage.
Thank you.
No, but you do. You really do; I admire it. Now, that is the closing passage of your book, but in this conversation I'm using it as an opening, to ask you the question: what motivated you to write this book? Why Humanizing AI Strategy?
Absolutely.
So, on the one hand, there is the practical reason. When I wrote my first book, Humanizing Data Strategy, there were already a lot of thoughts about what the link to the AI side of things is, because, as we all know, data and AI are very closely connected. Instead of trying to force it all into the first book and diverging too much from its core, which was data management and data strategy, I basically said, okay, let me put those thoughts aside; this might become a second book. And as it turns out, it did become a second book. That's why it also feels like a sequel to the first one.
But the bigger reason why I thought it was the right time, and why I wanted to do it so quickly (I released it basically one year after my first book, so I really accelerated the process), is that I wanted to make sure we don't repeat the mistakes of the past, where we weren't human-centered, by doing AI just following the technology hype and not thinking about what it actually means for human society and for the human species as a whole.
Yes, a lot of things are new, and the technological advancements are bigger than ever before, but at the same time, it's not the first time we've dealt with disruptive technologies. Many times in the past we had to learn the hard way, going too far in one direction, then too far in the other, until we found the middle. My hope is that with this book, at least with AI technology, we can be a bit more intentional and find that balanced middle quicker, without having to learn the hard way again.
There's one more thing I mention in the book: because AI is trained on human output data, on how human beings have behaved and what they have said or done in the past, if you don't curate it well, it also learns from the bad things we did as humanity. And the further we go back in time, the more bad things we find that we only now recognize as bad; back then, we didn't see them that way.
So if we let AI learn from all of it without any judgment of what is good or bad, it might amplify the mistakes too. And it does so quicker and at scale, which also means that some of the amplified mistakes might not be recoverable. So we have to be really cautious: let AI learn from the things we judge as good actions and good outputs as human beings, and try to avoid the bad things. Then it actually becomes what we want it to be, an opportunity and an empowering mechanism, and not something that might kill us in the future, Terminator style, for example. That's the logic and the motivation behind why I wrote this book.
Well, thank you. That makes a lot of sense. In your first book, you developed the five C framework, and we'll talk more about it in a couple of questions, but I want to know already now: what is the five C framework? What was it originally developed for?
Yeah, so the five Cs basically stand for five aspects of what I think human centricity means within a data or AI strategy. They stand for competence, collaboration, communication, creativity, and conscience. Those are traits that we as human beings have anyway, that we like to nurture and apply; I just think we don't think about them often enough at work, and specifically not in the data and AI space.
In the first book, for data strategy, the framework was meant as a way to make data more valuable. That meant addressing people's needs in those five aspects, so that people use data more, and more collaboratively, and generate more value with it. In the context of AI, it has a slightly different focus, because it's not about using AI more; AI is arguably already everywhere, and everyone is already using it to some extent. It's about using it more intentionally and more wisely. So the five Cs got a different spin in the second book.
Also, machines are now able to do, or at least imitate, the traits I mentioned. Communication, for example, is not only human-to-human anymore; human-to-machine, machine-to-machine, and machine-to-human communication all play a role now. That's one example of how the five Cs are looked at in a different light in the AI space.
Yes, absolutely. And we can dive into the technical details of that if you feel like it.
Sure.
But first, on page 51 you have this set of hats you can wear. I would like to go through some of these roles, these hats you can wear as an employee, and try to unfold how we can use AI intentionally when we have each of them on. You discuss it in the book, but I'd like to unfold it here.
In case you don't remember, the first one is the therapist. You mention a situation where there could be resistance to an AI tool rollout, and where the objective would be to build emotional safety and reduce fear by applying empathy and change management. So can you unfold that a little bit?
What would that actually look like?
Exactly. Maybe just to go one step back on why I introduced these hats: when people work in the AI space nowadays, they have to deal with a lot of different aspects of it, and it often gets to the point where we are cognitively overwhelmed, because we are trying to achieve multiple objectives at the same time, and those can be in conflict. A simple example: you want to enforce compliance objectives, but you also want to innovate. If the same person tries to do both, it can be very difficult; you are trying to identify guardrails and, at the same time, identify opportunities, and for one person that's very tricky.
So I introduce these hats as a way to think: if you wear one hat right now, wear it consciously, with a clear goal, and follow that goal until you take that hat off and put the next one on, because in a different situation you might need to act in a different role, with that specific hat on.
The therapist, for me, is the stereotype for when you notice resistance towards the AI transformation: certain people who, no matter what, don't want to use AI. They may feel threatened, they may fear for their jobs, they may not believe in it because they have had really bad experiences with hallucinations, whatever the reason is. Sometimes you need someone who just listens to them and actually understands what they are going through, rather than assuming what their fears and reasons for resistance are. So when you put the therapist hat on, you go in with one big mission: to listen. Try to understand first, before you come up with a solution. Think first about why people are resisting and identify the patterns there, and only then come up with a plan, either in the tool itself or in the communication around the tool, to reduce that resistance in a targeted way, based on what you know the actual reasons are.
Yeah, that makes a lot of sense.
It really makes a lot of sense. Let's take the next one: what about the developer? Can you unfold the developer hat?
Yes, absolutely. The developer, of course, is the typical technical expert who actually implements a first version of a certain AI model in a specific application. The idea is that someone needs to show technically that it's feasible and what its functionality is supposed to be. The skills needed are the technical ones: how to code it, how machine learning actually works, and so on. And that hat stands in contrast to all the others, because as a developer you already assume what the requirements and the intended functionality are. When you have that hat on, you go into flow mode and build the prototype as quickly as possible, so you have a proof of concept for everyone to look at. Very contrary to being a listener and a communicator: with the developer hat on, you just get one specific thing done, so you can showcase it to others and have a basis for discussion with stakeholders, for example.
Let's take a couple more hats.
Sure.
Let's take the visionary. What's the visionary hat about?
Yeah, so the visionary is basically about long-term thinking: not just the short term, but what might be possible in the future and what might be the right thing for the organization to adopt. That might be a group of people who come together and brainstorm continuously about what the future could look like; it might be a council, or a steering board, or something like that. Given what the company stands for and what goals the organization has, they can think about what the future business model could look like and what an AI-empowered version of it could be.
They would then also set long-term goals and targets, potentially: okay, right now, short term, we are optimizing our productivity with incremental changes, but long term we want to be a very efficient, effective, AI-driven organization with all of these opportunities in place, and here is how we are going to act on that. That also means you need people with strategic thinking, who really know how to think about the future and have the skill of foresight: identifying signals and trends and looking ahead.
But again, for me that stands in contrast to getting things done short term. If you have the visionary hat on, you cannot at the same time be thinking about how to finally get the next model running and live tomorrow. You need to switch modes: okay, let me leave all of that behind for now; for the next hour I'm going to be a visionary and think about the bigger picture. That's another one of the stereotypes to think about as a hat.
As the last one, I would also like to take the domain expert, because that's something pretty close to me; I'm very occupied with domain expertise in the technology work that I do. So can you unfold the domain expert hat for us, and then we can drive forward based on that?
Absolutely.
The domain expert is basically the human counterpart to an AI model, because only the domain expert knows what is correct and what is incorrect. And the reason it's a human counterpart, for me, is that a lot of knowledge, as we know, is not actually documented; it's institutional, and it exists only in people's heads and in the experience they have gathered.
The domain expert, in this sense, is the gatekeeper who can judge whether an AI model or an AI agent, whatever we want to call it, is doing the job it's supposed to do, and whether its output is rightly evaluated as a good or a bad outcome. And it's not enough to have that domain expertise as a one-off: as businesses grow, the domain expertise has to grow with them; you need it continuously. That is also why I connect it closely to the human-in-the-loop part of my chapter about competence: you need domain expertise to be the right human at the right moment in the loop, able to make judgments.
The skill that's needed is gained from business expertise in the field you are in. If you are a marketing domain expert, you did a lot of hands-on marketing yourself; if you are a research and development expert, you have done research yourself, and so on. Those are really the only people, I think, who can judge whether AI is going in the right direction or not.
Yeah, very much agree.
And it goes way back, right? There are papers from the seventies about knowledge engineers and the role of the expert in an expert system, the good old articles on that.
Exactly.
And even before that, obviously, domain knowledge has been studied throughout history. But I want to ask a more exploratory question. What feedback are you getting from some of your clients? Obviously you can't name them, but when you are working with clients on their AI strategies, what are their intentions? What do they want to do, and how do you use your book to help them overcome the challenges they have?
Yeah, that's a great question.
I would say, and I do have a slight bias because I'm a consultant, that when clients talk to me, they are immediately in problem mode, because otherwise they wouldn't talk to us; they want a problem solved. And the most common problem I see is that they don't actually have a clear goal, because often the goal is only driven by "we need to do AI because everyone is doing it." That in itself is not a strategy.
So it becomes: everyone else is doing this, we might lose our competitive edge, so let's try to do it too, but we don't know why, how, or where. But we need to do it, otherwise our shareholders might be unhappy, because they expect us to do it, and our customers expect it too, but we don't know how to get started.
That's often where we pick up and build the AI strategy. And it does start with a process view, an organizational view, on things: what is your overall business goal and objective? Say, over the next five years, do you want to grow, so you invest and acquire other companies? Or do you want to become more profitable, so you reduce costs and increase efficiency? Or any other goal you might have.
From there, we slowly break it down into specific ideas for how AI can support that goal: either incrementally, now, like productivity increases for specific personas in the organization, who get an AI assistant and do things a little quicker, which saves some time; or a little more disruptively, if they are capable of that and want it, by reimagining a whole business process that doesn't exist yet, where in the new world we could do it this way, with a lot of agentic behavior from AI agents, and the whole new business process could look like this. How can we get there? Maybe that's a one-year or two-year roadmap, and we need to transform and do change management in that direction.
This is the conversation I usually have: we define the purpose, then we create a plan to fulfill that purpose by setting a target state we want to be in, and then we go step by step towards that target state.
Oh, that's actually very surprising for me to hear. I guess it's because I'm with a technology vendor, so the kinds of questions we get are very specific, pertaining to the offers we have in AI, right?
Right.
We are obviously expanding and accelerating our roadmap in every dimension of AI possible.
Right.
So the questions we get are very specific. For me, it's quite surprising to hear that people so openly admit that they don't actually have a clear intention. Obviously it makes a lot of sense when you say it, and it makes sense that they would reach out to consultants, because they need a strategy. But being so confused about it is, honestly, a little surprising, even if I can totally understand it now that you are saying it. So I guess instead of just feeling disrupted, and "disrupted" is such an overused term, right?
Yeah.
I guess people are feeling disrupted by AI, but instead of fighting against it, you see confusion more as an outcome of the evolution of AI. That makes a lot of sense.
Not that I'm against it... Yeah, I do think the resistance is more of a gap within organizations. It's usually a leadership-versus-people-on-the-ground topic: the senior leaders and decision makers all buy into really wanting to implement AI, and they expect significant improvements and uplifts because of it. But the people on the ground know the reality of it all, and for many different reasons, either personally or from a subject matter expert's point of view, they disagree with what the goal sounds like, that we now implement AI everywhere we can, whether because they just don't see that it's possible or because it doesn't make any sense to them. This, I think, is where resistance comes from: the gap between the top and the bottom, basically, and the disagreement there.
And again, that's probably a more consultancy-heavy lens on it, but with change management, leaders then hope that a consultancy from outside can just deal with it. Leaders don't want to look bad for having forced a change, so let's get a consultancy in so they can be the bad guys and enforce the change. But that doesn't work. Change has to come from within. We can always help and facilitate it, but we are not the ones who can force a change onto another organization.
Yeah, totally. But I also personally see ThoughtWorks as a refined consultancy company; it's not your standard consultancy, it's not generic advice, you have technology at heart, which makes it a little different. But how do you bridge that then, Tiankai? Obviously those conflicts of interest are both very explicit and, I guess, quite firm; it's not something people take lightly. I sense a lot that executive leadership teams want to implement AI, they want to be AI-first, and I could imagine that in many companies, people would resist that ambition quite severely, just as you say.
Absolutely, right.
So, more concretely, how do you use the nuggets in your book, and the way you work, to bridge that gap?
Yeah.
Very specifically, in the communication C of my framework, I have a dedicated part about framing the value of AI. I think this is one of the key things, because it's not only the explicit things that count. Say you announce that with AI we are all going to be 30% more productive from now on. That in itself is a really bold statement to make towards everyone on the ground who is now expected to use it. People who have gone through restructurings and heard about layoffs immediately think: does that mean we lose 30% of the people? Is my job still safe? Even if no one said out loud that we are going to lay off people, by not addressing potential layoffs at all, it becomes an assumption among the people on the ground. And that immediately leads to resistance: why should I support something that might endanger the job I have?
This is why I think communication is so important. When you frame the value of AI, don't only address the good things; also try to mitigate the negative associations and expectations people might have, and address them proactively. Say: we are going to be more productive, and we are not going to lay off people. Or: we are potentially going to lay off people, but only this many. Be a bit more intentional about it, so that at least the expectations of what happens next are clear.
And even when you do want to lay off people, there is a different way of looking at it. You can say: the jobs we all currently have are going to evolve, and some tasks are going to be replaced by AI, so jobs have to change, and we want to take everyone on that journey; but that means everyone needs to be ready to change their skills and grow into the new roles we will have. That in itself is also a statement. So this is where I think resistance can be heavily reduced, just by communicating more honestly, clearly, intentionally, proactively.
Very much agree. Coming to the technical details: I was thinking, is it in the communication chapter or in the creativity chapter that you discuss the role of metadata, and how models are trained on it? Where is that?
That's in the communication chapter.
And that is an addition to the communication C, because you didn't have that one in the first book, right?
Correct. Correct.
So can we unfold that a little bit? What's the technical aspect of that C in the five C framework?
Yeah, absolutely. The reason I brought it in is that we expect generative AI specifically, say chatbots or any kind of AI assistant, to help us by putting all of the data in the right context and giving us the right results when we ask for them. Conversational BI is the trend word we currently have, right? But that also means the system has to know exactly what metric I am asking for, and where to look for exactly that metric, to show it to me.
And this is where it gets tricky when we think about organizations, with different departments having different contextual information around the data. Typically, if you haven't reached agreement on things, we all talk about the same metrics but have different definitions in mind, or we actually mean the same metric but call it by different names. Those are usually the make-or-break issues for these kinds of efforts.
So my point in all of this is: yes, semantics are important, metadata is important, and creating ontologies to provide the right context is important, but it starts with human agreement first. If we as human beings are still contradicting each other and haven't resolved it, we shouldn't be surprised that AI is confused by it too, because we are confused as an organization. The whole point is that we can focus on the technology all we want, but it starts with having conversations with each other, because we cannot afford for AI to amplify the confusion and contradiction we carry as human beings in our institutional knowledge and communication.
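The same-metric, different-names problem described here can be made concrete with a small sketch. This is not from the book, and every name in it (Metric, MetricGlossary, "net_revenue", the synonyms) is hypothetical; it shows one minimal way a shared metric glossary could map each department's vocabulary onto a single agreed definition, and refuse ambiguous terms, before a conversational BI assistant ever resolves a question:

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Metric:
    """One agreed-upon business metric: canonical name, definition, synonyms."""
    name: str
    definition: str
    synonyms: set[str] = field(default_factory=set)


class MetricGlossary:
    """A tiny semantic layer: every known term points at exactly one metric."""

    def __init__(self) -> None:
        self._index: dict[str, Metric] = {}

    def register(self, metric: Metric) -> None:
        # Index the canonical name and all synonyms, case-insensitively.
        for term in {metric.name, *metric.synonyms}:
            key = term.lower()
            # Refuse ambiguous vocabulary: humans must agree before AI can.
            if key in self._index and self._index[key] is not metric:
                raise ValueError(
                    f"'{term}' already maps to '{self._index[key].name}'"
                )
            self._index[key] = metric

    def resolve(self, user_term: str) -> Metric | None:
        """Map whatever the user typed onto the one agreed metric, if any."""
        return self._index.get(user_term.lower())


# Marketing and finance call the same metric by different names.
revenue = Metric(
    name="net_revenue",
    definition="Gross revenue minus returns and discounts, monthly.",
    synonyms={"revenue", "net sales", "turnover"},
)
glossary = MetricGlossary()
glossary.register(revenue)

hit = glossary.resolve("Turnover")
print(hit.name)  # prints "net_revenue"
```

The design choice worth noting is the `ValueError` on conflicting terms: rather than letting two departments silently claim the same word, the glossary forces the disagreement into the open, which is exactly the "human agreement first" point above.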
Yes, very much agree. And we work on exactly that; I think it's a very interesting architecture, because you can push forward quite fast and quite concretely, right now, with AI for exactly those cases. So I would like to ask, and it's a big question, but I hope you can answer it, Tiankai.
Okay.
No pressure, no pressure. What would be your ideal future state of companies in terms of AI? Now that we have these superpowers, and perhaps that's a big word, but now that we have AI and can shape it and use it intentionally, and not only be recipients of its effects, from a strategic point of view, what is your vision of how companies will operate in the future with AI?
Right. If I were to summarize it in a simple statement, it would be a balanced approach and operating model in which human beings and AI collaborate in a balanced way to actually achieve the goals of an organization effectively and efficiently.
And that also means that we know what tasks are supposed to be given to human beings, and they are given to them as for a reason. And that we also know the limitations and the opportunities of ai. And we are giving them clearly a task that AI is able to do, and we are not, um, giving it tasks that we aren't supposed to give it to them that might be, um, life or death to students.
True, right. Land in a very saturated way. So it all comes with that understanding and that reflection almost as a whole of what will be human only tasks.
And we do keep them to human beings, and we have exactly the right people to fulfill these tasks. And having, giving, um, AI agents or AI applications in the future, the necessary context and the necessary, uh, enablement to do their tasks and automate things and help with things in the most possible way. But the reason why I think that's also still a few years away is because we're, everyone is an experimentation mode, right?
I don't think anyone feels confident enough to know exactly what should be the human tasks and what should be the AI tasks. It's evolving over time as well. But also, we are all testing the waters all the time, right?
Like, should this be automated? Should that become an AI thing? Maybe not.
And then we are all just going back and forth. So it will take time for the human species as well, I think, to actually find that balanced middle ground sometime. Yeah.
Yeah. Absolutely. Absolutely.
One of the things that I see as a potential, again, going a bit into the specifics of an IT landscape, is this: every company has a lot of applications, right? And I see kind of two perspectives. One perspective is that the number of applications will increase significantly because of AI.
Mm-hmm. Because you can create applications very easily. Yeah.
But they will perhaps also disappear quite easily again, right? Because you don't need them anymore. So I think we're moving from the big paradigm that we knew as software as a service, mm-hmm,
to quite the opposite: service as software. When we need to do something, we can create the technological software package, if you will, to perform that action, and then we can pull it back when we don't need it anymore. So I see a way more dynamic IT landscape, a way more changing IT landscape in companies.
I hope we will be as good at retiring applications as we are at creating them. That's my big thing. Yeah.
Otherwise I believe there will be a big mess there. And I know for certain that metadata management technologies are going to become way easier, if we look at how we can describe data, tag data, but also, at the other end, search for data and explore data; that will be easier.
But the overall metadata management will be way more complicated, I think, because we will have an enormous number of applications, they will contain very critical data, and it will all be very dynamic. So I see a future there, but again, that's very tied to technology, right? Uh, right.
I really like what you're saying. It's something that I can definitely imagine too. But I also feel like there might be a touch point to innovation generally in the past, right?
Because there is a difference between experimentation mode and running-in-production mode, right? And we all know that you fail, you test out things, but it's not being used by the wider user base yet for a reason, because you're still building, testing, and iterating, right? And at some point, you make the decision to say, okay, now we want to release it.
We have an MVP, right? We now want to give it to the key user group that we want to have, and they're going to use it. And based on that, in an agile way, we'll iterate again.
And somewhere in between, there's a choice to be made about what documentation looks like and how much is integrated into the rest of the architecture that we have in an organization, too, right? Because I would argue, especially to your point, that with AI we can experiment and build and prototype so much faster now, but not all of the prototypes have to be immediately tagged and documented and known in the metadata space that we have, right? Mm-hmm.
And maybe there needs to be a clearly defined threshold where we all decide that this one is now staying a little bit longer than all of the different prototypes we had in the past.
And only that part then has to go through a checklist, or a quality gate, whatever you want to call it, to actually become that part, right? Which means, to go back to the focus of our webinar, that we as human beings need to decide, right, what are the right criteria to judge something as ready for production versus what is still experimentation? Because that line is very thin in the time of AI.
So we need to basically be clear, but flexible as well, to see how we move this forward. Yes. I mean, that is at least how I see it. The title of your book is Humanizing AI Strategy.
Mm-hmm. And I see so many people in companies being only on the defense about AI. Like the developers saying, well, it'll code for me; new developers won't join because it'll code for them.
And, you know, marketing is saying, well, we don't need to create marketing, because it'll create the marketing for us. A lot of people are in a defensive, yeah,
state of mind, which is not wrong. I can understand that, and I definitely feel it. I'm sure you also feel it. You must be asking yourself as an author: will there be a role for someone like me, someone who writes books and has opinions about things and puts them down in writing?
I ask myself that from time to time. I think I have at least a couple more books to write before it all explodes. But, uh, Just a very subtle countdown in your head.
I get it. Yeah. Yeah, exactly.
Uh, right. But if we take AI seriously, I think we are in a situation where we have to ask ourselves: what is it we want to do with AI? What kinds of capabilities do we want to perform?
How do we want to work with it? And so that's how I see how you could humanize an AI strategy. Maybe it's thought a little upside down in terms of your book.
Um, but, uh, no, actually I don't think so. I would take the liberty to say that's a perspective covered in the book, right? Absolutely.
I completely agree. Maybe just to add to that, not only as an author, but as a musician, right? Mm-hmm.
Of course, I'm thinking a lot about how AI-generated music is competing with original, human-created music, too, right? And it's really the thing where, even now, just checking out AI music tools, it does still feel like it's missing its soul a little bit. I know that sounds like a very subjective view, but it does feel like, for some reason, as close as it sounds to human-made music, it's a little bit too perfect, right?
Mm-hmm. And I realize that it's really the imperfections in human-made music that made it sound so authentic in the first place, right? Because now that you have AI-generated music, it plays all of the perfect notes; all the voices sing exactly the notes you want them to sing.
The rhythm is always perfectly, mathematically divided. But if you hear an old record from the Rolling Stones, the tempo isn't exactly the same from one part to the next, for example, and there are some wrong notes by the guitar here and there that just made it onto the record, too. But that's part of the flair, right?
Of human-made music. So I hope, using that as an analogy, that human originality hopefully still has a place, right? And thinking dystopian-wise: if all the creative work stops and all of the creativity comes just from AI generation, then there's no choice but for AI models to learn from AI-generated content again.
And we all know that recursive training leads to a degrading of quality, too, right? So yes, that makes human creativity even more important, because otherwise AI is not going to be helping us anyway. So that's my hope, at least the optimistic view: that human creativity and originality are still very much important, because otherwise the risk of AI not working in the future anymore is also there.
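The degradation Tiankai describes, often called model collapse, can be sketched with a toy experiment: a hypothetical "model" that just fits a Gaussian to its training data, where each generation is trained only on samples produced by the previous one. The setup, names, and numbers here are illustrative, not something discussed in the conversation; but the shrinking spread shows the mechanism.

```python
import random
import statistics

def next_generation(samples):
    """'Train' a trivial model (fit a Gaussian to the data),
    then sample the next generation's training data from it."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in samples]

random.seed(42)
# Generation 0: "human-made" data with genuine variety.
data = [random.gauss(0.0, 1.0) for _ in range(50)]
initial_spread = statistics.stdev(data)

# Every later generation learns only from AI-generated output.
for _ in range(1000):
    data = next_generation(data)
final_spread = statistics.stdev(data)

print(f"spread of generation 0:     {initial_spread:.3f}")
print(f"spread after 1000 rounds:   {final_spread:.3f}")
```

Because each generation re-estimates the distribution from a finite sample of its own output, estimation noise compounds and the diversity of the data steadily collapses, which is the toy analogue of AI models degrading when trained on AI-generated content.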
Yeah, indeed. No, I see many challenges ahead, but I also see many exciting opportunities. Yes, it will take a reconfiguration of many positions.
Mm-hmm. But I've seen stuff floating around that I don't agree with. And I can give you a couple of examples.
For example, on Substack, where I am spending more and more time, I saw a person posting a quote from Bill Gates, but it was wrong. It was a quote saying that AI would take away doctors and teachers, but he actually has a podcast called Unconfuse Me, where he very early on said the exact opposite. The quote was an interpretation from a journalist that emphasized something in a way that clearly wasn't right,
that wasn't the intention of what Bill Gates had said. Basically, he said very early on in the conversation with the tech genius, I forgot his name, that what we will see is augmentation. Like, yeah.
The world does not have enough doctors. The world does not have enough teachers. Yeah.
So teachers and doctors are not going away. We need every single one of them. But maybe we can expand their reach to a bigger part of humanity. Wouldn't that be nice?
And that, I think, is the kind of conversation we need to have. Obviously, very specifically, people will find themselves in a role where they can fear: will this be taken over by AI?
And I think we should all respect that. But for the needs of humanity at large, and also in very specific communities and societies, there is more need than there is supply. And that's where AI has a fantastic role to play. Right?
Absolutely.
I completely agree with you. I mean, just think about ourselves as patients of doctors, for example, right? I had that thought experiment a few days ago.
If I had the choice between a human doctor and a robot doctor, or a doctor with access to an AI application, I think my tendency is definitely the doctor with the AI application, because it's the best of both worlds, right? I would feel weird about a robot making a choice about my life and death, right?
That is, for me, just ethically and technologically weird to think about. But also, a doctor who doesn't have the most up-to-date information and maybe is relying only on gut feeling and outdated knowledge is always a risk; that's just human behavior. But someone who is connected to the up-to-date world, is able to navigate technology, and has access to information in the right way,
that sounds like something good to me. So I like what you said, right? It's augmentation, right?
It's not replacing; it's really about augmenting and amplifying the good things that people can do. And hopefully that is the, I call it, realistic-optimistic way of looking at things, right? Not just hyping optimism, but realistically being optimistic and believing that good things are going to happen where they are.
Yeah. Yeah. Every single technological evolution will have bumps on the road.
It's inevitable that Absolutely. that people will feel threatened, and, again, I must emphasize that is something we should all respect. Still, I think it's the positive potential of this technology that will win.
And I think it's our duty to formulate that and to actually be proactive in that. I take a software approach to this. Yeah.
So my way of thinking about it is that we can become way more dynamic. There's hardly anything in an enterprise setting that is as difficult to change, to manage, to even discover and then manage and change, as the IT landscape of a company that is, say, more than a couple of decades old.
That is a discipline in itself. Maybe we will see more elasticity, yeah, in enterprise IT landscapes because of AI, right?
I tend to think that would be a very, very nice thing. But it's obviously something a bit further out in the future right now. Yeah.
But you can already see it. I don't know if you saw, for example, with the GPT-5 release, there was a demo of a small language-learning app that could create quizzes and multiple-choice tests, and the level of refinement now is pretty good. So I don't think, absolutely, I don't think it's ten years out in the future before we see enterprise applications being spun up and down to perform various tasks.
And I think absolutely. I think that blends quite beautifully with metadata management anyway. Anyway, this is me ranting.
I should be asking you questions. No, absolutely. Actually, just to build on top: I think what AI has helped me with this year, at least personally, is also to be much more adaptive to basically any scenario that I'm in, right?
Because it's so easy now to learn and gain knowledge, to fill in the gaps in what you don't know, to look things up and get answers in the tailored way you prefer, that I think it has super-accelerated my learning this year. On top of just reading stuff, I can also now ask questions with context, putting in the right sources for it to answer specific questions for me, so that I feel like my head is super happy about having been able to digest and process so much knowledge nowadays. And also, to your point, if we have an organizational IT landscape that is elastic, I think human brains and human knowledge are going to become more intuitive and tailored to people too, which makes human beings more elastic in their behavior and their mindset in the future, right? So it goes hand in hand, and I really like that, right?
That if we are able to do that, then we're all becoming more elastic and agile and can actually achieve goals in a much more adaptive way. Yeah. Yeah.
Yeah. Absolutely. Okay.
So I think we'll slowly begin to close the conversation. If there are any questions from the crowd, now is the time to raise your hand and ask, right? We forgot to mention that at the beginning, right?
But we have a special surprise for all the people who are going to ask questions. One signed copy of the book will be sent to one lucky winner, who will be determined after the webinar. So anyone, feel free to ask questions, and we're going to choose one of you to get a signed copy of my book, signed by myself, of course.
And we can also take questions. Uh oh, there we go. Hi, Ty.
Good to see you. Yeah. Oh, you also know each other.
Yeah, of course. Yeah. Yeah.
Fantastic. Fantastic. Great.
Well, this was just a hi, but you're welcome to also ask a question, Ty, if you have one. And if not, I think we'll also take questions by email afterwards, so you can hit me up on LinkedIn. I think we might need to do it that way, Tiankai.
Yeah, sure. Sounds good. Yeah, let's give it a minute or two, but, okay.
So, basically, there we go. Yeah: will we have, in the future, also incentives for AI agents? Oh, great question. Do you want to answer that one?
Yeah, let me give it a try. So incentives for AI agents would mean that AI agents would get rewarded for doing certain things, right? In a way, I feel like this is often how AI already works, right?
If we think about reinforcement learning, right? Then you're basically rewarding it for the right things and punishing it for the bad things already. So I feel like, technically speaking, this is already happening, because this is the mechanism behind it. In a more philosophical sense, maybe, right,
the question is whether AI agents see incentives as an additional motivation to do even better than they're already doing, which would be a very interesting question. Because I read an article from Anthropic where they basically put AI agents into a scenario where they had to fight for their survival, and apparently they started lying to people and blackmailing people too, right? So it's an interesting one, where the survival mechanism of human beings has been transferred to an LLM, basically, to behave that way.
So that raises a lot of gray areas, because if they can behave that way under duress, then what happens when you start to incentivize them? Do they actually get better than that? I don't have an answer to that; it feels a little bit scary to think about, to be honest.
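The reward-and-punishment mechanism Tiankai mentions can be illustrated with the simplest reinforcement-learning setup, a multi-armed bandit solved by an epsilon-greedy agent. This is only a sketch of the general principle, not how production AI agents are trained; the action count, reward values, and hyperparameters are made up for illustration.

```python
import random

# Three possible actions with fixed (hidden) payoffs;
# the agent must discover which "incentive" pays off best.
REWARDS = [0.1, 0.5, 0.9]

def train(steps=500, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(REWARDS)   # estimated value of each action
    n = [0] * len(REWARDS)     # times each action was tried
    for _ in range(steps):
        # Explore a random action occasionally,
        # otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            a = rng.randrange(len(REWARDS))
        else:
            a = max(range(len(REWARDS)), key=lambda i: q[i])
        reward = REWARDS[a]             # the incentive signal
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]  # incremental average update
    return q

q = train()
best = max(range(len(q)), key=lambda i: q[i])
print(f"learned values: {[round(v, 2) for v in q]}, best action: {best}")
```

The agent is never told which action is "right"; it simply drifts toward whatever the reward signal pays for, which is also why the choice of incentive matters so much in the scenarios discussed above.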
Um, maybe it's a good point to hand that over to you, Ola. What do you think about that? Well, I think managing an agentic architecture is something that will obviously take human curation. I am personally not afraid of the dystopian scenarios where agents attack each other and humanity, where we see artificial intelligence built into robots and other machines attacking humanity.
It all runs on electricity; you can always pull the cable. I'm not super afraid of that.
That's a good point. But there will also be specific agents that are tasked with managing other agents, exactly for monitoring this kind of behavior. So I don't see that as a problem, but I do recognize it's a great question.
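The pattern mentioned here, dedicated agents monitoring other agents, can be sketched as a supervisor that screens every proposed action against a policy before it is executed. All class names and policy keywords below are hypothetical, purely to show the shape of the idea; in a real system the review step might itself be an LLM call or a rules engine.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    agent_id: str
    description: str

class SupervisorAgent:
    """Reviews other agents' proposed actions against a simple
    keyword blocklist and keeps an audit trail of every decision."""

    def __init__(self, forbidden_keywords):
        self.forbidden = [k.lower() for k in forbidden_keywords]
        self.audit_log = []  # (agent_id, description, allowed)

    def review(self, action: ProposedAction) -> bool:
        text = action.description.lower()
        allowed = not any(k in text for k in self.forbidden)
        self.audit_log.append((action.agent_id, action.description, allowed))
        return allowed

supervisor = SupervisorAgent(
    forbidden_keywords=["delete all", "blackmail", "exfiltrate"]
)

print(supervisor.review(ProposedAction("worker-1", "Summarize Q3 sales report")))   # True
print(supervisor.review(ProposedAction("worker-2", "Blackmail the administrator"))) # False
```

The audit log is the piece that keeps a human in the loop: curation then means reviewing the blocked decisions rather than every single action.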
Yeah. So we have another question coming up, from Jane. Mm-hmm. Maybe I can read it and then you can answer it.
Sure. Hi, great webinar. Considering the human side of AI, I am concerned about the lack of enforcement of certain AI usage policies and guidelines, thinking about Isaac Asimov's robot rules.
Mm. I think we're quite behind here, and hence humanity feels threatened by AI. Could you share your opinion on this, Tiankai?
That's a great question and a very deep one. I do think, just from an AI governance point of view, the experience, even already in data governance, is that those policies and guidelines about how to behave and what not to do are usually only enforced when something bad actually happens, right?
Either it's driven by a regulation where it's mandatory anyway, otherwise you get fined or people go to jail, or you get into a s**t storm with customers, right? Otherwise, you only look into it when something bad happens and you then want to avoid it a second time. And now we're at that interesting point where we haven't dealt with AI for as long as with other technologies, at least not to the extent that we currently do.
So I understand that we're all afraid, but nothing big and evil or bad has happened yet that would cause those rules to be proactively created and actually enforced. In a way, I think the more it's being used now, the more bad things are going to happen. And ideally, those guidelines and rules and policies are then created right before it becomes much bigger and happens at big scale.
Having said that, I do think that regulations are, in that way, also an opportunity, right? Because they are supposed to bring us to the right side of history. And I'd rather be safe than sorry,
in some aspects, especially when it's about making choices about human lives, right? So using those as a vehicle in organizations to drive those policies is never a bad thing. And you can then expand on it, maybe even using that momentum of compliance to actually implement those policies as well.
Tiankai, we will close at the hour sharp. There is another question also, but I took a screenshot, and you can Okay. you can do the lottery afterwards.
And it was a pleasure having you on, Tiankai, on this webinar, Data Explored, which is absolutely not about our technology, but about great authors, thinkers, and practitioners out there in the data and AI community. Thank you very much. Thank you so much for having me. It was an absolute pleasure.
Be in touch. Take care.