Navigating Forward

Top 10 myths about AI with Kevin McCall and Melanie Roberson

December 11, 2023 Launch Consulting Season 5 Episode 8

On this episode of Navigating Forward, Melanie Roberson and Kevin McCall from Launch Consulting walk through ten of the top misperceptions or myths about AI. From bias and hallucinations to data readiness and cost, they chat about some of the top concerns as organizations think about delving into AI projects. Plus, they discuss some of the top questions Kevin hears from friends and business leaders alike: Is AI creative? Is AI conscious? Will AI replace people? Listen in for answers to these questions and more. 

Find Kevin at https://www.linkedin.com/in/kevin-mccall-74b66/
Find Melanie at https://www.linkedin.com/in/melanie-roberson-55a0a23/

Learn more about Launch Consulting and AI at https://www.launchconsulting.com/ai-first

00:00:03:08 - 00:00:44:11
Narrator
Welcome to Navigating Forward, brought to you by Launch Consulting, where we explore the ever-evolving world of technology, data, and the incredible potential for artificial intelligence. Our experts come together with the brightest minds in AI and technology, discovering the stories behind the latest advancements across industries. Our mission: to guide you through the rapidly changing landscape of tech, demystifying complex concepts and showcasing the opportunities that lie ahead. Join us as we uncover what your business needs to do now to prepare for what's coming next. This is Navigating Forward.

00:00:44:14 - 00:00:58:04
Melanie Roberson
Hi everyone. My name is Melanie Roberson, and I am the director of Organizational Effectiveness and Change at Launch Consulting. We're here today to talk about some common misperceptions around AI. I'm here with Kevin McCall. Kevin, why don't you go ahead and introduce yourself?

00:00:58:09 - 00:01:29:29
Kevin McCall
Thanks, Melanie. Yeah, Kevin McCall here out of Seattle. I run our overall AI program here at Launch. So good to be here today. So just as a bit of background, this list is inspired by common conversations that I've had over and over again with customers, colleagues, friends, and even family about AI, and really is meant to clarify some pretty common misperceptions about the nature or use of AI.

So, and I know I've shared this list with you, Melanie, and you've validated that you think it's a good top ten list. So, I'm going to allow you to drive us through this list today.

00:01:30:00 - 00:01:41:04
Melanie Roberson
Yeah, it sounds like a very fun topic. So, let's get into it. Okay, so the first myth or misperception is: can I prevent bias in my model? Why don't you share a little bit of information?

00:01:41:10 - 00:02:31:20
Kevin McCall
Okay. Yeah, this comes up a lot. The short answer is almost without exception, no. There is going to be bias in AI models. And the bottom line here really is that people are biased, right. Sometimes those biases are subtle, sometimes they're profound, sometimes they're conscious, sometimes unconscious. But people have biases. And since people use apps, and people then create data by using those apps, when AI models learn from that data, they're going to inherit that bias. Right.

So, one thing that's interesting about this space, too, is that we often think of bias, and we hear bias referred to, as kind of a pejorative term in this domain. But it's important to also remember that most people are quite proud of their biases, depending on their heritage, their politics, or their religion. Right. So, everybody carries around these biases. And so, it's almost impossible to avoid having bias in models.

00:02:31:27 - 00:02:40:19
Melanie Roberson
So, Kevin, how should people think about bias in their model, then? What's the mindset that you should have when you're thinking about bias?

00:02:40:22 - 00:03:11:10
Kevin McCall
Right. The important thing, I think, is to pivot away from this idea of will there be bias in my model? Because that answer is yes, almost inevitably. And the real question is how do I align the operation of my model? How do I align that model and its use with my intended values? How do I align that model with what I want that model to do in use, right? And how do I then regularly monitor and evaluate that model over time to make sure that what it's doing is consistent with what I need and want it to do in practice.
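
To make that "regularly monitor and evaluate" idea concrete, here's a minimal sketch of one kind of ongoing check: comparing outcome rates across groups in a batch of model decisions. The column names, data, and alert threshold are all hypothetical; choosing the right metrics and thresholds is exactly the alignment decision Kevin describes, made with domain experts.

```python
# Hypothetical example: compare outcome rates across groups in a batch
# of model decisions, and flag large gaps for human review.
import pandas as pd

# Assumed columns: "group" (a monitored attribute) and "approved"
# (the model's binary decision). Names and values are illustrative only.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   1,   0,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"Largest approval-rate gap between groups: {gap:.2f}")

# A simple alerting threshold; the right value is a business and ethics
# decision made with subject matter experts, not a universal constant.
if gap > 0.2:
    print("Gap exceeds threshold -- route this model batch for review.")
```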

00:03:11:17 - 00:03:23:10
Melanie Roberson
Awesome. That's great. Okay, so the next one is: can I prevent hallucinations in the model? And hallucinations are an interesting one, because all kinds of things come to my mind when I think about hallucinations.

00:03:23:13 - 00:04:20:10
Kevin McCall
Right, right. I guess the shortest way to think about this is untruths, right? People are regularly surprised when they ask a large language model something and it gives them something back that's not factually correct. So, this is similar to the last question, of course, but there's a little bit of a different twist on it. So, the short answer, again, is almost without exception, no.

And as a bit of background here, an important thing to understand is that large language models are not large knowledge models. Those are two very, very different things. Large knowledge models do exist, like that system that famously won on Jeopardy, right. And that was all about answering questions factually. But large language models are not that, right. Large language models master language, but they master language by studying a lot of text, some of which is factual and some of which is not. So, it shouldn't be surprising to people that a large language model will then say something that's untrue because it's learned from a ton of data, some of which isn't true.

00:04:20:13 - 00:04:24:22
Melanie Roberson
But couldn't a large language model fact check itself before responding?

00:04:24:25 - 00:05:40:21
Kevin McCall
It can, right? And so, this is actually a really good question, because there are different ways that you can tackle this problem. There are patterns that have become quite popular lately to try to do that. For example, there's a lot of talk around this idea of Retrieval Augmented Generation, or what they call RAG, which is the whole idea that when someone asks a question of a model, how do I do a lookup in order to bootstrap, you might say, that response back to the user based on some sort of trusted corpus? And so here it's important to distinguish two different things. When large language models learn everything that they do, there's knowledge, you might say, buried in the actual model itself, and I'm going to use the term parametric memory to refer to that, right?

It's knowledge that's stored directly in the model. This whole idea of Retrieval Augmented Generation, or what people just refer to as RAG, is all about, when a user asks something of the large language model, how do I do a lookup in a trusted corpus of text? It could be company specific; it could be domain specific. And I essentially use that to augment and then generate the response back to the user. So, I can have more trust, right? I can have higher confidence in the accuracy of what I hand back to the user.
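
Here's a minimal sketch of the RAG pattern described above, using simple TF-IDF retrieval over a tiny in-memory corpus. Real systems typically use dense embeddings and a vector store, and the final LLM call is left as a placeholder, since that part depends on whichever model or API you actually use.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Retrieval here is plain TF-IDF; production systems usually use dense
# vector embeddings and a vector database, but the shape is the same.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are 8am to 6pm Pacific, Monday through Friday.",
    "Enterprise customers get a dedicated account manager.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(question: str) -> str:
    """Ground the model's answer in retrieved, trusted text."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below. If the answer is not in "
        f"the context, say you don't know.\n\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt("How long do I have to return an item?")
print(prompt)
# response = ask_llm(prompt)  # placeholder: call your LLM of choice here
```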

00:05:40:23 - 00:05:47:20
Melanie Roberson
Awesome. Okay, great. Thanks for that. Okay, myth number three, are chat bots sentient or conscious?

00:05:47:23 - 00:06:31:12
Kevin McCall
Yeah, this is kind of an entertaining one, Melanie. The short answer is no, not even close. But the reason I left it on the list is because I think it's interesting to ask another question, a more reasonable question, which is, do large language models have an understanding of the world, right? Do they have a deep understanding of the world?

Because after all, they've consumed and learned from a lot of text. For example, GPT-2 consumed about 9 billion words. GPT-3 learned from about 300 billion words. GPT-4, yeah, GPT-4 famously learned from over a trillion, you know, words. And so, they've certainly learned a lot. If you compare that to the average teenager, the average teenager has heard about a hundred million words, right, by the time they get into their teens.

00:06:31:12 - 00:06:33:24
Melanie Roberson
They've gotten the teenagers beat.

00:06:33:27 - 00:06:53:23
Kevin McCall
They've, right, yeah. And beyond, right. They've consumed so much text that they have developed a decent understanding of the world, so good, actually, that modern LLMs can now pass something called the Winograd schema test. As of 2019, these models can beat it. So, it's a pretty exciting moment.

00:06:53:23 - 00:06:57:20
Melanie Roberson
So, wait. What's the Winograd schema? Give me an example.

00:06:57:22 - 00:09:05:06
Kevin McCall
Sure. So, this is a fun topic, because the Winograd schema was originally conceived as an improvement to the Turing test. It was created by a University of Toronto professor named Hector Levesque right around ten years ago, I want to say. Maybe 12 years ago. And so let me just give you some examples. There are fancy linguistic ways to refer to this, like disambiguating anaphora, and things like that.

But let me just give you a sentence in order to give you an idea of what these Winograd schema tests look like. So, if I were to ask you two sentences, the trophy doesn't fit in the brown suitcase because it's too small, or the trophy doesn't fit in the brown suitcase because it's too large. You know that in the first example the trophy doesn't fit in the brown suitcase because it's too small means that the suitcase is too small, right.

But if I say the trophy doesn't fit in the brown suitcase because it's too large, you'd know I'm talking about the trophy, right? But that requires an understanding of the world, an understanding of this idea that smaller things, you know, can fit into bigger things, but bigger things can't fit into smaller things. So, if I were to say the couch didn't fit through the doorway because it was too small, you'd know that's the doorway, right?

So, in short, the Winograd schema test has hundreds and hundreds of statements like this that illuminate knowledge about the world. In this example, it was small things and large things which children learn, obviously very, very young. But if I were to offer a different sentence, like the city council refused the demonstrators’ permit because they feared violence, now we're talking about human motivation, right?

We're talking about agency, right, on behalf of the demonstrators. And so, if I spun it in the other direction, the city council refused the demonstrators' permit because they advocated violence, then you know I'm talking about the demonstrators. And that's why the city council denied the permit. The point here is that those tests are hard, and they require knowledge about the world. In roughly 2019, these large language models were able to pass these tests with over 90% accuracy. So definitely an interesting moment, but that doesn't mean they're sentient, no.
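
To show the structure of these tests in code, here's a small sketch of how Winograd-style items and a scoring loop might be organized. The `resolve_pronoun` function is a stand-in for whatever model you'd actually query, so this only illustrates the data shape and the evaluation, not a real solver.

```python
# Winograd-style evaluation sketch: each item is a sentence with an
# ambiguous pronoun, two candidate referents, and the correct answer.
winograd_items = [
    {
        "sentence": "The trophy doesn't fit in the brown suitcase because it is too small.",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the suitcase",
    },
    {
        "sentence": "The trophy doesn't fit in the brown suitcase because it is too large.",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the trophy",
    },
    {
        "sentence": "The city council refused the demonstrators a permit because they feared violence.",
        "candidates": ["the city council", "the demonstrators"],
        "answer": "the city council",
    },
]

def resolve_pronoun(sentence: str, candidates: list[str]) -> str:
    """Placeholder: ask a model which candidate the pronoun refers to."""
    raise NotImplementedError("plug in your model call here")

def score(items) -> float:
    """Fraction of items where the model picks the correct referent."""
    correct = sum(
        resolve_pronoun(item["sentence"], item["candidates"]) == item["answer"]
        for item in items
    )
    return correct / len(items)
```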

00:09:05:11 - 00:09:13:14
Melanie Roberson
So, what you're saying is that they obviously are demonstrating pretty impressive capabilities, but really they don't have a sense of self.

00:09:13:16 - 00:09:22:26
Kevin McCall
They have no, right, they have no motivation. They don't experience jealousy or hunger, you know, anything like that. And we're a long way away from that. So, the answer is definitely no on this one.

00:09:22:26 - 00:09:27:21
Melanie Roberson
Got it. Got it. Okay. So, number four, is AI creative?

00:09:27:24 - 00:10:17:15
Kevin McCall
So, this is an interesting one, because no AI is truly creative. No AI is literally creative. It may look creative to people like me and my friends, but that's a super low bar, I promise you, I'm not creative at all. And just because these models look creative doesn't mean they are creative. So let me be more precise. AI systems operate on patterns and data, right. Underneath, all these models really act on really, really fancy probabilistic math. They don't possess any type of intrinsic creativity. They can't envision new ideas outside of the scope of their programming or the patterns that they found in the training data. So, from a human standpoint, AI doesn't have any emotions or emotional understanding. It lacks intent. It lacks inspiration, right? So, it can't truly be creative.
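
That "really fancy probabilistic math" largely comes down to repeatedly sampling the next token from a learned probability distribution. Here's a toy sketch with made-up numbers that shows generation as sampling over patterns rather than an act of intent or inspiration.

```python
# Toy next-token sampling: the model's apparent "creativity" is just
# drawing from a probability distribution learned over training data.
import numpy as np

rng = np.random.default_rng(0)

# Made-up scores a model might assign to candidate next words.
candidates = ["blue", "cloudy", "falling", "banana"]
logits = np.array([2.0, 1.5, 0.5, -2.0])

def sample_next(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Softmax with temperature, then sample an index."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

print("The sky is", candidates[sample_next(logits)])
# Higher temperature flattens the distribution and looks "more creative",
# but it is still just sampling over learned patterns.
print("The sky is", candidates[sample_next(logits, temperature=2.0)])
```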

00:10:17:20 - 00:10:33:26
Melanie Roberson
Right. So, this is an important one, because there are lots of movies out these days that talk about AI in relationships, you know, all those movies about folks having relationships with AI.

00:10:33:26 - 00:10:36:25
Kevin McCall
Like Her. Yeah. Or Ex Machina, right? These are good movies.

00:10:36:25 - 00:10:39:18
Melanie Roberson
And you're saying that that is, in fact, impossible?

00:10:39:21 - 00:11:23:04
Kevin McCall
No, no, they can't even be creative, much less have, you know, these types of motivations. It's an interesting topic, though, because there are lots and lots of really fancy AI models out there that do really impressive things. Like, you know, style transfer models, where you can say, hey, I want a painting of Oscar the Grouch in the style of Picasso's Guernica, right? Or, I want to see the style of Van Gogh's Starry Night applied to a portrait of Cookie Monster or Grover or something. But these style transfer networks, or more advanced ones like generative adversarial networks, they just act on patterns at the end of the day.

00:11:23:07 - 00:11:24:29
Melanie Roberson
So, it's not creativity, it's just a pattern.

00:11:25:00 - 00:11:29:27
Kevin McCall
It's not. It's all pattern matching, and generation of new content based on those patterns.

00:11:30:00 - 00:11:37:26
Melanie Roberson
Ah, really interesting, Kevin, really interesting. Okay, so number five is, AI is a complete black box.

00:11:37:29 - 00:12:59:14
Kevin McCall
Yeah, this comes up a lot. And there's really a bifurcation here, where a lot of people say, oh, since AI is a black box, we're not comfortable using it. The short response is that sometimes yes and sometimes no. So let me double click on that. There are lots of deep neural networks that are so complex and have so many parameters, whether they're just massive models in the first place, or whether they're complicated ensemble models that bring together many different types of models, like gradient boosting algorithms, that sometimes they are effectively black boxes, right?

But there are all sorts of AI algorithms that are not black box at all. So, for example, linear regression algorithms or decision trees or Bayesian networks are very explainable. As a matter of fact, that's one reason why people love decision trees: they're very interpretable in the end. So anyway, sometimes yes, sometimes no. But there are lots and lots of AI tools in the proverbial toolbox that are very interpretable, very explainable. And for some of those deeper networks, there are tools like LIME and SHAP and others showing up that can be used with more complex models in order to increase transparency and explainability.
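
As a quick illustration of the interpretable end of that toolbox, here's a sketch that trains a shallow decision tree and prints its decision rules as plain if/else text. The commented-out SHAP lines show roughly how you'd attribute a more complex model's predictions to features, assuming the `shap` package is installed.

```python
# Interpretable-by-design: a shallow decision tree whose rules can be
# read directly, trained on the classic iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# The full decision logic, as human-readable if/else rules.
print(export_text(tree, feature_names=list(iris.feature_names)))

# For more complex models, post-hoc tools can attribute predictions to
# individual features. A rough SHAP sketch (requires `pip install shap`):
# import shap
# explainer = shap.TreeExplainer(tree)
# shap_values = explainer.shap_values(iris.data)
```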

00:12:59:16 - 00:13:08:13
Melanie Roberson
Got it. Okay. Number six is: AI models are massive, complex, and expensive, therefore we can't use AI.

00:13:08:15 - 00:14:32:16
Kevin McCall
Yeah, some people feel like these things have gotten so big and complex and expensive to train that they're not usable in big companies. And here's why that one's a fallacy: there are indeed a ton of very, very large models that are very complex and were very expensive to train. Certainly, people famously hear about what it costs to train, you know, these GPT models nowadays. But this is one of the main reasons why transfer learning has become so popular. And there are lots of examples of where transfer learning has been used, let's say, in the convolutional neural network space in order to process images, or of course lately in the large language model space, in order to build upon and customize open source large language models.

And even if you just go, for example, to the Hugging Face website and you say, hey, I want to download one of these, you know, fancy large language models, last time I checked, which was a few weeks ago, there were something like 240 unique models that are supported by the transformers library. So, there are a lot of them out there, more show up every single day. And so, this idea of transfer learning is, you know, is kind of like if I were going to, you know, lace up my running shoes and run from Seattle to Los Angeles, it's kind of like getting a bus ride to Burbank and getting dropped off there, right. You can get 90, 95% of the way there with some of these large models by using transfer learning to solve your problem.
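
Here's a rough sketch of that bus ride to Burbank using the Hugging Face transformers library: load a pretrained checkpoint, freeze the pretrained body, and fine-tune only a small task-specific classification head. The checkpoint name and label count are illustrative, not a recommendation.

```python
# Transfer learning sketch with Hugging Face transformers: reuse a
# pretrained language model and fine-tune only a new classification
# head for your task. (pip install transformers torch)
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2  # e.g., a binary classification task
)

# Freeze the pretrained "body" so only the new head's weights update;
# most of the learning is reused, which is the whole point of transfer.
for param in model.base_model.parameters():
    param.requires_grad = False

inputs = tokenizer("This product exceeded my expectations.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # head is untrained, so these logits are meaningless until fine-tuning
```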

00:14:32:19 - 00:14:39:13
Melanie Roberson
So, it sounds like there are a number of capabilities of these accessible models that have dramatically increased in recent years.

00:14:39:15 - 00:15:02:22
Kevin McCall
Absolutely. As a matter of fact, it was about three or four years ago that we really saw a lot more research papers being written. I want to say it was 2018, 2019 when there was an explosion of research papers that talked about how to apply transfer learning techniques in order to use these massive models and customize them, to apply them to your needs.

00:15:02:24 - 00:15:11:25
Melanie Roberson
Okay. So, number seven: new models are so unpredictable and unexplainable, we can't control them. They are probabilistic instead of deterministic.

00:15:11:27 - 00:16:32:15
Kevin McCall
Yeah, I hear this one sometimes as a reason that people are afraid of using AI as well. It comes up a lot in the autonomous systems space, where people say these systems are so complex, and because their behavior is often probabilistic, people are less comfortable putting them into production. And the short answer here is simply that there's a long history of using similar technology, like advanced process control systems in aviation or space exploration, even in manufacturing, chemicals, oil and gas plants. There are lots and lots of examples across industry of where engineers have proven patterns that they've been using for years and years in order to make sure that these things can run safely in operation, and to make sure that there are the right error handling techniques and management techniques to control these things in use.

And to be frank, a lot of what has been learned over decades in this space was applied to the level 0 to 5 model of automotive autonomy as well. So the automotive industry didn't have to start from scratch when they were thinking in terms of levels of autonomous operation and autonomous behavior. So, lots of tools in the toolbox here. This is just a continuation of what's been done for decades, really.

00:16:32:19 - 00:16:39:10
Melanie Roberson
And it sounds like in almost all of these situations, the human is still or can be in the loop.

00:16:39:12 - 00:17:15:27
Kevin McCall
Absolutely. Yeah. And that's an important attribute of all of these approaches. As I mentioned, oil and gas plants, chemical plants, you know, aviation, etc. You know, even when airplanes use things like autopilot systems, there's a human sitting in the chair, right. And there are well-defined circumstances when the agent will relinquish control to a domain expert, a subject matter expert who's been trained to manage it. So, again, yes, these tools are complex. Yes, they're powerful, but this is really a continuation of what we've been practicing in industry for decades.
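
A highly simplified sketch of that hand-off pattern: the agent proposes a setpoint change, guardrails check it, and anything outside the safe envelope or below a confidence threshold gets escalated to a human operator. Every function, limit, and threshold here is a hypothetical placeholder.

```python
# Human-in-the-loop supervisory control sketch. The agent only acts
# within a predefined safe envelope; otherwise it defers to an operator.
from dataclasses import dataclass

@dataclass
class Proposal:
    setpoint: float    # proposed new supervisory setting
    confidence: float  # agent's confidence in the proposal, 0..1

SAFE_MIN, SAFE_MAX = 50.0, 120.0   # hypothetical engineering limits
CONFIDENCE_FLOOR = 0.8             # below this, a human decides

def apply_setpoint(value: float) -> None:
    print(f"Applying setpoint {value}")           # placeholder for plant I/O

def escalate_to_operator(proposal: Proposal) -> None:
    print(f"Escalating to operator: {proposal}")  # placeholder for alerting

def supervise(proposal: Proposal) -> None:
    in_envelope = SAFE_MIN <= proposal.setpoint <= SAFE_MAX
    if in_envelope and proposal.confidence >= CONFIDENCE_FLOOR:
        apply_setpoint(proposal.setpoint)
    else:
        escalate_to_operator(proposal)  # the human stays in the loop

supervise(Proposal(setpoint=95.0, confidence=0.93))   # auto-applied
supervise(Proposal(setpoint=140.0, confidence=0.99))  # out of envelope -> human
```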

00:17:16:01 - 00:17:23:04
Melanie Roberson
Great. Okay. So, number eight: we need to have our data estate in order before we do AI, or we can't use it.

00:17:23:06 - 00:19:03:19
Kevin McCall
This might be my favorite out of the ten. And here's why: I regularly have conversations with customers where they'll say something like, our data governance policies aren't what we would like them to be, so we're not ready for AI. Or, our data estate isn't fully in order, so we can't do AI, right. And the problem with these statements is they're overly binary. There's no such thing as an organization that's either ready for AI or not ready for AI in a binary sense. So let me be more specific. At the end of the day, whether or not you have data that's ready for AI is a function of the use case. It's a function of the project that you want to do.

And so inevitably, if I talk to a company and we talk about ten possible use cases, ten possible projects they may want to do in AI, it's almost inevitable that for a few of them they have the data they need, right. They have it in sufficient quantity, etc. There inevitably will be a bunch where work needs to be done on the data. There could be issues of data completeness, data sparsity, data quality, etc. So yes, there are inevitably things that need to be done to that data to get it ready for that use case. And then you also inevitably get a couple of situations where the data isn't even remotely ready, and they have to start over again and collect the right data, because they don't have anything that's usable at all in the context of the problem they want to solve. So, I see those green, yellow, and red situations, you might say, every time I talk to a customer when we do that data assessment.

00:19:03:21 - 00:19:14:07
Melanie Roberson
So, this is a hard one to swallow, Kevin, because we all know the importance of data in AI models. Give me an example of the last one where you literally had to start over?

00:19:14:09 - 00:21:00:15
Kevin McCall
Okay, sure. And I realize this is kind of a tough one, right? So let me give you an example. I talked to a company just last year, a big, big manufacturing company. They have plants all over the world, hundreds of manufacturing plants. And when I started asking them about the data that we would need in order to potentially train a model to do autonomous control of an extruder, they said, oh yeah, we've got all the data you could possibly need. We've got data in our historian that goes back 15 years. We've got it on old extruders, new extruders, you know, Clextral extruders, Bühler extruders. We've got all the data you could possibly want. I said, okay, well, we want to create an agent that could potentially change the supervisory settings of those extruders up to once a minute, right.

And they said, yeah, we want to consider changes to those settings up to once a minute. I said, okay, great. Well, what's the control frequency at which all of this data has been collected that's sitting in your historian? And they said every 5 minutes. And I said, okay, well, we already have a problem, because if we want to train an agent to be making potential changes to supervisory control settings every 60 seconds, then we've got a lot of work to do to even try to use that data that's been collected at five-minute or maybe ten-minute control intervals. So that's a good example of a situation where we either had to do a ton of engineering work to try to interpolate what those reaction curves looked like, or start over again and start collecting data at a 60 second, or preferably a 30, 20, 10, you know, six second control frequency, so that we had the granularity of data that we needed in order to train an agent that was going to perform autonomous control.
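
For a sense of what that interpolation work looks like mechanically, here's a pandas sketch that upsamples 5-minute historian data onto a 1-minute grid. The tag name and values are hypothetical, and in practice you'd still have to validate whether interpolated points are faithful to the real reaction curves before training on them.

```python
# Upsampling historian data from a 5-minute to a 1-minute grid.
# Interpolated points are estimates, not measurements; whether they
# capture the real process dynamics is exactly the engineering question.
import pandas as pd

history = pd.DataFrame(
    {"barrel_temp_c": [118.0, 121.5, 119.8, 122.3]},  # hypothetical tag
    index=pd.date_range("2023-01-01 00:00", periods=4, freq="5min"),
)

one_minute = (
    history
    .resample("1min")      # insert the missing 1-minute timestamps
    .asfreq()              # new rows start out as NaN
    .interpolate("time")   # fill them by time-weighted interpolation
)

print(one_minute.head(8))
```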

00:21:00:18 - 00:21:09:08
Melanie Roberson
So it wasn't that the data needed to be in order because you were still able to manage the data. You just needed to manage it differently.

00:21:09:10 - 00:21:24:27
Kevin McCall
Right. So that's a great example of where they did a great job of collecting data for many, many, many years. But it didn't fit the use case, right? We didn't have the data we needed because it wasn't collected at the right control frequency. We didn't have the data we needed to train an autonomous agent that would hit the mark for them.

00:21:25:00 - 00:21:44:04
Kevin McCall
Right? So again, this is just a subtlety: whenever we objectively and kind of systematically evaluate different possible AI use cases, evaluating what data they have, and the condition that data is in, is obviously a very important part of assessing those different potential projects.

00:21:44:07 - 00:21:49:16
Melanie Roberson
Right. Okay. So, number nine: AI will generally replace people.

00:21:49:19 - 00:22:18:08
Kevin McCall
Yeah. We hear this all the time, don't we, Melanie? So, in very, very narrow situations, sure, I think that is true. But what I'm seeing is that in the vast majority of circumstances, it's not true. I think the simple summary is that I by far most frequently see AI being applied to tasks, right? And these tasks, by definition, are subsets of roles.

00:22:18:08 - 00:22:32:07
Kevin McCall
They're subsets of jobs. And so, for the foreseeable future, I feel like AI is going to be really, really good at much more narrowly defined tasks. But that's a subset of roles. It's not going to be replacing whole jobs, whole roles for quite a while.

00:22:32:09 - 00:22:40:03
Melanie Roberson
Well, that is good news. Okay, the last one is: AI can be effectively done without people. This is a good one for me.

00:22:40:06 - 00:23:31:01
Kevin McCall
Yeah. The short answer to this is no. Even though there are organizations out there that say, oh, just give us your data and we'll give you a fancy AI application or a fancy AI solution that solves the problem, generally speaking, that just doesn't work. And there are many, many examples of this. And sometimes you must suspend disbelief, you might say, because it just seems like in some areas you could do this with just a good crew of data scientists that aren't deeply informed on the domain.

But in general, I just don't see it being true. You need subject matter experts. You need domain experts there to interpret what's good, what's bad, right. To be able to define precisely what you want and need out of that system in order to hit the mark for your business. So generally speaking, that's not changing anytime soon.

00:23:31:03 - 00:23:44:01
Melanie Roberson
So, are you sure, Kevin? Aren't there examples where more accurate is going to be better? Customers have a system that's 80% accurate and AI comes along and it's 90% accurate. Isn't that always better?

00:23:44:04 - 00:25:37:06
Kevin McCall
So, it's definitely an interesting question, because let's take that example. Let's say you have a system that can predict the presence of a tumor with 80% accuracy, and somebody comes along and says, oh, I've got a better system, it can predict the presence of a tumor with 90% accuracy. But the problem with that is that when you consider the things that the model might get wrong, there are dramatic differences in the level of badness, you might say.

So, for example, let's consider false negatives and false positives. Let's say you have a model, and it says, oh, this isn't a tumor, don't worry about it. You know, go home, enjoy your weekend. And let's say that was a false negative, right, where the model got it wrong. There actually is a tumor there, but the model predicted that there wasn't. Compare that with a situation where the model says, hey, I think there might be a tumor here, we should do some more tests in order to be sure. Right. That's a false positive situation, potentially, where it says, hey, I think I see a tumor, let's do more tests. And getting that scenario wrong, where it says, hey, let's do a few more tests to be sure, is a lot less bad than a false negative, right?

And so, statisticians and data scientists will use these fancy tools where they will go way beyond accuracy, and they will consider things like F1 scores, and they will compare precision and recall, for example. But that's beyond the scope of our conversation today. The point is that in order to decide how to turn the dials, in order to decide what to optimize for when you build these models, you need to rely on the subject matter experts. You need to rely on the domain experts in order to decide what good looks like and what better looks like. And it's rarely something that can be summarized in a single statistic like accuracy.
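
Here's a small sketch of those tools using scikit-learn, with made-up labels and predictions: two models can land on the same accuracy while making very different kinds of mistakes, which is exactly what precision, recall, and F1 surface.

```python
# Beyond accuracy: precision, recall, and F1 on a made-up tumor example.
# Label 1 = tumor present, 0 = no tumor.
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score, f1_score)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

# Model A misses tumors (false negatives); Model B over-flags (false positives).
model_a = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
model_b = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

for name, y_pred in [("Model A", model_a), ("Model B", model_b)]:
    print(name)
    print("  accuracy :", accuracy_score(y_true, y_pred))   # both land on 0.8
    print("  precision:", precision_score(y_true, y_pred))
    print("  recall   :", recall_score(y_true, y_pred))     # sensitivity to real tumors
    print("  F1       :", f1_score(y_true, y_pred))
    print("  confusion:\n", confusion_matrix(y_true, y_pred))
```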

00:25:37:08 - 00:25:46:11
Melanie Roberson
Right. And as a people strategist, I always believe that having people involved is where AI and technology always shine.

00:25:46:13 - 00:26:14:01
Kevin McCall
Right. And I would say, you know, one thing I end up seeing a lot is, you know, AI is really no different than any other IT power tool. You know, it needs expert guidance, right? It requires a responsible steward and a steady hand, right. In any domain, really. So, you always need the subject matter experts involved. Right. Those are the people that have to be working closely with the technologists in order to make sure you're hitting the mark for your business or your organizational objectives.

00:26:14:03 - 00:26:20:29
Melanie Roberson
Yep. Agreed. Okay, so we just covered a lot of ground. Kevin, can you summarize the entire list succinctly?

00:26:21:03 - 00:29:00:19
Kevin McCall
Sure, sure. I know I'm not always really good at that, but let's go through all ten of them. So, the first one was: Can you create an AI model without bias? And the short answer to that is almost inevitably no. But there are ways to align your model with its intended output, right? So that's really the reality on that one.

The second one was: can we create AI models without hallucinations? And again, that answer is almost inevitably no. But there are definitely ways to fine tune your model or your application with content that is your ground truth, your organizational data, your industry data, to dramatically increase the probability that you're going to deliver the right answer and not hallucinate. Right. It's not a perfect process, right? But there are patterns that are emerging now that allow you to do that pretty well.

The third was: are AI systems conscious or sentient? Definitely no on this one. Very far from it today. And frankly, probably not soon. But who knows what the future holds. The fourth was: is AI creative? And the answer is no, not technically, but it sure can appear creative. Number five was: is all AI black box? Some of those models, yes, operate as black boxes, but for many of them the answer is definitely no. And things are getting better over time, with really powerful tools like LIME and SHAP and powerful approaches to better understand and interpret these models in operation.

The sixth was: are the powerful AI models so big and so expensive we can't use them? Short answer, no, there are lots of techniques to customize and fine tune models to your needs using transfer learning approaches. So that's become super common. Number seven: are AI models so unpredictable and unexplainable that we can't control them in some of these advanced scenarios? And the short answer there is no. There are proven techniques engineers have used for years to control some of these powerful autonomous systems.

The eighth was: do I need to get my data estate completely in order before using AI? This is definitely a no. Data readiness is rarely binary, and the data you will need is going to be a function of your use case; it's going to be a function of your project. Number nine was: will AI replace people? Again, no. We see that AI will normally be applied to tasks, not jobs or roles or an entire person's set of tasks, you might say, at a company. And the last one was: can AI be effectively done without people? And I would say generally no on that one. AI requires subject matter expertise every step of the way; it requires that human guidance and control to maximize its value and its impact.

00:29:00:21 - 00:29:03:18
Melanie Roberson
Really fantastic list, Kevin, thanks for sharing it.

00:29:03:20 - 00:29:05:22
Kevin McCall
Thanks, Melanie.