
Understanding Supply Chain Risk with AI and Big Data

September 21, 2023

In an era of increasing complexity, from emerging regulatory demands to constant disruptions caused by weather and climate, AI and Big Data have emerged as powerful tools to enhance visibility, improve forecasting, and drive operational efficiency. For supply chain leaders who want to mitigate network risk, data availability must go beyond the headlines: to location-specific impact, supplier- and shipment-level insights, unknown sub-tier relationships, and more.

View the session to delve deep into the transformative power of AI and Big Data in managing and mitigating supply chain disruptions. This webinar shares real examples of how businesses today leverage these technologies to navigate disruption and maintain resilient supply chains, along with steps you can take to do the same.

Presenter

Jim Hayden

Chief Data Officer, Everstream Analytics

Presenter

Pierre Mitchell

Chief Research Officer, Spend Matters

Lauren McKinley: 

Hello, everyone. Good morning, good afternoon. Thank you so much for joining our session today, Understanding Supply Chain Risk with AI and Big Data, presented by Everstream Analytics and Spend Matters. A couple of housekeeping items before we get started today. All attendee lines are muted. If you have any questions throughout the session, we encourage you to put them in the Q&A section of your GoToWebinar panel. This session is also being recorded, and we will send a copy following the call.

And now I will introduce our speakers. Our first speaker today is Jim Hayden, Chief Data Scientist at Everstream Analytics. Jim is a pioneer and innovator in creating analytical solutions to solve emerging problems in the finance, telecommunications, and supply chain industries. He has over 25 years of experience producing award-winning solutions through the practical application of knowledge discovery and machine learning. At Everstream Analytics, Jim leads the data science team, building scalable, predictive, and prescriptive supply chain risk solutions.

Pierre Mitchell is the Chief Research Officer at Spend Matters. Pierre leads procurement research and IP development at Spend Matters and is the chief architect of the firm's SolutionMap framework. He has over 30 years of industry, advisory, and research experience and is a recognized digital procurement transformation expert, specializing in advanced supply processes, practices, metrics, and enabling digital tools and services. Now I will turn it over to Pierre to kick off today's conversation.

Pierre Mitchell: 

Thanks, Lauren, and thanks, everybody, for attending. I'm really excited to be on this event with you, and especially with Jim, to ask the expert, if you will, in this area. I've been around supply chain and procurement for a long time, both in industry and then coming over to the consulting, advisory, and research side. And having cut my teeth on things like supply network design projects and various types of analytics, the amount of technology transformation and change that's happening right now is amazing. So I'm really looking forward to tapping Jim's experience on this.

I want to start with a little bit of overall context, and this is from the 2023 Deloitte CPO study. I have actually been one of the co-authors on this with Deloitte for the last three installments. This last one was about orchestrating and managing resilience in the face of risk and complexity. And it's interesting: what we heard pretty loud and clear from the study was, one, risk has not gone away. The risk types certainly change, but risk has actually increased since the last installment of the study. In terms of risk types, there are many out there: inflation, certainly, but supply continuity and resilience are right up there, along with geopolitical risk. And all these risks touch each other; if you map them against each other, they definitely have an impact on one another.

And then it’s not just the traditional, let’s say, risk types, but certainly on the compliance side, that’s been a big issue. And risk and compliance should theoretically be integrated here. But there’s definitely a lot of compliance requirements on the ESG side and that could certainly include Scope 3 emissions is really big right now, and especially just more broadly with things like German supply chain law and you’re going to have your own regs in force. But Scope 3 is a big one. And if you’re going to do Scope 3, you better have a pretty good understanding of your supply network to be able to understand that on inbound and outbounds across the whole supply chain. Slave labor, conflict minerals, again, you need to be able to map those to the supply chain. And then also on the responsibility and resources, who actually owns these different risk types and mitigates them. 

Because quite often the governance model is not really great in terms of how we measure and resource folks to actually protect value. And then in terms of the top three mitigation strategies: obviously, alternative sources of supply, and supplier collaboration, which is also the number one procurement strategy that CPOs are going to use, and there's a lot in that supplier collaboration bucket. But guess what number three is? N-tier supply network visibility, and then being able to do something with it. That ability to do something with it is challenging, because you have so many stakeholders involved. Getting these outside insights is terrific, but I put the "death by TLA," three-letter acronyms, out here, because there are a lot of groups at the enterprise level, within procurement, within IT, within different centers of excellence.

So I guess I’m going to turn this question over to Jim in terms of, I know you work with a lot of different clients in different parts of the journey, but how do folks actually deliver value in the face of risk and build this resilience and learn how to move quickly when there’s so many entrenched stakeholders and just such overwhelming data, too? So maybe you can just give us some insight on that. 

Jim Hayden: 

Yeah, Pierre. We talk to a lot of customers, and it can be overwhelming. To get the most value out of your supply chain risk management solutions, you need the most visibility, the most transparency into your supply chain network. And it is a network, it's an ecosystem. There are millions of relationships, and you don't know about half of them. So any attempt to learn more and more about your network is going to help you deliver more value. Where customers typically start is understanding their suppliers, their direct suppliers, their tier one suppliers. And they do some analysis on financial risk, but that's not enough; that's just scratching the surface.

You need to understand location-centric risk, where their facilities are. That then opens it up to new risk categories: climate-based risk, geopolitical risk, some of the more important ones these days. And so getting to that understanding of the where, not just the who, is important. And you mentioned N-tier, understanding who your suppliers’ suppliers are, all the way down to the raw material. The more comprehensive your view, the more you can monitor for risk, the more you can have a resilient strategy. 

Pierre Mitchell: 

That makes a lot of sense. And actually, the data from the study seems to bear that out. When you look at the difference between the orchestrators of value, the top-quartile performers, and the capabilities they have to deliver in the face of risk, you can see which capabilities are the real differentiators. I think everybody can do some level of supplier management, performance management, relationship management, but how do you get a little more fidelity and granularity, in terms of seeing broader and deeper and earlier and faster, rather than just the basics?

And it’s funny, the biggest differentiator is multi-tier supply chain illumination. And then after that, being able to use that for, let’s say, value stream mapping, which is a great way to actually understand total costs, but also to see your carbon footprint, actually, is to actually look at your lanes and your facilities and all these things that are actually generating the carbon and not just the cost. And then scenario modeling and planning. How do we actually make better business decisions around what we’re going to do, and also have kind of this whole thing around opportunity analytics? Where do we find there’s the most money to chase, but also where’s the most risk and where’s the most revenue risk or profit at risk, not just kind of the spend-based view of the world? 

And so it’s kind of interesting, the differentiators and if you do that, you’ll also have better supply market intelligence insights and also give people the tools they need to manage this better. So I’m going to ask you a couple of questions that I see in terms of some of the challenges that are out there. So one is how do you just roll this out, as you said, maybe start tier one supplier, certain risk types? How do we go beyond just survey-based approaches and just looking at fiscal entities of suppliers and suppliers’ suppliers? 

So there’s that. How do you find the signal in the noise, because there’s just so much overwhelming data? How do you find that insight? How do you find that needle in the haystack? How do you reduce the false positives, all that kind of stuff? Where can AI help in that? And then just lastly, maybe it’s tactical, but I mean, it is important, which is how do you protect clients’ critical IP and intelligence about their network so that that remains protected? Because they’re going to be very guarded about their suppliers and the technology that those suppliers have. So maybe you can just chat a little bit about those topics. 

Jim Hayden: 

Sure. Well, here’s how you don’t roll it out. You don’t go big bang and try to solve everything at once. You first try to understand your suppliers, then you understand where they’re located, where are their facilities? That then opens it up to the geographic risks that you probably weren’t looking for to begin with. Then you start worrying about the sub-tier associated with those direct suppliers. And once you figure that out, once you have a good understanding of who they are, using entity resolution. You mentioned earlier how much of a problem it is, the siloed information inside these enterprises. Multiple systems, they have different supplier IDs, different vendor IDs. They have different addresses, different names for them depending on a billing address versus a physical address. Resolving those takes a lot of work. Then once you know those, you need to overlay the risk. And so what this slide shows is just how many signals there are about potential risk out there. 

And getting to whether this is a story, a social post, a news item about my supplier: once you've resolved your entities and you feel comfortable that you really know who your suppliers are and where they're located, then matching that to the news story and understanding, "Is this a real risk for me?" is a challenge. We use AI in a few different steps here. We look at a million news posts an hour, and we're applying different AI algorithms and models along the way. First, we're trying to understand: is this an entity I care about? So now you've resolved your entities, you know who you care about. Is this post about who I care about? That's named entity recognition, and then entity resolution. Once you've done that, and you think it is an entity of interest to you, whether it's a supplier or a sub-tier supplier or a transportation partner, then you understand the topic the news post is about. That's topic classification, a different AI algorithm.
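
As a hedged illustration of those first two stages, the sketch below runs named entity recognition with a general-purpose spaCy model and zero-shot topic classification with an off-the-shelf transformer. The models, the risk-topic labels, and the screen_post helper are all assumptions for demonstration; a production pipeline handling a million posts an hour would use domain-tuned models.

```python
# Illustrative two-stage screen: NER to spot companies, then topic
# classification. Models and labels are assumptions, not Everstream's stack.
import spacy                      # requires: python -m spacy download en_core_web_sm
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")            # small general-purpose English model
topic_model = pipeline("zero-shot-classification",
                       model="facebook/bart-large-mnli")

RISK_TOPICS = ["factory fire", "labor strike", "bankruptcy", "flood", "export ban"]

def screen_post(text: str, watchlist: set):
    """Return (matched entities, top topic, score) if the post names a tracked company."""
    doc = nlp(text)
    orgs = {ent.text for ent in doc.ents if ent.label_ == "ORG"}
    matches = orgs & watchlist    # full entity resolution would go here
    if not matches:
        return None
    result = topic_model(text, candidate_labels=RISK_TOPICS)
    return matches, result["labels"][0], result["scores"][0]

post = "Workers at Acme Industries walked off the line in Dayton on Tuesday."
print(screen_post(post, watchlist={"Acme Industries"}))
# e.g. ({'Acme Industries'}, 'labor strike', 0.93)
```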

Then you try to understand how severe this incident is, and you get some sentiment analysis, a different algorithm for that. And what we've been using recently is ChatGPT to help summarize all that and add some context. That's been working pretty well, but not well enough that we don't need a human in the loop afterward. Everybody's heard that ChatGPT can hallucinate at times. We want to make sure it's not hallucinating in the stories that we're sending out to our customers. So we have a human in the loop. AI is probability-based; there could be that 1% probability that it got it wrong. So put the human in the loop there. Then once you're confident in it, you understand the context is real, send it to your suppliers, have them validate it, and then take action.
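
Here is a minimal sketch of that summarize-then-review step, assuming the OpenAI Python client and an assumed model name: the LLM drafts the alert, but the draft only lands in a review queue for a human analyst rather than going straight to customers. The queue structure and prompt wording are illustrative.

```python
# Hedged sketch of LLM summarization with a human in the loop. The model
# name, prompt, and queue are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_alert(article_text: str) -> str:
    """Ask the model for a short, factual summary of a risk event."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model illustrates the idea
        messages=[
            {"role": "system",
             "content": "Summarize this supply chain news item in two factual "
                        "sentences. Do not add details absent from the text."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

def publish_with_review(article_text: str, review_queue: list) -> None:
    """Route the draft to a human analyst instead of sending it directly."""
    draft = draft_alert(article_text)
    review_queue.append({"draft": draft, "source": article_text,
                         "status": "pending_review"})

queue: list = []
publish_with_review("Fire reported at a tier-two electronics plant in Penang.", queue)
# An analyst checks queue[0] for hallucinated details before customers see it.
```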

Pierre Mitchell: 

That makes sense. And I think that speaks a little bit to the point about false positives and training the model in a much more focused way. Because I think ChatGPT's awesome, but it's built on general-purpose language models: it sees relationships in the language but not necessarily the knowledge underneath. And that's the knowledge about suppliers, but also the knowledge about the supply chains and the lanes and the facilities, as well as the metadata around the geopolitical entities and these risk types. It seems like that focused domain knowledge improves those domain-specific algorithms and data models, or knowledge models, which shrinks the analysis space and reduces a lot of that noise. Is that a fair statement?

Jim Hayden: 

Yeah, absolutely. The more domain-specific your training sets are, the higher quality models you're going to have. And the more language-centric your models are, the better they'll perform. What a lot of people do today is take a foreign language, translate it to English, and run the English model against it. Well, if you build foreign-language-specific models, you'll get much better results, with fewer false positives, because they're trained on the exact language as opposed to a translation.
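
A small sketch of that point, under assumptions: the model named below is one public multilingual sentiment model chosen purely for illustration, not what Everstream runs. The idea is simply to score the original-language text directly instead of translating it to English first.

```python
# Sketch of scoring non-English text directly with a model trained on the
# source language, avoiding a lossy translate-to-English step. Model name
# is an assumption for illustration.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="nlptown/bert-base-multilingual-uncased-sentiment")

german_post = "Der Streik im Werk dauert nun schon die dritte Woche."
print(sentiment(german_post))
# e.g. [{'label': '2 stars', 'score': 0.41}] -- scored on the original German,
# not on an English translation that may blur idiom and severity.
```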

Pierre Mitchell: 

Yeah, it’s so interesting. It’s like if you look at the precursor to OpenAI and some of the things like in Google, DeepMind, that ability to have all the data vectorized and it basically had kind of a machine language mother tongue that would then translate basically to all of the different spoken languages. You don’t need to have rule-based or language-to-language mappings because it sees those patterns, but there’s kind of a common denominator. And here it’s kind of like, that’s great, but let’s also have a common denominator around the multiple models and ontologies in terms of the actual… And this is actually from a 2016 piece that we did around supply network information models and how we saw a bunch of worlds coming together. And so one was this multi-tier resource model, so think of it as N-tier, almost supply network design, kind of steady state. 

And then two, there’s the supply chain control towers, which are great for time-phased demand and supply, but they had their limitations because it was everything, data, the numbers were all averaged out and it’s all very fixed and you don’t get into the fuzziness. And also just all the things that will affect that flow of time-phased supply to meet demand through the network. And then there was three, all this geospatial and digital twins in terms of actually how do we model the weather and sites and flow of goods and logistics. 

And so there’s that digital twinning to provide more granularity into that physical world. And then all the metadata to augment these core structured data models to make them much richer. And on the feature extraction and pulling all that data out and building out this kind of knowledge graph with these multiple ontologies and taxonomies and things that all connect together versus these very simplistic relational database models that are in some of these different domains. 

And so the thing that was fascinating is that we looked across the entire ecosystem and said, "You know what? The supply chain risk tools actually have the best approach in terms of how richly we can model things across the tiers." The only real downside was not necessarily understanding that time-phased supply and demand, and we saw that as an opportunity to fill in. And I think you guys have done a good job with your logistics control tower expertise with DHL and other networks, so that now we can get into some of the most pressing risk issues, like when is that stuff actually going to hit the dock door of our customers or our facilities, and get predictive and understand the things that are really going to affect you in a very practical way.

I mentioned knowledge graphs, and obviously it's a type of technology and approach, but it certainly seems to be a game changer: anywhere outside-in intelligence is involved, knowledge graphs are showing up. And it certainly seems like a great way to augment some of the large language models, to tie the linguistics into these very focused domains that you can then model and link to some of the traditional applications. So maybe you can chat a little bit about how you guys are using knowledge graphs, or just a little bit about the data pipeline that makes the magic happen?

Jim Hayden: 

Yeah, sure. Here’s an example of the processing we use to help understand the sub-tier network. And you can send out surveys, you’re going to get limited responses and that data gets stale. So you need to find alternative data sources. We’re a big data company, we can handle lots of data. So find alternative data sources that can help tease out these relationships. And what we’re trying to get to is who’s trading with who, where are they trading, and what’s being traded? And that way we can identify not just the trading relationships, but the trade flows, the lanes they use, and on and on and on. So we start with lots of data. Import-export records are a great way to find out who’s trading with who. We have tens of billions of import-export records. That’s dirty data, it’s customs data entry, it’s different languages. And getting to the entity resolution to match them to your suppliers is not an easy task. 

We use knowledge graphs, and we use graph database technology and algorithms to do that. A few years ago it was just: does this name of the company look like this name, and does this address look like this address? Now you create vectors in your graphs, and they can capture things like: how frequently do they trade? What are they trading? Who else do they trade with? Oh, that looks a lot like this company; they must be the same entity. That entity resolution is key to success in risk management. Once you do that, you know something about those relationships: what they're trading, how frequently they trade, how recently they've traded. You can tell if it's a new relationship or an ongoing relationship, and how strong that relationship is. That can be an indicator of risk as well.
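
To illustrate one such graph signal, the sketch below builds a tiny shipment graph with networkx and computes the overlap of trading partners between two differently spelled supplier records; high overlap is one clue that they are the same entity. The names, fields, and single Jaccard feature are illustrative; a real system combines many such features into the vectors Jim describes.

```python
# Illustrative graph-based entity resolution feature: records that trade
# the same goods with the same partners look like one entity. Names and
# fields are invented for the example.
import networkx as nx

G = nx.MultiDiGraph()  # shipper -> consignee, one edge per shipment record
shipments = [
    ("Acme Industries", "Globex Corp", {"hs_code": "8542", "date": "2023-06-01"}),
    ("ACME Ind. Inc.",  "Globex Corp", {"hs_code": "8542", "date": "2023-07-15"}),
    ("Acme Industries", "Initech",     {"hs_code": "8542", "date": "2023-08-02"}),
]
for shipper, consignee, attrs in shipments:
    G.add_edge(shipper, consignee, **attrs)

def partner_overlap(a: str, b: str) -> float:
    """Jaccard overlap of trading partners: a crude signal that two
    differently spelled records may be the same real-world company."""
    pa, pb = set(G.successors(a)), set(G.successors(b))
    return len(pa & pb) / len(pa | pb) if pa | pb else 0.0

print(partner_overlap("Acme Industries", "ACME Ind. Inc."))  # 0.5
```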

And with that processing, we generate a knowledge graph that represents, as best we can, the entire ecosystem of global trading. Then our customers tell us who their suppliers are, and we carve out of that knowledge graph those suppliers and the sub-tier in their value chain, because that's what they care about. I don't care about all the relationships my suppliers have. I care about the ones in my value chain, the ones that are contributing to what I'm getting from my tier one supplier.
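
A minimal sketch of that carve-out, assuming a networkx digraph where an edge means "supplies": everything upstream of the customer is kept, and branches serving other buyers fall away. Company names are invented.

```python
# Sketch of carving one customer's value chain out of a global trade graph
# by walking upstream from the customer node. All names are hypothetical.
import networkx as nx

trade = nx.DiGraph()  # edge u -> v means "u supplies v"
trade.add_edges_from([
    ("CobaltMine", "CathodeCo"), ("CathodeCo", "CellMaker"),
    ("CellMaker", "PackAssembler"), ("PackAssembler", "AutoOEM"),
    ("CellMaker", "PowerToolCo"),   # a branch outside our value chain
])

def value_chain(graph: nx.DiGraph, customer: str) -> nx.DiGraph:
    """Subgraph of everything upstream of the customer, plus the customer."""
    upstream = nx.ancestors(graph, customer) | {customer}
    return graph.subgraph(upstream)

chain = value_chain(trade, "AutoOEM")
print(sorted(chain.nodes()))
# ['AutoOEM', 'CathodeCo', 'CellMaker', 'CobaltMine', 'PackAssembler']
# 'PowerToolCo' is excluded: a relationship our suppliers have, but not ours.
```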

Pierre Mitchell: 

Yeah, that’s really interesting. And it speaks a little bit to the walk, crawl, run, or don’t boil the ocean. Because it’s like on the procurement side, spend analytics are great. It’s great to see the forensic analysis like the exhaust on the car. It’s good to know where you’re burning some oil and et cetera. But it’s good to have a little bit broader telemetry of where are you, where are trying to go, what can hurt you? Who else is trying to go there? What’s coming up from the rearview mirror? And getting this, like you have the 360 views in a car, to have that view. 

And actually to let you know, by the way, there's danger coming; you might want to pay attention to that. But you've got to start somewhere, and I guess that's where we can finish this piece: getting down into the actual implementations. So as we think about how to use this insight to drive value, and not just protect value through the lens of supply chain risk insights, what's in it for all those stakeholders?

Any insights around how you start, maybe with a certain named pain point, maybe the risk du jour that's of interest? And rather than just playing risk whack-a-mole and shifting from one project to another based on different risk types as you react, how do you start to get folks aligned? Right now there's a lot of interest in at least aligning our outbound face to the supplier around all the things we expect from them, and trying to get some visibility from them. And as you said, that's maybe just a survey-based approach that you'd like to augment with broader intelligence. But any insight on a typical client implementing this? You have a case study I'll go to, but maybe just talk a little more about any implementation patterns in terms of how you do this with clients; that would be really useful.

Jim Hayden: 

Customers are always asking us, "Where do we start? I'm going to be overwhelmed: if I have 100 suppliers, I've got 1,000 tier twos and 10,000 to a million tier threes. I can't consume that, so where do I start?" We tell them it's one of a few things. Start with the products that you derive the most revenue from; that one sounds pretty straightforward, but yes, start there. Start with the suppliers where you have the most spend. Start with the ones that are sole source, because if something happens there, you're in a lot bigger trouble. Or start at the bottom of the knowledge graph: tell us the commodities that are precious to your ability to manufacture your goods, start there, and work your way up the knowledge graph to see which tier ones are really impacted if something were to happen to those commodities or raw materials.

And so there’s a bunch of different ways to start, but it’s prioritization based on impact. In order to get to impact, you need to understand a few different metrics, like revenue associated with it, potential customer dissatisfaction. The network is an end-to-end network. It’s all the way from the raw material to the shelf. And anything happens along the way, it’s bad. You don’t get your revenue. 

This example here talks about how important it is to know about things sooner rather than later. And we talked about how to operationalize this. You take different action if there's a disruption at your tier one than if there's a disruption in your sub-tier. If there's a one-week or two-week strike at your tier one, that probably matters to you. If it's at a tier three, you probably don't care; that'll be absorbed by the tier two and then the tier one. If there's an explosion at your tier one, that's really bad, and you need to find alternative sourcing. If there's an explosion at a tier three, then your action is to go to the tier one and order ahead as much as you can.
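
Those examples translate almost directly into a playbook lookup. The sketch below encodes only the cases Jim names, with invented wording for the actions; a real playbook library would be far richer and owned by the response teams.

```python
# Tier-aware playbook sketch: the same event type maps to different actions
# depending on where it occurs in the network. Cases follow the examples in
# the talk; the action strings are illustrative.
def playbook(event: str, tier: int, duration_weeks: float = 0) -> str:
    if event == "strike":
        if tier == 1 and duration_weeks >= 1:
            return "Engage the tier one now; assess alternative capacity."
        return "Monitor; short sub-tier strikes are usually absorbed upstream."
    if event == "explosion":
        if tier == 1:
            return "Activate alternative sourcing immediately."
        return "Order ahead with your tier one before the shortage propagates."
    return "No playbook defined; route to an analyst."

print(playbook("strike", tier=3, duration_weeks=2))  # monitor
print(playbook("explosion", tier=3))                 # pre-order via tier one
```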

When COVID hit, that was the microchip land grab: who could get the most first, because everyone knew it was going to be a big outage. And this case study talks about that. The sooner you understand where the problem is and the potential impact on your tier ones and their ability to deliver, the sooner you can take action and not get hurt by what's going on in your N-tier.

Pierre Mitchell: 

That’s really interesting. And I like how you’re talking about how you prioritize. It’s funny, I mean, I think a lot of folks are just familiar with the basic risk framework, or the Mollycoddler or whatever, probability of the event, and then the level of impact. And quite often, there’s not a lot of granularity in actually understanding that by category or tying that to the product level and having a holistic view of it. But it’s funny, if you go back to 1983 and category management 101 and the HBR, the seminal two by two of impact, which is not just spend, but quantified business impact, and complexity, complexity of the category in the supply market. Well, you don’t necessarily understand complexity or the dynamics of it unless you can really have some data because just saying, “Well, it’s always been this way and there’s always been best practice,” no, it’s not data informed. 

So the way to bring those worlds together is to actually have some data around it. And once you have a little more fidelity, then you can say, all right, where am I at risk? It was funny, I was at a CAPS Research event, and there was a large auto OEM that had the same kind of situation, except it was more around what was happening in Russia and their aluminum buys. And they quickly saw what was happening; for them it wasn't semiconductors, it was steel. Everyone's going to have their own view into it. But if you don't have, first of all, the early signals, then the playbooks to know what to do with them, and then the deeper intelligence to cross-functionally manage it, I think it's going to be a challenge for folks. Yeah. Anyway, I think… Go ahead, please.

Jim Hayden: 

Yeah, that’s a great point. And since deep visibility into your sub-tier is new, these playbooks are new, too. And so often we see customers asking us, “What’s a typical playbook for this type of incident in the sub-tier?” And that’s an area that people need to pay attention to. In the history of analytics, it’s easy to show someone an insight. If they don’t take action on it, that was not a valuable insight. And so you need to understand the action you plan on taking and whether that was successful or not. 

Pierre Mitchell: 

And that’s such a good point, too, around you see a lot of that with AI or just prescriptive analytics. And it’s like, yeah, you can generate tons of prescriptions, but if the humans are overwhelmed with the prescriptions, they just get numb to it and it’s kind of worse. It’s like a type two error versus a type one. We start to ignore the alerts that we’re getting. It was funny, I think it was a webcast with Unilever and they were like, “We got 10,000 prescriptions from the analytics, but we acted on 100 of them because, honestly, our people were just so overwhelmed.” 

So the more the technology can help the humans understand what's truly important, and peg that back to the impact and quantify it, I guess that's probably one of the real pieces of value developed here. So anyway, we only have a few minutes left. Let's see if there are any questions from the audience. Lauren, I'm going to pass it back to you, since you're monitoring the chat.

Lauren McKinley: 

Sure. Yes, we did have a few come in. Hopefully we'll get to at least one or two, and if there are any others, we will make sure we reply after the session. So thank you for the content today. There was a question about the stale data you mentioned earlier, Jim. When you're mapping your digital network, how can you be sure that the information and the updates and alerts you're getting are not stale and are relevant to your users?

Jim Hayden: 

On the not-stale part: part of the analytic you deliver when you're showing relationships between two companies is how recent the activity has been. And you need signals that show you this is recent activity. There could be mitigating factors, like an embargo: they were trading a lot, then there was an embargo, and now they're not trading at all. You need to incorporate that into your model as well. And then when the embargo is dropped, you need to look for signals that they've picked that relationship up again. So it's not that easy, but there are ways to solve it.
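
One simple way to model that, offered as an illustrative assumption rather than Everstream's actual analytic: decay relationship strength with time since the last observed trade, and override it while an embargo is in force. The half-life is an arbitrary choice.

```python
# Recency signal with an embargo override: strength decays with days since
# the last shipment; an active embargo suspends the signal entirely.
# The 180-day half-life is an illustrative assumption.
import math
from datetime import date

def relationship_strength(last_traded: date, today: date,
                          embargoed: bool, half_life_days: float = 180) -> float:
    """Exponential decay on days since last trade; 1.0 = traded today."""
    if embargoed:
        return 0.0  # suspend the signal rather than let it decay slowly
    age = (today - last_traded).days
    return math.exp(-math.log(2) * age / half_life_days)

today = date(2023, 9, 21)
print(round(relationship_strength(date(2023, 8, 21), today, False), 2))  # ~0.89
print(round(relationship_strength(date(2023, 3, 21), today, False), 2))  # ~0.49
print(relationship_strength(date(2023, 8, 21), today, True))             # 0.0
```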

Pierre Mitchell: 

So it sounds like explainability, too: knowing where the data comes from, so that you can say, "Well, okay, that came from some trading data from six months ago," or, "No, this was actually very recent where we picked this up." Having visibility into the metadata, the data about the data, can also help you understand what to do with it: to estimate how stale it might be, but also to peg it back to the actual source data from which the insights were created. And that's where, with ChatGPT and all that stuff, if there's no explainability, that's just not a good place to be, both in case something goes wrong and so that you can make the algorithm better and more and more relevant.

Jim Hayden: 

And we actually let our customers filter on a confidence score. So they can say, "Only show me the things you're really confident in." Or, "I might want to see more; open that up a little bit. Maybe there'll be a few more false positives, but it'll be worth it."
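
That filter is conceptually tiny, as this sketch with an invented alert structure shows: the customer-tunable threshold trades recall against false positives.

```python
# Customer-tunable confidence filter: lower thresholds surface more alerts
# (and more false positives); higher ones show only the surest matches.
# The alert records are invented for illustration.
alerts = [
    {"headline": "Strike at CellMaker plant", "confidence": 0.95},
    {"headline": "Possible flood near port",  "confidence": 0.62},
    {"headline": "Rumored ownership change",  "confidence": 0.41},
]

def visible(items: list, threshold: float) -> list:
    return [a["headline"] for a in items if a["confidence"] >= threshold]

print(visible(alerts, 0.9))  # conservative: 1 alert, few false positives
print(visible(alerts, 0.4))  # opened up: all 3, worth some noise to some users
```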

Pierre Mitchell: 

You've got to know what you don't know.

Jim Hayden: 

Exactly. 

Pierre Mitchell: 

All right, Lauren. Any others? I’m just looking at the time. We want to be respectful of it. 

Lauren McKinley: 

Great. Yeah, we did get a couple of other questions, but to be respectful of time, we will conclude the live session today. We got some questions around data security, which we'll answer in a follow-up, on how Everstream treats data security related to AI, and a couple of others about visibility. So thank you so much. We have our presenters' contact information here. We will send a follow-up to this session with a link to the recording, answers to the couple of questions we did not have time to get to, and information on how you can get in touch with our team to connect and learn more about big data, AI, and visibility. So with that, we will conclude the session today. Thank you again for the time, and have a great day. Thank you, Pierre and Jim.

Jim Hayden: 

Thank you. 

Pierre Mitchell: 

Thanks, everyone. Thanks, Jim. 

Jim Hayden: 

Bye-bye. 

 
