Reltio Connect

2023 Emerging Trends in Data Management

By Chris Detzel posted 12-09-2022 12:00

  

Current trends in data management, such as AI & ML for data management automation, data fabric and data mesh architectures, real-time integration, and multi-cloud, will impact your MDM practice in 2023 and beyond.

In this session, we discuss where these trends can have the most significant impact, what you can do to take advantage of what they offer, and, most importantly, what to avoid and why. For questions around data trends, check out the Reltio Community:
https://community.reltio.com/home


Transcript

Chris Detzel:

Welcome everyone to another Reltio Community Show. Today's topic is emerging trends in data management. It's a little bit different, but I thought it would be really fun to do. Chris Ilof, a director of strategic services here at Reltio, will be giving today's show. And Chris, I really appreciate you doing this. It's really exciting.

Chris Ilof:

Happy to, thanks for having me here.

Chris Detzel:

So the rules of the show, which I think most people know, but just in case you don't: please keep yourself on mute, or if you do have a question, you can take yourself off mute and/or ask it in chat. If you ask it in chat, that's great; I'll make sure, or Chris Ilof will make sure, to answer those. By the way, you have two Chrises today, so a lot of people call me by my last name.

So this is being recorded and it will be posted to the Reltio Community; my hope is tomorrow, or by Monday for sure. Just as an FYI, we have two events: one event today and then one next week, and next week's show will be quite a bit more technical in nature. That will be our last one until the end of the year; we'll start these up again probably mid-January. So I'm going to stop there and let Chris go. And Chris, if you just tell me when to change slides, that'd be great.

Chris Ilof:

Sure, sure. I appreciate it. Thanks, Chris. I have just the one screen, so I'm a little limited today being on the road, but good morning everybody. I hope you're all having a great week so far and that everybody's happy and healthy out there. Today we're going to talk about a pretty high-level topic around some of the emerging trends that we see in data management, and share some of our thoughts from a Reltio perspective on these trends in general, how they can impact your data management practice, and what the implications are for master data management. Feel free to ask questions as we go; happy to answer any and all. We're not going to get into technical deep dives and debates on some of these topics, we just don't have the time. But if you'd like us to do a deep dive on any one of these topics, please just let us know, get us that feedback, and we'll schedule something where we can get into some nitty-gritty things and some specifics.

Four topics; we couldn't pick them all. Obviously we're in an industry where there tends to be buzzy tech and there tend to be trends, but these are ones that seem a little more durable, I would say, or things that we've already been talking about for quite some time and are just seeing really get woven into our day-to-day more effectively. So we're going to talk about AI automation and a little bit about machine learning; data virtualization, specifically what we see around data distribution and the different architecture, process, and people frameworks that are starting to be built and put in place today; real-time integration, which is a pretty cool one I'd love to talk about; and multi-cloud strategy. Now, this isn't to say that these are the top four, but these are four of the really interesting things going on that we know are impacting the data management space in today's world.

So let's move on and we'll talk about AI automation. All our slides are going to look like this, so we'll talk real quick. I'm pretty sure everybody understands what AI automation is and means, but this is where we're applying advanced analytics and machine learning to the technical solutions that we build in order to automate business decisions, business processes, or even critical actions for us around data. What I find really interesting about AI is an analogy: my youngest son is a chemistry graduate, so he is a chemist and loves looking at ingredients anytime we bring products in the house. We have these powder hydration packs, and it's funny because they're labeled as non-GMO. Thinking about AI across the board, it seems much like that non-GMO label on a package of salts, which is weird and doesn't apply.

We see this too with AI. We see labels, we see it branded on many different software packages and across different things, so it's everywhere these days. And it has this concept, much like the Frank's RedHot commercials, if any of you have seen those: AI, you put it on everything. So we're seeing it a lot, we're seeing it out there all the time, but what does it mean? In general, we see a lot of benefits around using it to automate, hence we dialed AI in to be specific to automation here, and we see it automating the transformation aspects of ETL and ELT in data management. For any of the transformations that we're making, we're seeing people adapt their ETL streams or other data integrations that are doing data transformations to include AI and machine learning, specifically around data quality initiatives.

So that's where you're automating cleansing and standardization and things of the like. And we also see it in data stewardship activities. One of the big things around data quality and what we do is we have data stewards: people manually making sure that our data has high quality. They're doing checks, they're updating, and these are things that we see people starting to use AI for, to improve data quality but do it in an automated fashion and reduce the load on a lot of the data stewards that are out there.
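
To make that concrete, here's a minimal sketch of automated cleansing, standardization, and flagging using plain pandas. The column names and rules are hypothetical, and this illustrates the general pattern, not how any particular MDM product implements it.

```python
# Illustrative sketch only: automated cleansing, standardization, and issue
# flagging over a pandas DataFrame of customer records. Columns and rules
# are hypothetical.
import re
import pandas as pd

def standardize_phone(raw):
    """Strip formatting so '+1 (555) 010-1234' becomes '15550101234'."""
    return re.sub(r"\D", "", raw or "")

def check_quality(df):
    """Standardize in place and flag rows a steward would otherwise review."""
    df = df.copy()
    df["phone"] = df["phone"].map(standardize_phone)
    df["country"] = df["country"].str.strip().str.upper()
    # Flag rather than drop: automation (or a steward) resolves flagged rows.
    df["dq_invalid_email"] = ~df["email"].fillna("").str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
    df["dq_missing_phone"] = df["phone"].str.len() < 10
    return df

records = pd.DataFrame({
    "phone": ["+1 (555) 010-1234", "555-0100"],
    "email": ["ana@example.com", "not-an-email"],
    "country": [" us", "US "],
})
print(check_quality(records))
```

The design choice worth noting is flagging instead of rejecting: flagged rows can feed either an automated resolution step or a steward's queue, which is exactly the load reduction described above.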

And again, similar to the data quality focus, if we look at it from another lens, being able to automate some of these things that help create better data quality is a great and sometimes easier way to activate policies from data governance. That can be really, really challenging, because it requires a combination of things; it requires a lot of conversation with different data stakeholders to create policies.

And then you've got to figure out all the places where you need to plug that in. AI is a great way to create some of that automation and enforce those policies when it might otherwise be a little more difficult to do. We also see a big focus on producing decentralized data products, and AI is helping to make some of those products a little more durable, a little more robust. Meaning, when we have AI embedded into data products, they're able to manage data change. As an organization's data evolves, as it naturally does, as new sources come in or activities like mergers and acquisitions change the data significantly, having AI embedded as part of these data products can really help the data product maintain its efficacy across data change. So data changes, but it doesn't ruin the insights or other aspects of the data products that you're producing. Those are things you really want to be considerate of.

And these considerations just mean, "Hey, let's just think about this. Keep this in your mind." I say here, "Don't let the hype exceed your abilities." AI and ML are very, very distinct skill sets, and not all organizations have this skill set within their teams or the ability to hire it out. So before we make decisions to design solutions with AI and machine learning embedded, we need to make sure that we actually have the ability to maintain and manage that over time as well, and that we have the skill sets and the expertise in-house to do that.

Next is really alignment, and I touched on data governance just a little bit, but we do see organizations with an AI strategy, where your enterprise architects and your solution architects actually do have an idea of where it makes sense to use AI and machine learning, and they have some standards and some guidance around that.

And oftentimes that can also be aligned to, or needs to be aligned to, data governance and any of the policies that you're trying to inform there. So making sure that those two are aligned is a way to make sure the path is clear for you to actually have a successful implementation of anything with some AI or machine learning in it. And lastly, back to the Frank's RedHot thing: it's not a panacea. When you go out there on the web you see so much machine learning and AI, and it is very, very exciting technology, but it can't do everything and it isn't the solution for everything. It's not the hammer that pounds in screws, type of thing.

So just some things there to be considerate of. Again, those three or four bullets are really about your organization's ability to manage and maintain components with AI in them. And then we can talk about the MDM implications now, on the right-hand panel. So this is great: yeah, we know this, we get this about AI automation, but how would I apply AI automation to MDM to make any types of improvements, and where would that fit? You see on the right-hand side four of six highlighted; those are the general high-level capabilities of an MDM application or system.

When it comes to MDM, some of the obvious ones that I mentioned just a moment ago are data quality management and data governance. Again, this is where we see our ability to implement validation rule sets, plus the standardization, cleansing, and enrichment types of activities that happen as part of a robust data quality management platform.

We can do that with AI now, so that we're detecting things and resolving them before they ever create an issue. And that's fantastic, and you can do it without flagging the data and waiting for a human to correct it, which is amazing. Next, we have entity resolution and relationship management. These two capabilities are affected, or can be affected, by AI automation in similar ways. When we talk about entity resolution, we're talking about matching, right? Matching and merging. And these match rule sets, you can make them as simple or as complex as you really need to; it really depends on your data.

The advantage that AI and machine learning bring is that, if you create the right match algorithm using machine learning and AI to make match decisions for you, that machine learning can actually do a better job of keeping up with the data changes over time that might make you want to augment or adjust your match rules. Because as data changes, so too does the need for the match rules: the population of the set changes, and even that can affect the precision of your match rules.

When we talk about relationship management, it's pretty much the same. If you think of relationship management the way we do at Reltio, it's really reconciling relationships the way you reconcile entities, except you're resolving relationships between entities, or building groups of entities, not just resolving the entity itself. In a similar way, when you build out rule sets that are attaching entities and creating those relationships, you can automate those with AI too. And for the very same reasons you do it for entity resolution, you do it for relationship management: to make sure that these fantastic insights around your business stay up to par, stay accurate over time, as all your data changes.
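
As a rough illustration of why learned matching holds up better as data drifts, here's a toy match scorer. It stands in for a real ML model using difflib string similarity and hand-set weights; the attributes, weights, and threshold are all assumptions for the example, not Reltio's Match IQ logic.

```python
# Toy sketch of score-based entity matching. A real ML matcher learns the
# attribute weights and threshold from labeled match/no-match pairs and
# re-learns them as the data population shifts; here they are hand-set.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1], standing in for a trained feature."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Blend per-attribute similarities into one match score in [0, 1]."""
    name_sim = similarity(rec_a["name"], rec_b["name"])
    addr_sim = similarity(rec_a["address"], rec_b["address"])
    # Illustrative weights; a learned model adapts these as data changes.
    return 0.6 * name_sim + 0.4 * addr_sim

a = {"name": "Jon Smith", "address": "12 Main St"}
b = {"name": "Jonathan Smith", "address": "12 Main Street"}
# Pairs above the threshold become merge candidates.
print(match_score(a, b) > 0.8)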

And we can go ahead and go to the next if there's not a lot of questions. All right.

Chris Detzel:

Yeah, no questions just at the moment.

Chris Ilof:

All right, so let's move on to data virtualization. My machine's a little slow, sorry about that, Chris. So data virtualization, this is a term that we've used in the past. Some of you may have been to the webinar that Alex did back in, I think that was October, Chris, around data virtualization, where we talked about data fabric and data mesh. Yeah, that's right, that was a cool one. So when we talk about data virtualization, it's really applying distributed data management patterns or architectures to your organization. You hear buzzwords like democratizing data, and that's what we're talking about here. We're not going to get into debating fabric versus mesh and which is better; that's been going on for the last few years or so, and we'll let that figure itself out.

But again, our belief is that you can gain the advantages of both, and it's advantageous to MDM as well. Either approach gives you great benefits: they help remove the focus on these monolithic data stores and the solutions and processes that align to them. Anytime we can move away from that, and towards aligning our people, process, and technology to providing data and data products that really meet the different contexts that our businesses and our internal business partners need, that's going to be ideal. The closer we can get to developing data products and data assets that our business needs and wants, the better off we all are.

Chris Detzel:

Hey, Chris. Yeah, before you get deep into this, we actually have a question about AI/ML. How do you make sure that the matching done by AI and/or ML is accurate, and how do you identify what logic the AI/ML used when it matched entities? Is that something that you can shed some light on?

Chris Ilof:

Yeah, I mean, I can speak to it from more of a Reltio perspective. We built Match IQ; that is our AI matching, or ML matching, piece. And again, it's really about tuning your machine learning logic. Start with how you match today, right? Start with that, and then an understanding, of course, of what your data really looks like, where it has gaps, where your match today might not be meeting that.

Now, how you do that really depends. With Reltio, we have tool sets that let you analyze and understand the result sets of your match, so it is a little bit easier than the typical manual process of analyzing matches and potential matches. Having visibility into data sources and how they align to master profiles, or golden records, is key in that as well, because you really need to verify that those are the results you want to see from match.

And it really is just about working through that machine learning and then comparing it to the result sets that you're getting today, to understand if you're getting an advantage out of it with your data, and managing it over time. Again, with Reltio, we have tool sets that let you understand and manage your match rules and analyze the outcomes from them. Using the match analyzer, or external match, is a good way to do it without ingesting data and taking time to do that. So taking advantage of some of the tool sets that allow you to develop match rules is really also the way to double-check the outcome from any automated type of matching.
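
One way to picture that double-checking step, as a hedged sketch with made-up record IDs: treat the current rules' merge decisions and the ML matcher's decisions as sets of pairs, measure how much they agree, and review the disagreements.

```python
# Hypothetical comparison of today's rule-based match decisions against an
# ML matcher's output. IDs are invented; the point is measuring agreement
# before trusting automation.
rule_matches = {("101", "340"), ("102", "351"), ("103", "360")}
ml_matches   = {("101", "340"), ("103", "360"), ("104", "371")}

agreed    = rule_matches & ml_matches   # both approaches merge these pairs
ml_only   = ml_matches - rule_matches   # new links: found matches or noise?
rule_only = rule_matches - ml_matches   # missed links: tune or retrain

print(f"agreement: {len(agreed) / len(rule_matches | ml_matches):.0%}")
for pair in sorted(ml_only | rule_only):
    print("review:", pair)
```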

Chris Detzel:

And I think, not to give anything away here, but I do think Michael Burke is working on some stuff within AI and ML here at Reltio on the matching piece. More to come on that, but there's definitely going to be some really cool stuff around it.

Chris Ilof:

Yeah, absolutely. And on the data quality piece, too. So yeah, hopefully Michael gets a chance to show that to you all as well, around how we're using AI and ML in Reltio and building that out to automate data quality management.

Chris Detzel:

We'll definitely be getting to that in the next several months. Cool. Thanks, Chris.

Chris Ilof:

No problem, great questions. With data virtualization, we'll jump back to that real quick. Again, the way we see it is that it's this more distributed form of data that tends to align better to the physical data assets that customers need, and it allows that level of contextual expertise to start to exist right within the data management teams. And that's really a key aspect of it.

So as you're considering what you might do with data fabric or data mesh, and these are huge buzzwords in the industry right now, we do know that it expedites delivery of those data assets. That's because what tends to evolve out of either one of these approaches is a deeper understanding of the business context within the data; rather than just understanding the data movement aspects and data modeling and things like that, the data management team gets just a little closer to the meaning of the data.

As we say, it improves data discovery, because there's a big focus on metadata in the data fabric piece, and that metadata drives what the data is, where it needs to go, and what you should use. It can drive a lot of that data discovery because it carries a level of context that isn't necessarily readily available otherwise. And like I said, it aligns data ownership as well. I think we've all been in situations, those of us who've been in data management for a while, especially in organizations where we've seen the trope: the business owns the data as long as it's in the application, but once it hits the supply chain to be monetized and to create new and different value for the organization, there's a problem. And so we see data governance, and even data stewardship actions, struggle with this political pull of who really owns the data.

You've got IT always saying the business owns the data, the business owns the data. And then a lot of times when you run into issues with the data outside the app space, the business is saying, well, IT owns the data. So it adds a lot of challenges to actually having very robust governance and stewardship programs. Having this context and being contextually aware of the data really helps align both the business and IT around that ownership and soften some of those edges. It makes it a little more clear that, in a way, we both own the data.

The things you should really consider around data virtualization are mostly about how it aligns to your org; that's what most of these bullets are about. So when we talk about federated or centralized governance: traditionally you have an affinity towards one or the other in how you've built your data governance programs, and if they're very mature, you should be able to understand whether there's an alignment or affinity to having more distributed data governance, where different teams, usually specific to business domains or even data domains, have a focus on and an ownership of the data within that domain.

We see that in places, but we also see wholly centralized teams, where there will just be one team owning it all from a governance perspective. We still see both of these approaches. Obviously, a federated governance approach will position you much more easily to implement some of the great things about data mesh and data fabric, and it'll be a bit of a longer road to do that should you have a centralized governance approach. What you want to look at there: if you're centralized today, it may actually be easier for you to start with some of the data fabric concepts, because those concepts tend to be more about architecture and tools, not necessarily the people and the process. Data mesh, on the other hand, is really about changing hearts and minds, changing culture, because it's very people- and process-oriented.

And we all know that those things take a long time to implement. So understand where you're at there, and that can help guide where you might take some of the ideas and concepts of data virtualization within your organization. That also has a little to do with your organization's ability to adapt, and I mean that from both perspectives: the culture, people, and process piece of it, but also the technology. Do you have the technology in place? Are you going down the road of democratization of data already, having made the investments to make that happen?

Or is that a sea change for you from a technology perspective within your organization? Make sure you understand where you're at so you know how much it will take to adapt to a data virtualization type strategy. And lastly, for the considerations from an MDM perspective: it's foundational, and that's why you see all of the capabilities turned on for this one. MDM is really a foundational piece to either one of these strategies, because in creating data products, or even in democratizing the data, that data is your core data.

It runs your enterprise, it needs to be accurate, it needs to be unified, and MDM does that for you: unifying the data and bringing it to action, really. Data virtualization isn't necessarily something that changes with MDM, or changes MDM, I would say, but MDM is definitely a part of that overall implementation and solution, from the tech stack and really from the thought stack, if you will, of people understanding MDM and what it can provide for your organization. Did we have any questions on data virtualization, mesh, fabric?

Chris Detzel:

I don't see any questions as of yet.

Chris Ilof:

Let's move on. We'll do real-time integration. This is one of my favorite ones to talk about, honestly, because we don't often get into the nitty gritty here, and there's really an amazing amount of value when we talk about the aspects of real-time integration. The way we view it here at Reltio is that it's how you're connecting the data, how you're connecting technology assets. It could be applications, it could be data stores, it could be platforms, but it's all about distributing that data. And the real-time part is about doing it in a continuous manner.

So it's not batch; it's removing data latency, because you're not waiting for data to bundle up so you can batch it and then do something with it. It provides you the opportunity to really accelerate that data distribution. Meaning, as data changes, even from an operational perspective: you're on the phone with customers, data updates get made, those updates flow in through MDM and through the data supply chain in real time, and then back to the business applications that need the new, corrected information. This is kind of one of the dreams we had back in the early 2000s around data management, and specifically around MDM being more real-time in supporting operational use cases.

I say it enables data duality. What I mean by that is there's an aspect here around being able to have real-time pipelines of data, real-time connectivity between apps and data management utilities. It allows us to not replicate data to suit both analytical and operational use cases. We can actually have the same set of data, we don't need to replicate it, and we can use real-time integration connections to feed the data to those disparate use cases, whether it's the analytics type of use case or the operational type. It's actually made operational data management, operational MDM, and integration into operations much more viable. If we do this, we connect directly; we don't have to build these constructs of operational data stores to serve data to applications in a timely manner. And with that-
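
A minimal sketch of that "one pipe, two use cases" idea, assuming an Apache Kafka change stream via the kafka-python package. The topic name, event fields, and sink functions are hypothetical stand-ins, not a Reltio integration.

```python
# Hedged sketch: one continuous change stream serving both the operational
# and analytical paths, with no batch window and no replicated copy.
import json
from kafka import KafkaConsumer

def update_business_app(entity_id, attributes):
    """Stand-in for the operational path: push the change back to the app."""
    print("app update:", entity_id, attributes)

def append_to_warehouse(event):
    """Stand-in for the analytical path: land the same event downstream."""
    print("warehouse append:", event["entity_id"])

consumer = KafkaConsumer(
    "entity-changes",                    # hypothetical change-event topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

# Each change flows to both consumers as it happens; no bundling, no latency
# window, and the same event feeds analytics and operations alike.
for record in consumer:
    change = record.value
    update_business_app(change["entity_id"], change["attributes"])
    append_to_warehouse(change)
```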

Chris Detzel:

Hey, Chris?

Chris Ilof:

Yeah?

Chris Detzel:

Any guidance on real-time integration and synchronization, since matching is asynchronous in Reltio?

Chris Ilof:

From an MDM perspective, there's the asynchronous type of matching, or processing of data, for entity reconciliation or entity resolution. The real-time piece here, and quite honestly I prefer a term more like on demand, not batch, but yes, the ability to do matching on the fly, if you will, as we call it with Reltio, from APIs and from other aspects of integration, is there, such that you can plug in operationally and be mastering data as you're acting on data through an operational process or program.

Chris Detzel:

Thanks, Chris.

Chris Ilof:

And Mark, is that Mark Burlock?

Chris Detzel:

Yep.

Chris Ilof:

Okay. And then: is the real-time integration via RIH? It can be. We have three forms that let you do real-time integration here in Reltio, and we'll talk through some of the considerations there. They all support high performance and scalability, which is absolutely key when we talk about real-time integration with Reltio. The question here is about RIH, which is Reltio Integration Hub. That integration hub is really a representation of integration platform as a service, iPaaS.

So these are wonderful tools, and when we think about that, yes, RIH is one of those tools that allows for the no-code type of development, which is very key. We've seen that no-code type of development reduce integration construction by 75% with our customers. And this is because, again, you're not building out these monolithic ETL streams for every piece of data that you want to move, for every data asset that you want to move, present, and give to the business or to an application or to another data store.

And that's one of the big advents of real-time integration: these low-code integration platforms and the no-code connectors. The connectors are the ones that come out of the box with more rigid data structures and data models in them, to let you connect to your enterprise's larger applications, like Salesforce and things like that, giving you that automatic connectivity. These are a lot easier to build and manage than our really large ETL streams. And what we're seeing is that with some of the low-code configuration, we can do a lot of the basic data transformations that are being done in those broad and monolithic ETL streams. So those have follow-on impacts when we talk about the total cost of ownership of the data supply chain. A lot of that cost of ownership does come from everything that we write and manage to move data around, which is data integration.

I've seen it multiple times: projects either delayed or even shelved because of the expense of actually creating and managing all these new data assets from the old-school data warehousing perspective, with highly normalized data and all the ETL structure that needs to happen to make that work. It can be very cost-prohibitive to growing your data. Whereas with these quick tools, you can just plug and play a connector, develop a really quick low-code/no-code integration with something like RIH, or use the API-driven architecture in a way that allows you to attach microservices and things like that in a very high-performant manner. All of these are ways you can do real-time integration into your MDM and into the other parts of your data management stack, which is why, on the right-hand side, again, everything's highlighted and lit up.

Obviously, the real-time integration capability is the one that has the most implications here when we look at it from an MDM capability perspective. But in reality, when you have an API-driven architecture behind your SaaS solution, having those APIs be real-time makes anything you can do through the API, anything you can do on that platform, capable of real time. And we are seeing that with customers: not only the data integration pieces, but actual operational integration to support real-time customer interactions at point of sale, things like that. We're actually seeing MDM as part of that technology stack now, not just in the analytics space or in the data warehouse space, and real-time integration using these approaches to solutioning allows that to happen.

Chris Detzel:

Hey Chris, how do you make sure that data quality still meets the data quality standard? Most MDM systems have data quality checks before publishing or releasing data to downstream consumers, so it becomes challenging if you push updates in real time.

Chris Ilof:

So this is where we start to automate, as we talked about a moment ago. We try to do as much of that data quality issue capture and resolution in an automated fashion as possible, so that issues in the real-time data aren't perpetuated downstream. It's about doing the data quality management piece in a more automated fashion, and that's where we start to prevent some of those. Where we can't do that, or where you can't do that yet, it's important that you're flagging issues and that you're able to report them back to the data stakeholders who can best effect the change, to make sure that data quality issues aren't impacting the folks consuming the data downstream.

Chris Detzel:

Yeah, Robin makes a great point. He says, "I see the biggest use for real time as more for prospective matches, rather than matches that we would want to immediately use operationally."

Chris Ilof:

Yeah, we do see that. We've got quite a few customers that are doing real-time search-before-create, and that's similar: you're using match logic to match against the input you're given in your real-time operational process. And it is kind of that prospective match, in that case. It's the know-your-customer scenario: do I know Chris Ilof? No, I don't know him, but if you find him, then tell me about him. And then I don't have to ask Chris 500 questions; I've got the information for him right in front of me.

But we also see customers that do that, and then at the same time, once they validate that information with the customer, they're sending it back to get matched in real time. So they're using that customer engagement, at that point in time, to actually steward the data: cleansing it manually, validating the data with the person that data technically belongs to, and using that in a mastering fashion. So we do see it both ways, Robin. Yeah, you're absolutely right. Real time is very useful for this prospective match, to understand: do I know this company? Do I know this person?
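
Here's a sketch of that search-before-create flow. The endpoints and payloads are invented for illustration, not Reltio's actual API; the pattern is the point: look the person up in real time first, and create only on a miss.

```python
# Hypothetical search-before-create against a generic MDM REST service.
import requests

BASE = "https://mdm.example.com/api"   # hypothetical MDM endpoint

def search_before_create(candidate: dict) -> dict:
    # Ask the hub for match candidates before creating a duplicate.
    resp = requests.post(f"{BASE}/match", json=candidate, timeout=5)
    resp.raise_for_status()
    candidates = resp.json().get("matches", [])
    if candidates:
        return candidates[0]            # known customer: prefill the interaction
    created = requests.post(f"{BASE}/entities", json=candidate, timeout=5)
    created.raise_for_status()
    return created.json()               # new customer: master it on the spot

profile = search_before_create({"name": "Chris Ilof", "phone": "15550101234"})
print(profile)
```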

Chris Detzel:

And by the way, you got a thumbs up for that, so good.

Chris Ilof:

All right. And I just can't harp on this enough: I think real-time integration is something that's not really all that difficult to start to do. Most companies have a service tier and an architecture within the organization that can support real time. And when you've got a product like Reltio that is real-time enabled across all capabilities, it really opens the door to a lot of that.

All right, moving quick. So let's move on to multi-cloud strategy. Multi-cloud we define in a very specific way here, because there are a couple of different interpretations when we say multi-cloud, and we'll talk about both of them. The interpretation we're going with here is really about losing the locked-in, single-cloud affinity: looking at each individual cloud service provider and saying, I like this functionality; this functionality suits my needs and my organization better than the functionality from the others.

So it's about really looking at what the cloud service providers have to offer you from a function and feature perspective, looking across all those providers to select what's considered a best-of-breed approach, and picking the thing that works best for your organization and for what you're trying to do. That's enabled by having a multi-cloud strategy. It also minimizes the risk of disruption, because we see folks not just looking for best-of-breed solutions offered by the cloud service providers; we actually see them building redundancies across different cloud providers. They do this for business continuity reasons, obviously, and to protect their mission-critical systems that need to be up 24/7 or have five-nines type availability. So we do see folks taking that critical functionality and creating redundancies on different clouds.

So they may have a primary function running on AWS, and then they have a redundancy on GCP or on Azure, and really they're doing that from a failover perspective. And then we're seeing organizations really pushing back on being locked in, meaning having affinity to one cloud. Now, not every organization is doing this just yet, but this is a trend we're seeing, where folks are saying, I don't want to do just Azure, or I don't want to do just AWS; I want to be able to pick which parts of each I want, and I don't want to be locked in to one set of solutions and maybe not get advantages that I could have from others. So we do see customers, and we see in the market, that folks are starting to want to avoid that, and it also provides them with some bargaining power, if I'm honest.
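
In code terms, the failover side of this can be as simple as health-checking redundant deployments in priority order; the endpoints below are hypothetical.

```python
# Illustrative failover across redundant deployments on different clouds,
# per the business-continuity point above. Endpoints are hypothetical.
import requests

ENDPOINTS = [
    "https://svc.aws.example.com/health",    # primary deployment on AWS
    "https://svc.gcp.example.com/health",    # redundant deployment on GCP
]

def first_healthy(endpoints):
    """Return the first endpoint that answers; callers route traffic there."""
    for url in endpoints:
        try:
            if requests.get(url, timeout=2).ok:
                return url
        except requests.RequestException:
            continue                         # provider outage: try the next cloud
    raise RuntimeError("no healthy endpoint on any cloud")

active = first_healthy(ENDPOINTS)
print("routing traffic to:", active)
```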

If you've got substitutes of any kind for a function or a feature across these cloud services, then when it comes time to renew your subscription to a cloud service, you've got options, and you can let your cloud service provider know that you've got options and use that in your negotiations as a benefit to your organization. The things you want to look out for with a multi-cloud strategy are really what I just mentioned. Some orgs still just have affinity, and that's fine, absolutely fine. You don't want to go pitching a multi-cloud strategy if there's a hard-line rule that says no, we're doing everything on Azure, and Azure only. So understand what your organization's affinity to one cloud, or to having a multi-cloud strategy, might be. We talked about the business continuity strategy; companies have different strategies around that, and enterprise architects have different ideas of the best ways to ensure business continuity and 24/7 uptime for their critical organizational needs.

And you want to make sure that your vendors, the products that you're picking, are actually multi-cloud. We see many different SaaS applications, they're proliferating greatly, and we do see some that are unique and restricted to one cloud. If your organization is moving towards a multi-cloud type strategy, then you need to make sure that your selection criteria for your software and vendor packages require they support that as well. You don't want to end up buying something that locks you into one cloud, and then you end up moving some of that functionality away from that cloud to another one, because there you go, your utility, your tool, is not going to work. When we think about it from an MDM perspective, and what the implications of multi-cloud are for MDM, again, it's really more associated with that point I just made about vendor selection.

And you should want your MDM platform, or application, however you want to refer to it, to be multi-cloud. This means a couple of things, and you should want both of them. You want the platform itself to be available and to function on any cloud, in case you have cloud affinity towards Azure or GCP or AWS; you want it to operate on all of them, essentially, because you want that flexibility. You don't want to be restricted from changing your cloud service providers going forward. But as well, you want to make sure that your vendors take the best-of-breed approach from their perspective: that they're developing the software to use and consume the best-of-breed features and functions across all clouds for the most optimal solution, and in this case the most optimal MDM platform, for you, the customer. So there are a couple of things you want to look at there that are implications for MDM in a multi-cloud strategy. Any questions there? Looks like we have one from-

Chris Detzel:

Oh yes, one just came in. Other than replication and BCP, are there any reasons one should think of multi-cloud options?

Chris Ilof:

Yeah, it's an opportunity for you; it's the best-of-breed approach. If you're focused and you have single-cloud affinity, it's pretty much like being stuck with Microsoft Office: you're only going to be able to use Microsoft Office tools. When we talk about multi-cloud strategy from the best-of-breed perspective, it's the ability to say, well, I really don't like PowerPoint and I want to use Keynote instead to do slides, or I want to use Excel for spreadsheets and not Google spreadsheets, those types of things. So it's a much more nuanced approach, where you're looking at very specific offerings from the clouds, and you're using each cloud as a bucket of enterprise software that you can select from. You have multiple buckets you can select from to build, across all clouds, much more robust and effective tools for your needs. Does that help enough?

Chris Detzel:

Yes. All right,

Chris Ilof:

Cool. So I did a lot of yakking at folks, and Chris, if you want to go back up to the agenda, we can do a quick summary and then get some questions going here; we have some time left. So we talked about four emerging trends; obviously, there are plenty of others. There's AI automation: AI is all over everything, so make sure you know how and why you're going to use it before you use it.

And go ahead and start automating your data quality management; get some real, true gains there, from both a resource perspective and a total cost of ownership perspective. Automate your match rules; reach out to your CSMs, or me, or Chris, or anybody about Match IQ if that's something you'd like to see and check out, to do machine learning matching for much more precise matching as your data evolves. We talked about data virtualization, so meshes and fabrics, and how this distributed data philosophy is critical. A lot of people are taking aspects of this and trying to implement them where they can in their organizations, and MDM is really a key and critical component of that; it's more foundational than anything.

Chris Detzel:

Do you have any videos or docs on implementation of AI in data quality and matching?

Chris Ilof:

Yeah, I'm sure. I know we have stuff on the doc portal about Match IQ; that is the product name we gave our ML matching piece. We can take a look, Chris, and I can look to see what we have from a video perspective or any-

Chris Detzel:

Yeah, we have one on Match IQ, machine learning based matching. So I just put that in the-

Chris Ilof:

Cool, perfect.

Chris Detzel:

The notes. Bill, take a look at that and let me know if that helps. From a data quality standpoint, I'm not sure we have that much on that, but-

Chris Ilof:

No, we're building it, right?

Chris Detzel:

Yeah, so it's just being built.

Chris Ilof:

Yep. And the cool thing with that is, keep your eyes open for when we announce it; it should be a pretty big announcement, because it's integrated with the DQ dashboard as well, so that we can make recommendations and provide really valuable insights across all your data in MDM from the DQ dashboard. And then it's going to be automating some of the data stewardship and resolution of DQ issues as well.

You'll be able to see that and track that through the dashboard. So some really cool things that Michael Burke is working on over there. Just to recap: real-time integration, right? Again, we can do a lot of the things that ETL can do, cheaper and faster, and you can support both analytic and operational use cases with one set of data and one pipe. And we just wrapped up multi-cloud strategy: looking across all the cloud providers for best-of-breed solutions within their offerings, as well as using it to build full-scale redundancies for business continuity. So with that, what do we have, about 10 minutes, Chris?

Chris Detzel:

Yeah, I mean, if there are no other questions... oh, there is one question. So Reltio Integration Hub is one real-time integration option; what are some of the others?

Chris Ilof:

So we've also got connectors, and the way we've rolled out our coverage, the connectors can all be real-time. You should consider RIH for when you need to do some customization. The out-of-the-box connector, like the Salesforce or the Snowflake connector from Reltio, is much more restricted; we know it needs to do at least a certain set of things, right, to get data to Salesforce and back. So the data models are locked on that; it's a much more locked-in type of product. If you wanted to connect to Salesforce, but you needed a couple of different fields that are unique and not included in the model that comes with the Salesforce connector, as an example, or you wanted to add some transformations to the data because you wanted to make data changes while sending data back and forth with Salesforce, that's when you could look at RIH again.

And both of those operate in sub-second real time, as do our APIs. So it's really about considering your skill set, what's appropriate for the solution, and what's appropriate for your organization. The APIs are much more the DIY custom build; you're usually going to build those API calls into your service stack within your organization. So those tend to be even more customized, but all of them support real-time integration.

Chris Detzel:

Cool. I don't think there are any other questions. Just as an FYI, we'll get technical and go deep into building complex survivorship strategies and the data modeler next week, if you want to know more about that. We've done lots of stuff around survivorship strategies, so-

 


#communitywebinar
#trends
#MDM
#datamesh
#datavirtualization
#AI
#ML
#MachineLearning