Transcript:
Chris Detzel: Welcome everyone to another Reltio community show. My name is Chris Detzel, and we have a special guest today, Venki Subramanian. Did I get that right, Venki?
Venki Subramanian...: Okay. You did.
Chris Detzel: Good enough? Good. And I'm Chris Detzel, the director of Customer Community and Engagement at Reltio. Venki is our senior VP of Product Management here at Reltio, and today's topic is Manage Your Core Data as a Product. Next slide. So, as usual, some rules of the show: keep yourself on mute, and ask questions in chat and/or take yourself off mute and ask. Some of these questions we might not be able to answer, and not necessarily due to confidentiality; I should've deleted that one, so I apologize. But we definitely want your questions, and the show will be recorded and posted to the Reltio community. The goal is to have that done by the end of this week, maybe even sooner. Next slide. We have a lot of great events coming up.
Today's topic will be Manage Your Core Data as a Product. We do have a case study next week on Tuesday, the Click case study: Driving Clicks' MDM Program Success. Really a good one, right? Venki, I know you've seen this one already.
Venki Subramanian...: Absolutely.
Chris Detzel: And then we do have Takeda coming on a Q&A show with one of our senior customer success managers on data quality management and the commercial pharma master data management landscape with Takeda. Really excited about that. And then we just opened up a new show, Applying the Forrester TEI Calculator to Your Enterprise. We'll demo a little bit there and show you the self-help view of this calculator, which we think is going to help you not just justify your master data management but maybe even go deeper into the organization. And then on December 1, since we know the change in the UI is coming in February, I believe, we want to go a little bit deeper into the UI here at Reltio. And then we have another show on the eighth on emerging trends in data management, which I'm really excited about. So really some great shows. I'm looking at trying to get one or two more up and running before the end of the year. But other than that, that's all for me, Venki.
Venki Subramanian...: Thank you, Chris. And I was just sharing this with you earlier: this is an amazing lineup of speakers that you're lining up here in the community. I love watching them. I'll attend as many of them as possible, and if I'm not able to, then I catch the recordings afterwards, over the weekend or something, and they're fun. So I'm really looking forward to some of these sessions as well, and I'll try and attend them.
Chris Detzel: Great.
Venki Subramanian...: Great. Okay, so let's then dive into our topic for today. And before I go into that, maybe a little bit about myself. Like Chris said, my name is Venki Subramanian. I lead product management here at Reltio. I've been with Reltio for about three years, and prior to that I was at companies like ServiceNow and SAP, where I led product management for various customer experience and CRM products.
So when we talk about master data, core data, a lot of us care a lot about our customer data. That's one of the foundational pieces of data that companies use, track, and manage so that we can provide great experiences to customers and accelerate growth by providing better products and services to customers, and things like that. That's really the reason why I came to Reltio, and I've been on this exciting journey, building our product and working really closely with many of our large strategic customers, enabling them to use this technology to power their customer initiatives. So here I'm responsible for product strategy along with our team, and I'm really looking forward to the conversation with you all today. I hope it's not just going to be a presentation at you; I welcome your feedback, comments, and questions as we go through this, because this is really not about a specific feature or functionality.
This is really about how we approach data and data management, both as an architecture and as a discipline in our organizations. So before we get into the depth of how we treat data as products and all that, maybe the first few minutes I want to spend on understanding why we need this. What problems are we really trying to solve, and how are those problems being solved today with our modern data architectures? Nobody here is unfamiliar with this picture of the massive proliferation of applications and technologies, which ends up creating a lot of data silos. And we are all struggling with that, right? Companies are accelerating the adoption of various SaaS applications, we are still dealing with a lot of legacy on-premise applications, many investments are moving to cloud, and cloud transformation itself is an accelerating trend. But what happens when we do all of this is, as you can see, a lot of critical data is still locked up in legacy systems.
A lot of point-to-point integrations. Business processes become highly inefficient, because we need the right data, at the right quality, at the right time, to power several business processes, and identifying and using the right data to drive key initiatives becomes a challenge. So to solve these data challenges, the traditional approaches, as you have seen, are more about taking the data from the systems of record, which are on the left side here, moving them into different data platforms, either operational data stores or data lakes or data warehouses, and then creating use-case-specific views or data sets out of those. And when you think of those use cases, it could be things like a call center, digital banking, or any kind of mobile application, for example eCommerce, and those kinds of things.
Those are different applications of this data, so let's call them use cases for now. And if you literally look at this, this picture itself actually came from a McKinsey article written earlier this year. It's a great article, and it talks about how to unlock the full value of data. The source of this itself is a Harvard Business Review article before that. And they talk about how these use-case-specific data sets that get created, and then get utilized by use-case-specific technologies, create more inefficiencies; overall, this process becomes even more inefficient. One inefficiency comes from just the complexity of such a landscape itself: moving from different sources into these different data platforms, then creating use-case-specific data sets and applying different technologies to consume them. So this goes back to the picture we saw earlier, the spaghetti diagram with data silos. It's probably a nicer, neater version, but ultimately, behind the scenes, it looks exactly the same.
And all the problems that we see there still continue with this, right? Data for each domain, such as customer, for example, gets copied over and reworked or repackaged for almost every single use case, instead of creating a single consistent view of the data that gets leveraged across multiple use cases. The other thing is the usage patterns, if you will: some require real-time access to the data, some require near real-time, and some are more about analytical use cases for predictive modeling. And some of that usage ends up creating insights which then need to go back to the source systems, to the systems of record or systems of action or systems of engagement, so that the insights that are captured can be utilized in the right business processes.
So this is not a one-way flow of data. If you really think about it, for the different applications of the data on the right side, like predictive modeling, predictive churn models, and those kinds of things, some parts of that need to flow back so they get utilized in a consistent way to drive the outcomes these companies are trying to achieve. So these are the problems with these distributed data architecture patterns, as you are seeing. If you really look at what they're solving for, the data lake architecture has common failure modes. And this, again, is one of the reasons why modern data architectures like data fabric and data mesh are evolving; these are new architectures that have been created to resolve some of those problems. Now, about two weeks back, Ansh Kanwar, our head of technology, did a data community webinar comparing some of these new data architectures, data fabric and data mesh specifically.
And he talked at length about many of these things. So if you have not had a chance to see that, I would encourage you to go look at the recording; it is available. It's a very insightful, very interesting session, and some of this content I took directly out of it, because today's conversation is in some ways a continuation of it, looking at one specific aspect: treating data as a product to solve some of those inefficiencies and challenges. So, again, coming back to it, the distributed data architecture patterns are solving for the previous generation of monolithic, centralized data management, which is inefficient. And if you look at the previous architectures, the ingest process is orthogonal; basically, the pipeline has to be decomposed for higher efficiency, and that is not how the previous architectures were set up.
And we also need to solve for hyper-specialized ownership: centralized data teams that are siloed, which only increases the silo problems we have. Not just that the data is siloed, but the knowledge about that data is also siloed, and the management of the data becomes siloed as well. So that is what these modern architectures are trying to solve for. And you can see this again from Zhamak Dehghani, who is the founder of the data mesh architecture; she has talked about it and written about it at length. So if we think about it, what is the path forward? What do we need to do to enable data as a strategic capability? I've had many conversations with data leaders across organizations, and nobody has ever complained about not having enough data. At least in the modern enterprises we are working with today, everywhere there is more data; there is a proliferation of data.
The challenge is not having data; it is how to make use of the data in the right way so that it becomes truly an asset for the organization. So what are the things we need to do there? We have to break down these data silos, clearly. We have to make sure that the right data is available at the right points of consumption, in the right manner, at the right time. We have to make sure that we are empowering people and processes across the organization to use data, which means any place where data needs to be consumed, the people who care about that data need to be able to discover the right data, utilize it, and instrument their processes with it. While doing all of this, we have to solve for speed and scale, and cloud-based data infrastructure is a solution for that.
That is why more data ecosystems are being built around cloud-native technologies or on cloud infrastructure, and this is really to enable speed and agility. The scale and the size of the data being managed are changing so fast that we need to solve for scale, speed, and agility. And while doing all of this, it's also really important to ensure trust and security. Safeguarding sensitive data, especially when we are talking about core data or master data, is critical as well, which means we have to make sure that the investments in data security are made at the foundational level and applied uniformly across all of these data sets. So these are some of the foundational principles: what are the ingredients, what are the critical investments needed to enable data as a strategic capability?
Now, if you look at data mesh, it actually talks about some of these key principles, the four principles that you can see here: domain ownership, data as a product, self-serve data platforms, and federated governance as a way to solve for some of those things. They align nicely to the strategic capabilities we talked about previously as well. Now, let's just briefly look at that, and then we are going to dive deeper on the second box here, the second principle, which is data as a product. That is what we're going to spend some more time on today. But what are the four key principles? The first is domain ownership, which means a team should own a domain and should be responsible for aggregating the data, identifying the right sources, and providing the data in the right form, with responsibility for the quality of that data as well.
Data quality becomes the responsibility of these domain teams. The second part is really treating data as a product in itself, which means there is a set of common characteristics that need to be applied to any data set being managed by these data domain teams. Data should be discoverable, addressable, trustworthy, and self-describing. There is a lot of focus in data mesh on self-service, so that anybody who wants to consume the data can do that on their own without requiring another lengthy IT project. So even the data itself should have self-describing syntax and semantics, with principles associated with it, and should be standards-based wherever possible. Security is built into it, once again, just as we talked about in the previous slide; when we think of data as a product, security is built in, and observability is built in as well.
And the first bullet on this one talks about data sets exposed by APIs: all of this data that is being curated is available through an API interface so that it's easy to consume. Self-service is an important aspect, making sure that all of this data can be consumed without requiring, again, more technology investments. And then federated governance, which is really about distributing the governance aspects of this data so that each domain expert team is responsible for its governance. There is a governance guild composed of domain owners and practitioners who are responsible for governance overall, so there is no centralized governance; requiring all data to be governed by a single data governance organization only adds inefficiency. So this calls for federated governance, which aligns very well with treating data as a product, because each product has its own product ownership and product management teams, and they're responsible for building the product and for the governance and quality of the data they provide.
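To make those characteristics concrete, here is a minimal sketch, in Python, of what a data product "contract" capturing these principles might look like. All of the names, fields, and URLs are illustrative assumptions, not any platform's real API.

```python
from dataclasses import dataclass, field

# Illustrative data product "contract" reflecting the data mesh principles
# above: domain ownership, discoverability, addressability, trustworthiness
# (SLOs), self-describing schema, API exposure, and built-in security.
@dataclass
class DataProductDescriptor:
    domain: str                      # owning domain, e.g. "customer"
    owner_team: str                  # domain team accountable for quality
    description: str                 # self-describing semantics
    schema_url: str                  # addressable, standards-based schema
    api_endpoint: str                # data set exposed through an API
    slo_freshness_minutes: int       # trustworthy: max lag from source
    slo_availability_pct: float      # trustworthy: API uptime target
    access_policies: list[str] = field(default_factory=list)  # security built in

customer_360 = DataProductDescriptor(
    domain="customer",
    owner_team="customer-data-domain-team",
    description="Unified customer profiles consolidated from CRM and billing",
    schema_url="https://example.com/schemas/customer-360.json",
    api_endpoint="https://example.com/api/data-products/customer-360",
    slo_freshness_minutes=15,
    slo_availability_pct=99.9,
    access_policies=["pii-masked-for-analytics", "role:data-consumer"],
)
print(customer_360.domain, customer_360.api_endpoint)
```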
I see a couple of comments. I think they are just links to some of the articles and details that we were talking about, as well as a link to the previous webinar that I just referred to. Now, we have been talking about many of these principles from Reltio's perspective, especially in the age of data virtualization. Again, there was a very interesting conversation between Ansh and Manish, our CEO and founder, a few months back; that session is also available on our community. And Manish talked at length about core data, which is information about customers, vendors, locations, assets, and suppliers, among other things, being one of the foundational elements that every company has to consider investing in. We used to call it master data or core data, sometimes interchangeably. And this is evolving: in the earlier days, MDM was almost an afterthought; as the complexity of the data landscape increased, companies had to invest in MDM as a way to solve some of the problems that complexity creates. Now there is a modern way of thinking.
MDM is being thought of as a critical component of the data architecture, enabling easy consumption of the right data at the right points for the right use cases. So that is a change we are making and seeing as well. And the reason for that is really an understanding that data needs to be governed as products, and that is why we believe that we deliver core data as products to our consumers; that is a key part of how to think about master data going forward. So it's not just an afterthought, something that gets put in to solve specific problems, and it is no longer, as in the legacy world, thought of more as part of an analytical pipeline. In the modern way of looking at it, when you're talking about core data sets like customers, vendors, and locations, each of them is a domain, and they need to be treated as products. Modern master data systems enable you to treat them as products: to consolidate the data, make it discoverable and self-service, and enable consumption through real-time interfaces like APIs.
So that's just some of the precursor. Now let's talk about how we do this. How do we treat data as a product, and what are some of the principles? Again, I want to start with a couple of quotes from Zhamak Dehghani, from her literature around data mesh principles and logical architecture. And you can see that it says the data as a product principle is designed to address the age-old data quality and data silos problem. So the key problems we are trying to solve with this are data quality and data silos, and domain data teams, the teams we assemble to be responsible for the data in each of these domains, must apply product thinking. Now, I'll talk a little bit more about product thinking. Coming from a product world, being a product manager for a good part of my career, what does it really mean?
What does product thinking really mean? We'll come back to that. But what this says is: data teams must apply product thinking to the data sets they provide, considering their data assets as their products and the rest of the organization, like data scientists and ML and data engineers, as their customers. So when you think of a product, a product essentially is a collection of capabilities that solves a set of problems for an identified set of users. That's probably the simplest definition I've seen, especially when we talk about digital products: a digital product is a collection of capabilities that solves specific problems for an identified set of users or personas. So if you apply that to data as a product, you can see the mapping very nicely here: domain data teams must apply product thinking, which means, for the data sets they provide, they need to understand what those data sets are really used for.
Who is using these data sets? Treat those users as their customers. It could be ML and data engineers, it could be other systems, data scientists; all of those are their users. And these products need to meet the needs of this variety of users for the specific applications they're using the data for. And all of that responsibility now falls on the domain data teams: to understand those customers, understand their different usage scenarios and patterns for the data, and create the right data sets and manage and deliver them as products for these customers. So that's at least my interpretation, the expanded version of how I read this statement from Zhamak. And if you look at the data as a product principles on the right, which we saw a couple of slides back, all of those are the core principles that then need to get applied here, right?
The data then needs to be discoverable and addressable. Addressable means it can be accessed with a specific location or other characteristics. Trustworthy means it needs to provide a specific SLO, whether that's API performance or the ability to get data with a specific predefined latency or lag from the source to the consumption points, and those kinds of things. It needs to have lineage capabilities, and it needs to enforce or ensure a certain level of quality. The data should have self-describing syntax and semantics, and it should follow standards that are interoperable, so that systems can interface with that data in an easy, interoperable manner.
Security and observability: these are some of the characteristics that then need to be considered for data as a product. Now, why do this? So that data users can easily discover, understand, and securely use high-quality data, with a delightful experience, even when that data is distributed across many domains. That's the primary purpose. If you put all those principles into a simple statement, what are we solving for? The data users, who can be any of those personas we talked about, can easily discover, understand, and securely use high-quality data with a delightful experience.
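As an illustration of that "trustworthy" SLO idea, here is a hedged sketch of how a domain team might check that a data product stays within a predefined freshness lag from its source. The 15-minute threshold and the function names are illustrative assumptions, not a standard.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a freshness SLO check: the data product's refresh must lag the
# source system by no more than a predefined window.
def within_freshness_slo(last_source_update: datetime,
                         last_product_refresh: datetime,
                         max_lag: timedelta = timedelta(minutes=15)) -> bool:
    """True if the product's lag behind the source system is within the SLO."""
    return (last_product_refresh - last_source_update) <= max_lag

now = datetime.now(timezone.utc)
# Source changed 30 minutes ago; the product refreshed 20 minutes ago, so
# the change was picked up within a 10-minute lag: within the SLO.
print(within_freshness_slo(now - timedelta(minutes=30), now - timedelta(minutes=20)))
```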
So just imagine if we have this, and if we are subscribing to and following these principles: the problems it solves and the value it can unlock are immense, right? Because it then solves the data silos problem. It solves the problem of not knowing which data to consume for which purposes, and things like that. And a lot of the inefficiency we see in organizations, really about hunting for data, wrangling data, and those kinds of things, would essentially get simplified by applying these principles. Now, we talked about digital products and we are talking about data as a product. So, again, I found this interesting comparison of the key capabilities, what people call product features, and how they apply to data as a product, in contrast to a digital product like a computer or software application, or a physical product like a car.
And again, the source of this is pointed to in this link if you want to read that article. Like I said, it's highly recommended; it's a very insightful, interesting article. If you look at the product features, one of the base characteristics any product needs to support is, of course, like I said, a base set of capabilities that are easily understood and that solve a specific set of problems for an identified set of users. In addition to that, it supports customization of the base product for different users, because a product inherently is not solving the problem for a single user; it's for a collection of users, and they often fall into different types. So the product needs to support certain base customizations or configurations to meet the needs of different types of users. And you can see how software products or computer applications enable users to do certain things like personalizing layouts, creating your own saved searches, or creating your own visualizations of data through charts and dashboards and things like that.
Similarly, a physical product like a car might have different variants in the upholstery, color, tinted windows, and whatnot. Similarly, a data product can be wired to support different systems that consume data, such as advanced analytics or reporting. So when we look at data as a product, it needs to provide that kind of configurability, or at least different options for the different types of users. You need to consider that this data might get used in real-time applications; it might get copied over into other application systems, systems of record, systems of engagement, and acted upon. It might also get moved into different analytical systems. And like I said, sometimes predictive models and machine learning models get built on top of it, and some of the resulting data needs to flow back into the data set itself. So these data domain products, or data products, need to have that level of flexibility and configurability to address different types of consumption for different types of users.
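To illustrate that configurability point, here is a toy sketch of a single governed data product serving two consumption patterns: a real-time lookup for operational use and a batch export for analytics. Everything here, the class, fields, and records, is hypothetical, a sketch rather than an implementation.

```python
import csv
import io

# One governed data set, two consumption styles: real-time lookup and
# batch export. Field names and records are made up for illustration.
class CustomerDataProduct:
    def __init__(self, records: dict[str, dict]):
        self._records = records  # profiles keyed by customer id

    def lookup(self, customer_id: str) -> dict:
        """Real-time pattern: single-profile read, e.g. behind an API."""
        return self._records[customer_id]

    def export_batch(self) -> str:
        """Analytics pattern: full snapshot, e.g. landed in a warehouse."""
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=["id", "name", "segment"])
        writer.writeheader()
        for cid, rec in self._records.items():
            writer.writerow({"id": cid, **rec})
        return buf.getvalue()

product = CustomerDataProduct({"c1": {"name": "Ada", "segment": "retail"}})
print(product.lookup("c1"))       # operational, per-record access
print(product.export_batch())     # analytical, whole-set access
```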
Chris Detzel: Hey, Venki?
Venki Subramanian...: Mm-hmm.
Chris Detzel: Couple of questions, and maybe you actually answer this later, but how do I specifically implement data products in Reltio? Technical configurations, et cetera.
Venki Subramanian...: Yeah, we'll come to that. I'll talk a little bit about the Reltio-specific aspects here. That's a good question, but just give me some time; I will address it in an upcoming section.
Chris Detzel: Okay, and then Chino asks: do you think all elements of the product lifecycle apply? Thinking of things like product specifications, support documentation, sellers, training, upgrades, et cetera.
Venki Subramanian...: Yeah, I think all of those things apply, and that is one of the good parts of treating data as a product, because of the reason those things exist. For example, before you build a product, you understand the users and the personas, you do the requirements gathering, you do some customer validation, those kinds of things. Applying those is extremely important, because that ensures that the data sets we create, the products we create, are created with a good understanding of who the users are and what they need. And the other thing I would also say, and it's a good question, is that just like in product, we apply an MVP principle, right? We don't try to build a product that has all imaginable features and wait until all of them are available to release. You always take an MVP approach and then follow an iterative model to build on it.
Similarly, when we think about data products, that is one of the approaches we strongly recommend. Start with your MVP: identify your first use case, your first value milestone, understand what kind of data is required to support it, build for that, and then iterate and add additional capabilities. But every time you add additional capabilities, you do it in a way that builds on what you've already delivered. It does not disrupt the existing customers and users of that data, but you're adding additional capabilities, unlocking more value, and enabling more users and more usage scenarios.
Chris Detzel: Thanks, Venki. That's it for now.
Venki Subramanian...: So I won't go through the other ones in as much detail, but you understand that just like any of the product feature sets, like customization, delivery of regular product enhancements, which is what I just talked about, or reuse of existing processes, machinery, components, and those kinds of things, you can apply some of the same principles, whether across product features or production efficiency, to data as a product as well. So, delivery of regular product enhancements: one thing I do want to mention here is there is a quote from Gartner that says roughly three-fourths of MDM implementations fail to deliver tangible business value. That is a really concerning statement if you think about it: why would anybody invest in a technology with such a low success rate? And the reason for that is the way these things were approached in the past, which is solved with modern MDM systems like Reltio, which enable companies to really understand different use cases and provide iterative value to customers.
So instead of a nine-, 12-, or 18-month-long project, where you enable it and only then see value out of it, what we are seeing customers do with modern MDM systems like Reltio is get up and running in the first couple of weeks: understand your data, and enable the first use case, the first consumption, within the first four to six weeks. And it is possible to do that because you're not locked into the scope that you initially deliver. You're able to continuously deliver more value and enable more use cases and more consumption mechanisms on top of it as you go.
Chris Detzel: Mary makes a good point. She said, "Technology alone doesn't solve the MDM problem. To me, that is why so many fail." Probably the people aspect.
Venki Subramanian...: Absolutely, and that's so true, and that's why I'm not talking today about technology as much as I'm talking about the principles that need to be applied. I couldn't agree more; absolutely true. It is all about how we approach it. And again, this is maybe a little bit biased coming from a product manager, but approaching this as a product, understanding your users, enabling the first set of users with the minimum set of capabilities they need, and then building on top of it enables you to deliver value, test some of your assumptions, and course-correct as you go. And this is as much about processes and mindset as it is about the technology that enables you to do it.
So let's talk a little bit more about how Reltio specifically enables some of these capabilities. If you look at how we talk to our customers and prospects, we talk about how Reltio's platform enables customers to collect data from disparate sources, unify that data for some of the high-value market segments we support today, enable activation of the data for key customer initiatives, and then deliver business outcomes. And if you look at this, it really follows many of the principles we just talked about: in that unification and activation, we are treating core data, master data, as products that have a clear explanation and a clear structure that is published and can be consumed in a self-service manner, which then enables activation of the data for key customer initiatives. So, again, let's apply the working-backwards methodology to this one for a minute.
So if you look at the rightmost swim lane, delivering business outcomes: every company cares about delivering business outcomes around accelerating growth, increasing efficiency, and managing risk and compliance. Almost any initiative a company has, whether it's omnichannel engagement, intelligent process automation, or investments in privacy and consent management, can be mapped to one of these key business outcomes. So we work with companies, and I would encourage everyone, to apply similar principles: you have to understand, what are my highest-priority initiatives, and which business outcomes do they deliver? This can then also serve as a way to quantify the business outcomes we are targeting. Once you identify the prioritized list of initiatives, then you can go to: what kind of data do I need? You might need different types of data, but one of the foundational investments is always around the core data.
What is the master data I need to support this? For example, if it is omnichannel engagement, you need the customer data; you need customer 360. Taking an industry-specific example like insurance: I need to know who my customer is, who the members of the household are, what active policies they have, and whether they have any active claims. That is the information a call center agent is expected to have in that industry. Or look at privacy and consent management in retail as an industry: you still need your customer data for the consumers, what channels they engage with us on, and what permissions they have given us to access and use their data, and then, how do I enable a system that ensures compliance with the consent the customers provided, as well as with the different regulations in the regulatory framework we operate under?
So you can apply the principles and come up with a prioritized set of initiatives, and identify what kind of data is needed, even going down to the specific attribute level and saying that, for this kind of data, my customer data with this set of characteristics or attributes is what is needed to power this initiative. Then you can work back from there to identify which source systems I get this data from, and whether I need to enrich the data with specific third-party sources. And you apply these principles to define your data product for that data set, and the scope of the release you're targeting, to enable a specific initiative. Once you enable that, then you can go ahead and measure. A simplified view of that same thing, if you look at it in sequence, would be: you start with your data models for the key data domains for your identified set of initiatives.
So that comes down to, again: customer, organization, location, product, supplier, vendor, whatever it is. You define that data domain and data model, you identify the third-party data sources you need to enrich the data with, you identify the first-party data sources where you get this information from, and then you connect that to the key customer initiatives you want this data to be used for, to be activated with. Those are the key ingredients and the sequence of how you would treat data as a product, using Reltio as the technology to enable that.
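As a minimal sketch of that sequence, the ingredients could be captured in a structure like the one below. This is a generic outline under my own assumptions, not Reltio's actual configuration format.

```python
# Generic outline of a domain data product definition: the data model,
# the first-party sources, the third-party enrichment, and the initiatives
# the product activates. All names are illustrative.
customer_domain = {
    "domain": "customer",
    "attributes": ["first_name", "last_name", "email", "address", "household_id"],
    "first_party_sources": ["crm", "billing", "ecommerce"],
    "third_party_enrichment": ["address_validation_provider", "firmographics_provider"],
    "activates_initiatives": ["omnichannel_engagement", "consent_management"],
}
```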
Now, I've taken a specific example here, and this is a framework we use very effectively with our customers and prospects; probably many of you have seen it already. If you apply the same principles of collect, unify, activate, you can look at any one of those initiatives across growth, efficiency, and risk and compliance, and see what data domains are needed. For example, the middle box enables you to manage the data for person, organization, supplier, product, any of those key data domains, and enable data consumption through APIs with the full 360-degree view, powered by our omnichannel MDM platform, which at its core starts with data unification, right? That's where Reltio [inaudible 00:35:13] comes into play. So now you have this picture. What it helps you do is use that exact same framework to create your data product roadmap, with specific value attributed to each one of those initiatives. As an example, take omnichannel engagement on the top left as one of the initiatives. You can say that by improving the quality of your customer data, or by providing customer 360 data, and thereby reducing the time it takes for, say, your call center to improve first-call resolution rates, you're able to influence revenue by four basis points.
These are just illustrations, just examples; you can go down to even more specific measures and KPIs for each one of them. But what it allows you to do is map it back: depending on whether you're a B2B or B2C organization, you can understand whether you need person data, organization data, or associated interaction data to influence or deliver those outcomes, and how to assemble that product from your core set of first-party systems and enrich that data with third-party systems. So this serves as a framework. I also want to call your attention to one of the recent studies we did with Forrester. Forrester did a study of the total economic impact that Reltio provides, and those results and reports are available on our website.
I believe there is also going to be a community show, so there is more information available for understanding the business value and impact that investments in MDM with Reltio can provide. But this framework essentially helps you drive thinking around treating data as a product and attributing the business value in a very systematic way.
Chris Detzel: So there is a question, Venki. I was under the impression that data products lead to the creation of data islands. How can we protect against that being an unexpected outcome?
Venki Subramanian...: That's a good question, actually. Again, if data products are being thought of, and the product principles are applied, in the right way, data products do not end up creating silos, because every product also needs to understand its integrations and interoperability with other products, with other domains. So even if there are multiple data products, the product owners of the data domain teams who own those products need to understand the dependencies and interoperability with the other products, and every one of them still gets published in a uniform, self-service consumption manner.
So they don't create silos if they're done the right way. If the principles are applied the right way, they do not create more islands. They might look like different products, but they're all provided in a single standard way in terms of how they're assembled, how they're consumed, and how they're described. And the same principles we talked about, like SLOs, observability, and security, all get applied in the same standard way. So it will not feel like you're consuming different products, and it will not end up creating different islands, even if the ownership and the governance are federated. That's one of the beauties of applying the data mesh architecture and principles to this.
Chris Detzel: "I love this idea. How can I get my C-suite to buy in?"
Venki Subramanian...: This particular slide is your way into the C-suite, right? That's why I spent some time on this one. It's not about the technology; it's about the why. Why should it matter to any organization? Because if you think about it, again, truly like a product, you would start with the value, and that is what the C-suite cares about. So I can clearly show that by making investments in these data products in this specific way, I'm able to unlock data that is locked up in different parts of my organization, data that is siloed and creating inefficiency, and actually support key initiatives at the company level that drive growth, efficiency, or risk and compliance, which are the positive business outcomes everyone is striving towards. We can quantify the business impact, which is what ultimately everybody cares about. And you can do that at the beginning: you can set the targets, continuously measure, and then show the actual impact produced over time.
Chris Detzel: All right, one last question here: what if I have a very dynamic, quickly changing data set? Can I still make a product out of it?
Venki Subramanian...: Absolutely, absolutely. I mean, the principles do not change. These pipelines that you see, the ways in which the data gets assembled into a specific data product, into a specific domain, really do not change. The principles do not change whether the data changes once a year or continuously. The same principles can be applied, and you can still treat the data as products. The only thing is that the technology you use then needs to scale with that. That is one of the important things, and that's why we focus so much on why cloud matters: because it can support that scale and the performance expectations that come with it, so that the rate of change of the data, the size of the data, the volume of the data, those kinds of things can be factored into the design itself and should not be a hindrance, should not be a blocker, to treating the data as products.
Chris Detzel: At first, I thought you were just going to say absolutely and leave it at that, but no, that was great. There's another question: is there a Reltio feature or method that can be employed to systematically gather, store, and embed compliance-proof data along the master data profile as a product, such that self-service reporting is possible?
Venki Subramanian...: You might have to repeat that. That's a lengthy question.
Chris Detzel: I'll say it one more time; I agree. Is there a Reltio feature or method that can be employed to systematically gather, store, and embed compliance-proof data along the master data profile as a product, such that self-service reporting is possible?
Venki Subramanian...: The short answer is yes, but it's not a specific feature. If you think of Reltio as a platform and the capabilities we provide: we allow you, first of all, to aggregate data from multiple sources, and we allow you to describe the data you are capturing in any domain. So capabilities in Reltio like the data modeler provide a visual representation of the data model, and we have APIs through which you can query the metadata and understand the attribution details of any particular data product, essentially any data domain, that you're managing in Reltio. For each domain, there are capabilities to describe the domain and describe the relationships between that domain and the other domains; it's inherently a multi-domain platform. We have an entity graph that enables you to describe relationships.
We describe every aspect of this data, and that description is available in a visual way using our tools, or it's accessible through APIs in a machine-readable manner that can be consumed by other systems. So you can take that description and publish it out into data catalogs and other self-service platforms. For every aspect of this journey that I described, there are capabilities in Reltio that enable you to do that, all the way down to publishing this data out into analytics data warehouses, where you can build your own reports to not only monitor the operational processes within Reltio but also activate this data for any of the analytics-driven use cases.
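As a rough sketch of that pattern, pulling a domain's metadata over an API and pushing it into a catalog might look like the following. The endpoint path, auth scheme, and payload shape here are hypothetical placeholders, not Reltio's documented API.

```python
import requests

# Sketch: fetch a domain's self-describing metadata, then hand it to a
# data catalog or other self-service platform.
def fetch_domain_metadata(base_url: str, tenant_id: str, token: str) -> dict:
    resp = requests.get(
        f"{base_url}/{tenant_id}/configuration",   # placeholder metadata endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def publish_to_catalog(catalog_url: str, metadata: dict) -> None:
    # Publish the metadata so consumers can discover the data product.
    requests.post(catalog_url, json=metadata, timeout=30).raise_for_status()
```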
Chris Detzel: Okay. We do have another question: how do we create buy-in for data products when external product teams do not think data quality is a priority?
Venki Subramanian...: That's an interesting question. I definitely would like to ask whoever asked the question: who would not consider data quality a priority? I mean, how can you trust any data if you cannot understand the quality, or ensure the quality, of that data? One of the basic principles in the data world, and we repeat it in so many different contexts, is garbage in, garbage out. So I find it difficult to imagine that somebody would be okay with poor quality data. What it tells me most of the time is they don't even understand the quality of their data, so they don't understand the bad effects it might be creating.
Speaker 3: Yeah. So I'm actually a data product manager, and I deal with this scenario all the time. There's an education layer around why data quality is so important to customer-facing or front-facing types of products, and that education layer has constantly been my challenge; it always seems to be an uphill battle.
Venki Subramanian...: Interesting. Yeah, I think my short answer is, and maybe there is a one-on-one conversation I would like to have, but my short answer is: whenever we are able to quantify the current quality of data and show that an investment in improving it would yield specific results, I've seen us get more buy-in with that kind of approach. There are even personal examples that I've used. Recently I got one from one of the large travel companies in the US. I came home one day and there were two different envelopes in the mail waiting for me. One had the shortened version of my name, Venki Subramanian, with my correct address, and the other had [inaudible 00:45:29] Subramanian, which is the expanded form of my name. It's a real impact of poor data quality: they did not know that I was the same person with a slight difference in the name.
They ended up spending money and sending me two different letters with their marketing material to my home address. It cost them real money to send that, and the real reason is data quality. There are more impactful examples of this that I've seen. For example, during COVID, when there was a mandate for insurance companies to provide refunds because drivers were driving less, and there were state mandates to refund some part of the premium, insurance companies had to get back in touch with these individuals and issue them notices of the refund and ways to apply, and things like that. And one of the companies I was talking to said they realized that about 20 to 25% of their address data was poor quality, that these were non-deliverable addresses.
These are real costs of poor data quality. So I would go back to quantifying the current state of data quality, quantifying the impact of that, and then using that as a way to get buy-in. But you are right; it requires education, especially for an organization that is not familiar with something like this. It definitely requires education.
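As a toy illustration of the two-envelopes problem above, a naive name-similarity check like the one below would flag the two records as a likely duplicate. The names and threshold are made up, and real MDM match rules are far more sophisticated than this.

```python
from difflib import SequenceMatcher

# Toy duplicate check: two name variants at the same address are probably
# the same person. Purely illustrative; not a production match rule.
def likely_same_person(name_a: str, name_b: str,
                       address_a: str, address_b: str,
                       threshold: float = 0.7) -> bool:
    name_score = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    return address_a == address_b and name_score >= threshold

# A shortened and an expanded form of the same (hypothetical) name:
print(likely_same_person("Jon Smith", "Jonathan Smith", "1 Main St", "1 Main St"))  # True
```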
Chris Detzel: That was a really great thought, and Cameron agrees. A couple of comments here: data quality is so important in pharma, especially the address data, and I think in a lot of different industries. And data quality issues are symptoms of greater issues, and the education piece is always challenging; if companies just cleanse data, it doesn't stop and prevent the issues. There's a suggested read called Telling Your Data Story by Scott Taylor. I actually see him on LinkedIn a lot. Anyway, that was good. That was really good. That's all the questions.
Venki Subramanian...: Yeah, well, great. Actually, this is a great conversation. I want to also introduce you to another tool that I've come across called the Data Product Canvas. Some of you are probably already data product managers, like Cameron mentioned, so you might be familiar with some of these things. But this is another tool, another version of that same framework we looked at. The difference here is that it's domain-specific: it talks about the domain and the data product you're creating, and it provides a nice framework for you to describe it, starting, again, from working backwards. That's the way I look at any product I define: starting with your consumers, who needs the data, what are they trying to use the data for, and then defining the data product design in a simple one-page methodology, the Data Product Canvas tool that is provided here.
And again, you can find the link to the Data Product Canvas if you want to use it. This is a nice way of describing all of the aspects you need to consider, including how you are providing the output of the data, how you are describing the metadata, and what kind of observability characteristics you are building into it, including some of the quality metrics that are important for understanding that the right quality of data is being produced, and produced consistently over time. That is what observability really means.
What is the design of the data product? What are the source systems and the inputs, and what are the other domain products and the inputs from there? So it also connects to the other domain products. The way to use the Data Product Canvas, simply, is to start with the key initiatives, understand the use cases, identify the consumers, define the data consumption requirements and the outputs, as the canvas calls them, define the metadata and the governance policies, define and implement the data observability for data quality, operational metrics, and SLOs, identify the first-party data sources or inputs and the input formats, identify the enrichment sources and define them, and, the last piece, which I put in here, identify the dependent domains and what you are sourcing or connecting with from the other domains.
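As a rough sketch, a filled-in canvas for a customer domain might look like the structure below. The field names paraphrase the canvas sections just described; they are not an official schema.

```python
# Hypothetical Data Product Canvas for a customer domain, with one entry
# per canvas section walked through above. All values are illustrative.
customer_product_canvas = {
    "domain": "customer",
    "use_cases": ["omnichannel engagement"],
    "consumers": ["call center application", "marketing analytics"],
    "outputs": {"api": "real-time profile lookup", "batch": "daily warehouse export"},
    "metadata_and_governance": ["schema registry entry", "consent policy"],
    "observability": {
        "quality_metrics": ["completeness", "duplicate rate"],
        "slos": ["15-minute freshness", "99.9% availability"],
    },
    "first_party_inputs": ["crm", "billing"],
    "enrichment_sources": ["address validation"],
    "dependent_domains": ["product", "household"],
}
```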
So the Data Product Canvas is, again, a simple tool for data product managers to start using to describe their data products in a consistent manner, irrespective of the domain. And this goes back to the other question we talked about earlier: how do you avoid creating silos? By following standard principles, applying them uniformly across every domain, and making sure that consumption is also standardized for every domain. So I want to end with this view, then. Ultimately, if you do all of that and manage data as a product, what does it do? Reltio, first of all, enables you to manage your core data, your master data, as products: assemble the right data, create a data store that enables you to manage the quality and the consumption of the data over time, and connect the different data silos that exist in your organization across your on-premise, cloud, and SaaS applications.
And all of this enables you to create that single version of the truth for your core data, which produces trusted, high-quality data that powers key initiatives in your organization. This data is available in real time, always on, and it enables you to act on these core data sets with confidence, so that you reduce the time to unlock value from the data and make sure that your data is not stuck in different silos but is a strategic asset you can use to power different company initiatives. I think that was my last slide; that's what I wanted to end the session with. We still have a few minutes for any additional questions.
Chris Detzel: No, I don't see any. I actually do have one question that came up around the data mesh concept. It's an interesting one, because the talk wasn't really about data mesh, but maybe you can answer it. The data mesh concept, of which data products are a part, is declared dead on arrival by Gartner, for example. So how should we think about that?
Venki Subramanian...: Sorry, can you just say that again? Because I think I've heard the statement, but just repeat the question for me if you don't mind.
Chris Detzel: The data mesh concept, of which data products are a part, is declared dead on arrival by Gartner. So how should we think about that?
Venki Subramanian...: Okay, yeah, that's what I thought. So there are conflicting views, or rather, I would say, conflicting terminologies being used by different thought leaders and analysts out there. Data mesh has gained a lot of popularity. Gartner talks a lot about data fabric, which is another data architecture that has also gained popularity. But if you really peel back the layers, except for some significant differences, a lot of the principles that both of these architectures talk about are exactly the same, and the problems they're trying to solve are also exactly the same. There is a nice comparison between the two, the data mesh and the data fabric architectures, available as a report from Gartner itself.
So if you have a chance to look at it, please take a look; I think Data Fabric and Data Mesh is the title of the paper itself. I would also recommend, once again, going back and viewing the session that Ansh Kanwar did a couple of weeks back, where he talked a lot more about the data mesh architecture. I think it is an unfortunate characterization that data mesh is dead on arrival, because it is definitely being applied in reality in companies now, as a set of principles that enables you to create modern data architectures that solve a lot of the pain points of the previous generation of data lake architectures.
Chris Detzel: Yeah, that was good. I appreciate you answering that question. No other questions that I can tell. So thank you, everyone, for coming to today's Community Show; really do appreciate it. Venki, thank you so much for your thought leadership on the topic of data as a product. Really good stuff. Take a look, everyone, at the Reltio community shows coming up; I put the link in at the very beginning. Additionally, we do have a survey for you to take. It'll only take a few minutes, so please take it; your feedback is super important to us. I share it with pretty much our entire organization to let folks know how we're doing and how we can improve. So thanks again, Venki. Thanks, everyone, for coming. Hopefully you liked it, and we'll see you twice next week; we have two shows.
Venki Subramanian...: Thanks, everyone. Thank you.
Speaker 4: Excellent. Thank you.