Reltio Connect


Event Streaming with Payload: Best Practices and New Features - Webinar

By Chris Detzel posted 09-05-2021 10:02


Join this session with Dmitry Blinov to learn more about event streaming with Reltio, the recommended scenarios of event streaming with payload, best practices, known limitations and new capabilities of this feature.

Find the PPT: Event Streaming with Payload: Best Practices and New Features

For more information, head to the Reltio Community and get answers to the questions that matter to you:

Here is the transcript:

Dmitry Blinov

Okay. Today, we will talk about event streaming. I call it streaming with [inaudible 00:00:11], but actually you'll learn during this presentation that all of the streams we have are streaming with payload. It's just a question of what type of payload to include. So we'll talk about what Reltio Event Streaming is and how event streaming works. And let me step aside here a little bit: we call it event streaming, also called [message streaming 00:00:31], and this is actually the same thing. Then the supported types of queues and their known limitations, and how to deal with the message size limit; that's a very popular question.


Dmitry Blinov (00:42):

Event streaming architecture: why there is no first in, first out and what to do about it, and specifically how to order events. How do I filter an event stream, and how event filtering is [inaudible 00:00:53]. We'll do a deeper dive into event filtering and go through some things that are not even documented today; we are working on documenting them, but they are already supported. Payload configuration today and in the future. Delta event streaming, which is something coming with the 21.3 preview and then release. And then we'll talk about getting the best value out of event streaming, how to use it, best practices, and the future plans: where we are moving with this.


Dmitry Blinov (01:24):

So, to start with an overview: what is Reltio Event Streaming? It's a service in the Reltio platform which enables you to stream internal events to the outside, so you can read them, filter on them, and include some content into them. They are streamed in JSON format. And we have a console UI; this part is called Tenant Management, and in Tenant Management you can control your streaming. You can add new queues or streams to the tenant, you can enable and disable them, you can filter them, and you can select the types of events that you want to add. So basically you can configure them. It supports filtering by event type, so you can say: I want to receive only these types of events into this specific queue. The service supports object filtering in the same manner as search and export tasks.


Dmitry Blinov (02:27):

It supports payload. Again, I already mentioned it, but basically you can say: I want, or I don't want, to include attributes into my events; I want, or I don't want, to include crosswalks into my events; only create time, or I also want to see update time, and things like that. New things are coming: payload may be configured per queue.


Dmitry Blinov (02:51):

That's actually already supported; it's not new that it is coming. The only new part that is coming is that we will also support a console UI for it, so you can configure it through the UI.


Dmitry Blinov (03:01):

If today you have access to the tenant physical configuration, you can already do that. The same goes for streaming with delta payload: it is already supported since [inaudible 00:03:15] in the physical configuration, but the configuration UI was not available, so officially it will be rolled out as part of 21.3. Also coming in the release this September-October timeframe: [inaudible 00:03:34] configuration was moved down one level, to the queue level.


Dmitry Blinov (03:37):

So basically, before, you had to set up whether you want to include attributes, yes or no, at the tenant level. Now you can do it per specific queue, and you can have multiple queues for one tenant. And also something that is coming: you can have multiple destinations per single queue. We'll talk about this a little bit later. Next slide. So, how event streaming works. I have a scheme here, which is the flow of an update and how it goes into the external queue. This contributor on [inaudible 00:04:20]. Let's say I have some data load, or I'm just posting an individual update to Entity A here through the REST API. I posted it, and let's say it created Entity A instead of updating it. In the end, an Entity Created event will be generated and sent into the internal CRUD queue.


Dmitry Blinov (04:45):

At the same time, after I created Entity A, it got matched and then merged with an already existing Entity B. Again, internally in the system, an Entity Changed event or message will be generated for this Entity B and sent to the internal CRUD queue. These two entities will merge into an AB entity (I just called it that), and then an Entity Merged event will be generated and, again, sent into the internal CRUD queue. This internal CRUD queue is used to synchronize versions of the entities, or other types of objects like relations, between the primary data storage and secondary data storages, like the index, for example, which is used for search, and the match storage, which is used for match and merge. If you configured one, two, or three external queues for your tenant, or whatever number (there is no hard limit on this, only a best practice), you will also be receiving all of these events into your queues.


Dmitry Blinov (05:48):

So in this specific case, I created two queues. On one, I put a filter saying I only want to receive Entity Created types of events into this queue, so I will only receive this event here. The other queue I configured to receive other types of events, not Entity Created, and I also want to use some object filter here. So, say, I only want to receive events that came for attribute name [inaudible 00:06:22] entity type, attribute type name. And I'll only be receiving events in this queue for these types of events. There is an important note here, which says that you shouldn't really create tens or hundreds of queues per tenant. We recommend using between one and three queues, and this is what all our customers usually use.
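To make the two-queue setup concrete, here is a small consumer-side sketch in Python of how those filters would partition the stream. The event shape, queue names, and the Name attribute are hypothetical; in practice Reltio applies these filters before events ever reach your queues.

```python
# Illustrative only: this mimics the two-queue split described above.
# Event type names and the attribute structure are assumptions.

def route(event):
    """Decide which of the two hypothetical queues an event would land in."""
    if event.get("type") == "ENTITY_CREATED":
        return "queue-created"
    # Queue 2: any other event type, but only if the Name attribute is present.
    if "Name" in event.get("object", {}).get("attributes", {}):
        return "queue-other-with-name"
    return None  # matched neither filter: not delivered anywhere

print(route({"type": "ENTITY_CREATED"}))  # queue-created
print(route({"type": "ENTITY_CHANGED",
             "object": {"attributes": {"Name": [{"value": "Acme"}]}}}))
print(route({"type": "ENTITY_CHANGED"}))  # None
```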


Dmitry Blinov (06:47):

And we don't recommend using more than five streaming queues. Now, why would I use this? Where do I need it? Obviously, with this filtering and everything, it allows you to do segmentation. You'll receive active notifications: you don't need to go back into the tenant and check whether there were any changes in an entity. It's an active way of monitoring the changes in your tenant. But the previous way of using it was: I'll receive an event, which only tells me that there is some change in the entity and there is a URI of this entity, and I'll do an additional GET API call to get this entity, understand the change in it, consume it, and propagate it into my downstream system.


Dmitry Blinov (07:38):

Today, you can configure payload, so you don't need to do this additional GET. There'll be no additional query into your primary storage, which not only means less impact on your performance; it will also work much faster. You get everything you need here already. And as I already mentioned, you have flexible filtering, and the only things you need to take care of are event size, which we'll talk about later, and ordering as well. So, to the next slide: which types of queues we support today and some limitations on them. We support AWS SQS, and the biggest limit here is the maximum allowed event size of 256 kilobytes. We also support GCP Pub/Sub and Microsoft Azure, and the limitation on event size there is much better in terms of how big a message I can put into the queue. In the projected future, and our projected future here is between 22.1 and 22.2,


Dmitry Blinov (08:44):

we also want to start supporting direct streaming into Kafka. A lot of our customers are asking about it, and we are really actively reviewing this and putting it into our roadmap right now. The Reltio platform streams all these events into these types of queues using the same JSON format. So the platform itself streams events into any of the queues in the same JSON format, which is non-decorated JSON, meaning no additional spaces, [inaudible 00:09:15] returns, or any other characters are added. But there are two important notes here. First of all, these queues may have their own schemas put on top of them. We recently hit this with Pub/Sub, [inaudible 00:09:31] Google Pub/Sub. You can have your own schema on top of it, and once you stream your event into Pub/Sub, it'll change the JSON to format it according to the new schema.


Dmitry Blinov (09:43):

By default, it matches the format we stream, but you may have a non-default one, so check that. Another important note: we unfortunately had some cases in the past where whoever implemented the consumer for the queue would implement the parser treating the message as a String, not as JSON. Obviously, a schema change will have a heavy impact on this. If you parse your message as a String, any change in the schema will break the implementation and you will have to change the code. If you parse it as JSON, changes in the schema will not impact you. So we always recommend parsing messages as JSON, not as String. How to deal with the message size limit: we just mentioned this. The most popular queue type in use is AWS SQS, but its 256 kilobyte limitation is not ideal.
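The JSON-versus-String advice can be illustrated with a short Python sketch. The message bodies below are made up; the point is that key lookup on parsed JSON survives schema additions, while positional string handling does not.

```python
import json

# Two versions of the "same" message: v2 adds a hypothetical schemaVersion
# field, which shifts every character position in the raw string.
raw_v1 = '{"type":"ENTITY_CHANGED","uri":"entities/123"}'
raw_v2 = '{"schemaVersion":2,"type":"ENTITY_CHANGED","uri":"entities/123"}'

def entity_uri(raw_message):
    # Robust: key lookup does not care about field order or extra fields.
    return json.loads(raw_message)["uri"]

print(entity_uri(raw_v1))  # entities/123
print(entity_uri(raw_v2))  # entities/123, unchanged despite the new field

# A fragile alternative (don't do this) would slice the raw string by
# character offsets, which returns garbage for raw_v2.
```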


Dmitry Blinov (10:43):

A lot of messages that you stream, especially with payload, will have a bigger size than that. Today, you have to make sure that you have large object support enabled in [inaudible 00:10:55] configuration. If it is, there will be an additional flag or attribute added to your message, called exceed queue size limit. Reading this attribute from the message, you will always understand whether it is true or false; basically, whether the queue size limit was exceeded or not for this specific message.


Dmitry Blinov (11:16):

If it's false, just process your message as usual. But if it's true, you'll have to do an additional GET API request to get the actual data, because that flag tells you that the payload was removed from the message. So you will still receive the message and you will read it, but the payload will be cut from it, and you have to do an additional GET API request to get the actual payload. We recommend this as the best practice for implementing consumers for external streaming today: always have this piece of code. You will see, as I present further, that our approach is to move toward more narrow, granular filtering capabilities, which will allow you to receive only the piece of data you're interested in.
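A minimal Python sketch of this recommended consumer logic, assuming a flag spelled here as `exceedQueueSizeLimit` (the exact name comes from your tenant's messages) and a placeholder fetch function standing in for the real authenticated GET request:

```python
import json

def fetch_entity(uri):
    # Placeholder for a real, authenticated GET /entities/{id} call.
    return {"uri": uri, "attributes": {"note": "fetched via REST"}}

def consume(raw_message):
    event = json.loads(raw_message)
    if event.get("exceedQueueSizeLimit"):
        # The payload was stripped to fit the queue; go back to the API.
        return fetch_entity(event["uri"])
    # Payload arrived inline; process it directly.
    return event.get("object", event)

small = json.dumps({"uri": "entities/1", "object": {"attributes": {}}})
big = json.dumps({"uri": "entities/2", "exceedQueueSizeLimit": True})
print(consume(small))
print(consume(big)["uri"])
```

Even if your entities are small and this branch never fires, keeping it in place covers the rare oversized message.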


Dmitry Blinov (12:02):

That way you will overcome the limitation. We also recommend always using an archived or zipped format for the message; we have plain JSON and we have [inaudible 00:12:19]. But even then you still may hit the limit, and there is a way to overcome that. It's the same idea as exponential backoff: you should just have this piece of logic there. Even if it'll never trigger for your tenant, because you have fairly small entities and, as a result, small messages, and will not hit this limit, still have this logic. In 99.9% of cases you will not need it, but you'll need it in 0.1% of cases. You'd better have it.


Chris (12:55):

Hey Dmitry.



Chris (12:57):

Quick question. How do you enable the audit log for each event?


Dmitry Blinov (13:03):

Audit log for each event?


Chris (13:06):

Yeah. How do you enable the audit log for each event?


Dmitry Blinov (13:10):

Let me explain: audit log and events are two separate concepts and services today. They're not bound one to one, and there is no way to bind them today. But we did something else: we did streaming of data in the audit log format right into the queue. I'll show you at the end of my presentation, but that's how we went about this. There is no way to bind the activity log to events one to one today.


Chris (13:48):

Cool. And one other question for you, Dmitry. You mentioned that in 21.3, all tenants will have large object support enabled. Is that all tenants or all new tenants?


Dmitry Blinov (14:00):

All tenants, because it will not be enabled at a tenant level; it's enabled at the platform level, and it's completely backward compatible. So nothing will be interrupted or impacted. It basically means that you'll start receiving this flag in all of the messages, but if you don't parse it, you just keep receiving them as before. That's it.


Chris (14:22):

Cool. Thanks Dmitry.


Speaker 3 (14:24):

Yeah, one question. Sorry, it is related to the same question that Chris asked. For the audit log, we want to log each event. What is the message ID; like, what is the unique ID that has been triggered by Reltio? We need that in the log. The reason we are asking is traceability.


Dmitry Blinov (14:54):

Understood. I think we've had a number of customers asking for that. I'll definitely put it in the roadmap for review. It's not that we don't want to do it; it's just that, again, from the beginning, those are two different concepts. So, at least in the past, I can tell you for sure that in some cases two different events would end up in a single activity log record, or vice versa, because they are created and triggered separately. But I think today we can do that, especially since, I'll just repeat, we just built a payload type, we call it Deltas now, which streams the activity log format right into the message. So I think we can support that. It's not supported today, but it's definitely considered for the roadmap, and for the near future, not a two-year roadmap.


Chris (15:50):

Yeah. And Dmitry, I put a link to our [inaudible 00:15:55] portal. Click on that, create a login or log in if you have one, and then put your idea there, because a lot of our customers and partners will go there and upvote it and things like that. So it's a great place to put that.


Dmitry Blinov (16:13):

Yeah. Agree. Thank you Chris. So should I continue?



Dmitry Blinov (16:20):

Yeah, events. Thank you. Event streaming architecture, and why there is no first in, first out. This is, again, a very popular question, because that's another thing, as I mentioned. We just talked about size limitations, but the second thing you may struggle with and need to deal with is that messages are not necessarily sorted by timestamp. Basically, if you'd like to make sure that downstream, whether you persist this data or propagate it further, it's propagated in the same time order as the changes happened inside the tenant, you need to take care of that yourself.


Dmitry Blinov (16:58):

First of all, why does this happen, and why would first in, first out in the external event stream queue not help? AWS, for example, has a first in, first out SQS type of queue, so the obvious question is: why don't you support that, so it'll be FIFO here? The answer is that it won't help, because events already arrive unsorted at the external queue.


Dmitry Blinov (17:20):

So the first in will not be the actual first from the system. Why is that? Because, for example, through the REST API or a data load, we have multiple post-entity calls for a single entity; for multiple entities, really, but a lot of them come for a single entity. They will be processed by parallel data processing nodes in the data processing layer, because our platform scales horizontally in a very efficient way, and nodes can be added on the [inaudible 00:18:00] while the data load happens, which actually happens often. So new nodes may be added, and some of the messages will be streamed to the new nodes. Normally, nodes process events in the same [inaudible 00:18:15] and at the same pace as they receive them.


Dmitry Blinov (18:19):

And normally, Event A will be processed by Node A first, and then the second event, Event B, will be processed by Node B second, and it will [inaudible 00:18:28] first in, first out here. But sometimes, for various reasons (maybe a milliseconds difference, maybe even a seconds delay; in the worst-case scenario something really wrong happened and a specific node hung, which we know shouldn't happen, and normally it does not, but it may, due to something in the platform, in the product, in a release, whatever), a specific node did not process an event in time. Say Event A, the first event, went out of this node much later than Event X, which arrived much later and should come out later, but came out first. Event X will go into the external queue first, and only after that, maybe 10 seconds later, Event A comes out of Node A. So there will not be first in, first out here; it'll be the other way around.


Dmitry Blinov (19:22):

This is why supporting FIFO here doesn't make any sense, and instead we should do something different. So let's look at what we should do about that. Today, the best way to sort your events is to add the update date timestamp into the message payload. It's not there by default; we have only the create date by default. So you should add it there, and you should read this update date every time you consume a message, before propagating it further into the downstream system or persisting it in a database. You should consume this field and compare it to what you have already persisted in your database, or whatever aggregation point you have in the pipeline; basically, check whether what you have for this entity ID already has a later or earlier update date.
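The timestamp comparison described above might look like this in Python. The `updatedTime` field name is an assumption based on the talk; the idea is simply to remember the newest timestamp applied per entity and drop anything older that arrives late:

```python
# Last applied update timestamp per entity URI (in a real consumer this
# would live in your database or another durable store).
latest_seen = {}

def should_apply(event):
    uri, ts = event["uri"], event["updatedTime"]
    if ts < latest_seen.get(uri, 0):
        return False          # stale event arrived late: ignore it
    latest_seen[uri] = ts
    return True               # newer (or first) event: apply downstream

print(should_apply({"uri": "entities/1", "updatedTime": 100}))  # True
print(should_apply({"uri": "entities/1", "updatedTime": 90}))   # False
print(should_apply({"uri": "entities/1", "updatedTime": 150}))  # True
```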


Dmitry Blinov (20:22):

If you already have the latest date and something earlier comes, you just ignore it. This is what we recommend. Starting with 21.3, you'll not need to specifically add this field to the payload; it'll be in the header of every message. Other options: you can, for example, do aggregations in things like Kafka. Today, again, we don't support direct streaming to Kafka, so you have to restream from the SQS queue to Kafka. In Kafka, you can implement your own processor and you can use a key-value store, these Kafka-specific things. If you have ever worked with Kafka, you know what a processor and a key-value store are. In the key-value store, you would put the timestamp as a key, and aggregate and sort events by timestamp there.


Dmitry Blinov (21:17):

And you can define an aggregation period: five seconds, 10 seconds, something longer. I don't think it makes sense to aggregate events for a minute or more, because there may be too many events to aggregate, but you can do that as well. Again, in 95-99% of cases that will do, but I would still keep the logic of checking the update timestamp, because that covers one hundred percent of all cases.
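A plain-Python sketch of that windowed aggregation: collect a short window of events, keep only the newest per entity by timestamp, then emit in timestamp order. In real Kafka this would live in a processor backed by a key-value store; field names here are the same assumptions as above.

```python
def aggregate_window(events):
    """Reduce a window of events to the newest event per entity URI."""
    newest = {}
    for e in events:
        uri = e["uri"]
        if uri not in newest or e["updatedTime"] > newest[uri]["updatedTime"]:
            newest[uri] = e
    # Emit in timestamp order so downstream applies changes chronologically.
    return sorted(newest.values(), key=lambda e: e["updatedTime"])

window = [
    {"uri": "entities/1", "updatedTime": 120},
    {"uri": "entities/2", "updatedTime": 100},
    {"uri": "entities/1", "updatedTime": 110},  # stale duplicate, dropped
]
print(aggregate_window(window))
```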


Dmitry Blinov (21:49):

Okay, if there are no questions, I'll continue. How do I filter an event stream? Let's talk about filtering. As I already mentioned, I can filter by type of event, and I can also use an object filter. This is the same filtering we use in search and export, and it's very flexible. It allows you to do a lot of [inaudible 00:22:13] type of things, and combined with the use of multiple queues, or, as I'll show you later, multiple destinations, it's a very powerful tool. Just some examples of how you can filter: you can filter across all attribute values by specific values. You just say: attributes, whenever there is an attribute with value [inaudible 00:22:35], any attribute, I want this event to go through, or [inaudible 00:22:42]. If there is something like [Mike or Michael 00:22:45], whatever, in the name attribute in the message, I want this message to get through.


Dmitry Blinov (22:52):

You can do any type of exact match; you can even use a regexp to filter (not search, I should say: filter). You can do a regexp filter to filter the messages you have in or out: you want to consume this, you don't want to consume that, just filter. You can obviously do all types of ranges: greater than, less than, greater than or equal, less than or equal. There's an example here, taken directly from our documentation. Let's do a little bit of a deep dive into filtering. What else can you do?
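To illustrate the semantics of these filter types (exact match, regexp, range), here is a toy evaluator in Python. This is not Reltio's actual filter syntax, which is covered in the product documentation; it only sketches what each kind of condition does against an event's attributes.

```python
import re

def matches(event, attr, op, value):
    """Toy filter: does any value of `attr` satisfy the condition?"""
    for v in event.get("attributes", {}).get(attr, []):
        if op == "equals" and v == value:
            return True
        if op == "regexp" and re.fullmatch(value, str(v)):
            return True
        if op == "gt" and v > value:
            return True
    return False

e = {"attributes": {"FirstName": ["Michael"], "Age": [42]}}
print(matches(e, "FirstName", "regexp", r"Mich?(ael)?"))  # True
print(matches(e, "Age", "gt", 30))                        # True
print(matches(e, "FirstName", "equals", "Mike"))          # False
```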


Chris (23:30):

Hey Dmitry, before you go to the deep dive, a quick question. Is there a way to know if an event was not published for an entity update due to some internal error within the platform? We have our audit logs for event traceability once we get the message on the SQS queue, but for any failure prior to the SQS message, we have no visibility. How would we ensure which event or record was not sent to the SQS queue?


Dmitry Blinov (24:03):

This is a good question, and we have a good picture to answer it. As I already mentioned, all messages for events that are propagated to the external queue are always propagated from the internal CRUD queue. We call it the create, read, update, delete queue, but it's just an internal queue. In the Reltio console UI, in Tenant Management, you have visibility into this queue; it's just called the internal queue there.


Dmitry Blinov (24:30):

So you can manage your internal queue, and you can actually monitor it. If you click on monitor internal queue, you'll see two types of monitoring there: failures and the dead letter queue. If anything failed to be propagated from here to here, you would see it in one of these two views: error messages or dead letter queue messages. This is the same pipeline there.


Dmitry Blinov (24:58):

Once something fails, it'll actually be retried a lot of times, about 100 times, with an exponential timeout. So first it'll be a very short timeout, and then longer and longer. For a long period of time this queue will try to resend the event into that queue. If it completely fails, it'll appear in the dead letter queue and you'll see it there. So obviously you'll always have visibility into whether something failed.


Dmitry Blinov (25:26):

That's how you know that it failed. As for something failing before this internal queue: we have an internal service that monitors this internal queue. We introduced it in the beginning, I'd say middle of 2019, beginning of 2020, I don't remember exactly when, as part of the [inaudible 00:25:47] of the platform; it's called the queue synchronization monitoring service.


Dmitry Blinov (25:52):

And it'll repeat any missed messages. Since we introduced this service, we see, I would say, almost no messages lost. Sometimes you'll see extra messages going out in the outside queues, so this is one of the things to consider, and this is why it's hard to bind messages to the activity log, by the way. Sometimes messages may be repeated from here to here, so you should make sure that you don't consider messages to be unique. Sometimes we will just repeat them to make sure you receive them in the downstream system. Messages can be repeated and replicated, but this is how we make sure that nothing is lost.


Chris (26:39):

Thanks Dmitry. That's it.


Dmitry Blinov (26:41):

Cool. Let's continue with a deeper dive into event filtering. Two things to consider: a filter can be configured per destination, and multiple destinations can be configured per queue. This is not documented anywhere yet, but we will definitely document it. If you ever saw the configuration of the queue in the physical tenant configuration, you've seen that they are actually called destinations there. The way we work with it today through our UI is that one destination is allowed per queue, but you can actually have many, and starting with 21.3 it'll be allowed; starting probably [inaudible 00:27:20], we will also build the UI for that. Today it's one destination per queue, but you can have multiple queues, same thing. Let's consider the next example use case (I'd like to receive [inaudible 00:27:33]), and this is actually a real use case from a real customer: I'd like to receive a message only if there was a change in a particular attribute. Any change; I'd just like to know that there was a change in this attribute. To address this use case, you would create two queues.


Dmitry Blinov (27:49):

You would configure one of them to only receive the Entity Created type of event, and only for the specific attribute. So if an entity was created with this attribute, meaning it did not exist before, you would receive it here. But if there was a change to an already existing entity in this specific attribute, you would use a filter called changes, mention this specific attribute there, and receive it in this queue. Why do I need two queues in this case? Because changes works on deltas, and when something is created the delta will be empty, so you would not receive that event here. This is, again, not properly documented (we are working actively on improving the documentation), but it's something that is not always well understood by our customers and partners: if something new is created, I would not receive an event here, and I would think that something doesn't work.
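A sketch of consuming those two queues as one logical signal, "this attribute was introduced or changed". The event shapes and the `changes` section below are hypothetical illustrations of the behavior described, not a documented schema.

```python
def attribute_touched(event, attr="FirstName"):
    """Was `attr` introduced (creation) or modified (update)?"""
    if event["type"] == "ENTITY_CREATED":
        # New entity: the changes delta is empty, so check the snapshot.
        return attr in event.get("object", {}).get("attributes", {})
    if event["type"] == "ENTITY_CHANGED":
        # Existing entity: the changes section names the touched attributes.
        return attr in event.get("changes", {})
    return False

created = {"type": "ENTITY_CREATED",
           "object": {"attributes": {"FirstName": ["Mike"]}}}
changed = {"type": "ENTITY_CHANGED", "changes": {"FirstName": {}}}
print(attribute_touched(created), attribute_touched(changed))  # True True
```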


Dmitry Blinov (28:49):

Instead, you should have two queues in this case and process creations and updates separately. Payload: real quick, on to payload. Today, we support payload; you can configure this field and put multiple different types there. What's not supported: you cannot filter the payload itself, you receive everything, and you cannot go too granular there. So if you say attributes, you will receive all the attributes. You can filter events using the filters from before, so you'll only receive an event if a specific attribute change was done, but you will receive an event with a payload containing all attributes. So the message itself will be big, and if it exceeds the limit of the queue, you have to go with the technique of the GET API call.


Dmitry Blinov (29:49):

If you keep this parameter empty, you'll receive everything. It's not intuitive today, but this is how it works. By default, it's never empty: by default, we will always have URI, type, createdBy, and createdTime, these four. If you just create a default queue from the UI, you'll have these four, but if you go ahead and remove everything, you instead start to receive everything. Starting with 21.3 we'll, first of all, improve our UI and we will [inaudible 00:30:21] configuration of this payload through our console UI. We also introduced a new type of payload, called Deltas. We'll talk about it on the next slide, but it's really cool. This is something that should improve the use of [inaudible 00:30:39] a lot in the future. This is definitely where we're going to move: into more granular and flexible configuration of the payload itself.
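The default minimal payload described here, sketched as a Python dict. The four field names are the ones named in the talk; the example values are made up.

```python
# What a default queue delivers per event: just enough to know something
# happened, not what changed. Values below are illustrative only.
default_event = {
    "uri": "entities/00xYz",
    "type": "ENTITY_CHANGED",
    "createdBy": "integration.user",
    "createdTime": 1630848000000,  # epoch milliseconds
}

# With only these fields, a consumer must GET the entity to see what
# changed; with a configured payload (or Deltas), the detail arrives inline.
print(sorted(default_event))
```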


Dmitry Blinov (30:54):

I would say that filtering on the events, what I receive or don't receive, is very flexible today. But in addition, I may want to narrow down to a very specific piece of the update, a very specific piece of the entity I would like to receive, so that I don't receive the entire entity. I want that for two reasons: first, to reduce the parsing effort, and second, to limit the size of the message. This is where we are going in our product roadmap overall, and obviously we'll improve the UI capabilities to configure all that. Delta Event Streaming: it will be coming in 21.3; the preview is already implemented, so it will be there. This is how you configure it: an additional type attribute at the destination level is added, and you can have two values there, Deltas or Snapshots. Snapshots is the previous behavior, which we talked about on the previous slide, where you can say I want to have attributes, or attributes and crosswalks, and so on.


Dmitry Blinov (31:55):

But if instead you say Deltas in this configuration, you'll start to see the payload always in the activity log Delta format. If you've ever used the activity log API and parsed the activity log JSON you received there, it'll be the same format here. So say I have an Entity Changed event for this entity, and I'll receive a payload in the Deltas format: was any [OV value 00:32:25] changed, yes or no; then the old value and the new value for this specific attribute, and which source is associated with this update. And this is the only thing I would receive from this event. So I know exactly what changed, from which source it changed, and how it was changed: basically, what was there before and what the new value is. This is new; this is what we added in the coming release.
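A hedged sketch of reading such a Deltas payload. The field names below (`deltas`, `oldValue`, `newValue`, `source`, `ovChanged`) are assumptions modeled on the description in the talk, not a documented schema; consult the activity log format for the real one.

```python
def summarize_delta(event):
    """Turn each delta entry into a human-readable change line."""
    out = []
    for d in event.get("deltas", []):
        out.append(f"{d['attribute']}: {d['oldValue']!r} -> {d['newValue']!r} "
                   f"(source={d['source']}, ovChanged={d['ovChanged']})")
    return out

event = {"type": "ENTITY_CHANGED",
         "uri": "entities/1",
         "deltas": [{"attribute": "FirstName", "oldValue": "Mike",
                     "newValue": "Michael", "source": "CRM",
                     "ovChanged": True}]}
print(summarize_delta(event)[0])
```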


Dmitry Blinov (32:55):

And with that, I think I'm ready to switch to questions and answers. Before that, I'd like to reiterate the best practices real quick. Always try things in a lower environment first before going to production. Always use a JSON parser to parse your messages. We recommend using payload (and now this new Deltas payload is something you can use) instead of going back to the REST API and doing a GET on every event you receive. We recommend processing events asynchronously: consumers should be implemented asynchronously. You should not wait to parse the current event before consuming new ones; you can consume them in the [inaudible 00:33:41] pool and then parse them in parallel while receiving them. [Jacque 00:33:46], I know you're on the call; anything else you'd like to recommend as a best practice here? Anything I missed, maybe? And I'll stop sharing.
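The asynchronous-consumer advice can be sketched like this: hand each received message to a worker pool instead of parsing inline, so receiving never waits on processing. The message source here is a plain list standing in for a real queue client.

```python
import json
from concurrent.futures import ThreadPoolExecutor

def process(raw):
    # Always parse as JSON, per the best practice above.
    event = json.loads(raw)
    return event["uri"]

# Stand-in for messages pulled from a queue.
messages = [json.dumps({"uri": f"entities/{i}"}) for i in range(5)]

# Workers parse in parallel while the main loop keeps receiving.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, messages))

print(results)
```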


Jacque (33:53):

No, I think you've covered that, Dmitry. Thank you.


Chris (33:56):

Hey, Dmitry. We do have one question here, but feel free to ask other questions in the chat as Dmitry answers this one. So: will the Delta payload also publish non-OV values?


Dmitry Blinov (34:12):

Yes, good question. Today, yes. But the next step there, which will be added, I think, between 21.3 and 22.1, is this next filter. This is exactly the next thing in the roadmap: filter data by [inaudible 00:34:28]. But today, yes, you'll receive everything. You only have an indication of whether what you received impacted OV attributes, yes or no. So you can filter out non-OV, but you'll receive it and you'll have to have the logic to filter it out.


Chris (34:47):

Great, thanks. Not as many questions today as usual; we usually get like 15 or 20 of them, so you must have been extremely clear, Dmitry. And folks like [Sandro 00:35:02], I don't know if you have any other questions; he usually does, and isn't usually too shy, so please feel free to ask them. So next week, please go to the events tab (oh, good, Sandro) and sign up for some other webinars. We do have one next week that I'm super excited about as well. Anything else, Dmitry, for kind of a last... ? Wait, we have a question: with Delta event streaming, do we not need to call the API again to get the latest information for an entity?


Dmitry Blinov (35:41):

Exactly, yes. This is why we implemented it. Just make sure that you aggregate and sort events so that you process the latest one by timestamp, as I mentioned before, and then you have everything you need in this exact payload, this Delta.


Dmitry Blinov (35:58):

You don't need to do that anymore, and this is the best practice. We would recommend customers start using this widely, for all use cases where you have active streaming. If today you have an implementation where you just receive an event (which, by default, only has [inaudible 00:36:18] URI, type, who made the change, and when the entity was created), then once you receive the event, you always have to go back to the REST API, do a GET on the specific entity using the entity [inaudible 00:36:37] you got from the event, understand what changed there, and parse everything. You compare somehow; you normally have complicated logic to understand what change just happened, to get it and propagate it to the downstream, or [inaudible 00:36:52] persist it somewhere downstream. You don't need any of that anymore. You just receive the event; you already have a Delta there. Just parse the Delta and store the change. That's all. It's very much simplified.


Chris (37:06):

Great. Hopefully these are helpful; please post in the chat if you think they are. You guys keep coming, so it must be helpful, but we are looking for other topics that would be of interest to you. Every now and then I'll get a topic or two, and then I'll go talk to the right person about it. So please feel free to put your comments in the chat as we close this out. Dmitry, thanks so much.


Dmitry Blinov (37:36):

Thank you, Chris. Just to add one more thing: a lot of the things I was talking about today are very new, so I think we need to let everybody consume and digest this, and then there'll be more questions about it, I'm sure. Much better, greater documentation is coming, so feel free to ask questions in the community about any of this.


Chris (38:03):

As a matter of fact, Mark asked a question in the community yesterday about an area around this, and thanks to [Jacque 00:38:12], who went in and had a really good answer to it. So please post your questions in the community as well as they come up. With that, I think we're finished now, Dmitry. Thanks so much, everyone, for coming. I look forward to seeing you next week and in the weeks to come. Also, get on the community, post your questions, let's connect with each other, and let us know if you need anything. So thanks, everyone. I'm going to stay on for a minute or two, but good stuff. Like you said, I think it's kind of new to folks, Dmitry, so...


Dmitry Blinov (38:51):

Thank you.


Chris (38:52):

It's really good. All right, well, I'm going to drop off now. Dmitry, thanks again. Jacque, thanks for coming. Thanks everyone for coming. Bye-bye.


Jacque (39:04):

Bye. Thank you.