Hi @Kartik Shah,
That is an Amazon SQS size limitation. AWS caps the size of a message published to an SQS queue at 256 KB. The way we get around this today with full object publish is that if an object is too large to be written to SQS, we write just the URI and the event type, and include an error message inside that event saying the object was too large.
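To make that concrete, the fallback event would carry something along these lines. This is just a sketch: the field names are my assumptions, not the documented Reltio event schema, so check the actual messages in your queue:

```python
# Hypothetical fallback event -- field names are assumptions, not the
# documented Reltio event schema.
fallback_event = {
    "type": "ENTITY_CHANGED",   # the event type
    "uri": "entities/abc123",   # URI you can GET to fetch the full object
    "error": "Object too large to publish to SQS (256 KB limit)",
}
```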
You can then take that URI and make an additional GET call to pull all of that data from Reltio, so there is a little back and forth for those large objects. But we believe full object publish will reduce the chattiness between those API transactions and the events being read downstream, and the back and forth should be minimal: most of our customers' objects are well under that size. The size limit itself is an AWS limitation, so we cannot really change it.

For payloads over the limit, AWS's own pattern (the one behind the Amazon SQS Extended Client Library) works the same way today: Amazon writes the message content to an S3 location and gives you a link to that location in the SQS message. The SQS message itself never supports more than 256 KB; it is still capped at that maximum.
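For the back and forth on oversized objects, the consumer side could look roughly like this. A minimal sketch assuming boto3 and requests, an OAuth bearer token you have already obtained, and the same hypothetical field names as above; the queue URL and environment host are placeholders:

```python
import json

import boto3
import requests

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/reltio-events"  # placeholder
RELTIO_BASE = "https://your-env.reltio.com"  # placeholder; real deployments may
                                             # need the tenant API path prefix
TOKEN = "..."  # OAuth bearer token obtained separately

sqs = boto3.client("sqs")

resp = sqs.receive_message(
    QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for msg in resp.get("Messages", []):
    event = json.loads(msg["Body"])
    if event.get("error"):
        # Full object was too large for the 256 KB SQS cap:
        # fall back to a GET against the URI carried in the event.
        obj = requests.get(
            RELTIO_BASE + "/" + event["uri"],
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        ).json()
    else:
        obj = event  # full object publish fit within the limit
    # ... process obj downstream ...
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

At higher throughput you would batch these GETs and cache the token, but the shape of the fallback is the same: read the event, notice the oversize marker, and fetch the full object by URI.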
If you missed it, make sure you check out the #dataextraction webinar on Reltio.
------------------------------
Mike Frasca
------------------------------
Original Message:
Sent: 07-15-2021 09:23
From: Kartik Shah
Subject: We are reading the whole object from SQS but we have a limitation of 256 KB. Do you plan to increase the size in the future?
If not, is there a workaround? #dataextraction
------------------------------
Kartik Shah
BCBSNC
------------------------------