# feedback-and-requests
Hello team! Referring to the issue on ignoring records too big for Redshift here, what is the recommended solution to address these ignored records? We’d still want them to end up in Redshift eventually, so wondering whether anybody has ideas on how to address this?
Hi @Arnold Cheong, @Chris (deprecated profile) commented the following in the code:
Truncate json data instead of throwing whole record away?
or should we upload it into a special rejected record table instead?
I'd suggest you open an issue as a feature request on our repo if you think one of these options is valid for your use case, or if you have another solution in mind.
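To illustrate the two options from Chris's comment, here is a minimal sketch (hypothetical helper names, not Airbyte's actual destination code) of "truncate the JSON" versus "route the record to a rejected-records table", assuming the limit in question is Redshift's 65535-byte VARCHAR maximum:

```python
import json

# Redshift's VARCHAR(MAX) holds at most 65535 bytes; serialized records
# above this size are the ones the connector currently drops.
REDSHIFT_VARCHAR_MAX_BYTES = 65535

def handle_oversized_record(record: dict, write_row, write_rejected_row):
    """Sketch only: `write_row` and `write_rejected_row` stand in for the
    destination's normal and rejected-table write paths."""
    payload = json.dumps(record)
    if len(payload.encode("utf-8")) <= REDSHIFT_VARCHAR_MAX_BYTES:
        write_row(payload)  # fits: normal path
    else:
        # Option 1: truncate the serialized JSON so a prefix of the data
        # still lands in the main table (note: the truncated string is no
        # longer valid JSON, so downstream parsing would need to tolerate it).
        truncated = (
            payload.encode("utf-8")[:REDSHIFT_VARCHAR_MAX_BYTES]
            .decode("utf-8", errors="ignore")
        )
        write_row(truncated)
        # Option 2 (alternative): keep the full payload in a dedicated
        # rejected-records table for later reprocessing instead.
        # write_rejected_row(payload)
```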
Does Airbyte have a “dead letter queue” equivalent like there is with Kafka?
Thanks @[DEPRECATED] Augustin Lafanechere, I’ll do that after the holidays. Was just wondering if any other Airbyte Redshift users found workarounds for this, as I imagine ignoring records wholesale is hardly a long-term solution.
@[DEPRECATED] Augustin Lafanechere just to double check, does the existing implementation give any sort of warning in the logs if records are ignored?
@Chris (deprecated profile) perhaps you’re the better person to answer, please and thank you!
Using Redshift’s SUPER type would handle records of up to 1MB: https://docs.aws.amazon.com/redshift/latest/dg/r_SUPER_type.html
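For reference, a minimal sketch of what that could look like: a table with a SUPER column and an insert via Redshift's JSON_PARSE function, which accepts semi-structured values up to 1 MB. The table name, connection details, and sample record are made up for illustration; any Postgres-compatible driver (psycopg2 here) works against Redshift.

```python
import json
import psycopg2  # placeholder connection details below

conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="<password>",
)

with conn, conn.cursor() as cur:
    # SUPER stores semi-structured values up to 1 MB, so records that
    # overflow a VARCHAR(65535) column can still be kept.
    cur.execute("CREATE TABLE IF NOT EXISTS raw_events (id BIGINT, data SUPER);")

    # A record whose serialized JSON is well over the 64 KB VARCHAR limit.
    big_record = {"id": 42, "payload": {"nested": ["..."] * 20000}}
    cur.execute(
        "INSERT INTO raw_events VALUES (%s, JSON_PARSE(%s));",
        (big_record["id"], json.dumps(big_record)),
    )
```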