# ask-community-for-troubleshooting
r
I’m getting the error below when running an incremental export from a Postgres database using CDC via the wal2json plugin. Any insight?
Stack Trace: org.postgresql.util.PSQLException: ERROR: out of memory
  Detail: Cannot enlarge string buffer containing 1073741293 bytes by 659 more bytes.
  Where: slot "airbyte_slot", output plugin "wal2json", in the change callback, associated LSN 1/92C0F500
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
Setup Airbyte: 0.40.10 Source: postgres 1.0.11 connector Destination: Snowflake 0.4.38
✍️ 1
Only thing I see on the forum about this is here, but I see no resolution!
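For reference, the replication slot's state can be inspected directly in Postgres; a minimal query, assuming the slot is named airbyte_slot as in the stack trace:

```sql
-- Inspect the slot Airbyte is reading from: which output plugin it uses,
-- whether a consumer is attached, and the LSN positions, which indicate
-- how much WAL it still has to decode.
SELECT slot_name, plugin, active, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name = 'airbyte_slot';
```

A large gap between restart_lsn and the current WAL position can mean one big transaction is being decoded, which would be consistent with wal2json building a single JSON document that overflows the roughly 1 GB string-buffer limit in the error above.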
u
@[DEPRECATED] Marcos Marx turned this thread into Zendesk ticket 2653 to ensure timely resolution!
🙏 1
r
Upgrading to the postgres 1.0.14 connector, since it looks like Debezium was upgraded to 1.9.6, which hopefully fixes the issue.
Is there a way for me to view the ticket created in Zendesk? Or is that for Airbyte employees only?
u
Hi Rocky, The Zendesk tickets are indeed for internal use only. I'm looking into this and hope to have some ideas for you soon!
🙏 1
u
I found a similar issue where upgrading fixed it: https://discuss.airbyte.io/t/postgres-cdc-not-working-no-deleted-at-and-incremental-append-seems-to-add-all-rows-and-not-updates/1653/6 It was a while ago, so not sure if it'll be applicable to your case. Were you able to upgrade both Airbyte and the source connector successfully?
r
Yes, upgraded both. Doing an initial sync now.
That issue seems to be a different problem though. We specifically can’t get records from the Postgres replication slot because of the memory error above. Here is how Airbyte starts the replication slot. I wonder if write-in-chunks needs to be set to 1, not 0:
START_REPLICATION SLOT "airbyte_slot" LOGICAL 1/779AD058 ("include-not-null" 'true', "include-timestamp" '1', "pretty-print" '1', "write-in-chunks" '0', "include-xids" '1')
Or maybe write-in-chunks won’t return proper JSON? Not sure.
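The effect of write-in-chunks can be tried outside Airbyte by peeking at the slot. A hedged sketch, assuming the airbyte_slot slot from the error message and that peeking a few changes is acceptable on this database:

```sql
-- Peek (without consuming) up to 10 pending changes, asking wal2json to
-- emit each change as its own chunk instead of one giant per-transaction
-- JSON document, which is what overflows the 1 GB string buffer.
SELECT lsn, xid, data
FROM pg_logical_slot_peek_changes(
    'airbyte_slot', NULL, 10,
    'write-in-chunks', '1',
    'include-xids', '1'
);
```

One caveat: with write-in-chunks set to '1', each returned row is a fragment of the transaction's JSON rather than a complete standalone document, so the consumer has to reassemble the pieces. That may be why the connector defaults it to '0'.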