Ramon Vermeulen
08/26/2022, 12:49 PM
I've been getting
Terminating due to java.lang.OutOfMemoryError: Java heap space
on connections (including custom connectors) lately. I'm running Airbyte within a GKE cluster, and I was wondering if there is anything I can do to solve this. I'm using the Helm chart deployment.
Update: Eventually editing the limits/requests of the "job" deployment in the Helm chart solved the issue for me.
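For anyone hitting the same OOM: the fix above amounts to raising the memory the job pods get. A minimal values.yaml sketch, assuming a chart version that lets you set extra env vars on the worker (the exact key can differ between chart versions, and the 2Gi figures are placeholders to tune):

worker:
  extraEnv:
    - name: JOB_MAIN_CONTAINER_MEMORY_REQUEST
      value: "2Gi"
    - name: JOB_MAIN_CONTAINER_MEMORY_LIMIT
      value: "2Gi"

The worker applies these as the resource requests/limits on every job pod it launches, which in turn gives the connector JVM more headroom.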
rea peleg
08/26/2022, 6:38 PM

Rocky Appiah
08/29/2022, 1:00 PM

Rocky Appiah
08/29/2022, 1:00 PM

Rocky Appiah
08/29/2022, 7:57 PMdocker exec -it "temperal_container_id" bash
, I don’t see anything in the /tmp
dir?Jordyn
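In case it helps anyone searching later: in the Docker deployment the job logs aren't in the temporal container; they're on the shared workspace volume mounted into the server/worker containers, at paths like the /tmp/workspace/<job_id>/<attempt>/logs.log mentioned further down this thread. A quick way to look, assuming the default airbyte-server container name:

docker exec -it airbyte-server ls /tmp/workspace
docker exec -it airbyte-server cat /tmp/workspace/<job_id>/<attempt>/logs.log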
Jordyn
08/29/2022, 8:50 PM

Rocky Appiah
08/30/2022, 11:55 AM

Denis
08/30/2022, 3:24 PM

Pavan Charan Dharmavaram Hari Rao
08/30/2022, 8:02 PM

Miłosz Szymczak
08/31/2022, 11:11 AM
2022-08-31 08:28:33 source > Syncing stream: Contact
2022-08-31 08:30:38 source > [{"message":"Your query request was running for too long.","errorCode":"QUERY_TIMEOUT"}]
2022-08-31 08:30:38 source > Cannot receive data for stream 'Contact', error message: 'Your query request was running for too long.'
As you can see, it's just 2 minutes, which I understand is the standard timeout for the Salesforce API. Unfortunately there's no way to modify the query from the UI to somehow prevent the long execution. Any ideas how to approach it? It's an initial load, so I believe this could work with increments, but manually setting the checkpoint in the Status entity in the database didn't help.
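For what it's worth, a sketch of what seeding that checkpoint would target, assuming a 2022-era Airbyte where per-connection checkpoints live in a state table in the internal config database (table and column names here are from memory, so treat them as assumptions and check your schema; the UUID placeholder is the connection id from the URL):

-- Inspect the checkpoint Airbyte has stored for the connection
-- ('<connection-uuid>' is a placeholder):
SELECT connection_id, state
FROM state
WHERE connection_id = '<connection-uuid>';

If that row never changes between attempts, the source is timing out before it reaches its first checkpoint, so there is nothing for the next attempt to resume from.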
Pablo Castelletto
08/31/2022, 4:33 PM
SELECT pg_create_logical_replication_slot('airbyte_slot', 'pgoutput');
ALTER TABLE test.test REPLICA IDENTITY DEFAULT;
CREATE PUBLICATION airbyte_publication FOR TABLE test.test;
Sadly, it started taking all of my disk space on my source db, so I deleted the slot.
It seems that the slot was producing events for every table in my db instead of just the test table indicated in the publication, and no one was consuming them 😞.
What can I do to fix this issue? I couldn't find a solution.
Thanks!!
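In case it saves someone the same surprise: this is expected Postgres behaviour rather than a bug. A logical replication slot retains WAL for the whole cluster; the publication only filters what gets decoded and sent to a consumer, not what the slot holds back. So an unconsumed slot grows the disk no matter how small the published table is. A sketch for checking and cleaning up, reusing the slot name from the commands above:

-- How much WAL is each slot pinning? An inactive slot with growing
-- retained WAL is the disk-filler.
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;

-- If nothing will ever consume the slot, drop it so WAL can be recycled:
SELECT pg_drop_replication_slot('airbyte_slot');

On Postgres 13+ you can also cap the damage with the max_slot_wal_keep_size setting, at the cost of the slot being invalidated if the consumer falls too far behind.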
Amanda Murphy
08/31/2022, 8:40 PM

Maxime Naulleau
09/02/2022, 9:56 AM

Dragan
09/02/2022, 3:47 PM
The source value is a date like
2022-01-01
but this ends up in Snowflake as VARCHAR
and it looks like 2022-01-01T00:00:00Z
We can normalise this in dbt, but is there a plan to sort this out, or is there an option to fix it while ingesting the data?
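Until the connector types the column properly, the usual workaround is exactly the dbt cast mentioned above. A minimal sketch, assuming a staging model over the raw table and hypothetical column/table names:

-- Snowflake's AUTO format detection parses the ISO-8601 string:
SELECT
    TRY_TO_TIMESTAMP_TZ(my_date)       AS my_date_ts,  -- full timestamp
    TRY_TO_TIMESTAMP_TZ(my_date)::DATE AS my_date_day  -- just the date part
FROM raw.my_table;

The TRY_ variants return NULL instead of erroring on malformed rows, which is usually what you want in a staging layer.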
Vu Le Hoang
09/05/2022, 7:22 AM

Abiodun Adenle
09/05/2022, 10:46 PM
In Oracle I have a record like:
Name: Jedi
Age: 30
Salary: null
In Snowflake I only get
Name: Jedi
Age: 30
The salary field is missing.
What can I do to ensure all null fields in Oracle are in Snowflake correctly as null?
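One thing that usually resolves this: the record most likely lands in the raw _airbyte_data VARIANT column with the null key simply omitted from the JSON, and in Snowflake looking up a missing key in a VARIANT already yields NULL. So selecting the field explicitly gives the NULL back; a sketch with a hypothetical stream name:

-- A missing key in a VARIANT returns NULL, so Salary comes out as NULL
-- even when the JSON omitted it:
SELECT
    _airbyte_data:"Name"::STRING   AS name,
    _airbyte_data:"Age"::NUMBER    AS age,
    _airbyte_data:"Salary"::NUMBER AS salary
FROM my_schema._airbyte_raw_employees;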
Giovani Freitas
09/06/2022, 9:32 PM

gunu
09/08/2022, 1:36 AM

robinspilner
09/08/2022, 2:25 PM

Pierre Kerschgens
09/08/2022, 2:56 PM

robinspilner
09/08/2022, 5:39 PMconnectionString = switch (connectionType) {
case SERVICE_NAME -> buildConnectionString(config, protocol.toString(), SERVICE_NAME.toUpperCase(),
config.get(CONNECTION_DATA).get(SERVICE_NAME).asText());
case SID -> buildConnectionString(config, protocol.toString(), SID.toUpperCase(), config.get(CONNECTION_DATA).get(SID).asText());
default -> throw new IllegalArgumentException("Unrecognized connection type: " + connectionType);
};
} else {
// To keep backward compatibility with existing connectors which doesn't have connection_data
// and use only sid.
connectionString = buildConnectionString(config, protocol.toString(), SID.toUpperCase(), config.get(SID).asText());
}
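For context on what the two branches produce: Oracle's thin JDBC driver uses different URL shapes for a service name versus a SID (host, port, and names below are just example values):

// SERVICE_NAME form:
jdbc:oracle:thin:@//db.example.com:1521/MYSERVICE
// SID form:
jdbc:oracle:thin:@db.example.com:1521:MYSID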
Pierre Kerschgens
09/09/2022, 8:21 AM

addu all
09/09/2022, 3:51 PM

addu all
09/09/2022, 3:53 PM

addu all
09/09/2022, 4:07 PM

Philip Johnson
09/10/2022, 12:47 AM
cross-database references are not implemented...
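That error is Postgres itself refusing a three-part name. Unlike Snowflake or BigQuery, a Postgres session can only see the database it is connected to, so any query (including generated normalization SQL) that qualifies an object with another database name fails. An illustration with hypothetical names:

-- Fails: other_db is a different database on the same server.
SELECT * FROM other_db.public.users;
-- ERROR:  cross-database references are not implemented

-- Works: keep everything in one database and separate by schema instead.
SELECT * FROM analytics.users;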
Philip Johnson
09/10/2022, 8:57 PM

Slackbot
09/11/2022, 2:51 PM

Toan Doan
09/11/2022, 11:53 PM

Eli Sigal
09/12/2022, 3:43 PM
Docker volume job log path: /tmp/workspace/1091/0/logs.log
it says
Source did not output any state messages
hard to say exactly
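For anyone decoding that warning: it means the source finished without emitting any STATE records, so Airbyte has nothing to checkpoint and an incremental sync will re-read from scratch. In the Airbyte protocol a (legacy-style) state record is just a line of JSON on stdout, roughly like this, with a hypothetical cursor field:

{"type": "STATE", "state": {"data": {"updated_at": "2022-09-12T00:00:00Z"}}}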