Júlia Lemes
08/22/2025, 2:22 PM
Lisha Zhang
08/22/2025, 2:53 PM
Kiet Luu (Ken)
08/22/2025, 3:13 PM
Renu Fulmali
08/22/2025, 4:18 PM
yingting
08/22/2025, 4:20 PM
io.airbyte.cdk.integrations.source.relationaldb.state.FailedRecordIteratorException: java.lang.RuntimeException: java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: feature not supported on beam relations
Lukas Heinz
08/22/2025, 4:27 PM
An unknown error occurred. (HTTP 504)
Todd Matthews
08/22/2025, 4:39 PM
Júlia Lemes
08/22/2025, 7:18 PM
Júlia Lemes
08/22/2025, 7:33 PM
Cody Redmond
08/22/2025, 7:46 PM
Júlia Lemes
08/22/2025, 8:20 PM
Steve Caldwell
08/22/2025, 8:41 PM
Júlia Lemes
08/22/2025, 9:11 PM
Júlia Lemes
08/22/2025, 9:22 PM
Júlia Lemes
08/22/2025, 10:00 PM
Hari Haran R
08/23/2025, 5:55 AM
INISH KASHYAP
08/23/2025, 10:37 AM
I'm hitting a consistent failure installing Airbyte with abctl on AWS EC2 and would appreciate any guidance or insights.
Environment Setup:
• Instance: AWS EC2 t3.large (2 vCPUs, 8GB RAM, 45GB storage)
• OS: Amazon Linux 2023
• Docker: 25.0.8
• abctl: 0.30.1
• Region: ap-south-1 (India)
The Problem:
Every abctl local install attempt fails at the exact same point - during nginx/ingress-nginx Helm chart installation. The process runs for 75+ minutes before timing out.
Command used:
```bash
abctl local install --host myairbytezin.duckdns.org --insecure-cookies --port 8000
```
Error Pattern:
1. ✅ Cluster creation succeeds
2. ✅ Initial setup completes
3. ❌ Gets stuck at: Installing 'nginx/ingress-nginx' (version: 4.13.1) Helm Chart
4. ❌ Repeated timeout errors:
W0823 04:39:38.002418 13320 reflector.go:561] failed to list *unstructured.Unstructured:
Get "<https://127.0.0.1:34281/apis/batch/v1/namespaces/ingress-nginx/jobs>": dial tcp 127.0.0.1:34281: i/o timeout
Resources Confirmed Sufficient:
• Storage: 36GB free (21% usage)
• Memory: 7.0GB available (only 387MB used)
• Docker: Healthy with 6.7GB reclaimable space
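A quick diagnostic sketch for the timeout above (not an abctl feature; the container name airbyte-abctl-control-plane and the port value are assumptions you should confirm with docker ps): it lists the running containers and attempts a plain TCP connection to the kind API server port that the install log complained about.
```python
# Rough diagnostic sketch, not part of abctl: check whether the kind
# control-plane container abctl creates is up and whether the API server
# port from the error log accepts TCP connections.
# Assumptions to verify first with `docker ps`: the container is named
# "airbyte-abctl-control-plane" and the API server is published on 34281.
import socket
import subprocess

# Show running containers, their status, and published ports.
ps = subprocess.run(
    ["docker", "ps", "--format", "{{.Names}}\t{{.Status}}\t{{.Ports}}"],
    capture_output=True, text=True, check=True,
)
print(ps.stdout)

# Try a raw TCP connect to the address the reflector warning reported.
api_host, api_port = "127.0.0.1", 34281
try:
    with socket.create_connection((api_host, api_port), timeout=5):
        print(f"TCP connect to {api_host}:{api_port} succeeded")
except OSError as err:
    print(f"TCP connect to {api_host}:{api_port} failed: {err}")
```
If the container is running but the port still times out, the kind API server itself is likely overloaded or restarting, and the container logs (docker logs on the control-plane container, assuming the name above) are the next place to look.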
Thomas Niederberger
08/23/2025, 11:32 PM
Poorna Premachandra
08/24/2025, 3:28 AM
[config_error] MySQL Connector Error: The sync encountered an unexpected error in the change event producer and has stopped. Please check the logs for details and troubleshoot accordingly.
https://docs.oracle.com/javase/9/docs/api/java/lang/RuntimeException.html
The connection is set to sync via CDC. The initial sync completes in 12 hours with 375 GB loaded, and binlogs are retained for 16 hours. After the initial sync completed, the second sync ran and failed after about 1 hour. In the source connection I have set the initial load timeout to 16 hours and concurrency to 2 as well.
What could be the issue here?
Stav Hans
08/24/2025, 9:27 AM
Ofek Eliahu
08/24/2025, 11:42 AM
Noam Moskowitz
08/24/2025, 11:44 AM
io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: grpc: received message larger than max (4194365 vs. 4194304)
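For context, 4194304 bytes is gRPC's default 4 MiB message limit, and the payload here is only 61 bytes over it. As a generic illustration (plain grpcio with a placeholder target; where, or whether, this knob is exposed in an Airbyte or Temporal deployment is a separate question), a gRPC client can raise the limit through channel options:
```python
# Illustration of the generic gRPC knob behind the error above: the default
# max message size is 4 MiB (4194304 bytes). Target address is a placeholder.
import grpc

MAX_MESSAGE_BYTES = 16 * 1024 * 1024  # example value: 16 MiB

channel = grpc.insecure_channel(
    "localhost:7233",
    options=[
        ("grpc.max_send_message_length", MAX_MESSAGE_BYTES),
        ("grpc.max_receive_message_length", MAX_MESSAGE_BYTES),
    ],
)
```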
Ofek Eliahu
08/24/2025, 1:12 PM
```python
# Build a Basic auth header from the client credentials, then pass it along
# when refreshing the OAuth access token.
basic_auth = BasicHttpAuthenticator(
    username=credentials["client_id"],
    password=credentials["client_secret"],
    config={},
    parameters={},
)
return SingleUseRefreshTokenOauth2Authenticator(
    connector_config=config,
    token_refresh_endpoint="https://app.gong.io/oauth2/generate-customer-token",
    client_id=credentials["client_id"],
    client_secret=credentials["client_secret"],
    access_token_config_path=("access_token",),
    refresh_token_config_path=("refresh_token",),
    token_expiry_date_config_path=("token_expiry_date",),
    refresh_request_body={
        "access_type": "offline",
    },
    refresh_request_headers=basic_auth.get_auth_header(),
)
```
Temitope Sam
08/24/2025, 2:32 PM
kapa.ai
08/24/2025, 7:16 PM
The error (class_name key detected; "This feature is not compatible with the declarative YAML executor. To use this executor, please try again with the Python executor.") means that your connector's manifest references a Python class (via the class_name key), which requires actual Python code to be executed. The declarative YAML executor in Airbyte is designed to run connectors defined purely in YAML, without any custom Python code. When a manifest references a class, it expects the Python executor to be used, which can interpret and run Python code.
What does this mean for you?
- If your connector (like source-mailchimp) uses the class_name key or otherwise requires Python code, you cannot use it with the declarative YAML executor.
- You need to use the Python executor, which is capable of running connectors that include both YAML and Python code.
What should you do?
- If you want to use the declarative YAML executor, ensure your connector is fully declarative (YAML only, no class_name or custom Python code).
- If your connector requires Python code, run it with the Python executor instead.
This limitation is by design and is not a bug. The knowledge sources do not provide a workaround for using Python-based connectors with the declarative YAML executor, so you must use the appropriate executor for your connector type. If you need to use your connector in environments like PyAirbyte or Jupyter, and it requires Python code, you must ensure the environment supports the Python executor, not just the declarative YAML executor. For purely declarative connectors, you can use the YAML executor as described in the [PyAirbyte tutorial](https://airbyte.com/tutorials/how-to-add-custom-source-to-pyairbyte-using-the-no-code-builder). If you need more details on how to structure your connector or switch executors, please provide more context or refer to the official Airbyte documentation.
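As a minimal sketch of that Python-executor path via PyAirbyte (the package name is airbyte; the config keys below are placeholders, so consult the source-mailchimp spec for the exact shape), one might run the connector like this:
```python
# Minimal PyAirbyte sketch: run source-mailchimp through the default
# Python-based execution path rather than the declarative YAML executor.
# The config keys below are placeholders; consult the connector spec.
import airbyte as ab

source = ab.get_source(
    "source-mailchimp",
    config={"credentials": {"auth_type": "apikey", "apikey": "<your-api-key>"}},
    install_if_missing=True,
)
source.check()               # validate the configuration
source.select_all_streams()  # or select_streams([...]) for a subset
result = source.read()       # read records into the local cache
```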
Colin
08/24/2025, 7:16 PM
class_name key detected). This feature is not compatible with the declarative YAML executor. To use this executor, please try again with the Python executor.
Connector Name: ‘source-mailchimp’
Colin
08/24/2025, 7:33 PM
Hari Haran R
08/25/2025, 6:14 AM
Fabrizio Spini
08/25/2025, 7:35 AM
Setting emit.unchanged.fields = true in Debezium would solve this, but this parameter is not exposed in the Airbyte UI for MySQL Source v3.11.1.
My Questions:
1. Is there a supported way in OSS to enable emit.unchanged.fields in v3.11.1?
2. Is there any plan to expose emit.unchanged.fields in the Airbyte UI (or env vars) for OSS users?
08/25/2025, 8:29 AM2025-08-25 08:03:43 platform ERROR Stage Pipeline Exception: io.airbyte.workload.launcher.pipeline.stages.model.StageError: java.lang.RuntimeException: Init container for Pod: pods did not complete successfully. Actual termination reason: Error.
message: java.lang.RuntimeException: Init container for Pod: pods did not complete successfully. Actual termination reason: Error.
stackTrace: [Ljava.lang.StackTraceElement;@5e394e0a
2025-08-25 08:03:43 platform INFO Attempting to update workload: d749198f-04d4-49e7-8560-2ccaec19c5c2_25675_0_sync to FAILED.
2025-08-25 08:03:43 platform INFO Pipeline aborted after error for workload: d749198f-04d4-49e7-8560-2ccaec19c5c2_25675_0_sync.
2025-08-25 08:03:43 platform INFO
----- START POST REPLICATION OPERATIONS -----
2025-08-25 08:03:43 platform INFO No post-replication operation(s) to perform.
2025-08-25 08:03:43 platform INFO
----- END POST REPLICATION OPERATIONS -----