# ask-ai
  • k

    kapa.ai

    12/10/2025, 7:21 PM
    The knowledge sources don't contain a direct description of the exact error message you're seeing for the Google Ads "Test" save action, but they do show closely related issues and how Airbyte surfaces HTTP errors.
    1. What `errors.http.default` means
    In other contexts, Airbyte surfaces HTTP errors as a generic `errors.http.default` with an attached HTTP status code and backend message (for example, a 409 with "Failed to access database. Check the server logs for more information"). [[DB 409 issue](https://github.com/airbytehq/airbyte/issues/54150)] This suggests your "Failed to save Google Ads Test due to the following error: errors.http.default" is a generic UI wrapper around an HTTP error returned by the Airbyte backend (not by Google Ads itself).
    2. Known Google Ads–related save/check issues
    - There is an open issue where a Google Ads source "Check connection" passes at the connector level but the platform fails with `422 Unprocessable Entity` on an internal `updateSource` call after the check, even though the same config works on Airbyte Cloud. [[Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)]
    - Another open issue reports Google Ads authentication problems that appear specific to Airbyte's handling of Google Ads credentials, while the same credentials work in another tool. [[Google Ads auth issue](https://github.com/airbytehq/airbyte/issues/63374)]
    Your symptom ("Failed to save … Test") is consistent with these platform-side problems: the connector may be fine, but the platform fails when persisting or updating the source.
    3. What you can do next
    Because the sources don't show a definitive fix for your exact message, I can't say precisely what's wrong in your instance. However, based on the related issues, the next concrete steps are:
    - Check the underlying HTTP error and logs. Look at the browser network tab for the failing request (likely `/api/v1/sources/create`, `/api/v1/sources/update`, or `updateSource`) and note the HTTP status code and response body. Check `airbyte-server` logs around the time of the failure; in similar cases they contain a more specific message (e.g., DB conflict, validation error). [[DB 409 issue](https://github.com/airbytehq/airbyte/issues/54150); [Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)]
    - Compare with the known 422 Google Ads issue. If you see a `422 Unprocessable Entity` on an internal `updateSource` call after a successful check, you're likely hitting the same bug as in the OSS + `abctl` deployment report. [[Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)] In that case, the problem is on the platform side rather than your Google Ads credentials.
    - Try on Airbyte Cloud or another environment (if possible). In the 422 issue, the same Google Ads config worked on Airbyte Cloud but failed on a specific OSS deployment, which helped confirm it was a platform bug. [[Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)]
    If you can share the HTTP status code and response body from the failing request (or the corresponding `airbyte-server` log line), I can help you map it more precisely to the known issues in the provided sources.
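    If it helps, here is a minimal Python sketch for the first step above: capturing the raw HTTP status and body instead of the generic `errors.http.default` wrapper. It is only a sketch, not a documented procedure: the base URL, credentials, endpoint, and `payload.json` file are assumptions you would replace with values from your own deployment and from the request captured in the browser network tab.
```python
# Hypothetical sketch: replay the failing save request and print the raw HTTP
# error that the UI hides behind "errors.http.default".
# Assumptions: Airbyte OSS reachable at AIRBYTE_URL, basic auth enabled, and
# the JSON body of the failing request copied from the browser network tab
# into payload.json. Adjust the endpoint if your network tab shows a different one.
import json

import requests

AIRBYTE_URL = "http://localhost:8000"      # placeholder; use your instance URL
ENDPOINT = "/api/v1/sources/update"        # or /api/v1/sources/create

with open("payload.json") as f:
    payload = json.load(f)

resp = requests.post(
    f"{AIRBYTE_URL}{ENDPOINT}",
    json=payload,
    auth=("airbyte", "password"),          # placeholder credentials
    timeout=30,
)

# These two values (e.g. a 409 or 422 plus the backend message) are what you
# would compare against the GitHub issues linked above.
print(resp.status_code)
print(resp.text)
```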
  • d

    Danton Bertuol

    12/10/2025, 7:41 PM
    High database usage from the getPrivilegesTableForCurrentUser query, which uses has_table_privilege, in the Postgres source
    k
    • 2
    • 22
  • r

    Rafael Felipe

    12/10/2025, 8:13 PM
    Unable to connect: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=X, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: Remote host terminated the handshake}, caused by {java.io.EOFException: SSL peer shut down incorrectly}}
    k
    • 2
    • 1
  • r

    Rafael Felipe

    12/10/2025, 8:33 PM
    mongo self-hosted, what should I have at `replicaSet`?
    k
    • 2
    • 1
  • m

    Michael Gallivan

    12/10/2025, 9:04 PM
    @kapa.ai can I overwrite the source-defined cursor for Greenhouse?
    k
    • 2
    • 1
  • a

    A S Yamini

    12/11/2025, 4:55 AM
    2025-12-11T045446.206073Z [info ] Email alerting: attempt 1 [airflow.utils.email] loc=email.py:261
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/airflow/.local/lib/python3.12/site-packages/airflow/utils/email.py", line 79, in send_email
        return backend(
      File "/home/airflow/.local/lib/python3.12/site-packages/airflow/utils/email.py", line 154, in send_email_smtp
        send_mime_email(e_from=mail_from, e_to=recipients, mime_msg=msg, conn_id=conn_id, dryrun=dryrun)
      File "/home/airflow/.local/lib/python3.12/site-packages/airflow/utils/email.py", line 263, in send_mime_email
        smtp_conn = _get_smtp_connection(smtp_host, smtp_port, smtp_timeout, smtp_ssl)
      File "/home/airflow/.local/lib/python3.12/site-packages/airflow/utils/email.py", line 306, in _get_smtp_connection
        return smtplib.SMTP(host=host, port=port, timeout=timeout)
      File "/usr/python/lib/python3.12/smtplib.py", line 255, in __init__
        (code, msg) = self.connect(host, port)
      File "/usr/python/lib/python3.12/smtplib.py", line 341, in connect
        self.sock = self._get_socket(host, port, self.timeout)
      File "/usr/python/lib/python3.12/smtplib.py", line 312, in _get_socket
        return socket.create_connection((host, port), timeout,
      File "/usr/python/lib/python3.12/socket.py", line 865, in create_connection
        raise exceptions[0]
      File "/usr/python/lib/python3.12/socket.py", line 850, in create_connection
        sock.connect(sa)
    OSError: [Errno 99] Cannot assign requested address
    k
    • 2
    • 1
  • m

    Muhammad Bilal

    12/11/2025, 6:22 AM
    @kapa.ai I have deployed airbyte to my eks cluster using helm. How can I add labels to the airbyte jobs?
    k
    • 2
    • 10
  • m

    Muhammad Hassaan Mustafa

    12/11/2025, 10:46 AM
    @kapa.ai I want to refresh data, deleting the already-present data, via the API
    k
    • 2
    • 1
  • t

    Tia

    12/11/2025, 10:56 AM
    @kapa.ai I have a column _id with type double; however, it failed during the sync process with the error: Unsupported _id type Double. Is the INT64 type acceptable for Airbyte version 2.0.0, or should it be converted to string?
    k
    • 2
    • 8
  • a

    Abhijith C

    12/11/2025, 12:54 PM
    @kapa.ai Is there a way to configure a priority class for Airbyte's job pods to prevent eviction by Karpenter due to underutilization?
    k
    • 2
    • 1
  • b

    Bernd Töpfer

    12/11/2025, 1:37 PM
    I am getting this error when I load a large table from Oracle to Databricks:
    2025-12-11 12:07:14 destination INFO sync-operations-2 i.a.i.d.d.o.DatabricksStorageOperation(cleanupStage):158 Deleting Staging directory /Volumes/raw_data/airbyte_internal/coast_raw__stream_mad_staging/2025_12_11_10_8a825abf-3787-4e2c-ba96-a071dc78ac0c
    2025-12-11 12:07:14 destination ERROR main i.a.c.i.u.ConnectorExceptionUtil(logAllAndThrowFirst):37 Following exceptions occurred while finalizing the sync
    java.util.concurrent.CompletionException: com.databricks.sdk.core.error.platform.BadRequest: Response from server (Bad Request) { "error_code" : "BAD_REQUEST", "message" : "The directory is not empty. This operation is not supported on non-empty directories.", "details" : [ { "@type" : "type.googleapis.com/google.rpc.ErrorInfo", "reason" : "FILES_API_DIRECTORY_IS_NOT_EMPTY", "domain":"**********" } ] }: Cannot construct instance of com.databricks.sdk.core.error.ErrorDetail, problem: Cannot invoke "java.util.Map.getClass()" because "m" is null at [Source: REDACTED (StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION disabled); line: 8, column: 3] (through reference chain: com.databricks.sdk.core.error.ApiErrorBody["details"]->java.util.ArrayList[0]) at io.airbyte.commons.concurrency.CompletableFutures$allOf$1.invoke(CompletableFutures.kt:46) at io.airbyte.commons.concurrency.CompletableFutures$allOf$1.invoke(CompletableFutures.kt:31) at io.airbyte.commons.concurrency.CompletableFutures.allOf$lambda$1(CompletableFutures.kt:31) at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934) at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1773) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) at java.base/java.lang.Thread.run(Thread.java:1583)
    Caused by: com.databricks.sdk.core.error.platform.BadRequest: Response from server (Bad Request) { "error_code" : "BAD_REQUEST", "message" : "The directory is not empty. This operation is not supported on non-empty directories.", "details" : [ { "@type" : "type.googleapis.com/google.rpc.ErrorInfo", "reason" : "FILES_API_DIRECTORY_IS_NOT_EMPTY", "domain":"**********" } ] }: Cannot construct instance of com.databricks.sdk.core.error.ErrorDetail, problem: Cannot invoke "java.util.Map.getClass()" because "m" is null at [Source: REDACTED (StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION
    disabled); line: 8, column: 3] (through reference chain: com.databricks.sdk.core.error.ApiErrorBody["details"]->java.util.ArrayList[0]) at com.databricks.sdk.core.error.AbstractErrorMapper.apply(AbstractErrorMapper.java:43) at com.databricks.sdk.core.error.ApiErrors.getDatabricksError(ApiErrors.java:29) at com.databricks.sdk.core.ApiClient.executeInner(ApiClient.java:276) at com.databricks.sdk.core.ApiClient.getResponse(ApiClient.java:235) at com.databricks.sdk.core.ApiClient.execute(ApiClient.java:227) at com.databricks.sdk.core.ApiClient.DELETE(ApiClient.java:188) at com.databricks.sdk.service.files.FilesImpl.deleteDirectory(FilesImpl.java:45) at com.databricks.sdk.service.files.FilesAPI.deleteDirectory(FilesAPI.java:85) at com.databricks.sdk.service.files.FilesAPI.deleteDirectory(FilesAPI.java:73) at io.airbyte.integrations.destination.databricks.operation.DatabricksStorageOperation.cleanupStage(DatabricksStorageOperation.kt:159) at io.airbyte.integrations.base.destination.operation.AbstractStreamOperation.finalizeTable(AbstractStreamOperation.kt:342) at io.airbyte.integrations.base.destination.operation.DefaultSyncOperation.finalizeStreams$lambda$9$lambda$8(DefaultSyncOperation.kt:138) at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
    k
    • 2
    • 1
  • t

    Todd Matthews

    12/11/2025, 1:42 PM
    How do I know which Terraform provider version works with which version of Airbyte OSS?
    k
    • 2
    • 1
  • b

    Bernd Töpfer

    12/11/2025, 1:54 PM
    How can I get rid of this error when writing a table to Databricks: java.sql.SQLException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: 42P01, Query: COPY INTO *, Error message from Server: org.apache.hive.service.cli.HiveSQLException: Error running query: [DELTA_MISSING_DELTA_TABLE_COPY_INTO] com.databricks.sql.transaction.tahoe.DeltaAnalysisException: [DELTA_MISSING_DELTA_TABLE_COPY_INTO] Table doesn't exist. Create an empty Delta table first using CREATE TABLE raw_data.airbyte_internal.coast_raw__stream_mad_airbyte_tmp
    . at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:49) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:1050) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51) at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:787) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$5(SparkExecuteStatementOperation.scala:578) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) at org.apache.spark.sql.execution.SQLExecution$.withRootExecution(SQLExecution.scala:869) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:578) at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18) at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:80) at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:348) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:59) at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:344) at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:78) at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:75) at com.databricks.spark.util.DatabricksTracingHelper.withAttributionContext(DatabricksSparkTracingHelper.scala:62) at com.databricks.spark.util.DatabricksTracingHelper.withSpanFromRequest(DatabricksSparkTracingHelper.scala:89) at com.databricks.spark.util.DBRTracing$.withSpanFromRequest(DBRTracing.scala:43) at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.$anonfun$withLocalProperties$15(ThriftLocalProperties.scala:238) at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:80) at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:348) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:59) at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:344) at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:78) at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:75) at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:29) at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:127) at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:108) at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:29) at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:108) at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:216) at 
com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:668) at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:780) at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:789) at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:668) at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:666) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withAttributionTags(SparkExecuteStatementOperation.scala:76) at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.$anonfun$withLocalProperties$12(ThriftLocalProperties.scala:233) at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48) at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties(ThriftLocalProperties.scala:229) at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties$(ThriftLocalProperties.scala:89) at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:76) at
    k
    • 2
    • 1
  • a

    Aviad Deri

    12/11/2025, 2:21 PM
    @kapa.ai on google gke installation, in mssql source pod i see errors in log: 2025-12-11 141858,280 [main] INFO i.m.r.Micronaut(start):183 - No embedded container found. Running as CLI application Exception in thread "main" io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.airbyte.workers.workload.WorkloadOutputWriter] Message: /secrets/gcs-log-creds/gcp.json Path Taken: new @j.i.Singleton i.a.c.ConnectorWatcher(@Named("output") Path outputPath, @Named("configDir") String configDir, @Value int fileTimeoutMinutes, @Value int fileTimeoutMinutesWithinSync, SidecarInput sidecarInput, ConnectorMessageProcessor connectorMessageProcessor, AirbyteMessageSerDeProvider serDeProvider, AirbyteProtocolVersionedMigratorFactory airbyteProtocolVersionedMigratorFactory, GsonPksExtractor gsonPksExtractor, WorkloadApiClient workloadApiClient, WorkloadOutputWriter outputWriter, SidecarLogContextFactory logContextFactory, HeartbeatMonitor heartbeatMonitor, MetricClient metricClient) \---> new @j.i.Singleton i.a.c.ConnectorWatcher(@Named("output") Path outputPath, @Named("configDir") String configDir, @Value int fileTimeoutMinutes, @Value int fileTimeoutMinutesWithinSync, SidecarInput sidecarInput, ConnectorMessageProcessor connectorMessageProcessor, AirbyteMessageSerDeProvider serDeProvider, AirbyteProtocolVersionedMigratorFactory airbyteProtocolVersionedMigratorFactory, GsonPksExtractor gsonPksExtractor, WorkloadApiClient workloadApiClient, [WorkloadOutputWriter outputWriter], SidecarLogContextFactory logContextFactory, HeartbeatMonitor heartbeatMonitor, MetricClient metricClient) \---> new @j.i.Singleton i.a.w.w.WorkloadOutputWriter([@Named("outputDocumentStore") StorageClient storageClient], AirbyteApiClient airbyteApiClient, MetricClient metricClient) at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2352) at io.micronaut.context.DefaultBeanContext.createRegistration(DefaultBeanContext.java:3150) at io.micronaut.context.SingletonScope.getOrCreate(SingletonScope.java:80) at io.micronaut.context.DefaultBeanContext.resolveBeanRegistration(DefaultBeanContext.java:3000) at io.micronaut.context.DefaultBeanContext.resolveBeanRegistration(DefaultBeanContext.java:2762) at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1781) at io.micronaut.context.AbstractBeanResolutionContext.getBean(AbstractBeanResolutionContext.java:210) at io.micronaut.context.AbstractInitializableBeanDefinition.resolveBean(AbstractInitializableBeanDefinition.java:2130) at io.micronaut.context.AbstractInitializableBeanDefinition.getBeanForConstructorArgument(AbstractInitializableBeanDefinition.java:1360) at io.airbyte.workers.workload.$WorkloadOutputWriter$Definition.instantiate(Unknown Source) at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2337) at io.micronaut.context.DefaultBeanContext.createRegistration(DefaultBeanContext.java:3150) at io.micronaut.context.SingletonScope.getOrCreate(SingletonScope.java:80) at io.micronaut.context.DefaultBeanContext.resolveBeanRegistration(DefaultBeanContext.java:3000) at io.micronaut.context.DefaultBeanContext.resolveBeanRegistration(DefaultBeanContext.java:2762) at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1781) at io.micronaut.context.AbstractBeanResolutionContext.getBean(AbstractBeanResolutionContext.java:210) at 
io.micronaut.context.AbstractInitializableBeanDefinition.resolveBean(AbstractInitializableBeanDefinition.java:2130) at io.micronaut.context.AbstractInitializableBeanDefinition.getBeanForConstructorArgument(AbstractInitializableBeanDefinition.java:1360) at io.airbyte.connectorSidecar.$ConnectorWatcher$Definition.instantiate(Unknown Source) at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2337) at io.micronaut.context.DefaultBeanContext.createRegistration(DefaultBeanContext.java:3150) at io.micronaut.context.SingletonScope.getOrCreate(SingletonScope.java:80) at io.micronaut.context.DefaultBeanContext.resolveBeanRegistration(DefaultBeanContext.java:3000) at io.micronaut.context.DefaultBeanContext.resolveBeanRegistration(DefaultBeanContext.java:2762) at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1781) at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:855) at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:847) at io.airbyte.connectorSidecar.ApplicationKt.main(Application.kt:25) at io.airbyte.connectorSidecar.ApplicationKt.main(Application.kt) Caused by: java.nio.file.NoSuchFileException: /secrets/gcs-log-creds/gcp.json at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92) at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106) at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111) at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:261) at java.base/java.nio.file.Files.newByteChannel(Files.java:380) at java.base/java.nio.file.Files.newByteChannel(Files.java:432) at java.base/java.nio.file.Files.readAllBytes(Files.java:3281) at io.airbyte.commons.storage.StorageClientKt.gcsClient(StorageClient.kt:544) at io.airbyte.commons.storage.GcsStorageClient.<init>(StorageClient.kt:272) at io.airbyte.commons.storage.StorageClientFactory.create(StorageClient.kt:72) at io.airbyte.connectorSidecar.config.ApplicationFactory.workloadStorageClient(ApplicationFactory.kt:18) at io.airbyte.connectorSidecar.config.$ApplicationFactory$WorkloadStorageClient0$Definition.instantiate(Unknown Source) at io.micronaut.context.DefaultBeanContext.resolveByBeanFactory(DefaultBeanContext.java:2337) ... 29 more what is the issue?
    k
    • 2
    • 1
  • p

    Py Bot

    12/11/2025, 5:34 PM
    when I backfill for 30 days, can I upload it to different folder structures in S3?
    k
    • 2
    • 1
  • l

    Lucas Segers

    12/11/2025, 5:41 PM
    I have a stream with incremental sync configured; pagination is also configured. However, when the page comes back "empty", Airbyte errors because I'm trying to add a field, and that seems to fail if there are no records in the stream_slice: airbyte_cdk.sources.concurrent_source.stream_thread_exception.StreamThreadException: Exception while syncing stream posicao_carteira: 'NoneType' object does not support item assignment
    k
    • 2
    • 4
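    (A minimal, hypothetical Python sketch of the failure mode in the message above and the guard that avoids it; the helper and field names are illustrative only, not the connector's actual code.)
```python
# Illustrative only: an "empty" page can hand the transformation None instead
# of a record dict, and assigning into None raises
# "'NoneType' object does not support item assignment".
from typing import Any, Optional


def add_field(record: Optional[dict], field: str, value: Any) -> Optional[dict]:
    if record is None:  # guard for empty pages / empty stream slices
        return None
    record[field] = value
    return record


# Simulated page where one slot came back empty.
page = [{"id": 1}, None]
print([add_field(r, "extra", "x") for r in page])  # [{'id': 1, 'extra': 'x'}, None]
```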
  • o

    Omree Gal-Oz

    12/11/2025, 5:51 PM
    @kapa.ai I have transient errors after restarting many connections at once. Is this expected behavior?
    k
    • 2
    • 1
  • g

    Gideon Stowell

    12/11/2025, 10:45 PM
    @kapa.ai I am seeing these errors in the airbyte server logs, what does it mean?
    2025-12-11 22:44:20,021 [io-executor-thread-4]    WARN    i.a.c.j.JsonSchemas(traverseJsonSchemaInternal):203 - The object is a properties key or a combo keyword. The traversal is silently stopped. Current schema: {"type":"object","airbyte_hidden":true,"additionalProperties":true}
    k
    • 2
    • 4
  • a

    Amit Kumar

    12/12/2025, 2:53 AM
    @kapa.ai While setting up the embedded flow inside my system, can this be white-labeled? The branding should be my system's only, and Airbyte should not appear anywhere in the onboarding flow.
    k
    • 2
    • 1
  • a

    Abhijith C

    12/12/2025, 5:35 AM
    @kapa.ai Where should we configure the Airbyte heartbeat, and where is this flag.yaml? I have Airbyte deployed in EKS via Helm. Does it go under values.yaml?
    k
    • 2
    • 1
  • a

    Aviad Deri

    12/12/2025, 5:37 AM
    @kapa.ai I see this error in the logs: Error from server: Get "https://172.19.0.2:10250/containerLogs/airbyte-abctl/airbyte-abctl-server-6b77cd774b-hqlwx/airbyte-server-container": dial tcp 172.19.0.2:10250: connect: connection refused. What is the issue?
    k
    • 2
    • 16
  • r

    Rahul

    12/12/2025, 6:01 AM
    @kapa.ai How can I specify the tab names in the Excel file format when creating a Microsoft SharePoint source in Airbyte OSS?
    k
    • 2
    • 4
  • h

    Harsh Dodiya

    12/12/2025, 6:12 AM
    Hey @kapa.ai, I have set up Airbyte using abctl in my VM (with 4 CPU and 32 GB memory). I am trying to sync from MySQL to a PostgreSQL DB. But when we try to sync a large table, Airbyte caches a large number of records from the source connection, and because of that the replication-orchestrator Kubernetes pod experiences excessive memory consumption during syncs. We can clearly see it in the pod status:
    airbyte-abctl replication-job-662-attempt-0 0/3 Error 0 4h24m
    airbyte-abctl replication-job-690-attempt-0 2/3 OOMKilled 0 9m15s
    I installed Airbyte using abctl but didn't set any resource limits during the installation. Now I'd like to adjust the memory allocation and apply new resource limits. Is there a way to update these settings after installation, or do I need to reinstall Airbyte to change them?
    k
    • 2
    • 1
  • p

    Phan Hong Dat (K18 HL)

    12/12/2025, 7:18 AM
    @kapa.ai Does Airbyte OSS support running in an offline environment? I get an error when it calls this API: https://pypi.org/pypi/airbyte-cdk/json
    k
    • 2
    • 1
  • p

    Phan Hong Dat (K18 HL)

    12/12/2025, 7:23 AM
    @kapa.ai I'm getting `ERR_NAME_NOT_RESOLVED` and `Failed to fetch` errors because the UI is trying to request PyPI URLs. Is there an official way to run Airbyte fully offline or disable these external network checks?
    k
    • 2
    • 1
  • p

    Py Bot

    12/12/2025, 8:22 AM
    how do I run the previous day's pipeline in Airbyte using cron?
    k
    • 2
    • 1
  • c

    Cal wsoo

    12/12/2025, 9:18 AM
    @kapa.ai Does Airbyte provide the "T" (transform) functionality within ETL?
    k
    • 2
    • 1
  • a

    Abhishek Batra

    12/12/2025, 1:19 PM
    @kapa.ai, how do I create replicated tables and use ON CLUSTER in ClickHouse with the Airbyte ClickHouse connector?
    k
    • 2
    • 1
  • k

    Kothapalli Venkata Avinash

    12/12/2025, 1:39 PM
    @kapa.ai, Can we connect an AS/400 DB2 source using the existing Airbyte DB2 connector?
    k
    • 2
    • 4
  • f

    Francisco Carneiro

    12/12/2025, 1:44 PM
    @kapa.ai, any visibility on when this issue will be solved, or if there are any workarounds (https://github.com/airbytehq/airbyte/issues/55846)?
    In summary the CDC collect fails with this error:
    Saved offset is before replication slot's confirmed lsn. Please reset the connection, and then increase WAL retention and/or increase sync frequency to prevent this from happening in the future. See <https://docs.airbyte.com/integrations/sources/postgres/postgres-troubleshooting#under-cdc-incremental-mode-there-are-still-full-refresh-syncs> for more details.
    Even though the WAL retention is maximum and not that much time passed between two syncs (very little actually)
    k
    • 2
    • 1