kapa.ai
12/10/2025, 7:21 PM
1. What errors.http.default means
In other contexts, Airbyte surfaces HTTP errors as a generic errors.http.default with an attached HTTP status code and backend message (for example, a 409 with "Failed to access database. Check the server logs for more information"). [[DB 409 issue](https://github.com/airbytehq/airbyte/issues/54150)] This suggests your "Failed to save Google Ads Test due to the following error: errors.http.default" is a generic UI wrapper around an HTTP error returned by the Airbyte backend (not by Google Ads itself).
2. Known Google Ads–related save/check issues
- There is an open issue where a Google Ads source "Check connection" passes at the connector level but the platform fails with 422 Unprocessable Entity on an internal updateSource call after the check, even though the same config works on Airbyte Cloud. [[Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)]
- Another open issue reports Google Ads authentication problems that appear specific to Airbyte's handling of Google Ads credentials, while the same credentials work in another tool. [[Google Ads auth issue](https://github.com/airbytehq/airbyte/issues/63374)]
Your symptom ("Failed to save … Test") is consistent with these platform-side problems: the connector may be fine, but the platform fails when persisting or updating the source.
3. What you can do next
Because the sources don't show a definitive fix for your exact message, I can't say precisely what's wrong in your instance. However, based on the related issues, the next concrete steps are:
- Check the underlying HTTP error and logs
  - Look at the browser network tab for the failing request (likely /api/v1/sources/create or /api/v1/sources/update or updateSource) and note the HTTP status code and response body.
  - Check airbyte-server logs around the time of the failure; in similar cases they contain a more specific message (e.g., DB conflict, validation error). [[DB 409 issue](https://github.com/airbytehq/airbyte/issues/54150); [Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)]
- Compare with the known 422 Google Ads issue
  If you see a 422 Unprocessable Entity on an internal updateSource call after a successful check, you're likely hitting the same bug as in the OSS + abctl deployment report. [[Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)] In that case, the problem is on the platform side rather than your Google Ads credentials.
- Try on Airbyte Cloud or another environment (if possible)
  In the 422 issue, the same Google Ads config worked on Airbyte Cloud but failed on a specific OSS deployment, which helped confirm it was a platform bug. [[Google Ads 422 issue](https://github.com/airbytehq/airbyte/issues/60900)]
If you can share the HTTP status code and response body from the failing request (or the corresponding airbyte-server log line), I can help you map it more precisely to the known issues in the provided sources.
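If it helps to see that raw error outside the UI, here is a minimal sketch (assuming a local OSS/abctl deployment at http://localhost:8000 with the default basic-auth credentials; both are assumptions to adjust for your instance) that replays the failing save request captured from the browser network tab and prints the status code and body:

```python
# Sketch only: replay the failing "save source" request outside the UI so the raw HTTP
# status code and response body are visible instead of the generic errors.http.default.
# Assumptions: base URL of a local OSS/abctl install, default basic-auth credentials,
# and a request body copied from the browser network tab into a local file.
import json

import requests

AIRBYTE_URL = "http://localhost:8000"   # assumption: local deployment default
AUTH = ("airbyte", "password")          # assumption: default basic-auth credentials

# Paste the exact JSON body of the failing request (sources/create or sources/update)
# from the network tab into this file before running.
with open("failing_request_body.json") as f:
    payload = json.load(f)

resp = requests.post(
    f"{AIRBYTE_URL}/api/v1/sources/update",   # or /api/v1/sources/create, per the network tab
    json=payload,
    auth=AUTH,
    timeout=60,
)

# A 409 or 422 here, plus the response body, is what maps to the GitHub issues above.
print(resp.status_code)
print(resp.text)
```

The printed status code and body (or the matching airbyte-server log line) are the details worth sharing when comparing against those issues.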
Danton Bertuol
12/10/2025, 7:41 PM
Rafael Felipe
12/10/2025, 8:13 PM
Rafael Felipe
12/10/2025, 8:33 PM
replicaSet
Michael Gallivan
12/10/2025, 9:04 PM
A S Yamini
12/11/2025, 4:55 AM
Muhammad Bilal
12/11/2025, 6:22 AM
Muhammad Hassaan Mustafa
12/11/2025, 10:46 AM
Tia
12/11/2025, 10:56 AM
Abhijith C
12/11/2025, 12:54 PM
Bernd Töpfer
12/11/2025, 1:37 PM
com.databricks.sdk.core.error.ErrorDetail, problem: Cannot invoke "java.util.Map.getClass()" because "m" is null
at [Source: REDACTED (StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION disabled); line: 8, column: 3] (through reference chain: com.databricks.sdk.core.error.ApiErrorBody["details"]->java.util.ArrayList[0])
at io.airbyte.commons.concurrency.CompletableFutures$allOf$1.invoke(CompletableFutures.kt:46)
at io.airbyte.commons.concurrency.CompletableFutures$allOf$1.invoke(CompletableFutures.kt:31)
at io.airbyte.commons.concurrency.CompletableFutures.allOf$lambda$1(CompletableFutures.kt:31)
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1773)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: com.databricks.sdk.core.error.platform.BadRequest: Response from server (Bad Request) {
"error_code" : "BAD_REQUEST",
"message" : "The directory is not empty. This operation is not supported on non-empty directories.",
"details" : [ {
"@type" : "type.googleapis.com/google.rpc.ErrorInfo",
"reason" : "FILES_API_DIRECTORY_IS_NOT_EMPTY",
"domain":"**********"
} ]
}: Cannot construct instance of com.databricks.sdk.core.error.ErrorDetail, problem: Cannot invoke "java.util.Map.getClass()" because "m" is null
at [Source: REDACTED (StreamReadFeature.INCLUDE_SOURCE_IN_LOCATION disabled); line: 8, column: 3] (through reference chain: com.databricks.sdk.core.error.ApiErrorBody["details"]->java.util.ArrayList[0])
at com.databricks.sdk.core.error.AbstractErrorMapper.apply(AbstractErrorMapper.java:43)
at com.databricks.sdk.core.error.ApiErrors.getDatabricksError(ApiErrors.java:29)
at com.databricks.sdk.core.ApiClient.executeInner(ApiClient.java:276)
at com.databricks.sdk.core.ApiClient.getResponse(ApiClient.java:235)
at com.databricks.sdk.core.ApiClient.execute(ApiClient.java:227)
at com.databricks.sdk.core.ApiClient.DELETE(ApiClient.java:188)
at com.databricks.sdk.service.files.FilesImpl.deleteDirectory(FilesImpl.java:45)
at com.databricks.sdk.service.files.FilesAPI.deleteDirectory(FilesAPI.java:85)
at com.databricks.sdk.service.files.FilesAPI.deleteDirectory(FilesAPI.java:73)
at io.airbyte.integrations.destination.databricks.operation.DatabricksStorageOperation.cleanupStage(DatabricksStorageOperation.kt:159)
at io.airbyte.integrations.base.destination.operation.AbstractStreamOperation.finalizeTable(AbstractStreamOperation.kt:342)
at io.airbyte.integrations.base.destination.operation.DefaultSyncOperation.finalizeStreams$lambda$9$lambda$8(DefaultSyncOperation.kt:138)
at java.base/java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1768)
Todd Matthews
12/11/2025, 1:42 PM
Bernd Töpfer
12/11/2025, 1:54 PM
raw_data.airbyte_internal.coast_raw__stream_mad_airbyte_tmp.
at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:49)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:1050)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:51)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:104)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:787)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$5(SparkExecuteStatementOperation.scala:578)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at org.apache.spark.sql.execution.SQLExecution$.withRootExecution(SQLExecution.scala:869)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:578)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:80)
at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:348)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:59)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:344)
at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:78)
at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:75)
at com.databricks.spark.util.DatabricksTracingHelper.withAttributionContext(DatabricksSparkTracingHelper.scala:62)
at com.databricks.spark.util.DatabricksTracingHelper.withSpanFromRequest(DatabricksSparkTracingHelper.scala:89)
at com.databricks.spark.util.DBRTracing$.withSpanFromRequest(DBRTracing.scala:43)
at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.$anonfun$withLocalProperties$15(ThriftLocalProperties.scala:238)
at com.databricks.logging.AttributionContextTracing.$anonfun$withAttributionContext$1(AttributionContextTracing.scala:80)
at com.databricks.logging.AttributionContext$.$anonfun$withValue$1(AttributionContext.scala:348)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:59)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:344)
at com.databricks.logging.AttributionContextTracing.withAttributionContext(AttributionContextTracing.scala:78)
at com.databricks.logging.AttributionContextTracing.withAttributionContext$(AttributionContextTracing.scala:75)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:29)
at com.databricks.logging.AttributionContextTracing.withAttributionTags(AttributionContextTracing.scala:127)
at com.databricks.logging.AttributionContextTracing.withAttributionTags$(AttributionContextTracing.scala:108)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:29)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:108)
at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:216) at
com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:668)
at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:780)
at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:789)
at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:668)
at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:666)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withAttributionTags(SparkExecuteStatementOperation.scala:76)
at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.$anonfun$withLocalProperties$12(ThriftLocalProperties.scala:233)
at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48)
at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties(ThriftLocalProperties.scala:229)
at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties$(ThriftLocalProperties.scala:89)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:76) at
Aviad Deri
12/11/2025, 2:21 PM
Py Bot
12/11/2025, 5:34 PM
Lucas Segers
12/11/2025, 5:41 PM
Omree Gal-Oz
12/11/2025, 5:51 PM
Gideon Stowell
12/11/2025, 10:45 PM
2025-12-11 22:44:20,021 [io-executor-thread-4] WARN i.a.c.j.JsonSchemas(traverseJsonSchemaInternal):203 - The object is a properties key or a combo keyword. The traversal is silently stopped. Current schema: {"type":"object","airbyte_hidden":true,"additionalProperties":true}
Amit Kumar
12/12/2025, 2:53 AM
Abhijith C
12/12/2025, 5:35 AM
Aviad Deri
12/12/2025, 5:37 AM
Rahul
12/12/2025, 6:01 AM
Harsh Dodiya
12/12/2025, 6:12 AM
airbyte-abctl replication-job-662-attempt-0 0/3 Error 0 4h24m
airbyte-abctl replication-job-690-attempt-0 2/3 OOMKilled 0 9m15s
I installed Airbyte using abctl, but I didn't set any resource limits during the installation. Now I'd like to adjust the memory allocation and apply new resource limits. Is there a way to update these settings after installation, or do I need to reinstall Airbyte to change them?
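For the resource-limit question above: a reinstall shouldn't be necessary, since abctl local install can be re-run against an existing deployment with a Helm values file. A minimal sketch, assuming the chart version installed by abctl exposes job-pod resources under global.jobs.resources (verify the exact keys against your chart's values before using):

```yaml
# values.yaml - sketch only; the key paths below are an assumption about the Airbyte Helm
# chart's global.jobs.resources settings, which size the replication job pods (the ones
# showing OOMKilled above). Adjust the amounts to your workload.
global:
  jobs:
    resources:
      requests:
        memory: 1Gi
        cpu: 500m
      limits:
        memory: 4Gi   # raise this if replication pods keep getting OOMKilled
        cpu: "2"
```

Re-running abctl local install --values values.yaml should apply the new limits to the existing installation in place rather than requiring a reinstall.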
Phan Hong Dat (K18 HL)
12/12/2025, 7:18 AM
Phan Hong Dat (K18 HL)
12/12/2025, 7:23 AM
I get ERR_NAME_NOT_RESOLVED and Failed to fetch errors because the UI is trying to request PyPI URLs. Is there an official way to run Airbyte fully offline or disable these external network checks?
Py Bot
12/12/2025, 8:22 AM
Cal wsoo
12/12/2025, 9:18 AM
Abhishek Batra
12/12/2025, 1:19 PM
Kothapalli Venkata Avinash
12/12/2025, 1:39 PM
Francisco Carneiro
12/12/2025, 1:44 PM
In summary, the CDC sync fails with this error:
Saved offset is before replication slot's confirmed lsn. Please reset the connection, and then increase WAL retention and/or increase sync frequency to prevent this from happening in the future. See https://docs.airbyte.com/integrations/sources/postgres/postgres-troubleshooting#under-cdc-incremental-mode-there-are-still-full-refresh-syncs for more details.
This happens even though WAL retention is set to the maximum and very little time passed between the two syncs.
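One way to see what that error is actually comparing is to read the slot's LSNs directly from Postgres. A minimal sketch (psycopg2; the connection string and the airbyte_slot name are placeholders): if confirmed_flush_lsn has advanced past the offset stored in Airbyte's state (for example after the slot was dropped and recreated, or read by another consumer), the sync fails with exactly this message regardless of how much WAL is retained.

```python
# Sketch only: inspect the logical replication slot used for CDC and compare it with the
# current WAL position. Connection details and slot name are placeholders to fill in.
import psycopg2

SLOT_NAME = "airbyte_slot"  # placeholder: the slot configured in the Postgres source

conn = psycopg2.connect("host=... port=5432 dbname=... user=... password=...")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT slot_name,
               active,
               restart_lsn,
               confirmed_flush_lsn,
               pg_current_wal_lsn() AS current_wal_lsn,
               pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS slot_lag_bytes
        FROM pg_replication_slots
        WHERE slot_name = %s
        """,
        (SLOT_NAME,),
    )
    # restart_lsn / confirmed_flush_lsn are what the connector's saved offset is checked against.
    print(cur.fetchone())
```

If the offset saved in Airbyte's connection state is older than the confirmed_flush_lsn shown here, resetting the connection (as the error message suggests) is what realigns the saved offset with the slot.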