# ask-community-for-troubleshooting
  • k

    Krishna Channa

    08/28/2024, 4:05 PM
    Hi team, I am getting the error below while configuring a MySQL destination: An unknown error occurred. (HTTP 504). I am using MySQL version 8.0.39, and I installed both MySQL and Airbyte on my laptop. Can someone help if anyone has faced a similar issue?
  • a

    Aditya Gupta

    08/28/2024, 4:14 PM
    Does anyone know how to get the openapi.json endpoint for a local server instance of Airbyte?
  • g

    Gordon MacMaster

    08/28/2024, 4:30 PM
    Hey @Ryan Waskewich I'm trying to create a connection between Mongo and Redshift but am getting an error. I haven't set any configs within Airbyte except getting the successful response from the Source and Destination step and then selecting the source and destination from the setup steps. Does this seem like a config I need on my mongo instance or something within Airbyte? Thanks in advance!
    Internal message: com.mongodb.MongoCommandException: Command failed with error 292 (QueryExceededMemoryLimitNoDiskUseAllowed): 'PlanExecutor error during aggregation :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.' on server {REMOVED}. The full response is {"ok": 0.0, "errmsg": "PlanExecutor error during aggregation :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.", "code": 292, "codeName": "QueryExceededMemoryLimitNoDiskUseAllowed", "$clusterTime": {"clusterTime": {"$timestamp": {"t": 1724862367, "i": 4}}, "signature": {"hash": {"$binary": {"base64": "{REMOVED}", "subType": "00"}}, "keyId": {REMOVED}}}, "operationTime": {"$timestamp": {"t": 1724862367, "i": 4}}}
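    The error itself names the option it wants: the aggregation's sort stage needs allowDiskUse. A minimal sketch of that option, assuming pymongo and placeholder database/collection names; whether the Mongo source connector exposes this for its own queries is a separate question:
    ```python
    # Sketch: rerun the sort with allowDiskUse=True so the server can spill the
    # sort stage to disk instead of aborting at the 32 MB in-memory limit.
    # "mydb" / "mycollection" and the connection string are placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client["mydb"]["mycollection"]

    cursor = coll.aggregate(
        [{"$sort": {"_id": 1}}],
        allowDiskUse=True,
    )
    print(sum(1 for _ in cursor))
    ```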
  • n

    Narayan Zeermire

    08/28/2024, 6:35 PM
    Hi team, I am facing the following issue while syncing data from MongoDB to S3; can anyone suggest how to solve it? The _id fields in a collection must be consistently typed (collection = fund_transaction).
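    One way to see what the error is complaining about is to group the collection by the BSON type of _id; more than one bucket means mixed types. A rough sketch, assuming pymongo and a placeholder connection string and database name:
    ```python
    # Sketch: count documents in fund_transaction per BSON type of _id.
    # Mixed types (e.g. objectId plus string) trigger the "consistently typed" error.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    coll = client["mydb"]["fund_transaction"]          # "mydb" is a placeholder

    for bucket in coll.aggregate([{"$group": {"_id": {"$type": "$_id"}, "count": {"$sum": 1}}}]):
        print(bucket)
    ```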
  • b

    Brian Kasen

    08/28/2024, 7:10 PM
    Has anyone encountered situations where syncing data from S3 --> Snowflake (incremental append) results in Airbyte executing soft resets every time the sync runs? We have some large datasets where we are seeing significant performance degradation as the airbyte_internal table continues to grow, and with a soft reset happening on every run, syncs that should take only a few minutes now take 2 to 3 hours, because Airbyte sets the loaded_at timestamp to NULL across all internal table records, rebuilds the entire final table, and then resets loaded_at to the current time. What’s interesting is that, reviewing the logs, I see this line indicating a soft reset is not needed:
    2024-08-28 12:35:49 destination > INFO main i.a.c.d.j.JdbcDatabase(executeWithinTransaction$lambda$1):46 executing query within transaction: insert into "airbyte_internal"."_airbyte_destination_state" ("name", "namespace", "destination_state", "updated_at") values ('airbyte_brand', 'TALENTREEF', '{"needsSoftReset":false,"airbyteMetaPresentInRaw":true}', '2024-08-28T12:35:48.549790446Z')
    but shortly thereafter I see:
    2024-08-28 12:35:51 destination > INFO sync-operations-3 i.a.i.b.d.t.TyperDeduperUtil(executeTypeAndDedupe):212 Attempting typing and deduping for TALENTREEF.airbyte_brand with suffix _ab_soft_reset
    We have syncs that run every 8 hours, and what I suspect is a bug is quickly becoming cost prohibitive, since sizing up the Snowflake warehouse is not a long-term solution. Kapa did not provide sufficient info on why the soft reset was triggering in this case. Has anyone encountered this before, and can anyone from the Airbyte team assist? cc @Abhra Gupta / @Ritika Naidu
    default_workspace_job_13341_attempt_1_txt.txt
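    For anyone debugging the same thing, the state row that log line writes can be read back directly; a rough sketch with the Snowflake Python connector, where the credentials and warehouse/database names are placeholders and the table and stream names come from the log above:
    ```python
    # Sketch: read back the needsSoftReset flag Airbyte stored for the stream,
    # to compare against the soft reset the next sync actually performs.
    import json
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="...",   # placeholders
        warehouse="my_wh", database="my_db",
    )
    cur = conn.cursor()
    cur.execute(
        'select "name", "namespace", "destination_state", "updated_at" '
        'from "airbyte_internal"."_airbyte_destination_state" '
        'where "name" = \'airbyte_brand\' and "namespace" = \'TALENTREEF\''
    )
    for name, namespace, state, updated_at in cur.fetchall():
        print(name, namespace, updated_at, json.loads(state).get("needsSoftReset"))
    ```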
  • s

    Sumit Kumar

    08/28/2024, 7:23 PM
    Hi, I'm using a self-hosted Airbyte instance on GCP and am encountering an error while trying to sync MongoDB data to BigQuery. "java.lang.RuntimeException: Unable extract the offset out of state, State mutation might not be working. {"[\"test\",{\"server_id\":\"test\"}]":"{\"sec\":0,\"ord\":-1,\"resume_token\":\"8266CDDEFD000000042B0229296E04\"}"}" Any help would be appreciated. Thanks
  • b

    Beatrice Nasike

    08/28/2024, 7:52 PM
    I migrated from Docker Compose to abctl, after configuring external state and logging, by running abctl local install --values values.yaml --secret secrets.yaml. When I run my connections I get Warning from source: Workload failed, source: workload-launcher with the following error log:
    io.airbyte.workload.launcher.pipeline.stages.model.StageError: io.airbyte.workload.launcher.pods.KubeClientException: Failed to create pod source-postgres-check-1582-1-vaqpv.
        at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:46)
        at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:38)
        at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.$$access$$apply(Unknown Source)
        at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Exec.dispatch(Unknown Source)
        at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456)
        at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:129)
        at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.doIntercept(InstrumentInterceptorBase.kt:61)
        at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.intercept(InstrumentInterceptorBase.kt:44)
        at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:138)
        at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.apply(Unknown Source)
        at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:24)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158)
        at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194)
        at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2367)
        at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117)
        at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:193)
        at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
        at reactor.core.publisher.Mono.subscribe(Mono.java:4552)
        at reactor.core.publisher.MonoSubscribeOn$SubscribeOnSubscriber.run(MonoSubscribeOn.java:126)
        at reactor.core.scheduler.ImmediateScheduler$ImmediateSchedulerWorker.schedule(ImmediateScheduler.java:84)
        at reactor.core.publisher.MonoSubscribeOn.subscribeOrReturn(MonoSubscribeOn.java:55)
        at reactor.core.publisher.Mono.subscribe(Mono.java:4552)
        at reactor.core.publisher.Mono.subscribeWith(Mono.java:4634)
        at reactor.core.publisher.Mono.subscribe(Mono.java:4395)
        at io.airbyte.workload.launcher.pipeline.LaunchPipeline.accept(LaunchPipeline.kt:50)
        at io.airbyte.workload.launcher.pipeline.consumer.LauncherMessageConsumer.consume(LauncherMessageConsumer.kt:28)
    Any idea why this is happening?
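    The stack trace only says the pod could not be created; the Kubernetes events for that pod usually say why. A rough sketch using the Python kubernetes client, where the namespace "airbyte-abctl" and the kubeconfig location are assumptions for an abctl install:
    ```python
    # Sketch only: list Kubernetes events for the failed check pod to see why
    # creation failed (image pull, quota, missing secret, ...).
    from kubernetes import client, config

    config.load_kube_config()  # assumes a kubeconfig pointing at the abctl kind cluster
    v1 = client.CoreV1Api()

    events = v1.list_namespaced_event(
        namespace="airbyte-abctl",  # assumed namespace; adjust to your install
        field_selector="involvedObject.name=source-postgres-check-1582-1-vaqpv",
    )
    for ev in events.items:
        print(ev.last_timestamp, ev.reason, ev.message)
    ```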
  • e

    Eric Markewitz

    08/28/2024, 8:08 PM
    Hi! I'm self-hosting Airbyte on EC2. I'm trying to migrate from Docker Compose to abctl but am running into what I believe is a system architecture error. Wondering if anyone has run into the same issue and how they solved it.
    uname -m
    aarch64
    curl -LsfS https://get.airbyte.com | bash -

    Installing for Linux...
    Downloading abctl from https://github.com/airbytehq/abctl/releases/download/v0.13.1/abctl-v0.13.1-linux-amd64.tar.gz
    Installing 'release/abctl-v0.13.1-linux-amd64/abctl' to /usr/local/bin
    bash: line 242: /usr/local/bin/abctl: cannot execute binary file
    
    abctl install failed: bash: line 242: /usr/local/bin/abctl: cannot execute binary file
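    The log shows the installer fetching the amd64 build on an aarch64 host, which is why the binary will not execute. A rough workaround sketch that picks the matching release asset by architecture; the version tag and the linux-arm64 asset name mirror the amd64 one in the log but are assumptions to verify against the abctl releases page:
    ```python
    # Sketch: download the abctl release that matches this machine's architecture
    # instead of the hard-coded amd64 tarball. Version and asset naming are assumptions.
    import platform
    import urllib.request

    arch = "arm64" if platform.machine() in ("aarch64", "arm64") else "amd64"
    version = "v0.13.1"  # pin to whichever release you actually want
    url = (f"https://github.com/airbytehq/abctl/releases/download/"
           f"{version}/abctl-{version}-linux-{arch}.tar.gz")
    print("downloading", url)
    urllib.request.urlretrieve(url, f"abctl-{version}-linux-{arch}.tar.gz")
    ```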
  • h

    Herbert Sousa

    08/28/2024, 9:42 PM
    Hi! I am using a Google Sheets -> S3 connection and I have some connections that have more than one stream (spreadsheet). What happens is that the connection sync returns a status of succeeded but just some of the streams have been updated. Has anyone seen something similar?
    {
      "jobId": 17013431,
      "status": "succeeded",
      "jobType": "sync",
      "startTime": "2024-08-28T18:01:51Z",
      "connectionId": "223b217d-8398-4920-b979-1dc8d5a28ec5",
      "lastUpdatedAt": "2024-08-28T18:10:53Z",
      "duration": "PT9M2S",
      "bytesSynced": 828539,
      "rowsSynced": 4512
    }
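    For anyone comparing runs, the JSON above is the shape returned by the jobs endpoint of the Airbyte API; a rough sketch of pulling it per job, where the base URL and token are assumptions and a self-hosted instance exposes the API at a different host:
    ```python
    # Sketch: fetch the job summary and print the headline counters. Per-stream
    # detail still has to come from the sync logs; this only shows the totals.
    import requests

    AIRBYTE_URL = "https://api.airbyte.com/v1"  # assumption; adjust for self-hosted
    TOKEN = "..."                                # assumption: bearer token

    resp = requests.get(f"{AIRBYTE_URL}/jobs/17013431",
                        headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    job = resp.json()
    print(job["status"], job["rowsSynced"], job["bytesSynced"])
    ```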
  • s

    Sean Miltenberger

    08/28/2024, 10:40 PM
    Hello! I'm having trouble getting any connector to run. I've installed Airbyte locally on a GCP VM running Debian Linux. Airbyte seems to be running fine but when I try to run a connection it fails. I've attached logs for a sync between Sendgrid and BigQuery. I'm not sure exactly what is wrong here but it seems to be something with pods not starting. If anyone has any insight into this issue and what the problem might be your help would be appreciated.
  • k

    Krishna Channa

    08/29/2024, 12:50 AM
    Hi team, I am getting the error below while configuring a MySQL destination: An unknown error occurred. (HTTP 504). I am using MySQL version 8.0.39, and I installed both MySQL and Airbyte on my laptop. Can someone help if anyone has faced a similar issue?
  • j

    John Dorlus (Power Coder)

    08/29/2024, 12:50 AM
    Hello all. I am working with the connection builder to grab data from an OData source (REST API), and I wanted to know if there is a way to do incremental syncs based on the time you last synced. If the data itself does not have a column with the date updated, is there a way for the builder to know to grab all new data since the last sync?
  • d

    Dean Lau

    08/29/2024, 6:38 AM
    Hey team, about the new Refresh mode: when I switch from full refresh to incremental (dedupe), it asks to refresh the stream. Wondering, is that refresh and retain, or refresh and remove records?
  • l

    Lisardo Erman

    08/29/2024, 7:09 AM
    Hi all, I’m wondering if anybody can point me to the right documentation of the airbyte internal API that I can use on a self hosted instance (version 0.39.20-alpha). The only documentation I can find online contains some routes that are not available on my instance. I guess the API itself has a route for getting the documentation but I don’t remember the specific endpoint.
  • y

    Yannis Thomopoulos

    08/29/2024, 8:37 AM
    Hello all. I’ve logged in to cloud.airbyte.com numerous times and I can’t use the web app. I get the following error. I’m using Chrome, and Firefox behaves the same way. I have tried clearing all site data multiple times. Any suggestions?
  • a

    Aditya Gupta

    08/29/2024, 8:38 AM
    Hello all, I need some help. I want to create separate workspaces for different company connections. If the number of workspaces reaches around 1,000 in the future, would the server break? How many workspaces can the Airbyte server handle? I have already deployed it on Amazon EC2. If anyone knows, please help.
  • a

    Alexandre Martins

    08/29/2024, 9:36 AM
    Hey all! I'm trying to upgrade the Facebook Marketing connector to the latest version (v3.3.15) and I'm having a problem with the source check. Apparently, the new version introduces a feature to persist the source_config.json in the local filesystem during the check process. However, as I run Airbyte OSS on Kubernetes (EKS), I'm getting a permission denied error when the job pod tries to create this file:
    Traceback (most recent call last):
      File "/airbyte/integration_code/main.py", line 9, in <module>
        run()
      File "/airbyte/integration_code/source_facebook_marketing/run.py", line 18, in run
        MigrateSecretsPathInConnector.migrate(sys.argv[1:], source)
      File "/airbyte/integration_code/source_facebook_marketing/config_migrations.py", line 160, in migrate
        cls._modify_and_save(config_path, source, config),
      File "/airbyte/integration_code/source_facebook_marketing/config_migrations.py", line 186, in _modify_and_save
        source.write_config(migrated_config, config_path)
      File "/usr/local/lib/python3.10/site-packages/airbyte_cdk/connector.py", line 60, in write_config
        with open(config_path, "w") as fh:
    PermissionError: [Errno 13] Permission denied: 'source_config.json'
    I suspect it has to do with the securityContext of the job pods not allowing write access to the internal filesystem. However, I don't see how we can update the security context of the job/check pods in the charts, only annotations, labels, etc.: https://github.com/airbytehq/airbyte-platform/blob/4aa1fd563b22802d268febfc5f61bbc928c40b33/charts/airbyte/values.yaml#L136-L170 Any ideas on how to solve this? It's blocking us from upgrading to the latest version 😕
  • u

    user

    08/29/2024, 9:44 AM
    #44884 [Helm] Support the GKE Workload Identity New discussion created by chrisduong It had been mentioned here: https://discuss.airbyte.io/t/logging-to-gcs-with-workload-identity-in-airbyte-on-gke/6731 From a security point of view, we should use GKE Workload Identity instead. This means we don't need to set global.storage.gcs.credentialsJson. However, it is not possible, as the Helm chart always requires it. The remedy is to update the Helm chart to allow global.storage.gcs.credentialsJson to be empty. airbytehq/airbyte
  • j

    Julien Ruey

    08/29/2024, 10:10 AM
    Monday connector only returns 100 records for 'boards' stream Hi all, I'm running into an issue with the Monday connector, which seems to be returning only 100 boards, while we actually have 177 boards in total. I suspect there's an error in the handling of pagination at the GraphQL request level. Has anybody run into this issue already or heard about a fix? Best!
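    One way to check whether this is pagination rather than permissions is to page through the boards query by hand. A rough sketch against the Monday GraphQL API, where the token is a placeholder and limit/page are the standard boards arguments:
    ```python
    # Sketch: keep requesting 100 boards per page until an empty page comes back,
    # then compare the total against the 177 boards expected.
    import requests

    API_URL = "https://api.monday.com/v2"
    TOKEN = "..."  # placeholder personal API token

    boards, page = [], 1
    while True:
        query = f"query {{ boards(limit: 100, page: {page}) {{ id name }} }}"
        resp = requests.post(API_URL, json={"query": query},
                             headers={"Authorization": TOKEN}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()["data"]["boards"]
        if not batch:
            break
        boards.extend(batch)
        page += 1

    print(len(boards))
    ```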
  • u

    user

    08/29/2024, 10:13 AM
    #44886 Unable to found "Transformation" Tab in 0.63.13 Version New discussion created by Mudassarhu Hello, I have successfully moved the data from Postgres to ClickHouse using Airbyte, but my table data is showing in JSON format. I want to see the data in tabular format, but I am unable to find any "Transformation" tab in this version, although I did find the "Transformation" tab in version 0.40.0. What should I do? Thanks airbytehq/airbyte
  • l

    Lisardo Erman

    08/29/2024, 10:19 AM
    Hi everybody, I’m quite stuck with the problem that images revert back to an older version after a restart of a 0.39.20-alpha server. I changed the Docker image version tag of airbyte/destination-bigquery-denormalized to 1.5.3 in the Airbyte UI. After redeploying the server (docker-compose down -v and docker-compose up), the image version reverts back to 1.1.8. I already tried changing the image tag inside the “destination_definitions.yaml” file, but with no effect. Can anybody point me towards another approach? Thanks
  • u

    user

    08/29/2024, 10:26 AM
    #44887 Airbyte taking to much of time to replicate the data from MySQL to s3 (20 mins) New discussion created by Naveenkrish840 From MySQL to S3 we have a connection for the update & delete case, and it takes too much time to process. Sometimes the sync is done within 1-2 minutes, but most of the time it takes more than 5 minutes. Kindly help me tackle this issue. airbytehq/airbyte
  • u

    user

    08/29/2024, 11:07 AM
    #44888 [source-amazon-seller-partner] Schema validation errors found for stream GET_VENDOR_INVENTORY_REPORT (startDate and endDate invalid date-time) New discussion created by MindlessDreams Hello, I need help with the Amazon Seller Partner connector. Connector Name: airbyte/source-amazon-seller-partner. Connector Version: 4.3.0. What step did the error happen? During the sync. Relevant information: the report GET_VENDOR_INVENTORY_REPORT returns mostly "NULL"s for all columns but 3. I am passing all mandatory reportsOptions. It does not make any difference if I go for DAY, WEEK, or MONTH, with or without specified startDate and endDate; the result is the same schema validation error. Please advise where the issue might be. All other reports which require reportsOptions run fine and produce results. I get this error in the log:
    platform > Schema validation errors found for stream _GET_VENDOR_INVENTORY_REPORT. Error messages: [$.startDate: 2024-05-01 is an invalid date-time, $.endDate: 2024-05-31 is an invalid date-time]
    Dates are set in the proper format. Please help. airbytehq/airbyte
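    The validation error is about format rather than values: the schema declares startDate/endDate as date-time, so a bare date like 2024-05-01 fails the check. A small illustration of the RFC 3339 form such a field expects (whether the connector itself should be emitting full timestamps for this report is the real question):
    ```python
    # Sketch: turn the bare dates from the report into RFC 3339 date-times,
    # which is what a JSON Schema "date-time" field expects.
    from datetime import datetime, timezone

    for bare in ("2024-05-01", "2024-05-31"):
        dt = datetime.strptime(bare, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        print(bare, "->", dt.isoformat().replace("+00:00", "Z"))
    ```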
  • s

    SANGADO

    08/29/2024, 11:36 AM
    Hello, I keep getting this error from Amazon-seller-partner connector, all mandatory attributes passed correctly: platform > Schema validation errors found for stream _GET_VENDOR_INVENTORY_REPORT. Error messages: [$.startDate: 2024-05-01 is an invalid date-time, $.endDate: 2024-05-31 is an invalid date-time]
  • d

    dhanesh

    08/29/2024, 12:05 PM
    Hi, I am trying to create a Java destination connector. While running ./generate.sh, I am getting the error below: the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'. While trying to generate a connector, an error occurred on line 38 of generate.sh and the process aborted early. This is probably a bug. Can anyone help here?
  • w

    Willian Yoshio Iwamoto

    08/29/2024, 12:13 PM
    Hello everybody, can anyone help with this scenario? I am connecting to Zendesk Support and it's taking too long to return the data; my start date is Aug/2024. Is this normal? Am I configuring something wrong? Thanks!
  • k

    Kornel All

    08/29/2024, 12:31 PM
    Any idea on this?
  • s

    Stockton Fisher

    08/29/2024, 12:54 PM
    Does anyone know what this error means?
    > message='io.airbyte.workers.exception.WorkloadMonitorException: Airbyte could not track the sync progress. No heartbeat within the time limit indicates the process might have died.', type='java.lang.RuntimeException', nonRetryable=false
  • u

    user

    08/29/2024, 1:25 PM
    #44895 [HELM][GCS] Found issue in helm template related to GCS Storage in airbyte-workload-api-server volume New discussion created by jrmbrgs Hello Airbyte! Thank you for the OSS product you build! I think I found a small bug in a Helm deployment chart.
    Context: According to the Helm External Storage implementation guide, we can define a secret containing the service account JSON key, as described in the integration storage doc, like:
    apiVersion: v1
    kind: Secret
    metadata:
      name: airbyte-config-secrets
    type: Opaque
    stringData:
      gcp.json: |
        {
          "type": "service_account",
          ...
        }
    Then the previously created secret is referenced in the values.yaml file as mentioned in the documentation:
    global:
      storage:
        type: "GCS"
        storageSecretName: airbyte-config-secrets
        bucket:
          log: airbyte-bucket
          state: airbyte-bucket
          workloadOutput: airbyte-bucket
        gcs:
          projectId: <project-id>
    Later, during the deployment, this secret is used to populate the gcs-log-creds-volume volume for at least the airbyte-workload-api-server and airbyte-server charts.
    Issue: While this works for the airbyte-server chart, I've found an issue with airbyte-workload-api-server. The airbyte-workload-api-server deployment fails because of this error:
    MountVolume.SetUp failed for volume "gcs-log-creds-volume" : secret "airbyte-gcs-log-creds" not found
    In the end, the secret you've provided in your values.yaml is not used; the default one is kept instead.
    Code: In the airbyte-server chart deployment file:
    volumes:
      {{- if eq .Values.global.deploymentMode "oss" }}
      {{- if eq (lower (default "" .Values.global.storage.type)) "gcs" }}
      - name: gcs-log-creds-volume
        secret:
          secretName: {{ ternary (printf "%s-gcs-log-creds" ( .Release.Name )) .Values.global.storage.storageSecretName (not ((.Values.global.storage).storageSecretName)) }}
      {{- end }}
    While in the airbyte-workload-api-server chart deployment file:
    volumes:
      {{- if and (eq .Values.global.deploymentMode "oss") (eq (lower (default "" .Values.global.storage.type)) "gcs") }}
      - name: gcs-log-creds-volume
        secret:
          secretName: {{ ternary (printf "%s-gcs-log-creds" ( .Release.Name )) (.Values.global.credVolumeOverride) (eq .Values.global.deploymentMode "oss") }}
      {{- end }}
    It sounds like the Helm ternary should look like the one in airbyte-server, using .Values.global.storage.storageSecretName if it has been provided in the values.yaml:
    ...
    secretName: {{ ternary (printf "%s-gcs-log-creds" ( .Release.Name )) .Values.global.storage.storageSecretName (not ((.Values.global.storage).storageSecretName)) }}
    ...
    Given how small the fix might be, I was wondering if it was worth sending a PR. airbytehq/airbyte