# ask-community-for-troubleshooting
  • user
    10/22/2024, 9:22 PM
    #47256 Almost nothing under "Deploy Airbyte" leads to a working installation using Helm on K8s New discussion created by jesperbagge Page URL No response Description Hello. For three days I have tried to deploy Airbyte 1.1.0 on K8s using the official documentation and have failed monumentally. Almost every aspect of the documentation to deploy Airbyte is outright wrong.
    State and Logging Storage: For S3, the documentation details secret keys that should be named `s3-access-key-id` and `s3-secret-access-key`, and says I should reference their values using the key `global.storage.storageSecretName`. This breaks the Helm parser. According to an archived thread on Slack it should instead be `global.storage.secretName`; at least with this key the parser doesn't break here. However, the Helm deployment then fails because `Secret.stringData.AWS_ACCESS_KEY_ID` and `Secret.stringData.AWS_SECRET_ACCESS_KEY` have unknown objects of type "nil". It seems that the supplied secrets never get picked up. Adding `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` to my K8s secrets does not help. There is also a side note in the documentation that if I want to use another S3-compatible interface, the `endpoint` key should be supplied. The documentation does not mention where this key should go; perhaps under `global.storage.s3`, but I can't test that since the deployment fails anyhow.
    Secret Management: According to the documentation the four supported secret managers are AWS Secrets Manager, Google Secrets Manager, Azure Key Vault and HashiCorp Vault. However, there are only configuration examples for the first three. I asked ChatGPT if it knew how to configure HashiCorp Vault as a secrets manager for Airbyte; it gave a suggestion that looked promising, but the deployment failed due to Helm parsing errors.
    External database: The documentation first outlines how to disable the internal Postgres db by setting `postgresql.enabled: false`. This makes the Helm parser fail, complaining about "nil" in `Secret.stringData.DATABASE_PASSWORD` and `Secret.stringData.DATABASE_USER`. Skipping the setting `postgresql.enabled: false` allows the deployment to continue, but the `airbyte-bootloader` pod then fails because it cannot connect to the external database. After a lot of trial and error, and `printenv` inside the failing pod, I came to the conclusion that the only way to get at least something to work is to set the env `DATABASE_URL` to something like `jdbc:postgresql://external-db.com:5432/airbyte`. That was the only way to make the bootloader care about anything other than references to a K8s-internal service. It still ultimately fails, though, because the temporal pod fails to connect to a database; probably another reference somewhere that doesn't get set properly.
    All in all, the current state of the documentation and Helm chart is a mess. airbytehq/airbyte
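    For illustration, a minimal values.yaml sketch of the configuration described above, assuming the key names from the archived Slack thread; every name, bucket, and region here is a placeholder, unverified against any chart version:
    ```yaml
    # Hypothetical values.yaml sketch; follows the Slack thread's
    # global.storage.secretName rather than the documented storageSecretName.
    global:
      storage:
        type: "S3"
        secretName: airbyte-config-secrets      # placeholder secret name
        bucket:
          log: my-airbyte-bucket                # placeholder buckets
          state: my-airbyte-bucket
        s3:
          region: eu-north-1                    # placeholder region
          # endpoint: "https://s3.example.com"  # guessed home for an S3-compatible endpoint (untested)
    ---
    # Matching K8s Secret carrying the key names the docs mention:
    apiVersion: v1
    kind: Secret
    metadata:
      name: airbyte-config-secrets
    stringData:
      s3-access-key-id: AKIAEXAMPLE             # placeholder credentials
      s3-secret-access-key: exampleSecretKey
    ```
    The `DATABASE_URL` workaround from the post could likewise be expressed as an env override, e.g. `DATABASE_URL: "jdbc:postgresql://external-db.com:5432/airbyte"`, though where such an override belongs varies by chart version.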
  • Cale Anderson
    10/22/2024, 9:54 PM
    Has anyone else been getting this error when trying to set up a Postgres connector?
  • Ethan Brown
    10/23/2024, 12:15 AM
    I'm getting this error on a few connections with every sync.
    message='activity ScheduleToStart timeout', timeoutType=TIMEOUT_TYPE_SCHEDULE_TO_START
    It looks like some others have reported this but I don't see any resolutions. Has anyone else encountered and solved this before?
  • user
    10/23/2024, 1:30 AM
    #47265 Cannot deploy Airbyte to multiple namespaces New discussion created by col I'd like to have two deployments of Airbyte (you can think of these as prod and nonprod) within the same K8s cluster. This currently is not possible due to the `node-viewer` ClusterRole created by the Helm chart. As this is a cluster-wide resource, it conflicts when I try to deploy Airbyte to a second namespace. I believe this is an avoidable conflict, but it needs either:
    • a configuration option, something like `.Values.serviceAccount.createClusterRole`
    • a check to see if the role already exists, something like `if not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "node-viewer")`
    A template sketch combining both ideas follows below.
    airbytehq/airbyte
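    A hypothetical Helm template guard combining both suggestions; the `.Values.serviceAccount.createClusterRole` flag is the proposed option above, not an existing chart value, and the rules shown are illustrative:
    ```yaml
    # Sketch only: gate ClusterRole creation behind a proposed flag plus a
    # lookup, so a second release in another namespace skips the cluster-wide
    # resource. Note that lookup returns an empty map during `helm template`
    # and dry-runs, so the guard only takes effect at install/upgrade time.
    {{- if and .Values.serviceAccount.createClusterRole (not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "node-viewer")) }}
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: node-viewer
    rules:
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get", "list", "watch"]   # illustrative; mirror the chart's actual rules
    {{- end }}
    ```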
  • Sunil Jimenez
    10/23/2024, 2:20 AM
    @John (Airbyte) Hello. How are you guys? A couple of days ago I opened a ticket to solve an issue with one of my connectors. I'm sharing it here in case it is an easy thing to solve.
  • user
    10/23/2024, 4:50 AM
    #47268 [connector-request] Bird Eye New discussion created by aazam-gh Topic This is a suggestion to create a connector for Bird Eye using the UI Connector Builder. The website for the connector is birdeye. Relevant information API docs: https://developers.birdeye.com/ airbytehq/airbyte
  • Shubham
    10/23/2024, 7:44 AM
    Hello all, I am creating a custom connector for `WareIQ` (https://documenter.getpostman.com/view/17076115/U16nM5Tu#6bc956d6-d020-4029-b290-f78a7c72bc27) using the no-code connector builder. For the Orders endpoint, we have the options shown in the attached image as payload for the POST request. I need to insert the start and end date to make this stream incremental, but there is only one place in the payload where I need to provide both start and end date (as a range/list). How do I do this in the no-code connector builder?
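    One way this is sometimes handled in the low-code CDK, sketched assuming WareIQ accepts a two-element date list in the POST body; the payload and cursor field names are placeholders, not WareIQ's actual keys:
    ```yaml
    # Hypothetical fragment of a Connector Builder stream definition: a
    # DatetimeBasedCursor produces date slices, and both slice bounds are
    # interpolated into a single range field of the JSON payload.
    incremental_sync:
      type: DatetimeBasedCursor
      cursor_field: order_date                   # placeholder cursor field
      datetime_format: "%Y-%m-%d"
      start_datetime: "{{ config['start_date'] }}"
      end_datetime: "{{ now_utc().strftime('%Y-%m-%d') }}"
    retriever:
      requester:
        type: HttpRequester
        http_method: POST
        request_body_json:
          date_range:                            # placeholder payload key
            - "{{ stream_interval['start_time'] }}"
            - "{{ stream_interval['end_time'] }}"
    ```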
  • Sergi Gómez
    10/23/2024, 8:07 AM
    When I try to log in to my Airbyte Cloud account the page goes black....
  • Seb J
    10/23/2024, 8:26 AM
    Hello everyone, I use the GA4 connector (community) at version 2.4.2 and I get this error. According to https://github.com/airbytehq/airbyte/issues/39423, resolving the issue requires downgrading to version 2.0.3. Is the problem still present in versions higher than 2.4.2?
  • user
    10/23/2024, 8:31 AM
    #47273 Data USA New discussion created by Harmaton The API data source is a definitive place to explore US public data. Streams could include 1. Locations 2. Industries 3. Jobs 4. Universities 5. Degrees 6. Products 7. Reports, just to name a few. DOCS airbytehq/airbyte
  • user
    10/23/2024, 8:41 AM
    #47274 TikTok: Display API New discussion created by Harmaton The Display API displays a user's profile and videos on the profile. airbytehq/airbyte
  • Sergi Gómez
    10/23/2024, 8:58 AM
    Is Airbyte Cloud down??
  • user
    10/23/2024, 9:05 AM
    #47276 ClickHouse connector gives getResultSet not implemented error New discussion created by iluk807 I'm encountering an issue while trying to load data from ClickHouse using Airbyte. The process fails with the following error:
    getResultSet not implemented
    I’ve tried using different versions of the ClickHouse JDBC driver, but the error persists. Has anyone experienced this issue before or found a workaround for it? Any suggestions on how to resolve this would be greatly appreciated! Below are more detailed logs from the error:
    2024-10-18 10:55:45 source > 2024-10-18 10:55:45 ERROR i.a.c.d.j.StreamingJdbcDatabase$1(tryAdvance):109 - SQLState: 0A000, Message: getResultSet not implemented
    2024-10-18 10:56:01 source > 2024-10-18 10:56:01 ERROR i.a.c.u.CompositeIterator(close):126 - exception while closing
    2024-10-18 10:56:01 source > java.lang.RuntimeException: java.sql.SQLFeatureNotSupportedException: getResultSet not implemented
    2024-10-18 10:56:01 source >   at io.airbyte.cdk.db.jdbc.StreamingJdbcDatabase.lambda$unsafeQuery$0(StreamingJdbcDatabase.java:77) ~[airbyte-cdk-core-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at java.base/java.util.stream.AbstractPipeline.close(AbstractPipeline.java:323) ~[?:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.concurrency.VoidCallable.call(VoidCallable.java:15) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.DefaultAutoCloseableIterator.close(DefaultAutoCloseableIterator.java:53) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.LazyAutoCloseableIterator.close(LazyAutoCloseableIterator.java:56) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.concurrency.VoidCallable.call(VoidCallable.java:15) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.DefaultAutoCloseableIterator.close(DefaultAutoCloseableIterator.java:53) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.concurrency.VoidCallable.call(VoidCallable.java:15) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.DefaultAutoCloseableIterator.close(DefaultAutoCloseableIterator.java:53) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.concurrency.VoidCallable.call(VoidCallable.java:15) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.DefaultAutoCloseableIterator.close(DefaultAutoCloseableIterator.java:53) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.CompositeIterator.close(CompositeIterator.java:124) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.AutoCloseableIterators.lambda$appendOnClose$0(AutoCloseableIterators.java:106) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.concurrency.VoidCallable.call(VoidCallable.java:15) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.DefaultAutoCloseableIterator.close(DefaultAutoCloseableIterator.java:53) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.AutoCloseableIterators.lambda$appendOnClose$0(AutoCloseableIterators.java:106) ~[airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.concurrency.VoidCallable.call(VoidCallable.java:15) [airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.commons.util.DefaultAutoCloseableIterator.close(DefaultAutoCloseableIterator.java:53) [airbyte-cdk-dependencies-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.cdk.integrations.base.IntegrationRunner.readSerial(IntegrationRunner.java:275) [airbyte-cdk-core-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.cdk.integrations.base.IntegrationRunner.runInternal(IntegrationRunner.java:173) [airbyte-cdk-core-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.cdk.integrations.base.IntegrationRunner.run(IntegrationRunner.java:125) [airbyte-cdk-core-0.20.4.jar:?]
    2024-10-18 10:56:01 source >   at io.airbyte.integrations.source.clickhouse.ClickHouseSource.main(ClickHouseSource.java:134) [io.airbyte.airbyte-integrations.connectors-source-clickhouse-0.50.50.jar:?]
    airbytehq/airbyte
  • Seppo Puusa
    10/23/2024, 11:59 AM
    When deploying the Helm charts, is it possible to define persistent storage for the webapp pods? In the UI you can trigger it to download the latest versions of destinations and sources, but it seems this update does not survive pod restarts. I'm wondering: if I add a persistent volume to the pod, would it get used, so the updates I make in the UI survive pod restarts?
  • user
    10/23/2024, 12:46 PM
    #47285 [source-mongodb] support for standalone instances New discussion created by losblancoo Connector Name source-mongodb Connector Version every version > 1.0.0 Topic Every version prior to 1.0.0 had support for standalone instances (screenshot attached). Screenshot 2024-10-23 at 11 01 03 Every version > 1.0.0 supports only replica sets and Mongo Atlas. Screenshot 2024-10-23 at 11 02 42 Are there plans for reintroducing support for standalone instance types? Also, what was the reason for removing this feature? airbytehq/airbyte
  • Damien Querbes
    10/23/2024, 1:50 PM
    Hello 👋 , I am struggling with how to set up connector secrets (i.e. tokens) in my `values.yaml`. I plan to store these secrets in GCP Secret Manager. I have already referenced `gcp.json` in `values.yaml` as described in the docs, but I don't get how to map connector secrets (stored in GCP Secret Manager) to the Airbyte keys specific to connector tokens. Can anyone clarify this please? 🙏
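    For reference, a hypothetical values.yaml fragment for the GCP secrets-manager block; the key names are recalled from the chart docs and may differ by chart version:
    ```yaml
    # Sketch only: point the chart at a K8s secret holding the gcp.json
    # service-account key referenced above; all names are placeholders.
    global:
      secretsManager:
        type: googleSecretManager
        googleSecretManager:
          projectId: my-gcp-project              # placeholder project
          credentialsSecretName: gcp-cred-secrets
          credentialsSecretKey: gcp.json
    ```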
  • user
    10/23/2024, 2:33 PM
    #47296 [connector-request] Google Forms New discussion created by bala-ceg This is a suggestion to create a connector for Google Forms using the UI Connector Builder. The website for the connector is https://developers.google.com/forms/api/reference/rest/?apix=true If you want to try, claim this issue and start working on it following the steps below. Steps:
    • Comment in the issue and wait to be assigned to start working on it.
    • Map the API endpoints (get approval to move to step 3).
    • Describe steps to get credentials.
    • Create the connector using the UI Builder.
    Airbyte doesn't have sandbox credentials for this connector. You must have access/credentials to the service provider to create the connector. This is a suggestion, and there may be cases where creating the connector using the Builder won't be possible. To minimize the risk of investing a lot of time directly in creating the connector, we strongly recommend following the steps above. airbytehq/airbyte
  • user
    10/23/2024, 2:56 PM
    #47300 [connector-request] Mailtrap New discussion created by gemsteam This is a suggestion to create a connector for Mailtrap using the UI Connector Builder. The website for the connector is Mailtrap Relevant information API docs: https://api-docs.mailtrap.io/docs/mailtrap-api-docs/5tjdeg9545058-mailtrap-api Issue: #47301 airbytehq/airbyte
  • user
    10/23/2024, 3:01 PM
    #47302 [connector-request] Trip Advisor New discussion created by bala-ceg Overview This is a suggestion to create a connector for Trip Advisor using the UI Connector Builder. The website for the connector is https://tripadvisor-content-api.readme.io If you want to try, claim this issue and start working on it following the steps below. Steps:
    • Comment in the issue and wait to be assigned to start working on it.
    • Map the API endpoints (get approval to move to step 3).
    • Describe steps to get credentials.
    • Create the connector using the UI Builder.
    Airbyte doesn't have sandbox credentials for this connector. You must have access/credentials to the service provider to create the connector. This is a suggestion, and there may be cases where creating the connector using the Builder won't be possible. To minimize the risk of investing a lot of time directly in creating the connector, we strongly recommend following the steps above. airbytehq/airbyte
  • Nicolas Gutierrez
    10/23/2024, 3:36 PM
    I'm trying to run a full sync locally from MSSQL to BigQuery, but it keeps failing with vague error messages like `Attempted to close a destination which is already closed.` and `java.io.IOException: Broken pipe`. The replication orchestrator summarized the failures like so:
    2024-10-23 15:00:19 replication-orchestrator > failures: [ {
      "failureOrigin" : "replication",
      "internalMessage" : "No exit code found.",
      "externalMessage" : "Something went wrong during replication",
      "metadata" : {
        "attemptNumber" : 2,
        "jobId" : 8
      },
      "stacktrace" : "java.lang.IllegalStateException: No exit code found.\n\tat io.airbyte.workers.internal.ContainerIOHandle.getExitCode(ContainerIOHandle.kt:104)\n\tat io.airbyte.workers.internal.LocalContainerAirbyteDestination.getExitValue(LocalContainerAirbyteDestination.kt:119)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromDestination(BufferedReplicationWorker.java:493)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsync$2(BufferedReplicationWorker.java:215)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
      "timestamp" : 1729695494141
    }, {
      "failureOrigin" : "destination",
      "internalMessage" : "Destination process message delivery failed",
      "externalMessage" : "Something went wrong within the destination connector",
      "metadata" : {
        "attemptNumber" : 2,
        "jobId" : 8,
        "connector_command" : "write"
      },
      "stacktrace" : "io.airbyte.workers.internal.exception.DestinationException: Destination process message delivery failed\n\tat io.airbyte.workers.general.BufferedReplicationWorker.writeToDestination(BufferedReplicationWorker.java:451)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithTimeout$5(BufferedReplicationWorker.java:243)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\nCaused by: java.io.IOException: Broken pipe\n\tat java.base/sun.nio.ch.UnixFileDispatcherImpl.write0(Native Method)\n\tat java.base/sun.nio.ch.UnixFileDispatcherImpl.write(UnixFileDispatcherImpl.java:65)\n\tat java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:137)\n\tat java.base/sun.nio.ch.IOUtil.write(IOUtil.java:102)\n\tat java.base/sun.nio.ch.IOUtil.write(IOUtil.java:72)\n\tat java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:300)\n\tat java.base/sun.nio.ch.ChannelOutputStream.writeFully(ChannelOutputStream.java:68)\n\tat java.base/sun.nio.ch.ChannelOutputStream.write(ChannelOutputStream.java:105)\n\tat java.base/sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:309)\n\tat java.base/sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:381)\n\tat java.base/sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:357)\n\tat java.base/sun.nio.cs.StreamEncoder.lockedWrite(StreamEncoder.java:158)\n\tat java.base/sun.nio.cs.StreamEncoder.write(StreamEncoder.java:139)\n\tat java.base/java.io.OutputStreamWriter.write(OutputStreamWriter.java:219)\n\tat java.base/java.io.BufferedWriter.implFlushBuffer(BufferedWriter.java:178)\n\tat java.base/java.io.BufferedWriter.flushBuffer(BufferedWriter.java:163)\n\tat java.base/java.io.BufferedWriter.implWrite(BufferedWriter.java:334)\n\tat java.base/java.io.BufferedWriter.write(BufferedWriter.java:313)\n\tat java.base/java.io.Writer.write(Writer.java:278)\n\tat io.airbyte.workers.internal.VersionedAirbyteMessageBufferedWriter.write(VersionedAirbyteMessageBufferedWriter.java:39)\n\tat io.airbyte.workers.internal.LocalContainerAirbyteDestination.acceptWithNoTimeoutMonitor(LocalContainerAirbyteDestination.kt:139)\n\tat io.airbyte.workers.internal.LocalContainerAirbyteDestination.accept(LocalContainerAirbyteDestination.kt:96)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.writeToDestination(BufferedReplicationWorker.java:436)\n\t... 5 more\n",
      "timestamp" : 1729695499021
    }, {
      "failureOrigin" : "source",
      "internalMessage" : "Source process read attempt failed",
      "externalMessage" : "Something went wrong within the source connector",
      "metadata" : {
        "attemptNumber" : 2,
        "jobId" : 8,
        "connector_command" : "read"
      },
      "stacktrace" : "io.airbyte.workers.internal.exception.SourceException: Source process read attempt failed\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromSource(BufferedReplicationWorker.java:375)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithHeartbeatCheck$3(BufferedReplicationWorker.java:222)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\nCaused by: java.lang.IllegalStateException: No exit code found.\n\tat io.airbyte.workers.internal.ContainerIOHandle.getExitCode(ContainerIOHandle.kt:104)\n\tat io.airbyte.workers.internal.LocalContainerAirbyteSource.getExitValue(LocalContainerAirbyteSource.kt:90)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromSource(BufferedReplicationWorker.java:355)\n\t... 5 more\n",
      "timestamp" : 1729695499108
    }, {
      "failureOrigin" : "replication",
      "internalMessage" : "io.airbyte.workers.exception.WorkerException: Destination has not terminated.  This warning is normal if the job was cancelled.",
      "externalMessage" : "Something went wrong during replication",
      "metadata" : {
        "attemptNumber" : 2,
        "jobId" : 8
      },
      "stacktrace" : "java.lang.RuntimeException: io.airbyte.workers.exception.WorkerException: Destination has not terminated.  This warning is normal if the job was cancelled.\n\tat io.airbyte.workers.general.BufferedReplicationWorker$CloseableWithTimeout.lambda$close$0(BufferedReplicationWorker.java:545)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithTimeout$5(BufferedReplicationWorker.java:243)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\nCaused by: io.airbyte.workers.exception.WorkerException: Destination has not terminated.  This warning is normal if the job was cancelled.\n\tat io.airbyte.workers.internal.LocalContainerAirbyteDestination.close(LocalContainerAirbyteDestination.kt:65)\n\tat io.airbyte.workers.general.BufferedReplicationWorker$CloseableWithTimeout.lambda$close$0(BufferedReplicationWorker.java:543)\n\t... 5 more\n",
      "timestamp" : 1729695559154
    } ]
    I wasn't able to find anything more helpful in the logs but happy to post more snippets if that would be useful. Anyone have any suggestions for debugging?
  • Mert Ors
    10/23/2024, 3:47 PM
    Hey guys, I am fetching some Amazon Seller Partner data; every stream works fine except the sponsored_brands_report stream. Is there a reason why this stream does not fetch any data?
  • Mert Ors
    10/23/2024, 3:53 PM
    FYI it does not give any errors, it just fetches 0 rows; every other stream fetches data as normal and matches what's on the platform.
  • Mert Ors
    10/23/2024, 3:54 PM
    Another user is possibly having similar issues (https://github.com/airbytehq/airbyte/discussions/39102). Is there a possibility this is a bug, or am I missing something?
  • Marco Hemken
    10/23/2024, 5:11 PM
    How does one add an environment variable to the `orchestrator-repl-job` pods? Use case:
    • Logging won't write to the S3 bucket unless `AWS_REGION` is set; it doesn't work with `AWS_DEFAULT_REGION`.
    • Using Helm chart version `0.64.151`.
    A values.yaml sketch of one possible approach follows below.
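    One possible approach, assuming the chart exposes a `global.env_vars` passthrough that lands in the shared airbyte-env ConfigMap read by job pods (unverified for 0.64.151):
    ```yaml
    # Hypothetical values.yaml fragment; assumes env_vars entries propagate
    # to the orchestrator/replication job pods via the shared ConfigMap.
    global:
      env_vars:
        AWS_REGION: us-east-1   # placeholder region
    ```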
  • Arun Addagatla
    10/23/2024, 5:42 PM
    Hey guys, is there any way to fetch GitHub files using Airbyte?
  • Colin
    10/23/2024, 7:44 PM
    Are there plans to have the Low Code CDK (https://docs.airbyte.com/connector-development/config-based/tutorial/install-dependencies) work with Python 3.12?
  • user
    10/23/2024, 10:49 PM
    #47321 Need stream slicing in source-mysql to trigger updateTable instead of waiting too long, unsafely, while copying tmp files New discussion created by amelia-ay Connector Name source-mysql Connector Version 3.73 What step the error happened? During the sync Relevant information When we synchronize data from MySQL to Databricks using airbyte-helm, the synchronized table has TB-sized records. We noticed that the data is continuously written to tmp files, and only merged into the destination table once the whole stream succeeds. While loading a large number of tmp files, the sync is highly likely to be interrupted, resulting in "Sync Partially Succeeded". However, this does not solve the problem well: the data synchronization still takes several days, and it also gets stuck during the final "merge into...". If the source could do stream slicing, this problem might be solved; maybe the stream could be sliced by running hours or by record size. Also, until such a method is implemented, is there any other way to increase the frequency of table updates to obtain a similar result? We tested buildImage: mysql-dev Relevant log output replication-orchestrator > Records read: 73790000 (84 GB) Contribute • Yes, I want to contribute airbytehq/airbyte
  • user
    10/23/2024, 10:50 PM
    #47322 [helm] Pipeline Error during Stripe to PostgreSQL Sync New discussion created by ayanguas Helm Chart Version Helm 1.1.0 What step the error happened? During the Sync Relevant information I encountered an error during a sync operation between Stripe and PostgreSQL using Airbyte. The synchronization fails at the replication stage, with the following error:
    Warning from replication: Something went wrong during replication
    
    message='io.temporal.serviceclient.CheckedExceptionWrapper: io.airbyte.workers.exception.WorkerException: Init container error encountered while processing workload for id: 30263b0f-ca05-44ec-8907-9fc7845dfe44_1_4_sync. Encountered exception of type: class com.amazonaws.SdkClientException. Exception message: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: You must specify a value for roleArn and roleSessionName, com.amazonaws.auth.profile.ProfileCredentialsProvider@402f61f5: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@3dc55719: Unauthorized (Service: null; Status Code: 401; Error Code: null; Request ID: null; Proxy: null)].', type='java.lang.RuntimeException', nonRetryable=false
    Source Stripe image: airbyte/source-stripe:5.6.2 Destination Postgres image: airbyte/destination-postgres:2.4.0 Orchestrator image: airbyte/container-orchestrator:1.1.0 I would appreciate any guidance on resolving this issue.
    Relevant log output:
    2024-10-22 08:45:17 ERROR i.a.w.l.p.h.FailureHandler(apply):39 - Pipeline Error
    io.airbyte.workload.launcher.pipeline.stages.model.StageError: java.lang.RuntimeException: Init container for Pod: pods did not complete successfully. Actual termination reason: Error.
        at io.airbyte.workload.launcher.pipeline.stages.model.Stage.apply(Stage.kt:46) ~[io.airbyte-airbyte-workload-launcher-1.1.0.jar:?]
        at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:38) ~[io.airbyte-airbyte-workload-launcher-1.1.0.jar:?]
        at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.$$access$$apply(Unknown Source) ~[io.airbyte-airbyte-workload-launcher-1.1.0.jar:?]
        at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Exec.dispatch(Unknown Source) ~[io.airbyte-airbyte-workload-launcher-1.1.0.jar:?]
        at io.micronaut.context.AbstractExecutableMethodsDefinition$DispatchedExecutableMethod.invoke(AbstractExecutableMethodsDefinition.java:456) ~[micronaut-inject-4.6.5.jar:4.6.5]
        at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:134) ~[micronaut-aop-4.6.5.jar:4.6.5]
        at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.doIntercept(InstrumentInterceptorBase.kt:61) ~[io.airbyte.airbyte-metrics-metrics-lib-1.1.0.jar:?]
        at io.airbyte.metrics.interceptors.InstrumentInterceptorBase.intercept(InstrumentInterceptorBase.kt:44) ~[io.airbyte.airbyte-metrics-metrics-lib-1.1.0.jar:?]
        at io.micronaut.aop.chain.MethodInterceptorChain.proceed(MethodInterceptorChain.java:143) ~[micronaut-aop-4.6.5.jar:4.6.5]
        at io.airbyte.workload.launcher.pipeline.stages.$LaunchPodStage$Definition$Intercepted.apply(Unknown Source) ~[io.airbyte-airbyte-workload-launcher-1.1.0.jar:?]
        at io.airbyte.workload.launcher.pipeline.stages.LaunchPodStage.apply(LaunchPodStage.kt:24) ~[io.airbyte-airbyte-workload-launcher-1.1.0.jar:?]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:158) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2571) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.request(MonoFlatMap.java:194) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2367) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:117) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:193) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53) ~[reactor-core-3.6.9.jar:3.6.9]
        at reactor.core.publisher.Mono.subscribe(Mono.java:4560) ~[reactor-core-3.6.9.jar:3.6.9]
    airbytehq/airbyte
  • user
    10/23/2024, 10:50 PM
    Comment on #47322 [helm] Pipeline Error during Stripe to PostgreSQL Sync Discussion answered by ayanguas Upon further investigation, I discovered that the error was due to a misconfiguration in the values.yaml file. Specifically, `authenticationType` should be set to `credentials` instead of `instanceProfile`. Without this adjustment, the sync operation does not work as expected. I hope this helps anyone encountering a similar issue. airbytehq/airbyte
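    In values.yaml terms, the fix described above would look roughly like this; the key layout follows the chart's storage block and is unverified for Helm 1.1.0, with the region as a placeholder:
    ```yaml
    # Hypothetical sketch of the described fix.
    global:
      storage:
        type: "S3"
        s3:
          region: eu-west-1                 # placeholder
          authenticationType: credentials   # instead of instanceProfile
    ```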
  • Shubham
    10/24/2024, 4:33 AM
    Hello, I'd like to know how others are handling the JSON columns (instead of nested columns) being loaded by `destination-bigquery`?