# ask-community-for-troubleshooting

    Rami M Theeb

    12/06/2022, 9:14 PM
    Hey guys, I am facing an issue with my k8s Airbyte deployment: it looks like a connection is stuck and spamming retries. I can see these logs:
    Copy code
    2022-12-06T16:53:37.423923573Z stdout F {"level":"warn","ts":"2022-12-06T16:53:37.423Z","msg":"history size exceeds warn limit.","shard-id":4,"address":"10.1.1.248:7234","component":"history-cache","wf-namespace-id":"769c5c50-2fb9-4b30-a8d3-ea88dd398f4c","wf-id":"connection_manager_84a09e16-67ec-432b-9cc3-1224516d6ad2","wf-run-id":"a3dd8fcd-19b6-4fc9-8faf-b7c3075bf0c0","wf-history-size":32381433,"wf-event-count":4380,"logging-call-at":"context.go:920"}
    Copy code
    2022-12-06T16:52:07.390203639Z stdout F java.lang.IllegalStateException: Transitioning Job 1881 from JobStatus SUCCEEDED to INCOMPLETE is not allowed. The only valid statuses that an be transitioned to from SUCCEEDED are []
    2022-12-06T18:52:07+02:00
    2022-12-06T16:52:07.390170567Z stdout F 2022-12-06 16:52:07 
    WARN
     i.t.i.a.ActivityTaskExecutors$BaseActivityTaskExecutor(execute):114 - Activity failure. ActivityId=d551e5c8-e3ea-336d-82e8-c03469ea67b7, activityType=AttemptFailureWithAttemptNumber, attempt=1
    2022-12-06T18:51:49+02:00
    2022-12-06T16:51:49.228566415Z stdout F 2022-12-06 16:51:49 
    INFO
     i.a.w.t.s.ConnectionManagerWorkflowImpl(runMandatoryActivityWithOutput):628 - Waiting PT10M before restarting the workflow for connection 1acb8818-fdba-4780-840e-d552bd2d4684, to prevent spamming temporal with restarts.
    These errors keep showing up in the logs, and for some reason I can't delete the connection either. Any help?
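Not from the Airbyte docs, just a hedged sketch of how one could inspect (and, deliberately, terminate) the stuck connection_manager Temporal workflow named in the logs, assuming the temporalio Python SDK and Temporal port-forwarded to localhost:7233. Terminating platform workflows can have side effects, so treat it as illustrative only.

```python
# Illustrative only: inspect (and optionally terminate) the stuck Temporal
# workflow mentioned in the logs. Assumes the temporalio Python SDK and that
# Temporal is reachable on localhost:7233; terminating platform workflows can
# have side effects, so this is not a recommended fix.
import asyncio
from temporalio.client import Client

WORKFLOW_ID = "connection_manager_84a09e16-67ec-432b-9cc3-1224516d6ad2"  # taken from the logs above

async def main():
    client = await Client.connect("localhost:7233", namespace="default")
    handle = client.get_workflow_handle(WORKFLOW_ID)
    desc = await handle.describe()
    print(desc.status)
    # await handle.terminate("stuck retry loop")  # uncomment deliberately

asyncio.run(main())
```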

    Haritha Gunawardana

    12/06/2022, 10:51 PM
    Hello! A question on how Airbyte transformations work. I have a use case where I read data from a Kafka topic (source data is in JSON format) and transform it into our data models (common data model, target JSON format). Has anyone in this group done similar work? I need help understanding how to set up the transformation (converting the source JSON to the target JSON).
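As a trivial, Airbyte-agnostic illustration of the source-JSON to target-JSON mapping being asked about (Airbyte's own transformations run through normalization/dbt rather than Python), with both the Kafka payload shape and the common data model entirely hypothetical:

```python
# Hypothetical source-to-target JSON mapping, independent of how Airbyte/dbt
# would actually run it. Both the source shape and the "common data model" are made up.
import json

def to_common_model(source: dict) -> dict:
    # Flatten/rename the source fields into the target schema.
    return {
        "customerId": source["user"]["id"],
        "fullName": f'{source["user"]["first_name"]} {source["user"]["last_name"]}',
        "eventTime": source["ts"],
    }

kafka_value = '{"user": {"id": 7, "first_name": "Ada", "last_name": "Lovelace"}, "ts": "2022-12-06T22:51:00Z"}'
print(json.dumps(to_common_model(json.loads(kafka_value))))
```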

    Tmac Han

    12/07/2022, 1:22 AM
    Hello, I am testing normalization on my local machine, but got an error like this:

    Praveenraaj K S

    12/07/2022, 4:49 AM
    Hello team, for upgrading Airbyte to the latest version we couldn't find any demo videos or documentation resources other than https://docs.airbyte.com/operator-guides/upgrading-airbyte/. It would be helpful if you could publish more detail on this, such as videos and step-by-step information.

    Yusuf Fahry

    12/07/2022, 5:28 AM
    Hi team, I saw there is documentation about JIRA source for Airbyte Cloud. But somehow I couldn't find it (pictured). Could anyone please help me figure out how to set up the JIRA source?

    Tmac Han

    12/07/2022, 7:23 AM
    Hello team, I always get this error when using normalization, but when I run 'dbt run' with dbt on my local machine there is no error.

    Nick Scheifler

    12/07/2022, 8:03 AM
    Hi Airbyte community, has anyone been able to resolve issues with Postgres<>BigQuery jobs always failing at the Normalization step after upgrading Airbyte? It seemingly gets all the way through normalization successfully and then just fails at the end. The job is able to run with normalization turned off. At this point it seems like our only option is to revert the upgrade. Logs attached 🙏
    airbyte_normalization_failed_log_example.txt

    Sebastian Brickel

    12/07/2022, 8:22 AM
    Hey team, while we all know that storage is cheap these days, there is no need to take a sledgehammer to crack a nut. We all know that (self-hosted) Airbyte uses a lot of storage, and I think we can agree that the recommendation of at least 30 GB of disk per node is the absolute bare minimum. So my question: did anyone ever look into how much disk space is needed on average per connector? I assume that would be the easiest metric for determining how much storage your Airbyte instance actually needs. Or how do you approach this issue?
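A rough sketch of one way to approximate per-connector storage on a self-hosted node by summing pulled Docker image sizes; this ignores workspace and log volumes, assumes Docker is the runtime, and the "airbyte" name filter is just a convenience.

```python
# Rough per-connector image size report, assuming Docker is the runtime on the node.
# `docker image ls` reports human-readable sizes (e.g. "415MB"), so this is only an
# approximation, and it does not count workspace/log volumes.
import subprocess
from collections import defaultdict

def airbyte_image_sizes():
    out = subprocess.run(
        ["docker", "image", "ls", "--format", "{{.Repository}}:{{.Tag}} {{.Size}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    sizes = defaultdict(list)
    for line in out.splitlines():
        name, size = line.rsplit(" ", 1)
        if "airbyte" in name:  # keep only Airbyte platform/connector images
            sizes[name].append(size)
    return dict(sizes)

if __name__ == "__main__":
    for name, size in sorted(airbyte_image_sizes().items()):
        print(f"{name}: {', '.join(size)}")
```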

    Aleksandar

    12/07/2022, 9:10 AM
    Hello! I am adding a new private source connector built with the HTTP client. My Docker image is accessible to the Airbyte UI, but once I add it via
    Add new connector
    in Settings -> New Connector, I see no error, yet it does not appear in the sources and I cannot find it anywhere. I cannot tell whether there is a problem with my image or I am missing a step in making my source visible in the UI. Any help would be greatly appreciated! NOTE: Locally I had the same problem, so I connected to the Airbyte database (e.g.
    airbyte-db
    ) and updated the field
    public=true
    in the
    actor_definition
    table. After that I am able to see it.
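A hedged sketch of the database workaround described above, assuming direct access to the internal airbyte-db Postgres (default docker/docker credentials) and that the connector row can be found by its Docker repository name; the column names mirror the actor_definition table mentioned above but are assumptions about Airbyte's internal schema, which can change between versions.

```python
# Hypothetical sketch: mark a custom connector definition as public so it shows up
# in the UI, per the workaround described above. This touches Airbyte's internal
# schema (assumed column names), so use at your own risk.
import psycopg2

conn = psycopg2.connect(
    host="airbyte-db", port=5432, dbname="airbyte",
    user="docker", password="docker",  # default docker-compose credentials (assumption)
)
with conn, conn.cursor() as cur:
    cur.execute(
        "UPDATE actor_definition SET public = true WHERE docker_repository = %s",
        ("myorg/source-my-private-api",),  # hypothetical image name
    )
    print(f"updated {cur.rowcount} row(s)")
conn.close()
```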

    Steven

    12/07/2022, 11:18 AM
    Hey everyone. We're trying to use the Databricks Lakehouse destination connector to get our data into Databricks. It's not working and for us it's currently not entirely clear why. We're getting the following error:
    Copy code
    Failed to finalize copy to temp table due to: java.sql.SQLException: [Databricks][DatabricksJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: org.apache.hive.service.cli.HiveSQLException: Error running query: Failure to initialize configurationInvalid configuration value detected for fs.azure.account.key
    We're using the Azure Blob Storage data source instead of the default S3. We can see that Airbyte is writing files into the Azure Blob Storage account correctly, but we think Databricks might have trouble accessing that same location. We have tried multiple things:
    • adding the Azure Blob Storage account as an "external location" in the Databricks Unity Catalog with the associated storage credential
    • writing directly to the managed Azure Blob Storage account
    • adding the Databricks Access Connector as a storage contributor to the storage account
    • opening all network rules
    • both regular Azure Blob Storage and ADLS Gen2
    • setting up the connection from Airbyte to the Azure Blob Storage account using SAS token auth (the "test connection" button in Airbyte works correctly)
    Any pointers? --- Actually, solved this just now: apparently you should point Airbyte at an actual Databricks compute cluster; pointing it at a SQL Warehouse does NOT work. Now getting another error, which does seem to be a problem in the Databricks Lakehouse integration itself:
    Copy code
    CREATE TABLE public._airbyte_tmp_tvd_pokemon (_airbyte_ab_id string, _airbyte_emitted_at string, `abilities` , `base_experience` , `forms` , `game_indices` , `height` , `held_items` , `id` , `is_default ` , `location_area_encounters` , `moves` , `name` , `order` , `species` , `sprites` , `stats` , `types` , `weight` ) USING csv LOCATION 'abfss:REDACTED_LOCAL_PART@REDACTED.dfs.core.windows.net/d2b9f209-e36e-451c-9a66-6d29a712c699/public/_airbyte_tmp_tvd_pokemon/' options ("header" = "true", "multiLine" = "true")
    Looks like the types of the columns are missing. I guess something's going wrong here? https://github1s.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/[…]ination/databricks/DatabricksAzureBlobStorageStreamCopier.java

    Steven

    12/07/2022, 11:26 AM
    @Ashley Baer maybe you can help

    KalaSai

    12/07/2022, 11:58 AM
    Hello, I am facing an issue with airbyte/worker. Can anyone list the required env variables for airbyte/worker when deploying on Kubernetes? I have read many posts about issues related to Minio/S3 config, but I want to use the built-in default state and log storage.
    Copy code
    ERROR i.m.r.Micronaut(handleStartupException):338 - Error starting Micronaut server: Failed to inject value for parameter [secretPersistence] of method [secretsHydrator] of class: io.airbyte.config.persistence.split_secrets.SecretsHydrator
    
    Message: No bean of type [io.airbyte.config.persistence.split_secrets.SecretPersistence] exists for the given qualifier: @Named('secretPersistence').

    Aleksei Abisev

    12/07/2022, 12:10 PM
    I am trying to set up the
    Paypal transactions
    source. Everything looks fine, but it always returns 0 transactions. No errors, nothing. The Balances stream does return records.
    Copy code
    2022-12-07 11:51:50 source > Starting syncing SourcePaypalTransaction
    2022-12-07 11:51:50 source > Syncing stream: transactions 
    2022-12-07 11:51:52 source > Maximum allowed start_date is 2022-12-07 09:29:59+00:00 based on info from API response
    2022-12-07 11:53:13 source > Read 0 records from transactions stream
    2022-12-07 11:53:13 source > Finished syncing transactions
    2022-12-07 11:53:13 source > SourcePaypalTransaction runtimes:
    Syncing stream transactions 0:01:23.353576
    When sending direct requests to the API endpoint https://api-m.paypal.com/v1/reporting/transactions, transactions are returned properly. Any suggestions on what I should try?
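For comparison, a minimal sketch of the kind of direct request described above against PayPal's Transaction Search API, using a client-credentials OAuth token; the credentials and date window are placeholders.

```python
# Rough reproduction of a direct call to PayPal's Transaction Search API.
# Client credentials and the date window are placeholders.
import requests

CLIENT_ID = "..."      # placeholder
CLIENT_SECRET = "..."  # placeholder

token = requests.post(
    "https://api-m.paypal.com/v1/oauth2/token",
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={"grant_type": "client_credentials"},
).json()["access_token"]

resp = requests.get(
    "https://api-m.paypal.com/v1/reporting/transactions",
    headers={"Authorization": f"Bearer {token}"},
    params={
        "start_date": "2022-12-01T00:00:00-0000",
        "end_date": "2022-12-07T00:00:00-0000",
    },
)
resp.raise_for_status()
print(len(resp.json().get("transaction_details", [])), "transactions")
```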

    Jonny Wray

    12/07/2022, 12:54 PM
    Hello. I have a quick question regarding using the cloud service. I've been developing a source connector locally and now have the docker image pushed to a Dockerhub repo under my account. Is it possible to use this to set up a source in the cloud version?

    Daryl Thomas

    12/07/2022, 2:17 PM
    Hi team, I want to sync a few MySQL database tables that are more than 50 GB in size to a ClickHouse data warehouse. I do not want to move historic data; I only want to sync data up to a month old, along with the data that gets added to the tables daily. Since Airbyte syncs all the data on the first run, I want to know if the following approach is feasible: is it possible to make Airbyte sync incrementally starting from the latest record that was added to the destination database?
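Not a statement of what Airbyte supports, just a minimal illustration of the cursor-based incremental idea being asked about: read only rows newer than the latest value already present in the destination. The table, column, and connection details are hypothetical.

```python
# Hypothetical illustration of the cursor-based incremental idea: pull only rows
# newer than the latest cursor value already loaded in the destination.
# Table/column names and connection details are placeholders.
import mysql.connector

# e.g. obtained with: SELECT max(updated_at) FROM events  -- run on the destination
cursor_value = "2022-11-07 00:00:00"

conn = mysql.connector.connect(host="mysql.internal", user="reader", password="...", database="app")
cur = conn.cursor()
cur.execute(
    "SELECT * FROM events WHERE updated_at > %s ORDER BY updated_at",
    (cursor_value,),
)
for row in cur:
    ...  # hand each new row to the destination load step
conn.close()
```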

    Brian Olsen

    12/07/2022, 2:35 PM
    Hey all, I'm reading from a CSV file, and despite setting the dtype, my connection is still reading all types as strings. To make things simple, I tried to just set Reader Options to
    {"dtype" : "object"}
    to see if I was formatting something wrong. It still shows all strings in the connection, though.
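For context, a standalone pandas sketch of how the dtype reader option behaves, assuming the File source forwards reader options to pandas.read_csv: "object" is pandas' catch-all (string-like) dtype, so per-column dtypes are what produce typed columns. Column names are made up.

```python
# Standalone pandas illustration of the dtype reader option, assuming the File
# source forwards reader options to pandas.read_csv. Column names are hypothetical.
import io
import pandas as pd

csv = io.StringIO("id,amount,label\n1,3.5,a\n2,4.0,b\n")

# dtype="object" keeps every column as Python objects (effectively strings here):
df_obj = pd.read_csv(csv, dtype="object")
print(df_obj.dtypes)  # all 'object'

csv.seek(0)
# Per-column dtypes produce typed columns instead:
df_typed = pd.read_csv(csv, dtype={"id": "int64", "amount": "float64", "label": "string"})
print(df_typed.dtypes)
```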

    KalaSai

    12/07/2022, 2:58 PM
    Hello, I have a beginner question. We are deploying on Kubernetes (non-managed), not via kustomize or helm but manually, using the same overlay structure. My question is: when we upgrade to the next version, do the versions of the different services all need to be in sync, or can they be different versions?

    José Ferraz Neto

    12/07/2022, 3:41 PM
    Hello, can I pull images for a custom connector from a private repository even when I'm using Airbyte cloud?

    Jonny Wray

    12/07/2022, 5:48 PM
    A question about the best way to structure a solution - any thoughts welcome. The basic problem: daily data lives on one endpoint and historical data on another, and I need to merge the two.
    1. One API gives access to all data for every stock in a stock exchange (e.g. NYSE). A date can be given as a query parameter.
    2. Another API gives all historic data for one stock. You'd need to iterate over all stocks in an exchange to get all historic data.
    To backfill the database I could call API 1 from an initial start date in the 70s, but this would hit daily API limits very quickly; they allow 1000 (exchange, date) pairs per day (which is fine for daily updates). Backfilling via API 2 would mean a call for every possible stock, which would be much easier on the rate limiting, but that API isn't suitable for daily updates - that's the purpose of API 1. Any suggestions on how to go about backfilling the DB such that everything is compatible with incremental sync for the daily updates?
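A hedged sketch of the two-phase idea implied above: backfill once per stock via the historical endpoint, then switch to the per-(exchange, date) endpoint for daily incremental updates. Every function and the store interface here are hypothetical placeholders.

```python
# Hypothetical two-phase loader: one-off backfill per stock, then daily
# incremental pulls per (exchange, date). Both fetch functions are placeholders.
from datetime import date, timedelta

def fetch_history_for_stock(symbol):        # placeholder for API 2
    ...

def fetch_exchange_for_date(exchange, d):   # placeholder for API 1
    ...

def backfill(symbols, store):
    # Phase 1: one call per stock, easy on the (exchange, date) rate limit.
    for symbol in symbols:
        for record in fetch_history_for_stock(symbol):
            store.upsert(record)            # idempotent writes keep this restartable

def daily_update(exchange, store, since: date):
    # Phase 2: incremental sync, one (exchange, date) pair per day going forward.
    d = since
    while d <= date.today():
        for record in fetch_exchange_for_date(exchange, d):
            store.upsert(record)
        d += timedelta(days=1)
```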

    Emma Forman Ling

    12/07/2022, 6:12 PM
    Hey y'all! I'm working on creating a new destination connector and would like it to support CDC and the incremental append sync mode. I'm a little confused about what is entailed in supporting CDC.
    1. It appears from the example in the overview doc that you just add a boolean delete field to records in the destination. Is that correct? Is it expected that all destination connectors handle CDC correctly? I didn't see it in the spec docs.
    2. In this post it says that incremental append sync mode requires dbt support, but the docs linked don't mention that. Is dbt support required to implement append sync for a destination connector?
    EDIT: I found a bunch of destination connectors in the repo that have
    supportsDBT: false
    and support
    append
    sync mode, so I think that blog post is just outdated.
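For what it's worth, a minimal sketch of what a CDC delete record might look like when appended by a destination; the _ab_cdc_* metadata column names follow Airbyte's CDC convention as I understand it, so treat them as assumptions rather than a spec reference.

```python
# Hypothetical shape of a CDC delete event as it might reach a destination in
# append mode: the row data plus _ab_cdc_* metadata, where a non-null
# _ab_cdc_deleted_at marks a deletion. Column names assumed, not a spec reference.
import json

record = {
    "id": 42,
    "name": "widget",
    "_ab_cdc_updated_at": "2022-12-07T18:00:00Z",
    "_ab_cdc_deleted_at": "2022-12-07T18:00:00Z",  # null for inserts/updates
}

def is_cdc_delete(row: dict) -> bool:
    """In append mode the destination just stores the row; downstream models
    (or normalization) use the deleted_at marker to filter or soft-delete."""
    return row.get("_ab_cdc_deleted_at") is not None

print(is_cdc_delete(record), json.dumps(record))
```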

    Marcel Coetzee

    12/07/2022, 6:20 PM
    Can anyone tell me how to check the compression codec used when saving to S3? I selected different compression codecs, but the files are all saved with a
    .parquet
    postfix, instead of something more descriptive like
    .snappy.parquet
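One way to check the codec independent of the file name is to read the Parquet metadata itself; a minimal pyarrow sketch, assuming the file has been pulled locally (the path is a placeholder):

```python
# Inspect the compression codec recorded in a Parquet file's metadata,
# independent of the file name. The path is a placeholder.
import pyarrow.parquet as pq

meta = pq.ParquetFile("part-0001.parquet").metadata
for rg in range(meta.num_row_groups):
    for col in range(meta.num_columns):
        c = meta.row_group(rg).column(col)
        print(rg, c.path_in_schema, c.compression)
```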

    KalaSai

    12/07/2022, 8:07 PM
    Hello, I am facing the same issue as this one : https://github.com/airbytehq/airbyte/issues/16587. Is there a fix planned for this?

    Sam Stoelinga

    12/07/2022, 8:14 PM
    I'm trying to set up Mongo as a source in my local environment. Mongo is listening on 127.0.0.1:27017, however I'm still getting this error: "Message: Unable to execute any operation on the source!". Anyone got ideas on how to troubleshoot further?
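A minimal pymongo reachability check run outside Airbyte; note that if Airbyte runs in Docker, 127.0.0.1 inside the connector container is the container itself, so host.docker.internal (an assumption for Docker Desktop style setups) or the host's LAN IP is the more interesting address to test.

```python
# Quick reachability check for the Mongo source, run outside Airbyte.
# host.docker.internal is an assumption for Docker Desktop setups; from inside a
# connector container, 127.0.0.1 points at the container itself, not the host.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

for host in ("127.0.0.1", "host.docker.internal"):
    try:
        client = MongoClient(host, 27017, serverSelectionTimeoutMS=3000)
        client.admin.command("ping")
        print(f"{host}: reachable, databases = {client.list_database_names()}")
    except PyMongoError as exc:
        print(f"{host}: failed -> {exc}")
```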

    Jaye Howell

    12/07/2022, 8:21 PM
    We are trying to get Airbyte up and running on EKS with Stripe and an in-house Postgres database as sources. The destination is Snowflake. We are able to establish the connections for both source and destination and have verified the credentials are correct, and setting up the connection is able to pull the schemas for the sources. The issue is that when running the sync it keeps failing. We have been through the logs but it's unclear where the issue could be or how we can get more detailed logs. The message we seem to get is
    Copy code
    Failure Origin: airbyte_platform, Message: Something went wrong within the airbyte platform
    1:46PM 12/07
    2022-12-07 20:15:36 - Additional Failure Information: scheduledEventId=72, startedEventId=73, activityType='RunWithJobOutput', activityId='d9c45409-2372-30d6-ae8f-29bdb4949aa2', identity='', retryState=RETRY_STATE_MAXIMUM_ATTEMPTS_REACHED
    I have downloaded the full logs; here is the last part of the log... note, this was trying to write to S3 instead of Snowflake as a test.
    Copy code
    22-12-07 20:05:32 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - integration args: {check=null, config=source_config.json}
    2022-12-07 20:05:32 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Running integration: io.airbyte.integrations.destination.s3.S3Destination
    2022-12-07 20:05:32 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Command: CHECK
    2022-12-07 20:05:32 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Integration config: IntegrationConfig{command=CHECK, configPath='source_config.json', catalogPath='null', statePath='null'}
    2022-12-07 20:05:33 [33mWARN[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):117 - Unknown keyword order - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
    2022-12-07 20:05:33 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - S3 format config: {"format_type":"Avro","compression_codec":{"codec":"no compression"}}
    2022-12-07 20:05:33 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Creating S3 client...
    2022-12-07 20:05:37 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Started testing if IAM user can call listObjects on the destination bucket
    2022-12-07 20:05:39 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Finished checking for listObjects permission
    2022-12-07 20:05:39 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Started testing if all required credentials assigned to user for single file uploading
    2022-12-07 20:05:40 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Finished checking for normal upload mode
    2022-12-07 20:05:40 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Started testing if all required credentials assigned to user for multipart upload
    2022-12-07 20:05:40 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Initiated multipart upload to mudflap.data.snowflake.loadstage//airbyte/test_1670443540085 with full ID MMGkcQrUGmDTk7znWge4J9yrJ9nXqRkFZu8QZ4G.ZNB3LipHsG64nk.nms6wXPMaFjf5RUQon4Wg2E5PhRmQtqphGE2j3Z62l4zegSJj0Xazg1jzokYue88h1NrWeHubM.vEC8TW8veSIip6r72r_3QX2oz7FKRostkQ1TCEOns-
    2022-12-07 20:05:40 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Called close() on [MultipartOutputStream for parts 1 - 10000]
    2022-12-07 20:05:40 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Called close() on [MultipartOutputStream for parts 1 - 10000]
    2022-12-07 20:05:40 [33mWARN[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):117 - [MultipartOutputStream for parts 1 - 10000] is already closed
    2022-12-07 20:05:40 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - [Manager uploading to mudflap.data.snowflake.loadstage//airbyte/test_1670443540085 with id MMGkcQrUG...Q1TCEOns-]: Uploading leftover stream [Part number 1 containing 3.34 MB]
    2022-12-07 20:05:41 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - [Manager uploading to mudflap.data.snowflake.loadstage//airbyte/test_1670443540085 with id MMGkcQrUG...Q1TCEOns-]: Finished uploading [Part number 1 containing 3.34 MB]
    2022-12-07 20:05:41 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - [Manager uploading to mudflap.data.snowflake.loadstage//airbyte/test_1670443540085 with id MMGkcQrUG...Q1TCEOns-]: Completed
    2022-12-07 20:05:41 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Finished verification for multipart upload mode
    2022-12-07 20:05:41 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Completed integration: io.airbyte.integrations.destination.s3.S3Destination
    2022-12-07 20:05:41 [32mINFO[m i.a.w.i.DefaultAirbyteStreamFactory(internalLog):120 - Completed destination: io.airbyte.integrations.destination.s3.S3Destination
    2022-12-07 20:05:44 [32mINFO[m i.a.w.p.KubePodProcess(close):737 - (pod: data-eng / destination-s3-check-11-0-nixlg) - Closed all resources for pod
    2022-12-07 20:05:44 [32mINFO[m i.a.w.t.TemporalAttemptExecution(get):162 - Stopping cancellation check scheduling...
    2022-12-07 20:05:44 [32mINFO[m i.a.c.i.LineGobbler(voidCall):114 - 
    2022-12-07 20:05:44 [32mINFO[m i.a.c.i.LineGobbler(voidCall):114 - ----- END CHECK -----
    2022-12-07 20:05:44 [32mINFO[m i.a.c.i.LineGobbler(voidCall):114 - 
    2022-12-07 20:05:44 [33mWARN[m i.t.i.w.ActivityWorker$TaskHandlerImpl(logExceptionDuringResultReporting):365 - Failure during reporting of activity result to the server. ActivityId = d9c45409-2372-30d6-ae8f-29bdb4949aa2, ActivityType = RunWithJobOutput, WorkflowId=connection_manager_145da5f5-a373-4b9d-8957-0383c8946517, WorkflowType=ConnectionManagerWorkflow, RunId=68b13988-1ec6-48cf-8475-3aa10c94288e
    io.grpc.StatusRuntimeException: NOT_FOUND: invalid activityID or activity already timed out or invoking workflow is completed
    	at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271) ~[grpc-stub-1.50.2.jar:1.50.2]
    	at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252) ~[grpc-stub-1.50.2.jar:1.50.2]
    	at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165) ~[grpc-stub-1.50.2.jar:1.50.2]
    	at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.respondActivityTaskCompleted(WorkflowServiceGrpc.java:3840) ~[temporal-serviceclient-1.17.0.jar:?]

    Tomer Mesika

    12/07/2022, 9:08 PM
    Hello team, I am trying to move a very big single table from PostgreSQL to Snowflake (1,800 GB). I have already managed to move several other tables between the same source and destination. The Airbyte deployment is based on K8s, and all resources (source, destination and Airbyte) are in AWS us-east-1. I have been trying for several days without any luck; two things come to mind:
    1. It seems that none of the resources are near their memory/CPU limits, yet Airbyte moves only around 30 GB/hour. The fact that it reads only 10,000 records at a time looks like it should be configurable to a higher rate. Any way to do that?
    2. It seems that a random glitch in either source or destination will cause the entire attempt to fail. The odds of that are high on an approximately 100-hour transfer, so it can never finish (it usually fails after reading somewhere between 20 GB and 200 GB; the failure is sometimes due to the source and sometimes due to the destination). Any way to let it continue from the same point rather than restart from scratch? (Incremental sync - deduped + history - single table)

    Walker Philips

    12/07/2022, 9:44 PM
    Does anyone know how to connect to an Azure SQL Server instance that requires ActiveDirectoryPassword? I have tried using the MSSQL connector, but have had no luck with the various combinations I have been trying. The ODBC connection string that works in Python is 'DRIVER={ODBC Driver 18 for SQL Server};server='+server+';UID='+username+';PWD='+ password+';Authentication=ActiveDirectoryPassword;'
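For reference, a minimal pyodbc sketch of the same ActiveDirectoryPassword connection string, assuming ODBC Driver 18 for SQL Server is installed; server, database, user, and password are placeholders.

```python
# Standalone check of the Azure AD password auth outside Airbyte, using the same
# connection string that works in Python. Server/database/user/password are placeholders.
import pyodbc

server = "myserver.database.windows.net"    # placeholder
username = "user@mytenant.onmicrosoft.com"  # placeholder
password = "..."                            # placeholder

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"server={server};database=mydb;"
    f"UID={username};PWD={password};"
    "Authentication=ActiveDirectoryPassword;"
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```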

    Sam Stoelinga

    12/07/2022, 9:52 PM
    I think I broke my local install. The UI shows "Oops! Something went wrong… Unknown error occurred", and these logs are from the Chrome console. This started happening after I clicked "update all connectors". I can just nuke and retry, but sharing here just in case.
    Copy code
    manifest.json:1          GET <http://localhost:8000/manifest.json> 401 (Unauthorized)
    manifest.json:1 Manifest: Line: 1, column: 1, Syntax error.
    apiOverride.ts:57          POST <http://localhost:8000/api/v1/web_backend/connections/list> 500 (Internal Server Error)
    (anonymous) @ apiOverride.ts:57
    d @ regeneratorRuntime.js:86
    (anonymous) @ regeneratorRuntime.js:66
    (anonymous) @ regeneratorRuntime.js:117
    r @ asyncToGenerator.js:3
    s @ asyncToGenerator.js:25
    (anonymous) @ asyncToGenerator.js:32
    (anonymous) @ asyncToGenerator.js:21
    (anonymous) @ apiOverride.ts:24
    ae @ AirbyteClient.ts:3276
    value @ WebBackendConnectionService.ts:17
    (anonymous) @ useConnectionHook.tsx:243
    fetchFn @ query.js:298
    l @ retryer.js:95
    c @ retryer.js:156
    t.fetch @ query.js:330
    n.fetchOptimistic @ queryObserver.js:180
    (anonymous) @ useBaseQuery.js:84
    A @ useQuery.js:7
    o @ useSuspenseQuery.ts:19
    O @ useConnectionHook.tsx:243
    le @ AllConnectionsPage.tsx:23
    sa @ react-dom.production.min.js:157
    Gs @ react-dom.production.min.js:267
    Au @ react-dom.production.min.js:250
    Ou @ react-dom.production.min.js:250
    Cu @ react-dom.production.min.js:250
    _u @ react-dom.production.min.js:243
    (anonymous) @ react-dom.production.min.js:123
    t.unstable_runWithPriority @ scheduler.production.min.js:18
    Vi @ react-dom.production.min.js:122
    Ki @ react-dom.production.min.js:123
    M @ scheduler.production.min.js:16
    b.port1.onmessage @ scheduler.production.min.js:12
    react_devtools_backend.js:4012 Error: Internal Server Error: Duplicate key 99a5b8cc-01ce-4a02-946c-a33d91b3d1b2 (attempted merging values io.airbyte.config.ActorCatalogFetchEvent@3825d1c9[id=<null>,actorId=99a5b8cc-01ce-4a02-946c-a33d91b3d1b2,actorCatalogId=5be6c2c9-692c-467b-a052-56ccc9915c7c,configHash=<null>,connectorVersion=<null>,createdAt=1670444899] and io.airbyte.config.ActorCatalogFetchEvent@4e9e2fd1[id=<null>,actorId=99a5b8cc-01ce-4a02-946c-a33d91b3d1b2,actorCatalogId=e1f51180-5fb3-4f7d-b509-17b6dbcf0675,configHash=<null>,connectorVersion=<null>,createdAt=1670444899])
        at apiOverride.ts:107:9
        at d (regeneratorRuntime.js:86:17)
        at Generator._invoke (regeneratorRuntime.js:66:24)
        at Generator.next (regeneratorRuntime.js:117:21)
        at r (asyncToGenerator.js:3:20)
        at s (asyncToGenerator.js:25:9)
    overrideMethod @ react_devtools_backend.js:4012
    onError @ query.js:356
    h @ retryer.js:67
    (anonymous) @ retryer.js:132
    Promise.catch (async)
    l @ retryer.js:116
    c @ retryer.js:156
    t.fetch @ query.js:330
    n.fetchOptimistic @ queryObserver.js:180
    (anonymous) @ useBaseQuery.js:84
    A @ useQuery.js:7
    o @ useSuspenseQuery.ts:19
    O @ useConnectionHook.tsx:243
    le @ AllConnectionsPage.tsx:23
    sa @ react-dom.production.min.js:157
    Gs @ react-dom.production.min.js:267
    Au @ react-dom.production.min.js:250
    Ou @ react-dom.production.min.js:250
    Cu @ react-dom.production.min.js:250
    _u @ react-dom.production.min.js:243
    (anonymous) @ react-dom.production.min.js:123
    t.unstable_runWithPriority @ scheduler.production.min.js:18
    Vi @ react-dom.production.min.js:122
    Ki @ react-dom.production.min.js:123
    M @ scheduler.production.min.js:16
    b.port1.onmessage @ scheduler.production.min.js:12
    2react_devtools_backend.js:4012 Error: Internal Server Error: Duplicate key 99a5b8cc-01ce-4a02-946c-a33d91b3d1b2 (attempted merging values io.airbyte.config.ActorCatalogFetchEvent@3825d1c9[id=<null>,actorId=99a5b8cc-01ce-4a02-946c-a33d91b3d1b2,actorCatalogId=5be6c2c9-692c-467b-a052-56ccc9915c7c,configHash=<null>,connectorVersion=<null>,createdAt=1670444899] and io.airbyte.config.ActorCatalogFetchEvent@4e9e2fd1[id=<null>,actorId=99a5b8cc-01ce-4a02-946c-a33d91b3d1b2,actorCatalogId=e1f51180-5fb3-4f7d-b509-17b6dbcf0675,configHash=<null>,connectorVersion=<null>,createdAt=1670444899])
        at apiOverride.ts:107:9
        at d (regeneratorRuntime.js:86:17)
        at Generator._invoke (regeneratorRuntime.js:66:24)
        at Generator.next (regeneratorRuntime.js:117:21)
        at r (asyncToGenerator.js:3:20)
        at s (asyncToGenerator.js:25:9)

    Emma Forman Ling

    12/07/2022, 10:35 PM
    Hey y'all! Another question about building my own destination connector. How does Airbyte expect destinations to behave in the case of partial failure in full refresh overwrite mode? Is it assumed that the destination tables are deleted before any writes begin? These docs suggest that partial success from the emitted message would result in the next sync picking up where that one left off. If the data is deleted and the failure happens before anything is written, the destination could be in an empty state. Is that valid? And if it fails after one record is written, and the next time it picks up after that record but does a full refresh with overwrite, it would end up missing the first record, right? That seems like an incorrect state for the destination tables to be in…
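One common way destinations avoid the partial-failure states described above is to write the full refresh into a temporary table and swap it into place only after the sync completes; a hedged sketch of that pattern (not a statement of what Airbyte requires), using sqlite3 purely for self-containment and hypothetical table names.

```python
# Hypothetical "write to tmp, swap at the end" pattern for full refresh overwrite,
# sketched with sqlite3 for self-containment. A failure mid-sync leaves the old
# table untouched instead of an empty or partially written one.
import sqlite3

def full_refresh_overwrite(conn: sqlite3.Connection, records):
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS events_tmp")
    cur.execute("CREATE TABLE events_tmp (id INTEGER, payload TEXT)")
    for rec in records:  # stream records into the tmp table
        cur.execute("INSERT INTO events_tmp VALUES (?, ?)", (rec["id"], rec["payload"]))
    # Only after every record has landed do we replace the live table.
    cur.execute("DROP TABLE IF EXISTS events")
    cur.execute("ALTER TABLE events_tmp RENAME TO events")
    conn.commit()

conn = sqlite3.connect(":memory:")
full_refresh_overwrite(conn, [{"id": 1, "payload": "a"}, {"id": 2, "payload": "b"}])
print(conn.execute("SELECT count(*) FROM events").fetchone())
```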

    Seowan Lee

    12/08/2022, 3:21 AM
    Hello, I'm trying to deploy airbyte in AWS EKS. However, I got the error below.
    Copy code
    helm install --values /Users/bagelcode/Workspace/deploy-core-eks/values/airbyte.dev.yaml airbyte-dev-adhoc airbyte/airbyte --debug                1 err | data-core-eks-dev kube | 11:58:31 AM
    install.go:173: [debug] Original chart version: ""
    install.go:190: [debug] CHART PATH: /Users/bagelcode/Library/Caches/helm/repository/airbyte-0.40.40.tgz
    
    client.go:282: [debug] Starting delete for "airbyte-dev-admin" ServiceAccount
    client.go:122: [debug] creating 1 resource(s)
    client.go:282: [debug] Starting delete for "airbyte-dev-adhoc-airbyte-env" ConfigMap
    client.go:311: [debug] configmaps "airbyte-dev-adhoc-airbyte-env" not found
    client.go:122: [debug] creating 1 resource(s)
    client.go:282: [debug] Starting delete for "airbyte-dev-adhoc-airbyte-secrets" Secret
    client.go:311: [debug] secrets "airbyte-dev-adhoc-airbyte-secrets" not found
    client.go:122: [debug] creating 1 resource(s)
    client.go:282: [debug] Starting delete for "airbyte-dev-adhoc-airbyte-bootloader" Pod
    client.go:311: [debug] pods "airbyte-dev-adhoc-airbyte-bootloader" not found
    client.go:122: [debug] creating 1 resource(s)
    client.go:491: [debug] Watching for changes to Pod airbyte-dev-adhoc-airbyte-bootloader with timeout of 5m0s
    client.go:519: [debug] Add/Modify event for airbyte-dev-adhoc-airbyte-bootloader: ADDED
    client.go:578: [debug] Pod airbyte-dev-adhoc-airbyte-bootloader pending
    client.go:519: [debug] Add/Modify event for airbyte-dev-adhoc-airbyte-bootloader: MODIFIED
    client.go:580: [debug] Pod airbyte-dev-adhoc-airbyte-bootloader running
    Error: failed pre-install: timed out waiting for the condition
    helm.go:81: [debug] failed pre-install: timed out waiting for the condition
    I cannot figure out what the problem is. I found a Slack archive thread describing the same problem, but with no solution. Could you help me troubleshoot this? https://airbytehq.slack.com/archives/C021JANJ6TY/p1665660532221119?thread_ts=1665656538.448909&cid=C021JANJ6TY

    Piyush

    12/08/2022, 4:02 AM
    Hey everyone - pardon me for asking a naive question here. I'm curious to learn whether I can plug my custom logic or scripts into Airbyte, similar to DAGs/workflows on Airflow - for example, a simple use case where, based on the response from one API, I need to call other APIs (a workflow), consolidate and enrich the data, and then write it to a data lake (S3) after processing from the DAG.
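Airbyte itself focuses on moving data rather than arbitrary workflow steps, so this kind of logic is usually orchestrated around it, e.g. from Airflow; a hedged sketch assuming the apache-airflow-providers-airbyte package (Airflow 2.4+ for the `schedule` argument) and an airbyte_default Airflow connection, with the Airbyte connection UUID and the enrichment step as placeholders.

```python
# Hypothetical Airflow DAG: trigger an Airbyte sync, then run custom enrichment
# and land the result in S3. Connection IDs, bucket, and enrich step are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.airbyte.operators.airbyte import AirbyteTriggerSyncOperator

def enrich_and_write_to_s3(**_):
    ...  # placeholder: call other APIs, consolidate, write to s3://my-data-lake/

with DAG("airbyte_then_enrich", start_date=datetime(2022, 12, 1),
         schedule="@daily", catchup=False) as dag:
    sync = AirbyteTriggerSyncOperator(
        task_id="trigger_airbyte_sync",
        airbyte_conn_id="airbyte_default",  # assumed Airflow connection to the Airbyte API
        connection_id="00000000-0000-0000-0000-000000000000",  # placeholder Airbyte connection UUID
        asynchronous=False,
    )
    enrich = PythonOperator(task_id="enrich_and_write", python_callable=enrich_and_write_to_s3)
    sync >> enrich
```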