# ask-community-for-troubleshooting
  • a

    Alfie Mountfield

    03/21/2022, 11:10 AM
    Is there a way to put a time-frame on a sync? E.g. only pull data from the past week.
    ✅ 1
  • m

    Miquel Rius

    03/21/2022, 11:58 AM
    Hi, are all Airbyte versions still alpha? I see all of them tagged as pre-release.
    ✅ 1
  • t

    Tan Ho

    03/21/2022, 12:22 PM
    Hi all, I’m trying to write a custom destination. Is there any way to get the current job id in the write function? Ideally I think Airbyte should include the running context info (connection id, job id) in the write method.
    Copy code
    from typing import Any, Iterable, Mapping
    from airbyte_cdk.destinations import Destination
    from airbyte_cdk.models import AirbyteMessage, ConfiguredAirbyteCatalog

    class CustomDestination(Destination):
        def write(
            self, config: Mapping[str, Any], configured_catalog: ConfiguredAirbyteCatalog, input_messages: Iterable[AirbyteMessage]
        ) -> Iterable[AirbyteMessage]:
            # How to get the current job id of the current connection sync?
            ...
    👀 1
    ✅ 1
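    Not an answer from the thread, but for context: the write() signature above only receives the config, catalog and messages, so job/connection ids are not passed in by the CDK. One workaround sometimes used is to read them from the connector container's environment. A minimal sketch; the variable names below are assumptions and depend on what your Airbyte worker actually injects, so inspect os.environ in a test sync first:
    import os

    def _sync_context() -> dict:
        # Assumed variable names -- verify which env vars your Airbyte version sets on connector containers.
        return {
            "job_id": os.environ.get("WORKER_JOB_ID"),
            "attempt_id": os.environ.get("WORKER_ATTEMPT_ID"),
            "connection_id": os.environ.get("WORKER_CONNECTION_ID"),
        }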
  • t

    Thomas Ebermann

    03/21/2022, 7:34 PM
    hey guys, i wanted to make the facebook-marketing connector work with api v13, so i changed the connector to pull the facebook_marketing lib from a forked lib and have now built the whole airbyte docker image again with gradle. how do i now deploy it not just on my local dev box but on my prod box? this is probably more a small docker question than anything else, e.g. I am using the official https://github.com/airbytehq/airbyte/blob/master/docker-compose.yaml to start it on prod, i guess i need to push my local image to a registry somewhere and then have my own docker-compose manifest that pulls it from there? (edited)
    👀 1
  • i

    Ivan Zhabin

    03/22/2022, 7:20 AM
    Hi guys. Can someone suggest which Jira source report I should use to get the status transitions for an issue?
  • j

    Johan Strand

    03/22/2022, 8:49 AM
    Followed the guide to deploy on GCP without problems. Then I try to create an SSH tunnel with
    gcloud --project=$PROJECT_ID beta compute ssh $INSTANCE_NAME -- -L 8000:localhost:8000 -N -f
    I get
    ssh: connect to host xx.xx.xx.xx port 22: Operation timed out
    Anything else that needs to be done? Also tried changing to port 22
    👀 1
  • s

    Saket Singh

    03/23/2022, 6:08 AM
    Hey guys, I am facing a cap on compute when it comes to moving large data on a very frequent schedule (5 minutes). One way I am thinking of distributing the load is having multiple Airbyte deployments on different instances that share the same Postgres DB. I will distribute connections to both places accordingly. Is this a good way to go about it? Will there be conflicts in picking jobs if both Airbyte instances share a DB?
    ✅ 1
  • a

    Ashish

    03/23/2022, 11:01 AM
    Hi, is there a way to inject a custom script or code into the lifecycle of connectors? Use case: I need to update metadata in a centralised metastore once the sync job completes with the S3 destination.
    ✅ 1
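    Not from the thread, but the pattern usually suggested for this is to run the "post-sync" step outside the connector: watch the sync through the Airbyte Config API and update the metastore once the job reaches a terminal state. A rough sketch in Python; the base URL, endpoint path and response shape are assumptions to check against your Airbyte version:
    import time
    import requests

    AIRBYTE_API = "http://localhost:8000/api/v1"   # assumption: local OSS deployment
    CONNECTION_ID = "<your-connection-id>"          # placeholder

    def latest_sync_job():
        # Assumed endpoint: POST /jobs/list, filtered to sync jobs of one connection.
        resp = requests.post(f"{AIRBYTE_API}/jobs/list",
                             json={"configTypes": ["sync"], "configId": CONNECTION_ID})
        resp.raise_for_status()
        jobs = resp.json().get("jobs", [])
        return jobs[0]["job"] if jobs else None

    def update_metastore(job):
        # Placeholder: push whatever metadata your centralised metastore needs here.
        print(f"sync job {job['id']} finished with status {job['status']}")

    while True:
        job = latest_sync_job()
        if job and job["status"] in ("succeeded", "failed", "cancelled"):
            if job["status"] == "succeeded":
                update_metastore(job)
            break
        time.sleep(30)   # keep polling until the current sync reaches a terminal state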
  • m

    Martin Carlsson

    03/23/2022, 12:45 PM
    Good connector to teach Airbyte? Tomorrow I'm hosting a Snowflake introduction evening at my company, and I also want to show Airbyte. However, I would like to avoid setting up a source for Airbyte myself, or having the participants do it. Are there any Airbyte sources that don't require any setup - that just work after
    docker-compose up
    ✅ 1
  • c

    Chris Stout

    03/23/2022, 2:37 PM
    I was able to get my first postgres source working after going back to version 0.4.4 to get around some role/grantee issues. One thing I noticed, though, is that my integers are converted to floats, and timestamps are converted to strings with precision loss. Has anyone else had this problem or am I missing something? I didn't see a way to change the schema type in the source settings, and it looks like airbyte supports both integer and timestamp: https://docs.airbyte.com/understanding-airbyte/supported-data-types. Is this just a current issue with the source connector or some other fundamental problem with the core and/or JSON Schema?
    ✅ 1
  • v

    Vladimir Remar

    03/23/2022, 3:14 PM
    Hi guys, I have a question. I was able to deploy using https://github.com/airbytehq/airbyte/tree/master/charts/airbyte, but is it normal to have a login screen? I mean, I was expecting the other one, not this one.
    👀 1
  • d

    David Copeland

    03/23/2022, 5:23 PM
    Hi all, I'm a BI Analyst just trying out Airbyte for the first time and excited by it, as I'm a one-person team in a small business with multiple data sources 🙂 I have connected a source and I'm now trying to use my local MS SQL Server as a destination, but I just keep getting this error. I have checked that the server port is enabled and services are running, and added an allow-connection rule to the Windows firewall. Any other ideas?
    ✅ 1
  • i

    Ignacio Alasia

    03/23/2022, 6:07 PM
    Hello there, I'm testing the local Airbyte version before moving on to the Kubernetes version. Currently I have it running in Docker, along with a Postgres image as source and Snowflake as destination. Here is how I configured it from pgAdmin, following the documentation.
    Copy code
    --CREATE USER
    CREATE USER nacho PASSWORD 'MyPassword';
    
    --Grant to user
    GRANT USAGE ON SCHEMA public TO nacho;
    
    --More grants
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO nacho;
    
    -- Allow user to see tables created in the future
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO nacho;
    
    --Replication slots
    --This Airbyte user for your instance needs to be granted REPLICATION and LOGIN permissions.
    --You can create a role with
    CREATE ROLE rsrole REPLICATION LOGIN;
    
    --And grant that role to the user
    GRANT rsrole TO nacho;
    
    --CREATE REPLICATION SLOT
    SELECT pg_create_logical_replication_slot('rs_nacho', 'pgoutput');
    
    --DROP REPLICATION SLOT
    --select pg_drop_replication_slot('rs_nacho');
    
    --CHECK REPLICATION SLOT LAG
    SELECT redo_lsn, slot_name, restart_lsn,
    round((redo_lsn-restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind, database, active
    FROM pg_control_checkpoint(), pg_replication_slots;
    
    --CREATE RANDOM TABLE
    CREATE TABLE stage(
    IDESTADE SERIAL PRIMARY KEY,
    name VARCHAR(45) NOT NULL,
    address VARCHAR(45) NOT NULL
    );
    
    --inserts
    INSERT INTO stage (name, address) VALUES ('pepito','bsas');
    INSERT INTO stage (name, address) VALUES ('rolinga','bsas');
    --
    ALTER TABLE stage REPLICA IDENTITY DEFAULT;
    --create Publication
    CREATE PUBLICATION airbyte_test FOR TABLE stage;
    I managed to make the connection between source and destination successfully; it creates the normalization table, but when it comes to pulling the data it throws the following error:
    Copy code
    "2022-03-23 ​​17:26:41 normalization > 17:26:41 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
    2022-03-23 ​​17:26:41 INFO i.a.w.DefaultNormalizationWorker(run):69 - Normalization executed in 23 seconds.
    2022-03-23 ​​17:26:41 INFO i.a.w.t.TemporalAttemptExecution(get):131 - Stopping cancellation check scheduling...
    2022-03-23 ​​17:26:41 INFO i.a.w.t.TemporalUtils(withBackgroundHeartbeat):235 - Stopping temporary heartbeat...
    2022-03-23 ​​17:26:41 INFO i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.errors: $.plugin: is not defined in the schema and the schema does not allow additional properties, $.publication: is not defined in the schema and the schema does not allow additional properties, $.replication_slot: is not defined in the schema and the schema does not allow additional properties, $.method: does not have a value in the enumeration [Standard], $.method: must be a constant value Standard
    2022-03-23 ​​17:26:41 INFO i.a.v.j.JsonSchemaValidator(test):56 - JSON schema validation failed.
    errors: $.method: does not have a value in the enumeration [Standard]"
    Could someone shed a little light here?
    👀 1
  • j

    Jyothi

    03/23/2022, 7:27 PM
    @here: Airbyte version: 0.35.37-alpha. When using a PostgreSQL connection, the connection succeeds but doesn't show schema objects... Is there any special permission we should give to the schema or to tables in the schema for them to be visible in the Airbyte connection, or is this an issue with the Airbyte product specific to the PostgreSQL source? Any help would be appreciated.
  • n

    Nirmit Jain

    03/24/2022, 6:23 AM
    hey, how can I override connector images to pull from some other source rather than using the public cloud images, while deploying with Helm?
    ✅ 1
    👀 1
  • n

    Naphat Theerawat

    03/24/2022, 8:36 AM
    Hi Airbyte Team, I am having an issue where my source table's schema has been updated, but the change is not reflected on the destination table. What is the current workaround for this? Do I manually alter the destination schema, or should I get rid of the connection, delete the Airbyte tables, and resync? I saw that my issue is pretty much related to https://github.com/airbytehq/airbyte/issues/1967 but I don't see the workaround for the solution. Kindly advise. Thank you.
  • s

    Shubham Pinjwani

    03/24/2022, 8:44 AM
    Is there a limit to the number of connections? Will there be any issue (something like efficiency or anything else) if I create many connections, like 1,000,000 or so?
    ✅ 1
  • s

    Shubham Pinjwani

    03/24/2022, 8:46 AM
    Will there be any efficiency or other issue if, for every sync, I create a new connection, perform the sync, and then delete the connection? The thing is, on the destination side the schema changes according to the source if I do something like this.
    ✅ 1
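    Not from the thread, but the create → sync → delete flow described above is usually scripted against the Airbyte Config API. A rough sketch; the endpoint paths, payload fields and port are assumptions to verify against your version's API docs:
    import time
    import requests

    API = "http://localhost:8000/api/v1"   # assumption: local OSS deployment

    def one_off_sync(source_id: str, destination_id: str, sync_catalog: dict) -> str:
        # 1. Create a throwaway connection (sync_catalog would come from schema discovery).
        conn = requests.post(f"{API}/connections/create", json={
            "sourceId": source_id,
            "destinationId": destination_id,
            "syncCatalog": sync_catalog,
            "status": "active",
        }).json()
        connection_id = conn["connectionId"]
        # 2. Trigger a manual sync and wait for the job to reach a terminal state.
        job = requests.post(f"{API}/connections/sync",
                            json={"connectionId": connection_id}).json()["job"]
        status = "running"
        while status not in ("succeeded", "failed", "cancelled"):
            time.sleep(30)
            status = requests.post(f"{API}/jobs/get", json={"id": job["id"]}).json()["job"]["status"]
        # 3. Only delete the connection once the sync has finished.
        requests.post(f"{API}/connections/delete", json={"connectionId": connection_id})
        return status
    One trade-off to weigh: incremental sync state is kept per connection, so recreating the connection every time means every run behaves like a first full sync.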
  • d

    David Copeland

    03/24/2022, 9:48 AM
    Jira connector URL - apologies if there is an answer to this; I searched previous chats and found that self-hosted custom domains aren't yet supported, but I am having this issue with our cloud-hosted Jira. It does not accept our domain ending in jira.com?
    ✅ 1
  • h

    Harry.C

    03/24/2022, 1:01 PM
    Hi. I'm new to Airbyte. I notice that the connections in the Airbyte GUI are unnamed. Am I missing anything?
    ✅ 1
  • m

    Miquel Rius

    03/24/2022, 2:44 PM
    Hi, one question related to notifications. Is there a way to customize the output of a notification sent to Slack from Airbyte? The current msg is:
    Copy code
    Your connection from Google Sheets version 0.2.9 to Redshift version 0.3.23 succeeded
    This was for sync started on Thursday, March 24, 2022 at 8:40:34 AM Coordinated Universal Time, running for 30 seconds.
    I would like to add the name of the connection, and change the URL, since it says localhost:8080 and doesn't redirect to the server installed on EC2.
    ✍️ 1
    ✅ 1
  • e

    Emil Ordonez

    03/24/2022, 5:53 PM
    Hi Everyone, I have one question and I'll try to give you some context: I've recently started my trial on Airbyte Cloud, and I've also taken a look at demo.airbyte. My question is: why can I find Shopify and Amazon SP API connectors on the demo site and not on the cloud trial site?
    ✅ 1
  • a

    Andrei Batomunkuev

    03/24/2022, 6:48 PM
    Hello! I have a question regarding adding custom dbt transformations to Airbyte. I have set some sources as Incremental Dedup + Basic Normalization in Airbyte. My question is: knowing that my source uses incremental dedup, do I need to configure my custom dbt models as incremental? For example, I have an orders source as Incremental Dedup. In my custom dbt project, I have it as a source. Then, orders go through each layer in dbt (base/staging/marts). In the base layer, I have set the materialization to "ephemeral", and staging to "view". Do I need to set the materialization in marts to "incremental"? Or do I need to set all layers to incremental materialization, since the Airbyte source is incremental dedup?
    ✅ 1
  • a

    Arvi

    03/25/2022, 1:55 AM
    Hi there, is there a way to read data from a webhook in Airbyte?
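    Not from the thread: Airbyte sources pull data on a schedule rather than accepting pushed events, so webhook payloads usually get staged somewhere Airbyte can read from (a database, object storage, a file). A minimal sketch of the receiving side, using only the standard library; the file path and port are arbitrary placeholders:
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class WebhookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Append each webhook payload as one line of JSON to a staging file an Airbyte source can pick up later.
            length = int(self.headers.get("Content-Length", 0))
            payload = self.rfile.read(length)
            with open("webhook_events.jsonl", "ab") as f:
                f.write(payload + b"\n")
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()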
  • r

    RC

    03/25/2022, 8:16 AM
    Hi. I am new here. I am trying the example in the Quickstart, but “Local JSON” is not available as the destination type. Did anyone run into the same problem?
    ✅ 1
  • b

    Benoit Hugonnard

    03/25/2022, 5:20 PM
    Hello 👋 Quick question: is it possible for Airbyte to work on K8S without a shared mount? I’m having difficulties getting approval from my Platform team; they say that mounted volumes are bad practice and make upgrades complicated... Thanks 🙏
    👍 1
    ✅ 1
  • s

    saad ab

    03/26/2022, 7:42 PM
    Hi, I am getting an error when I run
    docker-compose up
    , can anyone help me out please?
    WARNING: The DB_DOCKER_MOUNT variable is not set. Defaulting to a blank string.
    ERROR: Duplicate mount points: [data:.:rw, workspace:.:rw, .:.:rw]
  • m

    Marcus Vinicius Silva De Azevedo

    03/27/2022, 10:13 PM
    Hi, any chance of invoking a job via an HTTP request, other than through Airflow?
    ✅ 1
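    Not from the thread, but for reference: the open-source deployment exposes an HTTP Config API (the same one the web UI uses), so a sync can be kicked off with a single POST from any scheduler or script, not only Airflow. A short sketch; the endpoint path, port and response shape are assumptions to check against your version:
    import requests

    AIRBYTE_API = "http://localhost:8000/api/v1"   # assumption: local OSS deployment
    CONNECTION_ID = "<your-connection-id>"          # placeholder

    # Assumed endpoint: POST /connections/sync triggers a manual sync and returns the created job.
    resp = requests.post(f"{AIRBYTE_API}/connections/sync", json={"connectionId": CONNECTION_ID})
    resp.raise_for_status()
    print("started job:", resp.json()["job"]["id"])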
  • b

    Benoit Hugonnard

    03/28/2022, 8:09 AM
    It’s me again 👋 Still working with my platform team on the K8S deployment. I’d like to know if you have the goal of making K8S production-ready soon? Is it something you’re working on? This question is based on several aspects:
    • Are shared volumes something you aim to take out of the deployment?
    • All services default to a scale of 1; can they scale out?
    • Can you read access keys from the pod’s profile instead?
    Thank you for your time 🙂
    ✅ 1
  • a

    Alexandre Chouraki

    03/28/2022, 9:43 AM
    Hi Airbyte team! A few conceptual questions 🙂 Basically, I’m trying to send data from N sources to the same destination, in the same namespace and tables (all of my sources have the same format, it’s just the data inside that’s different, and I’m trying to gather everything in the same place). This is what I understand, but I kinda need guidance...
    • Overwrite can’t work, because if I use it, the last sync will erase all the previous ones, so instead of having data from my N sources I’ll only have one.
    • Incremental, with a “created_at” cursor field, might not work either, right? Because all of my sources will use the same cursor field in the destination, but if I have source A with data at t=1 and 3, and then source B with data at t=2, and A syncs before B, I’ll lose B’s data?
    • Incremental, with an “id” cursor field, might work though?
    Would gladly take any input you might have on this one 🙂 (otherwise I’ll do copies to N destinations and just use dbt to merge everything together, but I’d rather avoid it to keep my database as clean and simple as possible)
    ✅ 1