# getting-started

    matt_elgazar

    12/05/2024, 6:58 PM
    Hi, how do I access the settings and select streams from meltano.yml in the tap itself? In the tap-mongodb codebase there is a part that hits all collections in the database, but this is unnecessary if I'm only running a select on one collection:
    Copy code
    for collection in self.database.list_collection_names(authorizedCollections=True, nameOnly=True):
       ...
    I was thinking I could add a configuration option for this behavior:
    Copy code
    if self.discovery_mode == 'select':
        collections = <get current selected streams>
    else:
        collections = self.database.list_collection_names(authorizedCollections=True, nameOnly=True)
    I can force it in a way that’s probably super bad practice and wouldn’t generalize across different env configurations:
    Copy code
    selected_collections = yaml.safe_load(open('meltano.yml')).get('plugins').get('extractors')[0].get('select')
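    A minimal sketch of the catalog-based alternative: Meltano passes the tap a Singer catalog (via --catalog) with the select rules already applied, so the tap can read the selected streams from there instead of parsing meltano.yml. The helper below is hypothetical and relies only on the Singer catalog format, where each stream's top-level metadata entry (empty breadcrumb) carries the selected flag:
    Copy code
    import json

    def selected_collections(catalog_path: str) -> set:
        """Return the tap_stream_ids marked as selected in a Singer catalog file."""
        with open(catalog_path) as f:
            catalog = json.load(f)
        selected = set()
        for stream in catalog.get("streams", []):
            for entry in stream.get("metadata", []):
                # the stream-level metadata entry has an empty breadcrumb
                if entry.get("breadcrumb") == [] and entry.get("metadata", {}).get("selected"):
                    selected.add(stream["tap_stream_id"])
        return selected
    Singer SDK taps should also expose the parsed catalog on the tap instance (input_catalog), which would avoid re-reading the file.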

    Andres Felipe Huertas Suarez

    12/11/2024, 7:44 AM
    Hi! I'm having a problem installing the target-parquet loader. I have done this in the past for other repos, but now I seem to have some problems with the build of pyarrow (not quite sure what is happening). Running uvx meltano add loader target-parquet yields the following error; it has something to do with the pyarrow wheels, maybe some conflicting versions. I tried using uv pip install pyarrow and uv pip install pyarrow==14.0.0. I'm using Python 3.9, as that's the supported version for the tap I want to use (tap-shopify). Any ideas what is wrong? This is what the uv.lock file looks like:
    Copy code
    version = 1
    requires-python = "==3.9.20"
    
    [[package]]
    name = "dataops"
    version = "0.1.0"
    source = { virtual = "." }
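    One hedged workaround, assuming the failure is pyarrow being built from source on Python 3.9: pin a pyarrow version that ships a prebuilt wheel directly in the loader's pip_url (Meltano accepts multiple requirement specifiers there). The variant shown is a guess:
    Copy code
    loaders:
    - name: target-parquet
      variant: automattic                       # assumption: use whichever variant you added
      pip_url: target-parquet pyarrow==14.0.0   # pin pyarrow alongside the plugin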

    Andres Felipe Huertas Suarez

    12/11/2024, 1:45 PM
    I'm also having a problem now where I run uvx meltano install and the installation fails. It was working before, and now I don't quite understand what is going wrong. It seems to be trying to install tap-awin using a Python 3.13 env that I don't know where is coming from. I have the tap in a local repo, and its pyproject doesn't point to Python 3.13, but:
    Copy code
    [tool.poetry.dependencies]
    python = "<3.10,>=3.6.2"
    requests = "^2.25.1"
    singer-sdk = "^0.3.16"
    and my Meltano project should be running on Python 3.9 (that is what I see when I do uv run python --version). Any ideas here? Also, if I go directly to the tap repo and run poetry install, it works without issues. Clues? Thanks! 🙂
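    If a plugin venv is being built with an unexpected interpreter, Meltano's python setting (project-wide or per plugin) pins which interpreter plugin virtualenvs use. A sketch, assuming python3.9 is on PATH:
    Copy code
    # meltano.yml
    python: python3.9          # project-wide default for plugin venvs
    plugins:
      extractors:
      - name: tap-awin
        python: python3.9      # per-plugin override, if needed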

    josh_lloyd

    12/13/2024, 8:18 PM
    On meltano.com your “join slack” link is broken. This one:
    https://meltano.slack.com/join/shared_invite/zt-2mslb6jbl-5n1DlD_1mFudiJLGBWqA2Q#/shared-invite/error

    Alexander Trauzzi

    12/16/2024, 6:27 PM
    Hello hello! I have two questions which hopefully someone here can help me out with 🙂 First: is it possible to have Meltano automatically update a destination Postgres target schema if the inbound data includes new columns (no need to worry about removals)? Second: is there any way to configure the Meltano backend to use Postgres? I'm unable to find any documentation on the systemdb option and whether the URI it accepts can include a Postgres connection string...
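    On the second question: the system database is configured through Meltano's database_uri setting, which accepts a PostgreSQL connection string. A minimal sketch (credentials hypothetical):
    Copy code
    # via environment variable
    export MELTANO_DATABASE_URI="postgresql://user:password@host:5432/meltano"
    # or via the CLI
    meltano config meltano set database_uri "postgresql://user:password@host:5432/meltano"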

    Samson Eromonsei

    01/12/2025, 11:57 PM
    Hello everyone, I'm new to Meltano, so apologies if this isn't the right channel for my question. I'm facing an issue with a tap-rest-api-msdk extractor where I'm encountering a JSON decoding error. The API endpoint I'm using points to a CSV file hosted on AWS API Gateway. When I tested the endpoint in Postman, it returned a 200 OK status and successfully provided the text data. However, when I wrote my first meltano.yml file to replicate the same operation, I ran into an error related to JSON decoding. The API is supposed to return a CSV file, and I've specified the Content-Type in the headers as text/csv, but I'm unsure if I've configured it correctly. Here's the error I'm seeing below. Any guidance or suggestions to resolve this would be greatly appreciated! Thank you!
    File "site-packages/singer_sdk/tap_base.py", line 134, in streams
    for stream in self.load_streams():
    File "site-packages/singer_sdk/tap_base.py", line 358, in load_streams
    for stream in self.discover_streams():
    File "site-packages/tap_rest_api_msdk/tap.py", line 494, in discover_streams
    schema = self.get_schema()
    File "site-packages/tap_rest_api_msdk/tap.py", line 615, in get_schema
    extract_jsonpath(records_path, input=_json())
    File "site-packages/requests/models.py", line 978, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
    requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
    Here is my first-ever meltano.yml; not sure if I am using a wrong version or doing the wrong thing, I just followed the basic instructions:
    version: 1
    default_environment: dev
    project_id: fcde5f5-df01-438f-9b43-dd0e0f50e48a
    environments:
    - name: dev
      config:
        plugins:
          extractors:
          - name: tap-rest-api-msdk
            config:
              api_url: https://api.stormvistawxmodels.com/v1/model-data/ecmwf-eps/20250109/12
              streams:
              - name: ercot-solargen-forecast
                path: ~/home/file.csv
                headers:
                  content-type: text/csv
                # api_keys:
                #   X-API-KEY: <your-api-key>
          loaders:
          - name: target-azureblobstorage
            config:
              account_name: dlsdevgbaz1527foan1st
              container_name: xxxxxxxxxxxx
    - name: staging
    - name: prod
    plugins:
      extractors:
      - name: tap-rest-api-msdk
        variant: widen
        pip_url: tap-rest-api-msdk
      loaders:
      - name: target-azureblobstorage
        variant: shrutikaponde-vc
        pip_url: git+https://github.com/shrutikaponde-vc/target-azureblobstorage.git
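    For context, the traceback above dies inside requests' .json(), which suggests tap-rest-api-msdk only parses JSON response bodies; a text/csv header won't change that. For a CSV file served over HTTP, a CSV-aware tap may be a better fit. A hedged sketch using tap-spreadsheets-anywhere (exact path/pattern semantics and auth-header support should be verified against its README):
    Copy code
    extractors:
    - name: tap-spreadsheets-anywhere
      variant: ets
      pip_url: git+https://github.com/ets/tap-spreadsheets-anywhere.git
      config:
        tables:
        - path: https://api.stormvistawxmodels.com/v1/model-data/ecmwf-eps/20250109/12
          name: ercot_solargen_forecast
          format: csv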

    liat katzav

    01/16/2025, 1:16 PM
    Hi, I am encountering an issue while trying to run the extractor using a state backend in Meltano (version 3.4.1). In the meltano.yml file, we have configured the S3 path. However, when I run:
    Copy code
    meltano --environment=dev run tap-stripe target-snowflake
    I get the following error message:
    Copy code
    boto3 required but not installed. Install meltano[s3] to use S3 as a state backend. state_backend=AWS S3
    2025-01-16T13:13:02.565031Z [error] Cannot start plugin tap-stripe: Failed to retrieve state
    Can you please advise on how to resolve this issue?
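    The error text itself names the fix: Meltano needs its s3 extra for the S3 state backend. A sketch, assuming a pipx-based install:
    Copy code
    pipx install "meltano[s3]" --force   # --force replaces the existing install
    # or, inside a plain virtualenv:
    pip install "meltano[s3]"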

    Anton Kuiper

    01/18/2025, 3:20 PM
    Hi all, great to have a Slack community for Meltano. I'm trying to understand the concept. I started with the Getting Started tutorial: https://docs.meltano.com/getting-started/part1 . I've installed the software on my Linux box (and also on Windows); both run into the same error/behavior. The expected output is a file in the /output folder, but the folder is empty besides .gitignore. Here is my meltano.yml file:
    version: 1
    default_environment: dev
    project_id: 2910bb16-b01d-469c-8454-1c401537fe4c
    environments:
    - name: dev
    - name: staging
    - name: prod
    plugins:
      extractors:
      - name: tap-github
        variant: meltanolabs
        pip_url: meltanolabs-tap-github
        config:
          start_date: '2024-01-01'
          repositories:
          - meltano/meltano
        select:
        - commits.url
        - commits.sha
        - commits.commit_timestamp
      loaders:
      - name: target-jsonl
        variant: andyh1203
        pip_url: target-jsonl
    The output is run in the shell (terminal); I use Visual Studio Code. I've included the GitHub personal key; I can find it in the .env file. Any help would be nice. I've tried to get some advice from ChatGPT too, but it is far off right now.
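    A quick way to narrow this down is to check whether the tap emits records at all, independent of the loader; target-jsonl writes one .jsonl file per selected stream into output/ by default. A sketch:
    Copy code
    meltano invoke tap-github | head      # RECORD messages should appear here
    meltano run tap-github target-jsonl   # then re-run the full pipeline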

    Yasmim

    01/25/2025, 8:05 PM
    Hello, how can I use Airflow with Meltano? When I try meltano add orchestrator airflow, it gets created, but when I run meltano invoke airflow:create-admin, I get the error [Errno 2] No such file or directory: 'meltano_elt/.meltano/run/airflow/airflow.cfg'. Do I need to install Airflow first?
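    Adding the orchestrator installs Airflow into the plugin's own venv, so a separate install shouldn't be needed; the missing airflow.cfg suggests the plugin was never fully installed or initialized. A hedged sketch of the usual flow (my understanding is that Meltano generates airflow.cfg under .meltano/run/airflow when the plugin is first invoked):
    Copy code
    meltano add orchestrator airflow
    meltano install                     # (re)build the plugin venv
    meltano invoke airflow version      # first invocation should generate airflow.cfg
    meltano invoke airflow:create-admin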

    Kurt Snyder

    01/27/2025, 11:36 PM
    I'm trying a quick eval of Meltano. While installing the MySQL extractor with meltano add extractor tap-mysql I ran into this error (all other steps seemed to have worked):
    Copy code
    Building wheel for pendulum (pyproject.toml): started
      error: subprocess-exited-with-error
      
      × Building wheel for pendulum (pyproject.toml) did not run successfully.
      │ exit code: 1
      ╰─> See above for output.
    This is on an M2 Pro, macOS 14.7, with pyenv running 3.12.6, right after installing Meltano with
    Copy code
    pipx install "meltano"
      installed package meltano 3.6.0, installed using Python 3.13.1
      These apps are now globally available
        - meltano
    Any suggestions appreciated
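    Note the mismatch in the output above: pipx installed Meltano under Python 3.13.1, not the pyenv 3.12.6, and pendulum lacked prebuilt wheels for 3.13 at the time, forcing a source build. A hedged sketch: reinstall Meltano under the 3.12 interpreter so plugin installs pick up prebuilt wheels:
    Copy code
    pipx install meltano --python "$(pyenv which python)" --force
    # or pin the interpreter per plugin in meltano.yml:
    #   plugins:
    #     extractors:
    #     - name: tap-mysql
    #       python: python3.12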

    Denis Gribanov

    01/28/2025, 8:28 PM
    Hi everyone! Is it considered OK to have custom taps and targets inside the /extract and /load directories? I'd like to keep all taps and targets in the same repository. What potential issues might I encounter if I take this approach?

    Jean Paul Azzopardi

    01/28/2025, 9:28 PM
    Hi everyone, starting out with Meltano and trying to configure Snowflake as my target destination. I currently have this setup in my meltano.yml file but keep receiving a "loader failed" error. I'm using key-pair auth with the private key in the .env file; any thoughts? I tried debugging but the logs are unclear to me. Thanks!
    Copy code
    loaders:
      - name: target-snowflake
        variant: meltanolabs
        pip_url: meltanolabs-target-snowflake
        config:
          account: xxxx
          add_record_metadata: false
          database: production
          default_target_schema: public
          role: xxxx
          schema: xxxx
          user: xxxx
          warehouse: default
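    A way to confirm the key-pair settings are named and populated correctly: the first command prints the loader's accepted settings and their env-var names, and the .env names below are assumptions to check against that output:
    Copy code
    meltano config target-snowflake list
    # .env (hypothetical setting names; confirm against the list output)
    TARGET_SNOWFLAKE_PRIVATE_KEY_PATH="/path/to/rsa_key.p8"
    TARGET_SNOWFLAKE_PRIVATE_KEY_PASSPHRASE="..."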

    Jay

    01/31/2025, 9:16 AM
    Hi team - I'm a data analyst at Qogita, and I'm looking to potentially set up your self-hosted infrastructure. Before I do, I just want to check that you offer Tapfiliate as one of your taps, as this is the main reason I would want to set it up. Hope to hear from you soon 😄

    Chris Walker

    02/13/2025, 9:43 PM
    Hi melters! Is anyone able to guide me on the easiest way in 2025 to get SQLFluff working for my Meltano project? I want that auto-formatting so, so badly.
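    SQLFluff is available on Meltano Hub as a utility plugin (worth confirming on hub.meltano.com), so one low-friction route is the sketch below; templater config for dbt may still be needed, and arguments after the plugin name pass through to the sqlfluff CLI:
    Copy code
    meltano add utility sqlfluff
    meltano invoke sqlfluff lint transform/models   # hypothetical model path
    meltano invoke sqlfluff fix transform/models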

    dbname

    02/17/2025, 8:20 PM
    Hi all, I have a more general question. When working with taps, is it normal for the catalog I generate to have issues for my targets? What I mean is that I am using tap-googleads with target-postgres. If I generate the catalog and run a test with target-jsonl, all works fine. However, if I use the stock catalog and try this with Postgres, I need to go in and make lots of changes. Some examples of changes are column naming notation and selection of primary keys. I expected catalog generation to produce something that all loaders could use, but I'm starting to see that is not the case. I do not mind making these updates but want to make sure it is the proper workflow. The end result seems to be a pretty customized catalog file that I need to reference from meltano.yml.
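    For what it's worth, the usual Meltano workflow avoids hand-maintaining a catalog file: overrides are expressed in meltano.yml via the select, metadata, and schema extras, and Meltano applies them every time it regenerates the catalog. A sketch (stream and column names hypothetical):
    Copy code
    extractors:
    - name: tap-googleads
      select:
      - campaigns.*
      metadata:
        campaigns:
          replication-key: segments_date   # hypothetical
      schema:
        campaigns:
          campaign_id:
            type: ["string", "null"]       # hypothetical type override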

    Kavin Srithongkham

    03/04/2025, 11:46 AM
    Hi there, I'm trying to test out tap-sharepointsites and I am getting this error:
    Copy code
    2025-03-04T11:39:22.288098Z [error    ] Extractor 'tap-sharepointsites' could not be installed: Failed to install plugin 'tap-sharepointsites'.
    2025-03-04T11:39:22.288141Z [info     ] ERROR: Ignored the following versions that require a different python version: 0.0.1 Requires-Python >=3.7.1,<3.11
    ERROR: Could not find a version that satisfies the requirement tap-sharepointsites (from versions: none)
    ERROR: No matching distribution found for tap-sharepointsites
    
    Need help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to
    join our friendly Slack community.
    
    Failed to install plugin(s)
    Originally I had Python 3.11, so I used pyenv to downgrade to Python 3.10, but it still seems like it doesn't want to find the right version. Does anyone have any ideas about what I should try?
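    The pip output says version 0.0.1 exists but requires Python <3.11, so the venv Meltano builds for this plugin is presumably still on 3.11+. Meltano's per-plugin python setting can force a compatible interpreter (assuming python3.10 is on PATH), followed by a reinstall:
    Copy code
    plugins:
      extractors:
      - name: tap-sharepointsites
        python: python3.10   # interpreter used for this plugin's virtualenv
    # then: meltano install extractor tap-sharepointsites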

    Tanner Wilcox

    03/05/2025, 6:19 PM
    When I run meltano invoke dbt-postgres:run, can I specify just one source to transform?
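    One approach that is definitely supported: define an extra plugin command in meltano.yml whose args include dbt's node-selection flags (model name hypothetical), then invoke it:
    Copy code
    utilities:
    - name: dbt-postgres
      commands:
        run-my-model:
          args: run --select my_model   # hypothetical model
          description: Run a single dbt model
    # then: meltano invoke dbt-postgres:run-my-model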

    Chad Bell

    03/06/2025, 3:56 AM
    Hi 👋 looking into Meltano for our ingestion from GCP Cloud SQL into BigQuery, running meltano run tap-postgres target-bigquery. Is there a way to load the data into BigQuery columns directly, instead of one JSON "data" column?
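    Assuming the z3z1ma variant of target-bigquery, the denormalized option controls exactly this, writing record properties to real columns rather than a single JSON data column:
    Copy code
    loaders:
    - name: target-bigquery
      variant: z3z1ma
      config:
        denormalized: true   # materialize fields as columns instead of one JSON `data` column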

    Allan Whatmough

    03/07/2025, 5:38 AM
    I haven't used Meltano for a while, but I'm trying to get back to it now. I have a CI pipeline I built a long time ago, and I've noticed it's giving me this error when trying to install Meltano: No matching distribution found for meltano==2.10.0. Was this version yanked? I can still see it on PyPI.
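    Probably not yanked: pip hides releases whose Requires-Python excludes the running interpreter, and old Meltano 2.x releases don't support the newest Pythons. A hedged check is to pin the interpreter the CI image uses (version choice is an assumption):
    Copy code
    python3.10 -m pip install meltano==2.10.0   # try an interpreter Meltano 2.x supported
    pip index versions meltano                  # newer pip: list versions visible to this interpreter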

    Juan Pablo Herrera

    03/12/2025, 10:41 PM
    Hi all, new Meltano user here. I am running into a "BrokenPipeError" and I'm not sure why. I have a CSV file on my desktop and I'm trying to store it in a Parquet file. Here is also my meltano.yml. I thought it could be file size, but right now my file has 10k rows. Thank you!
    Copy code
    version: 1
    default_environment: dev
    project_id: 2fc4aa94-ed4d-49cd-9b6b-c1644bf4608e
    environments:
    - name: dev
    - name: staging
    - name: prod
    plugins:
      extractors:
      - name: tap-spreadsheets-anywhere
        variant: ets
        pip_url: git+https://github.com/ets/tap-spreadsheets-anywhere.git
        config:
          tables:
          - path: 'file:///Users/juanherrera/Desktop/subway-monthly-data'
            name: 'subway_monthly_data'
            pattern: 'MTA_Subway_Hourly_Ridership_small.csv'
            start_date: '2025-03-12T15:30:00Z'
            prefer_schema_as_string: true
            key_properties: ['id']
            format: csv
    
      loaders:
      - name: target-parquet
        variant: automattic
        pip_url: git+https://github.com/Automattic/target-parquet.git
        config:
            destination_path: data/subway_data
            compression_method: snappy
            logging_level: info
            disable_collection: true
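    A BrokenPipeError on the tap side usually means the target process exited first, so the tap's traceback isn't the root cause. Re-running with debug logging should surface the loader's own error:
    Copy code
    meltano --log-level=debug run tap-spreadsheets-anywhere target-parquet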

    Nivetha

    03/17/2025, 7:32 PM
    Hi, I'm new to Meltano and data engineering in general, and wondering if there's something I'm doing wrong here or if it's just that HubSpot private apps are not yet supported in Meltano (https://github.com/singer-io/tap-hubspot/issues/211). I'm trying to configure the tap-hubspot extractor with my HubSpot private app details. I have client_id set to my HubSpot private app access token, client_secret set to the client secret, and a redirect_url set as well. When I test the configuration, I keep getting the error "Exception: Config is missing required keys: ['refresh_token']". However, there is no refresh token available that I can see in my HubSpot private app. Is there a way to disable this requirement? I see a section called "overriding discoverable plugin properties" in the documentation (https://docs.meltano.com/guide/configuration) but am unsure if that applies here. Thanks for any help you can provide.

    Oren Teich

    03/20/2025, 4:21 AM
    What's the suggested approach for handling transformations that are beyond dbt's capabilities? I've got a Postgres warehouse, loading CSV data in. I need to do fuzzy matching on company name (e.g. ACME Corp. vs Acme Co. vs Acme), and it's brutal in SQL/dbt. Ideally I could write a Python script to do it. I see that there are plugins that are 'strongly discouraged'. Is there a recommended approach? Worst case, I'll write a custom script that is outside the Meltano pipeline, but I was hoping for something more integrated, for example to help deal with incremental updates.

    Alejandro Rodriguez

    03/20/2025, 2:04 PM
    Hi, I'm trying to set up replication from Cloud SQL MySQL to BigQuery and I'm having an issue with tap-mysql not using state. The replication works fine, except it always starts from scratch due to not picking up the state the previous run generated. It does generate the state correctly when a run completes, but on the next run I always see the following two lines:
    2025-03-20T13:59:43.766320Z [debug    ] Could not find state.json in /projects/.meltano/extractors/tap-mysql/state.json, skipping.
    2025-03-20T13:59:43.793158Z [warning  ] No state was found, complete import.
    and then every table says it requires a full resync. Even when I manually copy the state to the location in the first log line, it doesn’t pick it up. Any ideas?
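    Worth checking whether state is being written under the state ID that meltano run actually reads from (the system database), rather than that state.json path. A sketch; the default ID format is <environment>:<tap>-to-<target>:
    Copy code
    meltano state list                                  # does an ID exist for this pipeline?
    meltano state get dev:tap-mysql-to-target-bigquery  # hypothetical ID from the list output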

    Don Venardos

    04/22/2025, 1:03 AM
    I am having issues with tap-mssql variant SpaceCondor using log-based replication (Change Tracking, not CDC): https://hub.meltano.com/extractors/tap-mssql--spacecondor/ I have Change Tracking configured in SQL Server but am getting a full replication on each run when using meltano run (switched to target-jsonl from target-snowflake for debugging):
    meltano run tap-mssql target-jsonl
    The state gets updated with each run:
    {"bookmarks": {"dbo-c_logical_field_user_values": {}}}
    Extractor config:
    extractors:
    - name: tap-mssql
      config:
        host: PROJECT01
        port: 60065
        database: rss_test
        username: svcTestAccount
        default_replication_method: LOG_BASED
        sqlalchemy_url_query_options:
        - key: driver
          value: ODBC Driver 18 for SQL Server
        - key: TrustServerCertificate
          value: yes
      select:
      - dbo-c_logical_field_user_values.*
    I think this might be a configuration issue; I'm not sure, but perhaps it isn't picking up the default replication method?
    {"event": "Visiting CatalogNode.STREAM at '.streams[352]'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910101Z"}
    {"event": "Setting '.streams[352].selected' to 'False'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910162Z"}
    {"event": "Setting '.streams[352].selected' to 'True'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910211Z"}
    {"event": "Skipping node at '.streams[352].tap_stream_id'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910259Z"}
    {"event": "Skipping node at '.streams[352].table_name'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910306Z"}
    {"event": "Skipping node at '.streams[352].replication_method'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910354Z"}
    {"event": "Skipping node at '.streams[352].key_properties[0]'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910402Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.logical_field_sid'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910457Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.enabled_flag'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910513Z"}
    {"event": "Skipping node at '.streams[352].schema.properties.enabled_flag.maxLength'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910604Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.modified_by_user_sid'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910779Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.modified_datetime'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910924Z"}
    {"event": "Skipping node at '.streams[352].schema.properties.modified_datetime.format'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910988Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.timestamp'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.911099Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.system_modified_datetime'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.911160Z"}
    {"event": "Skipping node at '.streams[352].schema.properties.system_modified_datetime.format'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911212Z"}
    {"event": "Skipping node at '.streams[352].schema.type'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911262Z"}
    {"event": "Skipping node at '.streams[352].schema.required[0]'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911312Z"}
    {"event": "Skipping node at '.streams[352].schema.$schema'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911361Z"}
    {"event": "Skipping node at '.streams[352].is_view'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911410Z"}
    {"event": "Skipping node at '.streams[352].stream'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911458Z"}
    {"event": "Visiting CatalogNode.METADATA at '.streams[352].metadata[0]'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.911509Z"}
    {"event": "Visiting metadata node for tap_stream_id 'dbo-c_logical_field_user_values', breadcrumb '['properties', 'logical_field_sid']'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911558Z"}
    {"event": "Setting '.streams[352].metadata[0].metadata.selected' to 'False'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911616Z"}
    {"event": "Setting '.streams[352].metadata[0].metadata.selected' to 'True'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911665Z"}
    Anyone have suggestions on troubleshooting? There are no errors like in the previous question about not finding the state. The SQL Server tables have Change Tracking enabled as:
    ALTER TABLE dbo.' + @ls_table_name + N'
    ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = OFF);
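    If the catalog isn't picking up default_replication_method (whether this variant honors that setting is worth verifying), Meltano's metadata extra can force the method per stream. A sketch:
    Copy code
    plugins:
      extractors:
      - name: tap-mssql
        metadata:
          dbo-c_logical_field_user_values:
            replication-method: LOG_BASED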

    jack yang

    04/24/2025, 2:15 AM
    Does Meltano support Change Data Capture (CDC) functionality for MySQL?
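    The Singer tap-mysql variants implement binlog-based LOG_BASED replication, which is MySQL's CDC mechanism (the server must have the binlog enabled in ROW format). A sketch of selecting it via Meltano's metadata extra:
    Copy code
    extractors:
    - name: tap-mysql
      metadata:
        '*':
          replication-method: LOG_BASED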

    Rafael Rotter

    04/28/2025, 6:47 PM
    Hello! I'm starting out in data engineering and I need to integrate a MongoDB database with BigQuery. I found Meltano as a solution for this, but I'm having problems: when I try to test the connection (meltano config tap-mongodb test) I get the message:
    Copy code
    m-meltano:~/prj-mdb-gbq$ meltano config tap-mongodb test
    2025-04-28T17:50:03.990046Z [info     ] The default environment 'dev' will be ignored for `meltano config`. To configure a specific environment, please use the option `--environment=<environment name>`.
    2025-04-28T18:03:11.496374Z [warning  ] Stream `classe` was not found in the catalog
    Need help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to join our friendly Slack community.
    Plugin configuration is invalid
    No RECORD or BATCH message received. Verify that at least one stream is selected using 'meltano select tap-mongodb --list'.
    The meltano.yml looks like this:
    Copy code
    version: 1
    default_environment: dev
    project_id: c1ac854b-545d
    environments:
    - name: dev
    plugins:
      extractors:
      - name: tap-mongodb
        variant: z3z1ma
        pip_url: git+https://github.com/z3z1ma/tap-mongodb.git
        config:
          mongo:
            host: 12.34.5.678
            port: 27017
            directConnection: true
            readPreference: primary
            username: datalake
            password: ****
            authSource: db
            tls: false
          strategy: infer
        select:
        - classe.*
        metadata:
          dbprocapi_classe:
            replication_key: replication_key
            replication-method: LOG_BASED
    For testing purposes I am trying to load only the "classe" collection (- classe.*) from the db database. When I use the command `meltano select tap-mongodb --list --all` I get:
    Copy code
    Enabled patterns: classe.*
    but the following also appears:
    Copy code
    [excluded   ] db_classe.field1
    [excluded   ] db_classe.field2
    [excluded   ] db_classe.field3
    It is important to note that MongoDB does not have replicas. I'm using:
    • a VM on Google Cloud to access MongoDB, both on the same network;
    • the tap-mongodb extractor (z3z1ma).
    Could someone please help me? Thank you.
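    One observation from the --list output above: the discovered streams are named db_classe.*, while the select rule says classe.*, so the rule may simply never match (the warning "Stream `classe` was not found in the catalog" points the same way). A sketch using the discovered name:
    Copy code
    select:
    - db_classe.*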

    Tanner Wilcox

    04/29/2025, 8:37 PM
    We have a bunch of network devices all running the same software. I need to query each of them with the same extractor. The queries will all be the same, and the schemas will also all be the same. What's the correct way to do that? I'm planning on using the REST tap but can write my own if that's cleaner. Ideally I would be able to supply a list of hosts to reach out to, based on the response from an initial API call that gets the list of hosts.
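    One supported pattern for "same tap, many hosts" is plugin inheritance: one base plugin plus an inheriting entry per device, each overriding only the host-specific config (hosts hypothetical). Note the list is static in meltano.yml, so a host list driven by an initial API call would need a step that regenerates or templates the file:
    Copy code
    plugins:
      extractors:
      - name: tap-rest-api-msdk
        variant: widen
        pip_url: tap-rest-api-msdk
      - name: tap-rest-api-msdk--device-a
        inherit_from: tap-rest-api-msdk
        config:
          api_url: https://device-a.example.internal   # hypothetical host
      - name: tap-rest-api-msdk--device-b
        inherit_from: tap-rest-api-msdk
        config:
          api_url: https://device-b.example.internal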

    Jordan Lee

    04/30/2025, 1:29 AM
    The containerization documentation (https://docs.meltano.com/guide/containerization/) recommends meltano add files files-docker-compose, but this adds a broken docker-compose.yml definition that doesn't start, throwing Error: No such command 'ui'.
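    The ui command was removed along with the Meltano UI, so the generated compose file's command: ui no longer starts. A hedged sketch of a minimal edit (the replacement command is hypothetical; any valid meltano subcommand works):
    Copy code
    # docker-compose.yml
    services:
      meltano:
        command: run tap-github target-jsonl   # replace the removed `ui` command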

    Steven Searcy

    05/02/2025, 3:17 PM
    Hello! I am new to Meltano and I’m working on a pipeline using tap-csv to ingest a CSV file and would like to load its data into multiple Postgres tables, depending on column mappings. Has anyone done this with Meltano before? Curious if you’d recommend using stream maps, custom plugins, or something like dbt for post-load splitting. Any best practices or patterns would be greatly appreciated!
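    Since tap-csv is a Singer SDK tap, inline stream maps are one option: recent SDK versions can duplicate a stream with __source__ and drop columns per copy, which would fan one CSV stream out into several tables (worth verifying against the SDK's stream-maps docs; stream and column names hypothetical):
    Copy code
    extractors:
    - name: tap-csv
      config:
        stream_maps:
          customers_contact:        # new stream fed from `customers`
            __source__: customers
            name: name
            email: email
            __else__: null          # drop all unmapped columns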

    Rafael Rotter

    05/09/2025, 2:41 PM
    Hello! I'm using target-bigquery (z3z1ma) to receive data from MongoDB into BigQuery. I managed to send some collections to the target (not all), but some questions arose. If you could help me when you can, I would appreciate it:
    1. How can I specify in target-bigquery that certain BigQuery tables should be partitioned by field X and clustered by Y, Z?
    2. Why are two tables created in BigQuery: one with the execution-time suffix, with data, and another without a suffix and without data? Is a new table created with each load? (attached file)
    3. I would like to confirm: normally there is no change in the MongoDB schema, but it can occur in case of an update. I am using denormalized: true. In case of a change, this can impact the load, correct?
    4. The last error I got was "ParseError: null is not allowed to be used as an element in a repeated field at processo.prioridade[0]". Is it possible to handle this in stream maps?
    Thanks!