# getting-started
  • Anton Kuiper

    01/18/2025, 3:20 PM
    Hi all, great to have a Slack community for Meltano. I'm trying to understand the concepts, starting with the Getting Started tutorial: https://docs.meltano.com/getting-started/part1 . I've installed the software on my Linux box (and also on Windows); both run into the same behavior. The expected output is a file in the /output folder, but the folder is empty apart from .gitignore. Here is my meltano.yml file:
    version: 1
    default_environment: dev
    project_id: 2910bb16-b01d-469c-8454-1c401537fe4c
    environments:
    - name: dev
    - name: staging
    - name: prod
    plugins:
      extractors:
      - name: tap-github
        variant: meltanolabs
        pip_url: meltanolabs-tap-github
        config:
          start_date: '2024-01-01'
          repositories:
          - meltano/meltano
        select:
        - commits.url
        - commits.sha
        - commits.commit_timestamp
      loaders:
      - name: target-jsonl
        variant: andyh1203
        pip_url: target-jsonl
    The output is shown in the shell (terminal); I use Visual Studio Code. I've included the GitHub personal access token; I can find it in the .env file. Any help would be appreciated. I've tried to get advice from ChatGPT too, but it's far off right now.
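    For reference, a minimal sanity check (assuming the tutorial's plugin names) is to run the pipeline explicitly with debug logging and then list the output folder:

    # Verbose run of the tutorial's extractor/loader pair
    meltano --log-level=debug run tap-github target-jsonl
    # target-jsonl writes one .jsonl file per selected stream to output/ by default
    ls output/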
  • Yasmim

    01/25/2025, 8:05 PM
    Hello, how can I use Airflow with Meltano? When I try meltano add orchestrator airflow, it is created, but when I run meltano invoke airflow:create-admin I get the error [Errno 2] No such file or directory: 'meltano_elt/.meltano/run/airflow/airflow.cfg'. Do I need to install Airflow first?
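    One sequence worth trying, on the assumption that the plugin's virtualenv was never fully installed and that airflow.cfg is generated on first invocation:

    # Finish installing the orchestrator plugin into its own venv
    meltano install orchestrator airflow
    # Any invocation should generate .meltano/run/airflow/airflow.cfg
    meltano invoke airflow version
    meltano invoke airflow:create-admin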
  • Kurt Snyder

    01/27/2025, 11:36 PM
    I'm doing a quick eval of Meltano and trying to install the MySQL extractor with
    meltano add extractor tap-mysql
    and ran into this error (all other steps seemed to have worked):
    Building wheel for pendulum (pyproject.toml): started
      error: subprocess-exited-with-error
      
      × Building wheel for pendulum (pyproject.toml) did not run successfully.
      │ exit code: 1
      ╰─> See above for output.
    This is on an M2 Pro, macOS 14.7, with pyenv running 3.12.6, right after installing Meltano with
    pipx install "meltano"
      installed package meltano 3.6.0, installed using Python 3.13.1
      These apps are now globally available
        - meltano
    Any suggestions appreciated.
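    Note the mismatch above: pyenv is on 3.12.6, but pipx installed Meltano against Python 3.13.1, for which pendulum may have no pre-built wheel and must compile. A sketch of one possible fix, assuming a 3.12 interpreter is available:

    pipx uninstall meltano
    # Pin the interpreter so dependencies can install from pre-built wheels
    pipx install --python python3.12 meltano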
  • Denis Gribanov

    01/28/2025, 8:28 PM
    Hi everyone! Is it considered ok to have custom taps and targets inside the /extract and /load directories? I'd like to keep all taps and targets in the same repository. What potential issues might I encounter if I take this approach?
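    For reference, a common pattern is to point a custom plugin's pip_url at the in-repo path with an editable install; tap-my-custom below is a hypothetical name:

    plugins:
      extractors:
      - name: tap-my-custom
        namespace: tap_my_custom
        pip_url: -e ./extract/tap-my-custom  # editable install from this repository
        executable: tap-my-custom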
  • Jean Paul Azzopardi

    01/28/2025, 9:28 PM
    hi everyone, starting out with meltano and trying to configure Snowflake as my target destination. I currently have this setup in my meltano.yml file but keep receiving a "loader failed" error. I'm using key-pair auth with the private key in the .env file; any thoughts? Tried debugging but the logs are unclear to me. Thanks!
    loaders:
      - name: target-snowflake
        variant: meltanolabs
        pip_url: meltanolabs-target-snowflake
        config:
          account: xxxx
          add_record_metadata: false
          database: production
          default_target_schema: public
          role: xxxx
          schema: xxxx
          user: xxxx
          warehouse: default
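    A sketch of the key-pair settings this variant reads from the environment; treat the exact setting names as assumptions to verify against your target-snowflake version:

    # .env (kept out of version control)
    TARGET_SNOWFLAKE_PRIVATE_KEY_PATH=/path/to/rsa_key.p8
    TARGET_SNOWFLAKE_PRIVATE_KEY_PASSPHRASE=change-me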
  • Jay

    01/31/2025, 9:16 AM
    Hi team - I'm a data analyst at Qogita and I'm looking to potentially set up your self-hosted infrastructure. Before I do, I just want to check that you offer Tapfiliate as one of your taps, as this is the main reason I would want to set it up. Hope to hear from you soon 😄
  • Chris Walker

    02/13/2025, 9:43 PM
    Hi melters! Is anyone able to guide me on the easiest way in 2025 to get SQLFluff working for my Meltano project? I want that auto-formatting so, so badly.
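    A minimal sketch, assuming the Hub's sqlfluff utility plugin and a dbt-style transform/models path:

    meltano add utility sqlfluff
    # Lint, then auto-fix, the project's SQL models
    meltano invoke sqlfluff lint transform/models
    meltano invoke sqlfluff fix transform/models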
  • dbname

    02/17/2025, 8:20 PM
    Hi All, I have a more general question. When working with taps, is it normal for the catalog I generate to have issues for my targets? What I mean is that I am using tap-googleads with target-postgres. If I generate the catalog and run a test with target-jsonl, all works fine. However, if I use the stock catalog and try this with Postgres, I need to go in and make lots of changes. Some examples are column naming notation and selection of primary keys. I expected catalog generation to produce something that all loaders could use, but I'm starting to see that is not the case. I do not mind making these updates but want to make sure it is the proper workflow. The end result seems to be a pretty customized catalog file that I need to reference from meltano.yml.
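    For reference, Meltano can apply catalog overrides at runtime through the schema and metadata extras in meltano.yml, which avoids maintaining a hand-edited catalog file; the stream and column names below are hypothetical:

    plugins:
      extractors:
      - name: tap-googleads
        metadata:
          campaigns:                     # hypothetical stream
            table-key-properties: [id]   # declare the primary key for loaders
        schema:
          campaigns:
            campaign_name:               # hypothetical column
              type: [string, "null"]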
  • Kavin Srithongkham

    03/04/2025, 11:46 AM
    Hi there, I'm trying to test out tap-sharepointsites and I am getting this error
    2025-03-04T11:39:22.288098Z [error    ] Extractor 'tap-sharepointsites' could not be installed: Failed to install plugin 'tap-sharepointsites'.
    2025-03-04T11:39:22.288141Z [info     ] ERROR: Ignored the following versions that require a different python version: 0.0.1 Requires-Python >=3.7.1,<3.11
    ERROR: Could not find a version that satisfies the requirement tap-sharepointsites (from versions: none)
    ERROR: No matching distribution found for tap-sharepointsites
    
    Need help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to
    join our friendly Slack community.
    
    Failed to install plugin(s)
    Originally I had Python 3.11, so I used pyenv to downgrade to Python 3.10, but it still seems like it can't find the right version. Does anyone have ideas about what I should try?
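    One thing worth checking: the constraint in the log is >=3.7.1,<3.11, and Meltano builds each plugin's venv with its own interpreter, not necessarily the pyenv one. Meltano 3 lets you pin the interpreter per plugin; a sketch, assuming python3.10 is on the PATH:

    plugins:
      extractors:
      - name: tap-sharepointsites
        pip_url: tap-sharepointsites
        python: python3.10  # interpreter used to build this plugin's venv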
  • Tanner Wilcox

    03/05/2025, 6:19 PM
    When I run meltano invoke dbt-postgres:run, can I specify just one source to transform?
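    A sketch using dbt's own node selection, on the assumption that arguments after the plugin name are forwarded to dbt; my_model is a hypothetical model name:

    meltano invoke dbt-postgres run --select my_model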
  • Chad Bell

    03/06/2025, 3:56 AM
    Hi 👋 looking into Meltano for our ingestion from GCP Cloud SQL into BigQuery. Running: meltano run tap-postgres target-bigquery. Is there a way to load the data into BigQuery columns directly, instead of one JSON "data" column?
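    Assuming the z3z1ma variant of target-bigquery, the denormalized flag is the relevant setting; a sketch:

    plugins:
      loaders:
      - name: target-bigquery
        variant: z3z1ma
        config:
          denormalized: true  # typed columns instead of a single JSON "data" column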
  • Allan Whatmough

    03/07/2025, 5:38 AM
    I haven't used Meltano for a while but I'm trying to get back to it now. I have a CI pipeline I built a long time ago, but I've noticed it's giving me this error when trying to install Meltano:
    No matching distribution found for meltano==2.10.0
    Was this version yanked? I can still see it on PyPI.
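    This error usually means the interpreter in the CI image is newer than the old release supports, rather than a yanked package. A sketch, assuming the CI can pin an older Python:

    # Meltano 2.x predates Python 3.12; an older interpreter often resolves this
    python3.10 -m pip install "meltano==2.10.0"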
  • Juan Pablo Herrera

    03/12/2025, 10:41 PM
    Hi all, new Meltano user here. I am running into a "BrokenPipeError" and I'm not sure why. I have a CSV file on my desktop and I'm trying to store it in a Parquet file. Here is my meltano.yml as well. I thought it could be file size, but right now my file has only 10k rows. Thank you!
    version: 1
    default_environment: dev
    project_id: 2fc4aa94-ed4d-49cd-9b6b-c1644bf4608e
    environments:
    - name: dev
    - name: staging
    - name: prod
    plugins:
      extractors:
      - name: tap-spreadsheets-anywhere
        variant: ets
        pip_url: git+https://github.com/ets/tap-spreadsheets-anywhere.git
        config:
          tables:
          - path: 'file:///Users/juanherrera/Desktop/subway-monthly-data'
            name: 'subway_monthly_data'
            pattern: 'MTA_Subway_Hourly_Ridership_small.csv'
            start_date: '2025-03-12T15:30:00Z'
            prefer_schema_as_string: true
            key_properties: ['id']
            format: csv
    
      loaders:
      - name: target-parquet
        variant: automattic
        pip_url: git+https://github.com/Automattic/target-parquet.git
        config:
          destination_path: data/subway_data
          compression_method: snappy
          logging_level: info
          disable_collection: true
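    A BrokenPipeError on the tap side usually just means the target process exited first, so the target's own traceback is the useful one; debug logging should surface it:

    meltano --log-level=debug run tap-spreadsheets-anywhere target-parquet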
  • Nivetha

    03/17/2025, 7:32 PM
    Hi, I'm new to Meltano and data engineering in general, wondering if there's something I'm doing wrong here or if it's just that Hubspot private apps are not yet supported in Meltano (https://github.com/singer-io/tap-hubspot/issues/211). I'm trying to configure the tap-hubspot extractor with my Hubspot private app details. I have client_id set to my Hubspot private app access token, client_secret set to the client secret, and a redirect_url set as well. When I test the configuration, I keep getting the error "Exception: Config is missing required keys: ['refresh_token']". However, there is no refresh token that I can see in my Hubspot private app. Is there a way to disable this requirement? I see a section called "overriding discoverable plugin properties" in this documentation (https://docs.meltano.com/guide/configuration) but I'm unsure if that applies here. Thanks for any help you can provide.
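    The singer-io variant is OAuth-based, which is why it insists on refresh_token; configuration overrides can't remove a key the tap itself requires. One alternative worth investigating, on the assumption that it accepts private-app tokens, is the MeltanoLabs variant:

    plugins:
      extractors:
      - name: tap-hubspot
        variant: meltanolabs
        config:
          access_token: $HUBSPOT_ACCESS_TOKEN  # private-app token; setting name assumed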
  • Oren Teich

    03/20/2025, 4:21 AM
    What's the suggested approach for handling transformations that are beyond dbt's capabilities? I've got a Postgres warehouse and I'm loading CSV data in. I need to do fuzzy matching on company name (e.g. ACME Corp. vs Acme Co. vs Acme), which is brutal in SQL/dbt. Ideally I could write a Python script to do it. I see that there are plugins that are 'strongly discouraged'. Is there a recommended approach? Worst case, I'll write a custom script that runs outside the Meltano pipeline, but I was hoping for something more integrated, for example to help deal with incremental updates.
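    One integrated option is to register the script as a custom utility plugin so it runs as a pipeline step; the names and paths below are hypothetical:

    plugins:
      utilities:
      - name: fuzzy-match
        namespace: fuzzy_match
        pip_url: -e ./utilities/fuzzy-match  # in-repo Python package
        executable: fuzzy-match

    # invoked as a step, e.g.: meltano run tap-csv target-postgres fuzzy-match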
  • Alejandro Rodriguez

    03/20/2025, 2:04 PM
    Hi, I’m trying to set up replication from Cloud SQL MySQL to BigQuery and I’m having an issue with tap-mysql not using state. The replication works fine except it always starts from scratch due to not picking up on the state the previous run generated. It does generate the state well when a run completes, but on the next run I always see the following two lines:
    2025-03-20T13:59:43.766320Z [debug    ] Could not find state.json in /projects/.meltano/extractors/tap-mysql/state.json, skipping.
    2025-03-20T13:59:43.793158Z [warning  ] No state was found, complete import.
    and then every table says it requires a full resync. Even when I manually copy the state to the location in the first log line, it doesn’t pick it up. Any ideas?
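    The first log line is normal; that legacy file path is checked and skipped, and "No state was found" points at the state backend instead. Two things worth verifying, assuming default settings: that the run uses the same environment that wrote the state, and that the expected state ID exists:

    # State IDs are scoped per environment, e.g. dev:tap-mysql-to-target-bigquery
    meltano --environment=dev state list
    meltano state get dev:tap-mysql-to-target-bigquery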
  • Don Venardos

    04/22/2025, 1:03 AM
    I am having issues with tap-mssql, variant SpaceCondor, using log-based replication (Change Tracking, not CDC): https://hub.meltano.com/extractors/tap-mssql--spacecondor/ I have Change Tracking configured in SQL Server but am getting a full replication on each run when using meltano run (I switched from target-snowflake to target-jsonl for debugging):
    meltano run tap-mssql target-jsonl
    The state gets updated with each run:
    {"bookmarks": {"dbo-c_logical_field_user_values": {}}}
    Extractor config:
    extractors:
    - name: tap-mssql
      config:
        host: PROJECT01
        port: 60065
        database: rss_test
        username: svcTestAccount
        default_replication_method: LOG_BASED
        sqlalchemy_url_query_options:
        - key: driver
          value: ODBC Driver 18 for SQL Server
        - key: TrustServerCertificate
          value: yes
      select:
      - dbo-c_logical_field_user_values.*
    I think this might be a configuration issue; I'm not sure, but perhaps it isn't picking up the default replication method?
    {"event": "Visiting CatalogNode.STREAM at '.streams[352]'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910101Z"}
    {"event": "Setting '.streams[352].selected' to 'False'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910162Z"}
    {"event": "Setting '.streams[352].selected' to 'True'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910211Z"}
    {"event": "Skipping node at '.streams[352].tap_stream_id'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910259Z"}
    {"event": "Skipping node at '.streams[352].table_name'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910306Z"}
    {"event": "Skipping node at '.streams[352].replication_method'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910354Z"}
    {"event": "Skipping node at '.streams[352].key_properties[0]'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910402Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.logical_field_sid'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910457Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.enabled_flag'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910513Z"}
    {"event": "Skipping node at '.streams[352].schema.properties.enabled_flag.maxLength'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910604Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.modified_by_user_sid'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910779Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.modified_datetime'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.910924Z"}
    {"event": "Skipping node at '.streams[352].schema.properties.modified_datetime.format'", "level": "debug", "timestamp": "2025-04-22T00:34:06.910988Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.timestamp'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.911099Z"}
    {"event": "Visiting CatalogNode.PROPERTY at '.streams[352].schema.properties.system_modified_datetime'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.911160Z"}
    {"event": "Skipping node at '.streams[352].schema.properties.system_modified_datetime.format'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911212Z"}
    {"event": "Skipping node at '.streams[352].schema.type'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911262Z"}
    {"event": "Skipping node at '.streams[352].schema.required[0]'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911312Z"}
    {"event": "Skipping node at '.streams[352].schema.$schema'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911361Z"}
    {"event": "Skipping node at '.streams[352].is_view'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911410Z"}
    {"event": "Skipping node at '.streams[352].stream'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911458Z"}
    {"event": "Visiting CatalogNode.METADATA at '.streams[352].metadata[0]'.", "level": "debug", "timestamp": "2025-04-22T00:34:06.911509Z"}
    {"event": "Visiting metadata node for tap_stream_id 'dbo-c_logical_field_user_values', breadcrumb '['properties', 'logical_field_sid']'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911558Z"}
    {"event": "Setting '.streams[352].metadata[0].metadata.selected' to 'False'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911616Z"}
    {"event": "Setting '.streams[352].metadata[0].metadata.selected' to 'True'", "level": "debug", "timestamp": "2025-04-22T00:34:06.911665Z"}
    Anyone have suggestions on troubleshooting? There are no errors, unlike the previous question about not finding the state. The SQL Server tables have Change Tracking enabled as:
    ALTER TABLE dbo.' + @ls_table_name + N'
    ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = OFF);
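    Given the empty bookmarks, one thing worth trying is forcing the method per stream through Meltano's metadata extra rather than relying on default_replication_method; a sketch:

    plugins:
      extractors:
      - name: tap-mssql
        metadata:
          dbo-c_logical_field_user_values:
            replication-method: LOG_BASED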
  • jack yang

    04/24/2025, 2:15 AM
    Does Meltano support Change Data Capture (CDC) functionality for MySQL?
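    For what it's worth, binlog-based variants of tap-mysql implement CDC-style replication as LOG_BASED; a sketch of requesting it, assuming binary logging with ROW format is enabled on the server:

    plugins:
      extractors:
      - name: tap-mysql
        metadata:
          '*':
            replication-method: LOG_BASED  # reads the MySQL binlog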
  • Rafael Rotter

    04/28/2025, 6:47 PM
    Hello! I'm starting out in data engineering and I need to integrate a MongoDB database with BigQuery. I found Meltano as a solution for this, but I'm having problems; when I try to test the connection (meltano config tap-mongodb test) I get the message:
    m-meltano:~/prj-mdb-gbq$ meltano config tap-mongodb test
    2025-04-28T17:50:03.990046Z [info     ] The default environment 'dev' will be ignored for `meltano config`. To configure a specific environment, please use the option `--environment=<environment name>`.
    2025-04-28T18:03:11.496374Z [warning  ] Stream `classe` was not found in the catalog
    Need help fixing this problem? Visit http://melta.no/ for troubleshooting steps, or to join our friendly Slack community.
    Plugin configuration is invalid
    No RECORD or BATCH message received. Verify that at least one stream is selected using 'meltano select tap-mongodb --list'.
    The meltano.yml looks like this:
    Copy code
    version: 1
    default_environment: dev
    project_id: c1ac854b-545d
    environments:
    - name: dev
    plugins:
      extractors:
      - name: tap-mongodb
        variant: z3z1ma
        pip_url: git+https://github.com/z3z1ma/tap-mongodb.git
        config:
          mongo:
            host: 12.34.5.678
            port: 27017
            directConnection: true
            readPreference: primary
            username: datalake
            password: ****
            authSource: db
            tls: false
          strategy: infer
        select:
        - classe.*
        metadata:
          dbprocapi_classe:
            replication_key: replication_key
            replication-method: LOG_BASED
    For testing purposes I am trying to load only the "classe" collection (- classe.*) from the db database. When I use the command `meltano select tap-mongodb --list --all` I get:
    Enabled patterns: classe.*
    but the fields also appear as excluded:
    [excluded   ] db_classe.field1
    [excluded   ] db_classe.field2
    [excluded   ] db_classe.field3
    It is important to note that MongoDB does not have replicas. I'm using:
    • a VM on Google Cloud to access MongoDB, both on the same network;
    • the tap-mongodb extractor (z3z1ma).
    Could someone please help me? Thank you.
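    The two outputs above point at the likely fix: the discovered streams are prefixed with the database name, so the select pattern needs that prefix. A sketch:

        select:
        - db_classe.*  # matches the stream names shown by `meltano select --list --all`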
  • Tanner Wilcox

    04/29/2025, 8:37 PM
    We have a bunch of network devices all running the same software. I need to query each of them with the same extractor; the queries will all be the same, and the schemas will also all be the same. What's the correct way to do that? I'm planning on using the REST tap, but I can write my own if that's cleaner. Ideally I would be able to supply a list of hosts to reach out to, based on the response from an initial API call that gets the list of hosts.
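    One Meltano-native pattern is plugin inheritance: define the base REST tap once and inherit one configured copy per host. The host names are hypothetical, and a dynamic host list would still need a templating step that renders meltano.yml:

    plugins:
      extractors:
      - name: tap-rest-api-msdk
        variant: widen
        pip_url: tap-rest-api-msdk
      - name: tap-device-nyc01          # hypothetical device
        inherit_from: tap-rest-api-msdk
        config:
          api_url: https://nyc01.example.net/api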
  • Jordan Lee

    04/30/2025, 1:29 AM
    The containerization documentation (https://docs.meltano.com/guide/containerization/) recommends meltano add files files-docker-compose, but this adds a broken docker-compose.yml definition that doesn't start, throwing Error: No such command 'ui'.
  • Steven Searcy

    05/02/2025, 3:17 PM
    Hello! I am new to Meltano and I’m working on a pipeline using tap-csv to ingest a CSV file and would like to load its data into multiple Postgres tables, depending on column mappings. Has anyone done this with Meltano before? Curious if you’d recommend using stream maps, custom plugins, or something like dbt for post-load splitting. Any best practices or patterns would be greatly appreciated!
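    If the split is purely column-based, Singer SDK stream maps can clone a stream into additional tables before load; the stream and column names below are hypothetical:

    plugins:
      loaders:
      - name: target-postgres
        config:
          stream_maps:
            contacts_emails:
              __source__: contacts      # clone the incoming contacts stream
              email: record['email']    # keep the mapped column
              __else__: null            # drop all other fields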
  • Rafael Rotter

    05/09/2025, 2:41 PM
    Hello! I'm using target-bigquery (z3z1ma) to receive data from MongoDB into BigQuery. I managed to send some collections to the target (not all), but some questions arose. If you could help me when you can, please, I would appreciate it:
    1. How can I specify in target-bigquery that some tables in BigQuery should be partitioned by field X and clustered by Y, Z?
    2. Why are two tables created in BigQuery: one with the execution-time suffix, with data, and another without a suffix and without data? Is a new table created with each load? (attached file)
    3. I would like to confirm: normally there is no change in the MongoDB schema, but it can occur in case of an update. I am using denormalized: true. In case of a change, this can impact the load, correct?
    4. The last error I got was "ParseError: null is not allowed to be used as an element in a repeated field at processo.prioridade[0]". Is it possible to handle this in stream maps? Thanks!
  • Christian Hilden

    05/12/2025, 9:11 AM
    Hello, I am trying to set up the rest-api tap, but there seems to be a problem either with auth or with my API URL. Is there any way to see which requests are being run by the plugin during a config test (config tap-rest-api-msdk test)?
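    Debug logging is the first thing to try; it should show what the tap does during the test:

    meltano --log-level=debug config tap-rest-api-msdk test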
  • Florian Bergmann

    05/12/2025, 11:45 AM
    Hi all, I have a question concerning the use_singer_decimal property. My pipeline is from tap-oracle (variant: s7clarke10) to target-snowflake (variant: meltanolabs). The source column MYCOLUMN is Number(20,2) in the Oracle DB. My extractor's properties look like this:
    plugins:
      extractors:
      - name: tap-oracle
        variant: s7clarke10
        pip_url: git+https://github.com/s7clarke10/pipelinewise-tap-oracle.git
        config:
          ...
          use_singer_decimal: true
        select:
        - TEST-TEST_TABLE.*
        schema:
          TEST-TEST_TABLE:
            MYCOLUMN:
              type: [string, 'null']
              format: x-singer.decimal
              precision: 20
              scale: 2
    - The column MYCOLUMN is extracted in the JSON as expected: ...,"MYCOLUMN":"1234.56",...
    - However, in Snowflake the column MYCOLUMN is created as NUMBER(38,0) instead of NUMBER(20,2), and inserted as the rounded value '1235'.
    - In case I create the target table in advance as NUMBER(20,2), the inserted value looks like '1235.00'.
    What am I missing here / doing wrong?
  • Rafael Rotter

    05/15/2025, 7:13 PM
    Hi everyone! I would like to confirm some information, please: 1. Does tap-mongodb (z3z1ma) support the LOG_BASED replication method? 2. If so, does the MongoDB database need a replica set? Thanks!
  • Dries Beheydt

    05/20/2025, 8:11 PM
    Hi Melty Crew, I wanted to set up a toy example of a dbt-meltano-dagster combo. I used this awesome starting guide: https://medium.com/@kenokumura/how-to-orchestrate-dbt-with-dagster-in-multi-containers-on-docker-ebf0d171a3a9 and then added meltano and dagster-meltano to the Dockerfile_user_code, mounted my meltano project, and added a "load_jobs_from_meltano_project" to repo.py. Doing only dagster+meltano or dagster+dbt this way works, but together they break (something wrong with snowplow-tracker...), which I suspect is a version thing. To get started, which versions of these packages should be compatible here? I trial-and-errored some combinations but got nowhere. Thanks a lot!!
  • hammad_khan

    05/22/2025, 3:09 PM
    Hey there, I am using tap-salesforce to pull data, first as a full load and then incrementally. There are situations when I need to pull only the latest 10 records from Contact or Account. Is that possible with the MeltanoLabs variant?
    plugins:
      extractors:
      - name: tap-salesforce
        variant: meltanolabs
        pip_url: git+https://github.com/MeltanoLabs/tap-salesforce.git@v1.9.0
        config:
          client_id: xxxxx
          max_workers: 8
          api_type: REST
          instance_url: https://xx.xx.salesforce.com
        select:
        - Account.*
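    There's no "latest N" setting in this tap that I know of; the closest lever is the replication-key bookmark. One hedged approach is editing the stored state so only recent records qualify; the state ID and bookmark shape below are assumptions to verify:

    meltano state get dev:tap-salesforce-to-target-x
    meltano state set dev:tap-salesforce-to-target-x \
      '{"bookmarks": {"Account": {"replication_key": "SystemModstamp", "replication_key_value": "2025-05-20T00:00:00Z"}}}'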
  • Miroslav Nedyalkov

    06/19/2025, 8:23 AM
    Hey, everyone. I'm trying to move some data from MongoDB to Snowflake. We already have a lot of dbt transformations running on Snowflake, so I'm just re-implementing the EL part of our ELT. I've decided to use the MongoDB tap by MeltanoLabs and it successfully connected to my MongoDB database. I'm not sure, though, how I am supposed to:
    1. Define which collections (and ideally which of their fields) get moved to Snowflake (I figured I can use select in the meltano.yml, but I'm not sure if that's the right way to do it).
    2. Define what strategy to use to sync the data. I'd generally like to use LOG_BASED, but I need to do a complete sync first, and it seems this particular tap doesn't support FULL_SYNC. I also couldn't get LOG_BASED to work (I set it up with a metadata section in the meltano.yml file, where I set * with replication-key: replication_key and replication-method: LOG_BASED); I get "resume token string was not a valid hex string", which I assume is because I've never done a full sync, but when I try to do one, I get an error that this tap doesn't work with full sync.
    My use case sounds rather standard: I just need to move data from MongoDB (a limited set of collections) to Snowflake, tracking changes, but without applying any transformations or parsing. I need an initial full import and log-based imports after that, but unfortunately I couldn't get it to work…
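    On point 1, select is indeed the intended mechanism; a sketch with hypothetical collection names:

    plugins:
      extractors:
      - name: tap-mongodb
        select:
        - orders.*          # every field in the orders collection
        - customers.name    # or individual fields
        - customers.email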
  • Vinay Mahindrakar

    06/26/2025, 1:50 PM
    Hi Everyone, I am new to Meltano and currently trying to set up an incremental load for tap-postgres and target-postgres. However, every time I run it, it performs a full refresh instead of an incremental load. I am sharing the YAML file below. Could you please help me resolve this issue?
    version: 1
    default_environment: dev
    project_id: 2dcf5675-7556-4619-a126-8394adc7269d
    
    environments:
    - name: dev
    - name: staging
    - name: prod
    
    plugins:
      extractors:
      - name: tap-postgres
        variant: meltanolabs
        pip_url: meltanolabs-tap-postgres
        config:
          host: localhost
          port: 5432
          user: postgres
          password: '123123'
          database: postgres
          filter_schemas: [public]
          streams:
            work_orders:
              replication-method: INCREMENTAL
              replication_key: created_at
              key-properties: ["id"]