# singer-taps

• Chinmay (04/06/2025, 2:19 PM)

Hi team, we're currently using `tap-quickbooks` and encountering an issue related to the `start_date` format. In our `.yml` config file, we've defined the `start_date` as:

```yaml
start_date: '2025-01-01T00:00:00.000Z'
```
    However, we’re getting the following error when running the tap:
```
CRITICAL time data '2025-01-01T00:00:00+00:00' does not match format '%Y-%m-%dT%H:%M:%SZ'
    raise ValueError("time data %r does not match format %r" %
ValueError: time data '2025-01-01T00:00:00+00:00' does not match format '%Y-%m-%dT%H:%M:%SZ'
```
    Could you please help us resolve this or point us in the right direction? Thanks!
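For what it's worth, the format string in the traceback explains the mismatch. A minimal reproduction (illustrative, not the tap's actual code):

```python
from datetime import datetime

# '%Y-%m-%dT%H:%M:%SZ' expects a literal 'Z' and no fractional seconds,
# so a value rendered with a '+00:00' offset can never match it.
fmt = "%Y-%m-%dT%H:%M:%SZ"
datetime.strptime("2025-01-01T00:00:00Z", fmt)       # parses fine
datetime.strptime("2025-01-01T00:00:00+00:00", fmt)  # raises ValueError
```

Since the failing value (`...+00:00`) differs from the configured one (`...000Z`), the tap appears to re-serialize the date before parsing; still, a `start_date` of `'2025-01-01T00:00:00Z'` (no milliseconds, literal `Z`) is worth trying, and if that doesn't help the fix likely belongs in the tap itself.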

• Lior Naim Alon (04/20/2025, 12:10 PM)

I'm using tap-hubspot. My goal is to extract all contacts data, run some transformations in my DWH, then write the enriched data back to HubSpot. For some unknown reason, ever since many records were added in HubSpot (it's a sandbox environment), the tap has been looping endlessly extracting data, with many duplicates written over and over again to my target. The records don't seem to differ, so I suspect the tap is not using a proper paging mechanism to progress through the extraction. Here's my `meltano.yml`:

```yaml
    version: 1
    default_environment: dev
    environments:
    - name: dev
    - name: staging
    - name: prod
state_backend:
  uri: s3://dwh/meltano-states/
  s3:
    aws_access_key_id: ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
    plugins:
      extractors:
      - name: tap-hubspot
        # python: python
        variant: meltanolabs
    pip_url: git+https://github.com/MeltanoLabs/tap-hubspot.git@v0.6.3
        config:
          start_date: '2020-01-01'
        select:
        - contacts.*
      loaders:
      - name: target-s3
        variant: crowemi
    pip_url: git+https://github.com/crowemi/target-s3.git
        config:
          append_date_to_filename: true
          append_date_to_filename_grain: microsecond
          partition_name_enabled: true
      - name: target-s3--hubspot
        inherit_from: target-s3
        config:
          format:
            format_type: parquet
          prefix: dwh/hubspot
      flattening_enabled: false
```
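As a point of reference, here is how paging is supposed to terminate in singer-sdk REST taps (an illustrative sketch, not the actual tap-hubspot code):

```python
from singer_sdk.pagination import JSONPathPaginator

# HubSpot v3 list endpoints return {"paging": {"next": {"after": "<cursor>"}}}.
# A paginator like this advances on that cursor and finishes when the path
# yields nothing; if the cursor is never read, or never sent back as the
# 'after' query parameter, every request returns the same first page.
paginator = JSONPathPaginator("$.paging.next.after")
```

Checking whether `after` actually changes between the tap's requests should tell you if this is a paging bug or genuinely duplicated source data.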

• Samuel Nogueira Farrus (04/30/2025, 11:34 AM)

Greetings! I tried to use the extractor `tap-db2`, but it returned a pip/wheel error when attempting to install:

```
    (venv) PS C:\meltano\db2> meltano add extractor tap-db2
Cloning https://github.com/mjsqu/tap-db2.git to c:\temp\<user>\pip-req-build-km0p0dgy
  Running command git clone --filter=blob:none --quiet https://github.com/mjsqu/tap-db2.git 'C:\TEMP\<user>\pip-req-build-km0p0dgy'
  Resolved https://github.com/mjsqu/tap-db2.git to commit ea2cd49b9fcb4dd599e66249445d8c0d8b06d6d4
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
      Preparing metadata (pyproject.toml): started
      Preparing metadata (pyproject.toml): finished with status 'done'
    Collecting attrs==23.1.0 (from tap-db2==1.0.6)
      Using cached attrs-23.1.0-py3-none-any.whl.metadata (11 kB)
    Collecting ibm-db-sa==0.4.0 (from tap-db2==1.0.6)
      Using cached ibm_db_sa-0.4.0-py3-none-any.whl.metadata (5.3 kB)
    Collecting ibm-db==3.2.0 (from tap-db2==1.0.6)
      Using cached ibm_db-3.2.0.tar.gz (206 kB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
      Preparing metadata (pyproject.toml): started
      Preparing metadata (pyproject.toml): finished with status 'done'
    Collecting jinja2==3.1.2 (from tap-db2==1.0.6)
      Using cached Jinja2-3.1.2-py3-none-any.whl.metadata (3.5 kB)
    Collecting markupsafe<2.2.0 (from tap-db2==1.0.6)
      Using cached markupsafe-2.1.5-py3-none-any.whl
    Collecting pendulum==2.1.2 (from tap-db2==1.0.6)
      Using cached pendulum-2.1.2.tar.gz (81 kB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
      Preparing metadata (pyproject.toml): started
      Preparing metadata (pyproject.toml): finished with status 'done'
    Collecting pyodbc==5.0.1 (from tap-db2==1.0.6)
      Using cached pyodbc-5.0.1.tar.gz (115 kB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
      Preparing metadata (pyproject.toml): started
      Preparing metadata (pyproject.toml): finished with status 'done'
    Collecting pytz>=2018.1 (from tap-db2==1.0.6)
      Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
    Collecting singer-python>=5.12.0 (from tap-db2==1.0.6)
      Using cached singer_python-6.1.1-py3-none-any.whl
    Collecting sqlalchemy<3.0.0 (from tap-db2==1.0.6)
      Using cached sqlalchemy-2.0.40-cp313-cp313-win_amd64.whl.metadata (9.9 kB)
    Collecting python-dateutil<3.0,>=2.6 (from pendulum==2.1.2->tap-db2==1.0.6)
      Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
    Collecting pytzdata>=2020.1 (from pendulum==2.1.2->tap-db2==1.0.6)
      Using cached pytzdata-2020.1-py2.py3-none-any.whl.metadata (2.3 kB)
    Collecting six>=1.5 (from python-dateutil<3.0,>=2.6->pendulum==2.1.2->tap-db2==1.0.6)
      Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
    Collecting greenlet>=1 (from sqlalchemy<3.0.0->tap-db2==1.0.6)
      Using cached greenlet-3.2.1-cp313-cp313-win_amd64.whl.metadata (4.2 kB)
    Collecting typing-extensions>=4.6.0 (from sqlalchemy<3.0.0->tap-db2==1.0.6)
      Using cached typing_extensions-4.13.2-py3-none-any.whl.metadata (3.0 kB)
    Collecting jsonschema==2.*,>=2.6.0 (from singer-python>=5.12.0->tap-db2==1.0.6)
      Using cached jsonschema-2.6.0-py2.py3-none-any.whl.metadata (4.6 kB)
    Collecting simplejson==3.*,>=3.13.2 (from singer-python>=5.12.0->tap-db2==1.0.6)
      Using cached simplejson-3.20.1-cp313-cp313-win_amd64.whl.metadata (3.4 kB)
    Collecting backoff==2.*,>=2.2.1 (from singer-python>=5.12.0->tap-db2==1.0.6)
      Using cached backoff-2.2.1-py3-none-any.whl.metadata (14 kB)
    Collecting ciso8601==2.*,>=2.3.1 (from singer-python>=5.12.0->tap-db2==1.0.6)
      Using cached ciso8601-2.3.2.tar.gz (28 kB)
      Installing build dependencies: started
      Installing build dependencies: finished with status 'done'
      Getting requirements to build wheel: started
      Getting requirements to build wheel: finished with status 'done'
      Preparing metadata (pyproject.toml): started
      Preparing metadata (pyproject.toml): finished with status 'done'
    Using cached attrs-23.1.0-py3-none-any.whl (61 kB)
    Using cached ibm_db_sa-0.4.0-py3-none-any.whl (31 kB)
    Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
    Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
    Using cached sqlalchemy-2.0.40-cp313-cp313-win_amd64.whl (2.1 MB)
    Using cached greenlet-3.2.1-cp313-cp313-win_amd64.whl (295 kB)
    Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB)
    Using cached pytzdata-2020.1-py2.py3-none-any.whl (489 kB)
    Using cached backoff-2.2.1-py3-none-any.whl (15 kB)
    Using cached jsonschema-2.6.0-py2.py3-none-any.whl (39 kB)
    Using cached simplejson-3.20.1-cp313-cp313-win_amd64.whl (75 kB)
    Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
    Using cached typing_extensions-4.13.2-py3-none-any.whl (45 kB)
    Building wheels for collected packages: tap-db2, ibm-db, pendulum, pyodbc, ciso8601
      Building wheel for tap-db2 (pyproject.toml): started
      Building wheel for tap-db2 (pyproject.toml): finished with status 'done'
      Created wheel for tap-db2: filename=tap_db2-1.0.6-py3-none-any.whl size=29948 sha256=ab8ca931a326cb0937229d903708ff208bdded393e366c8e6eb2d1833290179e
      Stored in directory: C:\TEMP\<user>\pip-ephem-wheel-cache-w9wzy1j_\wheels\67\43\15\a99e5c72b4b3dcd727d50dfa99a0647c44e30ae3cc0f543b84
      Building wheel for ibm-db (pyproject.toml): started
      error: subprocess-exited-with-error
    
      Building wheel for ibm-db (pyproject.toml) did not run successfully.
      exit code: 1
    
      See above for output.
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      Building wheel for ibm-db (pyproject.toml): finished with status 'error'
      ERROR: Failed building wheel for ibm-db
      Building wheel for pendulum (pyproject.toml): started
      error: subprocess-exited-with-error
    
      Building wheel for pendulum (pyproject.toml) did not run successfully.
      exit code: 1
    
      See above for output.
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      Building wheel for pendulum (pyproject.toml): finished with status 'error'
      ERROR: Failed building wheel for pendulum
      Building wheel for pyodbc (pyproject.toml): started
      error: subprocess-exited-with-error
    
      Building wheel for pyodbc (pyproject.toml) did not run successfully.
      exit code: 1
    
      See above for output.
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      Building wheel for pyodbc (pyproject.toml): finished with status 'error'
      ERROR: Failed building wheel for pyodbc
      Building wheel for ciso8601 (pyproject.toml): started
      error: subprocess-exited-with-error
    
      Building wheel for ciso8601 (pyproject.toml) did not run successfully.
      exit code: 1
    
      See above for output.
    
      note: This error originates from a subprocess, and is likely not a problem with pip.
      Building wheel for ciso8601 (pyproject.toml): finished with status 'error'
      ERROR: Failed building wheel for ciso8601
    Successfully built tap-db2
    Failed to build ibm-db pendulum pyodbc ciso8601
ERROR: Failed to build installable wheels for some pyproject.toml based projects (ibm-db, pendulum, pyodbc, ciso8601)
```
    Can someone help me?
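Not an authoritative diagnosis, but all four failures are C-extension builds: the `cp313` wheel tags above show Python 3.13, and pip found no prebuilt Windows wheels for ibm-db 3.2.0, pendulum 2.1.2, pyodbc 5.0.1, or ciso8601 on that interpreter, so it fell back to compiling them, which requires the Microsoft C++ Build Tools. Two common ways out: install the build tools, or point the plugin at an older Python where prebuilt wheels are more likely to exist. A sketch of the latter in `meltano.yml`, assuming a `python3.11` interpreter is available on PATH:

```yaml
plugins:
  extractors:
  - name: tap-db2
    variant: mjsqu
    pip_url: git+https://github.com/mjsqu/tap-db2.git
    python: python3.11  # build this plugin's venv with an older interpreter
```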

• Reuben (Matatika) (05/22/2025, 3:34 PM)

How can I set the log level of an SDK tap? I've looked through the code, and it looks like `TAP_<NAME>_LOG_LEVEL=ERROR` should work, but it doesn't...

• Siba Prasad Nayak (05/23/2025, 10:38 AM)

Hi team, I am using the tap-sftp from Singer (https://github.com/singer-io/tap-sftp) and getting an issue saying:

```
paramiko.ssh_exception.SSHException: Incompatible ssh peer (no acceptable host key)
```

For this, I made a change in client.py:

```python
self.transport._preferred_keys = ('ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-ed25519', 'ssh-dss')
```
In context:

```python
    def __try_connect(self):
            if not self.__active_connection:
                try:
                    self.transport = paramiko.Transport((self.host, self.port))
                    self.transport.use_compression(True)
                    self.transport._preferred_keys = ('ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-ed25519', 'ssh-dss')
                    self.transport.connect(username = self.username, pkey = self.key)
                    self.sftp = paramiko.SFTPClient.from_transport(self.transport)
                except (AuthenticationException, SSHException) as ex:
                    self.transport.close()
                    self.transport = paramiko.Transport((self.host, self.port))
                    self.transport.use_compression(True)
                    self.transport._preferred_keys = ('ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-ed25519', 'ssh-dss')
                    self.transport.connect(username= self.username, pkey = None)
                    self.sftp = paramiko.SFTPClient.from_transport(self.transport)
                self.__active_connection = True
                # get 'socket' to set the timeout
                socket = self.sftp.get_channel()
                # set request timeout
            socket.settimeout(self.request_timeout)
```
Even after making this change, it's not resolving the issue.
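One observation, in case it helps: `_preferred_keys` is a private paramiko attribute; the public way to control which host key algorithms the client proposes is the transport's `SecurityOptions`, set before `connect()`. A sketch (host and key list are illustrative; the real fix is to include whatever key type the server actually offers, which `ssh -vv <host>` will show):

```python
import paramiko

transport = paramiko.Transport(("sftp.example.com", 22))  # hypothetical host
opts = transport.get_security_options()
print(opts.key_types)  # host key algorithms paramiko currently proposes
opts.key_types = ("ssh-ed25519", "rsa-sha2-512", "rsa-sha2-256",
                  "ecdsa-sha2-nistp256", "ssh-rsa")
transport.connect(username="user", pkey=None)
```

Also worth checking the installed paramiko version: older releases don't know the `rsa-sha2-*` algorithms that newer OpenSSH servers offer once plain `ssh-rsa` is disabled.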

• Rafał (06/02/2025, 1:19 PM)

I'm writing a tap for reading OpenOffice Calc (ODS) files from a versioned S3 bucket. I know there's tap-spreadsheets-anywhere, but it supports neither ODS nor versioned buckets, it's architecturally incompatible with pyexcel, and it's unmaintained. The ODS files have multiple sheets but the same schema across files, so all files produce the same streams and all streams span multiple files. Naturally the tap wouldn't produce records stream by stream, but file by file, with streams intertwined. That's not how the singer-sdk wants me to do things. Is this mode supported by the SDK (I'm assuming Meltano will be fine with it)? Is there an example tap I could look at?
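For reference, the Singer message stream itself is fine with this: targets must handle interleaved streams as long as each stream's `SCHEMA` message precedes its records. A hand-written illustration (stream names made up):

```
{"type": "SCHEMA", "stream": "orders", "schema": {"properties": {"id": {"type": "integer"}}}, "key_properties": ["id"]}
{"type": "SCHEMA", "stream": "items", "schema": {"properties": {"id": {"type": "integer"}}}, "key_properties": ["id"]}
{"type": "RECORD", "stream": "orders", "record": {"id": 1}}
{"type": "RECORD", "stream": "items", "record": {"id": 10}}
{"type": "RECORD", "stream": "orders", "record": {"id": 2}}
```

So the open question is really about the SDK's stream-at-a-time `sync_all()` orchestration, not about what the spec or targets allow.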

• Rafał (06/05/2025, 10:36 AM)

I find the role of `discover_streams` unclear, given that it's called even when --catalog is passed. I'd expect it to be called only without a catalog, or with --discover, when discovery is actually needed.

• Siba Prasad Nayak (06/06/2025, 5:15 AM)

Team, do we have a tap-onedrive on the Singer side? Has anyone ever tried it?

• Ayush (06/06/2025, 5:44 AM)

Hello, I have a question and am not sure where to ask it:
1. I'm currently trying to extract data from Salesforce (tap-salesforce) and load it into a JSONL file (target-jsonl).
2. Then I want to extract from that file (tap-singer-jsonl) and load it into MongoDB (target-mongodb).
However, tap-singer-jsonl does not successfully extract the data from the JSONL file. I also tried using target-singer-jsonl instead of target-jsonl (in step 1) to test the Singer config, but it fails with "run invocation could not be completed as block failed: Loader failed". It seems like a Singer issue, since I was able to do step 1 with target-jsonl.
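One possible explanation (an assumption based on how these plugins are usually paired, so verify against your files): target-jsonl writes bare record objects, while tap-singer-jsonl expects full Singer messages. Hypothetical contents of each:

```
# target-jsonl output: one bare record per line
{"Id": "001", "Name": "Acme"}

# what tap-singer-jsonl expects: Singer SCHEMA/RECORD messages per line
{"type": "SCHEMA", "stream": "Account", "schema": {"properties": {"Id": {"type": "string"}}}, "key_properties": ["Id"]}
{"type": "RECORD", "stream": "Account", "record": {"Id": "001", "Name": "Acme"}}
```

If that's the case, the target-singer-jsonl + tap-singer-jsonl pairing is the one that should round-trip, and the loader failure there is the thing to debug (its full log output should say why it failed).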

• Ayush (06/06/2025, 5:45 AM)

    Does anyone know what’s going on?

• Chinmay (06/06/2025, 8:13 AM)

Hello team, we are using tap-quickbooks (https://github.com/hotgluexyz/tap-quickbooks) to fetch QBO data, but the repo does not provide the RefundReceipt, Check, and CreditCardCredit record types. How can we get these records? Can you help us add these record types?

• azhar (06/10/2025, 9:06 AM)

Hello team, we are using the LinkedIn Singer tap (https://github.com/MeltanoLabs/tap-linkedin-ads). Since June 1 we have been getting 426 client errors, as LinkedIn seems to have deprecated its old API endpoints. We also noticed this tap uses LinkedIn-Version 2024 in the headers.

```
2025-06-10T02:55:07.873581Z [info     ] 2025-06-10 02:55:07,872 | ERROR    | tap-linkedin-ads.accounts | An unhandled error occurred while syncing 'accounts' cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads
    2025-06-10T02:55:07.875835Z [info     ]     raise FatalAPIError(msg)   cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads                                                                                          
    2025-06-10T02:55:07.875945Z [info     ] singer_sdk.exceptions.FatalAPIError: 426 Client Error: Upgrade Required for path: /rest/adAccounts cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads                      
    2025-06-10T02:55:07.880461Z [info     ]     raise FatalAPIError(msg)   cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads                                                                                          
    2025-06-10T02:55:07.880569Z [info     ] singer_sdk.exceptions.FatalAPIError: 426 Client Error: Upgrade Required for path: /rest/adAccounts cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads                      
    2025-06-10T02:55:16.772779Z [error    ] Extractor failed                                                                                                                                                                                                                                                                                                                           
2025-06-10T02:55:16.772957Z [error    ] Block run completed.           block_type=ExtractLoadBlocks err=RunnerError('Extractor failed') exit_codes={: 1} set_number=0 success=False
```
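If the 426 is indeed the `LinkedIn-Version` header being too old, and assuming the tap's streams follow the usual singer-sdk `RESTStream` convention, the header could be overridden in a fork roughly like this (the version id is illustrative; check LinkedIn's versioning docs for a currently supported value):

```python
from singer_sdk.streams import RESTStream

class AdAccountsStream(RESTStream):
    name = "accounts"
    path = "/adAccounts"
    url_base = "https://api.linkedin.com/rest"
    schema = {"properties": {}}  # trimmed for the sketch

    @property
    def http_headers(self) -> dict:
        headers = super().http_headers
        headers["LinkedIn-Version"] = "202506"  # hypothetical supported version
        return headers
```

Upstream, this sounds like exactly the kind of thing to file as an issue against MeltanoLabs/tap-linkedin-ads.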

• hammad_khan (06/23/2025, 11:59 AM)

Hello team, is anyone using the Snowflake tap (https://github.com/MeltanoLabs/tap-snowflake)? I noticed it's not maintaining any bookmarks in state.json for the tables. I also can't seem to find a setting for start_date. For instance, here is state.json after a first successful pull:

```json
    {
      "completed": {
        "singer_state": {
          "bookmarks": {
            "dw_hs-dim_accounts": {},
            "dw_hs-dim_activities": {
              "starting_replication_value": null
            }
          }
        }
      },
      "partial": {}
}
```
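From what I can tell, bookmarks only appear once a replication key is configured per table, and the starting point comes from state rather than a `start_date`-style setting. A sketch using your stream names (the `updated_at` column is an assumption; use whatever monotonically increasing column the table has):

```yaml
plugins:
  extractors:
  - name: tap-snowflake
    metadata:
      dw_hs-dim_activities:
        replication-method: INCREMENTAL
        replication-key: updated_at
```

With that in place, `starting_replication_value` in state should start carrying the max seen value instead of null.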

• Nathan Sooter (06/27/2025, 6:16 PM)

I'm using `tap-salesforce` and am looking for the config to pass WHERE clauses into the SOQL that Meltano generates. I need to filter on particular values in a particular column of the Account object to make sure specific records aren't extracted. ChatGPT is leading me astray with configs that don't actually exist... does one exist?
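I'm not aware of a WHERE-clause setting in this tap either (it may simply not exist). One workaround is to drop records in the pipeline with the `meltano-map-transformer` mapper's `__filter__` expression; note this filters after extraction, so it doesn't reduce what's pulled from Salesforce. Column and value below are hypothetical:

```yaml
plugins:
  mappers:
  - name: meltano-map-transformer
    variant: meltano
    pip_url: meltano-map-transformer
    mappings:
    - name: filter-accounts
      config:
        stream_maps:
          Account:
            __filter__: record['Type'] != 'Excluded Type'
```

Then run it between tap and target: `meltano run tap-salesforce filter-accounts target-x`.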

• Florian Bergmann (07/03/2025, 9:23 AM)

Hi all, I got errors during my last run extracting data with tap-oracle (variant s7clarke10, replication method log_based) and I want to debug them. For that purpose, I have a rather basic question: how do I change the log level for tap-oracle? I tried running Meltano with --log-level=debug or --log-level=info, but that has no effect on the output from tap-oracle. I figured out that tap-oracle uses singer.get_logger, so I suppose I have to adjust its settings. Any hints on how to do so? I'd like to get those info messages either printed to the terminal or into a log file, e.g. `LOGGER.info("Running in thick mode")`.
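If I remember singer-python's behavior correctly (worth verifying against the version your tap pins), `singer.get_logger()` configures logging with `logging.config.fileConfig`, and newer releases let you substitute your own config file via the `LOGGING_CONF_FILE` environment variable. A sketch of wiring that up per plugin, with a hypothetical file path:

```yaml
plugins:
  extractors:
  - name: tap-oracle
    env:
      # a standard logging.config.fileConfig-format INI file you provide
      LOGGING_CONF_FILE: ${MELTANO_PROJECT_ROOT}/tap-oracle-logging.conf
```

In that file you can set the root logger to INFO or DEBUG and attach both a console handler and a file handler.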

• Emwinghare Kelvin (07/23/2025, 6:57 PM)

Hello everyone, I'm experiencing an issue with `tap-rest-api-msdk` when making POST requests. Has anyone resolved this, or can you suggest an alternative tap I could use?

• Chandana S (07/24/2025, 6:06 AM)

Hi everyone, I have a use case where I need to extract data from MySQL and load it into BigQuery, and I want to use Meltano's tap-mysql for this. One doubt I have: the incremental load should happen through the updated_at column, and per standard ETL practice it's good to check the destination before proceeding with the load. Is there a way I can pass a WHERE condition, or even a filter condition, to the tap before I run the job?

• Evan Guyot (07/25/2025, 10:18 AM)

Hey, I hope I'm reaching out in the right channel. I've created a custom catalog for a tap (based on the existing one) to add a new field. However, in some cases this field is not returned at all by the REST API — not even as `null`, but completely missing — which leads to a Singer exception. I was wondering if there's a catalog property designed to handle this kind of situation? I've already tried defining the field as nullable and using `additionalProperties`, but I'm still encountering the Singer error when the field is absent from the object. Here is the Singer error:

```
2025-07-25T10:06:57.048305Z [error  ] Loading failed        code=1 message="singer_sdk.exceptions.InvalidRecord: Record Message Validation Error: {'sub_prop_1': 'abc', 'sub_prop_2': 'def'} is not of type 'string'"
```

Here is what I have tried in the catalog:

```json
    {
      "streams": [
        {
          "tap_stream_id": "obj",
          ...,
          "schema": {
            "properties": {
              "prop_1": {
                "type": ["array", "null"],
                "items": {
                  "type": "object",
                  "properties": {
                    "sub_prop_1": { "type": ["string", "null"] },
                    "sub_prop_2": { "type": ["string", "null"] },
                    "optional_sub_prop_3": { "type": ["string", "null"] }
                  },
                  "additionalProperties": true
                }
              }
            }
          }
        }
      ]
}
```
    Thanks in advance to anyone who takes the time to help ☺️
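One reading of that error (a hypothesis from the message alone): JSON Schema doesn't validate absent properties unless they're listed in `required`, so a truly missing field should pass, and the failure text instead shows an object value being checked against `{'type': 'string'}`. That suggests the schema actually in effect declares some property as a bare string, i.e. either the custom catalog isn't being applied, or a different property than the one shown is the culprit. If so, the fix is typing it as an object (hypothetical name):

```json
"prop_with_subprops": {
  "type": ["object", "null"],
  "properties": {
    "sub_prop_1": { "type": ["string", "null"] },
    "sub_prop_2": { "type": ["string", "null"] }
  },
  "additionalProperties": true
}
```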

• Reuben (Matatika) (08/01/2025, 2:10 PM)

What is the point of `select_filter`? Isn't `select` a kind of filtering mechanism by definition? Why would I need a filter for a filter? 😅
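For anyone finding this later, the way I read the distinction (names below are illustrative): `select` defines the full stream/attribute selection once, while `select_filter` lets an environment or job run a subset of those already-selected streams without restating the attribute-level rules:

```yaml
plugins:
  extractors:
  - name: tap-gitlab
    select:
    - commits.*
    - issues.*

environments:
- name: commits-only
  config:
    plugins:
      extractors:
      - name: tap-gitlab
        select_filter:
        - commits   # narrows which selected streams run; no attribute rules
```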

• Sac (08/05/2025, 1:03 PM)

Hi everyone 👋 I'm working with the community-managed `tap-quickbooks` and noticed that some secrets (like API keys or tokens) seem to be logged in plain text during execution. From what I understand, there's a `_make_request` method in the tap that logs the URL and the full body of the POST request used to request a token — which includes API secrets.

```python
[...]

    def _make_request(self, http_method, url, headers=None, body=None, stream=False, params=None, sink_name=None):
        if http_method == "GET":
            LOGGER.info("Making %s request to %s with params: %s", http_method, url, params)
            resp = self.session.get(url, headers=headers, stream=stream, params=params)
        elif http_method == "POST":
            LOGGER.info("Making %s request to %s with body %s", http_method, url, body)
            resp = self.session.post(url, headers=headers, data=body)
        else:
            raise TapQuickbooksException("Unsupported HTTP method")

[...]
```
Is there a way in Meltano to prevent secrets from being written to log files when the logging is done by the tap itself? Or is this considered a tap-specific issue that should be addressed on GitHub? 🤷‍♂️ Thanks in advance for any insights!
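As far as I can tell this has to be fixed where the logging happens, i.e. in the tap (so a GitHub issue seems right). If you end up carrying a fork, a small stdlib `logging.Filter` can mask known secret values before any handler writes them; this is a generic sketch, not an existing tap or Meltano feature:

```python
import logging
import os
import re

class RedactSecrets(logging.Filter):
    """Mask known secret values in log records before they are emitted."""

    def __init__(self, secrets):
        super().__init__()
        values = [re.escape(s) for s in secrets if s]
        # If no secrets are configured, use a pattern that matches nothing.
        self._pattern = re.compile("|".join(values) or r"(?!x)x")

    def filter(self, record):
        # Render the final message (applying %-args), then mask the secrets.
        record.msg = self._pattern.sub("****", record.getMessage())
        record.args = ()
        return True

# Attach to the logger the tap writes to (logger and env var names assumed):
logging.getLogger("tap-quickbooks").addFilter(
    RedactSecrets([os.environ.get("TAP_QUICKBOOKS_CLIENT_SECRET", "")])
)
```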

• Sac (08/08/2025, 7:25 PM)

Hello everyone, I need some advice on the QuickBooks tap. QuickBooks uses OAuth2, where the refresh token gets updated roughly every day. Although this connector has a mechanism to capture the new refresh token when it's updated, since there's no write-back capability for tap and target settings (as far as I understand; see issue #2660), the new refresh token value just gets lost. I wanted to ask: if someone has experience with this tap, how do you handle it? The only workaround I can think of is an additional helper script that runs right after the pipeline and fetches the new token from the logs, where it's stored as plain text (not ideal, but in this case useful). I'm running Meltano in a container, so what I'm trying now is to:
1. Pack the additional Python script in the same container.
2. Mount the `.env` file with the token.
3. Let the pipeline run, capturing the new token if there is one, and saving it to the log.
4. Have the Python script fetch it as soon as the pipeline is done.
5. Update the value in the `.env` file so the next sync uses the new valid token.
I don't have a better idea at the moment, apart from forking the connector and modifying the logic there, which I'd prefer to avoid. Has anyone faced a similar scenario? What do you think of this solution? Any advice or suggestions? Many thanks in advance!
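For what it's worth, steps 4-5 can be a few lines of Python. Everything below is hypothetical (the log path, the shape of the logged token line, and the `.env` variable name all depend on your setup), so treat it as a starting point:

```python
import re
from pathlib import Path

LOG_FILE = Path("logs/meltano.log")        # hypothetical log location
ENV_FILE = Path(".env")
ENV_KEY = "TAP_QUICKBOOKS_REFRESH_TOKEN"   # hypothetical setting name

# Assume the tap logs the OAuth response body, so the token shows up as
# "refresh_token": "<value>"; take the last occurrence in the log.
tokens = re.findall(r'"refresh_token"\s*:\s*"([^"]+)"', LOG_FILE.read_text())
if tokens:
    lines = [
        f"{ENV_KEY}={tokens[-1]}" if line.startswith(f"{ENV_KEY}=") else line
        for line in ENV_FILE.read_text().splitlines()
    ]
    ENV_FILE.write_text("\n".join(lines) + "\n")
```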

• steven_wang (08/26/2025, 9:21 PM)

I'm looking to sync data from MongoDB and noticed there are several MongoDB tap variants on Meltano Hub: https://hub.meltano.com/extractors/tap-mongodb/ Has anyone tried these, and do you have opinions on which one to use? I noticed the default one hasn't been updated in two years and incremental replication is not working.

• Jazmin Velazquez (09/09/2025, 7:45 PM)

I want to use `tap-google-sheets` to extract data from multiple Google Sheets (with different sheet IDs). How do I configure Meltano for this?
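One common pattern is one extractor instance per spreadsheet via Meltano's `inherit_from` (the same mechanism as `target-s3--hubspot` earlier in this channel). The setting that holds the sheet ID varies by tap variant, so the `sheet_id` key below is an assumption to check against yours:

```yaml
plugins:
  extractors:
  - name: tap-google-sheets
    # shared credentials / OAuth config here
  - name: tap-google-sheets--sales
    inherit_from: tap-google-sheets
    config:
      sheet_id: 1AbC...   # hypothetical setting name and ID
  - name: tap-google-sheets--finance
    inherit_from: tap-google-sheets
    config:
      sheet_id: 2DeF...
```

Each inherited plugin then runs, and keeps state, independently: `meltano run tap-google-sheets--sales target-x`.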

• Luca Capra (09/10/2025, 10:25 AM)

Hello, I am looking for guidance on incremental handling in taps. I have developed some code and would like to get incremental updates right. Right now I am using a plain fs directory over fsspec and planning to add S3-compatible storage. I have files like 2025-01.csv, 2025-02.csv, with monthly/daily additions. I followed https://sdk.meltano.com/en/latest/incremental_replication.html and https://sdk.meltano.com/en/latest/implementation/state.html so far. What I have been working on is here: https://github.com/celine-eu/tap-spreadsheets/blob/main/tap_spreadsheets/stream.py#L33-L41 Basically I am using a custom `_updated_at` field to track row-level progress, and tracking the reference file's mtime. So I suspect I have reinvented the wheel :) My question: what is already managed by the SDK, and what should I do in my own code? Thank you
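For comparison, here is the minimal shape the SDK expects (a simplified sketch, not your tap's actual code): once `replication_key` is set and records carry that field, the SDK persists the bookmark and hands it back on the next run, so the stream's only job is to skip old rows:

```python
from singer_sdk import typing as th
from singer_sdk.streams import Stream

class RowsStream(Stream):
    name = "rows"
    replication_key = "_updated_at"  # SDK tracks the max seen value in state

    schema = th.PropertiesList(
        th.Property("_updated_at", th.DateTimeType),
        th.Property("value", th.StringType),
    ).to_dict()

    def get_records(self, context):
        start = self.get_starting_timestamp(context)  # bookmark from state
        for row in self._read_files():  # your fsspec logic (hypothetical)
            # assumes rows carry timezone-aware datetimes in _updated_at
            if start is None or row["_updated_at"] > start:
                yield row
```

The file-mtime shortcut (skipping whole files older than the bookmark) is an optimization the SDK knows nothing about, so that part legitimately lives in your code.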

• Tanner Wilcox (09/25/2025, 8:00 PM)

Is there any way to disable SSL verification for tap-rest-api-msdk?

• steven_wang (09/26/2025, 6:39 PM)

In the Salesforce tap, does anyone know how to sync objects other than Account, Opportunity, OpportunityHistory, Lead, User, and Contact? I'm trying to sync the Task object in our Salesforce account but can't seem to select it. Here is my YAML config:

```yaml
- name: tap-salesforce
  variant: meltanolabs
  config:
    select_fields_by_default: true
    login_domain: ${TAP_SALESFORCE_LOGIN_DOMAIN}
    streams_to_discover: ["Task"]
  select_filter:
  - 'Task.*'
```
    https://github.com/MeltanoLabs/tap-salesforce/issues/89

• Kevin Phan (10/10/2025, 8:02 PM)

Hey folks, I'm using the REST API tap to retrieve info from a Chainalysis endpoint, but I keep getting errors about validating 'type' in the schema. An example error:

```
    2025-10-10T19:56:07.945399Z [info     ] Failed validating 'type' in schema['properties']['service']: cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
    2025-10-10T19:56:07.949526Z [info     ]     {'type': 'string'}         cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
    2025-10-10T19:56:07.952388Z [info     ]                                cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
    2025-10-10T19:56:07.955352Z [info     ] On instance['service']:        cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
    2025-10-10T19:56:07.957711Z [info     ]     None
```

The schema expects a string, but the value can also be null. Is there a way to do schema overrides for this tap? I did not see such an option there. I can probably do it with mappers, but I'd rather not if there is a way inside the tap configs.
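One thing worth trying (the stream name below is a guess from the job name in the log): Meltano's `schema` extra on an extractor overrides the tap's declared JSON schema per stream, which would widen `service` to allow nulls without a mapper:

```yaml
plugins:
  extractors:
  - name: tap-chainalysis-alerts
    schema:
      alerts:                      # hypothetical stream name
        service:
          type: ["string", "null"]
```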

• Lior Naim Alon (10/16/2025, 1:27 PM)

Hello, I'm using tap-hubspot --variant "airbyte" to extract data from several HubSpot streams. The tap takes about 45 minutes to extract a very small amount of data (~80MB) to S3, and the log is flooded with errors along the lines of:

```
    2025-10-16T13:05:43.487376Z [info     ] {'level': 'WARN', 'message': "Couldn't parse date/datetime string in hs_lifecyclestage_lead_date, trying to parse timestamp... Field value: 1709470649329. Ex: Unable to parse string [1709470649329]"} cmd_type=elb consumer=False job_name=staging:tap-hubspot-to-target-s3--raw-crm:eu-west-1-20251016 name=tap-hubspot producer=True run_id=0199ed1f-676c-7a87-ba25-9ddc70d8434c stdio=stderr string_id=tap-hubspot
```

Since the amount of data is very low and other ETLs run considerably faster, I suspect the slowdown comes from the sheer volume of parsing attempts and error logging; it looks like there is a log entry for every row in the source data. I tried (to no avail) filtering out the specific fields using selection / custom mappers, but the errors persist. It is crucial for me to use the airbyte variant, as it is the only one that supports custom HubSpot objects out of the box. I'm looking for ways to tackle this; the goal is to make the ETL run in a few minutes instead of 45.

• Otto Enholm (10/23/2025, 8:19 AM)

Hello! I'm new to Meltano and just learning how to use it. It seems my team set up a tap for Adyen data that has recently started failing, and the repo appears to have been removed. Do you have any suggestions for ways to work around this for tap-adyen? https://hub.meltano.com/extractors/tap-adyen/

• mark_estey (10/23/2025, 2:43 PM)

I'm trying to set up the MeltanoLabs `tap-snowflake` to read a single table, but I keep running into an issue where it tries to look at other schemas in the database that it does not have permission to. This is how my config looks (with values changed):

```yaml
    plugins:
      extractors:
      - name: tap-snowflake
        variant: meltanolabs
        config:
          account: ...
          role: ...
          user: ...
          warehouse: ...
          database: my_database
          schema: my_schema
          tables:
            - my_schema.my_table
        select:
      - my_schema-my_table.*
```
    And this is the error I keep getting:
```
    sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 002043 (02000): 01bfe764-3203-6517-0000-120d27b7901e: SQL compilation error:
    Object does not exist, or operation cannot be performed.
[SQL: SHOW /* sqlalchemy:get_schema_tables_info */ TABLES IN SCHEMA some_other_schema]
```
The database user does not have permission to `some_other_schema` and will not be granted it. I read that setting the `tables` config would limit the tap's discovery to only the listed objects; how do I get it to stop trying to inspect the other schemas in the database?