Chinmay
04/06/2025, 2:19 PM
We're using tap-quickbooks and encountering an issue related to the start_date format.
In our .yml config file, we’ve defined the start_date as:
start_date: '2025-01-01T00:00:00.000Z'
However, we’re getting the following error when running the tap:
CRITICAL time data '2025-01-01T00:00:00+00:00' does not match format '%Y-%m-%dT%H:%M:%SZ'
raise ValueError("time data %r does not match format %r" %
ValueError: time data '2025-01-01T00:00:00+00:00' does not match format '%Y-%m-%dT%H:%M:%SZ'
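For reference, the format string in the error implies whole seconds and a literal Z suffix, so presumably (our guess, assuming the tap parses the configured value verbatim) the config wants something like:

start_date: '2025-01-01T00:00:00Z'

i.e. no fractional seconds and no +00:00 offset.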
Could you please help us resolve this or point us in the right direction?
Thanks!

Lior Naim Alon
04/20/2025, 12:10 PM
meltano.yml:
version: 1
default_environment: dev
environments:
- name: dev
- name: staging
- name: prod
state_backend:
  uri: s3://dwh/meltano-states/
  s3:
    aws_access_key_id: ${AWS_ACCESS_KEY_ID}
    aws_secret_access_key: ${AWS_SECRET_ACCESS_KEY}
plugins:
  extractors:
  - name: tap-hubspot
    # python: python
    variant: meltanolabs
    pip_url: git+https://github.com/MeltanoLabs/tap-hubspot.git@v0.6.3
    config:
      start_date: '2020-01-01'
    select:
    - contacts.*
  loaders:
  - name: target-s3
    variant: crowemi
    pip_url: git+https://github.com/crowemi/target-s3.git
    config:
      append_date_to_filename: true
      append_date_to_filename_grain: microsecond
      partition_name_enabled: true
  - name: target-s3--hubspot
    inherit_from: target-s3
    config:
      format:
        format_type: parquet
      prefix: dwh/hubspot
      flattening_enabled: false

Samuel Nogueira Farrus
04/30/2025, 11:34 AM
I tried to add tap-db2, but it returned a pip/wheel error when attempting to install:
(venv) PS C:\meltano\db2> meltano add extractor tap-db2
Cloning https://github.com/mjsqu/tap-db2.git to c:\temp\<user>\pip-req-build-km0p0dgy
Running command git clone --filter=blob:none --quiet https://github.com/mjsqu/tap-db2.git 'C:\TEMP\<user>\pip-req-build-km0p0dgy'
Resolved https://github.com/mjsqu/tap-db2.git to commit ea2cd49b9fcb4dd599e66249445d8c0d8b06d6d4
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting attrs==23.1.0 (from tap-db2==1.0.6)
Using cached attrs-23.1.0-py3-none-any.whl.metadata (11 kB)
Collecting ibm-db-sa==0.4.0 (from tap-db2==1.0.6)
Using cached ibm_db_sa-0.4.0-py3-none-any.whl.metadata (5.3 kB)
Collecting ibm-db==3.2.0 (from tap-db2==1.0.6)
Using cached ibm_db-3.2.0.tar.gz (206 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting jinja2==3.1.2 (from tap-db2==1.0.6)
Using cached Jinja2-3.1.2-py3-none-any.whl.metadata (3.5 kB)
Collecting markupsafe<2.2.0 (from tap-db2==1.0.6)
Using cached markupsafe-2.1.5-py3-none-any.whl
Collecting pendulum==2.1.2 (from tap-db2==1.0.6)
Using cached pendulum-2.1.2.tar.gz (81 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting pyodbc==5.0.1 (from tap-db2==1.0.6)
Using cached pyodbc-5.0.1.tar.gz (115 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting pytz>=2018.1 (from tap-db2==1.0.6)
Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting singer-python>=5.12.0 (from tap-db2==1.0.6)
Using cached singer_python-6.1.1-py3-none-any.whl
Collecting sqlalchemy<3.0.0 (from tap-db2==1.0.6)
Using cached sqlalchemy-2.0.40-cp313-cp313-win_amd64.whl.metadata (9.9 kB)
Collecting python-dateutil<3.0,>=2.6 (from pendulum==2.1.2->tap-db2==1.0.6)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytzdata>=2020.1 (from pendulum==2.1.2->tap-db2==1.0.6)
Using cached pytzdata-2020.1-py2.py3-none-any.whl.metadata (2.3 kB)
Collecting six>=1.5 (from python-dateutil<3.0,>=2.6->pendulum==2.1.2->tap-db2==1.0.6)
Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Collecting greenlet>=1 (from sqlalchemy<3.0.0->tap-db2==1.0.6)
Using cached greenlet-3.2.1-cp313-cp313-win_amd64.whl.metadata (4.2 kB)
Collecting typing-extensions>=4.6.0 (from sqlalchemy<3.0.0->tap-db2==1.0.6)
Using cached typing_extensions-4.13.2-py3-none-any.whl.metadata (3.0 kB)
Collecting jsonschema==2.*,>=2.6.0 (from singer-python>=5.12.0->tap-db2==1.0.6)
Using cached jsonschema-2.6.0-py2.py3-none-any.whl.metadata (4.6 kB)
Collecting simplejson==3.*,>=3.13.2 (from singer-python>=5.12.0->tap-db2==1.0.6)
Using cached simplejson-3.20.1-cp313-cp313-win_amd64.whl.metadata (3.4 kB)
Collecting backoff==2.*,>=2.2.1 (from singer-python>=5.12.0->tap-db2==1.0.6)
Using cached backoff-2.2.1-py3-none-any.whl.metadata (14 kB)
Collecting ciso8601==2.*,>=2.3.1 (from singer-python>=5.12.0->tap-db2==1.0.6)
Using cached ciso8601-2.3.2.tar.gz (28 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Using cached attrs-23.1.0-py3-none-any.whl (61 kB)
Using cached ibm_db_sa-0.4.0-py3-none-any.whl (31 kB)
Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached sqlalchemy-2.0.40-cp313-cp313-win_amd64.whl (2.1 MB)
Using cached greenlet-3.2.1-cp313-cp313-win_amd64.whl (295 kB)
Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB)
Using cached pytzdata-2020.1-py2.py3-none-any.whl (489 kB)
Using cached backoff-2.2.1-py3-none-any.whl (15 kB)
Using cached jsonschema-2.6.0-py2.py3-none-any.whl (39 kB)
Using cached simplejson-3.20.1-cp313-cp313-win_amd64.whl (75 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached typing_extensions-4.13.2-py3-none-any.whl (45 kB)
Building wheels for collected packages: tap-db2, ibm-db, pendulum, pyodbc, ciso8601
Building wheel for tap-db2 (pyproject.toml): started
Building wheel for tap-db2 (pyproject.toml): finished with status 'done'
Created wheel for tap-db2: filename=tap_db2-1.0.6-py3-none-any.whl size=29948 sha256=ab8ca931a326cb0937229d903708ff208bdded393e366c8e6eb2d1833290179e
Stored in directory: C:\TEMP\<user>\pip-ephem-wheel-cache-w9wzy1j_\wheels\67\43\15\a99e5c72b4b3dcd727d50dfa99a0647c44e30ae3cc0f543b84
Building wheel for ibm-db (pyproject.toml): started
error: subprocess-exited-with-error
Building wheel for ibm-db (pyproject.toml) did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for ibm-db (pyproject.toml): finished with status 'error'
ERROR: Failed building wheel for ibm-db
Building wheel for pendulum (pyproject.toml): started
error: subprocess-exited-with-error
Building wheel for pendulum (pyproject.toml) did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for pendulum (pyproject.toml): finished with status 'error'
ERROR: Failed building wheel for pendulum
Building wheel for pyodbc (pyproject.toml): started
error: subprocess-exited-with-error
Building wheel for pyodbc (pyproject.toml) did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for pyodbc (pyproject.toml): finished with status 'error'
ERROR: Failed building wheel for pyodbc
Building wheel for ciso8601 (pyproject.toml): started
error: subprocess-exited-with-error
Building wheel for ciso8601 (pyproject.toml) did not run successfully.
exit code: 1
See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
Building wheel for ciso8601 (pyproject.toml): finished with status 'error'
ERROR: Failed building wheel for ciso8601
Successfully built tap-db2
Failed to build ibm-db pendulum pyodbc ciso8601
ERROR: Failed to build installable wheels for some pyproject.toml based projects (ibm-db, pendulum, pyodbc, ciso8601)
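From the cp313 tags in the log I gather pip is running under Python 3.13, and the four failing packages (ibm-db, pendulum 2.1.2, pyodbc, ciso8601) all compile C extensions, so I suspect these pins simply have no prebuilt Windows wheels for 3.13. One idea I'm considering (assuming an older interpreter is installed and the pinned versions publish wheels for it, which I have not verified) is pointing the plugin's virtualenv at an older Python via the plugin-level python setting in meltano.yml:

plugins:
  extractors:
  - name: tap-db2
    pip_url: git+https://github.com/mjsqu/tap-db2.git
    # assumption: a Python 3.11 interpreter is installed and on PATH
    python: python3.11

The other route would presumably be installing the Microsoft C++ Build Tools so those packages can compile locally.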
Can someone help me?

Reuben (Matatika)
05/22/2025, 3:34 PM
TAP_<NAME>_LOG_LEVEL=ERROR should work, but doesn't...

Siba Prasad Nayak
05/23/2025, 10:38 AM
I am getting this error:
paramiko.ssh_exception.SSHException: Incompatible ssh peer (no acceptable host key)
For this, I made a change in client.py:
self.transport._preferred_keys = ('ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-ed25519', 'ssh-dss')
def __try_connect(self):
    if not self.__active_connection:
        try:
            self.transport = paramiko.Transport((self.host, self.port))
            self.transport.use_compression(True)
            self.transport._preferred_keys = ('ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-ed25519', 'ssh-dss')
            self.transport.connect(username=self.username, pkey=self.key)
            self.sftp = paramiko.SFTPClient.from_transport(self.transport)
        except (AuthenticationException, SSHException) as ex:
            self.transport.close()
            self.transport = paramiko.Transport((self.host, self.port))
            self.transport.use_compression(True)
            self.transport._preferred_keys = ('ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384', 'ecdsa-sha2-nistp521', 'ssh-ed25519', 'ssh-dss')
            self.transport.connect(username=self.username, pkey=None)
            self.sftp = paramiko.SFTPClient.from_transport(self.transport)
        self.__active_connection = True
        # get 'socket' to set the timeout
        socket = self.sftp.get_channel()
        # set request timeout
        socket.settimeout(self.request_timeout)
Even after making this change, it's not resolving the issue.
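One more thing I'm considering (just a guess on my part): the host key algorithm preferences are also settable through the transport's public security options, which might be a safer way to apply this than poking the private _preferred_keys tuple. Something like:

opts = self.transport.get_security_options()  # paramiko's public API for algorithm lists
opts.key_types = ('ssh-rsa', 'ecdsa-sha2-nistp256', 'ssh-ed25519')  # set before transport.connect()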
Rafał
06/02/2025, 1:19 PM

Rafał
06/05/2025, 10:36 AM
It's unclear whether def discover_streams is called even if --catalog is passed. I'd expect it to be called only without the catalog, or with --discover, when a discovery is actually needed.

Siba Prasad Nayak
06/06/2025, 5:15 AM

Ayush
06/06/2025, 5:44 AM

Ayush
06/06/2025, 5:45 AM

Chinmay
06/06/2025, 8:13 AM

azhar
06/10/2025, 9:06 AM
error: 2025-06-10T02:55:07.873581Z [info ] 2025-06-10 02:55:07,872 | ERROR | tap-linkedin-ads.accounts | An unhandled error occurred while syncing 'accounts' cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads
2025-06-10T02:55:07.875835Z [info ] raise FatalAPIError(msg) cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads
2025-06-10T02:55:07.875945Z [info ] singer_sdk.exceptions.FatalAPIError: 426 Client Error: Upgrade Required for path: /rest/adAccounts cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads
2025-06-10T02:55:07.880461Z [info ] raise FatalAPIError(msg) cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads
2025-06-10T02:55:07.880569Z [info ] singer_sdk.exceptions.FatalAPIError: 426 Client Error: Upgrade Required for path: /rest/adAccounts cmd_type=elb consumer=False job_name=prod:tap-linkedin-ads-to-target-clickhouse:UMOJn5gijo name=tap-linkedin-ads producer=True run_id=ab987cd0-89aa-4d5a-b179-8fb04e6d3f7d stdio=stderr string_id=tap-linkedin-ads
2025-06-10T02:55:16.772779Z [error ] Extractor failed
2025-06-10T02:55:16.772957Z [error ] Block run completed. block_type=ExtractLoadBlocks err=RunnerError('Extractor failed') exit_codes={: 1} set_number=0 success=False

hammad_khan
06/23/2025, 11:59 AM
{
  "completed": {
    "singer_state": {
      "bookmarks": {
        "dw_hs-dim_accounts": {},
        "dw_hs-dim_activities": {
          "starting_replication_value": null
        }
      }
    }
  },
  "partial": {}
}

Nathan Sooter
06/27/2025, 6:16 PM
I'm using tap-salesforce and am looking for the config to pass WHERE clauses into the SOQL that Meltano generates. I need to filter to particular values in a particular column in the Account object to make sure specific records aren't extracted.
ChatGPT is leading me astray with configs that don't actually exist... does one exist?
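The closest thing I've found so far is filtering after extraction with a stream map, something like this (the column name and value here are made up for illustration):

plugins:
  mappers:
  - name: meltano-map-transformer
    variant: meltano
    pip_url: git+https://github.com/MeltanoLabs/meltano-map-transform.git
    mappings:
    - name: filter-accounts
      config:
        stream_maps:
          Account:
            __filter__: record['Type'] != 'Excluded'  # hypothetical field/value

...but I'd much rather push the filter down into the SOQL itself if a real config for that exists.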
Florian Bergmann
07/03/2025, 9:23 AM

Emwinghare Kelvin
07/23/2025, 6:57 PM
I'm having issues with tap-rest-api-msdk when making POST requests. Has anyone resolved this or can suggest an alternative tap I could use?
Chandana S
07/24/2025, 6:06 AM

Evan Guyot
07/25/2025, 10:18 AM
In some records, a field is not just null, but completely missing — which leads to a Singer exception.
I was wondering if there's a catalog property designed to handle this kind of situation.
I've already tried defining the field as nullable and using additionalProperties, but I'm still encountering the Singer error when the field is absent from the object.
Here is the Singer error: 2025-07-25T10:06:57.048305Z [error ] Loading failed code=1 message="singer_sdk.exceptions.InvalidRecord: Record Message Validation Error: {'sub_prop_1': 'abc', 'sub_prop_2': 'def'} is not of type 'string'"
Here is what I have tried in the catalog:
{
  "streams": [
    {
      "tap_stream_id": "obj",
      ...,
      "schema": {
        "properties": {
          "prop_1": {
            "type": ["array", "null"],
            "items": {
              "type": "object",
              "properties": {
                "sub_prop_1": { "type": ["string", "null"] },
                "sub_prop_2": { "type": ["string", "null"] },
                "optional_sub_prop_3": { "type": ["string", "null"] }
              },
              "additionalProperties": true
            }
          }
        }
      }
    }
  ]
}
Thanks in advance to anyone who takes the time to help ☺️

Reuben (Matatika)
08/01/2025, 2:10 PM
What is the point of select_filter? Isn't select a kind of filtering mechanism by definition? Why would I need a filter for a filter? 😅
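For concreteness (stream names invented), I mean: select already lets you do

plugins:
  extractors:
  - name: tap-hubspot
    select:
    - contacts.*

...and as far as I can tell, select_filter just narrows that already-selected set again, e.g. per environment:

environments:
- name: dev
  config:
    plugins:
      extractors:
      - name: tap-hubspot
        select_filter:
        - contacts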
Sac
08/05/2025, 1:03 PM
I'm using tap-quickbooks and noticed that some secrets (like API keys or tokens) seem to be logged in plain text during execution.
From what I understand, there’s a _make_request method in the tap that logs the URL and the full body of the POST request used to request a token — which includes API secrets.
[...]
def _make_request(self, http_method, url, headers=None, body=None, stream=False, params=None, sink_name=None):
    if http_method == "GET":
        LOGGER.info("Making %s request to %s with params: %s", http_method, url, params)
        resp = self.session.get(url, headers=headers, stream=stream, params=params)
    elif http_method == "POST":
        LOGGER.info("Making %s request to %s with body %s", http_method, url, body)
        resp = self.session.post(url, headers=headers, data=body)
    else:
        raise TapQuickbooksException("Unsupported HTTP method")
[...]
Is there a way in Meltano to prevent secrets from being written to log files if the logging is done by the tap itself? Or is this considered a tap-specific issue that should be addressed on GitHub? 🤷♂️
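(For comparison, the tap-side fix I'd expect, sketched here only to show what I mean rather than as the actual code, would be to stop interpolating the body into the log call:

elif http_method == "POST":
    # log the URL only; the request body carries the client secret / refresh token
    LOGGER.info("Making %s request to %s", http_method, url)
    resp = self.session.post(url, headers=headers, data=body)

...which is why I'm leaning towards this being a tap-level issue.)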
Thanks in advance for any insights!

Sac
08/08/2025, 7:25 PM
[...] .env file with the token.
3. Let the pipeline run, capturing the new token if there is one, and saving it to the log.
4. Have the Python script fetch it as soon as the pipeline is done.
5. Update the value in the .env file so the next sync uses the new valid token.
I don’t have a better idea at the moment, apart from forking the connector and modifying the logic there, which I’d prefer to avoid.
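For steps 4 and 5, I'm imagining something along these lines (the log path, log pattern, and setting name are all placeholders I haven't verified):

import pathlib
import re

# read the most recent run log (hypothetical path)
log_text = pathlib.Path(".meltano/logs/latest-run.log").read_text()

# hypothetical pattern for the refreshed token in the tap's log output
match = re.search(r'"refresh_token":\s*"([^"]+)"', log_text)
if match:
    new_token = match.group(1)
    env = pathlib.Path(".env")
    lines = [
        f"TAP_QUICKBOOKS_REFRESH_TOKEN={new_token}"
        if line.startswith("TAP_QUICKBOOKS_REFRESH_TOKEN=")
        else line
        for line in env.read_text().splitlines()
    ]
    env.write_text("\n".join(lines) + "\n")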
Has anyone faced a similar scenario? What do you think of this solution? Any advice or suggestions?
Many thanks in advance!

steven_wang
08/26/2025, 9:21 PM

Jazmin Velazquez
09/09/2025, 7:45 PM
I want to use tap-google-sheets to extract data from multiple Google Sheets (with different sheet IDs). How do I configure Meltano for this?
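Would something like one inherited plugin per spreadsheet be the way to go? E.g. (the sheet_id setting name is a guess; it varies by variant):

plugins:
  extractors:
  - name: tap-google-sheets--sales        # hypothetical names
    inherit_from: tap-google-sheets
    config:
      sheet_id: "<first spreadsheet ID>"
  - name: tap-google-sheets--marketing
    inherit_from: tap-google-sheets
    config:
      sheet_id: "<second spreadsheet ID>"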
Luca Capra
09/10/2025, 10:25 AM

Tanner Wilcox
09/25/2025, 8:00 PM

steven_wang
09/26/2025, 6:39 PM
- name: tap-salesforce
  variant: meltanolabs
  config:
    select_fields_by_default: true
    login_domain: ${TAP_SALESFORCE_LOGIN_DOMAIN}
    streams_to_discover: ["Task"]
  select_filter:
  - 'Task.*'
https://github.com/MeltanoLabs/tap-salesforce/issues/89

Kevin Phan
10/10/2025, 8:02 PM
I'm running tap-chainalysis-alerts into target-jsonl and hitting a schema validation failure:
2025-10-10T19:56:07.945399Z [info ] Failed validating 'type' in schema['properties']['service']: cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
2025-10-10T19:56:07.949526Z [info ] {'type': 'string'} cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
2025-10-10T19:56:07.952388Z [info ] cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
2025-10-10T19:56:07.955352Z [info ] On instance['service']: cmd_type=elb consumer=True job_name=dev:tap-chainalysis-alerts-to-target-jsonl name=target-jsonl producer=False run_id=2a2e07ff-7928-4500-847f-5f58e7e96baf stdio=stderr string_id=target-jsonl
2025-10-10T19:56:07.957711Z [info ] None
The schema expects a string, but the value can also be None. Is there a way to do schema overrides for this tap? I did not see such an option in here. I can probably do it with mappers, but I'd rather not if there is a way inside the tap configs.
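What I was hoping for is something like Meltano's schema extra on the extractor (the stream name below is a guess on my part):

plugins:
  extractors:
  - name: tap-chainalysis-alerts
    schema:
      alerts:  # hypothetical stream name
        service:
          type: ["string", "null"]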
Lior Naim Alon
10/16/2025, 1:27 PM
My tap-hubspot (airbyte variant) ETL is very slow, and the logs are flooded with entries like:
2025-10-16T13:05:43.487376Z [info ] {'level': 'WARN', 'message': "Couldn't parse date/datetime string in hs_lifecyclestage_lead_date, trying to parse timestamp... Field value: 1709470649329. Ex: Unable to parse string [1709470649329]"} cmd_type=elb consumer=False job_name=staging:tap-hubspot-to-target-s3--raw-crm:eu-west-1-20251016 name=tap-hubspot producer=True run_id=0199ed1f-676c-7a87-ba25-9ddc70d8434c stdio=stderr string_id=tap-hubspot
Since the amount of data is very low and other ETLs run considerably faster, I imagine the issue is the sheer number of parsing attempts and errors being logged; it looks like there is a log entry for each row in the source data.
I tried (to no avail) to filter the specific fields using selection / custom mappers, but the errors persist.
It is crucial for me to use the airbyte variant as it is the only variant that supports custom hubspot objects out-of-the-box.
I'm looking for ways to tackle this issue; the goal is to make the ETL run in a few minutes instead of 45.

Otto Enholm
10/23/2025, 8:19 AM

mark_estey
10/23/2025, 2:43 PM
I'm using tap-snowflake to read a single table, but I'm running into an issue where it keeps trying to look at other schemas in the database that it does not have permission to. This is how my config looks (with values changed):
plugins:
  extractors:
  - name: tap-snowflake
    variant: meltanolabs
    config:
      account: ...
      role: ...
      user: ...
      warehouse: ...
      database: my_database
      schema: my_schema
      tables:
      - my_schema.my_table
    select:
    - my_schema-my_table.*
And this is the error I keep getting:
sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 002043 (02000): 01bfe764-3203-6517-0000-120d27b7901e: SQL compilation error:
Object does not exist, or operation cannot be performed.
[SQL: SHOW /* sqlalchemy:get_schema_tables_info */ TABLES IN SCHEMA some_other_schema]
The database user does not have permission on some_other_schema and will not be granted it. I read that setting the tables config would limit the tap's discovery to only the listed objects, so how do I get it to stop trying to inspect the other schemas in the database?
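For what it's worth, the shape of setting I'm after is something like the filter_schemas option that some SDK-based SQL taps (e.g. MeltanoLabs tap-postgres) expose to restrict discovery to named schemas. I haven't found it documented for tap-snowflake, so this is purely illustrative:

config:
  database: my_database
  filter_schemas:
  - my_schema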