Noam Siegel
02/04/2025, 7:38 AM
(etl) ➜ etl git:(main) ✗ meltano config tap-singer-jsonl set local.folders /Users/noamsiegel/Downloads/tripadvisor-matched-files/
2025-02-04T07:34:25.310094Z [info ] The default environment 'dev' will be ignored for `meltano config`. To configure a specific environment, please use the option `--environment=<environment name>`.
Need help fixing this problem? Visit <http://melta.no/> for troubleshooting steps, or to
join our friendly Slack community.
Failed to parse JSON array from string: '/Users/noamsiegel/Downloads/tripadvisor-matched-files/'
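The folders setting appears to be array-typed: Meltano runs the value through a JSON parser, and a bare path is not valid JSON. A quick illustration of the parsing (a sketch, not Meltano's actual code):

```python
import json

raw = "/Users/noamsiegel/Downloads/tripadvisor-matched-files/"
try:
    json.loads(raw)
except json.JSONDecodeError:
    # a bare filesystem path is not valid JSON, hence
    # "Failed to parse JSON array from string"
    pass

# wrapping the path in a JSON array parses cleanly
folders = json.loads('["/Users/noamsiegel/Downloads/tripadvisor-matched-files/"]')
print(folders)  # ['/Users/noamsiegel/Downloads/tripadvisor-matched-files/']
```

So, assuming local.folders really is declared as an array setting in the plugin definition, quoting the value as a JSON array should get past the parse error: meltano config tap-singer-jsonl set local.folders '["/Users/noamsiegel/Downloads/tripadvisor-matched-files/"]'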
I tried with and without the last "/" at the end of the folder path.
Pawel Plaszczak
02/04/2025, 4:14 PM
Emre Üstündağ
02/04/2025, 9:55 PM
Pawel Plaszczak
02/18/2025, 9:45 AM
Andy Crellin
02/21/2025, 10:14 AM
Jesse Neumann
02/21/2025, 10:45 PM
Siba Prasad Nayak
02/24/2025, 7:10 AM
Chad
02/26/2025, 2:47 AM
ashish singh
03/06/2025, 9:50 PM
Pouya Barrach-Yousefi
03/10/2025, 7:00 PM
Pawel Plaszczak
03/20/2025, 12:41 AM
Pawel Plaszczak
03/20/2025, 10:15 AM
_sdc_deleted_at timestamps (if produced by tap-oracle) and use dbt transformations to handle these. Would this be the recommended way? Does anyone know whether tap-oracle correctly produces these timestamps?
More generally, I am of the impression that deletes are rarely implemented by targets. Am I correct, and if so, why? I understand they are somewhat trickier to implement than barebones inserts, but they seem to me an important and probably quite frequent requirement for proper ETL. How are people dealing with this problem in general?
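If the target does write _sdc_deleted_at as a soft-delete marker, one common downstream pattern is to filter or flag those rows in a dbt staging model. A sketch (the source and table names here are placeholders):

```sql
-- stg_orders.sql (hypothetical model): hide rows the tap marked as deleted
select *
from {{ source('oracle', 'orders') }}
where _sdc_deleted_at is null
```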
https://medium.com/@pp_85623/meltano-in-action-hands-on-evaluation-of-an-open-source-elt-framework-5a1d5b93b483
Siba Prasad Nayak
03/21/2025, 7:33 AM
Pawel Plaszczak
03/21/2025, 4:02 PM
Siba Prasad Nayak
04/02/2025, 3:50 AM
meltano config. To configure a specific environment, please use the option --environment=<environment name>.
(meltanoEnv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend>
I am getting no values.
Is this the right way to set configuration if I want to use environment variables?
The purpose is to avoid storing the config parameters in the meltano.yml file.
Note: there is no .env file under my meltano project root directory.
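With no .env file, the variable has to be exported in the same shell session (or defined under env: in meltano.yml) before running Meltano. As I understand the convention, Meltano derives each setting's environment variable from the plugin and setting names, uppercased with non-alphanumerics turned into underscores; meltano config <plugin> list prints the exact names it expects. A sketch of that derivation:

```python
import re

def setting_env_var(plugin: str, setting: str) -> str:
    """Derive the env var Meltano maps to a plugin setting (my reading of the convention)."""
    def norm(s: str) -> str:
        # uppercase, non-alphanumeric characters replaced by underscores
        return re.sub(r"[^A-Za-z0-9]", "_", s).upper()
    return f"{norm(plugin)}_{norm(setting)}"

print(setting_env_var("tap-snowflake", "password"))          # TAP_SNOWFLAKE_PASSWORD
print(setting_env_var("tap-singer-jsonl", "local.folders"))  # TAP_SINGER_JSONL_LOCAL_FOLDERS
```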
Thanks in advance!
Siba Prasad Nayak
04/02/2025, 7:46 AM
2025-04-02T07:37:31.279724Z [warning ] Certificate did not match expected hostname: sp.meltano.com. Certificate: {'subject': ((('commonName', '*.ops.snowcatcloud.com'),),), 'issuer': ((('countryName', 'US'),), (('organizationName', 'Amazon'),), (('commonName', 'Amazon RSA 2048 M02'),)), 'version': 3, 'serialNumber': '0668C0E7C8CD0F1A31A21E5DDD2FD67D', 'notBefore': 'Mar 7 00:00:00 2025 GMT', 'notAfter': 'Apr 5 23:59:59 2026 GMT', 'subjectAltName': (('DNS', '*.ops.snowcatcloud.com'),), 'OCSP': ('http://ocsp.r2m02.amazontrust.com',), 'caIssuers': ('http://crt.r2m02.amazontrust.com/r2m02.cer',), 'crlDistributionPoints': ('http://crl.r2m02.amazontrust.com/r2m02.crl',)}
2025-04-02T07:37:31.283308Z [warning ] Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=0)) after connection broken by 'SSLError(CertificateError("hostname 'sp.meltano.com' doesn't match '*.ops.snowcatcloud.com'"))': /com.snowplowanalytics.snowplow/tp2
(the same warning/retry pair repeats at 07:37:32 with total=1 and at 07:37:33 with total=0)
(meltanoEnv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend>
Siba Prasad Nayak
04/02/2025, 4:06 PM
- name: tap-snowflake
  namespace: tap_snowflake
  pip_url: ./connectors/tap-snowflake
  executable: tap-snowflake
  capabilities:
    - state
    - catalog
    - discover
    - about
    - stream-maps
  settings:
    - name: account
      kind: string
      value: aigoiop-hq79023
      description: The Snowflake account identifier.
    - name: user
      kind: string
      value: udey
      description: The Snowflake username.
    - name: password
      kind: string
      value: ***********
      description: The Snowflake password.
      sensitive: true
    - name: database
      kind: string
      value: PARTHAN_DB
      description: The Snowflake database name.
    - name: warehouse
      kind: string
      value: COMPUTE_WH
      description: The Snowflake warehouse name.
  select:
    - public-account.*
-------------------------------------------------------------------
- name: target-salesforce
  namespace: target_salesforce
  pip_url: ./connectors/target-salesforce
  executable: target-salesforce
  capabilities:
    - about
    - stream-maps
    - schema-flattening
  config:
    username: udey@vcs.sandbox
    password: ********
    security_token: nanLbbN3lexEw70gK7tLrzP4s
    api_type: sandbox
    #sobject: account
    action: insert
    stream_maps:
      public-employees:
        target: Account
        #key_properties: []
        mappings:
          - source: name
            target: Name
I am getting the error below for this stream_maps config.
2025-04-02T12:36:00.006657Z [info ] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cmd_type=elb consumer=True job_name=dev:tap-postgres-to-target-salesforce name=target-salesforce producer=False run_id=a2dbeee1-5078-41a8-a7d9-aba40dd70d46 stdio=stderr string_id=target-salesforce
2025-04-02T12:36:00.008151Z [warning ] Received state is invalid, incremental state has not been updated
2025-04-02T12:36:00.098874Z [info ] Incremental state has been updated at 2025-04-02 12:36:00.098808+00:00.
2025-04-02T12:36:00.100101Z [info ] TypeError: unhashable type: 'list' cmd_type=elb consumer=True job_name=dev:tap-postgres-to-target-salesforce name=target-salesforce producer=False run_id=a2dbeee1-5078-41a8-a7d9-aba40dd70d46 stdio=stderr string_id=target-salesforce
2025-04-02T12:36:00.220899Z [error ] Loader failed
2025-04-02T12:36:00.221982Z [error ] Block run completed. block_type=ExtractLoadBlocks err=RunnerError('Loader failed') exit_codes={<PluginType.LOADERS: 'loaders'>: 1} set_number=0 success=False
Need help fixing this problem? Visit <http://melta.no/> for troubleshooting steps, or to
join our friendly Slack community.
Run invocation could not be completed as block failed: Loader failed
(meltanoEnv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend>
It seems there is some issue with the stream_maps!
Can anyone please guide?
Tanner Wilcox
04/03/2025, 10:58 PM
Siba Prasad Nayak
04/04/2025, 10:19 AM
- name: tap-snowflake
  namespace: tap_snowflake
  pip_url: ./connectors/tap-snowflake
  executable: tap-snowflake
  capabilities:
    - state
    - catalog
    - discover
    - about
    - stream-maps
  settings:
    - name: account
      kind: string
      value: aigoiop-hq79023
      description: The Snowflake account identifier.
    - name: user
      kind: string
      value: udey
      description: The Snowflake username.
    - name: password
      kind: string
      value: ********
      description: The Snowflake password.
      sensitive: true
    - name: database
      kind: string
      value: PARTHAN_DB
      description: The Snowflake database name.
    - name: warehouse
      kind: string
      value: COMPUTE_WH
      description: The Snowflake warehouse name.
  select:
    - public-account.name
- name: target-salesforce
  namespace: target_salesforce
  pip_url: ./connectors/target-salesforce
  executable: target-salesforce
  capabilities:
    - about
    - stream-maps
    #- schema-flattening
  config:
    username: udey@vcs.sandbox
    password: ********
    security_token: nanLbbN3lexEw70gK7tLrzP4s
    api_type: sandbox
    sobject: Employee__c
    action: insert
    stream_maps:
      '*': # Stream from the tap
        __alias__: Employee__c
        mappings:
          "name": "Name"
Somehow this "mappings" configuration is not working. It throws the following error:
2025-04-04T10:11:29.098748Z [warning ] Received state is invalid, incremental state has not been updated
2025-04-04T10:11:29.163005Z [info ] Incremental state has been updated at 2025-04-04 10:11:29.162967+00:00.
2025-04-04T10:11:29.163935Z [info ] TypeError: unhashable type: 'dict' cmd_type=elb consumer=True job_name=dev:tap-snowflake-to-target-salesforce name=target-salesforce producer=False run_id=0f693284-543d-4331-8029-ac397f2c6d83 stdio=stderr string_id=target-salesforce
2025-04-04T10:11:29.182189Z [info ] 2025-04-04 15:41:28,998 | INFO | snowflake.connector.connection | Snowflake Connector for Python Version: 3.13.2, Python Version: 3.13.2, Platform: Windows-11-10.0.22631-SP0 cmd_type=elb consumer=False job_name=dev:tap-snowflake-to-target-salesforce name=tap-snowflake producer=True run_id=0f693284-543d-4331-8029-ac397f2c6d83 stdio=stderr string_id=tap-snowflake
2025-04-04T10:11:29.183433Z [info ] 2025-04-04 15:41:28,999 | INFO | snowflake.connector.connection | Connecting to GLOBAL Snowflake domain cmd_type=elb consumer=False job_name=dev:tap-snowflake-to-target-salesforce name=tap-snowflake producer=True run_id=0f693284-543d-4331-8029-ac397f2c6d83 stdio=stderr string_id=tap-snowflake
2025-04-04T10:11:29.184298Z [info ] 2025-04-04 15:41:28,999 | INFO | snowflake.connector.connection | This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity. cmd_type=elb consumer=False job_name=dev:tap-snowflake-to-target-salesforce name=tap-snowflake producer=True run_id=0f693284-543d-4331-8029-ac397f2c6d83 stdio=stderr string_id=tap-snowflake
2025-04-04T10:11:29.241665Z [error ] Loader failed
2025-04-04T10:11:29.242795Z [error ] Block run completed. block_type=ExtractLoadBlocks err=RunnerError('Loader failed') exit_codes={<PluginType.LOADERS: 'loaders'>: 1} set_number=0 success=False
Need help fixing this problem? Visit <http://melta.no/> for troubleshooting steps, or to
join our friendly Slack community.
Run invocation could not be completed as block failed: Loader failed
(meltanoEnv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend>
So I am trying to map name (from Snowflake) to Name (in Salesforce), where the target field is capitalized.
TypeError: unhashable type: 'dict'
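From what I can tell of the Singer SDK's inline stream maps, property mappings are written as direct <target property>: <source expression> keys on the stream entry, not nested under a mappings key, which would explain the SDK choking on an unexpected dict there. A sketch, assuming this custom target-salesforce passes its stream_maps config through to the SDK unchanged:

```yaml
stream_maps:
  '*':
    __alias__: Employee__c
    Name: name        # new property Name, evaluated from the source field name
    name: __NULL__    # optionally drop the original lower-case field
```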
I did not find many examples on the internet related to this "mappings" option.
Tanner Wilcox
04/08/2025, 4:55 PM
- name: scp
  namespace: scp
  commands:
    get_file:
      executable: scp
      args: -h
This doesn't:
- name: test
  namespace: test
  commands:
    get_file:
      executable: /bin/bash
      args: -c "./test.sh"
Neither does this:
- name: test
  namespace: test
  commands:
    get_file:
      executable: ssh
      args: -h
Here is my output from the first scp test:
[tanner@sato ubb-meltano]$ mel run scp
2025-04-08T16:50:21.070409Z [info ] Environment 'dev' is active
2025-04-08T16:50:21.117648Z [info ] usage: scp [-346ABCOpqRrsTv] [-c cipher] [-D sftp_server_path] [-F ssh_config] cmd_type=command name=scp stdio=stderr
2025-04-08T16:50:21.117864Z [info ] [-i identity_file] [-J destination] [-l limit] [-o ssh_option] cmd_type=command name=scp stdio=stderr
2025-04-08T16:50:21.117973Z [info ] [-P port] [-S program] [-X sftp_option] source ... target cmd_type=command name=scp stdio=stderr
Need help fixing this problem? Visit <http://melta.no/> for troubleshooting steps, or to
join our friendly Slack community.
'NoneType' object is not subscriptable
The other tests produce the same "NoneType" error, but they don't print the command's help message. test.sh is just echo hello.
I made an scp extension that used to be in my utilities. It looked like this
- name: scp-ext
namespace: scp-ext
pip_url: '../scp-ext'
executable: scp
I wonder if that's cached and that's why the scp command is the only thing that works?
I've been banging my head against this for days. Any help would really be appreciated.
Reuben (Matatika)
04/08/2025, 6:40 PM
Commands are invoked as <plugin name>:<command name>, so in your case meltano run scp:get_file, or have you already tried that?
Tanner Wilcox
04/08/2025, 8:00 PM
Tanner Wilcox
04/08/2025, 8:01 PM
Oscar Gullberg
04/09/2025, 10:28 AM
stg dataset after completion?
Use case: ingesting data from multiple Shopify stores.
Right now, we run one Meltano pipeline per store, which:
• Extracts raw data into a shared raw_shopify dataset in BigQuery
• Creates common views in a single stg_shopify dataset
This setup causes some issues. Ideally, we want to:
1. Ingest each store's raw data into its own dataset (e.g. raw_shopify_store1, raw_shopify_store2, etc.) in parallel
2. Run per-store transforms into separate staging datasets (e.g. stg_shopify_store1, etc.)
3. Run a final transform step that unions everything into a central stg_shopify dataset
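One pattern that might cover steps 1 and 2 is Meltano's plugin inheritance: one inheriting extractor/loader pair per store, each overriding only the store-specific config, run as parallel jobs. A sketch (plugin and setting names here are illustrative, not taken from the actual project):

```yaml
plugins:
  extractors:
    - name: tap-shopify
    - name: tap-shopify--store1
      inherit_from: tap-shopify
      config:
        store: store1
  loaders:
    - name: target-bigquery
    - name: target-bigquery--store1
      inherit_from: target-bigquery
      config:
        dataset_id: raw_shopify_store1
```

The final union step (3) would then live in the dbt layer rather than in Meltano itself.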
Is there a clean way to do this in Meltano? Any recommendations or patterns others are using?
Siddu Hussain
04/11/2025, 10:28 AM
{"key1": "value1" }, {"key2" : "value2"}
◦ Random race condition data sample: the second record is emitted and written to the target before the first record finishes writing:
▪︎ {"key1": {"key2" : "value2"}
◦ I was under the assumption this was happening because of batching at the tap and data being written to JSONL.
◦ This is happening even after removing the batching at the tap.
• I tried multiprocessing outside the tap, calling meltano el as a subprocess for each chunk of data like below. This works without the race condition.
import os
import shlex
import subprocess

def run(time_range):
    try:
        is_backfill = os.environ.get("TAP_ZOOM_IS_BACKFILL")
        start_time, end_time = time_range
        start_time = shlex.quote(start_time)
        end_time = shlex.quote(end_time)
        # each statement separated by ';' so the shell runs them in sequence
        cmd = (
            f"export MELTANO_STATE_BACKEND_URI='s3://iflow-prod/state/backfill/zoom/'; "
            f"export TAP_ZOOM_FROM_DATE={start_time} TAP_ZOOM_TO_DATE={end_time} TAP_ZOOM_IS_BACKFILL={is_backfill}; "
            f"source .venv/bin/activate; "
            f"meltano el tap-zoom target-s3-zoom --force;"
        )
        subprocess.run(cmd, shell=True, check=True)
        return {"time_range": time_range, "status": "success"}
    except Exception as e:
        return {"time_range": time_range, "status": "error", "error": str(e)}
I was wondering: if both approaches spin up an individual stdout pipe for each process, why does case 1 hit the race condition but not case 2?
My understanding is that Meltano sends data to stdout per the target's emit code.
• Might be a silly question, but how does Meltano differentiate between logs that are emitted and Singer records that are emitted?
• When I spin up a separate process, that stdout should be different from the main process's stdout, right? Or is it the same stdout pipe?
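For what it's worth, my understanding of the Singer convention: data messages (RECORD/SCHEMA/STATE) are single-line JSON on stdout, while anything meant as logging goes to stderr, which is how the two can be told apart; each subprocess gets its own pipe pair. A minimal sketch of that convention (not Meltano's actual implementation):

```python
import json
import sys

def emit_record(stream: str, record: dict) -> str:
    # Singer data messages are single-line JSON objects written to stdout
    line = json.dumps({"type": "RECORD", "stream": stream, "record": record})
    sys.stdout.write(line + "\n")
    sys.stdout.flush()  # flush per message so lines are not interleaved mid-record
    return line

def log(message: str) -> None:
    # diagnostics go to stderr, never stdout, so they cannot corrupt the data stream
    sys.stderr.write(message + "\n")

emit_record("employees", {"key1": "value1"})
log("wrote one record")
```

If two writers share one stdout pipe and do not write whole lines atomically, their output can interleave, which would be consistent with the merged-record sample above.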
Thanks for taking the time to read through; any help is much appreciated. Have a great day!
Anthony Shook
04/17/2025, 5:06 PM
id column. However, the table is mutable at the source and has an updated_at column, so I'm not catching changes in the source table once I've pulled the at-the-moment value of a row. So my situation is this:
• I want to update my Meltano config from using id as the replication-key to using updated_at as the replication key, with id as a value in table-key-properties
• I don't want to start from the beginning of time, because it's absolutely too much data to handle, so I've got to manually set a date
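One way to seed the date, sketched as an assumption rather than a tested recipe: after switching replication-key to updated_at in meltano.yml, write the bookmark by hand with the meltano state subcommand, using a Singer state payload along these lines (the stream name and timestamp are placeholders):

```json
{
  "singer_state": {
    "bookmarks": {
      "public-my_table": {
        "replication_key": "updated_at",
        "replication_key_value": "2025-01-01T00:00:00+00:00"
      }
    }
  }
}
```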
So the question is: how would you go about it?
Tanner Wilcox
05/01/2025, 6:50 PM
show arp command on all routers at our ISP and get that data into our warehouse. Ansible is really good at communicating with network devices. It has profiles for each device type and is able to recognize when a command starts/ends, and it parses that data for you. I don't think there's an equivalent tap for that in Meltano. I'm wondering what the best way is to merge the two. Maybe I could make a tap that calls out to Ansible; Ansible can write what I want to a JSON file, then my Meltano tap can read from that JSON and pump it into a raw table. It seems kind of weird at that point, because all my tap is doing is reading from a file. I could have Ansible write directly to my Postgres DB, but that feels like it'd be stepping on Meltano's toes. Looking for input.
Don Venardos
05/05/2025, 10:57 PM
Siba Prasad Nayak
05/08/2025, 1:42 PM
- name: tap-mysql
  namespace: tap_mysql
  pip_url: ./connectors/tap-mysql
  executable: tap-mysql
  capabilities:
    - about
    - batch
    - stream-maps
    - schema-flattening
    - discover
    - catalog
    - state
  settings:
    - name: host
      kind: string
      value: localhost
    - name: port
      kind: integer
      value: 3306 # Or whatever port your MySQL is running on
    - name: user
      value: root
    - name: password
      kind: string
      value: ******* # Use an environment variable!
      sensitive: true
    - name: database
      kind: string
      value: world
    - name: is_vitess
      kind: boolean
      value: false
Error:
(sibaVenv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend> meltano invoke tap-mysql
2025-05-08T13:38:51.175173Z [warning ] Failed to create symlink to 'meltano.exe': administrator privilege required
2025-05-08T13:38:51.189778Z [info ] Environment 'dev' is active
Need help fixing this problem? Visit <http://melta.no/> for troubleshooting steps, or to
join our friendly Slack community.
Catalog discovery failed: command ['C:\\Siba_\\Work\\POC_ConnectorFactory\\Gerrit\\Connector_Factory_Development\\meltano-backend\\.meltano\\extractors\\tap-mysql\\venv\\Scripts\\tap-mysql.exe', '--config', 'C:\\Siba_\\Work\\POC_ConnectorFactory\\Gerrit\\Connector_Factory_Development\\meltano-backend\\.meltano\\run\\tap-mysql\\tap.79a26421-4773-4b39-a35d-577aa37522b8.config.json', '--discover'] returned 1 with stderr:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Scripts\tap-mysql.exe\__main__.py", line 7, in <module>
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\click\core.py", line 1161, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\click\core.py", line 1081, in main
with self.make_context(prog_name, args, **extra) as ctx:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\click\core.py", line 949, in make_context
self.parse_args(ctx, args)
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\click\core.py", line 1417, in parse_args
value, args = param.handle_parse_result(ctx, opts, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\click\core.py", line 2403, in handle_parse_result
value = self.process_value(ctx, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\click\core.py", line 2365, in process_value
value = self.callback(ctx, self, value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\singer_sdk\tap_base.py", line 554, in cb_discover
tap.run_discovery()
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\singer_sdk\tap_base.py", line 309, in run_discovery
catalog_text = self.catalog_json_text
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\singer_sdk\tap_base.py", line 329, in catalog_json_text
return dump_json(self.catalog_dict, indent=2)
^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\tap_mysql\tap.py", line 333, in catalog_dict
result["streams"].extend(self.connector.discover_catalog_entries())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\.meltano\extractors\tap-mysql\venv\Lib\site-packages\singer_sdk\connectors\sql.py", line 998, in discover_catalog_entries
(reflection.ObjectKind.TABLE, False),
^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'sqlalchemy.engine.reflection' has no attribute 'ObjectKind'
(sibaVenv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend>
(sibaVenv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend> python -m pip show SQLAlchemy
Name: SQLAlchemy
Version: 2.0.39
Summary: Database Abstraction Library
Home-page: <https://www.sqlalchemy.org>
Author: Mike Bayer
Author-email: mike_mp@zzzcomputing.com
License: MIT
Location: C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend\sibaVenv\Lib\site-packages
Requires: greenlet, typing-extensions
Required-by: alembic, meltano
(sibaVenv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend> meltano --version
meltano, version 3.7.4
(sibaVenv) PS C:\Siba_\Work\POC_ConnectorFactory\Gerrit\Connector_Factory_Development\meltano-backend>
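One thing worth ruling out (an assumption on my part): sqlalchemy.engine.reflection.ObjectKind only exists in SQLAlchemy 2.x, and the pip show above inspects the project venv (sibaVenv), not the tap's own isolated venv under .meltano, so the plugin may have resolved an older SQLAlchemy. The plugin venv can be checked directly:

```
# inspect the SQLAlchemy version inside the tap's isolated virtualenv
.meltano\extractors\tap-mysql\venv\Scripts\python.exe -m pip show sqlalchemy
```

If it turns out to be older than 2.0, pinning it via the tap's pip_url and reinstalling the plugin with --clean may help.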
Tanner Wilcox
05/09/2025, 10:36 PM
{%- macro drop_schema() -%}
  {%- set drop_query -%}
    drop schema {{ target.schema }}
  {%- endset -%}
  {% do run_query(drop_query) %}
{%- endmacro -%}
I'm assuming I should be able to do something like this: mel run dbt:run-operation:drop_schema sonar warehouse
but I get an error saying it can't find drop_schema. I have it in ./macros/. I'm assuming I need to put it in my meltano.yml under my dbt transformer section. Maybe it should go under utilities? I'm at a loss.
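Two things that might be at play here (assumptions, not verified against this project): dbt only searches the macro-paths of the dbt project itself (for the Meltano dbt transformer that is typically the transform/ directory, so transform/macros/), not the Meltano project root; and run-operation is a dbt subcommand rather than a Meltano command name, so it can be passed straight through with meltano invoke:

```
# assuming the transformer plugin is named `dbt` and the macro lives in the
# dbt project's own macros/ directory
meltano invoke dbt run-operation drop_schema
```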