As for where the files are exported for ingestion, there should be two options, each with the following sub-selections.
1. External Stage (meaning the customer has blob storage of their own that they manage), where your code would do the job of uploading files to the appropriate cloud storage locations (a sketch of the Snowflake-side ingestion from such a stage is given at the end of this section).
   a. AWS (credentials + S3 path)
   b. Azure (credentials + blob path)
   c. GCP Storage (credentials + blob path)
2. Internal Stage (Snowflake-managed storage defined by a name, where access is granted via Snowflake RBAC to the user ID that is making the connection to Snowflake). You would use JDBC, ODBC, or another Snowflake driver to connect to Snowflake and issue a PUT SQL command that encrypts & uploads the file(s) automatically. You do not have to code the upload process in your connector; it is handled by the ODBC/JDBC driver.
   a. Option 1, use an existing internal stage: in this case, you would only define the schema + the name of the internal stage, along with an optional sub-folder to use.
   b. Option 2, use a temporary internal stage: the user would specify a schema to use. You can use SQL CREATE STAGE to create an upload location, SQL PUT to upload local files to the stage, & COPY to ingest them. Once the data is ingested, you can delete the stage, which removes all the files in that stage. (Most ELT tools use this option.) Both internal-stage flows are sketched in the SQL example below.
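A minimal SQL sketch of the two internal-stage flows, assuming placeholder names (my_db, my_schema, existing_stage, my_connector_stage, target_table) and a local file /tmp/export_0001.csv.gz. The PUT statements run client-side through the driver (JDBC/ODBC/SnowSQL), which is what handles the compression, encryption, and upload:

   -- Option 2a: upload into an existing internal stage, optionally under a sub-folder
   USE SCHEMA my_db.my_schema;
   PUT file:///tmp/export_0001.csv.gz @existing_stage/subfolder1 AUTO_COMPRESS=TRUE;
   COPY INTO target_table FROM @existing_stage/subfolder1 FILE_FORMAT = (TYPE = CSV);

   -- Option 2b: temporary stage created, loaded, ingested from, and dropped by the connector
   CREATE TEMPORARY STAGE my_connector_stage;
   PUT file:///tmp/export_0001.csv.gz @my_connector_stage AUTO_COMPRESS=TRUE;
   COPY INTO target_table FROM @my_connector_stage FILE_FORMAT = (TYPE = CSV);
   DROP STAGE my_connector_stage;  -- a temporary stage is also dropped automatically at session end
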
https://docs.snowflake.com/en/user-guide/data-load-local-file-system-stage.html
https://docs.snowflake.com/en/user-guide/data-load-local-file-system-create-stage.html#creating-a-stage
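For the external-stage option, once your code has uploaded the files to the customer-managed storage, ingestion on the Snowflake side typically goes through an external stage pointing at that location. A minimal sketch assuming an S3 bucket, inline credentials, and placeholder names (customer_s3_stage, target_table); in production a storage integration would normally replace inline credentials:

   -- Placeholder bucket, path, and credentials, shown inline only for illustration
   CREATE OR REPLACE STAGE customer_s3_stage
     URL = 's3://customer-bucket/exports/'
     CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>')
     FILE_FORMAT = (TYPE = CSV);

   COPY INTO target_table FROM @customer_s3_stage;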