Wejdan Bagais
06/17/2025, 4:52 PM
Sharan Arora
06/18/2025, 7:53 PM
postgresql_connection:
  host: "${oc.env:POSTGRESQL_HOST}"
  username: "${oc.env:POSTGRESQL_USER}"
  password: "${oc.env:POSTGRESQL_PASSWORD}"
  port: "${oc.env:POSTGRESQL_PORT}"
and each of these values is stored in a .env file in the same local folder. However, when I do kedro run, postgresql_connection isn't recognized, and we are unable to detect the actual values provided in the .env file that should be passed on to credentials.yml, since I want this to be dynamic and based on user input. Any idea how to resolve this?
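[Editor's note] A sketch of one setup that is often suggested, assuming python-dotenv is used to load the .env into the process environment before Kedro parses config (e.g. load_dotenv() at the top of settings.py), and with mydb / my_table as placeholder names. By default, Kedro's OmegaConfigLoader only applies the oc.env resolver inside credentials.yml; the catalog then links the credentials entry by name instead of embedding the template string:

```yaml
# conf/local/credentials.yml -- ${oc.env:...} is resolved here by default
postgresql_connection:
  con: postgresql://${oc.env:POSTGRESQL_USER}:${oc.env:POSTGRESQL_PASSWORD}@${oc.env:POSTGRESQL_HOST}:${oc.env:POSTGRESQL_PORT}/mydb

# conf/base/catalog.yml -- link the credentials entry by name
my_table:
  type: pandas.SQLTableDataset
  table_name: my_table
  credentials: postgresql_connection
```

With this layout the catalog never sees the raw template string, which is why linking via `credentials:` avoids the "dynamic string read literally" symptom.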
Additionally, what is the process for getting Kedro to read credentials.yml as well? It seems that on kedro run it only cares about catalog.yml. Is it just a matter of linking credentials in the catalog? I tried, but then it reads the dynamic string literally.
Rachid Cherqaoui
06/20/2025, 11:21 AM
/doc_20250620*_delta.csv
But I noticed that YAML interprets `*` as an anchor, and it doesn't seem to behave like a wildcard here.
How can I configure a dataset in catalog.yml to use a wildcard when loading files from an SFTP path (e.g. to only fetch files starting with a certain prefix and ending with _delta.csv)? Is there native support for this kind of pattern in Kedro's SFTPDataSet, or do I need to implement a custom dataset?
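[Editor's note] Two observations, hedged: YAML only treats `*` as an alias/anchor indicator at the start of an unquoted scalar, so quoting the path keeps YAML happy, but a plain CSVDataset filepath still won't glob. A common pattern for "many files matching a suffix" is PartitionedDataset, which lists files through fsspec. A sketch with hypothetical server name and credentials key (only a suffix filter is built in, so prefix filtering would still happen in the node):

```yaml
delta_files:
  type: partitions.PartitionedDataset
  path: sftp://my-sftp-server/inbox
  dataset: pandas.CSVDataset
  filename_suffix: "_delta.csv"
  credentials: sftp_creds
```

Loading this yields a dict of {partition_id: load_function}, so the node can keep only the entries whose name starts with the desired prefix.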
Any guidance or examples would be super appreciated!
Rachid Cherqaoui
06/23/2025, 7:34 AM
CSVDataset. Here's the relevant entry from my `catalog.yml`:
cool_dataset:
  type: pandas.CSVDataSet
  filepath: sftp://my-sftp-server/outbox/DW_Extracts/my_file.csv
  load_args: {}
  save_args:
    index: False
When I run:
df = catalog.load("cool_dataset")
I get the following error:
It seems like Kedro/Pandas is trying to use `urllib` to open the SFTP URL, which doesn't support the sftp:// protocol natively.
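[Editor's note] Kedro's pandas datasets open remote paths through fsspec rather than urllib, and fsspec's sftp:// support is provided by paramiko, so installing paramiko is usually the first step; the angle brackets around the URL look like a Slack link artifact and should not be in the YAML. A sketch, with hypothetical credential values, where the credentials entry is forwarded to fsspec's SFTPFileSystem:

```yaml
# conf/base/catalog.yml
cool_dataset:
  type: pandas.CSVDataSet
  filepath: sftp://my-sftp-server/outbox/DW_Extracts/my_file.csv
  credentials: sftp_creds
  save_args:
    index: False

# conf/local/credentials.yml -- keys are passed through to paramiko
sftp_creds:
  username: my_user
  password: my_password
```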
Has anyone successfully used Kedro to load files from SFTP? If so, could you share your config/setup?
Adrien Paul
06/23/2025, 5:02 PM
Nathan W.
06/25/2025, 7:32 AM
.env or credentials.yml and then use it in my nodes' parameters to make API requests. Are there any simple solutions (without putting it in parameters.yml and then risking pushing my key into production...) I missed?
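[Editor's note] One simple pattern is to keep the key out of Kedro config entirely and read it from the environment inside the node; with python-dotenv, the .env file can be loaded into os.environ before the run. A sketch, where the variable name MY_API_KEY and the header shape are assumptions:

```python
import os

def build_auth_header() -> dict:
    # Hypothetical node helper: the key is read from the environment at
    # run time, so it never appears in parameters.yml or in version control.
    api_key = os.environ["MY_API_KEY"]  # assumed variable name, e.g. set via .env
    return {"Authorization": f"Bearer {api_key}"}
```

Alternatively, the key can live in conf/local/credentials.yml (which the default .gitignore excludes) and be handed to a custom dataset or a hook, since Kedro deliberately keeps credentials out of parameters.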
Thanks a lot in advance for your response, have a nice day!
Fazil Topal
06/25/2025, 8:24 AM
Jamal Sealiti
06/26/2025, 10:14 AM
Rachid Cherqaoui
06/27/2025, 2:20 PM
Pradeep Ramayanam
06/27/2025, 5:34 PM
Rachid Cherqaoui
06/30/2025, 9:11 AM
I have a .txt file generated by a Kedro pipeline that I created, and I'd like to send it to a folder on a remote server via SFTP.
After several attempts, I found it quite tricky to handle this cleanly within Kedro, especially while keeping things consistent with its data catalog and hooks system.
Would anyone be able to help or share best practices on how to achieve this with Kedro?
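[Editor's note] Since Kedro datasets write through fsspec, one catalog-consistent approach is to register the remote file as a text dataset output, so the node that produces the text simply returns it and Kedro handles the transfer. A sketch with hypothetical host, path and credentials key (paramiko required for sftp://):

```yaml
report_txt:
  type: text.TextDataset
  filepath: sftp://remote-server/upload/report.txt
  credentials: sftp_creds
```

The node then just returns the report string as the `report_txt` output; no hook is needed for the upload itself.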
Thanks in advance for your support!
Jamal Sealiti
06/30/2025, 11:29 AM
olufemi george
07/02/2025, 4:52 PM
minmin
07/03/2025, 12:51 PM
model_1.mae:
  type: kedro_mlflow.io.metrics.MlflowMetricDataset
model_2.mae:
  type: kedro_mlflow.io.metrics.MlflowMetricDataset
If, however, I try to template the name in the catalog, it fails:
"{model_name}.mae":
  type: kedro_mlflow.io.metrics.MlflowMetricDataset
I get the error message:
DatasetError: Failed while saving data to dataset
MlflowMetricDataset(run_id=...).
Invalid value null for parameter 'name' supplied: Metric name cannot be None. A key name must be provided.
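[Editor's note] The error suggests that the metric name, which kedro-mlflow normally derives from the dataset name, ends up as None when the entry comes from a factory pattern. One thing worth trying (an assumption, not verified against kedro-mlflow internals) is to set the key explicitly, since dataset factories substitute the placeholder throughout the entry, not just in the name:

```yaml
"{model_name}.mae":
  type: kedro_mlflow.io.metrics.MlflowMetricDataset
  key: "{model_name}.mae"
```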
Do I just have to avoid templating in the catalog when it comes to mlflow-related entries?
Adrien Paul
07/04/2025, 8:42 AM
julie tverfjell
07/04/2025, 10:20 AM
Felipe Monroy
07/08/2025, 4:15 PM
KedroSession. But in cases where the Pipeline and DataCatalog are already defined in the notebook itself, what would be the best practice for running it?
Pooja Mukund
07/09/2025, 2:34 AMINFO Running node: export_comparables: export_comparables() -> node.py:370
[07/08/25 16:22:13] INFO Saving data to comparables (CSVDataset)... data_catalog.py:445
WARNING There are 3 nodes that have not run. runner.py:344
You can resume the pipeline run from the nearest nodes with persisted inputs by adding the following argument to your previous command:
--from-nodes "export_comparables,tree_to_lookup_table,tree_to_python"
Traceback (most recent call last):
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\core.py:324 in save
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro_datasets\pandas\csv_dataset.py:186 in save
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\util\_decorators.py:333 in wrapper
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\core\generic.py:3967 in to_csv
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\format.py:1014 in to_csv
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:270 in save
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:275 in _save
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:313 in _save_body
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:324 in _save_chunk
  in pandas._libs.writers.write_csv_rows:73
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\encodings\cp1252.py:19 in encode
UnicodeEncodeError: 'charmap' codec can't encode character '\u2033' in position 21: character maps to <undefined>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  in _run_module_as_main:198
  in _run_code:88
  in <module>:7
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\cli\cli.py:263 in main
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1161 in __call__
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\cli\cli.py:163 in main
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1082 in main
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1697 in invoke
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1443 in invoke
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:788 in invoke
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\cli\project.py:237 in run
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\session\session.py:399 in run
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\runner.py:131 in run
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\sequential_runner.py:72 in _run
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\runner.py:245 in _run
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\task.py:88 in execute
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\task.py:186 in _run_node_sequential
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\data_catalog.py:452 in save
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\core.py:858 in save
  C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\core.py:329 in save
DatasetError: Failed while saving data to dataset CSVDataset(filepath=C:/Users/Pooja Mukund/OneDrive - McKinsey & Company/Documents/WWT/wwt-periscope/data/assets/comparables.csv, load_args={}, protocol=file, save_args={'encoding': utf-8-sig, 'index': False}).
'charmap' codec can't encode character '\u2033' in position 21: character maps to <undefined>
In my catalog.yml (screenshot attached), the encoding is set to utf-8. I've also tried switching it to utf-8-sig, but the error persists. It seems to occur when reading or writing certain datasets, and I suspect it's related to how Windows handles file encoding by default.
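[Editor's note] The trace bottoms out in cp1252.py, Windows' legacy locale codec, which cannot represent U+2033 (DOUBLE PRIME, the ″ inch mark). Notably, the DatasetError itself shows save_args with utf-8-sig, so the cp1252 failure likely came from a write that used the locale default (a different dataset, or an earlier run). A small stdlib-only check of that diagnosis:

```python
# '\u2033' (DOUBLE PRIME) has no mapping in cp1252, but UTF-8 handles it.
s = "diameter 21.5\u2033"  # hypothetical offending value

try:
    s.encode("cp1252")
    cp1252_ok = True
except UnicodeEncodeError:
    cp1252_ok = False  # this is the error Kedro surfaces

utf8_bytes = s.encode("utf-8-sig")  # succeeds, with a BOM prefix
```

Setting the environment variable PYTHONUTF8=1 (or running Python with -X utf8) makes UTF-8 the default on Windows, which is worth trying alongside checking that every dataset in the run has an explicit encoding in save_args.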
Has anyone encountered a similar issue when running Kedro on Windows, or have suggestions on how to troubleshoot or resolve this?
Rashida Kanchwala
07/09/2025, 8:04 AM
Rachid Cherqaoui
07/09/2025, 8:43 AM
Pradeep Ramayanam
07/10/2025, 3:33 PM
Pradeep Ramayanam
07/10/2025, 6:52 PM
Zubin Roy
07/14/2025, 10:40 AM
df_2:
  type: polars.LazyPolarsDataset
  filepath: data/01_raw/test.parquet
  file_format: parquet
Error
Process finished with exit code 139 (interrupted by signal 11:SIGSEGV)
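[Editor's note] Exit code 139 is a segfault in native code, which usually points at the polars layer rather than the catalog entry. As a narrowing-down step (a sketch, not a confirmed fix), the eager dataset can show whether lazy scanning is the trigger; pinning or upgrading the polars package is the other usual suspect:

```yaml
df_2:
  type: polars.EagerPolarsDataset
  filepath: data/01_raw/test.parquet
  file_format: parquet
```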
Guillaume Tauzin
07/22/2025, 7:12 AMYury Fedotov
07/23/2025, 2:40 AMstable
docs, perhaps because they were deprecated/renamed with 1.0 release.
This is fine, we can indeed update them. I'm just trying to decide if we should do this now or wait a bit until docs stabilize.
Question: do you expect the 1.0 docs to stay ~ as is in terms of URLs structure, or there are substantial changes coming soon?Jamal Sealiti
07/23/2025, 12:45 PMconfig_loader = OmegaConfigLoader(conf_source=str(Path.cwd() / "conf"))
# Load all matching globals configs
globals_config = config_loader.get("globals")
but I get {}.
Galen Seilis
07/23/2025, 10:03 PM
Felipe Monroy
07/25/2025, 3:12 AM
Filip Isak Mattsson
07/25/2025, 10:12 AM
Adam
07/25/2025, 8:43 PM
kedro-mlflow, and it uninstalled v1 and re-installed v0.19. Will it be updated to use v1 at some point?