# questions
  • Wejdan Bagais

    06/17/2025, 4:52 PM
    Hi everyone! 👋 I'm currently exploring how to approach unit testing in Kedro, especially when working with large-scale data pipelines. I'd love to hear your thoughts on a few things:
    • Do you find unit tests valuable in the context of data pipelines?
    • How do you typically implement them in Kedro?
    • Given that data quality checks are often a key focus, how do you handle testing when the input datasets are huge? Creating dummy data for every scenario doesn't always seem practical.
    Any tips, examples, or lessons learned would be greatly appreciated! Thanks in advance 🙏
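    For illustration, a minimal pytest sketch of the pattern this usually takes: test node functions directly against small hand-made frames rather than full datasets (the `clean_orders` node, module path, and column names below are hypothetical):
    Copy code
    # test_nodes.py - a minimal sketch; clean_orders is a hypothetical node function
    import pandas as pd

    from my_project.pipelines.processing.nodes import clean_orders


    def test_clean_orders_drops_null_ids():
        raw = pd.DataFrame({"order_id": [1, None], "amount": [10.0, 20.0]})
        cleaned = clean_orders(raw)
        # the node should drop the row whose order_id is missing
        assert cleaned["order_id"].notna().all()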
  • Sharan Arora

    06/18/2025, 7:53 PM
    Hello, I had a question. The pipeline I'm trying to build includes credentials for a PostgreSQL DB. The idea is to hand off a containerized pipeline and facilitate the necessary data cleaning, transformation, and storage required for further analytics. In credentials.yml, I have added the following:
    Copy code
    postgresql_connection:
      host: "${oc.env:POSTGRESQL_HOST}"
      username: "${oc.env:POSTGRESQL_USER}"
      password: "${oc.env:POSTGRESQL_PASSWORD}"
      port: "${oc.env:POSTGRESQL_PORT}"
    Each of these values is stored in a `.env` file in the same `local` folder; however, when I do `kedro run`, `postgresql_connection` isn't recognized and the actual values provided in the `.env` file are not resolved into `credentials.yml`. I want this to be dynamic and based on user input. Any idea how to resolve this? Additionally, what is the process for getting Kedro to read `credentials.yml` in the first place? On `kedro run` it seems to only care about `catalog.yml`. Is it just a matter of linking the credentials key in the catalog? I tried that, but then it reads the interpolation string literally.
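    For reference, a hedged sketch of the usual wiring, assuming a pandas SQL dataset (the dataset, table, and database names are made up): `oc.env` reads the process environment rather than the `.env` file itself, so the variables need to be exported (or loaded with python-dotenv) before `kedro run`, and the catalog then references the credentials entry by key:
    Copy code
    # credentials.yml - pandas SQL datasets expect a single `con` connection string
    postgresql_connection:
      con: postgresql://${oc.env:POSTGRESQL_USER}:${oc.env:POSTGRESQL_PASSWORD}@${oc.env:POSTGRESQL_HOST}:${oc.env:POSTGRESQL_PORT}/analytics

    # catalog.yml - `credentials` points at the entry above by key
    cleaned_orders:
      type: pandas.SQLTableDataset
      table_name: cleaned_orders
      credentials: postgresql_connection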
  • Rachid Cherqaoui

    06/20/2025, 11:21 AM
    Hi everyone! 👋 I'm trying to load specific CSV files from an SFTP connection in Kedro, and I need to filter the files using a wildcard pattern. For example, I'd like to load only files that match something like:
    Copy code
    /doc_20250620*_delta.csv
    But I noticed that YAML interprets `*` as an alias/anchor reference, and it doesn't seem to behave like a wildcard here. How can I configure a dataset in `catalog.yml` to use a wildcard when loading files from an SFTP path (e.g. to only fetch files starting with a certain prefix and ending with `_delta.csv`)? Is there native support for this kind of pattern in Kedro's SFTPDataSet, or do I need to implement a custom dataset? Any guidance or examples would be super appreciated! 🙏
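    One hedged workaround sketch, since catalog filepaths are not globbed: a partitioned dataset over the directory, with `filename_suffix` handling the `_delta.csv` part (the directory and credentials key below are assumptions). The node can then keep only the partition keys that start with the `doc_20250620` prefix:
    Copy code
    delta_files:
      type: partitions.PartitionedDataset
      path: sftp://my-sftp-server/outbox/          # hypothetical directory
      dataset:
        type: pandas.CSVDataset
      filename_suffix: "_delta.csv"
      credentials: sftp_creds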
  • Rachid Cherqaoui

    06/23/2025, 7:34 AM
    Hi everyone 👋 I'm currently working with Kedro and trying to load a CSV file hosted on an SFTP server using a `CSVDataset`. Here's the relevant entry from my `catalog.yml`:
    Copy code
    cool_dataset:
      type: pandas.CSVDataSet
      filepath: sftp://my-sftp-server/outbox/DW_Extracts/my_file.csv
      load_args: {}
      save_args:
        index: False
    When I run:
    Copy code
    df = catalog.load("cool_dataset")
    I get the following error: it seems like Kedro/pandas is trying to use `urllib` to open the SFTP URL, which doesn't support the `sftp://` protocol natively. Has anyone successfully used Kedro to load files from SFTP? If so, could you share your config/setup?
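    For comparison, a hedged sketch of a setup that tends to work: `pandas.CSVDataset` goes through fsspec, whose SFTP filesystem needs `paramiko` installed, and the login details travel through a credentials entry rather than the URL (the `sftp_creds` key and variable names are assumptions):
    Copy code
    # catalog.yml
    cool_dataset:
      type: pandas.CSVDataset
      filepath: sftp://my-sftp-server/outbox/DW_Extracts/my_file.csv
      credentials: sftp_creds
      save_args:
        index: False

    # credentials.yml - keys are forwarded to fsspec's SFTPFileSystem
    sftp_creds:
      username: ${oc.env:SFTP_USER}
      password: ${oc.env:SFTP_PASSWORD}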
  • Adrien Paul

    06/23/2025, 5:02 PM
    Hello, in the VS Code Kedro plugin, is it possible to run Kedro-Viz with --include-hooks? Thanks guys 🙏
  • Nathan W.

    06/25/2025, 7:32 AM
    Hello guys, I couldn't find any way to store API keys in a `.env` or `credentials.yml` and then use them in my node parameters to make API requests. Are there any simple solutions I missed (without putting the key in `parameters.yml` and risking pushing it into production...)? Thanks a lot in advance for your response, have a nice day!
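    One common pattern, as a hedged sketch (the `api_creds` key is a hypothetical entry in `conf/local/credentials.yml`, and the class must be registered in `settings.py`): read the key from credentials in a hook and expose it to nodes as an ordinary catalog input:
    Copy code
    # hooks.py - sketch; nodes then declare "api_key" as a regular input
    from kedro.framework.hooks import hook_impl
    from kedro.io import MemoryDataset


    class ApiKeyHooks:
        @hook_impl
        def after_context_created(self, context):
            # "api_creds" is a hypothetical key in conf/local/credentials.yml
            self._key = context.config_loader["credentials"]["api_creds"]["key"]

        @hook_impl
        def after_catalog_created(self, catalog):
            # on Kedro 1.x this can be: catalog["api_key"] = self._key
            catalog.add("api_key", MemoryDataset(self._key))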
  • Fazil Topal

    06/25/2025, 8:24 AM
    hey everyone, I am building a system where I return the key/filepath of the final dataset in the Kedro pipeline. What's the ideal way of doing this? Is there a method that also works for partitioned datasets, where I'd get a list of filepaths? I have a catalog instance, but somehow all the methods are protected, so I'm wondering if I'm missing something obvious here. I was doing `catalog._get_dataset(output)._filepath`, which works only for non-partitioned datasets.
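    For what it's worth, a hedged sketch that distinguishes the two cases; it leans on private, underscore-prefixed attributes, so it may break between Kedro versions:
    Copy code
    # sketch - resolve the filepath(s) behind a catalog entry named `output`
    ds = catalog._get_dataset(output)
    if hasattr(ds, "_filepath"):
        paths = [str(ds._filepath)]                       # plain file-based datasets
    elif hasattr(ds, "_path"):                            # PartitionedDataset
        suffix = getattr(ds, "_filename_suffix", "")
        # load() returns {partition_id: loader}; rebuild one path per partition
        paths = [f"{ds._path}/{key}{suffix}" for key in ds.load()]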
  • Jamal Sealiti

    06/26/2025, 10:14 AM
    Hi, placeholders in catalog.yml aren't working. I have bootstrap_servers: "localhost:9092" in conf/base/parameters.yml, and in my catalog.yml I'm trying to use a placeholder like ${bootstrap_servers}, but I get this error: InterpolationKeyError: Interpolation key 'bootstrap_servers' not found
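    A sketch of the usual fix, assuming default OmegaConfigLoader settings: catalog templating resolves against conf/base/globals.yml through the `globals` resolver, not against parameters.yml (the dataset entry below is hypothetical):
    Copy code
    # conf/base/globals.yml
    bootstrap_servers: "localhost:9092"

    # conf/base/catalog.yml
    kafka_stream:
      type: my_project.datasets.KafkaDataset    # hypothetical custom dataset
      bootstrap_servers: ${globals:bootstrap_servers}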
  • Rachid Cherqaoui

    06/27/2025, 2:20 PM
    hello, how can I pass a credentials argument as an input to a pipeline function?
  • Pradeep Ramayanam

    06/27/2025, 5:34 PM
    Hi all, hope everyone is doing well! I have a weird file structure (as attached) and would love to hear if anyone has solved it before. I tried to solve it as attached, but I am getting the error below: DatasetError: No partitions found in '/data/01_raw/nces_ccd/*/Staff/DataFile'. Any help would be much appreciated, thanks in advance!!
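    A hedged guess at the cause: PartitionedDataset walks `path` as a directory and does not expand `*` as a glob, so one sketch (mirroring the path from the question) is to point at the common root and filter the partition keys inside the node, e.g. keeping only keys that contain `/Staff/DataFile`:
    Copy code
    nces_staff:
      type: partitions.PartitionedDataset
      path: data/01_raw/nces_ccd
      dataset:
        type: pandas.CSVDataset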
  • Rachid Cherqaoui

    06/30/2025, 9:11 AM
    Hi everyone, I have a versioned `.txt` file generated by a Kedro pipeline that I created, and I'd like to send it to a folder on a remote server via SFTP. After several attempts, I found it quite tricky to handle this cleanly within Kedro, especially while keeping things consistent with its data catalog and hooks system. Would anyone be able to help or share best practices on how to achieve this with Kedro? Thanks in advance for your support!
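    One hedged sketch that keeps the transfer inside the catalog: a second, SFTP-backed entry for the same text output, written by a small pass-through node (the host, folder, and credentials key are assumptions, and fsspec's SFTP support needs paramiko installed):
    Copy code
    remote_report:
      type: text.TextDataset
      filepath: sftp://my-remote-server/outbox/report.txt
      credentials: sftp_creds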
  • Jamal Sealiti

    06/30/2025, 11:29 AM
    Hi, I have a kafka -> bronze -> silver -> gold streaming pipeline and I want to see the data from each stage in Kedro-Viz. Is that possible?
  • olufemi george

    07/02/2025, 4:52 PM
    Hello. Newbie here. Please, what's the best practice for using Kedro with Airflow (Astro)? Should I: 1. create two separate projects (Astro and Kedro) and then move the Kedro project files into the Airflow project (where exactly do I put them?), or 2. create the Airflow project and develop the Kedro project within it?
  • minmin

    07/03/2025, 12:51 PM
    Hello, I am using kedro-mlflow and trying to namespace a pipeline at the same time to do a bunch of runs together. When trying to save a metric, it works if I use the namespaced names explicitly in the catalog, i.e.:
    model_1.mae:
      type: kedro_mlflow.io.metrics.MlflowMetricDataset
    model_2.mae:
      type: kedro_mlflow.io.metrics.MlflowMetricDataset
    If, however, I try to template the name in the catalog, it fails:
    "{model_name}.mae":
      type: kedro_mlflow.io.metrics.MlflowMetricDataset
    I get the error message: DatasetError: Failed while saving data to dataset MlflowMetricDataset(run_id=...). Invalid value null for parameter 'name' supplied: Metric name cannot be None. A key name must be provided. Do I just have to avoid templating in the catalog when it comes to mlflow-related entries?
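    A hedged idea rather than a confirmed fix: dataset factories substitute the captured `{model_name}` anywhere in the entry body, so giving the metric an explicit `key` may stop the plugin from depending on the dataset name (untested):
    Copy code
    "{model_name}.mae":
      type: kedro_mlflow.io.metrics.MlflowMetricDataset
      key: "{model_name}.mae"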
  • Adrien Paul

    07/04/2025, 8:42 AM
    Hello, is it possible to use transcoding with the kedro-azureml plugin? I feel like it's not possible... Thanks guys 🙏
  • julie tverfjell

    07/04/2025, 10:20 AM
    Hi! I am wondering if anyone has experience with joining dataframes in Kedro and handling updates to the underlying dataframes? I am doing a stream-batch join, and I want to ensure that any updates to the batch dataframe get propagated into my sink containing the joined data. The way I would want to solve this is to have a separate node that inputs my batch data and merges it into my sink at set intervals. In Kedro it is not possible to have two nodes outputting to the same dataframe. Is there a way to handle this differently? I thought about creating two instances of the batch dataset in the data catalog, which might sidestep the restriction Kedro has on several nodes outputting to the same dataframe, but I don't know if it would be a good solution. To summarize:
    • I have a node that takes a streaming dataframe and a batch dataframe as input.
    • The result is output to a sink (format: delta table).
    • I want my sink to reflect any updates to both data sources after the stream has started.
    • As of now, if there are any changes in the batch data, rows already existing in the sink will not be updated.
    • Also, I want to handle changes no matter when they arrive, so windowing is not an option.
    Any input will be appreciated 🙂
  • Felipe Monroy

    07/08/2025, 4:15 PM
    Hello, I was going through the tutorial on visualizing pipelines in notebooks (YouTube link) and found it quite insightful, especially for data teams that primarily work within notebook environments. However, I'm a bit unsure about the recommended approach for running a pipeline when it's defined directly within a notebook. I understand that normally, executing a pipeline requires initializing a `KedroSession`. But in cases where the `Pipeline` and `DataCatalog` are already defined in the notebook itself, what would be the best practice for running it?
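    One hedged option, assuming the `pipeline` and `catalog` objects already exist in the notebook: run them with a runner directly and skip the session (note that project hooks and session-tracked metadata are bypassed in that case):
    Copy code
    from kedro.runner import SequentialRunner

    # returns a dict of any outputs not persisted by the catalog
    outputs = SequentialRunner().run(pipeline, catalog)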
  • Pooja Mukund

    07/09/2025, 2:34 AM
    Hi all, I'm encountering an encoding-related error (attached) when running a Kedro pipeline on a Windows machine via the command prompt. The same pipeline runs fine for colleagues on macOS, so I suspect this may be a Windows-specific issue. Here is the exact traceback:
    Copy code
    INFO     Running node: export_comparables: export_comparables() -> node.py:370
    [07/08/25 16:22:13] INFO     Saving data to comparables (CSVDataset)...                                                                                                                                                                                  data_catalog.py:445
                        WARNING  There are 3 nodes that have not run.                                                                                                                                                                                        runner.py:344
                                 You can resume the pipeline run from the nearest nodes with persisted inputs by adding the following argument to your previous command:                                                                                                                                                                                                     
                                   --from-nodes "export_comparables,tree_to_lookup_table,tree_to_python"                                                                                                                                                                                                     
    โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\core.py:324 in save             โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro_datasets\pandas\csv_dataset.py:186 โ”‚
    โ”‚ in save                                                                                          โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\util\_decorators.py:333 in        โ”‚
    โ”‚ wrapper                                                                                          โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\core\generic.py:3967 in to_csv    โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\format.py:1014 in      โ”‚
    โ”‚ to_csv                                                                                           โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:270 in save    โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:275 in _save   โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:313 in         โ”‚
    โ”‚ _save_body                                                                                       โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\pandas\io\formats\csvs.py:324 in         โ”‚
    โ”‚ _save_chunk                                                                                      โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ in pandas._libs.writers.write_csv_rows:73                                                        โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\encodings\cp1252.py:19 in encode                       โ”‚
    โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
    UnicodeEncodeError: 'charmap' codec can't encode character '\u2033' in position 21: character maps to <undefined>
    
    The above exception was the direct cause of the following exception:
    
    โ•ญโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€ Traceback (most recent call last) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฎ
    โ”‚ in _run_module_as_main:198                                                                       โ”‚
    โ”‚ in _run_code:88                                                                                  โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ in <module>:7                                                                                    โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\cli\cli.py:263 in main   โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1161 in __call__           โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\cli\cli.py:163 in main   โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1082 in main               โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1697 in invoke             โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:1443 in invoke             โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\click\core.py:788 in invoke              โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\cli\project.py:237 in    โ”‚
    โ”‚ run                                                                                              โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\framework\session\session.py:399   โ”‚
    โ”‚ in run                                                                                           โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\runner.py:131 in run        โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\sequential_runner.py:72 in  โ”‚
    โ”‚ _run                                                                                             โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\runner.py:245 in _run       โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\task.py:88 in execute       โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\runner\task.py:186 in              โ”‚
    โ”‚ _run_node_sequential                                                                             โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\data_catalog.py:452 in save     โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\core.py:858 in save             โ”‚
    โ”‚                                                                                                  โ”‚
    โ”‚ C:\Users\Pooja Mukund\.conda\envs\wwt\Lib\site-packages\kedro\io\core.py:329 in save             โ”‚
    โ•ฐโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ•ฏ
    DatasetError: Failed while saving data to dataset CSVDataset(filepath=C:/Users/Pooja Mukund/OneDrive - McKinsey & Company/Documents/WWT/wwt-periscope/data/assets/comparables.csv, load_args={}, protocol=file, save_args={'encoding': utf-8-sig, 'index': False}).
    'charmap' codec can't encode character '\u2033' in position 21: character maps to <undefined>
    In my catalog.yml (screenshot attached), the encoding is set to utf-8. I've also tried switching it to utf-8-sig, but the error persists. It seems to occur when reading or writing certain datasets, and I suspect it's related to how Windows handles file encoding by default. Has anyone encountered a similar issue when running Kedro on Windows, or have suggestions on how to troubleshoot or resolve this?
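    One hedged thing to check: for the pandas datasets the save-side file handle is opened by fsspec, so the write encoding may need to go under `fs_args.open_args_save` rather than `save_args`, otherwise Windows can fall back to cp1252 (a sketch based on the filepath from the traceback):
    Copy code
    comparables:
      type: pandas.CSVDataset
      filepath: data/assets/comparables.csv
      save_args:
        index: False
      fs_args:
        open_args_save:
          encoding: utf-8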
  • Rashida Kanchwala

    07/09/2025, 8:04 AM
    Question for the community: has anyone used Kedro with the Databricks Feature Store? Would love to learn more about how you integrated the two.
  • Rachid Cherqaoui

    07/09/2025, 8:43 AM
    Hi everyone 👋 I'm running into an issue and would appreciate your insights. I'm getting the following error: However, the file mentioned doesn't exist by design: my code is supposed to create it later only if there's data to write. In some cases, the process can return an empty DataFrame, so there's simply nothing to save. Have you encountered this kind of situation before? Do you have any suggestions on how to properly handle this case, either by checking upstream or conditionally skipping the writing step? Thanks a lot in advance! 🙏
  • Pradeep Ramayanam

    07/10/2025, 3:33 PM
    Hi team! How can others join this workspace? I invited a few folks I know to be added to the workspace, but that didn't work.
  • Pradeep Ramayanam

    07/10/2025, 6:52 PM
    Also team! I am facing a weird error and it started all of a sudden; nothing changed code-wise: TypeVarTuple.__init__() got an unexpected keyword argument 'default'. Dataset 'RAW_CAMPUS' must only contain arguments valid for the constructor of 'kedro.extras.datasets.pandas.csv_dataset.CSVDataSet'. Below is the catalog entry:
    Copy code
    RAW_CAMPUS:
      type: pandas.CSVDataSet
      filepath: s3://464-shapeeda/raw/campus/Campus.csv
    Any help is much appreciated!!
  • Zubin Roy

    07/14/2025, 10:40 AM
    Hi team. I am encountering an error whenever I try to save a file using polars. I am able to load the file in fine as a polars dataframe, but when it comes to saving it, the code always errors out with the below; I've also shown the catalog entry as well. I have tried this with the EagerPolarsDataset and get the same result. Any help or advice would be appreciated. Catalog Entry:
    Copy code
    df_2:
      type: polars.LazyPolarsDataset
      filepath: data/01_raw/test.parquet
      file_format: parquet
    Error
    Process finished with exit code 139 (interrupted by signal 11:SIGSEGV)
  • Guillaume Tauzin

    07/22/2025, 7:12 AM
    Hi Team 🙂 (CC @Antony Milne @Max S @Petar Pejovic as this may interest you!) I have a question regarding the possibility of using dataset factories without a pipeline that defines each single dataset, in the context of a Vizro dashboard. Basically, I have defined dataset factories for use only in my Vizro dashboards, to load files whose names I do not know a priori, using a pattern. For now I use a partitioned dataset that gives me all the available files with their loader functions, but at a significant time cost, as there are potentially millions of files. Is it possible? One way could simply be to create a dummy pipeline that uses the dataset I need and ask the catalog for the load function, but I wanted to know if there's a nicer way! Just to make it clear to anyone outside of the team, Vizro is a (really nice 🙂) Python dashboarding framework built on top of Plotly Dash and Pydantic, which has an integration with the Kedro data catalog and built-in data caching. Thanks a lot :)
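    For reference, a hedged sketch of resolving a factory entry without building a pipeline: dataset patterns are matched lazily when a concrete name is requested from the catalog, so asking for the name directly may be enough (the project path and dataset name are hypothetical):
    Copy code
    from kedro.framework.session import KedroSession

    with KedroSession.create(project_path=".") as session:
        catalog = session.load_context().catalog
        # "sales_2025_06" is matched against the catalog's factory patterns
        data = catalog.load("sales_2025_06")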
  • Yury Fedotov

    07/23/2025, 2:40 AM
    Hi team! I have a CI tool in my project that checks that hyperlinks in docs don't point to 404 pages, and today it started complaining about some links we had to the Kedro `stable` docs, perhaps because they were deprecated/renamed with the 1.0 release. This is fine, we can indeed update them. I'm just trying to decide if we should do this now or wait a bit until the docs stabilize. Question: do you expect the 1.0 docs to stay ~as is in terms of URL structure, or are there substantial changes coming soon?
  • Jamal Sealiti

    07/23/2025, 12:45 PM
    Hi, I'm trying to load parameters from conf/base/globals.yml:
    Copy code
    config_loader = OmegaConfigLoader(conf_source=str(Path.cwd() / "conf"))
    # Load all matching globals configs
    globals_config = config_loader.get("globals")
    but I get {}
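    For comparison, a minimal sketch of what usually works, assuming a standard project layout: without explicit `base_env`/`default_run_env` arguments the loader may not look inside conf/base at all, which would explain the empty result:
    Copy code
    from pathlib import Path

    from kedro.config import OmegaConfigLoader

    config_loader = OmegaConfigLoader(
        conf_source=str(Path.cwd() / "conf"),
        base_env="base",          # read conf/base/globals.yml
        default_run_env="local",  # plus any conf/local overrides
    )
    globals_config = config_loader["globals"]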
  • Galen Seilis

    07/23/2025, 10:03 PM
    I just want to double-check something. I 'think' that when a node function accesses the parameters, it only uses a copy of the parameters. Is that correct? I was tinkering with Kedro v1.0.0 and I was not able to change the value of a given parameter. IMO it is desirable for node functions to be unable to modify the parameters.
  • Felipe Monroy

    07/25/2025, 3:12 AM
    Hello! Is there a way to modify a node's inputs using hooks? I'm not sure if this is the best approach, but I need to perform some operations on AWS Personalize using boto3, so my nodes will require the Personalize client as an input. Ideally, I'd like to inject the client rather than initialize it separately within each node.
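    Rather than rewriting node inputs in a hook, one hedged alternative sketch registers the client as a catalog entry that nodes declare as a normal input (the class and dataset names are made up):
    Copy code
    # hooks.py - sketch: expose a boto3 Personalize client to nodes
    import boto3
    from kedro.framework.hooks import hook_impl
    from kedro.io import MemoryDataset


    class PersonalizeHooks:
        @hook_impl
        def after_catalog_created(self, catalog):
            client = boto3.client("personalize")
            # copy_mode="assign" hands out the same client instead of copying it
            catalog.add("personalize_client", MemoryDataset(client, copy_mode="assign"))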
  • Filip Isak Mattsson

    07/25/2025, 10:12 AM
    Hello and happy Friday, quick question: when I upgraded from 0.19.14 to 1.0.0, it seems like DataCatalog.add() was removed. This causes problems with how I create a hook with an abstract dataclass wrapper for a Snowpark session. What is the new way to handle this? 🙂 All help welcome. Never mind, fixed it by using dictionary assignment, haha. Feel free to remove this or leave it up for future reference.
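    For future reference, the replacement looks roughly like this (the dataset name is hypothetical); in Kedro 1.0 the catalog supports dictionary-style assignment and wraps raw objects in a MemoryDataset:
    Copy code
    # Kedro 1.0 - instead of catalog.add("snowpark_session", dataset)
    catalog["snowpark_session"] = snowpark_session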
  • Adam

    07/25/2025, 8:43 PM
    Kedro v1.0.0 is looking 🔥 But I just tried installing `kedro-mlflow` and it uninstalled v1 and re-installed v0.19 - will it be updated to use v1 at some point?