# ask-anything
i
So far the best solution I've found is to create a separate module with functions A & B, declare tasks A & B within both the training and inference pipelines with different parameters, and import the module functions within each task. This way, the training and inference tasks A & B stay identical, but we can apply different parameters to each pipeline. The downside is that separate training and inference tasks A & B still need to be declared. Do any of you use a different approach in this type of scenario?
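For reference, here's a minimal sketch of that setup (module, task, and file names are hypothetical): a shared module exposes the common functions, and each pipeline declares its own task entry pointing at them with different parameters.

```yaml
# pipeline.train.yaml (hypothetical)
tasks:
  - source: shared_tasks.task_a   # same function as in the inference pipeline
    product: output/train_a.parquet
    params:
      some_param: train_value

# pipeline.inference.yaml (hypothetical)
tasks:
  - source: shared_tasks.task_a   # identical source, different parameter
    product: output/inference_a.parquet
    params:
      some_param: inference_value
```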
e
you have two options:
1. since it's only two tasks that are shared, you could declare each of them in both the training and inference pipelines, then pass different parameters
2. if you want to stick to `import_tasks_from`, you can define placeholders in the parameters:
```yaml
source: some-task
params:
  # define a placeholder
  some_param: '{{some_param}}'
```
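For completeness, each pipeline would then pull in that shared tasks file via `import_tasks_from` in its `meta` section (the file name here is an assumption):

```yaml
# pipeline.yaml
meta:
  import_tasks_from: shared_tasks.yaml
```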
then, you could have `env.train.yaml` and `env.inference.yaml` and define the values there:
```yaml
some_param: some_value
```
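as a concrete sketch, the two env files would differ only in the value they assign (values here are hypothetical):

```yaml
# env.train.yaml
some_param: train_value

# env.inference.yaml
some_param: inference_value
```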
to switch between the two envs, you can set the environment variable before running the pipeline:
```sh
export PLOOMBER_ENV_FILENAME=env.train.yaml && ploomber build
```
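and the inference counterpart:

```sh
export PLOOMBER_ENV_FILENAME=env.inference.yaml && ploomber build
```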
does this work?
i
Thanks for your quick reply, Eduardo. This is very helpful!
e
no problem. if this doesn't work for you, let me know!