# ask-anything
Eduardo
Assuming you're using ploomber + soopervisor with the Argo backend, here are a few thoughts: ploomber + Argo works fine on GCP. Regarding the pipeline's artifacts, you have two options. First, you can configure cloud storage: ploomber uploads each task's outputs to a bucket and the next task downloads them to use as inputs. This is simple to set up but slows execution a bit, since every task has to download its inputs before starting. The second option is to configure a shared disk so all pods share the same filesystem; with that config you don't need to upload/download artifacts at all. Our tutorial covers both use cases. Let me know if you have questions!
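For reference, option 1 is mostly configuration. A minimal sketch of the client function that pipeline.yaml's `clients` section would point at (this assumes ploomber's `GCloudStorageClient`; the bucket and prefix names are placeholders, and the exact parameters are worth double-checking against the ploomber docs):

```python
# Sketch of option 1 (cloud storage for artifacts). Assumes ploomber's
# GCloudStorageClient; verify the exact signature in the ploomber docs.
# pipeline.yaml would reference this function, e.g.:
#   clients:
#     File: clients.get_client
from ploomber.clients import GCloudStorageClient


def get_client():
    # "my-bucket" and "my-pipeline" are placeholders: the GCS bucket to
    # upload artifacts to and the folder (prefix) inside it.
    return GCloudStorageClient(bucket_name="my-bucket", parent="my-pipeline")
```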
Kevin
Thanks for the breakdown, Eduardo. Looking into the tutorials now. I'd probably go with option 2: share a filesystem and delete the intermediate artifacts after each run.
👍 1
Eduardo
Sure, let me know once you get that working; happy to help with the next steps to streamline the dev/deploy workflow 🙂
👍 1
And if you have questions about the tutorial, feel free to post them on this channel.
Ido
Hey Kevin, how did it go? Any blockers?
Kevin
Hey Ido, Eduardo's method works great! We went with option 2: Argo Workflows with a shared attached disk for the intermediate artifacts. Running workflow pipelines is the first step; orchestrating them with proper resources is the next challenge. We use these pipelines to serve users, so at spikes, for every 1 unit of CPU/RAM we have, we might receive workflows that require 3-5 units in total. To serve all of the workflow requests, we'll need an intelligent queue control system.

The current challenge is that Argo Workflows has no queues, which causes two problems for us. First, if a pod's resources are fully utilized, incoming workflows cannot be accepted. Second, assuming we use some form of queue, if a pod's resources are only partially utilized and can't fit the bigger workflows, we haven't found an "intelligent broker" that can slot in the smaller workflows (smaller datasets = smaller workflows) waiting in the queue. E.g.:

Queue = 1, 2, 3, 4, 5
Pod 1 = 3/5 resources used -> could fit 1 or 2 here, but not 3, 4, 5
Pod 2 = 2/5 resources used -> could fit 1, 2, or 3 here, but not 4, 5

Would you suggest any approach? @Shamiul Islam Shifat for your reference
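To make that concrete, here's a toy sketch of the best-fit broker logic described above (all names and numbers are hypothetical and just mirror the example; a real broker would also need memory and per-workflow resource estimates):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Workflow:
    name: str
    cpu: float  # CPU units the workflow requests


@dataclass
class Pod:
    name: str
    capacity: float
    used: float

    @property
    def free(self) -> float:
        return self.capacity - self.used


def pick_next(queue: List[Workflow], pod: Pod) -> Optional[Workflow]:
    """Return the largest queued workflow that fits the pod's spare
    capacity (best-fit), or None if nothing fits."""
    candidates = [w for w in queue if w.cpu <= pod.free]
    return max(candidates, key=lambda w: w.cpu, default=None)


# The example above: a queue of workflows sized 1..5 and two partially used pods.
queue = [Workflow(f"wf-{i}", cpu=i) for i in range(1, 6)]
pod1 = Pod("pod-1", capacity=5, used=3)  # 2 of 5 CPU free
pod2 = Pod("pod-2", capacity=5, used=2)  # 3 of 5 CPU free

for pod in (pod1, pod2):
    chosen = pick_next(queue, pod)
    if chosen:
        print(f"{pod.name}: schedule {chosen.name} ({chosen.cpu} CPU)")
        queue.remove(chosen)
        pod.used += chosen.cpu
    else:
        print(f"{pod.name}: nothing in the queue fits")
```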
🙌 1
Eduardo
Did some quick research. Yeah, Argo doesn't have a notion of a queue, so based on this it looks like you need to check the resources before submitting the workflow - check out the answer, it provides a few interesting points. I'm no k8s expert, but I'm guessing there's a way to know how many resources are available, then use that to decide whether to accept or reject a workflow submission?
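Something along these lines with the official Kubernetes Python client could be a first pass at that check (a sketch only: it counts CPU requests across all pods, ignores memory, pod phase, and per-node packing, and the incoming workflow's CPU figure is a placeholder):

```python
# Rough sketch of "check available resources before submitting a workflow"
# using the official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config


def parse_cpu(value: str) -> float:
    """Convert Kubernetes CPU quantities ('500m', '2') to a float of cores."""
    return float(value[:-1]) / 1000 if value.endswith("m") else float(value)


def free_cpu() -> float:
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    v1 = client.CoreV1Api()

    # Total CPU the cluster can allocate across all nodes.
    allocatable = sum(
        parse_cpu(node.status.allocatable["cpu"]) for node in v1.list_node().items
    )

    # Total CPU already requested by running pods.
    requested = 0.0
    for pod in v1.list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            requests = (container.resources.requests or {}) if container.resources else {}
            requested += parse_cpu(requests.get("cpu", "0"))

    return allocatable - requested


if __name__ == "__main__":
    workflow_cpu = 3  # hypothetical CPU request of the incoming workflow
    print("submit" if free_cpu() >= workflow_cpu else "queue / reject")
```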
👀 1
Kevin
Okay gotcha, I'll look into this. Yes, knowing how many resources are available should be 100% doable. The tricky part is knowing the resources required for every workflow in the queue, so we can intelligently fit the smaller workflows onto pods on nodes with spare resources.
Eduardo
Let us know how it goes!