# bolt
w
for example: do a mylvmsnapshot on many hosts in parallel. On those that were successful, run a mariadb upgrade plan; on the rest, return an error message so that I may investigate and rerun
y
I don’t have anything under my hands now, but I did similar things at my ex-job. IIRC I was using `parallelize` + `background` + `wait`.
Though, if your action is just a single task/command, you don’t need all that.. just do `run_task($targets)`, then continue with `ok_set`.
In my case I was performing multiple things on every target in parallel; that’s why I was using `parallelize`.
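A rough sketch of the `background` + `wait` pattern mentioned above, assuming Bolt's `background()` and `wait()` plan functions; the plan and task names here are made up for illustration:

```puppet
# Kick off a multi-step sequence per target in the background,
# then wait for every future to finish before moving on.
plan mymodule::multi_step (
  TargetSpec $targets,
) {
  $futures = get_targets($targets).map |$target| {
    background() || {
      run_task('mymodule::snapshot', $target)
      run_task('mymodule::backup', $target)
    }
  }
  wait($futures)
}
```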
m
Yeah, once you have a `ResultSet` with all the targets, you can use `.error_set` and `.ok_set` to filter out the ones that failed vs. succeeded, respectively
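A minimal sketch of that filtering pattern; the `mymodule::*` task names are hypothetical stand-ins:

```puppet
plan mymodule::upgrade_all (
  TargetSpec $targets,
) {
  # _catch_errors keeps the plan running past individual failures
  $results = run_task('mymodule::upgrade', $targets, _catch_errors => true)

  # Continue only with the targets that succeeded
  run_task('mymodule::post_upgrade', $results.ok_set.targets)

  # Report the rest so they can be investigated and rerun
  $results.error_set.each |$r| {
    out::message("Failed on ${r.target.name}: ${r.error.msg}")
  }
}
```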
w
thanks, that is pretty much what we settled on. I am always looking for ways to improve it though. for instance, if the plan includes a reboot, ALL hosts have to come back before ANY hosts can do the next thing. Is parallelize still considered a beta feature?
y
look around the module for some ideas 🙂
w
i like this idea of a sub-plan. way too many examples here where we have a 'task' that does too many things
y
as far as I see, both `parallelize` and `background` are GA now
m
Parallelize is more or less here to stay, but note that if you are planning on using this with PE, those functions won't work there as of right now
y
you can do `parallelize($targets)`, then you won’t be blocked on the reboot
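A sketch of what that could look like, assuming the puppetlabs-reboot module's `reboot` plan is installed; the `mymodule::*` names are hypothetical:

```puppet
# Each target moves through its own snapshot -> upgrade -> reboot
# sequence independently, so one host's slow reboot doesn't hold
# up the others.
plan mymodule::rolling_upgrade (
  TargetSpec $targets,
) {
  parallelize(get_targets($targets)) |$target| {
    run_task('mymodule::snapshot', $target)
    run_task('mymodule::upgrade', $target)
    run_plan('reboot', 'targets' => $target)
    run_task('mymodule::post_checks', $target)
  }
}
```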
w
thanks. We probably won't drift from open-source Puppet
y
it’ll take some time to wrap your head around but it should do the trick 🙂
w
ok, I think I have it. We have a task that takes a hostname and a status message, makes some kind of pretty output, and sends it to a dashboard. For various failure scenarios we use it to say host:xxx - failed the upgrade, or host:xyz - started backups, host:abc - backup complete, upgrade starting, that kind of thing. We don't want to wait until all the hosts in a batch are done with their backup before seeing in the dashboard which ones failed (possibly early on in the process). So we end up using parallelize like this:
```puppet
parallelize(get_targets($correct_version_targets)) |$target| {
  $log_check_results = run_task('projectname::somecomplextask', $target, _catch_errors => true)
  $log_check_results.each |$result| {
    $t = $result.target.name
    if $result.ok {
      run_task('xneelo_pixar::update_status', localhost, {
        server            => $t,
        position          => '99',
        message           => 'done',
        database_password => $database_password,
      })
    } else {
      run_task('xneelo_pixar::update_status', localhost, {
        server            => $t,
        position          => '90',
        message           => 'post_checks_failed',
        database_password => $database_password,
      })
    }
  }
}
```
the idea being that each host will try to do the complex task, then run the dashboard updater as soon as it is done (whether successful or not). Am I right in saying that we could probably get the same result by creating what the earlier example called a subplan, and in that, have 3 steps: 1. do the complex task, 2. filter the results of the complex task and run a dashboard updater task on each filtered result, 3. profit?
y
A subplan is just a way of organising tasks, I’d say.. kind of like a function
You can make your example shorter by calling the update-dashboard task with a selector in the message parameter
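A sketch of that shortening, using Puppet's selector expression on `$result.ok` to pick the parameter values (same names as the example above):

```puppet
parallelize(get_targets($correct_version_targets)) |$target| {
  $log_check_results = run_task('projectname::somecomplextask', $target, _catch_errors => true)
  $log_check_results.each |$result| {
    # One call covers both outcomes; the selectors choose the values
    run_task('xneelo_pixar::update_status', localhost, {
      server            => $result.target.name,
      position          => $result.ok ? { true => '99', default => '90' },
      message           => $result.ok ? { true => 'done', default => 'post_checks_failed' },
      database_password => $database_password,
    })
  }
}
```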
w
doh! of course
thanks