# atlantis-community
p
a lot of people do that
especially users with Terragrunt
since they use the terragrunt-atlantis-config tool to do that
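for example, running something like this in a pre-workflow hook (a sketch; exact flags can vary by terragrunt-atlantis-config version, so check `generate --help`):
```sh
# sketch: regenerate atlantis.yaml from the terragrunt tree before each run
terragrunt-atlantis-config generate --output atlantis.yaml --autoplan --parallel
```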
j
no Terragrunt here, but a large monorepo that's taking so long to plan/apply that my EKS tokens are expiring
trying to get a pre-workflow hook working, using the example script on Atlantis's docs site. but the image doesn't include yq so I'm working on getting that in place atm
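the server-side wiring I'm trying is roughly this (the script name is just what I called mine):
```yaml
# server-side repos.yaml: run the generator before Atlantis reads repo config
repos:
  - id: /.*/
    pre_workflow_hooks:
      - run: ./repo-config-generator.sh
```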
p
yes, whatever you need to run the script will have to be added to the container
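e.g. by baking yq into a custom image on top of the official one; a minimal sketch (the release URL and arch here are assumptions, pin a version in practice):
```sh
# in a Dockerfile RUN step on top of the official atlantis image
wget -qO /usr/local/bin/yq \
  https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
chmod +x /usr/local/bin/yq
```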
j
hmm, script works locally, errors out as a pre-workflow hook
p
works locally, but do you have all the tools you need to run the script in the Atlantis container?
j
i found the issue, it's still related to yq
lmao, that works, but now random projects get
```
╷
│ Error: Failed to install provider
│ 
│ Error while installing hashicorp/aws v5.6.2: open
│ /atlantis-data/plugin-cache/registry.terraform.io/hashicorp/aws/5.6.2/linux_amd64/terraform-provider-aws_v5.6.2_x5:
│ text file busy
╵
```
always something
anddd i see the issue on github
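for anyone hitting this later: a shared plugin cache isn't safe for concurrent `terraform init` runs, so one workaround (assuming you can live with slower runs) is keeping parallelism off while `TF_PLUGIN_CACHE_DIR` is shared:
```yaml
# atlantis.yaml: serialize plans/applies while the plugin cache is shared
version: 3
parallel_plan: false
parallel_apply: false
```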
r
@Justin S do you have a working example for generating the atlantis.yaml file? we are also not using Terragrunt
j
we stopped using it, since it makes the pod's memory/cpu usage go insane... but i think i have it around
```sh
❯ bat -p repo-config-generator.sh
#!/usr/bin/env bash
shopt -s globstar
set -euo pipefail

# Write the top-level config; project entries are appended under "projects:" below.
printf 'version: 3\nparallel_plan: false\nparallel_apply: false\nprojects:\n' > atlantis.yaml

# Find every directory that declares an S3 backend and emit a project entry for it.
grep -P 'backend\s+"s3"' **/*.tf |
  rev | cut -d'/' -f2- | rev |  # strip the filename, keeping the directory path
  sort -u |
  while read -r d; do
    echo '[{"name": "'"$d"'", "dir": "'"$d"'", "autoplan": {"when_modified": ["*.tf", "**/*.tpl", "**/*.yaml.tpl", ".terraform.lock.hcl"]}}]' | yq -PM >> atlantis.yaml
  done
```
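the generated file ends up looking roughly like this (paths are made-up examples):
```yaml
version: 3
parallel_plan: false
parallel_apply: false
projects:
- name: accounts/prod/vpc
  dir: accounts/prod/vpc
  autoplan:
    when_modified:
      - "*.tf"
      - "**/*.tpl"
      - "**/*.yaml.tpl"
      - ".terraform.lock.hcl"
```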
r
so running this script makes mem/cpu go crazy? why? what was the alternative? BTW we have 50 GitHub repos, each with its own Terraform project, and each Terraform project has 4 Terraform workspaces
p
I had this in my setup and never had issues with memory
if you had many, many files and dirs it could have been slow
Justin's case is not the same as yours, Roi
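for your layout, something per repo like this is closer, one project entry per workspace (names here are placeholders):
```yaml
# repo-level atlantis.yaml for a repo with 4 workspaces
version: 3
projects:
  - name: myproject-dev
    dir: .
    workspace: dev
  - name: myproject-staging
    dir: .
    workspace: staging
  - name: myproject-qa
    dir: .
    workspace: qa
  - name: myproject-prod
    dir: .
    workspace: prod
```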
j
we have large monorepos for entire AWS accounts
running in a Kubernetes pod.