# terraform
  • Wade

    09/17/2025, 1:46 AM
    I would love to understand your reasoning around provider pinning. I know HashiCorp's current recommendations are 1. use minimum constraints (>=) for modules and 2. use pessimistic semver constraints (~>) for root modules, but which of those does an Atmos terraform component fit into? We are building our own components for a brownfield deployment and have based all of our components on the Cloud Posse example module template, which uses the Cloud Posse test-harness to ensure that provider versions are pinned with only minimum constraints. However, there are cases like https://medium.com/@mr.ryanflynn/why-hard-pinning-terraform-provider-versions-is-essential-a-lesson-from-an-aws-eks-issue-a03928ae410f and recommendations from seasoned Terraform users on Reddit suggesting that versions should always be hard-pinned with =. I can also see the test-harness did allow pessimistic semver constraints at some point; I just can't see why it was allowed or why it was changed. We are also exploring the idea of using a component repo as either a component (root module) or a module (e.g. an EKS component that includes the generic IAM component as a module to add roles using the cluster's own OIDC provider, so we don't have to call the IAM component from Atmos a second time).
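    For reference, the three constraint styles under discussion look like this in a required_providers block (the provider and version numbers here are illustrative):
    Copy code
    terraform {
      required_providers {
        aws = {
          source = "hashicorp/aws"

          # 1. Minimum constraint (HashiCorp's guidance for shared/child modules):
          # version = ">= 5.0"

          # 2. Pessimistic constraint (guidance for root modules): any 5.x at or above 5.30
          # version = "~> 5.30"

          # 3. Hard pin (what the linked article and some Reddit threads advocate):
          version = "= 5.30.0"
        }
      }
    }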
  • Robert Wiesner

    09/18/2025, 12:22 PM
    I am trying to upgrade terraform-aws-msk-apache-kafka-cluster from v1.4.0 to v2.5.0. The plan shows that the whole MSK cluster needs to be replaced. Is there any guideline on how to handle or avoid that?
    Copy code
    # module.kafka.module.kafka.aws_msk_cluster.default[0] must be replaced
    -/+ resource "aws_msk_cluster" "default" {
    There is a guide for older releases that looks similar: https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/blob/main/docs/migration-0.7.x-0.8.x+.md, and it looks like this issue has some guidance: https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/issues/93
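    If the replacement is caused by resources being renamed or re-nested inside the new module version (the situation the 0.7.x-0.8.x guide handles with state moves), a moved block in the root configuration can record the rename so the cluster is not destroyed and recreated. The addresses below are hypothetical; take the real ones from the plan output:
    Copy code
    # Hypothetical sketch: tell Terraform the existing cluster's state entry now
    # lives at a new address, instead of letting it plan a destroy-and-recreate.
    moved {
      from = module.kafka.aws_msk_cluster.default
      to   = module.kafka.module.kafka.aws_msk_cluster.default
    }
    If the plan instead flags a specific argument with "forces replacement", a moved block will not help; that attribute change has to be reconciled, as discussed in issue #93.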
  • Yangci Ou

    09/18/2025, 3:20 PM
    Hey all! We're working through an IAM role delegation pattern for a central primary role (for Spacelift/Terraform executions), which would then assume into downstream account roles. The setup:
    • Primary role in the "Identity" account (Spacelift or any automation system like GHA assumes this)
    • The primary role can then assume into delegated admin roles in downstream accounts (trust policy allow)
    • The delegated roles have admin permission in their respective accounts
    But now, if we want to run Terraform locally, how do we assume into the primary role in the Identity account?
    1. Chain roles: users authenticate via AWS SSO -> primary TF admin role in the Identity account. How do we do this? With Leapp we can do it via chained roles, but for the local CLI we'd have to add an extra step to assume the role via the AWS CLI.
    2. Skip the primary role: the delegated roles' trust policies directly allow the AWS SSO admin role in the Identity account.
    This is very similar to Cloud Posse's architecture guide, https://docs.cloudposse.com/layers/identity/centralized-terraform-access/, but going from the Permission Set to the intermediary primary role in the Identity account, how is that assumption usually done? Is an additional AWS CLI command the best option? I'm not sure which is the best path.
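    One way to avoid the extra manual assume-role step for local runs is to let the AWS provider do the chaining itself: authenticate with an AWS SSO profile that lands in the Identity account, and have the provider assume the primary role from there. A minimal sketch, with the profile name and role ARN as placeholders:
    Copy code
    provider "aws" {
      # Placeholder: an AWS SSO profile that resolves to the Identity account permission set
      profile = "identity-sso"

      # The provider performs the hop into the primary automation role,
      # so no separate `aws sts assume-role` step is needed on the CLI.
      assume_role {
        role_arn     = "arn:aws:iam::111111111111:role/primary-terraform"
        session_name = "local-terraform"
      }
    }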
  • idanl lodzki

    09/19/2025, 9:59 PM
    Hi everyone, I’m Idan. I’m working on an open-source project that helps monitor and control everything in an organization, with integrations to third-party tools. We’re looking for someone with Terraform experience to contribute code and help automate a demo environment so users can try it out quickly. Stars are of course very welcome ⭐ Check it out here: https://github.com/OpsiMate/OpsiMate
  • Zapier

    09/22/2025, 4:00 PM
    Join us for "Office Hours" every Wednesday 01:30 PM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions on #terraform and get to know others in the community on a more personal level. Next one is Oct 01, 2025 01:30 PM.
    👉 Register for Webinar
    #CHDR1EWNA (our channel)
  • Jackie Virgo

    09/24/2025, 3:51 PM
    Super random question: why do some Cloud Posse modules support passing a permissions boundary but not a path? I don't want to act like I have a ton of knowledge here, but in my corporate experience, if a permissions boundary is required, so is a path. I have run into this with both the EC2 & Lambda modules.
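    For context, the two settings in question look like this on a plain IAM role (the names and boundary ARN are placeholders); the ask is essentially for modules that already expose permissions_boundary to also expose path:
    Copy code
    resource "aws_iam_role" "example" {
      name                 = "example-role"
      path                 = "/delegated/" # often mandated alongside a boundary
      permissions_boundary = "arn:aws:iam::111111111111:policy/org-permissions-boundary"

      assume_role_policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Effect    = "Allow"
          Action    = "sts:AssumeRole"
          Principal = { Service = "ec2.amazonaws.com" }
        }]
      })
    }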
  • Zapier

    09/29/2025, 4:00 PM
    Join us for "Office Hours" every Wednesday 01:30 PM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions on #terraform and get to know others in the community on a more personal level. Next one is Oct 08, 2025 01:30 PM.
    👉 Register for Webinar
    #CHDR1EWNA (our channel)
  • MichaelM

    10/01/2025, 10:13 AM
    Hi all, I need some clarification around licensing with Terraform (HashiCorp's BUSL-1.1) and our own Terraform code. I'm trying to understand which parts are covered by HashiCorp's license vs. what's governed by how we license our own code. Specifically:
    1. If we give a client access (via a GitLab token) so they can run our Terraform code to create AWS resources in their account, do we need any special licensing from HashiCorp?
    2. If the client clones our Terraform code and uploads it into their own repo (to run infra for themselves), does that raise any BUSL issues, or is it purely about how we choose to license our own Terraform modules?
    3. If we don't want clients to freely reuse or redistribute our Terraform code outside of the engagement, should we explicitly add a proprietary/custom license to our repo, or is "no license file" enough protection?
    Would love some guidance so we make sure we're both compliant with HashiCorp's BUSL and clear about our own IP boundaries. 🙏
  • Mateusz Loskot

    10/01/2025, 7:08 PM
    Anyone infra'ing AKS with Terraform and AzureRM 3.x and unexpectedly seeing their Windows nodes being forcibly replaced despite no changes to config or code? I've just gone into panic mode and reported this: https://github.com/hashicorp/terraform-provider-azurerm/issues/30757
  • shannon agarwal

    10/01/2025, 11:20 PM
    Need some help here. I have never used Spacelift before; I have my GitHub repo added to it and the stack created, but now I'm trying to create a test S3 bucket using Terraform. Any guides would be appreciated.
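    For a first Spacelift run, a minimal root module like this is usually enough to confirm the stack can plan and apply (the region and bucket name are placeholders, and the bucket name must be globally unique):
    Copy code
    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = ">= 5.0"
        }
      }
    }

    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_s3_bucket" "test" {
      bucket = "my-spacelift-test-bucket-example" # placeholder; must be globally unique
    }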
  • shannon agarwal

    10/02/2025, 4:12 PM
    Is anyone able to provide any guidance?
  • Nayeem Mohammed

    10/02/2025, 8:42 PM
    Hey guys, looking to get some help with this Terraform module: https://github.com/cloudposse/terraform-aws-codebuild/tree/main I am creating CodeBuild projects using Atmos and the above module, and it's creating env vars by default which I have not defined, and I'm unable to exclude them. Any ideas? These are the env vars it currently adds:
    Copy code
    AWS_REGION       us-east-1  PLAINTEXT
    AWS_ACCOUNT_ID   11111111   PLAINTEXT
    IMAGE_REPO_NAME  UNSET      PLAINTEXT
    IMAGE_TAG        latest     PLAINTEXT
    I want to exclude the IMAGE_REPO_NAME and IMAGE_TAG variables.
  • Michael Galey

    10/06/2025, 6:05 PM
    Hey guys, I haven't updated my Terraform for a few years. What's the best practice for this? I'm using Terragrunt and trying to get clean tflint output, etc. The directories are like:
    domains/domain1.com/main.tf
    domains/terraform.tfvars
    production/productionapp1/main.tf
    production/terraform.tfvars
    modules/
    I am currently using domains/terraform.tfvars to define various security rules/whitelist IPs that are used in some domains but not all, so if I don't define/use them in each domain folder, Terraform or tflint shows a warning about it. Should I just disable the linting rules for that? Otherwise I'm considering a modules/shared_variables module, so domain1/main.tf can use shared_variables.whitelist_ips if it needs to, and it shouldn't throw warnings about unused outputs.
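    The shared-variables idea can be as small as a module that only declares outputs, which each domain opts into where needed (the paths and values below are illustrative):
    Copy code
    # modules/shared_variables/outputs.tf
    output "whitelist_ips" {
      description = "IPs allowed through the shared security rules"
      value       = ["203.0.113.10/32", "198.51.100.0/24"]
    }

    # domains/domain1.com/main.tf
    module "shared_variables" {
      source = "../../modules/shared_variables"
    }

    # Referenced only in the domains that need it, so nothing unused is declared:
    # cidr_blocks = module.shared_variables.whitelist_ips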
  • MichaelM

    10/08/2025, 8:33 AM
    Has anyone found a way to destroy/terminate namespaces created by the Terraform resource kubernetes_namespace when they get stuck in the Terminating state? Right now, the only thing that seems to work is manually clearing the finalizers, like this:
    kubectl get ns "$ns" -o json | jq 'del(.spec.finalizers)' | kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
    Just wondering if anyone's found a cleaner or automated way to handle this with Terraform?
  • paulm

    10/08/2025, 7:43 PM
    Office Hours ran long (always worthwhile, thank you to CloudPosse and everyone who contributes!), so I didn't get to ask this question… What Terraform version adoption statistics have people found? I want to balance compatibility and modernity when distributing open-source modules. Would I be shooting myself in the foot with minimums of Terraform v1.10 from November 2024 (S3 state locking without DynamoDB) and AWS provider v6.0 from June 2025 (multi-region without provider repetition), because typical users are slow to upgrade? At work, in multiple fairly large shops, I'm ecstatic 🥲 when I see Terraform v1.x and AWS provider v5. Thanks for any advice!
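    For reference, the v1.10 feature being weighed is S3-native state locking, which drops the DynamoDB table (the bucket, key, and region below are placeholders):
    Copy code
    terraform {
      required_version = ">= 1.10.0"

      backend "s3" {
        bucket       = "example-tfstate-bucket"
        key          = "network/terraform.tfstate"
        region       = "us-east-1"
        use_lockfile = true # S3-native locking, no DynamoDB table required (Terraform >= 1.10)
      }
    }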
  • will

    10/13/2025, 12:53 PM
    Hi, I'm using the ECR AWS module (https://registry.terraform.io/modules/cloudposse/ecr/aws/latest). I would like some clarification on the max_image_count and protected_tags_keep_count parameters. 1. Does max_image_count exclude the images with protected tags? 2. Is protected_tags_keep_count per unique tag? We've had some issues with deployed tags being cleaned up and I want to make sure I fully understand these 2 settings. Thanks.
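    For reference, the two settings sit on the module call roughly like this (the values are illustrative); the comments capture the open questions above:
    Copy code
    module "ecr" {
      source = "cloudposse/ecr/aws"

      name                      = "app"
      max_image_count           = 500                 # does this count exclude protected-tag images?
      protected_tags            = ["prod", "release"]
      protected_tags_keep_count = 10                  # is this kept per unique protected tag?
    }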
  • Marat Bakeev

    10/15/2025, 10:10 PM
    Hey guys, what is the procedure to add or update components in https://github.com/cloudposse-terraform-components? For example, if we want to add some features to a component, or we have a completely new component: do we need to ask and discuss somewhere first, or just send PRs, or...?
  • Gustavo

    10/17/2025, 1:03 PM
    Hi! Is there an open-source SQS module from Cloud Posse out there? I was checking the sqs-queue one, but it's not listed in their modules library and I couldn't use it directly in TF.
  • Erik Osterman (Cloud Posse)

    10/21/2025, 3:09 PM
    Would it be interesting if Cloud Posse offered something like a commercial "Bug Fix Insurance" across our module ecosystem?
  • Craig

    10/23/2025, 6:44 PM
    👋 I am using the https://github.com/cloudposse/terraform-aws-sso/ module to create permission sets and assign them to AWS accounts, pretty standard stuff. I would like to try to customize the trust policy associated with a permission set to allow assuming the role in one AWS account from another AWS account within the same Org, but I'm not finding much to go on as far as examples in this repo. I am trying to set up something that would allow users that have been assigned a role in AWS to copy items from an S3 bucket in Account A to an S3 bucket in Account B, within the same region, similar to what's going on here: https://stackoverflow.com/questions/73639007/allow-user-to-assume-an-iam-role-with-sso-login The problem I am running into is that I'm finding nowhere to actually configure the contents of the PermissionSet trust policy. Is that just something that is outside the scope of the terraform-aws-sso module?
  • Craig

    10/23/2025, 6:45 PM
    I imagine I could create a x-account trust policy like this:
    Copy code
    data "aws_iam_policy_document" "xaccount_trust_policy" {
      provider = aws.destination
      statement {
        actions = [
          "sts:AssumeRole",
          "sts:TagSession",
          "sts:SetSourceIdentity"
        ]
        principals {
          type        = "AWS"
          identifiers = ["arn:aws:iam::${data.aws_caller_identity.source.account_id}:root"]
        }
      }
    }
    but I don't think you can apply it to the permissionset that is being created on the AWS destination account side
  • Drew Fulton

    10/25/2025, 5:36 PM
    Good morning, I've been a longtime fan of the Cloud Posse architecture, as we used it at one of my former roles. While I was overseeing our architecture at the time, I was not the person who actually set up the original accounts a few years ago. As a result, I'm taking some time to go through the process myself so I can set things up in the future. I'm making really solid progress but seem to have run into a wall and could really use some help. I've been working through the foundation documents on my own and am currently at the Deploy Accounts (https://docs.cloudposse.com/layers/accounts/deploy-accounts/) stage. I've run everything through Step 6, deploying the accounts and account map. I'm now trying to apply the account-settings module and it's failing with two instances of the "The given key does not identify an element in this collection value." error. The docs mention that this is usually due to a mismatch of the root_account_aws_name in the account-map. I've confirmed that multiple times and have it set to root. For this troubleshooting, let's assume we are trying to apply account-settings for the audit account, which is called core-audit. The account-settings module appears to be looking for the audit index instead of core-audit. I've tried setting audit_account_account_name to both core-audit and audit, neither of which works. I believe the value should be core-audit. Where else could I be going wrong? FWIW, I've confirmed I'm using the latest versions of all the modules. Thanks for any suggestions!
  • Mark Johnson

    10/27/2025, 6:16 PM
    Hi CloudPosse team - any chance we can get an issue opened to update the Terraform awsutils provider (https://github.com/cloudposse/terraform-provider-awsutils) so that the corresponding awsutils resources support a region parameter, similar to the AWS 6.0 Terraform provider? Use case: we currently pass in ~15 awsutils providers, each with a separate region, to delete VPCs in all of those regions. It would be amazing to loop over a region parameter instead.
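    For context, the current pattern described above looks roughly like this, with one aliased provider per region and each resource wired to an alias (the regions and the resource shown are illustrative):
    Copy code
    terraform {
      required_providers {
        awsutils = {
          source = "cloudposse/awsutils"
        }
      }
    }

    provider "awsutils" {
      alias  = "use1"
      region = "us-east-1"
    }

    provider "awsutils" {
      alias  = "usw2"
      region = "us-west-2"
    }

    # ...repeated for ~15 regions; each resource must pick one alias explicitly.
    resource "awsutils_default_vpc_deletion" "use1" {
      provider = awsutils.use1
    }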
  • Craig

    10/29/2025, 11:47 PM
    👋 I'm trying to figure out what I am doing incorrectly when using the default_security_group_deny_all variable with the terraform-aws-vpc module. I have several VPCs already created from this module and am working towards removing the default VPC security group's default egress & ingress rules. I thought I would be able to do this by simply adding the default_security_group_deny_all variable to my existing Terraform with a value of true and redeploying, however when I make a PR with those changes, my Terraform plan shows 0 changes to be made. If I set the value to false I see the default security group being removed (I imagine by setting this to false I'll need to add a moved block indicating that I am now managing this security group as part of a different Terraform resource), but that's not what I want to do. Why does setting this value to true not seem to do anything for already-created default VPC security groups?
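    For reference, the change being discussed is just this flag on the existing module call (all other arguments omitted for brevity, values illustrative); the question is why flipping it to true produces an empty plan on already-created VPCs:
    Copy code
    module "vpc" {
      source = "cloudposse/vpc/aws"

      ipv4_primary_cidr_block = "10.0.0.0/16" # illustrative

      # Expected to strip the default security group's ingress/egress rules.
      default_security_group_deny_all = true
    }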
  • Prateek kumar

    10/30/2025, 12:19 PM
    I'm trying to build a tool that requires connectivity to Terraform core over RPC. I'm not building a plugin; it's a standalone piece of software that imports Terraform core and compares files. I couldn't find any content on YouTube about this, and I don't really know how to start the project. I am an intern, BTW!
  • Alek

    10/30/2025, 4:05 PM
    Hello team! 👋 I'm hitting a perpetual diff on various resources originating from the GitHub provider, used in the aws-argocd-github-repo component. Specifically, the etag property is constantly changing on GitHub's API side, creating ever-changing plans. Those plans are failing to apply via GitOps with "plan files have differences". I found out that this PR was recently merged, which directly addresses handling of etags in the GitHub provider. Is my understanding correct that the issue should resolve on its own once the change gets released (currently it is not)? Are you aware of any other workaround here? (FYI, ignore_changes on etag does not work.)
  • Mateusz Loskot

    10/30/2025, 8:00 PM
    https://www.infoq.com/news/2025/10/iac-formae/
  • MrAtheist

    11/01/2025, 4:37 AM
    Anyone know how to go about destroying a specific resource deep in the modules without making a mess? In this case I would like to destroy module.service_b.module.ec2 ...
    Copy code
    module "service_a" {
       source = "../modules/stuff"
    ...
    }
    
    module "service_b" {
       source = "../modules/stuff"
    ...
    }
    
    ...
    
    # modules/stuff
    
    module "ec2" {
       source = "../modules/ec2"
    }
    ...
    ... some more stuff
    I thought this was pretty trivial until I stepped through the tf plan, but I don't think this is doable by messing with the HCL itself. Instead...
    Copy code
    terraform destroy --target module.service_b.module.ec2
    terraform state rm module.service_b.module.ec2
    any other suggestions...?
  • Jonathan

    11/03/2025, 5:58 PM
    Hey folks, I built a new Kubernetes Terraform provider that might be interesting to you. It solves a long-standing Terraform limitation: you can't create a cluster and deploy to it in the same apply. Providers are configured at the root, before resources exist, so you can't use a cluster's endpoint as provider config. Most people work around this with two separate applies, some use null_resource hacks, others split everything into multiple stacks. After being frustrated by this for many years, I realized the only solution was to build a provider that sidesteps the whole problem with inline connections. Example:
    Copy code
    resource "k8sconnect_object" "app" {
      cluster = {
        host  = aws_eks_cluster.main.endpoint
        token = data.aws_eks_cluster_auth.main.token
      }
      yaml_body = file("app.yaml")
    }
    Create cluster → deploy workloads → single apply. No provider configuration needed. Building with Server-Side Apply from the ground up (rather than bolting it on) opened doors to fix other persistent community issues with existing providers.
    • Accurate diffs: server-side apply dry-run projections show actual changes, not client-side guesses
    • YAML + validation: K8s strict schema validation catches typos at plan time
    • CRD + CR in the same apply: auto-retry handles eventual consistency (no more time_sleep)
    • Patch resources: modify EKS/GKE defaults without taking ownership
    • Non-destructive waits: timeouts don't force resource recreation
    300+ tests, runnable examples for everything.
    GitHub: https://github.com/jmorris0x0/terraform-provider-k8sconnect
    Registry: https://registry.terraform.io/providers/jmorris0x0/k8sconnect/latest
    Would love feedback if you've hit this pain point.