# atlantis-community
n
I think you may need to run multiple atlantis instances per “permission boundary”.
☝️ 1
Each instance would have a different service account:
atlantis X --> B
atlantis Y --> C
atlantis Z --> D
Or you can specify the AWS creds in the provider config in the end-user's file, right?
e
having multiple atlantises is an option, however the more repos we have, the more atlantises we'll get, and this can become hard to manage and keep in sync
n
agreed ^
e
in the s3 backend config the role is passed with

```hcl
role_arn = "arn:aws:iam::xxxx:role/terraform"
```
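For context, a fuller S3 backend block using that parameter might look like this (bucket and key names are placeholders, not from the thread):

```hcl
terraform {
  backend "s3" {
    bucket   = "example-state-bucket"      # placeholder
    key      = "infra/terraform.tfstate"   # placeholder
    region   = "us-east-1"
    role_arn = "arn:aws:iam::xxxx:role/terraform"
  }
}
```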
j
Can you not just configure that in the provider?
n
Your concern is that you want to restrict a repo to a particular set of AWS credentials such that it can only interact with that AWS instance. The hard boundary would be separate atlantis instances per AWS instance. However, you can also add a custom server workflow where you have a key/value of repo=AWS instance. When someone plans, your custom workflow will run and you will only pull in credentials (from a secret store for example) for that AWS instance in question.
> Can you not just configure that in the provider
Then end-users have control over which AWS instance they connect to
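That custom-workflow idea could be sketched in a server-side Atlantis repo config like the one below; the repo ID, workflow name, and the `fetch-creds.sh` helper are hypothetical stand-ins for the repo-to-account mapping and secret-store lookup described above:

```yaml
# Server-side repos.yaml (sketch): pin each repo to a workflow that
# only loads credentials for that repo's AWS account.
repos:
  - id: github.com/example-org/repo-x   # hypothetical repo
    workflow: account-b
workflows:
  account-b:
    plan:
      steps:
        # hypothetical helper that pulls only account B's creds
        # from a secret store before planning
        - run: ./fetch-creds.sh account-b
        - init
        - plan
```

Because the mapping lives server-side, end-users can't edit it from the repo, which is the point of the boundary.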
j
```hcl
provider "aws" {
  region = "us-gov-west-1"
  allowed_account_ids = [
    "yyyyyy"
  ]
  assume_role {
    role_arn = "arn:aws-us-gov:iam::xxxxxxxx:role/terraform_infra"
  }
  default_tags {
    tags = {
      Environment  = "dev"
      DeploymentID = "infra"
      Terraform    = "true"
    }
  }
}
```
then tell Atlantis it can only apply mergeable PRs
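The mergeable-only restriction can be expressed in Atlantis's server-side repo config, roughly like this (a sketch; the catch-all regex applies it to every repo):

```yaml
# Server-side repos.yaml (sketch): only allow apply on mergeable PRs
repos:
  - id: /.*/
    apply_requirements: [mergeable]
```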
e
^ that doesn’t take into consideration the repository name the PR originated from
j
That's why you review it.
this is how we operate; the only difference being that our providers etc. are all generated from a template.
e
are env variables available in conftest policies?
j
so we set the provider's allowed-accounts variable, and then users generate their provider config so it's always correct.
we are a monorepo, but depending on which folder you generate your TF deployment in, the account ID you get for your `allowed_account_ids` changes
c
We solved this problem with multiple Atlantis instances.
😭 2
I think it's the only way to be absolutely sure there's no chance of your dev instance applying to prod, etc.
n
yeah, multi-tenancy is hard without running multiple instances. The next best thing is loading secrets during application runtime based on repo validation (via an allowlist or something) per plan.
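A minimal sketch of that repo-validation idea: map each repo to the one role it may assume and fail closed for anything else. All repo names and ARNs here are hypothetical, not from the thread:

```python
# Hypothetical allowlist: each repo maps to the single role ARN it may use.
REPO_ROLE_ALLOWLIST = {
    "example-org/repo-x": "arn:aws:iam::111111111111:role/terraform-dev",
    "example-org/repo-y": "arn:aws:iam::222222222222:role/terraform-prod",
}

def role_for_repo(repo_full_name: str) -> str:
    """Return the only role ARN this repo is allowed to assume.

    Fails closed: an unknown repo raises instead of falling back to a
    default credential set.
    """
    try:
        return REPO_ROLE_ALLOWLIST[repo_full_name]
    except KeyError:
        raise PermissionError(f"repo {repo_full_name!r} is not allowlisted")
```

The workflow step would then assume the returned role (e.g. via STS) and export the temporary credentials into the plan's environment, so the plan can only ever reach its own account.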