# github-issues
  • GitHub, 06/13/2025, 3:45 PM
    #5616 Cloudformation stacks in terraform containing list items can show up as line removals due to the diff replacement regex
    Issue created by nitrocode

    ### Community Note
    • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
    • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request.
    • If you are interested in working on this issue or have submitted a pull request, please leave a comment.

    ### Overview of the Issue
    CloudFormation stacks in Terraform containing list items can show up as line removals due to the diff replacement regex. For example, this YAML from a Wiz stackset:

    ```yaml
    ManagedPolicyArns:
      - Fn::Sub arn:${AWS::Partition}:iam::aws:policy/job-function/ViewOnlyAccess
      - Fn::Sub arn:${AWS::Partition}:iam::aws:policy/SecurityAudit
    ```

    gets converted to this incorrectly (the list markers end up at column 0, so the lines render as removals):

    ```diff
    ManagedPolicyArns:
    - Fn::Sub arn:${AWS::Partition}:iam::aws:policy/job-function/ViewOnlyAccess
    - Fn::Sub arn:${AWS::Partition}:iam::aws:policy/SecurityAudit
    ```

    However, this gets converted semi-correctly:

    ```diff
    - isOrg:
    - AllowedValues:
    - - Enabled
    - - Disabled
    - Default: Disabled
    - Description: Enable org deploy
    - Type: String
    - orgId:
    - Default: ""
    - Description: The OU ID of the AWS Organization where we should deploy, preferably the root OU. This value is mandatory when isOrg is Enabled. You can submit one value, or a space separated list of multiple OUs
    - Type: String
    + # isOrg:
    + # AllowedValues:
    + # - Enabled
    + # - Disabled
    + # Default: Disabled
    + # Description: Enable org deploy
    + # Type: String
    + # orgId:
    + # Default: ""
    + # Description: The OU ID of the AWS Organization where we should deploy, preferably the root OU. This value is mandatory when isOrg is Enabled. You can submit one value, or a space separated list of multiple OUs
    + # Type: String
    ```

    to this:

    ```diff
    - isOrg:
    + # isOrg:
    - AllowedValues:
    + # AllowedValues:
    - - Enabled
    + # - Enabled
    - - Disabled
    + # - Disabled
    - Default: Disabled
    + # Default: Disabled
    - Description: Enable org deploy
    + # Description: Enable org deploy
    - Type: String
    + # Type: String
    - orgId:
    + # orgId:
    - Default: ""
    + # Default: ""
    - Description: The OU ID of the AWS Organization where we should deploy, preferably the root OU. This value is mandatory when isOrg is Enabled. You can submit one value, or a space separated list of multiple OUs
    + # Description: The OU ID of the AWS Organization where we should deploy, preferably the root OU. This value is mandatory when isOrg is Enabled. You can submit one value, or a space separated list of multiple OUs
    - Type: String
    + # Type: String
    ```

    ### Reproduction Steps
    I'm having trouble reproducing this.

    Here is the related code (atlantis/server/events/models/models.go, lines 428 to 441 at commit a7c712b):

    ```go
    var (
        diffKeywordRegex = regexp.MustCompile(`(?m)^( +)([-+~]\s)(.*)(\s=\s|\s->\s|<<|\{|\(known after apply\)| {2,}[^ ]+:.*)(.*)`)
        diffListRegex    = regexp.MustCompile(`(?m)^( +)([-+~]\s)(".*",)`)
        diffTildeRegex   = regexp.MustCompile(`(?m)^~`)
    )

    // DiffMarkdownFormattedTerraformOutput formats the Terraform output to match diff markdown format
    func (p PlanSuccess) DiffMarkdownFormattedTerraformOutput() string {
        formattedTerraformOutput := diffKeywordRegex.ReplaceAllString(p.TerraformOutput, "$2$1$3$4$5")
        formattedTerraformOutput = diffListRegex.ReplaceAllString(formattedTerraformOutput, "$2$1$3")
        formattedTerraformOutput = diffTildeRegex.ReplaceAllString(formattedTerraformOutput, "!")

        return strings.TrimSpace(formattedTerraformOutput)
    }
    ```

    Here are the regexr reproductions:
    1. diffKeywordRegex: https://regexr.com/8fdf9
    2. diffListRegex: https://regexr.com/8fden

    I believe the above contents are getting picked up by the former.

    ### Logs

    ### Environment details
    • Atlantis version: 0.34.0
    • Deployment method: eks
    • If not running the latest Atlantis version have you tried to reproduce this issue on the latest version:
    • Atlantis flags: n/a

    ### Options
    1. Improve the regex to exclude YAML lists somehow
    2. Exempt aws_cloudformation_stack resources from the diff conversion
    3. Use hcl instead of diff and don't modify the terraform output at all

    ### Additional Context
    • aws_cloudformation_stack
    • #2438
    runatlantis/atlantis
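The misclassification described in this issue can be reproduced outside Atlantis. Below is an illustrative Python port of diffKeywordRegex and its replacement (the real implementation is the Go code above; the sample input line and its indentation are invented for demonstration): the `\{` alternative matches the `{` inside `${AWS::Partition}`, so the regex treats a plain YAML list item as a diff keyword line and hoists its `- ` marker to column 0, where markdown renders it as a removal.

```python
import re

# Illustrative Python port of Atlantis's Go diffKeywordRegex
# (server/events/models/models.go); not the actual implementation.
diff_keyword_regex = re.compile(
    r"(?m)^( +)([-+~]\s)(.*)(\s=\s|\s->\s|<<|\{|\(known after apply\)| {2,}[^ ]+:.*)(.*)"
)

def diff_markdown_format(terraform_output: str) -> str:
    # Mirrors ReplaceAllString(..., "$2$1$3$4$5"): the -/+/~ marker swaps
    # places with the leading indentation, landing at column 0.
    return diff_keyword_regex.sub(r"\2\1\3\4\5", terraform_output)

# A YAML list item from a CloudFormation template body: not a diff line,
# but "\{" matches the "{" in "${AWS::Partition}", so the regex fires.
line = "          - Fn::Sub arn:${AWS::Partition}:iam::aws:policy/SecurityAudit"
print(diff_markdown_format(line))
# The output starts with "- " at column 0, i.e. it now renders as a removal.
```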
  • GitHub, 06/17/2025, 5:21 PM
    #5620 no affected states after atlantis unlock
    Issue created by Slevy35

    ### Overview of the Issue
    After creating a PR, Atlantis recognizes the affected states and runs a plan. But running `atlantis unlock`, and immediately after, `atlantis apply`, causes Atlantis to think there are no affected states. We also enable auto merge after apply, which merged the PR.

    ### Reproduction Steps
    • create a PR
    • comment `atlantis unlock`
    • comment `atlantis apply`

    ### Logs

    ### Environment details
    • Atlantis version: v0.34.0
    • Atlantis flags: auto merge

    ### Additional Context
    [Image](https://private-user-images.githubusercontent.com/35378572/456127804-ae3871e1-db48-4942-95cd-75b96b3cc042.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NTAxODExODQsIm5iZiI6MTc1MDE4MDg4NCwicGF0aCI6Ii8zNTM3ODU3Mi80NTYxMjc4MDQtYWUzODcxZTEtZGI0OC00OTQyLTk1Y2QtNzViOTZiM2NjMDQyLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTA2MTclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwNjE3VDE3MjEyNFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTFlYmI0YmU3MWUyYTYxMTFhNDA1N2FjYjgxYmIyYTUyMTgwMDg1OGRiYzljZTMwZDcxYjM5YjQ1M2M0ZmE2MGYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.1I5L_nwzgHSU0ymLlnHACtHrHYNNZzESLkdTMbSa77g)
    [Image](https://private-user-images.githubusercontent.com/35378572/456127738-4eab8b52-2e23-4d5b-a184-989d08d1ec00.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NTAxODExODQsIm5iZiI6MTc1MDE4MDg4NCwicGF0aCI6Ii8zNTM3ODU3Mi80NTYxMjc3MzgtNGVhYjhiNTItMmUyMy00ZDViLWExODQtOTg5ZDA4ZDFlYzAwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTA2MTclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwNjE3VDE3MjEyNFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTM3NDBjNjBjNzY0NDQ2NTAwYThkMTk5ZjMzMDMyYjY5YTk2MzlmODcyZmExZDQ0Mjk0MzM4MTA3NjZlMWFkMjMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.nFxlYwGvp920Omuk8ot1QVy8dan85CMTapmF0YUuaRg)
    runatlantis/atlantis
  • GitHub, 06/18/2025, 2:30 PM
    #5627 Make `-parallelism` a command flag
    Issue created by norman-zon

    • I'd be willing to implement this feature (contributing guide)

    ### Describe the user story
    There are some cases where I know that running `apply` in parallel will lead to race conditions. One example is Terraform resources that create git commits. But these are a rare exception in my repo, so I want to set `ATLANTIS_PARALLEL_POOL_SIZE<0` and `ATLANTIS_PARALLEL_APPLY=true` for my server/repo and also be able to override the behaviour when invoking Atlantis for a single PR.

    ### Describe the solution you'd like
    I would like to be able to use something like `atlantis apply -parallelism 1` to apply all plans, but sequentially.

    ### Describe the drawbacks of your solution
    It would be sensible to not allow setting `-parallelism` higher than `ATLANTIS_PARALLEL_POOL_SIZE`, to not let the user overload the instance. Aside from this I see no issues.

    ### Describe alternatives you've considered
    Technically it would be possible to set `parallel_apply: false` in the repo config for a single PR and revert it afterwards, but that's far from ideal.
    runatlantis/atlantis
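The cap described in the drawbacks section could be sketched like this (a hypothetical helper with invented names, not Atlantis code): the per-comment value is clamped to the server's configured pool size so a user cannot overload the instance.

```python
from typing import Optional

# Hypothetical sketch (names invented, not Atlantis code): clamp a per-comment
# -parallelism value to the server's ATLANTIS_PARALLEL_POOL_SIZE.
def effective_parallelism(requested: Optional[int], pool_size: int) -> int:
    if requested is None:       # flag omitted: fall back to the server default
        return pool_size
    if requested < 1:           # guard nonsensical values
        return 1
    return min(requested, pool_size)  # never exceed the configured pool

print(effective_parallelism(1, 8))   # "atlantis apply -parallelism 1": sequential
print(effective_parallelism(99, 8))  # capped at the pool size
```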
  • GitHub, 06/18/2025, 11:23 PM
    #5568 Atlantis lock issue
    Issue created by yasinlachiny

    ### Overview of the Issue
    In our current Atlantis setup, we explicitly configure Atlantis to lock on apply only, not on plan. This behavior is intentional, to ensure that once a user has applied changes, no one else can override or interfere with those changes until the PR is merged. However, we've encountered an edge case: if a user runs `atlantis plan` or rebases the branch after apply but before merging the PR, the original apply lock is silently removed. This unlocks the PR and allows further plans or applies by others, violating the expected safety of the post-apply lock. This is not the intended behavior. Even though plan should not trigger a lock, we do not expect it to clear an existing apply lock. The lock should persist until the PR is merged, regardless of whether a new plan is executed.

    ### Reproduction Steps
    1. Open a PR and run `atlantis plan`.
    2. Run `atlantis apply` – this locks the PR (as per our config).
    3. Before merging the PR, run `atlantis plan` again.
    4. Observe that the lock is removed and the PR becomes unlocked.

    ### Logs

    ### Environment details

    ### Additional Context
    runatlantis/atlantis
  • GitHub, 06/19/2025, 3:57 PM
    #773 Atlantis apply all after a failed apply; outputs Ran Apply for 0 projects
    Issue created by mlehner616

    I have a repo that uses the default workspace but there are a number of different project folders.

    Atlantis version: 0.8.3
    Terraform version: v0.12.8

    ```yaml
    version: 3
    projects:
      - name: qa
        dir: qa_acct/qa_env
        terraform_version: v0.12.8
        autoplan:
          when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
          enabled: false
      - name: staging
        dir: prod_acct/staging_env
        terraform_version: v0.12.8
        autoplan:
          when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
          enabled: false
      - name: prod
        dir: prod_acct/prod_env
        terraform_version: v0.12.8
        autoplan:
          when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
          enabled: false
    ```

    Plans are generated for all three projects as normal after commenting exactly `atlantis plan`. Immediately afterward, commenting `atlantis apply` attempts to apply all three environments as expected. In this case, there was an apply error due to an AWS IAM policy being misconfigured and the plans were not successfully applied. A commit was pushed to fix this issue and another `atlantis apply` was submitted. Note, there was not another `atlantis plan` after the fix commit was pushed. Atlantis behaved as if it had forgotten about the failed plans and assumed they had been applied successfully when, in fact, they had not been. I believe the expected behavior should be to reject the apply since new commits were made and force another plan to be run, correct? The result was the following:

    ```
    Ran Apply for 0 projects:
    ```

    ```
    Automatically merging because all plans have been successfully applied.
    ```

    ```
    Locks and plans deleted for the projects and workspaces modified in this pull request:

    * dir: `prod_acct/prod_env` workspace: `default`
    * dir: `prod_acct/staging_env` workspace: `default`
    * dir: `qa_acct/qa_env` workspace: `default`
    ```
    runatlantis/atlantis
  • GitHub, 06/19/2025, 4:10 PM
    #4850 `/api/plan` throws 500 error when using GitHub App
    Issue created by marcus-rev

    ### Overview of the Issue
    I have some automation that's firing off a POST to `/api/plan` but is receiving a 500 error back. When checking the logs of the Atlantis server, I see that Atlantis is trying to pull `pull/0/head`, which doesn't work. We're also using the `merge` checkout strategy. To fix this issue, I believe we should add the `c.pr.Num > 0` condition to this if check, as the PR number is an optional parameter to the plan/apply API endpoints (atlantis/server/events/working_dir.go, line 334 at commit 42e2dc7):

    ```go
    if w.GithubAppEnabled {
    ```

    ### Reproduction Steps
    Set ATLANTIS_CHECKOUT_STRATEGY=merge. Note that the PR parameter is optional, and as such, is omitted:

    ```shell
    curl --request POST 'https://<ATLANTIS_HOST_NAME>/api/plan' \
    --header 'X-Atlantis-Token: <ATLANTIS_API_SECRET>' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "Repository": "repo-name",
        "Ref": "main",
        "Type": "Github",
        "Paths": [{
          "Directory": ".",
          "Workspace": "default"
        }],
    }'
    ```

    ### Logs
    ```json
    [
        {
            "level": "info",
            "ts": "2024-08-16T14:33:16.101Z",
            "caller": "events/working_dir.go:235",
            "msg": "creating dir '/atlantis-data/repos/<myorg/my-repo-name>/0/default'",
            "json": {}
        },
        {
            "level": "error",
            "ts": "2024-08-16T14:33:18.541Z",
            "caller": "events/instrumented_project_command_builder.go:75",
            "msg": "Error building plan commands: running git fetch origin pull/0/head:: fatal: couldn't find remote ref pull/0/head\n: exit status 128",
            "json": {},
            "stacktrace": "github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:75\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:35\ngithub.com/runatlantis/atlantis/server/controllers.(*APIRequest).getCommands\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:67\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).apiPlan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:148\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).Plan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:93\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.1/mux.go:212\ngithub.com/urfave/negroni/v3.(*Negroni).UseHandler.Wrap.func1\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:59\ngithub.com/urfave/negroni/v3.HandlerFunc.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:33\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/runatlantis/atlantis/server.(*RequestLogger).ServeHTTP\n\tgithub.com/runatlantis/atlantis/server/middleware.go:70\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Recovery).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/recovery.go:210\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Negroni).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:111\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"
        },
        {
            "level": "warn",
            "ts": "2024-08-16T14:33:18.541Z",
            "caller": "controllers/api_controller.go:261",
            "msg": "{\"error\":\"failed to build command: running git fetch origin pull/0/head:: fatal: couldn't find remote ref pull/0/head\\n: exit status 128\"}",
            "json": {},
            "stacktrace": "github.com/runatlantis/atlantis/server/controllers.(*APIController).respond\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:261\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).apiReportError\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:81\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).Plan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:95\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.1/mux.go:212\ngithub.com/urfave/negroni/v3.(*Negroni).UseHandler.Wrap.func1\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:59\ngithub.com/urfave/negroni/v3.HandlerFunc.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:33\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/runatlantis/atlantis/server.(*RequestLogger).ServeHTTP\n\tgithub.com/runatlantis/atlantis/server/middleware.go:70\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Recovery).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/recovery.go:210\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Negroni).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:111\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"
        }
    ]
    ```
    ### Environment details
    • Atlantis version: latest
    • Deployment method: eks
    • If not running the latest Atlantis version have you tried to reproduce this issue on the latest version:
    • Atlantis flags: ATLANTIS_CHECKOUT_STRATEGY=merge
    runatlantis/atlantis
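The reporter's proposed guard can be sketched as follows (a hypothetical helper with invented names, not Atlantis source): with the merge checkout strategy Atlantis fetches `pull/<num>/head`, but API requests may omit the PR number (leaving it 0), so the PR ref should only be used when a real PR number is present.

```python
# Hypothetical sketch of the suggested "c.pr.Num > 0" guard; names and
# structure are invented for illustration, not copied from Atlantis.
def ref_to_fetch(pr_num: int, ref: str) -> str:
    if pr_num > 0:
        return f"pull/{pr_num}/head"  # a real pull request
    return ref  # API-triggered plan on a plain branch ref

print(ref_to_fetch(0, "main"))  # avoids fetching the nonexistent pull/0/head
```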
  • GitHub, 06/24/2025, 1:25 PM
    #5629 --default-tf-version does not take precedence over required_version >=
    Issue created by eneves-emarketer

    When using Atlantis 0.33.0, even with the flag `--default-tf-version` (actually ATLANTIS_DEFAULT_TF_VERSION in docker compose set to v1.9.8, and ATLANTIS_ALLOW_TERRAFORM_DOWNLOADS=true), the Terraform version being used in plan and apply was 1.12.2 (the Terraform code has `required_version = ">= 1.1.0"`). No terraform flag is enforced in the atlantis.yaml file. As per the documentation, the default tf version flag should enforce the expected version.
    runatlantis/atlantis
  • GitHub, 06/25/2025, 12:01 PM
    #5630 Autoplan creates pending GitHub Check regardless of project changes since v0.33
    Issue created by artych

    ### Overview of the Issue
    Starting from v0.33, we've observed that a GitHub CI check is deliberately created in the "pending" state during the autoplan run, regardless of whether any projects have changed files. Prior to v0.33, a check status was only created if changes were detected in one or more projects. Looking at the v0.33 changes, we suspect this behavior is a side effect of PR #5242. Previously, the creation of the check status was gated behind the condition `if len(projectCmds) > 0`, but this logic appears to have been removed in that update. Perhaps this is an intentional change, but we haven't found any mention or documentation of it in the release notes or related discussions. Clarification would be appreciated.

    ### Reproduction Steps
    Reproducing the issue should be straightforward. We've consistently seen the `atlantis/plan` check remain in a pending state on every pull request since upgrading to v0.33.
    runatlantis/atlantis
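The pre-v0.33 gating the reporter describes amounts to a single condition; this sketch is illustrative pseudocode of that behavior (not actual Atlantis source): a pending VCS check is only created when autoplan matched at least one project.

```python
# Illustrative sketch of the reported pre-v0.33 behavior, not Atlantis source:
# only report a pending status when at least one project command was built.
def should_create_pending_check(project_cmds: list) -> bool:
    return len(project_cmds) > 0  # mirrors the removed `if len(projectCmds) > 0`

print(should_create_pending_check([]))        # no changed projects: no check
print(should_create_pending_check(["plan"]))  # at least one project: check created
```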
  • GitHub, 06/27/2025, 8:54 AM
    #5660 Assets are missing in V0.35.0 release
    Issue created by vdmgolub

    Hello! The new version was released yesterday, but its assets are missing. Is it possible to add them? Thank you!

    UPD: Just another observation: the new version tag has a capital "V"; maybe it's related somehow.
    runatlantis/atlantis
  • GitHub, 06/30/2025, 8:22 AM
    #5665 Atlantis v0.35 has breaking changes around YAML anchor
    Issue created by okkez

    ### Overview of the Issue
    YAML anchor configurations in `atlantis.yaml` files that worked in Atlantis 0.34.0 now fail with duplicate key errors in Atlantis 0.35.0. This is a breaking change that affects users who utilize YAML anchors and aliases to reduce duplication in their Atlantis configurations. The root cause is the migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml in version 0.35.0, which introduced stricter YAML parsing that now detects duplicate keys that were previously allowed.

    ### Reproduction Steps
    1. Create an `atlantis.yaml` file using YAML anchors and aliases that results in duplicate keys after anchor resolution
    2. Use this configuration with Atlantis 0.34.0 - it works correctly
    3. Upgrade to Atlantis 0.35.0 and run the same configuration - it fails with duplicate key errors

    ### Example Configuration

    ```yaml
    version: 3
    automerge: true
    parallel_plan: true
    parallel_apply: true
    abort_on_execution_order_fail: true
    projects:
      - &project_template
        name: template
        branch: /^master$/
        dir: template
        repo_locks:
          mode: on_apply # on_plan, on_apply, disabled
        custom_policy_check: false
        autoplan:
          when_modified:
            - "*.tf"
            - "../modules/**/*.tf"
            - ".terraform.lock.hcl"
          enabled: true
        plan_requirements:
          - undiverged
        apply_requirements:
          - mergeable
          - approved
          - undiverged
        import_requirements:
          - mergeable
          - approved
          - undiverged
      - <<: *project_template
        name: project1
        dir: terraform/aws/project1/
        workflow: terraform
        # snip...
    ```

    ### Logs

    ```
    parsing atlantis.yaml: [37:5] duplicate key "name"
      34 |       - undiverged
      35 | 
      36 |   - <<: *project_template
    > 37 |     name: project1
               ^
      38 |     dir: terraform/aws/project1/
      39 |     workflow: terraform
      40 | 
      41 |
    ```

    ### Environment details
    • Atlantis version: 0.35.0 (issue present), 0.34.0 (working)
    • Latest version test: Issue is present in the latest version (0.35.0)
    • Deployment method: N/A (affects all deployment methods)
    • Atlantis flags: N/A (affects YAML parsing regardless of flags)
    • Atlantis server-side config file: N/A (issue is with repo-level atlantis.yaml)
    • Repo `atlantis.yaml` file: See example above - any configuration using YAML anchors that results in duplicate keys after anchor resolution
    • Additional environment info: This is a parsing issue that affects all environments

    ### Additional Context
    • Breaking change introduced in PR #5579: Migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml v1.17.1
    • Specific commit: 8639729 - "Replace gopkg.in/yaml.v3 with github.com/goccy/go-yaml"
    • Parser change: The new library uses `yaml.Strict()` mode which enables stricter validation
    • Impact: Users who have been using YAML anchors successfully in their atlantis.yaml files will experience breaking changes when upgrading to 0.35.0
    runatlantis/atlantis
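The failure mode can be illustrated without any YAML library (plain Python, a simplified model of merge-key expansion, not either parser's actual code): after `<<: *project_template` is expanded, the project mapping contains both the template's `name` and the overriding `name`. A lenient parser lets the explicit key win; a strict parser rejects the duplicate, which is the error shown above.

```python
# Simplified model of YAML merge-key ("<<") expansion; illustrative only,
# not the behavior of gopkg.in/yaml.v3 or goccy/go-yaml themselves.
def expand_merge_key(template: dict, overrides: dict, strict: bool) -> dict:
    merged = dict(template)
    for key, value in overrides.items():
        if strict and key in merged:
            # Strict parsing: the overriding key collides with the merged one.
            raise ValueError(f'duplicate key "{key}"')
        merged[key] = value  # Lenient parsing: the explicit key wins.
    return merged

project_template = {"name": "template", "dir": "template"}
print(expand_merge_key(project_template, {"name": "project1"}, strict=False)["name"])
# Lenient mode resolves to "project1"; strict mode raises instead.
```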
  • GitHub, 07/04/2025, 6:40 AM
    #5670 Atlantis not taking care of Terraform Binary Architecture (/terraform1.12.2: line 11: syntax error: unterminated quoted string)
    Issue created by rowi1de

    ### Overview of the Issue
    When switching an existing Atlantis deployment from AMD64 to ARM64 (e.g., by running the ghcr.io/runatlantis/atlantis:v0.35.0 image on ARM ECS Fargate), previously downloaded Terraform binaries (e.g., `terraform1.12.2`) fail silently due to an architecture mismatch. Instead of showing a clear error, Atlantis tries to execute the x86_64 binary, which fails with:

    ```
    syntax error: unterminated quoted string
    ```

    This happens because the shell interprets the incompatible binary as a script.

    ### Reproduction Steps
    1. Run Atlantis v0.33.0 (or earlier) on AMD64 (ECS, Fargate).
    2. Allow Atlantis to download Terraform versions (e.g., 1.12.2).
    3. Upgrade to v0.35.0 and switch to an ARM64 architecture.
    4. Keep `.atlantis/bin/terraform*` binaries in the shared volume.
    5. Trigger a plan for a project using an old Terraform version.
    6. Atlantis attempts to run the incompatible binary and fails with a shell error.

    ### Logs

    ```
    running 'sh -c' '/home/atlantis/.atlantis/bin/terraform1.12.2 init -input=false -upgrade' in '/home/atlantis/.atlantis/repos/...'
    /home/atlantis/.atlantis/bin/terraform1.12.2: line 11: syntax error: unterminated quoted string
    ```

    No mention of binary incompatibility or fallback handling.

    ### Environment details
    • Atlantis version: v0.35.0
    • Previously used version: v0.33.0 on AMD64
    • Deployment method: ECS Fargate (platform: linux/arm64)
    • Terraform version: 1.12.2 (binary pre-downloaded by Atlantis)
    • Execution context: Fargate with EFS shared mount at /home/atlantis
    • Terraform binaries: Preexisting files like /home/atlantis/.atlantis/bin/terraform1.12.2 from the AMD64 architecture
    • Atlantis default TF version env: ATLANTIS_DEFAULT_TF_VERSION=v1.9.0

    ### Additional Context
    • This appears to be a binary execution issue due to an architecture mismatch (x86_64 binary executed on ARM64).
    • Atlantis does not validate the downloaded binary architecture or re-download when switching platforms.
    • Workaround: Delete /home/atlantis/.atlantis/bin/terraform* after the architecture switch to force fresh (ARM64) downloads.
    • Suggestions for Atlantis:
      • Detect binary architecture mismatch before execution
      • Log architecture info on `terraform init` failures
      • Offer a flag or auto-clean option on architecture switch
    runatlantis/atlantis
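The "detect mismatch before execution" suggestion is cheap to implement: a cached binary's target architecture can be read straight from its ELF header. This is an illustrative sketch (not Atlantis code); the offsets and machine constants come from the ELF specification.

```python
import struct
from typing import Optional

# Sketch of a pre-execution architecture check for cached terraform binaries;
# illustrative only, not Atlantis code. e_machine values per the ELF spec.
ELF_MACHINE = {0x3E: "x86_64", 0xB7: "aarch64"}

def elf_machine(path: str) -> Optional[str]:
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        return None  # not an ELF binary (e.g. a script or truncated file)
    # e_machine is a little-endian u16 at offset 18 (after the 16-byte
    # e_ident block and the 2-byte e_type field).
    (machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINE.get(machine, hex(machine))
```

A caller could compare the result against `platform.machine()` and delete/re-download the binary on mismatch instead of handing an x86_64 executable to the ARM64 shell.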
  • GitHub, 07/07/2025, 6:34 AM
    #5671 Slack webhooks not working Issue created by velinbudinov ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue Regarding the official documentation here and passing webhooks configuration with server configuration using terraform module, we should get Slack notifications as expected The Slack token is tested with curl, and notifications are delivered ### Reproduction Steps Terraform module https://github.com/terraform-aws-modules/terraform-aws-atlantis ECS Fargate task from the official Docker image https://github.com/runatlantis/atlantis/pkgs/container/atlantis/448754282?tag=latest Vars:
    Copy code
    atlantis = {
        environment = [
          {
            name : "ATLANTIS_REPO_CONFIG_JSON",
            value : jsonencode(yamldecode(file("${path.module}/server-atlantis.yaml"))),
          }
        ]
        secrets = [
          {
            name      = "ATLANTIS_SLACK_TOKEN"
            valueFrom = data.aws_secretsmanager_secret.atlantis_slack_token.arn
          }
        ]
      }
    server-atlantis.yaml:
    Copy code
    repos:
      - id: /.*/
        allow_custom_workflows: true
        allowed_overrides:
          - apply_requirements
          - workflow
        apply_requirements:
          - approved
        workflow: default
    
    webhooks:
      - event: apply
        kind: slack
        channel: XXXXXXXXXXX
      - event: plan
        kind: slack
        channel: XXXXXXXXXXX
    ### Logs Nothing related to webhooks or Slack in the logs ### Environment details ATLANTIS_SLACK_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ATLANTIS_REPO_CONFIG_JSON={"repos":[{"allow_custom_workflows":true,"allowed_overrides":["apply_requirements","workflow"],"apply_requirements":["approved"],"id":"/.*/","workflow":"default"}],"webhooks":[{"channel":"XXXXXXXXXXX","event":"apply","kind":"slack"},{"channel":"XXXXXXXXXXX","event":"plan","kind":"slack"}]} ### Additional Context runatlantis/atlantis
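The jsonencode(yamldecode(...)) expression in the Vars above can be sanity-checked outside Terraform. This is a minimal stdlib Python sketch (the dict literal stands in for the parsed server-atlantis.yaml): it shows the webhooks block does survive the round trip intact, so the open question in this report is whether Atlantis reads webhooks supplied via ATLANTIS_REPO_CONFIG_JSON at all.

```python
# Sketch of what jsonencode(yamldecode(file("server-atlantis.yaml")))
# produces for the config above. The dict literal stands in for the
# parsed YAML; this is not Atlantis code.
import json

server_config = {
    "repos": [
        {
            "id": "/.*/",
            "allow_custom_workflows": True,
            "allowed_overrides": ["apply_requirements", "workflow"],
            "apply_requirements": ["approved"],
            "workflow": "default",
        }
    ],
    "webhooks": [
        {"event": "apply", "kind": "slack", "channel": "XXXXXXXXXXX"},
        {"event": "plan", "kind": "slack", "channel": "XXXXXXXXXXX"},
    ],
}

# Matches the shape of the ATLANTIS_REPO_CONFIG_JSON value shown below.
encoded = json.dumps(server_config, sort_keys=True, separators=(",", ":"))
print(encoded)
```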
  • g

    GitHub

    07/07/2025, 2:43 PM
#5507 getting pull request: json: cannot unmarshal number Edit into Go struct field GitCommitDiffs.changeCounts of type azuredevops.VersionControlChangeType Issue created by Froostx ### Overview of the Issue I upgraded my Atlantis image to 0.34.0 and now I get this error when I try to run atlantis plan in my Azure DevOps pull request: getting pull request: json: cannot unmarshal number Edit into Go struct field GitCommitDiffs.changeCounts of type azuredevops.VersionControlChangeType ### Reproduction Steps Upgrade to 0.34.0 ### Environment details • Atlantis version: 0.34.0 • Deployment method: ecs/eks/helm/tf module runatlantis/atlantis
  • g

    GitHub

    07/10/2025, 4:57 PM
#5673 `atlantis apply` fails: it doesn't clone branch if data is missing: not a git repository Issue created by chlos ### Overview of the Issue Hi! We are trying to improve the performance of our GHA Atlantis reusable workflow by reducing the time it spends syncing files with the S3 bucket. We tried to remove every file that is not
    *.tfplan
    from the
    atlantis-data
    dir before uploading the data back to the S3 bucket. So each time GHA starts an Atlantis workflow it has to sync only a few
    tfplan
    files. It makes pre-run and post-run S3 sync almost instant. The problem is that when we run
    atlantis plan
multiple times it works great: Atlantis clones the PR files from git and plans changes if needed. But when we run
    atlantis apply
    it doesn't clone data, it just fails with the following error message:
    Error building apply commands: running git ls-files . --others: fatal: not a git repository (or any of the parent directories): .git\n: exit status 128
    The question is: why doesn't
    atlantis apply
    clone the repo, if it's missing (like
    atlantis plan
does)?

### Reproduction Steps
This is what our reusable GHA workflow looks like:

    # Sync Atlantis data pre-run
    - name: Pre-run sync from S3
      run: |
        data_path="repos/${{ github.repository }}/${{ steps.get_issue_number.outputs.result }}"
        mkdir -p /atlantis-data/$data_path
        aws s3 cp --recursive \
          s3://atlantis-s3-${{ steps.aws-resource.outputs.name }}/_atlantis-data/$data_path/ \
          /atlantis-data/$data_path/
        # Copying files to S3 does not keep their unix permissions.
        chmod -R 755 /atlantis-data

    # Send POST request to Atlantis service
    - name: Run Atlantis
      # ...

    # Sync Atlantis data post-run
    - name: Post-run sync to S3
      run: |
        data_path="repos/${{ github.repository }}/${{ steps.get_issue_number.outputs.result }}"
        if [ "${{ inputs.enable-s3-sync-lite }}" = "true" ]; then
          # Delete all files that are NOT *.tfplan
          find /atlantis-data -type f ! -name '*.tfplan' -exec rm -f {} +
          # Delete all now-empty directories
          find /atlantis-data -type d -empty -delete
          # Ensure the data path exists (it might not if no plans were created)
          mkdir -p /atlantis-data/$data_path
        fi
        # --delete will ensure to clean up anything that Atlantis deletes locally
        aws s3 sync \
          /atlantis-data/$data_path \
          s3://atlantis-s3-${{ steps.aws-resource.outputs.name }}/_atlantis-data/$data_path \
          --delete --include "*"

### Logs

## atlantis plan
It clones the repo after this warning message.
    # not the first plan - only tfplan files in the s3 bucket
    
    {"level":"warn","ts":"2025-07-10T15:03:13.589Z","caller":"events/working_dir.go:123",
    
    "msg":"will re-clone repo, could not determine if was at correct commit: git rev-parse HEAD: exit status 128: fatal: not a git repository (or any of the parent directories): .git\n",
    
    "json":{"repo":".../test-newrelic-tf","pull":"268"},
    
    "stacktrace":"
    
# https://github.com/runatlantis/atlantis/blob/315e25b135dbb19aa0473e867b703dbf9fbba592/server/events/working_dir.go#L125
github.com/runatlantis/atlantis/server/events.(*FileWorkspace).Clone
	github.com/runatlantis/atlantis/server/events/working_dir.go:123
github.com/runatlantis/atlantis/server/events.(*GithubAppWorkingDir).Clone
	github.com/runatlantis/atlantis/server/events/github_app_working_dir.go:39

# https://github.com/runatlantis/atlantis/blob/315e25b135dbb19aa0473e867b703dbf9fbba592/server/events/project_command_builder.go#L482
github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).buildAllCommandsByCfg
	github.com/runatlantis/atlantis/server/events/project_command_builder.go:344
github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).
	github.com/runatlantis/atlantis/server/events/project_command_builder.go:244

github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands.func1
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:38
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:71
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:35

github.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).run
	github.com/runatlantis/atlantis/server/events/plan_command_runner.go:193
github.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run
	github.com/runatlantis/atlantis/server/events/plan_command_runner.go:290
github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand
	github.com/runatlantis/atlantis/server/events/command_runner.go:301
    "}
## atlantis apply

    {"level":"error","ts":"2025-07-10T12:16:53.808Z","caller":"events/instrumented_project_command_builder.go:75",
    "msg":"Error building apply commands: running git ls-files . --others: fatal: not a git repository (or any of the parent directories): .git\n: exit status 128",
    "json":{},
    "stacktrace":"github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats|github.com/runatlantis/atlan…
runatlantis/atlantis
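For reference, the post-run cleanup described above (keep only *.tfplan files, then prune empty directories) can be sketched with the Python standard library. The function name and path handling are illustrative, not part of the actual workflow:

```python
# Stdlib re-implementation of the workflow's cleanup step:
#   find ... -type f ! -name '*.tfplan' -exec rm -f {} +
#   find ... -type d -empty -delete
from pathlib import Path

def keep_only_tfplans(root: str) -> None:
    root_path = Path(root)
    # Delete all files that are NOT *.tfplan.
    for f in sorted(root_path.rglob("*"), reverse=True):
        if f.is_file() and f.suffix != ".tfplan":
            f.unlink()
    # Delete all now-empty directories, deepest first.
    for d in sorted(root_path.rglob("*"), reverse=True):
        if d.is_dir() and not any(d.iterdir()):
            d.rmdir()
```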
  • g

    GitHub

    07/11/2025, 12:12 PM
#5674 Outdated function for previous plan messages [like in cursor bot] Issue created by celeronsx • I'm willing to implement this feature (contributing guide) Describe the user story As an Atlantis user, I want my previous plan/apply comments to be marked as "outdated" when a new commit triggers another run, just like GitHub collapses outdated review comments, so my PR discussion stays focused and uncluttered. Describe the solution you'd like On each new run after a commit, Atlantis should call the GitHub API to mark existing Atlantis comments on the pull request as outdated. These comments will collapse under the "Show outdated" toggle, leaving only the latest plan/apply results visible by default. [Image](https://private-user-images.githubusercontent.com/169676865/465273924-64ef56fe-ce15-46a0-a591-7d72205a4f6d.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NTQxNjk3OTUsIm5iZiI6MTc1NDE2OTQ5NSwicGF0aCI6Ii8xNjk2NzY4NjUvNDY1MjczOTI0LTY0ZWY1NmZlLWNlMTUtNDZhMC1hNTkxLTdkNzIyMDVhNGY2ZC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwODAyJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDgwMlQyMTE4MTVaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT00YzFkOTBmMWI5N2VhYTdlMWIwMWI5Yzg3Y2NiNzViZWNkMTVmYWMwMGE1ZmM1ZmM3MDMzNWZkZmQyNzdkMzNmJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.M4Mtzaw3tWn530IfidYc0GaOliKhSAx2nOaOAEGfWcE) runatlantis/atlantis
  • g

    GitHub

    07/11/2025, 7:33 PM
    #5675 Broken links on https://runatlantis.github.io/helm-charts/ Issue created by aredridel The links to values.yaml are all 404 runatlantis/atlantis
  • g

    GitHub

    07/15/2025, 7:55 PM
#5676 Github plan comments ignoring ATLANTIS_DISABLE_MARKDOWN_FOLDING configuration when continued over multiple comments Issue created by RyanNielson ### Overview of the Issue When Atlantis posts the Terraform plan to a GitHub PR as a comment, it ignores the
    disable-markdown-folding
    configuration setting when the plan spans multiple comments. This appears to be because the
    GithubClient
    CreateComment
    function generates the
<details><summary>
    tags itself without checking the disable markdown configuration: https://github.com/runatlantis/atlantis/blob/main/server/events/vcs/github_client.go#L229-L259 ### Reproduction Steps Configure Atlantis with
    ATLANTIS_DISABLE_MARKDOWN_FOLDING=true
    Set up Atlantis to post Terraform plans in a PR as comments. Make a change that will result in a large plan diff. The initial comment correctly shows the section of the plan, but the follow-up comment has the "Show Output" collapsible area which it shouldn't. runatlantis/atlantis
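The expected behavior can be sketched as follows. This is illustrative Python, not Atlantis's actual Go CreateComment; max_size stands in for GitHub's comment length limit and the function name is hypothetical. The reported bug is that the real client skips the disable_folding check for continuation chunks:

```python
# Sketch of comment splitting that honors the folding setting for
# continuation chunks (the behavior the report says is missing).
def split_comment(comment, max_size, disable_folding):
    chunks = [comment[i:i + max_size] for i in range(0, len(comment), max_size)]
    parts = [chunks[0]]
    for chunk in chunks[1:]:
        if disable_folding:
            parts.append(chunk)  # expected: plain continuation comment
        else:
            parts.append(
                "<details><summary>Show Output</summary>\n\n" + chunk + "\n</details>"
            )
    return parts
```

With disable_folding=True every chunk would be posted plain; the issue reports that continuations are wrapped in a collapsible area regardless.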
  • g

    GitHub

    07/15/2025, 8:07 PM
#2243 atlantis + terragrunt: dependent modules can't be applied in one go Issue created by lukassup ### Overview of the Issue Updated: Atlantis fails to renew plans for Terragrunt projects with changed inputs from dependencies. This is also true when using
    mock_outputs
    - initial plan is created using mock_outputs. ### Reproduction Steps •
    mod_B
    depends on
    mod_A
    ,
    mod_A
    should produce an output variable
    id
    •
    mod_B
    needs the
    mod_A.outputs.id
    variable but it is not known until
    mod_A
    is applied, we typically use
    mock_outputs
    so the plan does not fail • on
    atlantis plan
    an invalid plan for
    mod_B
    is generated with
mock_outputs
    • on
    atlantis apply
    mod_A
    is applied successfully (no dependencies) but
    mod_B
    is applied from old plan with
    mock_outputs
    • then we run
    atlantis plan
    again a valid plan for
    mod_B
    is created • then we run
    atlantis apply
    and
    mod_B
    is successfully applied with the valid plan Notes: • If we run
    terragrunt run-all apply
    locally the resources are applied in the correct order and inputs are provided after they are known. • The problem becomes worse if there are more dependency levels (e.g.
    mod_C
    <-
    mod_B
    <-
    mod_A
    ) ### Logs ### Environment details • Atlantis version: v0.19.2 • Atlantis flags:
    atlantis server
    Atlantis server-side config file:
    # config file
    repos:
      - id: /github.com/my-org/.*/
        workflow: terragrunt
        apply_requirements: [approved, mergeable]
        allowed_overrides: [workflow]
        allowed_workflows: [terragrunt]
        pre_workflow_hooks:
          - run: >
              terragrunt-atlantis-config generate --output atlantis.yaml --autoplan
              --workflow terragrunt --create-workspace --parallel
    workflows:
      terragrunt:
        plan:
          steps:
            - env:
                name: TERRAGRUNT_TFPATH
                command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
            - env:
                name: TF_CLI_ARGS
                value: '-no-color'
            - run: terragrunt run-all plan --terragrunt-non-interactive --terragrunt-log-level=warn -out "$PLANFILE"
        apply:
          steps:
            - env:
                name: TERRAGRUNT_TFPATH
                command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
            - env:
                name: TF_CLI_ARGS
                value: '-no-color'
            - run: terragrunt run-all apply --terragrunt-non-interactive --terragrunt-log-level=warn "$PLANFILE"
    Repo
    atlantis.yaml
    file: generated by
    terragrunt-atlantis-config
    on
    pre_workflow_hooks
    ### Additional Context runatlantis/atlantis
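The plan/apply sequence described above can be modeled as a toy simulation. This is illustrative Python, not Terragrunt or Atlantis code; names and values are made up. The key point it demonstrates: a planfile freezes dependency inputs at plan time, so the first apply of mod_B replays the mock value even though mod_A has produced a real output by then.

```python
# Toy model: planfiles capture dependency inputs at plan time.
MOCK_ID = "mock-id"  # stands in for terragrunt mock_outputs

planfiles = {}
outputs = {}

def plan(mod, dep=None):
    # Fall back to the mock if the dependency has no real output yet.
    value = outputs.get(dep, MOCK_ID) if dep else None
    planfiles[mod] = {"input_id": value}

def apply(mod, produces=None):
    # Applies strictly from the stored planfile -- no re-planning.
    applied = planfiles[mod]["input_id"]
    if produces is not None:
        outputs[mod] = produces
    return applied

# atlantis plan: both projects are planned; mod_A has no outputs yet.
plan("mod_A")
plan("mod_B", dep="mod_A")
# atlantis apply: mod_A succeeds, but mod_B replays the stale mock input,
# so a second plan/apply round is needed.
apply("mod_A", produces="id-123")
print(apply("mod_B"))
```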
  • g

    GitHub

    07/27/2025, 9:03 PM
#5680 Adopt official ngrok image in Docker compose Issue created by bschaatsbergen ### Overview of the Issue The Docker compose setup currently uses an unofficial ngrok Docker image, by
    wernight
    — which hasn't been updated in a while and doesn't have proper arm/v8 support. A fix would be to simply replace the
    wernight/ngrok
    image with
    ngrok/ngrok
. ### Reproduction Steps On macOS Sequoia 15.5 (Apple Silicon)
    docker-compose up --detach
    fails with the below logs: ### Logs
    ~/github/bschaatsbergen/atlantis> docker-compose up --detach
    [+] Running 1/1
     ✔ ngrok Pulled                                                                                                                                                                                           1.1s
    [+] Running 5/5
     ✔ Network atlantis_default                                                                                                                             Created                                           0.0s
     ✔ Container atlantis-redis-1                                                                                                                           Started                                           0.3s
     ✔ Container atlantis-atlantis-1                                                                                                                        Started                                           0.4s
     ✔ Container atlantis-ngrok-1                                                                                                                           Started                                           0.4s
     ! ngrok The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
    And when running
    docker-compose logs --follow
    you can see the issue:
    atlantis-1  | No files found in /docker-entrypoint.d/, skipping
    atlantis-1  | {"level":"info","ts":"2025-07-27T20:58:02.412Z","caller":"server/server.go:342","msg":"Supported VCS Hosts: Github","json":{}}
    atlantis-1  | {"level":"info","ts":"2025-07-27T20:58:02.710Z","caller":"server/server.go:503","msg":"Utilizing BoltDB","json":{}}
    atlantis-1  | {"level":"info","ts":"2025-07-27T20:58:02.722Z","caller":"policy/conftest_client.go:168","msg":"failed to get default conftest version. Will attempt request scoped lazy loads DEFAULT_CONFTEST_VERSION not set","json":{}}
    atlantis-1  | {"level":"info","ts":"2025-07-27T20:58:02.727Z","caller":"server/server.go:1120","msg":"Atlantis started - listening on port 4141","json":{}}
    atlantis-1  | {"level":"info","ts":"2025-07-27T20:58:02.728Z","caller":"scheduled/executor_service.go:51","msg":"Scheduled Executor Service started","json":{}}
    ngrok-1     | http - start an HTTP tunnel
    ngrok-1     |
    ngrok-1     | USAGE:
    ngrok-1     |   ngrok http [address:port | port] [flags]
    ngrok-1     |
    ngrok-1     | AUTHOR:
    ngrok-1     |   ngrok - <support@ngrok.com>
    ngrok-1     |
    ngrok-1     | COMMANDS:
    ngrok-1     |   config          update or migrate ngrok's configuration file
    ngrok-1     |   http            start an HTTP tunnel
    ngrok-1     |   tcp             start a TCP tunnel
    ngrok-1     |   tunnel          start a tunnel for use with a tunnel-group backen
    ngrok-1     |
    ngrok-1     | EXAMPLES:
    ngrok-1     |   ngrok http 80                                                 # secure public URL for port 80 web server
    ngrok-1     |   ngrok http --domain baz.ngrok.dev 8080                        # port 8080 available at baz.ngrok.dev
    ngrok-1     |   ngrok tcp 22                                                  # tunnel arbitrary TCP traffic to port 22
    ngrok-1     |   ngrok http 80 --oauth=google --oauth-allow-email=foo@foo.com  # secure your app with oauth
    ngrok-1     |
    ngrok-1     | Paid Features:
ngrok-1     |   ngrok http 80 --domain mydomain.com                           # run ngrok with your own custom domain
    ngrok-1     |   ngrok http 80 --allow-cidr 1234:8c00::b12c:88ee:fe69:1234/32  # run ngrok with IP policy restrictions
ngrok-1     |   Upgrade your account at https://dashboard.ngrok.com/billing/subscription to access paid features
    ngrok-1     |
ngrok-1     | Upgrade your account at https://dashboard.ngrok.com/billing/subscription to access paid features
    ngrok-1     |
    ngrok-1     | Flags:
    ngrok-1     |   -h, --help      help for ngrok
    ngrok-1     |
    ngrok-1     | Use "ngrok [command] --help" for more information about a command.
    ngrok-1     |
ngrok-1     | ERROR:  authentication failed: Your ngrok-agent version "3.6.0" is too old. The minimum supported agent version for your account is "3.7.0". Please update to a newer version with `ngrok update`, by downloading from https://ngrok.com/download, or by updating your SDK version. Paid accounts are currently excluded from minimum agent version requirements. To begin handling traffic immediately without updating your agent, upgrade to a paid plan: https://dashboard.ngrok.com/billing/subscription.
    ngrok-1     | ERROR:
    ngrok-1     | ERROR:  ERR_NGROK_121
    ngrok-1     | ERROR:
    q^C
    runatlantis/atlantis
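The proposed fix is essentially a one-line image swap in the compose file. A minimal sketch follows; the service wiring and port are illustrative, and it assumes the official ngrok/ngrok image's convention of taking the tunnel command as container arguments and reading NGROK_AUTHTOKEN from the environment:

    services:
      ngrok:
        image: ngrok/ngrok:latest   # replaces the unmaintained wernight/ngrok
        command: ["http", "atlantis:4141"]
        environment:
          NGROK_AUTHTOKEN: ${NGROK_AUTHTOKEN}
        ports:
          - "4040:4040"             # ngrok web inspection UI

The official image is multi-arch, which also resolves the linux/amd64 vs linux/arm64/v8 platform warning above.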
  • g

    GitHub

    07/31/2025, 10:34 PM
#5665 Atlantis v0.35 has breaking changes around YAML anchor Issue created by okkez ### Overview of the Issue YAML anchor configurations in
    atlantis.yaml
files that worked in Atlantis 0.34.0 now fail with duplicate key errors in Atlantis 0.35.0. This is a breaking change that affects users who utilize YAML anchors and aliases to reduce duplication in their Atlantis configurations. The root cause is the migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml in version 0.35.0, which introduced stricter YAML parsing that now detects duplicate keys that were previously allowed.

### Reproduction Steps
1. Create an atlantis.yaml file using YAML anchors and aliases that results in duplicate keys after anchor resolution
2. Use this configuration with Atlantis 0.34.0 - it works correctly
3. Upgrade to Atlantis 0.35.0 and run the same configuration - it fails with duplicate key errors

### Example Configuration

    version: 3
    automerge: true
    parallel_plan: true
    parallel_apply: true
    abort_on_execution_order_fail: true
    projects:
      - &project_template
        name: template
        branch: /^master$/
        dir: template
        repo_locks:
          mode: on_apply # on_plan, on_apply, disabled
        custom_policy_check: false
        autoplan:
          when_modified:
            - "*.tf"
            - "../modules/**/*.tf"
            - ".terraform.lock.hcl"
          enabled: true
        plan_requirements:
          - undiverged
        apply_requirements:
          - mergeable
          - approved
          - undiverged
        import_requirements:
          - mergeable
          - approved
          - undiverged
      - <<: *project_template
        name: project1
        dir: terraform/aws/project1/
        workflow: terraform
        # snip...

### Logs
    parsing atlantis.yaml: [37:5] duplicate key "name"
      34 |       - undiverged
      35 | 
      36 |   - <<: *project_template
    > 37 |     name: project1
               ^
      38 |     dir: terraform/aws/project1/
      39 |     workflow: terraform
      40 | 
      41 |
    ### Environment details Atlantis version: 0.35.0 (issue present), 0.34.0 (working) Latest version test: Issue is present in the latest version (0.35.0) Deployment method: N/A (affects all deployment methods) Atlantis flags: N/A (affects YAML parsing regardless of flags) Atlantis server-side config file: N/A (issue is with repo-level atlantis.yaml) Repo
    atlantis.yaml
    file
: See example above - any configuration using YAML anchors that results in duplicate keys after anchor resolution Additional environment info: This is a parsing issue that affects all environments ### Additional Context • Breaking change introduced in PR #5579: Migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml v1.17.1 • Specific commit: 8639729 - "Replace gopkg.in/yaml.v3 with github.com/goccy/go-yaml" • Parser change: The new library uses
    yaml.Strict()
    mode which enables stricter validation • Impact: Users who have been using YAML anchors successfully in their atlantis.yaml files will experience breaking changes when upgrading to 0.35.0 runatlantis/atlantis
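The failure mode is easiest to see by treating `<<: *project_template` as a dict merge, which is effectively how permissive parsers resolve YAML merge keys. A stdlib Python sketch, using names from the example configuration above (this illustrates the semantics, it is not either YAML library):

```python
# Under a permissive parser, `- <<: *project_template` followed by
# explicit keys behaves like a dict merge where the explicit keys win.
# A strict parser instead sees "name" twice and rejects the document.
project_template = {
    "name": "template",
    "branch": "/^master$/",
    "dir": "template",
}

# Equivalent of `- <<: *project_template` plus per-project overrides:
project1 = {
    **project_template,
    "name": "project1",
    "dir": "terraform/aws/project1/",
    "workflow": "terraform",
}
print(project1["name"])  # the override wins under merge semantics
```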
  • g

    GitHub

    08/01/2025, 5:44 PM
#5690 [DOC] Document and support 'env/<workspace>.tfvars' feature that is already implemented and has existed for years Issue created by brandon-fryslie • I'd be willing to implement this feature (contributing guide) Describe the user story As a developer using Atlantis, I'd love to take advantage of features that are core to Atlantis, well tested, and have apparently existed for a long time, as opposed to implementing something similar but not quite compatible. In this case, I'm referring to the code at atlantis/server/core/runtime/plan_step_runner.go line 117 (commit 42e1427): // Check if env/{workspace}.tfvars exist and include it. This is a use-case At a previous company, I implemented a lot of Terraform before I knew about this. Now, I'm laying the groundwork for another potentially large Terraform implementation and wanted to design this one to be compatible. Across hundreds of modules, being in alignment is a significant benefit. I went to look for the docs and couldn't find anything. I looked at the code and apparently it's just an undocumented feature (I know there are several). I spent a lot of effort implementing something very similar a couple of years ago. My implementation was a directory named
    environments/<env-name>/terraform.tfvars
    . The Atlantis convention is
env/<env-name>.tfvars
. Very similar. The effort was implementing the repo-level config across 60+ root modules. This was before you could enable auto planning w/ a repo-level config of any sort, so the entire thing had to be generated on the fly every run. This is an error-prone process: if it fails, Atlantis just won't plan anything. Anyway, it's hard to say it would have eliminated that work (as I also built a dependency graph and implemented module-level Atlantis config files to have more granular control), but I believe for many use cases it could completely eliminate an entire category of custom configuration that is required today. And the thing is, it's already there. It's tested. The comment makes it sound like it's going to stay there. So I'm not seeing why it's not in the docs somewhere. Not everyone is going to read all the code just to use Atlantis. Describe the solution you'd like Document this feature. Describe the drawbacks of your solution People will know about it and therefore might ask you questions or complain about it. Describe alternatives you've considered The alternative would be not telling anyone that this exists, and yes, I've considered it. I suppose it's not good enough because I tend to want to help people even when it has no benefit to me. I think this feature is super useful, and the benefit for some use cases, plus the reduced need for a completely custom repo-level config, is going to far outweigh the costs of documenting this relatively benign and simple feature and dealing with a few people who are too confused to use it properly. runatlantis/atlantis
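The convention the quoted comment describes can be sketched as follows. This is a hedged Python re-implementation for illustration, not the actual Go in plan_step_runner.go; it assumes the standard terraform -var-file mechanism as the way such a file would be consumed:

```python
# Sketch: if env/<workspace>.tfvars exists in the project dir, include it
# in the plan arguments via -var-file. Function name is hypothetical.
from pathlib import Path

def extra_plan_args(project_dir, workspace):
    tfvars = Path(project_dir) / "env" / f"{workspace}.tfvars"
    if tfvars.is_file():
        return ["-var-file", str(tfvars)]
    return []
```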
  • g

    GitHub

    08/02/2025, 9:11 PM
#5629 --default-tf-version does not take precedence over require_version >= Issue created by eneves-emarketer When using Atlantis 0.33.0, even with the --default-tf-version flag set (actually ATLANTIS_DEFAULT_TF_VERSION in docker compose, set to v1.9.8, with ATLANTIS_ALLOW_TERRAFORM_DOWNLOADS = true), the terraform version used in plan and apply was 1.12.2, because the terraform code has required_version = ">= 1.1.0". No terraform version is enforced in the atlantis.yaml file. Per the documentation, the default tf version flag should enforce the expected version. runatlantis/atlantis
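The observed behavior is consistent with constraint resolution: an open lower bound like >= 1.1.0 matches every newer release, so a resolver that prefers the highest matching version picks 1.12.2 over the lower default of 1.9.8. A stdlib sketch of that selection logic (the version list is illustrative, and this is not Atlantis's actual resolver):

```python
# Pick the highest available version satisfying a ">= minimum" constraint.
def highest_matching(available, minimum):
    def key(v):  # compare versions numerically, not lexically
        return tuple(int(p) for p in v.split("."))
    return max((v for v in available if key(v) >= key(minimum)), key=key)

available = ["1.1.0", "1.9.8", "1.12.2"]
print(highest_matching(available, "1.1.0"))  # 1.12.2, not the 1.9.8 default
```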
  • g

    GitHub

    08/05/2025, 9:18 AM
#5693 `plan` is not invalidated/re-run when PR is re-opened Issue created by nightmarlin-wise ### Overview of the Issue When a GitHub PR is closed and then re-opened, atlantis does not invalidate any previous checks against that PR - allowing
    atlantis apply
    to be run while in an invalid state. While the command does then fail, this may cause confusion as the error message simply states
    stat /home/atlantis/.data/repos/{org}/{repo}/{pr-num}: no such file or directory
    . I would expect atlantis to re-run the
    plan
    check when a PR is re-opened - treating it as though it was freshly opened, acquiring all necessary locks and planning the changes. ### Reproduction Steps 1. Open a PR that modifies some atlantis-managed infrastructure 2. Wait for the
    plan
    check to pass, and approve the PR 3. Close the PR • Note: atlantis will delete the lock & plan associated to the PR 4. Re-open the PR • Note: the
    plan
    check will not be invalidated, even though atlantis deleted the plan file. it is also not re-run 5. Comment
    atlantis apply
    • Failure! The
    apply
    will fail with the message
    stat /home/atlantis/.data/repos/{org}/{repo}/{pr-num}: no such file or directory
    , as the resources were deleted when the PR was closed ### Logs { "level": "error", "ts": "2025-08-04T13:28:31.652Z", "caller": "events/instrumented_project_command_builder.go:75", "msg": "Error building apply commands: stat /home/atlantis/.data/repos/transferwise/REDACTED/1688: no such file or directory", "json": {}, "stacktrace": "github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:75\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildApplyCommands\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:17\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:116\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:383" } { "level": "error", "ts": "2025-08-04T13:28:32.103Z", "caller": "events/pull_updater.go:18", "msg": "stat /home/atlantis/.data/repos/transferwise/REDACTED/1688: no such file or directory", "json": { "repo": "transferwise/REDACTED", "pull": "1688" }, "stacktrace": "github.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull\n\tgithub.com/runatlantis/atlantis/server/events/pull_updater.go:18\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:122\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:383" } ### Environment details • Atlantis version:
    v0.32.0
    • Deployment method: AWS ECS • If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: no, but the current handler implementation suggests this is still an issue ### Additional Context • https://www.runatlantis.io/docs/autoplanning does not exclude re-opening a PR • https://github.com/runatlantis/atlantis/blob/main/server/events/event_parser.go#L554-L567 does not handle the `reopened` event type • I suspect this issue can be resolved by handling this event type the same as the
    opened
    or
    ready_for_review
    events, as atlantis deletes its state on close runatlantis/atlantis
  • g

    GitHub

    08/05/2025, 2:26 PM
    #5694 Atlantis UI plan refresh Issue created by mwozniak97 ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- Describe the user story Setup: • Multiple Atlantis containers deployed on AWS ECS • Shared file system: Amazon EFS, mounted at the same path in every container • Lock database: Redis As a team, we rely on the Atlantis UI to review plan results and the full job history. During rolling updates or auto-scaling events, an individual container is stopped and replaced. After the replacement, the Jobs list in the UI is empty, and the detailed step plan output is no longer visible, even though the corresponding workflow-workspace.tfplan files are still safely stored in EFS. Describe the solution you'd like On container start-up, Atlantis should re-hydrate the UI from any plan artifacts already present in the shared storage: 1. Discovery phase • Scan the EFS mount for workflow-workspace.tfplan for every repo/workspace. 2. Re-index phase • Populate the internal job cache so that the Jobs page shows the historical entries exactly as they appeared before the container restart. The experience should mirror how locks survive restarts when Redis is used: plans and their metadata become first-class persisted resources, not ephemeral container state. Describe the drawbacks of your solution
    • Startup latency – large repositories with many historical plans could slow container boot while the index is rebuilt.
    • Metadata drift – if a plan file exists but its corresponding PR or commit has been deleted, the UI might surface “orphaned” entries; additional validation logic would be required.
    • Concurrency complexity – multiple containers running the discovery simultaneously may race to write identical metadata into Redis or memory; coordination (e.g., Redis transactions or leader election) will be needed.
    • Maintenance overhead – future changes to plan file formats or storage paths would need matching migration logic in the re-hydration code.
    Describe alternatives you've considered
    1. Separate “Archived Plans” tab
       • Keep the current Jobs list ephemeral, but add a new tab that lists plans discovered on disk.
       • Drawback: two nearly identical views can confuse users; reviewers may not know where to look first.
    2. Persist job metadata in Redis (or DynamoDB) instead of on-disk scanning
       • Write a small record to Redis each time a plan completes; at startup, rebuild the UI from Redis keys.
       • Drawback: introduces a second persistence strategy (plans on EFS, metadata in Redis); if the cache is flushed, the index is lost while files remain.
    3. Force containers to run in “sticky” mode (no rolling restarts)
       • Disable automatic task replacement so that UI state is never lost.
       • Drawback: removes the main benefit of ECS—automatic updates and rescheduling—so is not viable operationally.
    Given these trade-offs, re-hydrating the Jobs list directly from EFS strikes the best balance between user experience and architectural simplicity, staying aligned with how plan artifacts are already stored today. runatlantis/atlantis
  • g

    GitHub

    08/06/2025, 3:58 PM
    #5696 Bitbucket Cloud Atlantis Incompatibility After App Password Deprecation Issue created by oliver-vini ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue Bitbucket Cloud is deprecating app passwords on June 9, 2026, in favor of API tokens. However, Bitbucket mandates different "user" identifiers for API token authentication depending on the protocol: [Image](https://private-user-images.githubusercontent.com/51100260/475103849-5574199a-2cba-4088-a7f0-968c39fd9e64.png) • For git/HTTPS cloning: requires USERNAME:API_TOKEN • For API requests (e.g., PR comments): requires EMAIL:API_TOKEN Atlantis (using ATLANTIS_BITBUCKET_USER and ATLANTIS_BITBUCKET_TOKEN) currently applies either the email or username globally for both Git and API operations.
This creates an unrecoverable bug: set to username and only cloning works (API calls fail with 401), set to email and only API works (cloning fails). ### Reproduction Steps 1. Set up Atlantis to use Bitbucket Cloud and provide an API token for authentication. 2. Set ATLANTIS_BITBUCKET_USER to your Bitbucket username: • Git cloning works: git clone https://USERNAME:API_TOKEN@bitbucket.org/org/repo.git • Atlantis API calls to Bitbucket fail: e.g., can’t comment on PRs, gets 401 error. 3. Set ATLANTIS_BITBUCKET_USER to your Atlassian account email: • API calls from Atlantis work: e.g., commenting on PRs. • Git clone fails with authentication error. ### Logs
    running git clone --depth=1 --branch test_v035 --single-branch https://atlantis%40acme.net:<redacted>@bitbucket.org/acme/atlantis-demo.git /home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default: Cloning into '/home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default'...
    remote: You may not have access to this repository or it no longer exists in this workspace. If you think this repository exists and you have access, make sure you are authenticated.
    fatal: Authentication failed for 'https://bitbucket.org/acme/atlantis-demo.git/'
    : exit status 128
    
    # Conversely, with username:
    git clone https://atlantis-devops:API_TOKEN@bitbucket.org/acme/atlantis-demo.git
    Cloning into 'atlantis-demo'...
    remote: Enumerating objects: 171, done.
    ...
    Resolving deltas: 100% (75/75), done.
    
    # CURL API call with username:
    curl -u "atlantis-devops:API_TOKEN" -H "Content-Type: application/json" -X POST -d '{"content": {"raw": "Test comment"}}' "https://api.bitbucket.org/2.0/repositories/org/repo/pullrequests/28/comments"
    # Response:
    {"error": {"message": "Unauthorized"}}
    ### Environment details ### Additional Context Reference docs: https://support.atlassian.com/bitbucket-cloud/docs/using-api-tokens/ https://support.atlassian.com/bitbucket-cloud/docs/using-app-passwords/ runatlantis/atlantis
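A fix would likely require Atlantis to carry two identifiers for the same token: the username for clone URLs and the Atlassian account email for REST basic auth. A hedged Go sketch of that split (the struct and field names are illustrative, not real Atlantis flags or types):

```go
package main

import "fmt"

// bitbucketCreds is a hypothetical credential pair matching Bitbucket
// Cloud's API-token rules: username for git-over-HTTPS, account email
// for REST API basic auth, one token for both.
type bitbucketCreds struct {
	Username string // e.g. atlantis-devops, used in clone URLs
	Email    string // Atlassian account email, used for API calls
	Token    string
}

// cloneURL builds the git-over-HTTPS remote using the username.
func (c bitbucketCreds) cloneURL(workspace, repo string) string {
	return fmt.Sprintf("https://%s:%s@bitbucket.org/%s/%s.git",
		c.Username, c.Token, workspace, repo)
}

// apiBasicAuthUser returns the identity Bitbucket expects for REST calls.
func (c bitbucketCreds) apiBasicAuthUser() string { return c.Email }

func main() {
	c := bitbucketCreds{Username: "atlantis-devops", Email: "atlantis@acme.net", Token: "TOKEN"}
	fmt.Println(c.cloneURL("acme", "atlantis-demo")) // prints: https://atlantis-devops:TOKEN@bitbucket.org/acme/atlantis-demo.git
	fmt.Println(c.apiBasicAuthUser())                // prints: atlantis@acme.net
}
```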
  • g

    GitHub

    08/06/2025, 11:16 PM
    #5697 Gitlab mergeability checks include commit statuses from other reviews Issue created by gmartin-cloudflare ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue https://github.com/runatlantis/atlantis/blob/main/server/events/vcs/gitlab_client.go#L335-L351
    statuses, resp, err := g.Client.Commits.GetCommitStatuses(mr.ProjectID, commit, nil)
    if resp != nil {
    	logger.Debug("GET /projects/%d/commits/%s/statuses returned: %d", mr.ProjectID, commit, resp.StatusCode)
    }
    if err != nil {
    	return false, err
    }

    for _, status := range statuses {
    	// Ignore any commit statuses with 'atlantis/apply' as prefix
    	if strings.HasPrefix(status.Name, fmt.Sprintf("%s/%s", vcsstatusname, command.Apply.String())) {
    		continue
    	}
    	if !status.AllowFailure && project.OnlyAllowMergeIfPipelineSucceeds && status.Status != "success" {
    		return false, nil
    	}
    }
    In this Gitlab client code to check the mergeability of a review, an API call is made to get pipeline statuses for the commits in the review, and it specifically checks the statuses of the latest commit. If a commit is used in more than one review, it may have statuses across reviews, and this code does not filter out statuses from prior reviews. I believe this creates scenarios where failures on a commit in a prior review are 'brought forward' and block a newer review. ### Reproduction Steps 1. Create a Gitlab review on a terraform project that is configured to work with Atlantis, with a change that will succeed to plan but fail to apply. 2. Attempt to apply the plan, see it fail. 3. Open a new review with the same commit. 4. Observe that no plan pipeline runs, and attempting to apply fails immediately (even if all signals show the review is approved and mergeable) ### Logs Project name has been replaced with
    $MY_PROJECT
    to avoid leaking information about my company's repos.
    {"level":"debug","ts":"2025-08-06T18:34:16.197Z","caller":"vcs/gitlab_client.go:282","msg":"GET /projects/$MY_PROJECT/merge_requests/755/approvals returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}} {"level":"debug","ts":"2025-08-06T18:34:16.197Z","caller":"vcs/gitlab_client.go:307","msg":"Checking if GitLab merge request 755 is mergeable","json":{"repo":"$MY_PROJECT","pull":"755"}} {"level":"debug","ts":"2025-08-06T18:34:16.950Z","caller":"vcs/gitlab_client.go:310","msg":"GET /projects/$MY_PROJECT/merge_requests/755 returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}} {"level":"debug","ts":"2025-08-06T18:34:17.081Z","caller":"vcs/gitlab_client.go:328","msg":"GET /projects/5409 returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}} {"level":"debug","ts":"2025-08-06T18:34:17.210Z","caller":"vcs/gitlab_client.go:337","msg":"GET /projects/5409/commits/53d0402a25653f55336219ad2dde8dbed4600c0f/statuses returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}} ... {"level":"error","ts":"2025-08-06T18:34:19.531Z","caller":"events/instrumented_project_command_runner.go:84","msg":"Failure running apply operation: Pull request must be mergeable before running 
apply.","json":{"repo":"$MY_PROJECT","pull":"755"},"stacktrace":"github.com/runatlantis/atlantis/server/events.RunAndEmitStats\n\t/atlantis/server/events/instrumented_project_command_runner.go:84\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandRunner).Apply\n\t/atlantis/server/events/instrumented_project_command_runner.go:46\ngithub.com/runatlantis/atlantis/server/events.runProjectCmds\n\t/atlantis/server/events/project_command_pool_executor.go:48\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\t/atlantis/server/events/apply_command_runner.go:163\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\t/atlantis/server/events/command_runner.go:401"}
    ### Environment details • Atlantis version: 0.33.0 • Deployment method:
    kubectly apply
    😅 • If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: we have not, but I validated via diff on Github that the code I have linked to and provided above has not changed between the two versions. That said, I will look into scheduling an update to the latest version, as maybe some other change indirectly addresses this. • Atlantis flags:
    ATLANTIS_LOG_LEVEL="debug"
    ATLANTIS_CHECKOUT_DEPTH="25"
    ATLANTIS_CHECKOUT_STRATEGY="merge"
    ATLANTIS_CONFIG="/config/files/config.yaml"
    ATLANTIS_DATA_DIR="/atlantis"
    ATLANTIS_DEFAULT_TF_VERSION="0.12.31"
    ATLANTIS_ENABLE_POLICY_CHECKS="true"
    ATLANTIS_FAIL_ON_PRE_WORKFLOW_HOOK_ERROR="true"
    ATLANTIS_PORT="4141"
    Atlantis server-side config file: I can't provide this, it has a ton of stuff that I'm not allowed to put in a public github issue. Repo
    atlantis.yaml
    file: Same as above Any other information you can provide about the environment/deployment (efs/nfs, aws/gcp, k8s/fargate, etc): Nothing much...it runs in a k8s cluster on a Statefulset. ### Additional Context • Our Gitlab instance is self-hosted, on version 17.11 • We have multiple terraform projects, but this behavior seems to happen almost entirely to the one project the logs come from. I'm wondering if there are any configurations that could come into play here. runatlantis/atlantis
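One possible mitigation for the carried-over statuses described above is to consider only statuses raised against this MR's source branch, using the `Ref` field GitLab attaches to each commit status. A self-contained sketch, assuming `Ref` is reliably populated for the statuses involved (that assumption needs verifying against the GitLab API; the types below are minimal stand-ins for the go-gitlab ones):

```go
package main

import "fmt"

// commitStatus is a minimal stand-in for the fields of go-gitlab's
// CommitStatus that matter here.
type commitStatus struct {
	Name, Status, Ref string
	AllowFailure      bool
}

// blocksMerge sketches the proposed filter: a non-success status only
// blocks merging if it belongs to this MR's source branch, so failures
// from a prior review that shares the same commit are ignored.
func blocksMerge(statuses []commitStatus, sourceBranch string, pipelineMustSucceed bool) bool {
	for _, s := range statuses {
		if s.Ref != "" && s.Ref != sourceBranch {
			continue // status was raised for a different review/branch
		}
		if !s.AllowFailure && pipelineMustSucceed && s.Status != "success" {
			return true
		}
	}
	return false
}

func main() {
	old := commitStatus{Name: "atlantis/plan", Status: "failed", Ref: "old-branch"}
	cur := commitStatus{Name: "ci", Status: "success", Ref: "new-branch"}
	fmt.Println(blocksMerge([]commitStatus{old, cur}, "new-branch", true)) // prints: false
}
```

Equivalently, the existing `GetCommitStatuses` call could pass a ref filter in its options instead of filtering client-side, if the instance's API honors it for externally-reported statuses.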
  • g

    GitHub

    08/07/2025, 6:43 PM
    #5615 Atlantis should post Pre Workflow Hooks failures if FailOnPreWorkflowHook is enabled Issue created by mowirth ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story If a pre-workflow hook fails, its status is currently reported to the VCS Provider. However, the reason for the failure is not reported back to the end user, making it very hard to debug issues leading to the failure of the pre-workflow hook without having access to the atlantis logs. Describe the solution you'd like Atlantis should be able to post the output of the pre-workflow errors to the VCS provider (for example, Github). This would allow developers to see both that there was a failure and its reason, rather than assuming Atlantis crashed without ever generating a plan. This feature can be protected by a feature flag, in case the pre-workflow hook result may contain sensitive information. Describe the drawbacks of your solution Describe alternatives you've considered Only relying on pipeline status is not sufficient, as it is very quickly overlooked and potentially hidden behind hundreds of other pipeline statuses. Furthermore, the pipeline status does not provide additional information about the failure reason. runatlantis/atlantis
    • 1
    • 1
  • g

    GitHub

    08/11/2025, 11:00 AM
    #5704 Add webhook support for atlantis plan results Issue created by uplus-hjk ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story As a DevOps engineer, I want Atlantis to send webhook notifications for the atlantis plan event, so that I can integrate plan results into external systems (e.g., Slack, Teams) without having to manually check the Atlantis UI or logs. Currently, webhook notifications are available only for atlantis apply, which limits visibility during the planning stage. Describe the solution you'd like Add support for webhook notifications triggered after an atlantis plan command is executed. Describe the drawbacks of your solution 1. Additional webhook event may slightly increase outbound traffic from Atlantis. 2. Consumers of the webhook will need to update their handlers to process the new event type. Describe alternatives you've considered Manually checking the PR comments or Atlantis UI for plan results. runatlantis/atlantis
    • 1
    • 1
  • g

    GitHub

    08/13/2025, 4:53 AM
    #5707 Add webhook support for atlantis plan results Issue created by hjk1996 ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story I want Atlantis to send webhook notifications for the atlantis plan event, so that I can integrate plan results into external systems (e.g., Slack, Teams) without having to manually check the Atlantis UI or logs. Currently, webhook notifications are available only for atlantis apply, which limits visibility during the planning stage. Describe the solution you'd like Add support for webhook notifications triggered after an atlantis plan command is executed, containing: 1. Workspace, repo, and pull request info 2. Plan result status (success, error, changes detected, no changes) 3. Summary of planned changes The format could follow the existing apply webhook schema for consistency. Describe the drawbacks of your solution 1. Additional webhook event may slightly increase outbound traffic from Atlantis. 2. Consumers of the webhook will need to update their handlers to process the new event type. Describe alternatives you've considered 1. Manually checking the Atlantis UI or Github comments for plan results . runatlantis/atlantis
  • g

    GitHub

    08/13/2025, 4:50 PM
    #5708 directory is left in a locked state and can no longer be planned Issue created by grimm26 ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue I had a github pull request that caused a
    The default workspace at path foo is currently locked by another command that is running for this pull request.
    by rapidly pushing back-to-back commits that caused an autoplan. After this, the lock never cleared. I did an
    atlantis unlock
    comment and tried to plan again. It still claimed that that one directory was locked. I discarded plan and locks from the web UI. Same effect.
    /api/locks
    shows the lock gone, but trying a plan says it is locked and then it does show up in
    /api/locks
    . I unlocked again to remove it from locks and then ran
    strings
    on the
    atlantis.db
    file, which shows that PR with that directory with a status of 5, while other directories in that PR that did plan show status 1. ### Reproduction Steps Hard to say, because I have had this happen before where someone blocks themselves with rapid commits pushed, but it resolves itself. ### Environment details running atlantis 0.35.0 on eks ### Additional Context runatlantis/atlantis