# github-issues

    GitHub

    10/17/2025, 1:26 PM
    #5894 team_authz passes teams as unquoted arguments to sh -c, and it breaks on special characters (like parentheses) causing it to return "Error: User @user does not have permissions to execute 'plan' command." Issue created by rfaurevincent-wiser ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue
team_authz:
  command: "/etc/atlantis/scripts/admin-auth.sh"
    debug log:
    {
      "level": "debug",
      "ts": "2025-10-17T12:50:35.389Z",
      "caller": "runtime/external_team_allowlist_runner.go:53",
      "msg": "error: exit status 2: running \"sh -c /etc/atlantis/scripts/admin-auth.sh plan MyOrg/team-a MyOrg/Team-B MyOrg/team-b MyOrg/Developers MyOrg/developers MyOrg/Developers - Product Public Repos MyOrg/developers-product-public-repos MyOrg/Product Tech Leads MyOrg/product-tech-leads MyOrg/Product Mobile App MyOrg/product-mobile-app MyOrg/Analytics Team MyOrg/analytics-team MyOrg/Product MyOrg/product MyOrg/Engineering MyOrg/engineering MyOrg/Product Management (PM) MyOrg/product-management-pm MyOrg/Administrator MyOrg/administrator\": \nsh: syntax error: unexpected \"(\"\n",
      "json": {}
    }
    Atlantis comment on PR:
    Error: User @user does not have permissions to execute 'plan' command.
This error should never actually be reached in normal operation. Removing the user from the team with parentheses in the name restores normal functionality. ### Reproduction Steps Set up any
    team_authz
    script (even just
    echo "pass"; exit 0
    ) and comment
    atlantis plan
with a user in a team with special characters. Anything else should be irrelevant; the error is scoped to that specific
    sh -c
command. In our case the team is on GitHub, but #5314 appears to be the same problem on GitLab. I will try to send a PR to fix this. runatlantis/atlantis
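The quoting fix the reporter hints at can be sketched in Go (a sketch only, not the actual Atlantis code; `runTeamAuthz` is a hypothetical helper): pass the command and team names to `sh` as positional parameters rather than splicing them into the `-c` string, so a name like `Product Management (PM)` reaches the script as one shell-safe argument.

```go
package main

import (
	"fmt"
	"os/exec"
)

// runTeamAuthz invokes an external authz script the safe way: the script
// path stays in the -c string, but the command and every team name are
// passed as positional parameters ($1, $2, ...), so the shell never
// re-parses them and parentheses or spaces cannot break the invocation.
func runTeamAuthz(script, command string, teams []string) (string, error) {
	args := []string{"-c", script + ` "$@"`, "sh", command}
	args = append(args, teams...)
	out, err := exec.Command("sh", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// `printf '%s\n'` stands in for /etc/atlantis/scripts/admin-auth.sh:
	// it echoes each argument on its own line, showing that the
	// parenthesized team name arrives intact as a single argument.
	out, err := runTeamAuthz(`printf '%s\n'`, "plan",
		[]string{"MyOrg/Product Management (PM)", "MyOrg/team-a"})
	fmt.Println(out, err)
}
```

Compare this with the failing form in the log above, where the whole string `admin-auth.sh plan MyOrg/Product Management (PM) ...` is handed to `sh -c` and re-parsed.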

    GitHub

    10/20/2025, 7:42 PM
    #5899 [Refactor] Rewrite the `ExecutorService` to a cron supported scheduler Issue created by ramonvermeulen ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story Currently, Atlantis handles scheduled tasks through the ExecutorService, which manages two main use-cases: 1. Publishing runtime stats every 10 seconds via RuntimeStatsCollector. 2. Rotating GitHub tokens every 30 seconds via GithubTokenRotator. While this works for current needs, the scheduling capabilities are quite basic and lack cron support. Enhancing this would make it easier to implement features like: • #3245 (Drift Detection) • #916 (Plugin Cache Clean-up) For both of these features it would be beneficial to have a scheduler with cron support on the server. Describe the solution you'd like I suggest replacing the
    ExecutorService
    with a
    ScheduleManager
    singleton that uses gocron Scheduler. Jobs would be registered with the
    ScheduleManager
but remain fully decoupled from it. The gocron package provides: • Cron job support (including a CronJob type) • Various other job types • Built-in job queues, limits on concurrent jobs, and other features useful for scheduling server-side tasks (see examples) I did some drafting at: ramonvermeulen/atlantis@main...f/refactor-scheduler to illustrate the direction (note: this is an early proof of concept, and far from a full implementation). Describe the drawbacks of your solution • Adds a new dependency:
github.com/go-co-op/gocron
    (a well-maintained and widely used package, but still it is a new dependency) • Requires refactoring
    ExecutorService
    into
    ScheduleManager
    , which will need thorough testing to ensure GitHub token rotation and stats publishing continue working reliably. Describe alternatives you've considered I looked at making the scheduler completely separate from the Atlantis server (multi-container setup, similar to Airflow), but this would require significant changes and doesn't align with Atlantis's single-server approach. runatlantis/atlantis

    GitHub

    10/22/2025, 8:58 PM
    #3891 --skip-clone-no-changes does not work with fork PRs Issue created by Ulminator ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue --skip-clone-no-changes does not seem to work with fork PRs. ### Reproduction Steps • Start the Atlantis server • Create a fork of a base repo, matching the regex in the repos.yaml below. (VCS used: GitHub) • Create a PR against the base repo • An
    atlantis.yaml
    file is present that has a project defined • The files changed in this PR are not in any project directories defined in the
    atlantis.yaml
. From the above, I would expect the repo not to be cloned, since the changes are to files outside the defined project. However, if you look at the Atlantis data dir you will see that the fork has been cloned. ### Logs (server logs; --debug can also be appended to Atlantis comments, e.g. atlantis plan --debug)
    {"level":"debug","ts":"2023-10-23T21:42:10.920Z","caller":"server/middleware.go:45","msg":"POST /events – from 127.0.0.1:38102","json":{}}
    {"level":"debug","ts":"2023-10-23T21:42:10.920Z","caller":"events/events_controller.go:103","msg":"handling GitHub post","json":{}}
    {"level":"debug","ts":"2023-10-23T21:42:10.920Z","caller":"events/events_controller.go:169","msg":"request valid","json":{"gh-request-id":"X-Github-Delivery=04fcd390-71ed-11ee-8bda-6f43a7c959e3"}}
    {"level":"debug","ts":"2023-10-23T21:42:10.921Z","caller":"events/events_controller.go:423","msg":"identified event as type \"updated\"","json":{"gh-request-id":"X-Github-Delivery=04fcd390-71ed-11ee-8bda-6f43a7c959e3"}}
    {"level":"debug","ts":"2023-10-23T21:42:10.921Z","caller":"server/middleware.go:72","msg":"POST /events – respond HTTP 200","json":{}}
    {"level":"debug","ts":"2023-10-23T21:42:10.921Z","caller":"vcs/github_client.go:143","msg":"[attempt 1] GET /repos/<redacted_base_repo>/pulls/<pull_number>/files","json":{}}
    {"level":"debug","ts":"2023-10-23T21:42:11.062Z","caller":"metrics/debug.go:52","msg":"timer","json":{"name":"atlantis.github.get_modified_files.execution_time","value":0.140859106,"tags":{"base_repo":"<redacted_base_repo>","pr_number":"<pull_number>"},"type":"timer"}}
    {"level":"debug","ts":"2023-10-23T21:42:11.062Z","caller":"events/project_command_builder.go:290","msg":"1 files were modified in this pull request","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"debug","ts":"2023-10-23T21:42:11.156Z","caller":"metrics/debug.go:42","msg":"counter","json":{"name":"atlantis.github.get_modified_files.execution_success","value":1,"tags":{"base_repo":"<redacted_base_repo>","pr_number":"<pull_number>"},"type":"counter"}}
    {"level":"debug","ts":"2023-10-23T21:42:11.156Z","caller":"metrics/debug.go:42","msg":"counter","json":{"name":"atlantis.github_event.pr_synchronize.success_200","value":1,"tags":{"base_repo":"<redacted_base_repo>","pr_number":"<pull_number>"},"type":"counter"}}
    {"level":"debug","ts":"2023-10-23T21:42:11.160Z","caller":"events/project_command_builder.go:332","msg":"got workspace lock","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"info","ts":"2023-10-23T21:42:11.161Z","caller":"events/working_dir.go:230","msg":"creating dir \"/dir/.atlantis/repos/<redacted_base_repo>/<pull_number>/default\"","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/working_dir.go:262","msg":"ran: git clone --depth=1 --branch branch_name --single-branch <redacted.git> /dir/.atlantis/repos/<redacted_base_repo>/<pull_number>/default. Output: Cloning into '/root/.atlantis/repos/<redacted_base_repo>/<pull_number>/default'...","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"info","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_command_builder.go:357","msg":"successfully parsed path/to/atlantis.yaml file","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_command_builder.go:364","msg":"moduleInfo for /root/.atlantis/repos/<redacted_base_repo>/<pull_number>/default (matching \"\") = map[]","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_finder.go:185","msg":"found downstream projects for \"some_file.py\": []","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_finder.go:192","msg":"checking if project at dir \"environments/env_a\" workspace \"default\" was modified","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"info","ts":"2023-10-23T21:42:14.748Z","caller":"events/project_command_builder.go:371","msg":"0 projects are to be planned based on their when_modified config","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"debug","ts":"2023-10-23T21:42:14.748Z","caller":"metrics/debug.go:52","msg":"timer","json":{"name":"atlantis.builder.execution_time","value":3.826454988,"tags":{},"type":"timer"}}
    {"level":"info","ts":"2023-10-23T21:42:14.748Z","caller":"events/plan_command_runner.go:97","msg":"determined there was no project to run plan in","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
    {"level":"debug","ts":"2023-10-23T21:42:14.748Z","caller":"metrics/debug.go:52","msg":"timer","json":{"name":"atlantis.cmd.autoplan.execution_time","value":3.826552526,"tags":{},"type":"timer"}}
    {"level":"debug","ts":"2023-10-23T21:42:15.156Z","caller":"metrics/debug.go:42","msg":"counter","json":{"name":"atlantis.builder.execution_success","value":1,"tags":{},"type":"counter"}}
    ### Environment details If not already included, please provide the following: • Atlantis version:
    atlantis 0.24.3 (commit: 5b8ddc7) (build date: 2023-06-20T22:05:19Z)
• Deployment method: GCE • If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: No • Atlantis flags: Atlantis server-side config file:
repos:
  - id: /github\.com\/(.*?)\/repo_name/
    branch: /master/
    repo_config_file: path/to/atlantis.yaml
    plan_requirements: []
    apply_requirements: [approved, mergeable, undiverged]
    import_requirements: [approved, mergeable, undiverged]
Repo atlantis.yaml file:
version: 3
projects:
  - name: env_a
    dir: environments/env_a
    autoplan:
      when_modified: ["*.tf", "../modules/host/*.tf", ".terraform.lock.hcl"]
      enabled: true
Any other information you can provide about the environment/deployment (efs/nfs, aws/gcp, k8s/fargate, etc) Additional env vars:
    export ATLANTIS_ALLOW_FORK_PRS=true
    export ATLANTIS_RESTRICT_FILE_LIST=true
    export ATLANTIS_SILENCE_NO_PROJECTS=true
    export ATLANTIS_SILENCE_VCS_STATUS_NO_PLANS=true
    export ATLANTIS_SKIP_CLONE_NO_CHANGES=true
    export ATLANTIS_DISABLE_AUTOPLAN=false
    ### Additional Context I believe this is caused by the
    hasRepoCfg
check always failing below. That code is located at https://github.com/runatlantis/atlantis/blob/a542aa8f2015e67957c5fdb6ec994080561aa… runatlantis/atlantis
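The autoplan decision visible in the logs ("0 projects are to be planned based on their when_modified config") can be sketched as follows. This is an illustration, not Atlantis's real implementation (`projectModified` and the pattern-joining are assumptions): each changed file is matched against the project's when_modified globs resolved relative to the project dir, and with --skip-clone-no-changes the point is that this decision can be made from the VCS file list alone, before any clone.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// projectModified is a sketch of the when_modified check: it reports
// whether any changed file matches any of the project's autoplan globs,
// with patterns resolved relative to the project directory.
func projectModified(projectDir string, whenModified, changedFiles []string) bool {
	for _, f := range changedFiles {
		for _, pat := range whenModified {
			// filepath.Join also cleans patterns like "../modules/host/*.tf".
			full := filepath.Join(projectDir, pat)
			if ok, _ := filepath.Match(full, f); ok {
				return true
			}
		}
	}
	return false
}

func main() {
	when := []string{"*.tf", "../modules/host/*.tf", ".terraform.lock.hcl"}
	// some_file.py (the file in the logs above) matches nothing,
	// so no project is planned and no clone should be needed.
	fmt.Println(projectModified("environments/env_a", when, []string{"some_file.py"}))
	fmt.Println(projectModified("environments/env_a", when, []string{"environments/env_a/main.tf"}))
}
```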

    GitHub

    10/29/2025, 3:55 PM
#5916 Terragrunt Upgrade Issue created by llandmayer Overview of the Issue I'm having trouble upgrading Terragrunt in the Atlantis container. Whenever I upgrade Terragrunt from version 0.47.0 to any other version above it, Atlantis just crashes after about a minute of uptime. I'm trying to upgrade to Terragrunt v0.90.0, currently running Atlantis v0.36.0. Is this something other people are seeing, or just me? The error I get in Atlantis is below; even in debug mode it doesn't give me any more information about what's happening.
{"level":"warn","ts":"2025-10-24T14:47:07.298Z","caller":"cmd/server.go:1138","msg":"Bitbucket Cloud does not support webhook secrets. This could allow attackers to spoof requests from Bitbucket. Ensure you are allowing only Bitbucket IPs","json":{},"stacktrace":"github.com/runatlantis/atlantis/cmd.(*ServerCmd).securityWarnings\n\tgithub.com/runatlantis/atlantis/cmd/server.go:1138\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).run\n\tgithub.com/runatlantis/atlantis/cmd/server.go:826\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.func2\n\tgithub.com/runatlantis/atlantis/cmd/server.go:718\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.(*ServerCmd).withErrPrint.func5\n\tgithub.com/runatlantis/atlantis/cmd/server.go:1172\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1041\ngithub.com/runatlantis/atlantis/cmd.Execute\n\tgithub.com/runatlantis/atlantis/cmd/root.go:30\nmain.main\n\tgithub.com/runatlantis/atlantis/main.go:66\nruntime.main\n\truntime/proc.go:272"}
{"level":"info","ts":"2025-10-24T14:47:07.299Z","caller":"server/server.go:319","msg":"Supported VCS Hosts%!(EXTRA string=hosts, []models.VCSHostType=[BitbucketCloud])","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.378Z","caller":"server/server.go:472","msg":"Utilizing BoltDB","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.382Z","caller":"policy/conftest_client.go:167","msg":"failed to get default conftest version. Will attempt request scoped lazy loads DEFAULT_CONFTEST_VERSION not set","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.382Z","caller":"server/server.go:1032","msg":"Atlantis started - listening on port 4141","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.384Z","caller":"scheduled/executor_service.go:51","msg":"Scheduled Executor Service started","json":{}}
{"level":"warn","ts":"2025-10-24T14:48:01.283Z","caller":"server/server.go:1047","msg":"Received interrupt. Waiting for in-progress operations to complete","json":{},"stacktrace":"github.com/runatlantis/atlantis/server.(*Server).Start\n\tgithub.com/runatlantis/atlantis/server/server.go:1047\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).run\n\tgithub.com/runatlantis/atlantis/cmd/server.go:842\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.func2\n\tgithub.com/runatlantis/atlantis/cmd/server.go:718\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.(*ServerCmd).withErrPrint.func5\n\tgithub.com/runatlantis/atlantis/cmd/server.go:1172\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1041\ngithub.com/runatlantis/atlantis/cmd.Execute\n\tgithub.com/runatlantis/atlantis/cmd/root.go:30\nmain.main\n\tgithub.com/runatlantis/atlantis/main.go:66\nruntime.main\n\truntime/proc.go:272"}
{"level":"info","ts":"2025-10-24T14:48:01.284Z","caller":"server/server.go:1074","msg":"All in-progress operations complete, shutting down","json":{}}
    runatlantis/atlantis

    GitHub

    10/29/2025, 4:45 PM
    #5917 Support for Terraform Actions Issue created by msollanych-tt ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story End-user consumers of devops work, e.g. internal developers, often have a need for privileged users to execute tasks that they themselves are not permitted to do. This might include re-deploying a service, restarting something, or performing any other action that is not directly impactful to the "desired configuration" of a system, but is just intended to prod it back into the desired state when it has drifted in a way Terraform cannot be aware of. Terraform has a new feature coming down the pike in the form of Actions: https://mattias.engineer/blog/2025/terraform-actions-deep-dive/ https://www.reddit.com/r/Terraform/comments/1nr09sr/introduction_to_terraform_actions/ These are a boon to those of us who have been using Terraform for things that have been long considered not idiomatically part of the canonical Terraform workflow, mostly in the form of provisioning, restarting, and taking other actions on managed resources that are not necessarily state-impactful. A well configured Atlantis instance is already generally in a good place to run Terraform with a solid story around credentials management, etc. and some amount of control over what Git user can perform which Atlantis action on which repo. 
With the coming of Actions, exposing this to Atlantis in some way would provide a solution to a missing aspect of the workflow that still often requires a privileged user to go and run a command by hand. Describe the solution you'd like The primary means of interacting with Atlantis is presently via pull request comments. While I would love improvements to the web interface, or API + remote CLI to trigger such a thing, that seems like too big of an ask for the project in its current phase. The most likely solution is to add
    atlantis action
    as a PR command, with parameters matching the upstream Terraform CLI syntax. Two scenarios this should be able to be used in: • Performing an action (or running a plan) without any code change or state drift This is the new upstream change to the Terraform lifecycle and the one that I most want to see Atlantis reflect. I think one vector would be using an empty pull request (i.e. consisting of one empty commit) with commands in the comment to ask Atlantis to run a plan or perform an action in a specific directory within the repo. This could serve two purposes: 1. Request Atlantis run a plan on a given directory, without forcing any code change, in case of configuration drift 2. Request Atlantis to run a Terraform Action in a given directory This gives us the double win of being able to keep track of these actions in a PR and therefore in Git even though it does not necessarily reflect code changes. Very devops. • Performing an action in the context of a code change Not dissimilar from the above, in this case there's a conventional PR and plan, and then Atlantis can be engaged to run actions - before or after applies - with the same flexibility as the above. Lastly, and more speculatively as this needs some thought: When Atlantis runs an auto-merge on apply, it should still be possible to request additional actions post-apply (or ask for more plans, applies, etc.) Describe the drawbacks of your solution Still requires that the end-user interact with Atlantis via Git to do on-demand actions, but this PR lifecycle itself could be wrapped up by some automation. Describe alternatives you've considered The same workarounds most advanced Terraform users have used for ages: tainting and redeploying resources. Adding
    atlantis taint
    with the same empty-PR workflow could be useful as well, but since Actions are designed to make tainting less necessary, it might be wise to just skip that one. As far as the Atlantis implementation is concerned: as mentioned above, adding to the web UI or implementing an API method to trigger Atlantis to do this would both be interesting, but likely orders of magnitude more work. runatlantis/atlantis

    GitHub

    10/31/2025, 4:53 AM
    #5924 Race condition in BoltDB UnlockByPull implementation Issue created by jamengual ## Bug Report ### Description A race condition exists in the
    BoltDB.UnlockByPull()
    implementation where locks are read in a
    View
    transaction and then deleted in separate
    Update
    transactions. This creates a window where locks could be modified between the read and delete operations. ### Location File:
    server/core/boltdb/boltdb.go
    Lines: 257-284 Function:
    UnlockByPull(repoFullName string, pullNum int)
### Current Implementation
func (b *BoltDB) UnlockByPull(repoFullName string, pullNum int) ([]models.ProjectLock, error) {
	var locks []models.ProjectLock
	// ⚠️ View transaction: reads locks
	err := b.db.View(func(tx *bolt.Tx) error {
		c := tx.Bucket(b.locksBucketName).Cursor()
		for k, v := c.Seek([]byte(repoFullName)); k != nil && bytes.HasPrefix(k, []byte(repoFullName)); k, v = c.Next() {
			var lock models.ProjectLock
			if err := json.Unmarshal(v, &lock); err != nil {
				return errors.Wrapf(err, "deserializing lock at key %q", string(k))
			}
			if lock.Pull.Num == pullNum {
				locks = append(locks, lock)
			}
		}
		return nil
	})
	// ⚠️ RACE: Locks could be modified between View and Update
	for _, lock := range locks {
		if _, err = b.Unlock(lock.Project, lock.Workspace); err != nil {
			return locks, errors.Wrapf(err, "unlocking repo %s", lock.Project.RepoFullName)
		}
	}
	return locks, nil
}
### Problem
1. View transaction reads all matching locks
2. Separate Update transactions delete each lock individually
3. Race window: Between steps 1 and 2, locks could be:
• Modified by another operation
• Already deleted by another process
• Newly created (won't be deleted)
### Impact
Severity: Low to Medium
Likelihood: Low (locks are typically PR-scoped and this race is unlikely in practice)
Consequences:
• Potential to miss deleting a lock if it's modified between read and delete
• Could return outdated lock information
• Theoretical risk of orphaned locks
### Proposed Fix
Use a single
    Update
transaction for both reading and deleting:
func (b *BoltDB) UnlockByPull(repoFullName string, pullNum int) ([]models.ProjectLock, error) {
	var locks []models.ProjectLock
	err := b.db.Update(func(tx *bolt.Tx) error { // Use Update, not View
		bucket := tx.Bucket(b.locksBucketName)
		c := bucket.Cursor()
		var keysToDelete [][]byte
		// Read and collect keys to delete
		for k, v := c.Seek([]byte(repoFullName)); k != nil && bytes.HasPrefix(k, []byte(repoFullName)); k, v = c.Next() {
			var lock models.ProjectLock
			if err := json.Unmarshal(v, &lock); err != nil {
				return errors.Wrapf(err, "deserializing lock at key %q", string(k))
			}
			if lock.Pull.Num == pullNum {
				locks = append(locks, lock)
				keysToDelete = append(keysToDelete, append([]byte(nil), k...)) // Copy key
			}
		}
		// Delete within same transaction (atomic)
		for _, key := range keysToDelete {
			if err := bucket.Delete(key); err != nil {
				return errors.Wrapf(err, "deleting lock at key %q", string(key))
			}
		}
		return nil
	})
	return locks, err
}
### Benefits of Fix
1. ✅ Atomic operation: Read and delete in single transaction
2. ✅ No race window: Locks can't change between read and delete
3. ✅ Consistent state: All locks for a PR deleted together
4. ✅ Better error handling: Single transaction failure point
### Testing Recommendations
1. Add concurrent test with goroutines trying to:
• Unlock same PR simultaneously
• Lock while unlocking is in progress
2. Use Go race detector:
    go test -race
    3. Test PR close webhook with concurrent plan operations ### Related This was discovered during comprehensive lock mechanism analysis. See related documentation PR for full context on locking architecture. ### Notes • Similar pattern in Redis implementation should be reviewed • Other
    View
    +
    Update
    patterns in codebase should be audited • This is a theoretical race; no known production issues reported runatlantis/atlantis

    GitHub

    10/31/2025, 2:54 PM
#5925 When global codeql is enabled for every repository, atlantis apply will report that a PR must be approved if the PR is blocked and flag `--gh-allow-mergeable-bypass-apply` is used Issue created by nvanheuverzwijn ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue When using global code analysis in GitHub with the flag
    --gh-allow-mergeable-bypass-apply
    ,
    atlantis apply
will incorrectly report that the PR is not approved on pull requests that are in a blocked state. I was able to track down where the problem occurs. This error message (
    invalid repository name
) comes from
func (g *GithubClient) LookupRepoId(repo githubv4.String) (githubv4.Int, error)
    file
    server/events/vcs/github_client.go
    . The relevant code is this. Keep in mind that LookupRepoId will return an error if the repo parameter cannot be split.
repoSplit := strings.Split(string(repo), "/")
if len(repoSplit) != 2 {
	return githubv4.Int(0), fmt.Errorf("invalid repository name: %s", repo)
}
    I won't go into the weeds here but basically, when the flag
    --gh-allow-mergeable-bypass-apply
is set and the pull request is in a blocked state due to failing checks, Atlantis verifies that every check is passing except for the Atlantis apply checks. Atlantis makes a GraphQL query to retrieve the pull request information. See below. Among that information, we want the "checkrun" object that represents how a workflow run concluded, successful or not.
    gh api graphql -f query='query { repository(owner:"OWNER",name:"REPO") {pullRequest(number:69) {baseRef{rules(first:100){nodes{type,repositoryRuleset{enforcement},parameters{... on WorkflowsParameters{workflows{path, repositoryId}}}}}}commits(last:1){nodes{commit{statusCheckRollup{contexts(first: 100){nodes{__typename, ... on CheckRun{conclusion, name, checkSuite{conclusion,workflowRun{runNumber,file{repositoryName, path}}}}}}}}}}}}  }
From this query, we get a very big output, but what we are interested in is these lines:
    "commits": {
              "nodes": [
                {
                  "commit": {
                    "statusCheckRollup": {
                      "contexts": {
                        "nodes": [
                          {
                            "__typename": "CheckRun",
                            "conclusion": "SUCCESS",
                            "name": "Analyze (actions)",
                            "checkSuite": {
                              "conclusion": "SUCCESS",
                              "workflowRun": {
                                "runNumber": 208,
                                "file": null
                              }
                            }
                          },
    [...]
Notice how the Analyze (actions) check is structured. The workflowRun is not empty, but the workflowRun.file is null. This check run comes from global code analysis enabled in the organization settings, not from an individual workflow file. In the code path, right before we call the LookupRepoId function, we do this:
if checkRun.CheckSuite.WorkflowRun == nil {
	continue
}
In our case, the WorkflowRun is not nil but the WorkflowRun.File is nil. This results in Atlantis erroring, because it tries to look up a repo with an empty repo name. ### Reproduction Steps 1. Add global code analysis (from GitHub settings) so that every repository is automatically analysed with CodeQL. 2. Ensure that the flag
    --gh-allow-mergeable-bypass-apply
is enabled. 3. Create a pull request. 4. Make sure one of the checks, not related to Atlantis, is failing. 5. Add an
atlantis apply
comment. ### Logs
{"level":"warn","ts":"2025-10-30T18:45:00.327Z","caller":"events/apply_command_runner.go:108","msg":"unable to get pull request status: fetching mergeability status for repo: ORG/REPO, and pull number: 44: getting pull request status: invalid repository name: . Continuing with mergeable and approved assumed false","json":{"repo":"ORG/REPO","pull":"44"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427"}
### Environment details ### Additional Context runatlantis/atlantis
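The guard the reporter implies can be sketched as follows. The types here are minimal stand-ins mirroring the GraphQL shape quoted above, not the real Atlantis types (which live in server/events/vcs/github_client.go), and `shouldSkip` is a hypothetical helper, not the actual patch:

```go
package main

import "fmt"

// Minimal stand-ins for the GraphQL response shape quoted above.
type WorkflowFile struct{ RepositoryName, Path string }
type WorkflowRun struct{ File *WorkflowFile }
type CheckSuite struct{ WorkflowRun *WorkflowRun }
type CheckRun struct{ CheckSuite CheckSuite }

// shouldSkip sketches the missing guard: org-level CodeQL check runs
// have a non-nil WorkflowRun but a null File, and must be skipped
// before any repository lookup is attempted.
func shouldSkip(c CheckRun) bool {
	return c.CheckSuite.WorkflowRun == nil || c.CheckSuite.WorkflowRun.File == nil
}

func main() {
	// "Analyze (actions)" from the JSON above: workflowRun present, file null.
	orgCodeQL := CheckRun{CheckSuite{&WorkflowRun{File: nil}}}
	// An ordinary repo workflow, with a concrete file behind it.
	repoWorkflow := CheckRun{CheckSuite{&WorkflowRun{File: &WorkflowFile{"ORG/REPO", ".github/workflows/ci.yml"}}}}
	fmt.Println(shouldSkip(orgCodeQL), shouldSkip(repoWorkflow)) // true false
}
```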

    GitHub

    10/31/2025, 5:46 PM
    #5928 Server Side Config Repo - (2) configs with the same id Issue created by adamsebesta The following
    server-side config
    doesn't seem to be working for me:
- id: github.com/xxx/infrastructure
  branch: /^(development|uat|rc)$/
  apply_requirements: []
  plan_requirements: []
  allow_custom_workflows: false
  allowed_overrides: []
  workflow: default

- id: github.com/xxx/infrastructure
  branch: /^(stg|main)$/
  apply_requirements: [approved]
  plan_requirements: []
  allow_custom_workflows: false
  allowed_overrides: []
  workflow: default
    Can anyone advise how I can add
    apply_requirements
    to only specific branches of a repo - maybe I am missing something, but it always just takes the latter entry and ignores the first completely. Thanks in advance, runatlantis/atlantis
    • 1
    • 1
  • g

    GitHub

    11/05/2025, 4:21 PM
    #2164 Question: Plans to integrate with Google Cloud Source Repositories Issue created by dcernag Hello, I've been looking through the issues and I'm wondering if there are plans to integrate with Google Cloud Source Repositories, or if there's a specific reason why it's not yet supported. Thanks! runatlantis/atlantis
    • 1
    • 1
  • g

    GitHub

    11/06/2025, 10:00 AM
    #5938 Add POLICYSETNAME environment variable Issue created by r3nic1e ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story I want to run multiple policy check sets without storing them locally at the atlantis server. There is a way to do that using `--update` or adding
    --policy=path
    via
    extra_args
    . So I want to fetch our own policies for each
    policy_set
    to have separate owners/approvers. Describe the solution you'd like Add an environment variable
    POLICYSETNAME
    (like
    POLICYCHECKPATH
    ) that will provide the name of the currently running policy set. Describe the drawbacks of your solution I don't see any. Describe alternatives you've considered Probably we could use a single source of policy checks so that checks will be separated by namespaces. So specifying
    namespace
    in
    policy_set
    may work effectively the same. But I guess it will be sort of a duplicate for
    name
    field runatlantis/atlantis
  • g

    GitHub

    11/10/2025, 6:20 PM
    #5944 Pipelines left in running state blocking automerge - GitLab Issue created by MihailoPlavsic34 ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue Merge requests on GitLab cannot be applied as there are pipelines stuck in the running state. When a commit is pushed, a race condition happens between GitLab creating a merge request pipeline and Atlantis updating the commit status. We have additional checks running in .gitlab-ci defined jobs. We observed 3 possible scenarios: • Happy path command line logs | atlantis server logs: 1. When a commit is pushed to the merge request branch, GitLab creates a pipeline and associated jobs 2. This logic picks up the pipeline ID and updates the commit status accordingly 3. All pipelines associated with the commit are successful, merge checks pass, and the merge request is mergeable and can be applied • Split pipelines command line logs | atlantis server logs: 1. GitLab does not create a pipeline in time 2. Atlantis sets the Ref to the branch name instead of the MR ID (after waiting for 2 seconds) 3. GitLab creates a pipeline and associated jobs after the commit is pushed 4. Atlantis sets the commit status to running state, which creates a new pipeline and associated jobs with branch source instead of MR source 5. This pipeline is considered as latest by GitLab, so the success of atlantis workflow jobs is reported there 6. 
All pipelines associated with the commit are successful, merge checks pass, and the merge request is mergeable and can be applied • Pipeline stuck in running state command line logs | atlantis server logs: 1. GitLab does not create a pipeline in time 2. Atlantis sets the Ref to the branch name instead of the MR ID (after waiting for 2 seconds) 3. Atlantis sets the commit status to running state, which creates a new pipeline and associated jobs with branch source instead of MR source 4. GitLab creates a pipeline and associated jobs after the commit is pushed 5. GitLab considers the merge request pipeline as latest, so the success of atlantis workflow jobs is reported there 6. The branch source pipeline is left in running state 7. The merge request is not mergeable and cannot be applied ### Reproduction Steps • Have at least one GitLab job triggered on merge request push event • Push commits to the merge request branch until a pipeline stuck in running state is observed ### Environment details • Atlantis version: v0.33.0 • Deployment method: helm • If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: yes • Atlantis flags: ### Additional Context We changed the retry logic as a POC, and we have not seen this issue since. Changes made can be seen here. I would like to add some configuration that would allow us to add custom retry logic parameters in a backwards compatible way. Keep the default as is - 2s retry once, but allow the user to add custom parameters in cases like ours. runatlantis/atlantis
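    The hardcoded wait described above (retry once after 2 seconds) could be generalized to configurable polling. A minimal sketch under that assumption; the function and parameter names are invented for illustration, not Atlantis's real API:

    ```go
    package main

    import (
    	"fmt"
    	"time"
    )

    // pollForPipeline calls fn until it reports that GitLab has created
    // the merge-request pipeline, using a configurable attempt count and
    // interval instead of a fixed single 2s retry.
    func pollForPipeline(attempts int, interval time.Duration, fn func() bool) bool {
    	for i := 0; i < attempts; i++ {
    		if fn() {
    			return true
    		}
    		time.Sleep(interval)
    	}
    	return false
    }

    func main() {
    	checks := 0
    	// Stand-in for the GitLab API lookup; the pipeline "appears"
    	// only on the fourth poll.
    	lookup := func() bool {
    		checks++
    		return checks >= 4
    	}
    	// Defaults matching today's behavior would be attempts=2, interval=2s;
    	// here a user could pass higher values to ride out slow pipeline creation.
    	found := pollForPipeline(5, 10*time.Millisecond, lookup)
    	fmt.Println(found, checks)
    }
    ```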
    • 1
    • 1
  • g

    GitHub

    11/11/2025, 3:57 AM
    #5696 Bitbucket Cloud Atlantis Incompatibility After App Password Deprecation Issue created by oliver-vini ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue Bitbucket Cloud is deprecating app passwords on June 9, 2026, in favor of API tokens. However, Bitbucket mandates different "user" identifiers for API token authentication depending on the protocol: [Image](https://private-user-images.githubusercontent.com/51100260/475103849-5574199a-2cba-4088-a7f0-968c39fd9e64.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NjI4MzM3NzEsIm5iZiI6MTc2MjgzMzQ3MSwicGF0aCI6Ii81MTEwMDI2MC80NzUxMDM4NDktNTU3NDE5OWEtMmNiYS00MDg4LWE3ZjAtOTY4YzM5ZmQ5ZTY0LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTExMTElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUxMTExVDAzNTc1MVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTBhNGJjMjEyYWJjZDBiMWMzYjExMDJiZjFmMDI0OGE0NzhjZGY3YzczMDAwYjFjN2ZmODFmOTEyNjJkNzE2OGImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.gTea4XqbpSnYjr_4l8TXCvsdKSotUKXs_Ycl4WL4uUM) • For git/HTTPS cloning: requires USERNAME:API_TOKEN • For API requests (e.g., PR comments): requires EMAIL:API_TOKEN Atlantis (using ATLANTIS_BITBUCKET_USER and ATLANTIS_BITBUCKET_TOKEN) currently applies either the email or username globally for both Git and API operations. 
This creates an unrecoverable bug: set to username and only cloning works (API calls fail with 401), set to email and only API works (cloning fails). ### Reproduction Steps 1. Set up Atlantis to use Bitbucket Cloud and provide an API token for authentication. 2. Set ATLANTIS_BITBUCKET_USER to your Bitbucket username: • Git cloning works: git clone https://USERNAME:API_TOKEN@bitbucket.org/org/repo.git • Atlantis API calls to Bitbucket fail: e.g., can’t comment on PRs, gets 401 error. 3. Set ATLANTIS_BITBUCKET_USER to your Atlassian account email: • API calls from Atlantis work: e.g., commenting on PRs. • Git clone fails with authentication error. ### Logs
    Copy code
    running git clone --depth=1 --branch test_v035 --single-branch https://atlantis%40acme.net:<redacted>@bitbucket.org/acme/atlantis-demo.git /home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default: Cloning into '/home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default'...
    remote: You may not have access to this repository or it no longer exists in this workspace. If you think this repository exists and you have access, make sure you are authenticated.
    fatal: Authentication failed for 'https://bitbucket.org/acme/atlantis-demo.git/'
    : exit status 128
    
    # Conversely, with username:
    git clone https://atlantis-devops:API_TOKEN@bitbucket.org/acme/atlantis-demo.git
    Cloning into 'atlantis-demo'...
    remote: Enumerating objects: 171, done.
    ...
    Resolving deltas: 100% (75/75), done.
    
    # cURL API call with username:
    curl -u "atlantis-devops:API_TOKEN" -H "Content-Type: application/json" -X POST -d '{"content": {"raw": "Test comment"}}' "https://api.bitbucket.org/2.0/repositories/org/repo/pullrequests/28/comments"
    # Response:
    {"error": {"message": "Unauthorized"}}
    ### Environment details ### Additional Context Reference docs: https://support.atlassian.com/bitbucket-cloud/docs/using-api-tokens/ https://support.atlassian.com/bitbucket-cloud/docs/using-app-passwords/ runatlantis/atlantis
    • 1
    • 1
  • g

    GitHub

    11/11/2025, 8:18 PM
    #5949 Atlantis should make a best effort to gracefully terminate terraform processes on shutdown Issue created by aggrand ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story While it's possible to try to configure k8s to minimize disruptions to the atlantis pod, occasionally the atlantis pod will be shuffled around and atlantis will be sent a termination signal. #1051 allows for terraform commands to complete before stopping, but some apply operations are very long and often k8s will just kill the pod. This can leave terraform in a bad state, with the lock still held and resources leaked. One could configure a
    terminationGracePeriodSeconds
    very high, but if the apply is long-running then all use of Atlantis would be blocked for that period. Apparently Terraform attempts a more graceful shutdown if it receives a SIGINT. It would be useful for Atlantis to have an option, like
    TerraformGracefulShutdownSeconds
    or so. After that period has passed while draining, Atlantis would send the SIGINT. I can take a crack at a PR for this, but I wanted to check first if that approach seems reasonable to the maintainers. Describe the solution you'd like Describe the drawbacks of your solution Describe alternatives you've considered runatlantis/atlantis
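    The proposal above can be sketched with Go's os/exec: ask the process to stop with SIGINT (which lets Terraform release its state lock), and only force-kill if it hasn't exited within the grace period. This is a self-contained illustration using sleep as a stand-in for a long terraform apply; the grace-period option name is the issue author's suggestion, not an existing flag:

    ```go
    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // terminateGracefully sends SIGINT to the process, then force-kills
    // it if it has not exited within gracePeriod (the hypothetical
    // TerraformGracefulShutdownSeconds from the issue).
    func terminateGracefully(cmd *exec.Cmd, gracePeriod time.Duration) {
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()

    	// Ask politely first; Terraform cleans up and releases its lock.
    	cmd.Process.Signal(syscall.SIGINT)

    	select {
    	case <-done:
    		fmt.Println("process exited after SIGINT")
    	case <-time.After(gracePeriod):
    		// Grace period expired; last resort.
    		cmd.Process.Kill()
    		<-done
    		fmt.Println("process killed after grace period")
    	}
    }

    func main() {
    	// Stand-in for a long-running terraform apply.
    	cmd := exec.Command("sleep", "30")
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	terminateGracefully(cmd, 5*time.Second)
    }
    ```

    In the sketch, sleep exits on SIGINT well before the grace period, so the kill path is never taken.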
  • g

    GitHub

    11/11/2025, 11:33 PM
    #5950 Break up events package Issue created by lukemassa ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story As a contributor, it is often difficult to navigate and understand the interdependencies of the
    events
    package. Some data confirms it is far and away the largest package in the codebase:
    Copy code
    atlantis % for j in $(for i in $(find . -type f -name '*.go'); do dirname $i; done | sort | uniq); do echo -n "$j  "; cloc $j/*.go | grep '^Go' | awk '{print $5}'; done | sort -rnk2 | column -t
    ./server/events                           25592
    ./server/events/vcs                       6706
    ./server/events/mocks                     5674
    ./server/core/runtime                     4776
    ./server/core/config/raw                  4658
    ./server/controllers/events               4048
    ./server/core/config/valid                2828
    ./cmd                                     2334
    ./server/core/config                      2095
    Having large packages is unavoidable, but "events" has become just a "kitchen sink", and the fact that it's 4x the next largest package is to me a code smell. Describe the solution you'd like I'd like to start pulling logic into subpackages of events, in the spirit of
    vcs
    and
    command
    . I don't want to do this "just for the sake" so would be looking for semantically related packages that have a comprehensible interface they expose to the rest of the code base. Describe the drawbacks of your solution Obviously a large refactor is a risk. Also moving large numbers of critical files around makes git histories difficult. I'm just afraid the problem is only going to get worse as
    events
    becomes more and more a center of gravity. Describe alternatives you've considered Code could be pulled out a different way, like to a sibling package? But "lowering" into a subpackage seems the more obvious approach. runatlantis/atlantis
  • g

    GitHub

    11/13/2025, 8:00 AM
    #5952 Pull request must be approved according to the project's approval rules before running apply Issue created by yongzhang ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue Hi team, I experienced an issue when commenting
    atlantis apply
    in the PR:
    Copy code
    Ran Apply for dir: my-project-dir workspace: default
    
    Apply Failed: Pull request must be approved according to the project's approval rules before running apply.
    I do have this PR approved by the codeowner: [Image](https://private-user-images.githubusercontent.com/15604715/513748579-b95878e6-31aa-4766-b9fa-b2b0735e5d88.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NjMwMjYxMzMsIm5iZiI6MTc2MzAyNTgzMywicGF0aCI6Ii8xNTYwNDcxNS81MTM3NDg1NzktYjk1ODc4ZTYtMzFhYS00NzY2LWI5ZmEtYjJiMDczNWU1ZDg4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTExMTMlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUxMTEzVDA5MjM1M1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTI3MDliYTIzMTdkMTk4MzdmOTJmMWZhNTQ1M2M4NmEzOGQxYjEzNmRjODc2MmZiMmZmNWNlNWRiNThiZmRhNTMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.JE6PfXZzlTVaFJN1eyBuh7JZjvT3l79Dk-klDr55S48) Current status checks: [Image](https://private-user-images.githubusercontent.com/15604715/513749267-5054b89e-a34d-4c19-b10a-99d53c9315a3.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NjMwMjYxMzMsIm5iZiI6MTc2MzAyNTgzMywicGF0aCI6Ii8xNTYwNDcxNS81MTM3NDkyNjctNTA1NGI4OWUtYTM0ZC00YzE5LWIxMGEtOTlkNTNjOTMxNWEzLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTExMTMlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUxMTEzVDA5MjM1M1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTE3ZjA0MjQ0ZjFlNzkyNTExMDMwNzk5OGQ5NjE4OTg0ODhkMDdhOGYwYmFlNWExMDIxZWMxMjc5YTlhMDM0NjMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.HowVwbMIJ2Gban7uNq3IGwQqLkLLrwkpNnk5Q5VbrWk) I don't understand what's happening here, please help, thanks. ### Reproduction Steps ### Logs I can see error logs:
    Copy code
    Unable to check pull mergeable status, error: getting pull request status: fetching rulesets, branch protections and status checks from GraphQL: Resource not accessible by integration
    unable to get pull request status: fetching mergeability status for repo: xxx, and pull number: 62: getting pull request status: fetching rulesets, branch protections and status checks from GraphQL: Resource not accessible by integration. Continuing with mergeable and approved assumed false
    I'm using github app for auth with permissions like this: [Image](https://private-user-images.githubusercontent.com/15604715/513776991-5a41f112-1910-46ac-bff8-e6853adca8d9.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NjMwMjYxMzMsIm5iZiI6MTc2MzAyNTgzMywicGF0aCI6Ii8xNTYwNDcxNS81MTM3NzY5OTEtNWE0MWYxMTItMTkxMC00NmFjLWJmZjgtZTY4NTNhZGNhOGQ5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTExMTMlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUxMTEzVDA5MjM1M1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTU2Y2RjMDg2MDNiZTNjMTk2MWEwMDI2NWRlOWZmYTU1MGZjNTJmNDMxOTY4YThhMWMxOGQyMzIwY2FjZGUxODEmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.99zPmMcIWLv2tjB0aw6zfXMBwGmeDB8itv_8caxYGOo) ### Environment details • Atlantis version: v0.37.1 • Atlantis flags:
    --repo-config=repos.yaml --gh-allow-mergeable-bypass-apply
    Atlantis server-side config file:
    Copy code
    repos:
      - id: /.*/
        apply_requirements: [approved, mergeable]
        import_requirements: [approved, mergeable]
    ### Additional Context
    atlantis/apply
    is added as my required status check, flag
    --gh-allow-mergeable-bypass-apply
    is enabled. runatlantis/atlantis
    • 1
    • 1
  • g

    GitHub

    11/14/2025, 2:39 PM
    #5957 Module auto planning not compatible with OpenTofu syntax changes Issue created by lauraseidler ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue When using
    ATLANTIS_AUTOPLAN_MODULES
    and using OpenTofu with dynamic providers (which are currently not supported by Terraform), module dependencies cannot be loaded. ### Reproduction Steps Make Atlantis run with
    ATLANTIS_AUTOPLAN_MODULES
    on a project that has a dynamic provider configuration, which is valid for OpenTofu, for example:
    Copy code
    locals {
      projects = toset(["my-project-1", "my-project-2"])
    }
    
    provider "google" {
      for_each = local.projects
      alias    = "project"
      project  = each.key
    }
    
    data "google_service_account" "all" {
      for_each   = local.projects
      provider   = google.project[each.key]
      account_id = "my-service-account"
    }
    ### Logs
    Copy code
    error(s) loading project module dependencies: my-stack/provider.tf:52 - Invalid provider reference: Provider argument requires a provider name followed by an optional alias, like \"aws.foo\".","json":{"repo":"my-repo","pull":"24"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).getMergedProjectCfgs\n\tgithub.com/runatlantis/atlantis/server/events/project_command_builder.go:401\ngithub.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).buildAllCommandsByCfg\n\tgithub.com/runatlantis/atlantis/server/events/project_command_builder.go:532\ngithub.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).BuildAutoplanCommands\n\tgithub.com/runatlantis/atlantis/server/events/project_command_builder.go:257\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildAutoplanCommands.func1\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:29\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:71\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildAutoplanCommands\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:26\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).runAutoplan\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:94\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:319\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunAutoplanCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:251
    ### Environment details • Atlantis version: v0.37.1 runatlantis/atlantis
  • g

    GitHub

    11/16/2025, 6:24 AM
    #5962 Do not set command name as part of Project*CommandRunner Issue created by lukemassa ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story Right now for each command there is a block like this:
    Copy code
    func (p *DefaultProjectCommandRunner) Apply(ctx command.ProjectContext) command.ProjectResult {
    	applyOut, failure, err := p.doApply(ctx)
    	return command.ProjectResult{
    		Command:           command.Apply,
    		Failure:           failure,
    		Error:             err,
    		ApplySuccess:      applyOut,
    		RepoRelDir:        ctx.RepoRelDir,
    		Workspace:         ctx.Workspace,
    		ProjectName:       ctx.ProjectName,
    		SilencePRComments: ctx.SilencePRComments,
    	}
    }
    The runner is "inventing" the name of the command that it assumed called it. Higher up in the call stack the command is known, then we "forget it" and sneak it back in here. This causes bugs like #5934, and in general doesn't make a lot of sense. In addition the ctx.* content (which again is returned by all the analogous commands) is a code smell here; ctx is being passed in to this function, the caller clearly knows things like ProjectName or Workspace, why are we telling it? The issue is that the type ProjectResult is used in many places, and here is doing double duty of summarizing what happened in a given run, as well as the output from a given command. Describe the solution you'd like These functions should return a pared down ProjectCommandOutput, that the caller of this function should then "decorate" with the additional information. Describe the drawbacks of your solution It's a refactor so there's some risk there, but there's a lot of test coverage so should be ok. Describe alternatives you've considered I can't think of any runatlantis/atlantis
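    The proposed split can be sketched as follows. The stub types below only stand in for Atlantis's command package (assumed shapes, not the real definitions); the point is that the runner returns a pared-down output and the caller, which already knows the command name and context, decorates it:

    ```go
    package main

    import "fmt"

    // Stand-ins for the command package types (assumptions for illustration).
    type Name string

    type ProjectContext struct {
    	RepoRelDir, Workspace, ProjectName string
    }

    // Pared-down output: only what the runner itself learned.
    type ProjectCommandOutput struct {
    	Failure, ApplySuccess string
    	Error                 error
    }

    // Full result, as the rest of the codebase consumes it today.
    type ProjectResult struct {
    	Command                            Name
    	Failure, ApplySuccess              string
    	Error                              error
    	RepoRelDir, Workspace, ProjectName string
    }

    // decorate is the proposed caller-side step: it fills in the command
    // name and context fields the runner currently "invents".
    func decorate(cmd Name, ctx ProjectContext, out ProjectCommandOutput) ProjectResult {
    	return ProjectResult{
    		Command:      cmd,
    		Failure:      out.Failure,
    		ApplySuccess: out.ApplySuccess,
    		Error:        out.Error,
    		RepoRelDir:   ctx.RepoRelDir,
    		Workspace:    ctx.Workspace,
    		ProjectName:  ctx.ProjectName,
    	}
    }

    func main() {
    	ctx := ProjectContext{RepoRelDir: ".", Workspace: "default", ProjectName: "infra"}
    	out := ProjectCommandOutput{ApplySuccess: "Apply complete!"}
    	res := decorate("apply", ctx, out)
    	fmt.Println(res.Command, res.ProjectName, res.ApplySuccess)
    }
    ```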
  • g

    GitHub

    11/18/2025, 9:43 AM
    #5967 Auto-merge retry mechanism on errors Issue created by Proximyst ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story GitHub's API isn't always perfect in terms of when a merge can happen. It's not uncommon for us to run into HTTP 403
    Resource not accessible by integration
    errors after an
    atlantis apply
    . It's relatively rare, but you still notice this at scale in an organisation. Describe the solution you'd like When automerging is used on
    atlantis apply
    commands, have a mechanism to retry (e.g. up to 3 times) on some common/configurable errors. This can be opt-in. Describe the drawbacks of your solution • Not all errors are transient: sometimes, an error like 403 really isn't transient, and it truly cannot be merged. These cases can result in N auto-merge retried requests. • Not everyone wants retries on their Atlantis deployments. Describe alternatives you've considered We can also implement an automatic GitHub Action workflow that runs on PR comments from Atlantis when it comments with an error like this. This is definitely useful, but it isn't all too generic, and is a patch to a case that can often be solved by just trying once more after some 5 seconds. runatlantis/atlantis
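    The opt-in retry described above could look roughly like this sketch (the retry helper and its parameters are invented for illustration; a real implementation would also classify which errors are worth retrying):

    ```go
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retry calls fn up to attempts times, sleeping delay between tries,
    // and returns the last error if all attempts fail.
    func retry(attempts int, delay time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	// Stand-in for the VCS merge call that intermittently returns
    	// a transient 403; it succeeds on the third attempt.
    	merge := func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("403 Resource not accessible by integration")
    		}
    		return nil
    	}
    	err := retry(3, 10*time.Millisecond, merge)
    	fmt.Println(calls, err)
    }
    ```

    As the issue notes, the drawback is that a genuinely non-transient 403 would still burn all N attempts before failing.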
  • g

    GitHub

    11/20/2025, 1:13 AM
    #5972 Nil Pointer Dereference in `atlantis version` command execution Issue created by Adamovix ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue Command
    atlantis version
    under PR threw panic:
    Copy code
    runtime error: invalid memory address or nil pointer dereference
    runtime/panic.go:262 (0x472a98)
    runtime/signal_unix.go:917 (0x472a68)
    github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:509 (0xec9497)
    github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:422 (0xec894d)
    github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:397 (0xec8686)
    github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:370 (0xec7d76)
    github.com/runatlantis/atlantis/server/core/runtime/version_step_runner.go:30 (0x1095193)
    github.com/runatlantis/atlantis/server/events/project_command_runner.go:813 (0x118ad62)
    github.com/runatlantis/atlantis/server/events/project_command_runner.go:699 (0x1188e95)
    github.com/runatlantis/atlantis/server/events/project_command_runner.go:304 (0x11842dd)
    github.com/runatlantis/atlantis/server/events/project_command_pool_executor.go:48 (0x1182bbb)
    github.com/runatlantis/atlantis/server/events/version_command_runner.go:52 (0x11912b0)
    github.com/runatlantis/atlantis/server/events/command_runner.go:401 (0x115478c)
    runtime/asm_amd64.s:1700 (0x478960)
    During investigation I found two distinct bugs with the same source: 1. Nil pointer panic when running
    version
    command after Atlantis restart or cache clear 2. Version command fails on fresh Atlantis instances with existing PRs. From my investigation, when the terraform binary cache is cleared or Atlantis restarts: 1. The versions map has no cached terraform binaries (atlantis/server/core/terraform/tfclient/terraform_client.go line 502 in 2b2fd1f: if binPath, ok := versions[v.String()]; ok {) 2. VersionStepRunner.Run() sets
    tfDistribution := v.DefaultTFDistribution
    (atlantis/server/core/runtime/version_step_runner.go line 20 in 2b2fd1f), which is nil because it was never initialized in server.go (atlantis/server/server.go lines 725 to 728 in 2b2fd1f):
    Copy code
    VersionStepRunner: &runtime.VersionStepRunner{
    	TerraformExecutor: terraformClient,
    	DefaultTFVersion:  defaultTfVersion,
    },
    3. Calls RunCommandWithVersion(..., tfDistribution, ...) → prepCmd(..., d, ...) → ensureVersion(..., d, ...) where
    d
    is nil 4. The check at line 502 fails because the cache is empty (terraform_client.go lines 502 to 503 in 2b2fd1f: if binPath, ok := versions[v.String()]; ok { return binPath, nil) 5. Execution reaches line 509 of terraform_client.go (2b2fd1f):
    Copy code
    binFile := dist.BinName() + v.String()
    6. Calling BinName() on a nil dist interface causes the panic. Long story short -
    VersionStepRunner
    is missing
    DefaultTFDistribution
    field initialization in server/server.go (lines 725 to 728 in 2b2fd1f). ### Reproduction Steps Nil pointer panic: 1. Start Atlantis (any version including latest) 2. Create a PR with
    atlantis.yaml
    3. Run
    atlantis plan
    to populate the terraform binary cache 4. Clear the cache:
    rm -rf /home/atlantis/.atlantis/bin/*
    5. Run
    atlantis version
    command Result: Panics with nil pointer dereference at terraform_client.go:509 Expected: Prints terraform version in a comment This is not an edge case - it happens during normal operations, for example Atlantis restarts, container/pod restarts with persistent volume containing data about existing PRs. … runatlantis/atlantis
    • 1
    • 1
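The investigation above can be reduced to a small Go demonstration. This is a minimal sketch with illustrative types (not Atlantis's actual ones): calling a method through a nil interface value, like an uninitialized DefaultTFDistribution at dist.BinName(), is a nil pointer dereference panic.

```go
package main

import "fmt"

// Distribution mirrors, in spirit, the distribution interface described above.
type Distribution interface {
	BinName() string
}

type terraformDist struct{}

func (terraformDist) BinName() string { return "terraform" }

// binFile panics if d was never initialized, matching the reported crash.
func binFile(d Distribution, version string) string {
	return d.BinName() + version
}

func main() {
	fmt.Println(binFile(terraformDist{}, "1.9.0"))

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r) // the nil pointer panic the issue reports
		}
	}()
	var uninitialized Distribution // nil, like the missing struct field
	binFile(uninitialized, "1.9.0")
}
```

This is why the cache-hit path at line 502 masks the bug: the nil interface is only dereferenced on the cache-miss path.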
  • g

    GitHub

    11/21/2025, 12:51 PM
    #5888 404 Error when comment creation in Pull Request with Custom Workflow and Terragrunt Issue created by Leo-67 ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue When running an
    atlantis plan
in a Pull Request, the webhook is triggered correctly and the step runs, but the output is not posted as a Pull Request comment. Instead I get a 404 error in the logs. ### Reproduction Steps We use a GitHub Enterprise Server and Atlantis installed with Helm in an Azure Kubernetes Service cluster, using a GitHub App for the connection between the services. We added a Custom Workflow configuration in the `repos.yaml` file to handle Terragrunt, following the documentation. Everything looks ok except the plan output, which is not included in the Pull Request comment. [Image](https://private-user-images.githubusercontent.com/81637376/501341677-937dd19d-b78a-46f7-8178-f815f168baba.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NjM3Mjk3NzcsIm5iZiI6MTc2MzcyOTQ3NywicGF0aCI6Ii84MTYzNzM3Ni81MDEzNDE2NzctOTM3ZGQxOWQtYjc4YS00NmY3LTgxNzgtZjgxNWYxNjhiYWJhLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTExMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUxMTIxVDEyNTExN1omWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWFlOGQ4YmNkNzBkMGJmZmY4ZGFiOTVkMDI5YWQ1ZDgyN2RiZTVlOTgwMzBkYzA3NzQ4MmU2YWMzM2FmZTVkOGMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.rkTzTx_hynzCitIoll3L3e_STtkOhV2xjYS1ZO8Y2-k) I have made several tests with other base repositories, GitHub Apps, Atlantis versions, a basic configuration with Terraform instead, etc. My results concluded that the issue is only related to Terragrunt and the plan output in comments. Note that in the above screenshot Atlantis can create comments via the API for all the other result types. ### Logs Here are the logs from the Atlantis pod. Logs
    Copy code
"vcs/instrumented_client.go:116","msg":"Unable to create comment for command plan, error: POST https://<GITHUB_SERVER>/api/v3/repos/l<OWNER>/<REPO_NAME>/issues/61/comments: 404  []","json":{"repo":"liebherr/min_landing_zone_platform","pull":"61"},"stacktrace":"github.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).CreateComment\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:116\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).CreateComment\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:65\ngithub.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull\n\tgithub.com/runatlantis/atlantis/server/events/pull_updater.go:51\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:264\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:299\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:401"}
    
"events/pull_updater.go:52","msg":"unable to comment: POST https://<GITHUB_SERVER>/api/v3/repos/l<OWNER>/<REPO_NAME>/issues/61/comments: 404  []","json":{"repo":"liebherr/min_landing_zone_platform","pull":"61"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull\n\tgithub.com/runatlantis/atlantis/server/events/pull_updater.go:52\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:264\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:299\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:401"}
### Environment details If not already included, please provide the following: • Atlantis version: 0.36.0 • Deployment method: Helm in Azure Kubernetes Service • Other tested Atlantis versions: 0.35.1 and 0.34.0 Atlantis server-side config file:
    Copy code
    repoConfig: |
      repos:
        - id: <repo_id>
          branch: /.*/
          allowed_overrides: [workflow]
          allow_custom_workflows: true
          pre_workflow_hooks:
            - run: terragrunt-atlantis-config generate --output atlantis.yaml --workflow terragrunt --autoplan --automerge --parallel --create-workspace
      workflows:
        terragrunt:
          plan:
            steps:
              - env:
                  name: ARM_OIDC_TOKEN_FILE_PATH
                  command: 'echo $AZURE_FEDERATED_TOKEN_FILE'
              - env:
                  name: ARM_CLIENT_ID
                  command: 'echo $AZURE_CLIENT_ID'
              - run:
                  # Allow for targeted plans/applies as not supported for Terraform wrappers by default
                  command: terragrunt plan -input=false $(printf '%s' $COMMENT_ARGS | sed 's/,/ /g' | tr -d '\\') -no-color -out $PLANFILE
                  output: hide
              - run: |
                  terragrunt show $PLANFILE
          apply:
            steps:
              - env:
                  name: ARM_OIDC_TOKEN_FILE_PATH
                  command: 'echo $AZURE_FEDERATED_TOKEN_FILE'
              - env:
                  name: ARM_CLIENT_ID
                  command: 'echo $AZURE_CLIENT_ID'
              - run: terragrunt apply -input=false $PLANFILE
    Repo
    atlantis.yaml
    file: atlantis.yaml file is generated on the fly with terragrunt-atlantis-config Any other information you can provide about the environment/deployment (efs/nfs, aws/gcp, k8s/fargate, etc) ### Additional Context runatlantis/atlantis
    • 1
    • 1
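The `sed 's/,/ /g' | tr -d '\\'` pipeline in the workflow above is how Atlantis's COMMENT_ARGS variable is usually unpacked: extra comment arguments arrive comma-separated with each character backslash-escaped. A hedged Go sketch of the same transformation, with an illustrative sample input:

```go
package main

import (
	"fmt"
	"strings"
)

// expandCommentArgs mimics the shell pipeline: commas become spaces,
// backslash escapes are stripped.
func expandCommentArgs(raw string) string {
	spaced := strings.ReplaceAll(raw, ",", " ")
	return strings.ReplaceAll(spaced, `\`, "")
}

func main() {
	// illustrative COMMENT_ARGS value for `atlantis plan -- -var foo=bar`
	fmt.Println(expandCommentArgs(`\-\v\a\r,\f\o\o=\b\a\r`)) // -var foo=bar
}
```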
  • g

    GitHub

    11/23/2025, 2:15 AM
    #5674 Outdated function for previous plan messages [like in cursor bot] Issue created by celeronsx ### Community Note • Please vote on this issue by adding a 👍 reaction to help maintainers prioritize this request. • Avoid “+1” comments without new information—they create noise. • If you’re interested in contributing, please leave a comment. --- • I’m willing to implement this feature (contributing guide) Describe the user story As an Atlantis user, I want my previous plan/apply comments to be marked as “outdated” when a new commit triggers another run—just like GitHub’s cursor pagination hides outdated comments—so my PR discussion stays focused and uncluttered. Describe the solution you’d like On each new run after a commit, Atlantis should call the GitHub API to mark existing Atlantis comments on the pull request as outdated. These comments will collapse under the “Show outdated” toggle, leaving only the latest plan/apply results visible by default. [Image](https://private-user-images.githubusercontent.com/169676865/465273924-64ef56fe-ce15-46a0-a591-7d72205a4f6d.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NjM4NjQ0MzIsIm5iZiI6MTc2Mzg2NDEzMiwicGF0aCI6Ii8xNjk2NzY4NjUvNDY1MjczOTI0LTY0ZWY1NmZlLWNlMTUtNDZhMC1hNTkxLTdkNzIyMDVhNGY2ZC5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUxMTIzJTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MTEyM1QwMjE1MzJaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lYjIzMTlhODQ5OGIxN2U4ZWRkNmViM2E0MmRkMzY5OGY4YWZlNjIyZmE4ZjNhZWI5OWUzZWE2OTI3NGE0MGI4JlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.tzPeT5SEdOQwqdG-FHsj0hBkNJHLys2HNPhSTS8-qWA) runatlantis/atlantis
    • 1
    • 1
  • g

    GitHub

    11/23/2025, 2:15 AM
    #773 Atlantis apply all after a failed apply; outputs Ran Apply for 0 projects Issue created by mlehner616 I have a repo that uses the default workspace but there are a number of different project folders. Atlantis version: 0.8.3 Terraform version: v0.12.8
    Copy code
    version: 3
    projects:
      - name: qa
        dir: qa_acct/qa_env
        terraform_version: v0.12.8
        autoplan:
          when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
          enabled: false
      - name: staging
        dir: prod_acct/staging_env
        terraform_version: v0.12.8
        autoplan:
          when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
          enabled: false
      - name: prod
        dir: prod_acct/prod_env
        terraform_version: v0.12.8
        autoplan:
          when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
          enabled: false
    Plans are generated for all three projects as normal after commenting exactly
    atlantis plan
. Immediately afterward, commenting
    atlantis apply
    attempts to apply all three environments as expected. In this case, there was an apply error due to an AWS IAM policy being misconfigured and the plans were not successfully applied. A commit was pushed to fix this issue and another
    atlantis apply
    was submitted. Note, there was not another
    atlantis plan
after the fix commit was pushed. Atlantis behaved as if it had forgotten about the failed plans and assumed they had been applied successfully when, in fact, they had not been. I believe the expected behavior should be to reject the apply, since new commits were made, and force another plan to be run, correct? The result was the following:
    Copy code
    Ran Apply for 0 projects:
    Copy code
    Automatically merging because all plans have been successfully applied.
    Copy code
    Locks and plans deleted for the projects and workspaces modified in this pull request:
    
    * dir: `prod_acct/prod_env` workspace: `default`
    * dir: `prod_acct/staging_env` workspace: `default`
    * dir: `qa_acct/qa_env` workspace: `default`
    runatlantis/atlantis
    • 1
    • 1
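The expected behavior described above amounts to a staleness guard on apply. A minimal sketch, with illustrative names and commit values (not Atlantis's actual logic):

```go
package main

import "fmt"

// requirePlanForHead rejects an apply when the PR head has moved past the
// commit the stored plan was taken at, forcing a fresh plan instead of
// silently applying nothing.
func requirePlanForHead(plannedCommit, headCommit string) error {
	if plannedCommit == "" {
		return fmt.Errorf("no plan found: run atlantis plan first")
	}
	if plannedCommit != headCommit {
		return fmt.Errorf("plan is stale (planned at %s, head is %s): run atlantis plan again", plannedCommit, headCommit)
	}
	return nil
}

func main() {
	// plans were taken at abc123, then the fix commit moved head to def456
	fmt.Println(requirePlanForHead("abc123", "def456"))
}
```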
  • g

    GitHub

    11/26/2025, 1:54 PM
    #5982 `terraform_distribution` availability version says `v0.25.0` instead of `v0.33.0` Issue created by mehanikm Line
runatlantis.io/docs/repo-level-atlantis-yaml.md:71
    says
    terraform_distribution: terraform # Available since v0.25.0
, which seems off; the real availability is
    v0.33.0
This made me think that my
    v0.31.0
could run it, but I spent quite some time debugging. runatlantis/atlantis
  • g

    GitHub

    11/27/2025, 7:56 PM
    #5984 Race condition on provider installation with parallel_plan/apply enabled (Text file busy) Issue created by uroja97 ### Description When enabling
    parallel_plan: true
    and
    parallel_apply: true
    in
    atlantis.yaml
    , we are experiencing concurrency issues with Terraform provider installation. Multiple parallel executions try to write/read to the same shared plugin cache directory simultaneously, resulting in
    text file busy
    errors or checksum mismatches. It seems that even when using a shared plugin cache, concurrent
    terraform init
    or
    terraform plan
operations conflict when accessing the provider binaries. ### Steps to Reproduce 1. Enable parallel execution in `atlantis.yaml`:
    Copy code
    parallel_plan: true
    parallel_apply: true
    2. Configure a shared plugin cache (e.g., via
    TF_PLUGIN_CACHE_DIR
    env var or
    .terraformrc
    ). 3. Trigger a PR that runs multiple Terraform projects simultaneously (e.g., 5-10 projects) using the same providers. ### Logs
    Copy code
    │ Error: Failed to install provider
    │ 
    │ Error while installing hashicorp/azuread v3.7.0: open
    │ /atlantis-data/plugin-cache/registry.terraform.io/hashicorp/azuread/3.7.0/linux_amd64/terraform-provider-azuread_v3.7.0_x5:
    │ text file busy
    And sometimes checksum errors:
    Copy code
    │ Error: Required plugins are not installed
    │ 
    │ The installed provider plugins are not consistent with the packages
    │ selected in the dependency lock file:
│   - registry.terraform.io/hashicorp/azurerm: the cached package for registry.terraform.io/hashicorp/azurerm 4.54.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file
    ### Environment details • Atlantis version: v0.37.1 • Terraform version: v1.13.5 • Atlantis server side config: •
    TF_PLUGIN_CACHE_DIR
    is set to a shared directory. ### Workaround attempted We had to implement a workaround in our
    atlantis.yaml
    to serialize the
    init
phase and force a local download of providers (bypassing the cache) to avoid conflicts:
    Copy code
    workflows:
      default:
        plan:
          steps:
            # Use flock to serialize init and disable cache to avoid symlink conflicts
            - run: flock /tmp/terraform_init.lock bash -c "rm -rf .terraform/providers && env -u TF_PLUGIN_CACHE_DIR TF_CLI_CONFIG_FILE=/dev/null terraform init -upgrade" 
            - plan
    ### Proposed Solution / Feature Request It would be great if Atlantis could handle the locking mechanism for the provider cache internally when parallel mode is enabled, or provide a native way to serialize the
    init
    step while keeping
    plan/apply
    parallel. runatlantis/atlantis
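The proposed native locking could be sketched as a per-cache-directory mutex held only during init, leaving plan/apply parallel. Names and structure below are illustrative, not Atlantis's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu        sync.Mutex
	initLocks = map[string]*sync.Mutex{}
)

// lockFor returns a process-wide mutex dedicated to one plugin-cache directory.
func lockFor(cacheDir string) *sync.Mutex {
	mu.Lock()
	defer mu.Unlock()
	if initLocks[cacheDir] == nil {
		initLocks[cacheDir] = &sync.Mutex{}
	}
	return initLocks[cacheDir]
}

// runInit holds the cache lock only for the init phase.
func runInit(cacheDir, project string) {
	l := lockFor(cacheDir)
	l.Lock()
	defer l.Unlock()
	// terraform init would run here; only this section is serialized
	fmt.Println("init done for", project)
}

func main() {
	var wg sync.WaitGroup
	for _, p := range []string{"proj-a", "proj-b", "proj-c"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			runInit("/atlantis-data/plugin-cache", p)
		}(p)
	}
	wg.Wait()
}
```

This mirrors the user's flock workaround, but in-process and scoped per cache directory instead of one global file lock.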
  • g

    GitHub

    11/28/2025, 7:01 AM
    #5985 Support `CI` and `ATLANTIS` native environment variables Issue created by ponkio-o ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- • I'd be willing to implement this feature (contributing guide) Describe the user story When executing tools or scripts in Custom Workflow, there are situations where you may want to confirm that they are “running in Atlantis.” For example, other CI services provide environment variables such as the following, which can be used to determine the execution environment. • CircleCI (docs) •
    CI
    /
    CIRCLECI
    • GitHub Actions (docs) •
    CI
    /
    GITHUB_ACTION
    • Drone (docs) •
    CI
    /
    DRONE
    However, it does not exist in Atlantis. https://www.runatlantis.io/docs/custom-workflows#native-environment-variables The github-comment tool identifies the execution environment based on these environment variables. https://suzuki-shunsuke.github.io/github-comment/complement Describe the solution you'd like This can be resolved by providing the environment variables
    CI=true
    and
    ATLANTIS=true
    as Native Environment Variables. https://www.runatlantis.io/docs/custom-workflows#native-environment-variables Describe the drawbacks of your solution Currently, we are using
    ATLANTIS_TERRAFORM_VERSION
to detect Atlantis, but I don't think it's very appropriate, because this variable is intended to store the Terraform version. Describe alternatives you've considered related issue: suzuki-shunsuke/go-ci-env#583 (comment) runatlantis/atlantis
  • g

    GitHub

    12/02/2025, 2:50 PM
    #5993 Plan fails if previous commit failed policy checks Issue created by nightmarlin-wise ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue If the previous commit failed policy checks, auto-plan fails with the message
    Copy code
    **Ran Plan for dir**: aws/playground/policy-brick workspace: default
    
    **Plan Failed**: All policies must pass for project before running plan.
    ### Reproduction Steps • Set up an atlantis instance with at least one element of
    repos.yaml#/policies/policy_sets
defined and with auto-plan enabled • Open a PR that fails the policy check • Wait for the policy checks to fail, then push a new commit • Observe that atlantis produces the above error
    I suspect this will also fail if auto-plan is disabled and a manual
    atlantis plan
is run; I will see if I can verify this.
    A subsequent
    atlantis plan
    or push that triggers auto-plan is successfully planned
### Logs Logs // policy check error on first commit {"level":"info","caller":"events/events_controller.go:559","msg":"Handling GitHub Pull Request 'opened' event","json":{"gh-request-id":"X-Github-Delivery=REDACTED","repo":"transferwise/repo-name","pull":"216"}} ["omitted... plan & init runs as normal"] {"level":"error","caller":"events/instrumented_project_command_runner.go:84","msg":"Failure running policy_check operation: Some policy sets did not pass.","json":{"repo":"transferwise/repo-name","pull":"216"},"stacktrace":"github.com/runatlantis/atlantis/server/events.RunAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:84\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandRunner).PolicyCheck\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:42\ngithub.com/runatlantis/atlantis/server/events.runProjectCmdsParallel.func1\n\tgithub.com/runatlantis/atlantis/server/events/project_command_pool_executor.go:29"} // plan error {"level":"info","caller":"events/events_controller.go:559","msg":"Handling GitHub Pull Request 'updated' event","json":{"gh-request-id":"X-Github-Delivery=REDACTED","repo":"transferwise/repo-name","pull":"216"}} ["omitted... 
atlantis pulls latest version, discovers updated file & sets up commands - but no policy checks are actually run"] {"level":"debug","caller":"events/plan_command_runner.go:129","msg":"deleting previous plans and locks","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"debug","caller":"events/project_command_context_builder.go:200","msg":"Building project command context for policy_check","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"debug","caller":"events/project_command_context_builder.go:98","msg":"Building project command context for plan","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"info","caller":"vcs/github_client.go:940","msg":"Updating GitHub Check status for 'atlantis/plan: aws/playground/policy-brick/default' to 'pending'","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"info","caller":"events/plan_command_runner.go:139","msg":"Running plans in parallel","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"debug","caller":"vcs/github_client.go:950","msg":"POST /repos/transferwise/repo-name/statuses/REF returned: 201","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"debug","caller":"events/working_dir.go:109","msg":"clone directory '/home/atlantis/.data/repos/transferwise/repo-name/216/default' already exists, checking if it's at the right commit","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"debug","caller":"events/project_command_runner.go:576","msg":"acquired lock for project","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"info","caller":"events/project_locker.go:86","msg":"Acquired lock with id 'transferwise/repo-name/aws/playground/policy-brick/default'","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"info","caller":"events/working_dir.go:117","msg":"repo is at correct commit \"REF\" so will not re-clone","json":{"repo":"transferwise/repo-name","pull":"216"}} 
{"level":"debug","caller":"events/working_dir.go:299","msg":"Comparing PR ref \"REF\" to local ref \"REF\"","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"info","caller":"vcs/github_client.go:940","msg":"Updating GitHub Check status for 'atlantis/plan: aws/playground/policy-brick/default' to 'failure'","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"info","caller":"events/plan_command_runner.go:146","msg":"deleting plans because there were errors and automerge requires all plans succeed","json":{"repo":"transferwise/repo-name","pull":"216"}} {"level":"error","caller":"events/instrumented_project_command_runner.go:84","msg":"Failure running plan operation: All policies must pass for project before running plan.","json":{"repo":"transferwise/repo-name","pull":"216"},"stacktrace":"github.com/runatlantis/atlantis/server/events.RunAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:84\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandRunner).Plan\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:38\ngithub.com/runatlantis/atlantis/server/events.runProjectCmdsParallel.func1\n\tgithub.com/runatlantis/atlantis/server/events/project_command_pool_executor.go:29"} Happy to provide additional logs upon request. ### Environment details If not already included, please provide the following: • Atlantis version: v0.37.1 • Deployment method: ecs Atlantis server-side config file:
    Copy code
    repos:
      - id: /.*/
        branch: /^(main|master)$/
        apply_requirements: [approved, mergeable]
        workflow: default
    policies:
      owners:
        teams:
          - my-team
      policy_sets:
        - name: aws
          path: policy # local path, ignored when --update is used
          source: local
    workflows:
      default:
        plan:
          steps:
            - init
            - plan
            - show
        apply:
          steps:
            - apply
        policy_check:
          steps:
            - show
            - policy_check:
                extra_args:
                  - "--update"
                  - "${opa_policy_url}"
                  - "-d"
                  - "./policy/data.json"
                  - "--namespace"
                  - "${namespace}"
    metrics:
      prometheus:
        endpoint: "/metrics"
    Additional features: • We have enabled parallel plan & apply • We have enabled auto-discovery & autoplan-modules • We have disabled the Terraform plugin cache • We have allowed atlantis to ignore failed
    atlantis/apply
    checks when checking if a PR is mergeable ### Additional Context I believe this was introduced by #5851 - which changed the behaviour to validate that policy checks are passing before running the
    plan
    command. Fixes here could be • remove the
    valid.PoliciesPassedCommandReq
    if present in
    ctx.PlanRequirements
    when passed to
    DefaultCommandRequirementHandler.ValidatePlanProject
    • this has the smallest scope, but if it's possible to run a
    policy_check
    before a
    plan
, it may cause a regression for that use case (I can't tell if that is the case) • not inject it at the t… runatlantis/atlantis
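The smallest-scope fix listed above can be sketched as filtering the policies-passed requirement out of the plan command's requirements before validation. The type and constant below only mirror valid.PoliciesPassedCommandReq in spirit; they are illustrative, not Atlantis's real definitions:

```go
package main

import "fmt"

type CommandReq string

const PoliciesPassedCommandReq CommandReq = "policies_passed"

// withoutPoliciesReq returns the requirements with the policies-passed
// entry removed, so a plan can run even after a failed policy check.
func withoutPoliciesReq(reqs []CommandReq) []CommandReq {
	out := make([]CommandReq, 0, len(reqs))
	for _, r := range reqs {
		if r != PoliciesPassedCommandReq {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	reqs := []CommandReq{"approved", PoliciesPassedCommandReq, "mergeable"}
	fmt.Println(withoutPoliciesReq(reqs)) // [approved mergeable]
}
```

As the report notes, whether this regresses a policy_check-before-plan setup depends on whether that ordering is actually possible.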
  • g

    GitHub

    12/02/2025, 8:11 PM
    #5996 DEFAULT_CONFTEST_VERSION should be available at runtime Issue created by nvanheuverzwijn ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue When starting atlantis version 0.36.0 (0.37.1 is affected as well),
    DEFAULT_CONFTEST_VERSION
    is not defined. This will always print the info log
    failed to get default conftest version. Will attempt request scoped lazy loads DEFAULT_CONFTEST_VERSION not set
    . Starting atlantis with default configuration should not log errors like this.
    DEFAULT_CONFTEST_VERSION
    should be available in runtime, especially because we use this environment variable to download
    conftest
    on build time. ### Reproduction Steps Executing this will print out the error log.
    Copy code
docker run -it ghcr.io/runatlantis/atlantis:v0.36.0 atlantis server --gh-user=test --gh-token=test --repo-allowlist=test
    ### Logs ### Environment details atlantis version: 0.36 and 0.37.1 ### Additional Context We should add
    DEFAULT_CONFTEST_VERSION
available in the stock container image. The source code uses this environment variable, so it should always be defined with a default value. The operator can explicitly unset the environment variable if they want to keep the current behavior. I think it's unreasonable to have to specify
    DEFAULT_CONFTEST_VERSION
    , especially since we download conftest at build time using this variable. runatlantis/atlantis
  • g

    GitHub

    12/03/2025, 8:18 AM
    #5998 terragrunt runner pool feature breaks `run --all` workflows Issue created by bronto-rikstv Terragrunt's new runner pool feature, which replaces the previous group-based scheduling, breaks existing workflows that rely on
    run --all
    , and may impact other workflows as well. Runner pool was made generally available in v0.89.0. Why it breaks When discovering units to run, terragrunt ignores hidden directories, not only under the working directory but also above it. Atlantis jobs happen to run inside
    ~/.atlantis
    , which triggers the issue. I attempted to work around the limitation using TG_QUEUE_INCLUDE_DIR, but it didn’t behave consistently. In the end, the only reliable fix was to change the Atlantis data directory from
    ~/.atlantis
    to a non-hidden one (in our case
    ~/atlantis-data
    ). Not sure if it should be considered a bug in Atlantis, but definitely something that Atlantis users and developers should be aware of. runatlantis/atlantis
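The discovery rule described above can be illustrated with a small path check (illustrative, not terragrunt's actual code): any path component starting with "." marks the path hidden, so units under ~/.atlantis are skipped even though the working directory itself is visible.

```go
package main

import (
	"fmt"
	"strings"
)

// hasHiddenComponent reports whether any directory component of path
// starts with a dot.
func hasHiddenComponent(path string) bool {
	for _, part := range strings.Split(path, "/") {
		if len(part) > 1 && strings.HasPrefix(part, ".") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasHiddenComponent("/home/atlantis/.atlantis/repos/org/repo"))     // true
	fmt.Println(hasHiddenComponent("/home/atlantis/atlantis-data/repos/org/repo")) // false
}
```

This is why moving the data directory to ~/atlantis-data was the reliable fix: no component of the job path is hidden anymore.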
  • g

    GitHub

    12/03/2025, 4:00 PM
    #5999 `/api/plan` returning an error about post-merge verification failing Issue created by khung ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue It looks like PR #5895 may not work the same way as the old logic when using the API endpoints. When I try running the API endpoint
    /api/plan
    against the main branch, the API returns the following error:
    Copy code
    {
      "error": "post-merge verification failed: HEAD^2 != main"
    }
    ### Reproduction Steps Make a POST request to
https://hostname/api/plan
    with the following body:
    Copy code
    {
      "Repository": "myorg/myrepo",
      "Ref": "main",
      "Type": "Github",
      "Paths": [
        {
          "Directory": "myterraformconfig"
        }
      ]
    }
    ### Logs Logs
    Copy code
    {"level":"info","ts":"2025-12-03T15:31:09.818Z","caller":"events/working_dir.go:120","msg":"repo was already cloned but branch is not at correct commit, updating to \"main\"","json":{}}
{"level":"warn","ts":"2025-12-03T15:31:25.985Z","caller":"controllers/api_controller.go:391","msg":"{\"error\":\"post-merge verification failed: HEAD^2 != main\"}","json":{},"stacktrace":"github.com/runatlantis/atlantis/server/controllers.(*APIController).respond\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:391\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).apiReportError\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:87\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).Plan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:101\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2322\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.1/mux.go:212\ngithub.com/urfave/negroni/v3.(*Negroni).UseHandler.Wrap.func1\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:59\ngithub.com/urfave/negroni/v3.HandlerFunc.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:33\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:51\ngithub.com/runatlantis/atlantis/server.(*RequestLogger).ServeHTTP\n\tgithub.com/runatlantis/atlantis/server/middleware.go:70\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Recovery).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/recovery.go:210\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Negroni).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:111\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3340\nnet/http.(*conn).serve\n\tnet/http/server.go:2109"}
### Environment details • Atlantis version: v0.37.1 • Deployment method: helm • If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: n/a • Atlantis flags: see helm chart values below Atlantis helm chart values:
    Copy code
    # config file
    github:
      hostname: <removed>
    enableDiffMarkdownFormat: true
    ingress:
      enabled: false
    environment:
      ATLANTIS_CHECKOUT_STRATEGY: merge
      ATLANTIS_DEFAULT_TF_VERSION: v1.11.1
      ATLANTIS_WEB_BASIC_AUTH: "true"
      AWS_ENDPOINT_URL_S3: <removed>
      TF_CLI_CONFIG_FILE: /plugins/terraform.tfrc
    loadEnvFromSecrets:
      - <removed>
    initConfig:
      enabled: true
      sharedDir: /plugins
    atlantisUrl: <removed>
    orgAllowlist: <removed>
    Atlantis server-side config file:
    Copy code
    repos:
      - id: /.*/
        allowed_overrides: [apply_requirements]
    workflows:
      default:
        plan:
          steps:
            - run: terraform fmt -check=true -diff=true -write=false
            - init
            - plan
        apply:
          steps:
            - apply
            - run: inventory-update.sh
    Repo
    atlantis.yaml
file:
    Copy code
    version: 3
    projects:
      - dir: ./myterraformconfig
    ### Additional Context The error message is part of the changes in PR #5895. runatlantis/atlantis
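The failure mode can be sketched from the error message above: after a merge checkout, HEAD^2 is expected to point at the PR branch, but an /api/plan call against "main" creates no merge commit, so there is no second parent to compare. Function and values below are illustrative, not the actual code from PR #5895:

```go
package main

import "fmt"

// verifyPostMerge mimics the described check: the merge commit's second
// parent must match the ref being planned.
func verifyPostMerge(headSecondParent, ref string) error {
	if headSecondParent != ref {
		return fmt.Errorf("post-merge verification failed: HEAD^2 != %s", ref)
	}
	return nil
}

func main() {
	// API-triggered plan on the default branch: no second parent exists
	fmt.Println(verifyPostMerge("", "main"))
}
```

A fix would presumably skip (or adapt) this verification when the plan was requested for a branch ref rather than a PR merge.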
  • g

    GitHub

    12/03/2025, 5:28 PM
    #5884 Mergeability may be determined wrongfully on required workflows with multiple checks Issue created by henriklundstrom ### Community Note • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you! • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request. • If you are interested in working on this issue or have submitted a pull request, please leave a comment. --- ### Overview of the Issue With the
    gh-allow-mergeable-bypass-apply
flag enabled, Atlantis may incorrectly determine the mergeability of a pull request if a required workflow has multiple checks. Atlantis uses the outcome of the first check in the suite rather than the outcome of the suite as a whole. Thus, if the first check in the suite succeeds but the suite as a whole does not, for example because a second check is in progress or failed, Atlantis will consider the workflow a success and wrongfully proceed with applying. This may lead to apply executing when it should not be allowed to, and it may also lead to Atlantis attempting to merge the pull request after apply but failing to do so, since GitHub will not permit it. ### Reproduction Steps Configure a ruleset with a required workflow that has more than one check. Trigger Atlantis apply after the first check is successful but before the workflow as a whole has completed. Alternatively, trigger Atlantis apply when the first check is successful but the workflow as a whole completed with failure. ### Logs ### Environment details ### Additional Context runatlantis/atlantis
    • 1
    • 1
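The fix the report implies is an aggregation over the whole suite rather than a lookup of the first check. A minimal sketch with illustrative conclusion strings:

```go
package main

import "fmt"

// suiteConclusion treats a required workflow as successful only when every
// check in its suite succeeded; any in-progress or failed check overrides an
// early success.
func suiteConclusion(checkConclusions []string) string {
	for _, c := range checkConclusions {
		if c != "success" {
			return c
		}
	}
	return "success"
}

func main() {
	fmt.Println(suiteConclusion([]string{"success", "in_progress"})) // in_progress
	fmt.Println(suiteConclusion([]string{"success", "success"}))     // success
}
```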