runatlantis/atlantis (GitHub), 10/17/2025, 1:26 PM
team_authz:
  command: "/etc/atlantis/scripts/admin-auth.sh"
debug log:
{
"level": "debug",
"ts": "2025-10-17T12:50:35.389Z",
"caller": "runtime/external_team_allowlist_runner.go:53",
"msg": "error: exit status 2: running \"sh -c /etc/atlantis/scripts/admin-auth.sh plan MyOrg/team-a MyOrg/Team-B MyOrg/team-b MyOrg/Developers MyOrg/developers MyOrg/Developers - Product Public Repos MyOrg/developers-product-public-repos MyOrg/Product Tech Leads MyOrg/product-tech-leads MyOrg/Product Mobile App MyOrg/product-mobile-app MyOrg/Analytics Team MyOrg/analytics-team MyOrg/Product MyOrg/product MyOrg/Engineering MyOrg/engineering MyOrg/Product Management (PM) MyOrg/product-management-pm MyOrg/Administrator MyOrg/administrator\": \nsh: syntax error: unexpected \"(\"\n",
"json": {}
}
Atlantis comment on PR:
Error: User @user does not have permissions to execute 'plan' command.
This error should never actually be reached in normal operation. Removing the user from the team with parentheses in the name restores normal functionality.
### Reproduction Steps
Set up any team_authz script (even just echo "pass"; exit 0) and comment atlantis plan with a user in a team with special characters.
Anything else should be irrelevant; the error is really scoped to that specific sh -c command.
In our case the team is on GitHub, but #5314 seems to be the same problem on GitLab.
I will try to send a PR to fix this...
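For illustration, a minimal Go sketch of one possible fix direction (hypothetical, not the actual runner code or the eventual PR): pass the command and each team name as discrete argv entries instead of interpolating them into an sh -c string, so shell metacharacters like "(" in team names are never parsed by a shell.

package main

import (
	"fmt"
	"os/exec"
)

// runTeamAuthz invokes the configured team_authz script with the command and
// team names as separate arguments; no shell ever sees the team names.
// Hypothetical sketch, not the Atlantis implementation.
func runTeamAuthz(script, command string, teams []string) error {
	args := append([]string{command}, teams...)
	out, err := exec.Command(script, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("team authz script failed: %s: %w", out, err)
	}
	return nil
}

func main() {
	// A team display name with parentheses passes through safely.
	teams := []string{"MyOrg/team-a", "MyOrg/Product Management (PM)", "MyOrg/product-management-pm"}
	if err := runTeamAuthz("/etc/atlantis/scripts/admin-auth.sh", "plan", teams); err != nil {
		fmt.Println(err)
	}
}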
runatlantis/atlantis (GitHub), 10/20/2025, 7:42 PM
Proposal: replace the ExecutorService with a ScheduleManager singleton that uses the gocron Scheduler. Jobs would be registered with the ScheduleManager but remain fully decoupled from it. The gocron package provides:
• Cron job support
• A variety of job types, including CronJob
• Built-in job queues, a max-concurrent-jobs limit, and other features useful for scheduling server-side tasks (see the gocron examples)
I did some drafting at: ramonvermeulen/atlantis@main...f/refactor-scheduler to illustrate the direction (note: this is an early proof of concept, and far from a full implementation).
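To make the direction concrete, here is a minimal sketch using gocron v2 (the interval and the rotateGitHubToken task are made up for illustration; this is not the draft's code):

package main

import (
	"fmt"
	"time"

	"github.com/go-co-op/gocron/v2"
)

// The task function knows nothing about gocron, illustrating the decoupling.
func rotateGitHubToken() {
	fmt.Println("rotating GitHub token at", time.Now())
}

func main() {
	s, err := gocron.NewScheduler()
	if err != nil {
		panic(err)
	}
	// Register a recurring job with the scheduler.
	if _, err := s.NewJob(
		gocron.DurationJob(30*time.Second),
		gocron.NewTask(rotateGitHubToken),
	); err != nil {
		panic(err)
	}
	s.Start()
	time.Sleep(2 * time.Minute) // let a few runs happen, then shut down
	_ = s.Shutdown()
}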
Describe the drawbacks of your solution
• Adds a new dependency: github.com/go-co-op/gocron (a well-maintained and widely used package, but still a new dependency)
• Requires refactoring ExecutorService into ScheduleManager, which will need thorough testing to ensure GitHub token rotation and stats publishing continue working reliably.
Describe alternatives you've considered
I looked at making the scheduler completely separate from the Atlantis server (multi-container setup, similar to Airflow), but this would require significant changes and doesn't align with Atlantis's single-server approach.
runatlantis/atlantis (GitHub), 10/22/2025, 8:58 PM
• An atlantis.yaml file is present that has a project defined.
• The files changed in this PR are not in any project directories defined in the atlantis.yaml.
From the above, I would expect the repo to not be cloned as the changes are to files outside of the defined project. However, if you look at the Atlantis data dir you will see that the fork has been cloned.
### Logs
Provide log files from the Atlantis server. Logs can be retrieved from the deployment, or from Atlantis comments by adding --debug, such as atlantis plan --debug.
Logs:
{"level":"debug","ts":"2023-10-23T21:42:10.920Z","caller":"server/middleware.go:45","msg":"POST /events – from 127.0.0.1:38102","json":{}}
{"level":"debug","ts":"2023-10-23T21:42:10.920Z","caller":"events/events_controller.go:103","msg":"handling GitHub post","json":{}}
{"level":"debug","ts":"2023-10-23T21:42:10.920Z","caller":"events/events_controller.go:169","msg":"request valid","json":{"gh-request-id":"X-Github-Delivery=04fcd390-71ed-11ee-8bda-6f43a7c959e3"}}
{"level":"debug","ts":"2023-10-23T21:42:10.921Z","caller":"events/events_controller.go:423","msg":"identified event as type \"updated\"","json":{"gh-request-id":"X-Github-Delivery=04fcd390-71ed-11ee-8bda-6f43a7c959e3"}}
{"level":"debug","ts":"2023-10-23T21:42:10.921Z","caller":"server/middleware.go:72","msg":"POST /events – respond HTTP 200","json":{}}
{"level":"debug","ts":"2023-10-23T21:42:10.921Z","caller":"vcs/github_client.go:143","msg":"[attempt 1] GET /repos/<redacted_base_repo>/pulls/<pull_number>/files","json":{}}
{"level":"debug","ts":"2023-10-23T21:42:11.062Z","caller":"metrics/debug.go:52","msg":"timer","json":{"name":"atlantis.github.get_modified_files.execution_time","value":0.140859106,"tags":{"base_repo":"<redacted_base_repo>","pr_number":"<pull_number>"},"type":"timer"}}
{"level":"debug","ts":"2023-10-23T21:42:11.062Z","caller":"events/project_command_builder.go:290","msg":"1 files were modified in this pull request","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"debug","ts":"2023-10-23T21:42:11.156Z","caller":"metrics/debug.go:42","msg":"counter","json":{"name":"atlantis.github.get_modified_files.execution_success","value":1,"tags":{"base_repo":"<redacted_base_repo>","pr_number":"<pull_number>"},"type":"counter"}}
{"level":"debug","ts":"2023-10-23T21:42:11.156Z","caller":"metrics/debug.go:42","msg":"counter","json":{"name":"atlantis.github_event.pr_synchronize.success_200","value":1,"tags":{"base_repo":"<redacted_base_repo>","pr_number":"<pull_number>"},"type":"counter"}}
{"level":"debug","ts":"2023-10-23T21:42:11.160Z","caller":"events/project_command_builder.go:332","msg":"got workspace lock","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"info","ts":"2023-10-23T21:42:11.161Z","caller":"events/working_dir.go:230","msg":"creating dir \"/dir/.atlantis/repos/<redacted_base_repo>/<pull_number>/default\"","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/working_dir.go:262","msg":"ran: git clone --depth=1 --branch branch_name --single-branch <redacted.git> /dir/.atlantis/repos/<redacted_base_repo>/<pull_number>/default. Output: Cloning into '/root/.atlantis/repos/<redacted_base_repo>/<pull_number>/default'...","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"info","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_command_builder.go:357","msg":"successfully parsed path/to/atlantis.yaml file","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_command_builder.go:364","msg":"moduleInfo for /root/.atlantis/repos/<redacted_base_repo>/<pull_number>/default (matching \"\") = map[]","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_finder.go:185","msg":"found downstream projects for \"some_file.py\": []","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"debug","ts":"2023-10-23T21:42:14.747Z","caller":"events/project_finder.go:192","msg":"checking if project at dir \"environments/env_a\" workspace \"default\" was modified","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"info","ts":"2023-10-23T21:42:14.748Z","caller":"events/project_command_builder.go:371","msg":"0 projects are to be planned based on their when_modified config","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"debug","ts":"2023-10-23T21:42:14.748Z","caller":"metrics/debug.go:52","msg":"timer","json":{"name":"atlantis.builder.execution_time","value":3.826454988,"tags":{},"type":"timer"}}
{"level":"info","ts":"2023-10-23T21:42:14.748Z","caller":"events/plan_command_runner.go:97","msg":"determined there was no project to run plan in","json":{"repo":"<redacted_base_repo>","pull":"<pull_number>"}}
{"level":"debug","ts":"2023-10-23T21:42:14.748Z","caller":"metrics/debug.go:52","msg":"timer","json":{"name":"atlantis.cmd.autoplan.execution_time","value":3.826552526,"tags":{},"type":"timer"}}
{"level":"debug","ts":"2023-10-23T21:42:15.156Z","caller":"metrics/debug.go:42","msg":"counter","json":{"name":"atlantis.builder.execution_success","value":1,"tags":{},"type":"counter"}}
### Environment details
If not already included, please provide the following:
• Atlantis version: atlantis 0.24.3 (commit: 5b8ddc7) (build date: 2023-06-20T22:05:19Z)
• Deployment method: GCE
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: No
• Atlantis flags:
Atlantis server-side config file:
repos:
- id: /github\.com\/(.*?)\/repo_name/
  branch: /master/
  repo_config_file: path/to/atlantis.yaml
  plan_requirements: []
  apply_requirements: [approved, mergeable, undiverged]
  import_requirements: [approved, mergeable, undiverged]
Repo atlantis.yaml file:
version: 3
projects:
- name: env_a
  dir: environments/env_a
  autoplan:
    when_modified: ["*.tf", "../modules/host/*.tf", ".terraform.lock.hcl"]
    enabled: true
Any other information you can provide about the environment/deployment (efs/nfs, aws/gcp, k8s/fargate, etc)
Additional env vars:
export ATLANTIS_ALLOW_FORK_PRS=true
export ATLANTIS_RESTRICT_FILE_LIST=true
export ATLANTIS_SILENCE_NO_PROJECTS=true
export ATLANTIS_SILENCE_VCS_STATUS_NO_PLANS=true
export ATLANTIS_SKIP_CLONE_NO_CHANGES=true
export ATLANTIS_DISABLE_AUTOPLAN=false
### Additional Context
I believe this is caused by the hasRepoCfg check always failing below. That code is located at https://github.com/runatlantis/atlantis/blob/a542aa8f2015e67957c5fdb6ec994080561aa…
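Purely as an illustration of the suspected failure mode (hypothetical names and logic; the real check lives at the truncated link above): if the "does this repo have a repo config?" test compares against a hard-coded atlantis.yaml instead of the configured repo_config_file, it always fails for custom paths and the skip-clone shortcut never triggers.

package main

// hasRepoCfg is a hypothetical illustration, not the Atlantis source: a check
// that only recognizes the default filename will never match a custom
// repo_config_file such as "path/to/atlantis.yaml", so the repo gets cloned.
func hasRepoCfg(candidateFiles []string, repoCfgFile string) bool {
	for _, f := range candidateFiles {
		if f == "atlantis.yaml" { // suspected bug: should compare against repoCfgFile
			return true
		}
	}
	return false
}

func main() {
	println(hasRepoCfg([]string{"path/to/atlantis.yaml"}, "path/to/atlantis.yaml")) // false with the buggy check
}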
runatlantis/atlantis (GitHub), 10/29/2025, 3:55 PM
{"level":"warn","ts":"2025-10-24T14:47:07.298Z","caller":"cmd/server.go:1138","msg":"Bitbucket Cloud does not support webhook secrets. This could allow attackers to spoof requests from Bitbucket. Ensure you are allowing only Bitbucket IPs","json":{},"stacktrace":"github.com/runatlantis/atlantis/cmd.(*ServerCmd).securityWarnings\n\tgithub.com/runatlantis/atlantis/cmd/server.go:1138\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).run\n\tgithub.com/runatlantis/atlantis/cmd/server.go:826\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.func2\n\tgithub.com/runatlantis/atlantis/cmd/server.go:718\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.(*ServerCmd).withErrPrint.func5\n\tgithub.com/runatlantis/atlantis/cmd/server.go:1172\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1041\ngithub.com/runatlantis/atlantis/cmd.Execute\n\tgithub.com/runatlantis/atlantis/cmd/root.go:30\nmain.main\n\tgithub.com/runatlantis/atlantis/main.go:66\nruntime.main\n\truntime/proc.go:272"}
{"level":"info","ts":"2025-10-24T14:47:07.299Z","caller":"server/server.go:319","msg":"Supported VCS Hosts%!(EXTRA string=hosts, []models.VCSHostType=[BitbucketCloud])","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.378Z","caller":"server/server.go:472","msg":"Utilizing BoltDB","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.382Z","caller":"policy/conftest_client.go:167","msg":"failed to get default conftest version. Will attempt request scoped lazy loads DEFAULT_CONFTEST_VERSION not set","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.382Z","caller":"server/server.go:1032","msg":"Atlantis started - listening on port 4141","json":{}}
{"level":"info","ts":"2025-10-24T14:47:07.384Z","caller":"scheduled/executor_service.go:51","msg":"Scheduled Executor Service started","json":{}}
{"level":"warn","ts":"2025-10-24T14:48:01.283Z","caller":"server/server.go:1047","msg":"Received interrupt. Waiting for in-progress operations to complete","json":{},"stacktrace":"github.com/runatlantis/atlantis/server.(*Server).Start\n\tgithub.com/runatlantis/atlantis/server/server.go:1047\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).run\n\tgithub.com/runatlantis/atlantis/cmd/server.go:842\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.func2\n\tgithub.com/runatlantis/atlantis/cmd/server.go:718\ngithub.com/runatlantis/atlantis/cmd.(*ServerCmd).Init.(*ServerCmd).withErrPrint.func5\n\tgithub.com/runatlantis/atlantis/cmd/server.go:1172\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:985\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1117\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/cobra@v1.8.1/command.go:1041\ngithub.com/runatlantis/atlantis/cmd.Execute\n\tgithub.com/runatlantis/atlantis/cmd/root.go:30\nmain.main\n\tgithub.com/runatlantis/atlantis/main.go:66\nruntime.main\n\truntime/proc.go:272"}
{"level":"info","ts":"2025-10-24T14:48:01.284Z","caller":"server/server.go:1074","msg":"All in-progress operations complete, shutting down","json":{}}
runatlantis/atlantis (GitHub), 10/29/2025, 4:45 PM
Proposal: add an atlantis action PR command, with parameters matching the upstream Terraform CLI syntax.
There are two scenarios this should be usable in:
• Performing an action (or running a plan) without any code change or state drift
This is the new upstream change to the Terraform lifecycle and the one that I most want to see Atlantis reflect.
I think one vector would be using an empty pull request (i.e. consisting of one empty commit) with commands in the comment to ask Atlantis to run a plan or perform an action in a specific directory within the repo. This could serve two purposes:
1. Request Atlantis run a plan on a given directory, without forcing any code change, in case of configuration drift
2. Request Atlantis to run a Terraform Action in a given directory
This gives us the double win of being able to keep track of these actions in a PR and therefore in Git even though it does not necessarily reflect code changes. Very devops.
• Performing an action in the context of a code change
Not dissimilar from the above, in this case there's a conventional PR and plan, and then Atlantis can be engaged to run actions - before or after applies - with the same flexibility as the above.
Lastly, and more speculatively as this needs some thought: When Atlantis runs an auto-merge on apply, it should still be possible to request additional actions post-apply (or ask for more plans, applies, etc.)
Describe the drawbacks of your solution
Still requires that the end-user interact with Atlantis via Git to do on-demand actions, but this PR lifecycle itself could be wrapped up by some automation.
Describe alternatives you've considered
The same workarounds most advanced Terraform users have used for ages: tainting and redeploying resources. Adding atlantis taint with the same empty-PR workflow could be useful as well, but since Actions are designed to make tainting less necessary, it might be wise to just skip that one.
As far as the Atlantis implementation is concerned: as mentioned above, adding to the web UI or implementing an API method to trigger Atlantis to do this would both be interesting, but likely orders of magnitude more work.
runatlantis/atlantis (GitHub), 10/31/2025, 4:53 AM
There is a potential race condition in the BoltDB.UnlockByPull() implementation, where locks are read in a View transaction and then deleted in separate Update transactions. This creates a window in which locks could be modified between the read and delete operations.
### Location
File: server/core/boltdb/boltdb.go
Lines: 257-284
Function: UnlockByPull(repoFullName string, pullNum int)
### Current Implementation
func (b *BoltDB) UnlockByPull(repoFullName string, pullNum int) ([]models.ProjectLock, error) {
	var locks []models.ProjectLock
	// ⚠️ View transaction: reads locks
	err := b.db.View(func(tx *bolt.Tx) error {
		c := tx.Bucket(b.locksBucketName).Cursor()
		for k, v := c.Seek([]byte(repoFullName)); k != nil && bytes.HasPrefix(k, []byte(repoFullName)); k, v = c.Next() {
			var lock models.ProjectLock
			if err := json.Unmarshal(v, &lock); err != nil {
				return errors.Wrapf(err, "deserializing lock at key %q", string(k))
			}
			if lock.Pull.Num == pullNum {
				locks = append(locks, lock)
			}
		}
		return nil
	})
	// ⚠️ RACE: Locks could be modified between View and Update
	for _, lock := range locks {
		if _, err = b.Unlock(lock.Project, lock.Workspace); err != nil {
			return locks, errors.Wrapf(err, "unlocking repo %s", lock.Project.RepoFullName)
		}
	}
	return locks, nil
}
### Problem
1. View transaction reads all matching locks
2. Separate Update transactions delete each lock individually
3. Race window: Between steps 1 and 2, locks could be:
• Modified by another operation
• Already deleted by another process
• Newly created (won't be deleted)
### Impact
Severity: Low to Medium
Likelihood: Low (locks are typically PR-scoped and this race is unlikely in practice)
Consequences:
• Potential to miss deleting a lock if it's modified between read and delete
• Could return outdated lock information
• Theoretical risk of orphaned locks
### Proposed Fix
Use a single Update transaction for both reading and deleting:
func (b *BoltDB) UnlockByPull(repoFullName string, pullNum int) ([]models.ProjectLock, error) {
	var locks []models.ProjectLock
	err := b.db.Update(func(tx *bolt.Tx) error { // Use Update, not View
		bucket := tx.Bucket(b.locksBucketName)
		c := bucket.Cursor()
		var keysToDelete [][]byte
		// Read and collect keys to delete
		for k, v := c.Seek([]byte(repoFullName)); k != nil && bytes.HasPrefix(k, []byte(repoFullName)); k, v = c.Next() {
			var lock models.ProjectLock
			if err := json.Unmarshal(v, &lock); err != nil {
				return errors.Wrapf(err, "deserializing lock at key %q", string(k))
			}
			if lock.Pull.Num == pullNum {
				locks = append(locks, lock)
				keysToDelete = append(keysToDelete, append([]byte(nil), k...)) // Copy key
			}
		}
		// Delete within same transaction (atomic)
		for _, key := range keysToDelete {
			if err := bucket.Delete(key); err != nil {
				return errors.Wrapf(err, "deleting lock at key %q", string(key))
			}
		}
		return nil
	})
	return locks, err
}
### Benefits of Fix
1. ✅ Atomic operation: Read and delete in single transaction
2. ✅ No race window: Locks can't change between read and delete
3. ✅ Consistent state: All locks for a PR deleted together
4. ✅ Better error handling: Single transaction failure point
### Testing Recommendations
1. Add concurrent test with goroutines trying to:
• Unlock same PR simultaneously
• Lock while unlocking is in progress
2. Use Go race detector: go test -race
3. Test PR close webhook with concurrent plan operations
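A minimal sketch of such a test (it assumes the single-Update fix above; newTestDB is a hypothetical helper, not part of the Atlantis test suite):

package boltdb

import (
	"sync"
	"testing"
)

// TestUnlockByPull_Concurrent races two UnlockByPull calls for the same PR.
// Run with `go test -race`. With the atomic fix, each lock is deleted exactly
// once: one goroutine may see the locks, the other an empty slice.
func TestUnlockByPull_Concurrent(t *testing.T) {
	b := newTestDB(t) // hypothetical helper returning a *BoltDB over a temp file

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if _, err := b.UnlockByPull("owner/repo", 1); err != nil {
				t.Error(err) // t.Error (not Fatal) is safe from a goroutine
			}
		}()
	}
	wg.Wait()
}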
### Related
This was discovered during comprehensive lock mechanism analysis. See related documentation PR for full context on locking architecture.
### Notes
• Similar pattern in Redis implementation should be reviewed
• Other View + Update patterns in codebase should be audited
• This is a theoretical race; no known production issues reported
runatlantis/atlantis (GitHub), 10/31/2025, 2:54 PM
When using --gh-allow-mergeable-bypass-apply, atlantis apply will incorrectly report that the PR is not approved on pull requests that are in a blocked state.
I was able to track down where the problem occurs. This error message (invalid repository name) comes from func (g *GithubClient) LookupRepoId(repo githubv4.String) (githubv4.Int, error) in file server/events/vcs/github_client.go.
The relevant code is this. Keep in mind that LookupRepoId will return an error if the repo parameter cannot be split.
repoSplit := strings.Split(string(repo), "/")
if len(repoSplit) != 2 {
	return githubv4.Int(0), fmt.Errorf("invalid repository name: %s", repo)
}
I won't go into the weeds here, but basically: when the flag --gh-allow-mergeable-bypass-apply is set and the pull request is in a blocked state due to failing checks, Atlantis will verify that every check is passing except for the atlantis apply checks.
Atlantis makes a GraphQL query to retrieve the pull request information (see below). Among that information, we want the "checkRun" object that represents how a workflow was run, successful or not.
gh api graphql -f query='query { repository(owner:"OWNER",name:"REPO") {pullRequest(number:69) {baseRef{rules(first:100){nodes{type,repositoryRuleset{enforcement},parameters{... on WorkflowsParameters{workflows{path, repositoryId}}}}}}commits(last:1){nodes{commit{statusCheckRollup{contexts(first: 100){nodes{__typename, ... on CheckRun{conclusion, name, checkSuite{conclusion,workflowRun{runNumber,file{repositoryName, path}}}}}}}}}}}} }
From this query we get a very big output, but the lines we are interested in are these:
"commits": {
"nodes": [
{
"commit": {
"statusCheckRollup": {
"contexts": {
"nodes": [
{
"__typename": "CheckRun",
"conclusion": "SUCCESS",
"name": "Analyze (actions)",
"checkSuite": {
"conclusion": "SUCCESS",
"workflowRun": {
"runNumber": 208,
"file": null
}
}
},
[...]
Notice how the Analyze (actions) check is structured: the workflowRun is not empty, but workflowRun.file is null. This check run comes from a global code analysis feature enabled in the organization settings, not from an individual workflow file.
In the code path, right before we call the LookupRepoId function, we do this:
if checkRun.CheckSuite.WorkflowRun == nil {
	continue
}
In our case, the WorkflowRun is not nil but WorkflowRun.File is nil. This results in Atlantis erroring because it tries to look up a repo with an empty repo name.
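A minimal sketch of the likely fix (an assumption on my part, not a merged patch): also skip check runs whose workflow file is nil, so LookupRepoId is never called with an empty name.

// Sketch: org-level check runs (e.g. global code analysis) have a
// WorkflowRun but no backing workflow file, so skip those too.
if checkRun.CheckSuite.WorkflowRun == nil || checkRun.CheckSuite.WorkflowRun.File == nil {
	continue
}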
### Reproduction Steps
1. Add global code analysis (from github settings) so that every repository will get automatically analysed with codeql.
2. Ensure that the flag --gh-allow-mergeable-bypass-apply is enabled.
3. Create a pull request.
4. Make sure one of the checks, not related to Atlantis, is failing.
5. Add an atlantis apply comment.
### Logs
{"level":"warn","ts":"2025-10-30T18:45:00.327Z","caller":"events/apply_command_runner.go:108","msg":"unable to get pull request status: fetching mergeability status for repo: ORG/REPO, and pull number: 44: getting pull request status: invalid repository name: . Continuing with mergeable and approved assumed false","json":{"repo":"ORG/REPO","pull":"44"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:108\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:427"}
### Environment details
### Additional Context
runatlantis/atlantis (GitHub), 10/31/2025, 5:46 PM
My server-side config doesn't seem to be working for me:
- id: github.com/xxx/infrastructure
  branch: /^(development|uat|rc)$/
  apply_requirements: []
  plan_requirements: []
  allow_custom_workflows: false
  allowed_overrides: []
  workflow: default
- id: github.com/xxx/infrastructure
  branch: /^(stg|main)$/
  apply_requirements: [approved]
  plan_requirements: []
  allow_custom_workflows: false
  allowed_overrides: []
  workflow: default
Can anyone advise how I can add apply_requirements to only specific branches of a repo? Maybe I am missing something, but it always just takes the latter entry and ignores the first completely.
Thanks in advance,
runatlantis/atlantis (GitHub), 11/06/2025, 10:00 AM
We pass --policy=path via extra_args. I want each policy_set to fetch its own policies, so they can have separate owners/approvers.
Describe the solution you'd like
Add an environment variable POLICYSETNAME (like POLICYCHECKPATH) that will provide the name of currently running policyset.
Describe the drawbacks of your solution
I don't see any
Describe alternatives you've considered
Probably we could use a single source of policy checks, with checks separated by namespaces; specifying a namespace in the policy_set may work effectively the same. But I guess it would largely duplicate the name field.
runatlantis/atlantis (GitHub), 11/11/2025, 3:57 AM
running git clone --depth=1 --branch test_v035 --single-branch https://atlantis%40acme.net:<redacted>@bitbucket.org/acme/atlantis-demo.git /home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default: Cloning into '/home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default'...
remote: You may not have access to this repository or it no longer exists in this workspace. If you think this repository exists and you have access, make sure you are authenticated.
fatal: Authentication failed for 'https://bitbucket.org/acme/atlantis-demo.git/'
: exit status 128
# Conversely, with username:
git clone https://atlantis-devops:API_TOKEN@bitbucket.org/acme/atlantis-demo.git
Cloning into 'atlantis-demo'...
remote: Enumerating objects: 171, done.
...
Resolving deltas: 100% (75/75), done.
# CURL API call with username:
curl -u "atlantis-devops:API_TOKEN" -H "Content-Type: application/json" -X POST -d '{"content": {"raw": "Test comment"}}' "https://api.bitbucket.org/2.0/repositories/org/repo/pullrequests/28/comments"
# Response:
{"error": {"message": "Unauthorized"}}
### Environment details
### Additional Context
Reference docs:
https://support.atlassian.com/bitbucket-cloud/docs/using-api-tokens/
https://support.atlassian.com/bitbucket-cloud/docs/using-app-passwords/
runatlantis/atlantis (GitHub), 11/11/2025, 8:18 PM
We could set terminationGracePeriodSeconds very high, but if the apply is long-running then all use of Atlantis would be blocked for that period. Apparently Terraform attempts a more graceful shutdown if it receives a SIGINT. It would be useful for Atlantis to have an option, like TerraformGracefulShutdownSeconds or so; after that period has passed while draining, Atlantis would send the SIGINT.
I can take a crack at a PR for this, but I wanted to check first if that approach seems reasonable to the maintainers.
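A minimal sketch of the idea (the option name and wiring are hypothetical, not proposed API): while draining, give a running apply a grace period, then send SIGINT so Terraform can shut down cleanly.

package shutdown

import (
	"os/exec"
	"syscall"
	"time"
)

// drainThenInterrupt waits up to grace for the terraform process to finish on
// its own; if it doesn't, it sends SIGINT, which terraform handles by
// attempting a graceful stop (Kubernetes' SIGKILL remains the last resort).
func drainThenInterrupt(cmd *exec.Cmd, done <-chan struct{}, grace time.Duration) {
	select {
	case <-done: // apply completed during the drain window
	case <-time.After(grace):
		_ = cmd.Process.Signal(syscall.SIGINT)
	}
}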
Describe the solution you'd like
Describe the drawbacks of your solution
Describe alternatives you've considered
runatlantis/atlantis (GitHub), 11/11/2025, 11:33 PM
The events package has become very large. Some data confirms it is far and away the largest package in the codebase:
atlantis % for j in $(for i in $(find . -type f -name '*.go'); do dirname $i; done | sort | uniq); do echo -n "$j "; cloc $j/*.go | grep '^Go' | awk '{print $5}'; done | sort -rnk2 | column -t
./server/events 25592
./server/events/vcs 6706
./server/events/mocks 5674
./server/core/runtime 4776
./server/core/config/raw 4658
./server/controllers/events 4048
./server/core/config/valid 2828
./cmd 2334
./server/core/config 2095
Having large packages is unavoidable, but "events" has become just a "kitchen sink", and the fact that it's 4x the next largest package is to me a code smell.
Describe the solution you'd like
I'd like to start pulling logic into subpackages of events, in the spirit of vcs and command. I don't want to do this "just for the sake" so would be looking for semantically related packages that have a comprehensible interface they expose to the rest of the code base.
Describe the drawbacks of your solution
Obviously a large refactor is a risk. Also moving large numbers of critical files around makes git histories difficult. I'm just afraid the problem is only going to get worse as events becomes more and more a center of gravity.
Describe alternatives you've considered
Code could be pulled out a different way, like to a sibling package? But "lowering" into a subpackage seems the more obvious approach.
runatlantis/atlantis (GitHub), 11/13/2025, 8:00 AM
I get the following after commenting atlantis apply in the PR:
Ran Apply for dir: my-project-dir workspace: default
Apply Failed: Pull request must be approved according to the project's approval rules before running apply.
I do have this PR approved by the codeowner:
[Screenshot: PR approved by the codeowner]
Current status checks:
[Screenshot: current status checks]
I don't understand what's happening here, please help, thanks.
### Reproduction Steps
### Logs
I can see error logs:
Unable to check pull mergeable status, error: getting pull request status: fetching rulesets, branch protections and status checks from GraphQL: Resource not accessible by integration
unable to get pull request status: fetching mergeability status for repo: xxx, and pull number: 62: getting pull request status: fetching rulesets, branch protections and status checks from GraphQL: Resource not accessible by integration. Continuing with mergeable and approved assumed false
I'm using a GitHub App for auth, with permissions like this:
[Screenshot: GitHub App permissions]
### Environment details
• Atlantis version: v0.37.1
• Atlantis flags: --repo-config=repos.yaml --gh-allow-mergeable-bypass-apply
Atlantis server-side config file:
repos:
- id: /.*/
  apply_requirements: [approved, mergeable]
  import_requirements: [approved, mergeable]
### Additional Context
atlantis/apply is added as a required status check, and the --gh-allow-mergeable-bypass-apply flag is enabled.
runatlantis/atlantis (GitHub), 11/14/2025, 2:39 PM
When enabling ATLANTIS_AUTOPLAN_MODULES and using OpenTofu with dynamic providers (which are currently not supported by Terraform), module dependencies cannot be loaded.
### Reproduction Steps
Make Atlantis run with ATLANTIS_AUTOPLAN_MODULES on a project that has a dynamic provider configuration, which is valid for OpenTofu, for example:
locals {
  projects = toset(["my-project-1", "my-project-2"])
}

provider "google" {
  for_each = local.projects
  alias    = "project"
  project  = each.key
}

data "google_service_account" "all" {
  for_each   = local.projects
  provider   = google.project[each.key]
  account_id = "my-service-account"
}
### Logs
Logs
error(s) loading project module dependencies: my-stack/provider.tf:52 - Invalid provider reference: Provider argument requires a provider name followed by an optional alias, like \"aws.foo\".","json":{"repo":"my-repo","pull":"24"},"stacktrace":"github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).getMergedProjectCfgs\n\tgithub.com/runatlantis/atlantis/server/events/project_command_builder.go:401\ngithub.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).buildAllCommandsByCfg\n\tgithub.com/runatlantis/atlantis/server/events/project_command_builder.go:532\ngithub.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).BuildAutoplanCommands\n\tgithub.com/runatlantis/atlantis/server/events/project_command_builder.go:257\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildAutoplanCommands.func1\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:29\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:71\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildAutoplanCommands\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:26\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).runAutoplan\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:94\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:319\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunAutoplanCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:251
### Environment details
• Atlantis version: v0.37.1
runatlantis/atlantis (GitHub), 11/16/2025, 6:24 AM
Consider DefaultProjectCommandRunner.Apply:
func (p *DefaultProjectCommandRunner) Apply(ctx command.ProjectContext) command.ProjectResult {
	applyOut, failure, err := p.doApply(ctx)
	return command.ProjectResult{
		Command:           command.Apply,
		Failure:           failure,
		Error:             err,
		ApplySuccess:      applyOut,
		RepoRelDir:        ctx.RepoRelDir,
		Workspace:         ctx.Workspace,
		ProjectName:       ctx.ProjectName,
		SilencePRComments: ctx.SilencePRComments,
	}
}
The runner is "inventing" the name of the command that it assumes called it. Higher up the call stack the command is known; then we "forget" it and sneak it back in here. This causes bugs like #5934, and in general doesn't make a lot of sense. In addition, the ctx.* content (which, again, is returned by all the analogous commands) is a code smell here: ctx is passed into this function, so the caller clearly already knows things like ProjectName or Workspace; why are we telling it?
The issue is that the type ProjectResult is used in many places, and here is doing double duty of summarizing what happened in a given run, as well as the output from a given command.
Describe the solution you'd like
These functions should return a pared-down ProjectCommandOutput, which the caller should then "decorate" with the additional information.
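A minimal sketch of that shape (hypothetical, simplified types; the real ProjectResult has many more fields):

package main

import "fmt"

// ProjectCommandOutput reports only what the runner did.
type ProjectCommandOutput struct {
	Failure string
	Error   error
	Output  string
}

// ProjectResult is the decorated summary the rest of the code consumes.
type ProjectResult struct {
	Command     string
	ProjectName string
	Workspace   string
	ProjectCommandOutput
}

func runApply(projectDir string) ProjectCommandOutput {
	// ... run terraform apply; the runner invents nothing about its caller.
	return ProjectCommandOutput{Output: "applied " + projectDir}
}

func main() {
	out := runApply("environments/env_a")
	// The caller, which already knows the command name and project context,
	// decorates the output instead of having the runner guess it.
	res := ProjectResult{
		Command:              "apply",
		ProjectName:          "env_a",
		Workspace:            "default",
		ProjectCommandOutput: out,
	}
	fmt.Printf("%+v\n", res)
}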
Describe the drawbacks of your solution
It's a refactor, so there's some risk there, but there's a lot of test coverage, so it should be OK.
Describe alternatives you've considered
I can't think of any
runatlantis/atlantis (GitHub), 11/18/2025, 9:43 AM
Automerge occasionally fails with Resource not accessible by integration errors after an atlantis apply. It's relatively rare, but you still notice this at scale in an organisation.
Describe the solution you'd like
When automerging is used on atlantis apply commands, have a mechanism to retry (e.g. up to 3 times) on some common/configurable errors. This can be opt-in.
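A minimal sketch of the opt-in retry (attempt count, backoff, and the transient-error test are assumptions, not proposed Atlantis API):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotAccessible = errors.New("Resource not accessible by integration")

// isTransient stands in for a common/configurable error match.
func isTransient(err error) bool {
	return errors.Is(err, errNotAccessible)
}

// mergeWithRetry retries automerge on errors that look transient, backing
// off between attempts; hard failures (e.g. a real 403) return immediately.
func mergeWithRetry(merge func() error, maxAttempts int, backoff time.Duration) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = merge(); err == nil {
			return nil
		}
		if !isTransient(err) {
			return err
		}
		time.Sleep(backoff)
	}
	return fmt.Errorf("automerge failed after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := mergeWithRetry(func() error { return errNotAccessible }, 3, 5*time.Second)
	fmt.Println(err)
}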
Describe the drawbacks of your solution
• Not all errors are transient: sometimes an error like a 403 really isn't transient, and the PR truly cannot be merged. These cases can result in N retried auto-merge requests.
• Not everyone wants retries on their Atlantis deployments.
Describe alternatives you've considered
We can also implement an automatic GitHub Actions workflow that runs on PR comments from Atlantis when it comments with an error like this. This is definitely useful, but it isn't very generic, and is a patch for a case that can often be solved by simply retrying once after about 5 seconds.
runatlantis/atlantis (GitHub), 11/20/2025, 1:13 AM
Running atlantis version on a PR threw this panic:
runtime error: invalid memory address or nil pointer dereference
runtime/panic.go:262 (0x472a98)
runtime/signal_unix.go:917 (0x472a68)
github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:509 (0xec9497)
github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:422 (0xec894d)
github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:397 (0xec8686)
github.com/runatlantis/atlantis/server/core/terraform/tfclient/terraform_client.go:370 (0xec7d76)
github.com/runatlantis/atlantis/server/core/runtime/version_step_runner.go:30 (0x1095193)
github.com/runatlantis/atlantis/server/events/project_command_runner.go:813 (0x118ad62)
github.com/runatlantis/atlantis/server/events/project_command_runner.go:699 (0x1188e95)
github.com/runatlantis/atlantis/server/events/project_command_runner.go:304 (0x11842dd)
github.com/runatlantis/atlantis/server/events/project_command_pool_executor.go:48 (0x1182bbb)
github.com/runatlantis/atlantis/server/events/version_command_runner.go:52 (0x11912b0)
github.com/runatlantis/atlantis/server/events/command_runner.go:401 (0x115478c)
runtime/asm_amd64.s:1700 (0x478960)
During investigation I found two distinct bugs with the same source:
1. Nil pointer panic when running version command after Atlantis restart or cache clear
2. Version command fails on fresh Atlantis instances with existing PRs
From my investigation, when the terraform binary cache is cleared or Atlantis restarts:
1. The versions map has no cached terraform binaries (server/core/terraform/tfclient/terraform_client.go, line 502 at 2b2fd1f):
	if binPath, ok := versions[v.String()]; ok {
2. VersionStepRunner.Run() sets tfDistribution := v.DefaultTFDistribution (server/core/runtime/version_step_runner.go, line 20 at 2b2fd1f):
	tfDistribution := v.DefaultTFDistribution
which is nil because it was never initialized in server.go (server/server.go, lines 725-728 at 2b2fd1f):
	VersionStepRunner: &runtime.VersionStepRunner{
		TerraformExecutor: terraformClient,
		DefaultTFVersion:  defaultTfVersion,
	},
3. Run calls RunCommandWithVersion(..., tfDistribution, ...) → prepCmd(..., d, ...) → ensureVersion(..., d, ...), where d is nil.
4. The check at lines 502-503 fails because the cache is empty:
	if binPath, ok := versions[v.String()]; ok {
		return binPath, nil
5. Execution reaches line 509:
	binFile := dist.BinName() + v.String()
6. Calling BinName() on the nil dist interface causes the panic.
Long story short: VersionStepRunner is missing the DefaultTFDistribution field initialization in server/server.go (lines 725-728 at 2b2fd1f):
	VersionStepRunner: &runtime.VersionStepRunner{
		TerraformExecutor: terraformClient,
		DefaultTFVersion:  defaultTfVersion,
	},
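For illustration, a self-contained demonstration of the failure mode (toy types, not the Atlantis source): calling a method through a nil interface value panics exactly the way the never-initialized DefaultTFDistribution does.

package main

import "fmt"

// Distribution mimics the role of the terraform distribution interface.
type Distribution interface{ BinName() string }

func main() {
	var dist Distribution // nil, like the uninitialized DefaultTFDistribution
	defer func() { fmt.Println("recovered:", recover()) }()
	_ = dist.BinName() + "1.5.7" // panics: invalid memory address or nil pointer dereference
}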
### Reproduction Steps
Nil pointer panic:
1. Start Atlantis (any version including latest)
2. Create a PR with atlantis.yaml
3. Run atlantis plan to populate the terraform binary cache
4. Clear the cache: rm -rf /home/atlantis/.atlantis/bin/*
5. Run atlantis version command
Result: Panics with nil pointer dereference at terraform_client.go:509
Expected: Prints terraform version in a comment
This is not an edge case - it happens during normal operations, for example after Atlantis restarts, or container/pod restarts with a persistent volume containing data about existing PRs.
…
runatlantis/atlantis (GitHub), 11/21/2025, 12:51 PM
When I comment atlantis plan in a Pull Request, the webhook is triggered and the step runs, but the output is not posted as a Pull Request comment. Instead I get a 404 error in the logs.
### Reproduction Steps
We use GitHub Enterprise Server and Atlantis installed with Helm in an Azure Kubernetes Service cluster, with a GitHub App for the connection between the services. We added a Custom Workflow configuration in the repos.yaml file to handle Terragrunt, following the documentation. Everything looks OK except the plan output, which is not included in the Pull Request comment.
[Screenshot: Pull Request comment without the plan output]
I have run several tests with other base repositories, GitHub Apps, Atlantis versions, a basic configuration with Terraform instead, etc. My conclusion is that the issue is only related to Terragrunt and the plan output in comments.
Note that, as in the above screenshot, Atlantis can create comments via the API for all the other result types.
### Logs
Here you can find the logs I get from the Atlantis pod.
Logs
"vcs/instrumented_client.go:116","msg":"Unable to create comment for command plan, error: POST https://<GITHUB_SERVER>/api/v3/repos/l<OWNER>/<REPO_NAME>/issues/61/comments: 404 []","json":{"repo":"liebherr/min_landing_zone_platform","pull":"61"},"stacktrace":"<http://github.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).CreateComment|github.com/runatlantis/atlantis/server/events/vcs.(*InstrumentedClient).CreateComment>\n\tgithub.com/runatlantis/atlantis/server/events/vcs/instrumented_client.go:116\ngithub.com/runatlantis/atlantis/server/events/vcs.(*ClientProxy).CreateComment\n\tgithub.com/runatlantis/atlantis/server/events/vcs/proxy.go:65\ngithub.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull\n\tgithub.com/runatlantis/atlantis/server/events/pull_updater.go:51\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:264\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:299\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:401"}
"events/pull_updater.go:52","msg":"unable to comment: POST https://<GITHUB_SERVER>/api/v3/repos/l<OWNER>/<REPO_NAME>/issues/61/comments: 404 []","json":{"repo":"liebherr/min_landing_zone_platform","pull":"61"},"stacktrace":"<http://github.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull|github.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull>\n\tgithub.com/runatlantis/atlantis/server/events/pull_updater.go:52\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:264\ngithub.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/plan_command_runner.go:299\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:401"}
### Environment details
If not already included, please provide the following:
• Atlantis version: 0.36.0
• Deployment method: Helm in Azure Kubernetes Service
• Other tested Atlantis version: 0.35.1 and 0.34.0
Atlantis server-side config file:
repoConfig: |
  repos:
  - id: <repo_id>
    branch: /.*/
    allowed_overrides: [workflow]
    allow_custom_workflows: true
    pre_workflow_hooks:
    - run: terragrunt-atlantis-config generate --output atlantis.yaml --workflow terragrunt --autoplan --automerge --parallel --create-workspace
  workflows:
    terragrunt:
      plan:
        steps:
        - env:
            name: ARM_OIDC_TOKEN_FILE_PATH
            command: 'echo $AZURE_FEDERATED_TOKEN_FILE'
        - env:
            name: ARM_CLIENT_ID
            command: 'echo $AZURE_CLIENT_ID'
        - run:
            # Allow for targeted plans/applies as not supported for Terraform wrappers by default
            command: terragrunt plan -input=false $(printf '%s' $COMMENT_ARGS | sed 's/,/ /g' | tr -d '\\') -no-color -out $PLANFILE
            output: hide
        - run: |
            terragrunt show $PLANFILE
      apply:
        steps:
        - env:
            name: ARM_OIDC_TOKEN_FILE_PATH
            command: 'echo $AZURE_FEDERATED_TOKEN_FILE'
        - env:
            name: ARM_CLIENT_ID
            command: 'echo $AZURE_CLIENT_ID'
        - run: terragrunt apply -input=false $PLANFILE
Repo atlantis.yaml file:
atlantis.yaml file is generated on the fly with terragrunt-atlantis-config
Any other information you can provide about the environment/deployment (efs/nfs, aws/gcp, k8s/fargate, etc)
### Additional Context
runatlantis/atlantis (GitHub), 11/23/2025, 2:15 AM
version: 3
projects:
- name: qa
  dir: qa_acct/qa_env
  terraform_version: v0.12.8
  autoplan:
    when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
    enabled: false
- name: staging
  dir: prod_acct/staging_env
  terraform_version: v0.12.8
  autoplan:
    when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
    enabled: false
- name: prod
  dir: prod_acct/prod_env
  terraform_version: v0.12.8
  autoplan:
    when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
    enabled: false
Plans are generated for all three projects as normal after commenting exactly atlantis plan.
Immediately afterward, commenting atlantis apply attempts to apply all three environments as expected. In this case, there was an apply error due to an AWS IAM policy being misconfigured, and the plans were not successfully applied. A commit was pushed to fix this issue and another atlantis apply was submitted. Note, there was not another atlantis plan after the fix commit was pushed. Atlantis behaved as if it had forgotten about the failed plans and assumed they had been applied successfully when, in fact, they had not been. I believe the expected behavior should be to reject the apply since new commits were made and force another plan to be run, correct?
The result was the following:
Ran Apply for 0 projects:
Automatically merging because all plans have been successfully applied.
Locks and plans deleted for the projects and workspaces modified in this pull request:
* dir: `prod_acct/prod_env` workspace: `default`
* dir: `prod_acct/staging_env` workspace: `default`
* dir: `qa_acct/qa_env` workspace: `default`
runatlantis/atlantis (GitHub), 11/26/2025, 1:54 PM
runatlantis.io/docs/repo-level-atlantis-yaml.md:71 says terraform_distribution: terraform # Available since v0.25.0, which seems off; the real availability is v0.33.0.
This made me think my v0.31.0 could run it, and I spent quite some time debugging.
runatlantis/atlantis (GitHub), 11/27/2025, 7:56 PM
When enabling parallel_plan: true and parallel_apply: true in atlantis.yaml, we are experiencing concurrency issues with Terraform provider installation. Multiple parallel executions try to write/read the same shared plugin cache directory simultaneously, resulting in text file busy errors or checksum mismatches.
It seems that even when using a shared plugin cache, concurrent terraform init or terraform plan operations conflict when accessing the provider binaries.
### Steps to Reproduce
1. Enable parallel execution in `atlantis.yaml`:
parallel_plan: true
parallel_apply: true
2. Configure a shared plugin cache (e.g., via TF_PLUGIN_CACHE_DIR env var or .terraformrc).
3. Trigger a PR that runs multiple Terraform projects simultaneously (e.g., 5-10 projects) using the same providers.
### Logs
│ Error: Failed to install provider
│
│ Error while installing hashicorp/azuread v3.7.0: open
│ /atlantis-data/plugin-cache/registry.terraform.io/hashicorp/azuread/3.7.0/linux_amd64/terraform-provider-azuread_v3.7.0_x5:
│ text file busy
And sometimes checksum errors:
│ Error: Required plugins are not installed
│
│ The installed provider plugins are not consistent with the packages
│ selected in the dependency lock file:
│ - registry.terraform.io/hashicorp/azurerm: the cached package for registry.terraform.io/hashicorp/azurerm 4.54.0 (in .terraform/providers) does not match any of the checksums recorded in the dependency lock file
### Environment details
• Atlantis version: v0.37.1
• Terraform version: v1.13.5
• Atlantis server side config:
• TF_PLUGIN_CACHE_DIR is set to a shared directory.
### Workaround attempted
We had to implement a workaround in our atlantis.yaml to serialize the init phase and force a local download of providers (bypassing the cache) to avoid conflicts:
workflows:
  default:
    plan:
      steps:
      # Use flock to serialize init and disable cache to avoid symlink conflicts
      - run: flock /tmp/terraform_init.lock bash -c "rm -rf .terraform/providers && env -u TF_PLUGIN_CACHE_DIR TF_CLI_CONFIG_FILE=/dev/null terraform init -upgrade"
      - plan
### Proposed Solution / Feature Request
It would be great if Atlantis could handle the locking mechanism for the provider cache internally when parallel mode is enabled, or provide a native way to serialize the init step while keeping plan/apply parallel.
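For illustration, a minimal Go sketch of the requested behavior (hypothetical structure, not Atlantis code): serialize only the init step behind a process-wide lock while plans still run in parallel.

package main

import (
	"os/exec"
	"sync"
)

// initMu serializes `terraform init` so the shared plugin cache is never
// written concurrently; plan/apply remain parallel.
var initMu sync.Mutex

func runInit(dir string) error {
	initMu.Lock()
	defer initMu.Unlock()
	cmd := exec.Command("terraform", "init", "-input=false")
	cmd.Dir = dir
	return cmd.Run()
}

func runPlan(dir string) error {
	// Safe to run in parallel once providers are installed.
	cmd := exec.Command("terraform", "plan", "-input=false")
	cmd.Dir = dir
	return cmd.Run()
}

func main() {
	dirs := []string{"proj-a", "proj-b", "proj-c"}
	var wg sync.WaitGroup
	for _, d := range dirs {
		wg.Add(1)
		go func(dir string) {
			defer wg.Done()
			if err := runInit(dir); err == nil {
				_ = runPlan(dir)
			}
		}(d)
	}
	wg.Wait()
}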
runatlantis/atlantis (GitHub), 11/28/2025, 7:01 AM
Many CI services expose environment variables identifying the execution environment, for example:
• CircleCI (docs): CI / CIRCLECI
• GitHub Actions (docs): CI / GITHUB_ACTION
• Drone (docs): CI / DRONE
However, it does not exist in Atlantis.
https://www.runatlantis.io/docs/custom-workflows#native-environment-variables
The github-comment tool identifies the execution environment based on these environment variables.
https://suzuki-shunsuke.github.io/github-comment/complement
Describe the solution you'd like
This can be resolved by providing the environment variables CI=true and ATLANTIS=true as Native Environment Variables.
https://www.runatlantis.io/docs/custom-workflows#native-environment-variables
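For illustration, a tiny sketch of how a tool could detect Atlantis once the proposed variables exist (CI=true and ATLANTIS=true are the proposal, not current Atlantis behavior):

package main

import (
	"fmt"
	"os"
)

// runningUnderAtlantis checks the proposed variables; today neither is set
// by Atlantis, which is exactly the gap this issue describes.
func runningUnderAtlantis() bool {
	return os.Getenv("CI") == "true" && os.Getenv("ATLANTIS") == "true"
}

func main() {
	fmt.Println("atlantis:", runningUnderAtlantis())
}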
Describe the drawbacks of your solution
Currently, we are using ATLANTIS_TERRAFORM_VERSION for this determination, but I don't think it's very appropriate, because that variable is intended to store the Terraform version.
Describe alternatives you've considered
related issue: suzuki-shunsuke/go-ci-env#583 (comment)
runatlantis/atlantis (GitHub), 12/02/2025, 2:50 PM
After pushing a new commit to a PR whose policy checks had failed, Atlantis produces:
**Ran Plan for dir**: aws/playground/policy-brick workspace: default
**Plan Failed**: All policies must pass for project before running plan.
### Reproduction Steps
• Set up an atlantis instance with at least one element of repos.yaml#/policies/policy_sets defined, and has auto-plan enabled
• Open a PR that fails the policy check
• Wait for the policy checks to fail, then push a new commit
• Observe that atlantis produces the above error
I suspect this will also fail if auto-plan is disabled and a manual atlantis plan is run - will see if I can verify this.
A subsequent atlantis plan or push that triggers auto-plan is successfully planned.
### Logs
Logs
// policy check error on first commit
{"level":"info","caller":"events/events_controller.go:559","msg":"Handling GitHub Pull Request 'opened' event","json":{"gh-request-id":"X-Github-Delivery=REDACTED","repo":"transferwise/repo-name","pull":"216"}}
["omitted... plan & init runs as normal"]
{"level":"error","caller":"events/instrumented_project_command_runner.go:84","msg":"Failure running policy_check operation: Some policy sets did not pass.","json":{"repo":"transferwise/repo-name","pull":"216"},"stacktrace":"github.com/runatlantis/atlantis/server/events.RunAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:84\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandRunner).PolicyCheck\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:42\ngithub.com/runatlantis/atlantis/server/events.runProjectCmdsParallel.func1\n\tgithub.com/runatlantis/atlantis/server/events/project_command_pool_executor.go:29"}
// plan error
{"level":"info","caller":"events/events_controller.go:559","msg":"Handling GitHub Pull Request 'updated' event","json":{"gh-request-id":"X-Github-Delivery=REDACTED","repo":"transferwise/repo-name","pull":"216"}}
["omitted... atlantis pulls latest version, discovers updated file & sets up commands - but no policy checks are actually run"]
{"level":"debug","caller":"events/plan_command_runner.go:129","msg":"deleting previous plans and locks","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"debug","caller":"events/project_command_context_builder.go:200","msg":"Building project command context for policy_check","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"debug","caller":"events/project_command_context_builder.go:98","msg":"Building project command context for plan","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"info","caller":"vcs/github_client.go:940","msg":"Updating GitHub Check status for 'atlantis/plan: aws/playground/policy-brick/default' to 'pending'","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"info","caller":"events/plan_command_runner.go:139","msg":"Running plans in parallel","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"debug","caller":"vcs/github_client.go:950","msg":"POST /repos/transferwise/repo-name/statuses/REF returned: 201","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"debug","caller":"events/working_dir.go:109","msg":"clone directory '/home/atlantis/.data/repos/transferwise/repo-name/216/default' already exists, checking if it's at the right commit","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"debug","caller":"events/project_command_runner.go:576","msg":"acquired lock for project","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"info","caller":"events/project_locker.go:86","msg":"Acquired lock with id 'transferwise/repo-name/aws/playground/policy-brick/default'","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"info","caller":"events/working_dir.go:117","msg":"repo is at correct commit \"REF\" so will not re-clone","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"debug","caller":"events/working_dir.go:299","msg":"Comparing PR ref \"REF\" to local ref \"REF\"","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"info","caller":"vcs/github_client.go:940","msg":"Updating GitHub Check status for 'atlantis/plan: aws/playground/policy-brick/default' to 'failure'","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"info","caller":"events/plan_command_runner.go:146","msg":"deleting plans because there were errors and automerge requires all plans succeed","json":{"repo":"transferwise/repo-name","pull":"216"}}
{"level":"error","caller":"events/instrumented_project_command_runner.go:84","msg":"Failure running plan operation: All policies must pass for project before running plan.","json":{"repo":"transferwise/repo-name","pull":"216"},"stacktrace":"github.com/runatlantis/atlantis/server/events.RunAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:84\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandRunner).Plan\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_runner.go:38\ngithub.com/runatlantis/atlantis/server/events.runProjectCmdsParallel.func1\n\tgithub.com/runatlantis/atlantis/server/events/project_command_pool_executor.go:29"}
Happy to provide additional logs upon request.
### Environment details
If not already included, please provide the following:
• Atlantis version: v0.37.1
• Deployment method: ecs
Atlantis server-side config file:
repos:
- id: /.*/
  branch: /^(main|master)$/
  apply_requirements: [approved, mergeable]
  workflow: default
policies:
  owners:
    teams:
    - my-team
  policy_sets:
  - name: aws
    path: policy # local path, ignored when --update is used
    source: local
workflows:
  default:
    plan:
      steps:
      - init
      - plan
      - show
    apply:
      steps:
      - apply
    policy_check:
      steps:
      - show
      - policy_check:
          extra_args:
          - "--update"
          - "${opa_policy_url}"
          - "-d"
          - "./policy/data.json"
          - "--namespace"
          - "${namespace}"
metrics:
  prometheus:
    endpoint: "/metrics"
Additional features:
• We have enabled parallel plan & apply
• We have enabled auto-discovery & autoplan-modules
• We have disabled the Terraform plugin cache
• We have allowed atlantis to ignore failed atlantis/apply checks when checking if a PR is mergeable
### Additional Context
I believe this was introduced by #5851 - which changed the behaviour to validate that policy checks are passing before running the plan command.
Fixes here could be (a sketch of the first option follows this list):
• remove the valid.PoliciesPassedCommandReq if present in ctx.PlanRequirements when passed to DefaultCommandRequirementHandler.ValidatePlanProject
• this has the smallest scope, but if it is possible to run a policy_check before a plan, it may cause a regression for that use case (I can't tell whether that is the case)
• not inject it at the t…
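As a rough illustration of the first option, here is a minimal Go sketch. It assumes requirement names are plain strings; the constant's value and the helper's name are hypothetical stand-ins for valid.PoliciesPassedCommandReq and whatever call site feeds DefaultCommandRequirementHandler.ValidatePlanProject:

```go
package main

import "fmt"

// Stand-in for valid.PoliciesPassedCommandReq from the Atlantis codebase;
// the string value here is a guess, used only for illustration.
const PoliciesPassedCommandReq = "policies_passed"

// withoutPoliciesPassed strips the policies-passed requirement from a plan's
// requirement list before it reaches ValidatePlanProject. The helper name is
// hypothetical, not Atlantis's actual code.
func withoutPoliciesPassed(reqs []string) []string {
	filtered := make([]string, 0, len(reqs))
	for _, r := range reqs {
		if r != PoliciesPassedCommandReq {
			filtered = append(filtered, r)
		}
	}
	return filtered
}

func main() {
	reqs := []string{"approved", "mergeable", PoliciesPassedCommandReq}
	fmt.Println(withoutPoliciesPassed(reqs)) // [approved mergeable]
}
```

Whether filtering at this point would regress a legitimate policy_check-before-plan flow (the concern in the second bullet) would still need to be verified against the actual call order.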
runatlantis/atlantisGitHub
12/02/2025, 8:11 PMDEFAULT_CONFTEST_VERSION is not defined in the stock image. This means Atlantis always prints the info log "failed to get default conftest version. Will attempt request scoped lazy loads: DEFAULT_CONFTEST_VERSION not set". Starting Atlantis with the default configuration should not log errors like this. DEFAULT_CONFTEST_VERSION should be available at runtime, especially because this environment variable is already used to download conftest at build time.
### Reproduction Steps
Executing this will print out the error log.
docker run -it ghcr.io/runatlantis/atlantis:v0.36.0 atlantis server --gh-user=test --gh-token=test --repo-allowlist=test
### Logs
### Environment details
• Atlantis version: 0.36 and 0.37.1
### Additional Context
We should make DEFAULT_CONFTEST_VERSION available in the stock container image. The source code uses this environment variable, so it should always be defined with a default value; operators should explicitly unset it if they want to keep the lazy-load behavior. It is unreasonable to have to specify DEFAULT_CONFTEST_VERSION manually, especially since conftest is downloaded at build time using this variable.
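A minimal sketch of the suggested behavior, assuming the lazy-load path can accept a fallback; the fallback version below is an example value, not the image's actual build argument:

```go
package main

import (
	"fmt"
	"os"
)

// defaultConftestVersion sketches the suggested fix: fall back to a baked-in
// default instead of logging an error when the variable is unset. The
// fallback value is illustrative only.
func defaultConftestVersion() string {
	if v := os.Getenv("DEFAULT_CONFTEST_VERSION"); v != "" {
		return v
	}
	return "0.56.0" // assumed build-time default, not the image's real one
}

func main() {
	fmt.Println("conftest version:", defaultConftestVersion())
}
```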
runatlantis/atlantisGitHub
12/03/2025, 8:18 AMTerragrunt's runner pool breaks run --all, and may impact other workflows as well. The runner pool was made generally available in v0.89.0.
Why it breaks
When discovering units to run, terragrunt ignores hidden directories, not only under the working directory but also above it. Atlantis jobs happen to run inside ~/.atlantis, which triggers the issue.
I attempted to work around the limitation using TG_QUEUE_INCLUDE_DIR, but it didn’t behave consistently. In the end, the only reliable fix was to change the Atlantis data directory from ~/.atlantis to a non-hidden one (in our case ~/atlantis-data).
Not sure if it should be considered a bug in Atlantis, but definitely something that Atlantis users and developers should be aware of.
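For illustration only, here is a sketch of the kind of hidden-directory filter that would explain the behavior; this is an assumption about terragrunt's unit discovery, not its actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// hasHiddenAncestor reports whether any element of path starts with a dot.
// This mirrors, as an assumption, the filter that would make units below
// ~/.atlantis invisible to discovery.
func hasHiddenAncestor(path string) bool {
	for _, part := range strings.Split(path, "/") {
		if part != ".." && len(part) > 1 && strings.HasPrefix(part, ".") {
			return true
		}
	}
	return false
}

func main() {
	// Units under the default Atlantis data dir sit below a hidden ancestor:
	fmt.Println(hasHiddenAncestor("/home/atlantis/.atlantis/repos/org/repo/1/default/unit")) // true
	// Moving the data dir to a non-hidden path avoids the filter:
	fmt.Println(hasHiddenAncestor("/home/atlantis/atlantis-data/repos/org/repo/1/default/unit")) // false
}
```

If this holds, any non-hidden location works; Atlantis's data directory can be relocated with the --data-dir flag (or the ATLANTIS_DATA_DIR environment variable), which is how we ended up on ~/atlantis-data.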
runatlantis/atlantisGitHub
12/03/2025, 4:00 PMWhen running /api/plan against the main branch, the API returns the following error:
{
  "error": "post-merge verification failed: HEAD^2 != main"
}
### Reproduction Steps
Make a POST request to https://hostname/api/plan with the following body (a Go sketch of the same request follows the body):
{
  "Repository": "myorg/myrepo",
  "Ref": "main",
  "Type": "Github",
  "Paths": [
    {
      "Directory": "myterraformconfig"
    }
  ]
}
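For completeness, a minimal Go reproduction of the same request. The hostname is a placeholder, and this assumes the server was started with --api-secret, whose value is passed in the X-Atlantis-Token header:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same body as the reproduction steps above; repo and directory names
	// are the placeholders from the report.
	body := []byte(`{
	  "Repository": "myorg/myrepo",
	  "Ref": "main",
	  "Type": "Github",
	  "Paths": [{"Directory": "myterraformconfig"}]
	}`)

	req, err := http.NewRequest(http.MethodPost, "https://hostname/api/plan", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Atlantis-Token", "REDACTED") // the --api-secret value

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	// With checkout strategy "merge", this currently returns the
	// post-merge verification error shown above.
	fmt.Println(resp.Status, string(out))
}
```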
### Logs
Logs
{"level":"info","ts":"2025-12-03T15:31:09.818Z","caller":"events/working_dir.go:120","msg":"repo was already cloned but branch is not at correct commit, updating to \"main\"","json":{}}
{"level":"warn","ts":"2025-12-03T15:31:25.985Z","caller":"controllers/api_controller.go:391","msg":"{\"error\":\"post-merge verification failed: HEAD^2 != main\"}","json":{},"stacktrace":"<http://github.com/runatlantis/atlantis/server/controllers.(*APIController).respond|github.com/runatlantis/atlantis/server/controllers.(*APIController).respond>\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:391\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).apiReportError\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:87\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).Plan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:101\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2322\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.1/mux.go:212\ngithub.com/urfave/negroni/v3.(*Negroni).UseHandler.Wrap.func1\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:59\ngithub.com/urfave/negroni/v3.HandlerFunc.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:33\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:51\ngithub.com/runatlantis/atlantis/server.(*RequestLogger).ServeHTTP\n\tgithub.com/runatlantis/atlantis/server/middleware.go:70\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Recovery).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/recovery.go:210\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Negroni).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.1/negroni.go:111\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3340\nnet/http.(*conn).serve\n\tnet/http/server.go:2109"}
### Environment details
• Atlantis version: v0.37.1
• Deployment method: helm
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: n/a
• Atlantis flags: see helm chart values below
Atlantis helm chart values:
# config file
github:
  hostname: <removed>
enableDiffMarkdownFormat: true
ingress:
  enabled: false
environment:
  ATLANTIS_CHECKOUT_STRATEGY: merge
  ATLANTIS_DEFAULT_TF_VERSION: v1.11.1
  ATLANTIS_WEB_BASIC_AUTH: "true"
  AWS_ENDPOINT_URL_S3: <removed>
  TF_CLI_CONFIG_FILE: /plugins/terraform.tfrc
loadEnvFromSecrets:
- <removed>
initConfig:
  enabled: true
  sharedDir: /plugins
atlantisUrl: <removed>
orgAllowlist: <removed>
Atlantis server-side config file:
repos:
- id: /.*/
  allowed_overrides: [apply_requirements]
workflows:
  default:
    plan:
      steps:
      - run: terraform fmt -check=true -diff=true -write=false
      - init
      - plan
    apply:
      steps:
      - apply
      - run: inventory-update.sh
Repo atlantis.yaml file:
version: 3
projects:
- dir: ./myterraformconfig
### Additional Context
The error message is part of the changes in PR #5895.
runatlantis/atlantisGitHub
12/03/2025, 5:28 PMWith gh-allow-mergeable-bypass-apply enabled, Atlantis may incorrectly determine the mergeability of a pull request when a required workflow has multiple checks. Atlantis uses the outcome of the first check in the suite rather than the outcome of the suite as a whole. If the first check succeeds but the suite as a whole has not (for example, because a second check is still in progress or has failed), Atlantis considers the workflow successful and wrongly proceeds with the apply. This can let apply execute when it should not be allowed to, and it can also leave Atlantis attempting to merge the pull request after apply and failing, since GitHub will not permit the merge.
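A sketch of the aggregation the report implies is missing, with simplified types; the field names are illustrative stand-ins, not Atlantis's or the GitHub API client's actual types:

```go
package main

import "fmt"

// CheckRun is a simplified stand-in for a GitHub check run.
type CheckRun struct {
	Name       string
	Status     string // "completed", "in_progress", "queued"
	Conclusion string // "success", "failure", "cancelled", ...
}

// suitePassed treats a required workflow as successful only when every check
// run in its suite has completed with conclusion "success" - rather than
// consulting only the first run, which is the behavior described above.
func suitePassed(runs []CheckRun) bool {
	if len(runs) == 0 {
		return false
	}
	for _, r := range runs {
		if r.Status != "completed" || r.Conclusion != "success" {
			return false
		}
	}
	return true
}

func main() {
	runs := []CheckRun{
		{Name: "lint", Status: "completed", Conclusion: "success"},
		{Name: "test", Status: "in_progress"},
	}
	fmt.Println(suitePassed(runs)) // false: the suite is not done yet
}
```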
### Reproduction Steps
Configure a ruleset with a required workflow that has more than one check. Trigger an Atlantis apply after the first check has succeeded but before the workflow as a whole has completed. Alternatively, trigger an Atlantis apply when the first check succeeded but the workflow as a whole completed with a failure.
### Logs
### Environment details
### Additional Context
runatlantis/atlantis