GitHub
06/13/2025, 3:45 PM
```go
diffKeywordRegex = regexp.MustCompile(`(?m)^( +)([-+~]\s)(.*)(\s=\s|\s->\s|<<|\{|\(known after apply\)| {2,}[^ ]+:.*)(.*)`)
diffListRegex    = regexp.MustCompile(`(?m)^( +)([-+~]\s)(".*",)`)
diffTildeRegex   = regexp.MustCompile(`(?m)^~`)

// DiffMarkdownFormattedTerraformOutput formats the Terraform output to match diff markdown format
func (p PlanSuccess) DiffMarkdownFormattedTerraformOutput() string {
	formattedTerraformOutput := diffKeywordRegex.ReplaceAllString(p.TerraformOutput, "$2$1$3$4$5")
	formattedTerraformOutput = diffListRegex.ReplaceAllString(formattedTerraformOutput, "$2$1$3")
	formattedTerraformOutput = diffTildeRegex.ReplaceAllString(formattedTerraformOutput, "!")

	return strings.TrimSpace(formattedTerraformOutput)
}
```
Here is the regexr reproduction:
1. diffKeywordRegex: https://regexr.com/8fdf9
2. diffListRegex: https://regexr.com/8fden
I believe the above contents are getting picked up by the former.
### Logs
### Environment details
• Atlantis version: 0.34.0
• Deployment method: eks
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version:
• Atlantis flags: n/a
## Options
1. Improve the regex to exclude yaml lists somehow
2. Exempt aws_cloudformation_stack resources from the diff conversion
3. Use hcl instead of diff and don't modify the terraform output at all
### Additional Context
• aws_cloudformation_stack
• #2438
runatlantis/atlantis
GitHub
06/17/2025, 5:21 PM
`atlantis unlock` and immediately after `atlantis apply`, which caused Atlantis to think there are no affected states.
We also enable auto merge after apply, which merged the PR.
### Reproduction Steps
• create a PR
• comment `atlantis unlock`
• comment `atlantis apply`
### Logs
### Environment details
• Atlantis version: v0.34.0
• Atlantis flags: auto merge
### Additional Context
[Image](https://private-user-images.githubusercontent.com/35378572/456127804-ae3871e1-db48-4942-95cd-75b96b3cc042.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NTAxODExODQsIm5iZiI6MTc1MDE4MDg4NCwicGF0aCI6Ii8zNTM3ODU3Mi80NTYxMjc4MDQtYWUzODcxZTEtZGI0OC00OTQyLTk1Y2QtNzViOTZiM2NjMDQyLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTA2MTclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwNjE3VDE3MjEyNFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTFlYmI0YmU3MWUyYTYxMTFhNDA1N2FjYjgxYmIyYTUyMTgwMDg1OGRiYzljZTMwZDcxYjM5YjQ1M2M0ZmE2MGYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.1I5L_nwzgHSU0ymLlnHACtHrHYNNZzESLkdTMbSa77g)
[Image](https://private-user-images.githubusercontent.com/35378572/456127738-4eab8b52-2e23-4d5b-a184-989d08d1ec00.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NTAxODExODQsIm5iZiI6MTc1MDE4MDg4NCwicGF0aCI6Ii8zNTM3ODU3Mi80NTYxMjc3MzgtNGVhYjhiNTItMmUyMy00ZDViLWExODQtOTg5ZDA4ZDFlYzAwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTA2MTclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwNjE3VDE3MjEyNFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTM3NDBjNjBjNzY0NDQ2NTAwYThkMTk5ZjMzMDMyYjY5YTk2MzlmODcyZmExZDQ0Mjk0MzM4MTA3NjZlMWFkMjMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.nFxlYwGvp920Omuk8ot1QVy8dan85CMTapmF0YUuaRg)
runatlantis/atlantis
GitHub
06/18/2025, 2:30 PM
apply in parallel. One example being Terraform resources that create git commits.
But these are a rare exception in my repo, so I want to set ATLANTIS_PARALLEL_POOL_SIZE<0 and ATLANTIS_PARALLEL_APPLY=true for my server/repo and also be able to override the behaviour when invoking Atlantis for this single PR.
### Describe the solution you'd like
I would like to be able to use something like `atlantis apply -parallelism 1` to apply all plans, but sequentially.
### Describe the drawbacks of your solution
It would be sensible to not allow setting `-parallelism` higher than ATLANTIS_PARALLEL_POOL_SIZE, so the user cannot overload the instance. Aside from this I see no issues.
### Describe alternatives you've considered
Technically it would be possible to set `parallel_apply: false` in the repo config for a single PR and revert it afterwards, but that's far from ideal.
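The repo-config alternative mentioned above amounts to a one-line toggle in the repo-level `atlantis.yaml` (a sketch; `parallel_apply` is the documented repo-config key, the project entry is illustrative):

```yaml
version: 3
# Temporarily disable parallel applies for this PR, revert afterwards.
parallel_apply: false
projects:
  - dir: .
```

Reverting requires another commit, which is why the reporter calls this far from ideal.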
runatlantis/atlantis
GitHub
06/18/2025, 11:23 PM
GitHub
06/19/2025, 3:57 PM
version: 3
projects:
- name: qa
dir: qa_acct/qa_env
terraform_version: v0.12.8
autoplan:
when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
enabled: false
- name: staging
dir: prod_acct/staging_env
terraform_version: v0.12.8
autoplan:
when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
enabled: false
- name: prod
dir: prod_acct/prod_env
terraform_version: v0.12.8
autoplan:
when_modified: ["../../projects/*", "*.tf*", "../../modules/*"]
enabled: false
Plans are generated for all three projects as normal after commenting exactly `atlantis plan`. Immediately afterward, commenting `atlantis apply` attempts to apply all three environments as expected. In this case, there was an apply error due to an AWS IAM policy being misconfigured and the plans were not successfully applied. A commit was pushed to fix this issue and another `atlantis apply` was submitted. Note, there was not another `atlantis plan` after the fix commit was pushed. Atlantis behaved as if it had forgotten about the failed plans and assumed they had been applied successfully when, in fact, they had not been. I believe the expected behavior should be to reject the apply since new commits were made and force another plan to be run, correct?
The result was the following:
Ran Apply for 0 projects:
Automatically merging because all plans have been successfully applied.
Locks and plans deleted for the projects and workspaces modified in this pull request:
* dir: `prod_acct/prod_env` workspace: `default`
* dir: `prod_acct/staging_env` workspace: `default`
* dir: `qa_acct/qa_env` workspace: `default`
runatlantis/atlantis
GitHub
06/19/2025, 4:10 PM
`/api/plan` but is receiving a 500 error back.
When checking the logs of the Atlantis server, I see that Atlantis is trying to pull `pull/0/head`, which doesn't work.
We're also using the `merge` checkout strategy.
To fix this issue, I believe we should add the `c.pr.Num > 0` condition to this if check, as the PR number is an optional parameter to the plan/apply API endpoints:
atlantis/server/events/working_dir.go
Line 334 in 42e2dc7
```go
if w.GithubAppEnabled {
```
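The proposed guard can be illustrated with a small standalone sketch (function and parameter names are assumed, this is not Atlantis's actual code): when the API request carries no PR number, fall back to fetching the requested ref instead of a nonexistent `pull/0/head`:

```go
package main

import "fmt"

// fetchRefFor picks the git ref to fetch. A real PR has Num > 0; API-driven
// plans may carry no PR, in which case "pull/0/head" would be an invalid ref.
func fetchRefFor(prNum int, ref string) string {
	if prNum > 0 {
		return fmt.Sprintf("pull/%d/head", prNum)
	}
	return ref
}

func main() {
	fmt.Println(fetchRefFor(42, "main")) // pull/42/head
	fmt.Println(fetchRefFor(0, "main"))  // main
}
```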
### Reproduction Steps
Set ATLANTIS_CHECKOUT_STRATEGY=merge
Note that the PR parameter is optional, and as such, is omitted:
curl --request POST 'https://<ATLANTIS_HOST_NAME>/api/plan' \
--header 'X-Atlantis-Token: <ATLANTIS_API_SECRET>' \
--header 'Content-Type: application/json' \
--data-raw '{
"Repository": "repo-name",
"Ref": "main",
"Type": "Github",
"Paths": [{
"Directory": ".",
"Workspace": "default"
    }]
}'
### Logs
[
{
"level": "info",
"ts": "2024-08-16T14:33:16.101Z",
"caller": "events/working_dir.go:235",
"msg": "creating dir '/atlantis-data/repos/<myorg/my-repo-name>/0/default'",
"json": {}
},
{
"level": "error",
"ts": "2024-08-16T14:33:18.541Z",
"caller": "events/instrumented_project_command_builder.go:75",
"msg": "Error building plan commands: running git fetch origin pull/0/head:: fatal: couldn't find remote ref pull/0/head\n: exit status 128",
"json": {},
"stacktrace": "github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:75\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:35\ngithub.com/runatlantis/atlantis/server/controllers.(*APIRequest).getCommands\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:67\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).apiPlan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:148\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).Plan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:93\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.1/mux.go:212\ngithub.com/urfave/negroni/v3.(*Negroni).UseHandler.Wrap.func1\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:59\ngithub.com/urfave/negroni/v3.HandlerFunc.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:33\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/runatlantis/atlantis/server.(*RequestLogger).ServeHTTP\n\tgithub.com/runatlantis/atlantis/server/middleware.go:70\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Recovery).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/recovery.go:210\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Negroni).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:111\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"
},
{
"level": "warn",
"ts": "2024-08-16T14:33:18.541Z",
"caller": "controllers/api_controller.go:261",
"msg": "{\"error\":\"failed to build command: running git fetch origin pull/0/head:: fatal: couldn't find remote ref pull/0/head\\n: exit status 128\"}",
"json": {},
"stacktrace": "github.com/runatlantis/atlantis/server/controllers.(*APIController).respond\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:261\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).apiReportError\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:81\ngithub.com/runatlantis/atlantis/server/controllers.(*APIController).Plan\n\tgithub.com/runatlantis/atlantis/server/controllers/api_controller.go:95\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2171\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\tgithub.com/gorilla/mux@v1.8.1/mux.go:212\ngithub.com/urfave/negroni/v3.(*Negroni).UseHandler.Wrap.func1\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:59\ngithub.com/urfave/negroni/v3.HandlerFunc.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:33\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/runatlantis/atlantis/server.(*RequestLogger).ServeHTTP\n\tgithub.com/runatlantis/atlantis/server/middleware.go:70\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Recovery).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/recovery.go:210\ngithub.com/urfave/negroni/v3.middleware.ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:51\ngithub.com/urfave/negroni/v3.(*Negroni).ServeHTTP\n\tgithub.com/urfave/negroni/v3@v3.1.0/negroni.go:111\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3142\nnet/http.(*conn).serve\n\tnet/http/server.go:2044"
}
]
### Environment details
• Atlantis version: latest
• Deployment method: eks
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version:
• Atlantis flags:
ATLANTIS_CHECKOUT_STRATEGY=merge
runatlantis/atlantis
GitHub
06/24/2025, 1:25 PM
GitHub
06/25/2025, 12:01 PM
`if len(projectCmds) > 0`, but this logic appears to have been removed in that update.
Perhaps this is an intentional change, but we haven’t found any mention or documentation of it in the release notes or related discussions. Clarification would be appreciated.
### Reproduction Steps
Reproducing the issue should be straightforward. We've consistently seen the `atlantis/plan` check remain in a pending state on every pull request since upgrading to v0.33.
runatlantis/atlantis
GitHub
06/27/2025, 8:54 AM
GitHub
06/30/2025, 8:22 AM
`atlantis.yaml` files that worked in Atlantis 0.34.0 now fail with duplicate key errors in Atlantis 0.35.0. This is a breaking change that affects users who utilize YAML anchors and aliases to reduce duplication in their Atlantis configurations.
The root cause is the migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml in version 0.35.0, which introduced stricter YAML parsing that now detects duplicate keys that were previously allowed.
### Reproduction Steps
1. Create an atlantis.yaml
file using YAML anchors and aliases that results in duplicate keys after anchor resolution
2. Use this configuration with Atlantis 0.34.0 - it works correctly
3. Upgrade to Atlantis 0.35.0 and run the same configuration - it fails with duplicate key errors
### Example Configuration
version: 3
automerge: true
parallel_plan: true
parallel_apply: true
abort_on_execution_order_fail: true
projects:
- &project_template
name: template
branch: /^master$/
dir: template
repo_locks:
mode: on_apply # on_plan, on_apply, disabled
custom_policy_check: false
autoplan:
when_modified:
- "*.tf"
- "../modules/**/*.tf"
- ".terraform.lock.hcl"
enabled: true
plan_requirements:
- undiverged
apply_requirements:
- mergeable
- approved
- undiverged
import_requirements:
- mergeable
- approved
- undiverged
- <<: *project_template
name: project1
dir: terraform/aws/project1/
workflow: terraform
# snip...
### Logs
parsing atlantis.yaml: [37:5] duplicate key "name"
34 | - undiverged
35 |
36 | - <<: *project_template
> 37 | name: project1
^
38 | dir: terraform/aws/project1/
39 | workflow: terraform
40 |
41 |
### Environment details
Atlantis version: 0.35.0 (issue present), 0.34.0 (working)
Latest version test: Issue is present in the latest version (0.35.0)
Deployment method: N/A (affects all deployment methods)
Atlantis flags: N/A (affects YAML parsing regardless of flags)
Atlantis server-side config file: N/A (issue is with repo-level atlantis.yaml)
Repo atlantis.yaml
file: See example above - any configuration using YAML anchors that results in duplicate keys after anchor resolution
Additional environment info: This is a parsing issue that affects all environments
### Additional Context
• Breaking change introduced in PR #5579: Migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml v1.17.1
• Specific commit: 8639729 - "Replace gopkg.in/yaml.v3 with github.com/goccy/go-yaml"
• Parser change: The new library uses `yaml.Strict()` mode, which enables stricter validation
• Impact: Users who have been using YAML anchors successfully in their atlantis.yaml files will experience breaking changes when upgrading to 0.35.0
runatlantis/atlantis
GitHub
07/04/2025, 6:40 AM
`ghcr.io/runatlantis/atlantis:v0.35.0` image on ARM ECS Fargate), previously downloaded Terraform binaries (e.g., `terraform1.12.2`) fail silently due to architecture mismatch.
Instead of showing a clear error, Atlantis tries to execute the x86_64 binary, which fails with:
syntax error: unterminated quoted string
This happens because the shell interprets the incompatible binary as a script.
### Reproduction Steps
1. Run Atlantis v0.33.0 (or earlier) on AMD64 (ECS, Fargate).
2. Allow Atlantis to download Terraform versions (e.g., 1.12.2).
3. Upgrade to v0.35.0 and switch to an ARM64 architecture.
4. Keep .atlantis/bin/terraform*
binaries in the shared volume.
5. Trigger a plan for a project using an old Terraform version.
6. Atlantis attempts to run the incompatible binary and fails with a shell error.
### Logs
Logs
running 'sh -c' '/home/atlantis/.atlantis/bin/terraform1.12.2 init -input=false -upgrade' in '/home/atlantis/.atlantis/repos/...'
/home/atlantis/.atlantis/bin/terraform1.12.2: line 11: syntax error: unterminated quoted string
No mention of binary incompatibility or fallback handling.
### Environment details
• Atlantis version: v0.35.0
• Previously used version: v0.33.0 on AMD64
• Deployment method: ECS Fargate (platform: linux/arm64
)
• Terraform version: 1.12.2 (binary pre-downloaded by Atlantis)
• Execution context: Fargate with EFS shared mount at /home/atlantis
• Terraform binaries: Preexisting files like /home/atlantis/.atlantis/bin/terraform1.12.2
from AMD architecture
• Atlantis default TF version env: ATLANTIS_DEFAULT_TF_VERSION=v1.9.0
### Additional Context
• This appears to be a binary execution issue due to architecture mismatch (x86_64 binary executed on ARM64).
• Atlantis does not validate the downloaded binary architecture or re-download when switching platforms.
• Workaround: Delete /home/atlantis/.atlantis/bin/terraform*
after architecture switch to force fresh (ARM64) downloads.
• Suggest Atlantis:
• Detect binary architecture mismatch before execution
• Log architecture info on terraform init
failures
• Offer a flag or auto-clean option on arch switch
runatlantis/atlantis
GitHub
07/07/2025, 6:34 AM
atlantis = {
environment = [
{
name : "ATLANTIS_REPO_CONFIG_JSON",
value : jsonencode(yamldecode(file("${path.module}/server-atlantis.yaml"))),
}
]
secrets = [
{
name = "ATLANTIS_SLACK_TOKEN"
valueFrom = data.aws_secretsmanager_secret.atlantis_slack_token.arn
}
]
}
server-atlantis.yaml:
repos:
- id: /.*/
allow_custom_workflows: true
allowed_overrides:
- apply_requirements
- workflow
apply_requirements:
- approved
workflow: default
webhooks:
- event: apply
kind: slack
channel: XXXXXXXXXXX
- event: plan
kind: slack
channel: XXXXXXXXXXX
### Logs
Nothing related to webhooks or Slack in the logs
### Environment details
ATLANTIS_SLACK_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ATLANTIS_REPO_CONFIG_JSON={"repos":[{"allow_custom_workflows":true,"allowed_overrides":["apply_requirements","workflow"],"apply_requirements":["approved"],"id":"/.*/","workflow":"default"}],"webhooks":[{"channel":"XXXXXXXXXXX","event":"apply","kind":"slack"},{"channel":"XXXXXXXXXXX","event":"plan","kind":"slack"}]}
### Additional Context
runatlantis/atlantis
GitHub
07/07/2025, 2:43 PM
GitHub
07/10/2025, 4:57 PM
`*.tfplan` from the `atlantis-data` dir before uploading the data back to the S3 bucket. So each time GHA starts an Atlantis workflow it has to sync only a few `tfplan` files. It makes pre-run and post-run S3 sync almost instant.
The problem is that when we run `atlantis plan` multiple times it works great: Atlantis clones the PR files from git and plans changes if needed. But when we run `atlantis apply` it doesn't clone the data, it just fails with the following error message:
Error building apply commands: running git ls-files . --others: fatal: not a git repository (or any of the parent directories): .git\n: exit status 128
The question is: why doesn't `atlantis apply` clone the repo if it's missing (like `atlantis plan` does)?
### Reproduction Steps
This is what our reusable GHA workflow looks like:
# Sync Atlantis data pre-run
- name: Pre-run sync from S3
run: |
data_path="repos/${{ github.repository }}/${{ steps.get_issue_number.outputs.result }}"
mkdir -p /atlantis-data/$data_path
aws s3 cp --recursive \
s3://atlantis-s3-${{ steps.aws-resource.outputs.name }}/_atlantis-data/$data_path/ \
/atlantis-data/$data_path/
# Copying files to S3 does not keep their unix permissions.
chmod -R 755 /atlantis-data
# Send POST request to Atlantis service
- name: Run Atlantis
# ...
# Sync Atlantis data post-run
- name: Post-run sync to S3
run: |
data_path="repos/${{ github.repository }}/${{ steps.get_issue_number.outputs.result }}"
if [ "${{ inputs.enable-s3-sync-lite }}" = "true" ]; then
# Delete all files that are NOT *.tfplan
find /atlantis-data -type f ! -name '*.tfplan' -exec rm -f {} +
# Delete all now-empty directories
find /atlantis-data -type d -empty -delete
# Ensure the data path exists (it might not if no plans were created)
mkdir -p /atlantis-data/$data_path
fi
# --delete
will ensure to clean up anything that Atlantis deletes locally
aws s3 sync \
/atlantis-data/$data_path \
s3://atlantis-s3-${{ steps.aws-resource.outputs.name }}/_atlantis-data/$data_path \
--delete --include "*"
### Logs
## atlantis plan
It clones the repo after this warning message.
# not the first plan - only tfplan files in the s3 bucket
{"level":"warn","ts":"2025-07-10T15:03:13.589Z","caller":"events/working_dir.go:123",
"msg":"will re-clone repo, could not determine if was at correct commit: git rev-parse HEAD: exit status 128: fatal: not a git repository (or any of the parent directories): .git\n",
"json":{"repo":".../test-newrelic-tf","pull":"268"},
"stacktrace":"
# https://github.com/runatlantis/atlantis/blob/315e25b135dbb19aa0473e867b703dbf9fbba592/server/events/working_dir.go#L125
github.com/runatlantis/atlantis/server/events.(*FileWorkspace).Clone
	github.com/runatlantis/atlantis/server/events/working_dir.go:123
github.com/runatlantis/atlantis/server/events.(*GithubAppWorkingDir).Clone
	github.com/runatlantis/atlantis/server/events/github_app_working_dir.go:39
# https://github.com/runatlantis/atlantis/blob/315e25b135dbb19aa0473e867b703dbf9fbba592/server/events/project_command_builder.go#L482
github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).buildAllCommandsByCfg
	github.com/runatlantis/atlantis/server/events/project_command_builder.go:344
github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).
	github.com/runatlantis/atlantis/server/events/project_command_builder.go:244
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands.func1
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:38
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:71
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:35
github.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).run
	github.com/runatlantis/atlantis/server/events/plan_command_runner.go:193
github.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run
	github.com/runatlantis/atlantis/server/events/plan_command_runner.go:290
github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand
	github.com/runatlantis/atlantis/server/events/command_runner.go:301
"}
## atlantis apply
```
{"level":"error","ts":"2025-07-10T12:16:53.808Z","caller":"events/instrumented_project_command_builder.go:75",
"msg":"Error building apply commands: running git ls-files . --others: fatal: not a git repository (or any of the parent directories): .git\n: exit status 128",
"json":{},
"stacktrace":"
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats…
```
runatlantis/atlantis
GitHub
07/11/2025, 12:12 PM
GitHub
07/11/2025, 7:33 PM
GitHub
07/15/2025, 7:55 PM
`disable-markdown-folding` configuration setting when the plan spans multiple comments. This appears to be because the GithubClient `CreateComment` function generates the `<details><summary>` tags itself without checking the disable-markdown-folding configuration: https://github.com/runatlantis/atlantis/blob/main/server/events/vcs/github_client.go#L229-L259
### Reproduction Steps
Configure Atlantis with ATLANTIS_DISABLE_MARKDOWN_FOLDING=true
Set up Atlantis to post Terraform plans in a PR as comments.
Make a change that will result in a large plan diff.
The initial comment correctly shows its section of the plan, but the follow-up comment has the "Show Output" collapsible area, which it shouldn't.
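A sketch of the shape of the fix (illustrative names, not the actual GithubClient code): the comment splitter would need to consult the folding setting before wrapping follow-up chunks:

```go
package main

import "fmt"

// wrapChunk wraps one comment chunk in a collapsible section unless
// markdown folding has been disabled by the operator.
func wrapChunk(chunk string, disableFolding bool) string {
	if disableFolding {
		return chunk
	}
	return "<details><summary>Show Output</summary>\n\n" + chunk + "\n</details>"
}

func main() {
	fmt.Println(wrapChunk("plan output...", true))  // emitted as-is
	fmt.Println(wrapChunk("plan output...", false)) // wrapped in <details>
}
```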
runatlantis/atlantis
GitHub
07/15/2025, 8:07 PM
`mock_outputs` - initial plan is created using mock_outputs.
### Reproduction Steps
• `mod_B` depends on `mod_A`; `mod_A` should produce an output variable `id`
• `mod_B` needs the `mod_A.outputs.id` variable, but it is not known until `mod_A` is applied, so we typically use `mock_outputs` so the plan does not fail
• on `atlantis plan`, an invalid plan for `mod_B` is generated with `mock_outputs`
• on `atlantis apply`, `mod_A` is applied successfully (no dependencies) but `mod_B` is applied from the old plan with `mock_outputs`
• when we then run `atlantis plan` again, a valid plan for `mod_B` is created
• when we then run `atlantis apply`, `mod_B` is successfully applied with the valid plan
Notes:
• If we run `terragrunt run-all apply` locally, the resources are applied in the correct order and inputs are provided after they are known.
• The problem becomes worse if there are more dependency levels (e.g. `mod_C` <- `mod_B` <- `mod_A`)
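For ordering applies within a single Atlantis run, newer Atlantis versions (the issue's v0.19.2 predates this) support `execution_order_group` and `abort_on_execution_order_fail` in the repo config, which addresses this kind of dependency chain; a hypothetical fragment:

```yaml
version: 3
abort_on_execution_order_fail: true
projects:
  - name: mod_A
    dir: mod_A
    execution_order_group: 1   # planned/applied first
  - name: mod_B
    dir: mod_B
    execution_order_group: 2   # runs only after group 1 succeeds
```

This does not re-plan `mod_B` after `mod_A`'s apply on its own, but it prevents applying a stale mock-based plan out of order.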
### Logs
### Environment details
• Atlantis version: v0.19.2
• Atlantis flags: atlantis server
Atlantis server-side config file:
# config file
repos:
- id: /github.com/my-org/.*/
workflow: terragrunt
apply_requirements: [approved, mergeable]
allowed_overrides: [workflow]
allowed_workflows: [terragrunt]
pre_workflow_hooks:
- run: >
terragrunt-atlantis-config generate --output atlantis.yaml --autoplan
--workflow terragrunt --create-workspace --parallel
workflows:
terragrunt:
plan:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
- env:
name: TF_CLI_ARGS
value: '-no-color'
- run: terragrunt run-all plan --terragrunt-non-interactive --terragrunt-log-level=warn -out "$PLANFILE"
apply:
steps:
- env:
name: TERRAGRUNT_TFPATH
command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
- env:
name: TF_CLI_ARGS
value: '-no-color'
- run: terragrunt run-all apply --terragrunt-non-interactive --terragrunt-log-level=warn "$PLANFILE"
Repo `atlantis.yaml` file: generated by `terragrunt-atlantis-config` in pre_workflow_hooks
### Additional Context
runatlantis/atlantis
GitHub
07/27/2025, 9:03 PM
`wernight`, which hasn't been updated in a while and doesn't have proper arm/v8 support. A fix would be to simply replace the `wernight/ngrok` image with `ngrok/ngrok`.
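The suggested swap would look roughly like this `docker-compose.yml` fragment (a sketch; the service wiring and token handling are assumptions about the repo's compose file):

```yaml
services:
  ngrok:
    image: ngrok/ngrok:latest   # official multi-arch image, replaces wernight/ngrok
    command: ["http", "atlantis:4141"]
    environment:
      NGROK_AUTHTOKEN: ${NGROK_AUTHTOKEN}
    depends_on:
      - atlantis
```

The official image is published for both amd64 and arm64, which would also resolve the platform-mismatch warning below.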
### Reproduction Steps
On macOS Sequoia 15.5 (Apple Silicon), `docker-compose up --detach` fails with the logs below:
### Logs
~/github/bschaatsbergen/atlantis> docker-compose up --detach
[+] Running 1/1
✔ ngrok Pulled 1.1s
[+] Running 5/5
✔ Network atlantis_default Created 0.0s
✔ Container atlantis-redis-1 Started 0.3s
✔ Container atlantis-atlantis-1 Started 0.4s
✔ Container atlantis-ngrok-1 Started 0.4s
! ngrok The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
And when running `docker-compose logs --follow` you can see the issue:
atlantis-1 | No files found in /docker-entrypoint.d/, skipping
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.412Z","caller":"server/server.go:342","msg":"Supported VCS Hosts: Github","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.710Z","caller":"server/server.go:503","msg":"Utilizing BoltDB","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.722Z","caller":"policy/conftest_client.go:168","msg":"failed to get default conftest version. Will attempt request scoped lazy loads DEFAULT_CONFTEST_VERSION not set","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.727Z","caller":"server/server.go:1120","msg":"Atlantis started - listening on port 4141","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.728Z","caller":"scheduled/executor_service.go:51","msg":"Scheduled Executor Service started","json":{}}
ngrok-1 | http - start an HTTP tunnel
ngrok-1 |
ngrok-1 | USAGE:
ngrok-1 | ngrok http [address:port | port] [flags]
ngrok-1 |
ngrok-1 | AUTHOR:
ngrok-1 | ngrok - <support@ngrok.com>
ngrok-1 |
ngrok-1 | COMMANDS:
ngrok-1 | config update or migrate ngrok's configuration file
ngrok-1 | http start an HTTP tunnel
ngrok-1 | tcp start a TCP tunnel
ngrok-1 | tunnel start a tunnel for use with a tunnel-group backen
ngrok-1 |
ngrok-1 | EXAMPLES:
ngrok-1 | ngrok http 80 # secure public URL for port 80 web server
ngrok-1 | ngrok http --domain baz.ngrok.dev 8080 # port 8080 available at baz.ngrok.dev
ngrok-1 | ngrok tcp 22 # tunnel arbitrary TCP traffic to port 22
ngrok-1 | ngrok http 80 --oauth=google --oauth-allow-email=foo@foo.com # secure your app with oauth
ngrok-1 |
ngrok-1 | Paid Features:
ngrok-1 | ngrok http 80 --domain mydomain.com # run ngrok with your own custom domain
ngrok-1 | ngrok http 80 --allow-cidr 1234:8c00::b12c:88ee:fe69:1234/32 # run ngrok with IP policy restrictions
ngrok-1 | Upgrade your account at https://dashboard.ngrok.com/billing/subscription to access paid features
ngrok-1 |
ngrok-1 | Upgrade your account at https://dashboard.ngrok.com/billing/subscription to access paid features
ngrok-1 |
ngrok-1 | Flags:
ngrok-1 | -h, --help help for ngrok
ngrok-1 |
ngrok-1 | Use "ngrok [command] --help" for more information about a command.
ngrok-1 |
ngrok-1 | ERROR: authentication failed: Your ngrok-agent version "3.6.0" is too old. The minimum supported agent version for your account is "3.7.0". Please update to a newer version with `ngrok update`, by downloading from https://ngrok.com/download, or by updating your SDK version. Paid accounts are currently excluded from minimum agent version requirements. To begin handling traffic immediately without updating your agent, upgrade to a paid plan: https://dashboard.ngrok.com/billing/subscription.
ngrok-1 | ERROR:
ngrok-1 | ERROR: ERR_NGROK_121
ngrok-1 | ERROR:
runatlantis/atlantisGitHub
07/31/2025, 10:34 PMatlantis.yaml
files that worked in Atlantis 0.34.0 now fail with duplicate key errors in Atlantis 0.35.0. This is a breaking change that affects users who utilize YAML anchors and aliases to reduce duplication in their Atlantis configurations.
The root cause is the migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml in version 0.35.0, which introduced stricter YAML parsing that now detects duplicate keys that were previously allowed.
### Reproduction Steps
1. Create an atlantis.yaml file using YAML anchors and aliases that results in duplicate keys after anchor resolution
2. Use this configuration with Atlantis 0.34.0 - it works correctly
3. Upgrade to Atlantis 0.35.0 and run the same configuration - it fails with duplicate key errors
### Example Configuration
version: 3
automerge: true
parallel_plan: true
parallel_apply: true
abort_on_execution_order_fail: true
projects:
  - &project_template
    name: template
    branch: /^master$/
    dir: template
    repo_locks:
      mode: on_apply # on_plan, on_apply, disabled
    custom_policy_check: false
    autoplan:
      when_modified:
        - "*.tf"
        - "../modules/**/*.tf"
        - ".terraform.lock.hcl"
      enabled: true
    plan_requirements:
      - undiverged
    apply_requirements:
      - mergeable
      - approved
      - undiverged
    import_requirements:
      - mergeable
      - approved
      - undiverged
  - <<: *project_template
    name: project1
    dir: terraform/aws/project1/
    workflow: terraform
    # snip...
### Logs
parsing atlantis.yaml: [37:5] duplicate key "name"
   34 |       - undiverged
   35 |
   36 |   - <<: *project_template
 > 37 |     name: project1
            ^
   38 |     dir: terraform/aws/project1/
   39 |     workflow: terraform
   40 |
   41 |
### Environment details
• Atlantis version: 0.35.0 (issue present), 0.34.0 (working)
• Latest version test: issue is present in the latest version (0.35.0)
• Deployment method: N/A (affects all deployment methods)
• Atlantis flags: N/A (affects YAML parsing regardless of flags)
• Atlantis server-side config file: N/A (issue is with repo-level atlantis.yaml)
• Repo atlantis.yaml file: see example above; any configuration using YAML anchors that results in duplicate keys after anchor resolution
• Additional environment info: this is a parsing issue that affects all environments
### Additional Context
• Breaking change introduced in PR #5579: migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml v1.17.1
• Specific commit: 8639729 - "Replace gopkg.in/yaml.v3 with github.com/goccy/go-yaml"
• Parser change: the new library uses yaml.Strict() mode, which enables stricter validation
• Impact: Users who have been using YAML anchors successfully in their atlantis.yaml files will experience breaking changes when upgrading to 0.35.0
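The difference can be reduced to a minimal document, independent of Atlantis's schema. This is a sketch of the pattern the stricter parser rejects: an alias merge (`<<`) plus an explicit override of a key the anchor already defines. Per YAML merge-key semantics the explicit key legitimately wins, and yaml.v3 resolves it that way; goccy/go-yaml in strict mode instead reports the override as a duplicate key.

```yaml
# Minimal reproduction (hypothetical standalone file, not an atlantis.yaml)
base: &base
  name: template
  dir: template
derived:
  <<: *base
  name: project1   # explicit override: legal per YAML merge-key rules,
                   # but flagged as duplicate key "name" by strict parsing
```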
runatlantis/atlantis
GitHub, 08/01/2025, 5:44 PM
environments/<env-name>/terraform.tfvars. The Atlantis convention is envs/<env-name>.tfvars. Very similar.
The effort was implementing the repo-level config across 60+ root modules. This was before you could enable autoplanning with a repo-level config of any sort, so the entire thing had to be generated on the fly on every run. That process is error prone: if generation fails, Atlantis simply won't plan anything. It's hard to say this feature would have eliminated that work (I also built a dependency graph and implemented module-level Atlantis config files for more granular control), but I believe that for many use cases it could completely eliminate an entire category of custom configuration that is required today.
And the thing is, it's already there. It's tested. The comment makes it sound like it's going to stay there. So I'm not seeing why it's not in the docs somewhere. Not everyone is going to read all the code just to use Atlantis.
### Describe the solution you'd like
Document this feature.
### Describe the drawbacks of your solution
People will know about it and therefore might ask you questions or complain about it.
### Describe alternatives you've considered
The alternative would be not telling anyone that this exists, and yes, I've considered it. I suppose it's not good enough because I tend to want to help people even when it has no benefit to me.
I think this feature is super useful, and the benefit for some use cases, along with the reduced need for a completely custom repo-level config, will far outweigh the costs of documenting this relatively benign and simple feature and dealing with a few people who are too confused to use it properly.
runatlantis/atlantis
GitHub, 08/02/2025, 9:11 PM
GitHub, 08/05/2025, 9:18 AM
Re-opening a closed PR allows atlantis apply to be run while in an invalid state. While the command does then fail, this may cause confusion, as the error message simply states stat /home/atlantis/.data/repos/{org}/{repo}/{pr-num}: no such file or directory.
I would expect atlantis to re-run the plan check when a PR is re-opened, treating it as though it was freshly opened: acquiring all necessary locks and planning the changes.
### Reproduction Steps
1. Open a PR that modifies some atlantis-managed infrastructure
2. Wait for the plan
check to pass, and approve the PR
3. Close the PR
• Note: atlantis will delete the lock & plan associated to the PR
4. Re-open the PR
• Note: the plan check will not be invalidated, even though atlantis deleted the plan file. It is also not re-run
5. Comment atlantis apply
• Failure! The apply will fail with the message stat /home/atlantis/.data/repos/{org}/{repo}/{pr-num}: no such file or directory, as the resources were deleted when the PR was closed
### Logs
{
  "level": "error",
  "ts": "2025-08-04T13:28:31.652Z",
  "caller": "events/instrumented_project_command_builder.go:75",
  "msg": "Error building apply commands: stat /home/atlantis/.data/repos/transferwise/REDACTED/1688: no such file or directory",
  "json": {},
  "stacktrace": "github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:75\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildApplyCommands\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:17\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:116\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:383"
}
{
  "level": "error",
  "ts": "2025-08-04T13:28:32.103Z",
  "caller": "events/pull_updater.go:18",
  "msg": "stat /home/atlantis/.data/repos/transferwise/REDACTED/1688: no such file or directory",
  "json": {
    "repo": "transferwise/REDACTED",
    "pull": "1688"
  },
  "stacktrace": "github.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull\n\tgithub.com/runatlantis/atlantis/server/events/pull_updater.go:18\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:122\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:383"
}
### Environment details
• Atlantis version: v0.32.0
• Deployment method: AWS ECS
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: no, but the current handler implementation suggests this is still an issue
### Additional Context
• https://www.runatlantis.io/docs/autoplanning does not exclude re-opening a PR
• https://github.com/runatlantis/atlantis/blob/main/server/events/event_parser.go#L554-L567 does not handle the `reopened` event type
• I suspect this issue can be resolved by handling this event type the same as the opened
or ready_for_review
events, as atlantis deletes its state on close
runatlantis/atlantis
GitHub, 08/05/2025, 2:26 PM
• Startup latency – Large repositories with many historical plans could slow container boot time while the index is rebuilt.
• Metadata drift – If a plan file exists but its corresponding PR or commit has been deleted, the UI might surface “orphaned” entries. Additional validation logic would be required.
• Concurrency complexity – Multiple containers running the discovery simultaneously may race to write identical metadata into Redis or memory. Coordination (e.g., Redis transactions or leader election) will be needed.
• Maintenance overhead – Future changes to plan file formats or storage paths would need matching migration logic in the re-hydration code.
Describe alternatives you've considered
1. Separate “Archived Plans” tab
• Keep the current Jobs list ephemeral, but add a new tab that lists plans discovered on disk.
• Drawback: Two nearly identical views can confuse users; reviewers may not know where to look first.
2. Persist job metadata in Redis (or DynamoDB) instead of on-disk scanning
• Write a small record to Redis each time a plan completes; at startup, rebuild the UI from Redis keys.
• Drawback: Introduces a second persistence strategy (plans on EFS, metadata in Redis); if the cache is flushed, the index is lost while files remain.
3. Force containers to run in “sticky” mode (no rolling restarts)
• Disable automatic task replacement so that UI state is never lost.
• Drawback: Removes the main benefit of ECS—automatic updates and rescheduling—so is not viable operationally.
Given these trade-offs, re-hydrating the Jobs list directly from EFS strikes the best balance between user experience and architectural simplicity, staying aligned with how plan artifacts are already stored today.
runatlantis/atlantis
GitHub, 08/06/2025, 3:58 PM
running git clone --depth=1 --branch test_v035 --single-branch https://atlantis%40acme.net:<redacted>@bitbucket.org/acme/atlantis-demo.git /home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default: Cloning into '/home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default'...
remote: You may not have access to this repository or it no longer exists in this workspace. If you think this repository exists and you have access, make sure you are authenticated.
fatal: Authentication failed for 'https://bitbucket.org/acme/atlantis-demo.git/'
: exit status 128
# Conversely, with username:
git clone https://atlantis-devops:API_TOKEN@bitbucket.org/acme/atlantis-demo.git
Cloning into 'atlantis-demo'...
remote: Enumerating objects: 171, done.
...
Resolving deltas: 100% (75/75), done.
# CURL API call with username:
curl -u "atlantis-devops:API_TOKEN" -H "Content-Type: application/json" -X POST -d '{"content": {"raw": "Test comment"}}' "https://api.bitbucket.org/2.0/repositories/org/repo/pullrequests/28/comments"
# Response:
{"error": {"message": "Unauthorized"}}
### Environment details
### Additional Context
Reference docs:
https://support.atlassian.com/bitbucket-cloud/docs/using-api-tokens/
https://support.atlassian.com/bitbucket-cloud/docs/using-app-passwords/
runatlantis/atlantis
GitHub, 08/06/2025, 11:16 PM
statuses, resp, err := g.Client.Commits.GetCommitStatuses(mr.ProjectID, commit, nil)
if resp != nil {
	logger.Debug("GET /projects/%d/commits/%s/statuses returned: %d", mr.ProjectID, commit, resp.StatusCode)
}
if err != nil {
	return false, err
}
for _, status := range statuses {
	// Ignore any commit statuses with 'atlantis/apply' as prefix
	if strings.HasPrefix(status.Name, fmt.Sprintf("%s/%s", vcsstatusname, command.Apply.String())) {
		continue
	}
	if !status.AllowFailure && project.OnlyAllowMergeIfPipelineSucceeds && status.Status != "success" {
		return false, nil
	}
}
In this GitLab client code, which checks the mergeability of a review, an API call fetches pipeline statuses for the commits in the review, and the code specifically checks the statuses of the latest commit. If a commit is used in more than one review, it may carry statuses from several reviews, and this code does not filter out statuses from prior reviews. I believe this creates scenarios where failures on a commit in a prior review are 'brought forward' and block a newer review.
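One possible fix is to only consider statuses whose ref matches the MR's source branch, so failures a shared commit collected in an earlier review cannot block a newer one. The sketch below illustrates that filter; the struct and function names are mine, not Atlantis's, and the real status type lives in the go-gitlab client library.

```go
package main

import "fmt"

// commitStatus mirrors only the fields this check needs from the GitLab
// commit statuses API response (illustrative, not the real client type).
type commitStatus struct {
	Name         string
	Ref          string
	Status       string
	AllowFailure bool
}

// blocksMerge sketches the proposed filter: skip statuses whose Ref does not
// match this MR's source branch, then apply the existing success check.
func blocksMerge(statuses []commitStatus, sourceBranch string, onlyAllowMergeIfPipelineSucceeds bool) bool {
	for _, s := range statuses {
		if s.Ref != sourceBranch {
			continue // status belongs to a different review of the same commit
		}
		if !s.AllowFailure && onlyAllowMergeIfPipelineSucceeds && s.Status != "success" {
			return true // a required pipeline on this branch has not succeeded
		}
	}
	return false
}

func main() {
	statuses := []commitStatus{
		{Name: "atlantis/plan", Ref: "old-review", Status: "failed"},
		{Name: "atlantis/plan", Ref: "new-review", Status: "success"},
	}
	// The failure from the prior review is ignored.
	fmt.Println(blocksMerge(statuses, "new-review", true))
}
```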
### Reproduction Steps
1. Create a Gitlab review on a terraform project that is configured to work with Atlantis, with a change that will succeed to plan but fail to apply.
2. Attempt to apply the plan, see it fail.
3. Open a new review with the same commit.
4. Observe that no plan pipeline runs, and attempting to apply fails immediately (even if all signals show the review is approved and mergeable)
### Logs
Project name has been replaced with $MY_PROJECT to avoid leaking information about my company's repos.
{"level":"debug","ts":"2025-08-06T18:34:16.197Z","caller":"vcs/gitlab_client.go:282","msg":"GET /projects/$MY_PROJECT/merge_requests/755/approvals returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:16.197Z","caller":"vcs/gitlab_client.go:307","msg":"Checking if GitLab merge request 755 is mergeable","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:16.950Z","caller":"vcs/gitlab_client.go:310","msg":"GET /projects/$MY_PROJECT/merge_requests/755 returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:17.081Z","caller":"vcs/gitlab_client.go:328","msg":"GET /projects/5409 returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:17.210Z","caller":"vcs/gitlab_client.go:337","msg":"GET /projects/5409/commits/53d0402a25653f55336219ad2dde8dbed4600c0f/statuses returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
...
{"level":"error","ts":"2025-08-06T18:34:19.531Z","caller":"events/instrumented_project_command_runner.go:84","msg":"Failure running apply operation: Pull request must be mergeable before running apply.","json":{"repo":"$MY_PROJECT","pull":"755"},"stacktrace":"github.com/runatlantis/atlantis/server/events.RunAndEmitStats\n\t/atlantis/server/events/instrumented_project_command_runner.go:84\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandRunner).Apply\n\t/atlantis/server/events/instrumented_project_command_runner.go:46\ngithub.com/runatlantis/atlantis/server/events.runProjectCmds\n\t/atlantis/server/events/project_command_pool_executor.go:48\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\t/atlantis/server/events/apply_command_runner.go:163\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\t/atlantis/server/events/command_runner.go:401"}
### Environment details
• Atlantis version: 0.33.0
• Deployment method: kubectl apply 😅
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: we have not, but I validated via diff on Github that the code I have linked to and provided above has not changed between the two versions. That said, I will look into scheduling an update to the latest version, as maybe some other change indirectly addresses this.
• Atlantis flags:
ATLANTIS_LOG_LEVEL="debug"
ATLANTIS_CHECKOUT_DEPTH="25"
ATLANTIS_CHECKOUT_STRATEGY="merge"
ATLANTIS_CONFIG="/config/files/config.yaml"
ATLANTIS_DATA_DIR="/atlantis"
ATLANTIS_DEFAULT_TF_VERSION="0.12.31"
ATLANTIS_ENABLE_POLICY_CHECKS="true"
ATLANTIS_FAIL_ON_PRE_WORKFLOW_HOOK_ERROR="true"
ATLANTIS_PORT="4141"
• Atlantis server-side config file: I can't provide this, it has a ton of stuff that I'm not allowed to put in a public github issue.
• Repo atlantis.yaml file: Same as above
• Any other information you can provide about the environment/deployment (efs/nfs, aws/gcp, k8s/fargate, etc): Nothing much... it runs in a k8s cluster on a StatefulSet.
### Additional Context
• Our Gitlab instance is self-hosted, on version 17.11
• We have multiple terraform projects, but this behavior seems to happen almost entirely to the one project the logs come from. I'm wondering if there are any configurations that could come into play here.
runatlantis/atlantis
GitHub, 08/07/2025, 6:43 PM
GitHub, 08/11/2025, 11:00 AM
GitHub, 08/13/2025, 4:53 AM
GitHub, 08/13/2025, 4:50 PM
Someone hit the error The default workspace at path foo is currently locked by another command that is running for this pull request. by rapidly pushing back-to-back commits that cause an autoplan. After this, the lock never cleared. I did an atlantis unlock comment and tried to plan again. It still claimed that that one directory was locked. I discarded the plan and locks from the web UI. Same effect. /api/locks shows the lock gone, but trying a plan says it is locked, and then it does show up in /api/locks again. I unlock once more to remove it from the locks, then run strings on the atlantis.db file: it shows that PR with that directory with a status of 5, while other directories in that PR that did plan show status 1.
### Reproduction Steps
Hard to say, because I have had this happen before where someone blocks themselves with rapid commit pushes, but it usually resolves itself.
### Environment details
Running Atlantis 0.35.0 on EKS.
### Additional Context
runatlantis/atlantis