GitHub
07/04/2025, 6:40 AM
When running the `ghcr.io/runatlantis/atlantis:v0.35.0` image on ARM64 ECS Fargate, previously downloaded Terraform binaries (e.g. `terraform1.12.2`) fail silently due to an architecture mismatch.
Instead of showing a clear error, Atlantis tries to execute the x86_64 binary, which fails with:
`syntax error: unterminated quoted string`
This happens because the shell interprets the incompatible binary as a script.
### Reproduction Steps
1. Run Atlantis v0.33.0 (or earlier) on AMD64 (ECS, Fargate).
2. Allow Atlantis to download Terraform versions (e.g., 1.12.2).
3. Upgrade to v0.35.0 and switch to an ARM64 architecture.
4. Keep the `.atlantis/bin/terraform*` binaries in the shared volume.
5. Trigger a plan for a project using an old Terraform version.
6. Atlantis attempts to run the incompatible binary and fails with a shell error.
### Logs
running 'sh -c' '/home/atlantis/.atlantis/bin/terraform1.12.2 init -input=false -upgrade' in '/home/atlantis/.atlantis/repos/...'
/home/atlantis/.atlantis/bin/terraform1.12.2: line 11: syntax error: unterminated quoted string
No mention of binary incompatibility or fallback handling.
### Environment details
• Atlantis version: v0.35.0
• Previously used version: v0.33.0 on AMD64
• Deployment method: ECS Fargate (platform: linux/arm64)
• Terraform version: 1.12.2 (binary pre-downloaded by Atlantis)
• Execution context: Fargate with EFS shared mount at /home/atlantis
• Terraform binaries: preexisting files like `/home/atlantis/.atlantis/bin/terraform1.12.2` from the AMD64 architecture
• Atlantis default TF version env: ATLANTIS_DEFAULT_TF_VERSION=v1.9.0
### Additional Context
• This appears to be a binary execution issue due to architecture mismatch (x86_64 binary executed on ARM64).
• Atlantis does not validate the downloaded binary architecture or re-download when switching platforms.
• Workaround: delete `/home/atlantis/.atlantis/bin/terraform*` after the architecture switch to force fresh (ARM64) downloads.
• Suggest Atlantis:
  • Detect binary architecture mismatch before execution
  • Log architecture info on `terraform init` failures
  • Offer a flag or auto-clean option on architecture switch
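The workaround above could also be automated in a container entrypoint. The following is a rough sketch (not an Atlantis feature; paths and the `file`-based detection are assumptions) that checks a cached Terraform binary against the host architecture and purges mismatched ones so they get re-downloaded:

```shell
# Sketch: pre-flight check that a cached Terraform binary matches the host
# architecture, deleting stale binaries so Atlantis re-downloads correct ones.
check_tf_arch() {
  bin="$1"
  host_arch="$(uname -m)"                            # e.g. x86_64 or aarch64
  [ "$host_arch" = "arm64" ] && host_arch="aarch64"  # normalize macOS naming
  bin_info="$(file -bL "$bin")"  # e.g. "ELF 64-bit LSB executable, x86-64, ..."
  case "$bin_info" in
    *x86-64*|*x86_64*) bin_arch="x86_64"  ;;
    *aarch64*|*arm64*) bin_arch="aarch64" ;;
    *)                 bin_arch="unknown" ;;
  esac
  if [ "$bin_arch" != "$host_arch" ]; then
    echo "arch mismatch: $bin ($bin_arch) vs host ($host_arch)" >&2
    return 1
  fi
}

# Usage: purge any cached binary that does not match the host architecture.
for tf in /home/atlantis/.atlantis/bin/terraform*; do
  [ -e "$tf" ] || continue
  check_tf_arch "$tf" || rm -f -- "$tf"
done
```

A check like this run before `atlantis server` starts would turn the cryptic `unterminated quoted string` failure into an explicit mismatch message.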
runatlantis/atlantis (GitHub)
07/07/2025, 6:34 AM
atlantis = {
environment = [
{
name : "ATLANTIS_REPO_CONFIG_JSON",
value : jsonencode(yamldecode(file("${path.module}/server-atlantis.yaml"))),
}
]
secrets = [
{
name = "ATLANTIS_SLACK_TOKEN"
valueFrom = data.aws_secretsmanager_secret.atlantis_slack_token.arn
}
]
}
server-atlantis.yaml:
repos:
  - id: /.*/
    allow_custom_workflows: true
    allowed_overrides:
      - apply_requirements
      - workflow
    apply_requirements:
      - approved
    workflow: default
webhooks:
  - event: apply
    kind: slack
    channel: XXXXXXXXXXX
  - event: plan
    kind: slack
    channel: XXXXXXXXXXX
### Logs
Nothing related to webhooks or Slack in the logs
### Environment details
ATLANTIS_SLACK_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
ATLANTIS_REPO_CONFIG_JSON={"repos":[{"allow_custom_workflows":true,"allowed_overrides":["apply_requirements","workflow"],"apply_requirements":["approved"],"id":"/.*/","workflow":"default"}],"webhooks":[{"channel":"XXXXXXXXXXX","event":"apply","kind":"slack"},{"channel":"XXXXXXXXXXX","event":"plan","kind":"slack"}]}
### Additional Context
runatlantis/atlantis (GitHub)
07/07/2025, 2:43 PM
07/10/2025, 4:57 PM
We delete everything except `*.tfplan` from the `atlantis-data` dir before uploading the data back to the S3 bucket. So each time GHA starts an Atlantis workflow it has to sync only a few `tfplan` files, which makes the pre-run and post-run S3 sync almost instant.
The problem is that when we run `atlantis plan` multiple times it works great: Atlantis clones the PR files from git and plans changes if needed. But when we run `atlantis apply`, it doesn't clone anything; it just fails with the following error message:
`Error building apply commands: running git ls-files . --others: fatal: not a git repository (or any of the parent directories): .git\n: exit status 128`
The question is: why doesn't `atlantis apply` clone the repo if it's missing, the way `atlantis plan` does?
### Reproduction Steps
This is what our reusable GHA workflow looks like:
# Sync Atlantis data pre-run
- name: Pre-run sync from S3
  run: |
    data_path="repos/${{ github.repository }}/${{ steps.get_issue_number.outputs.result }}"
    mkdir -p /atlantis-data/$data_path
    aws s3 cp --recursive \
      s3://atlantis-s3-${{ steps.aws-resource.outputs.name }}/_atlantis-data/$data_path/ \
      /atlantis-data/$data_path/
    # Copying files to S3 does not keep their unix permissions.
    chmod -R 755 /atlantis-data

# Send POST request to Atlantis service
- name: Run Atlantis
  # ...

# Sync Atlantis data post-run
- name: Post-run sync to S3
  run: |
    data_path="repos/${{ github.repository }}/${{ steps.get_issue_number.outputs.result }}"
    if [ "${{ inputs.enable-s3-sync-lite }}" = "true" ]; then
      # Delete all files that are NOT *.tfplan
      find /atlantis-data -type f ! -name '*.tfplan' -exec rm -f {} +
      # Delete all now-empty directories
      find /atlantis-data -type d -empty -delete
      # Ensure the data path exists (it might not if no plans were created)
      mkdir -p /atlantis-data/$data_path
    fi
    # --delete ensures anything Atlantis deletes locally is also removed in S3
    aws s3 sync \
      /atlantis-data/$data_path \
      s3://atlantis-s3-${{ steps.aws-resource.outputs.name }}/_atlantis-data/$data_path \
      --delete --include "*"
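One possible mitigation (an assumption, not a verified fix) is to make the "sync lite" cleanup preserve the cloned repo's `.git` metadata alongside the plan files, so `atlantis apply` at least finds a valid git checkout. Note this only addresses the `not a git repository` error and may not be sufficient on its own:

```shell
# Sketch of a "sync lite" cleanup that keeps *.tfplan AND the cloned repo's
# .git metadata, so `atlantis apply` still sees a valid git working dir.
prune_atlantis_data() {
  data_dir="$1"
  # delete everything except plan files and git metadata
  find "$data_dir" -type f ! -name '*.tfplan' ! -path '*/.git/*' -exec rm -f {} +
  # drop now-empty directories
  find "$data_dir" -type d -empty -delete
}
```

This would replace the two `find` commands in the post-run step, e.g. `prune_atlantis_data /atlantis-data`.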
### Logs
## atlantis plan
It clones the repo after this warning message.
```
# not the first plan - only tfplan files in the s3 bucket
{"level":"warn","ts":"2025-07-10T15:03:13.589Z","caller":"events/working_dir.go:123",
"msg":"will re-clone repo, could not determine if was at correct commit: git rev-parse HEAD: exit status 128: fatal: not a git repository (or any of the parent directories): .git\n",
"json":{"repo":".../test-newrelic-tf","pull":"268"},
"stacktrace":"
# https://github.com/runatlantis/atlantis/blob/315e25b135dbb19aa0473e867b703dbf9fbba592/server/events/working_dir.go#L125
github.com/runatlantis/atlantis/server/events.(*FileWorkspace).Clone
	github.com/runatlantis/atlantis/server/events/working_dir.go:123
github.com/runatlantis/atlantis/server/events.(*GithubAppWorkingDir).Clone
	github.com/runatlantis/atlantis/server/events/github_app_working_dir.go:39
# https://github.com/runatlantis/atlantis/blob/315e25b135dbb19aa0473e867b703dbf9fbba592/server/events/project_command_builder.go#L482
github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).buildAllCommandsByCfg
	github.com/runatlantis/atlantis/server/events/project_command_builder.go:344
github.com/runatlantis/atlantis/server/events.(*DefaultProjectCommandBuilder).
	github.com/runatlantis/atlantis/server/events/project_command_builder.go:244
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands.func1
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:38
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:71
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildPlanCommands
	github.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:35
github.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).run
	github.com/runatlantis/atlantis/server/events/plan_command_runner.go:193
github.com/runatlantis/atlantis/server/events.(*PlanCommandRunner).Run
	github.com/runatlantis/atlantis/server/events/plan_command_runner.go:290
github.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand
	github.com/runatlantis/atlantis/server/events/command_runner.go:301
"}
```
## atlantis apply
```
{"level":"error","ts":"2025-07-10T12:16:53.808Z","caller":"events/instrumented_project_command_builder.go:75",
"msg":"Error building apply commands: running git ls-files . --others: fatal: not a git repository (or any of the parent directories): .git\n: exit status 128",
"json":{},
"stacktrace":"
github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats
	github.com/runatlan…
```
runatlantis/atlantis (GitHub)
07/11/2025, 12:12 PM
07/11/2025, 7:33 PM
07/15/2025, 7:55 PM
Atlantis ignores the `disable-markdown-folding` configuration setting when the plan spans multiple comments. This appears to be because the `GithubClient` `CreateComment` function generates the `<summary><details>` tags itself without checking the disable-markdown configuration: https://github.com/runatlantis/atlantis/blob/main/server/events/vcs/github_client.go#L229-L259
### Reproduction Steps
Configure Atlantis with `ATLANTIS_DISABLE_MARKDOWN_FOLDING=true`.
Set up Atlantis to post Terraform plans in a PR as comments.
Make a change that will result in a large plan diff.
The initial comment correctly shows its section of the plan, but the follow-up comment has the "Show Output" collapsible area, which it shouldn't.
runatlantis/atlantis (GitHub)
07/15/2025, 8:07 PM
With terragrunt `mock_outputs`, the initial plan is created using `mock_outputs`, and `atlantis apply` then applies that stale plan.
### Reproduction Steps
• `mod_B` depends on `mod_A`; `mod_A` should produce an output variable `id`
• `mod_B` needs the `mod_A.outputs.id` variable, but it is not known until `mod_A` is applied, so we typically use `mock_outputs` so the plan does not fail
• on `atlantis plan`, an invalid plan for `mod_B` is generated with `mock_outputs`
• on `atlantis apply`, `mod_A` is applied successfully (no dependencies) but `mod_B` is applied from the old plan with `mock_outputs`
• if we then run `atlantis plan` again, a valid plan for `mod_B` is created
• if we then run `atlantis apply`, `mod_B` is successfully applied with the valid plan
Notes:
• If we run `terragrunt run-all apply` locally, the resources are applied in the correct order and inputs are provided after they are known.
• The problem becomes worse if there are more dependency levels (e.g. `mod_C` <- `mod_B` <- `mod_A`)
### Logs
### Environment details
• Atlantis version: v0.19.2
• Atlantis flags: atlantis server
Atlantis server-side config file:
# config file
repos:
  - id: /github.com/my-org/.*/
    workflow: terragrunt
    apply_requirements: [approved, mergeable]
    allowed_overrides: [workflow]
    allowed_workflows: [terragrunt]
    pre_workflow_hooks:
      - run: >
          terragrunt-atlantis-config generate --output atlantis.yaml --autoplan
          --workflow terragrunt --create-workspace --parallel
workflows:
  terragrunt:
    plan:
      steps:
        - env:
            name: TERRAGRUNT_TFPATH
            command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
        - env:
            name: TF_CLI_ARGS
            value: '-no-color'
        - run: terragrunt run-all plan --terragrunt-non-interactive --terragrunt-log-level=warn -out "$PLANFILE"
    apply:
      steps:
        - env:
            name: TERRAGRUNT_TFPATH
            command: 'echo "terraform${ATLANTIS_TERRAFORM_VERSION}"'
        - env:
            name: TF_CLI_ARGS
            value: '-no-color'
        - run: terragrunt run-all apply --terragrunt-non-interactive --terragrunt-log-level=warn "$PLANFILE"
Repo `atlantis.yaml` file: generated by `terragrunt-atlantis-config` in `pre_workflow_hooks`
### Additional Context
runatlantis/atlantis (GitHub)
07/27/2025, 9:03 PM
The docker-compose setup pulls the ngrok image from `wernight`, which hasn't been updated in a while and doesn't have proper arm/v8 support. A fix would be to simply replace the `wernight/ngrok` image with `ngrok/ngrok`.
### Reproduction Steps
On macOS Sequoia 15.5 (Apple Silicon), `docker-compose up --detach` fails with the logs below:
### Logs
~/github/bschaatsbergen/atlantis> docker-compose up --detach
[+] Running 1/1
✔ ngrok Pulled 1.1s
[+] Running 5/5
✔ Network atlantis_default Created 0.0s
✔ Container atlantis-redis-1 Started 0.3s
✔ Container atlantis-atlantis-1 Started 0.4s
✔ Container atlantis-ngrok-1 Started 0.4s
! ngrok The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
And when running docker-compose logs --follow
you can see the issue:
atlantis-1 | No files found in /docker-entrypoint.d/, skipping
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.412Z","caller":"server/server.go:342","msg":"Supported VCS Hosts: Github","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.710Z","caller":"server/server.go:503","msg":"Utilizing BoltDB","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.722Z","caller":"policy/conftest_client.go:168","msg":"failed to get default conftest version. Will attempt request scoped lazy loads DEFAULT_CONFTEST_VERSION not set","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.727Z","caller":"server/server.go:1120","msg":"Atlantis started - listening on port 4141","json":{}}
atlantis-1 | {"level":"info","ts":"2025-07-27T20:58:02.728Z","caller":"scheduled/executor_service.go:51","msg":"Scheduled Executor Service started","json":{}}
ngrok-1 | http - start an HTTP tunnel
ngrok-1 |
ngrok-1 | USAGE:
ngrok-1 | ngrok http [address:port | port] [flags]
ngrok-1 |
ngrok-1 | AUTHOR:
ngrok-1 | ngrok - <support@ngrok.com>
ngrok-1 |
ngrok-1 | COMMANDS:
ngrok-1 | config update or migrate ngrok's configuration file
ngrok-1 | http start an HTTP tunnel
ngrok-1 | tcp start a TCP tunnel
ngrok-1 | tunnel start a tunnel for use with a tunnel-group backen
ngrok-1 |
ngrok-1 | EXAMPLES:
ngrok-1 | ngrok http 80 # secure public URL for port 80 web server
ngrok-1 | ngrok http --domain baz.ngrok.dev 8080 # port 8080 available at baz.ngrok.dev
ngrok-1 | ngrok tcp 22 # tunnel arbitrary TCP traffic to port 22
ngrok-1 | ngrok http 80 --oauth=google --oauth-allow-email=foo@foo.com # secure your app with oauth
ngrok-1 |
ngrok-1 | Paid Features:
ngrok-1 | ngrok http 80 --domain <http://mydomain.com|mydomain.com> # run ngrok with your own custom domain
ngrok-1 | ngrok http 80 --allow-cidr 1234:8c00::b12c:88ee:fe69:1234/32 # run ngrok with IP policy restrictions
ngrok-1 | Upgrade your account at <https://dashboard.ngrok.com/billing/subscription> to access paid features
ngrok-1 |
ngrok-1 | Upgrade your account at <https://dashboard.ngrok.com/billing/subscription> to access paid features
ngrok-1 |
ngrok-1 | Flags:
ngrok-1 | -h, --help help for ngrok
ngrok-1 |
ngrok-1 | Use "ngrok [command] --help" for more information about a command.
ngrok-1 |
ngrok-1 | ERROR: authentication failed: Your ngrok-agent version "3.6.0" is too old. The minimum supported agent version for your account is "3.7.0". Please update to a newer version with `ngrok update`, by downloading from <https://ngrok.com/download>, or by updating your SDK version. Paid accounts are currently excluded from minimum agent version requirements. To begin handling traffic immediately without updating your agent, upgrade to a paid plan: <https://dashboard.ngrok.com/billing/subscription>.
ngrok-1 | ERROR:
ngrok-1 | ERROR: ERR_NGROK_121
ngrok-1 | ERROR:
q^C
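The proposed fix could look like the fragment below in `docker-compose.yml`. The service name, port mapping, and wiring to the Atlantis container are assumptions based on a typical Atlantis dev setup; `NGROK_AUTHTOKEN` is the env var the official image reads:

```yaml
# Sketch: replace the unmaintained wernight/ngrok image with the official one.
ngrok:
  image: ngrok/ngrok:latest
  command: ["http", "atlantis:4141"]   # tunnel to the Atlantis service port
  environment:
    NGROK_AUTHTOKEN: ${NGROK_AUTHTOKEN}
  depends_on:
    - atlantis
```

The official `ngrok/ngrok` image publishes multi-arch manifests, so the `linux/amd64` vs `linux/arm64/v8` warning goes away, and a current agent version avoids the `ERR_NGROK_121` minimum-version error.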
runatlantis/atlantis (GitHub)
07/31/2025, 10:34 PM
`atlantis.yaml` files that worked in Atlantis 0.34.0 now fail with duplicate key errors in Atlantis 0.35.0. This is a breaking change for users who rely on YAML anchors and aliases to reduce duplication in their Atlantis configurations.
The root cause is the migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml in version 0.35.0, which introduced stricter YAML parsing that now rejects duplicate keys that were previously allowed.
### Reproduction Steps
1. Create an `atlantis.yaml` file using YAML anchors and aliases that results in duplicate keys after anchor resolution
2. Use this configuration with Atlantis 0.34.0: it works correctly
3. Upgrade to Atlantis 0.35.0 and run the same configuration: it fails with duplicate key errors
### Example Configuration
version: 3
automerge: true
parallel_plan: true
parallel_apply: true
abort_on_execution_order_fail: true
projects:
  - &project_template
    name: template
    branch: /^master$/
    dir: template
    repo_locks:
      mode: on_apply # on_plan, on_apply, disabled
    custom_policy_check: false
    autoplan:
      when_modified:
        - "*.tf"
        - "../modules/**/*.tf"
        - ".terraform.lock.hcl"
      enabled: true
    plan_requirements:
      - undiverged
    apply_requirements:
      - mergeable
      - approved
      - undiverged
    import_requirements:
      - mergeable
      - approved
      - undiverged
  - <<: *project_template
    name: project1
    dir: terraform/aws/project1/
    workflow: terraform
# snip...
### Logs
parsing atlantis.yaml: [37:5] duplicate key "name"
34 | - undiverged
35 |
36 | - <<: *project_template
> 37 | name: project1
^
38 | dir: terraform/aws/project1/
39 | workflow: terraform
40 |
41 |
### Environment details
Atlantis version: 0.35.0 (issue present), 0.34.0 (working)
Latest version test: Issue is present in the latest version (0.35.0)
Deployment method: N/A (affects all deployment methods)
Atlantis flags: N/A (affects YAML parsing regardless of flags)
Atlantis server-side config file: N/A (issue is with repo-level atlantis.yaml)
Repo `atlantis.yaml` file: see the example above; any configuration using YAML anchors that results in duplicate keys after anchor resolution
Additional environment info: This is a parsing issue that affects all environments
### Additional Context
• Breaking change introduced in PR #5579: migration from gopkg.in/yaml.v3 to github.com/goccy/go-yaml v1.17.1
• Specific commit: 8639729 - "Replace gopkg.in/yaml.v3 with github.com/goccy/go-yaml"
• Parser change: the new library uses `yaml.Strict()` mode, which enables stricter validation
• Impact: Users who have been using YAML anchors successfully in their atlantis.yaml files will experience breaking changes when upgrading to 0.35.0
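One interim workaround (a suggestion, not project guidance) is to expand the merge keys by hand so no key is duplicated after resolution. For instance, the `project1` entry from the example configuration would become:

```yaml
# Anchor-free equivalent of "- <<: *project_template" with overridden keys
projects:
  - name: project1
    branch: /^master$/
    dir: terraform/aws/project1/
    workflow: terraform
    repo_locks:
      mode: on_apply
    custom_policy_check: false
    autoplan:
      when_modified:
        - "*.tf"
        - "../modules/**/*.tf"
        - ".terraform.lock.hcl"
      enabled: true
    plan_requirements:
      - undiverged
    apply_requirements:
      - mergeable
      - approved
      - undiverged
    import_requirements:
      - mergeable
      - approved
      - undiverged
```

Tools such as mikefarah's yq can automate this (`yq 'explode(.)' atlantis.yaml` resolves anchors and merge keys), letting you keep an anchored template file and commit the expanded result.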
runatlantis/atlantis (GitHub)
08/01/2025, 5:44 PM
Our convention is `environments/<env-name>/terraform.tfvars`. The Atlantis convention is `envs/<env-name>.tfvars`. Very similar.
The effort was implementing the repo-level config across 60+ root modules. This was before you could enable autoplanning with a repo-level config of any sort, so the entire thing had to be generated on the fly on every run. This is an error-prone process: if it fails, Atlantis just won't plan anything. Anyway, it's hard to say it would have eliminated that work (I also built a dependency graph and implemented module-level Atlantis config files for more granular control), but I believe for many use cases it could completely eliminate an entire category of custom configuration that is required today.
And the thing is, it's already there. It's tested. The comment makes it sound like it's going to stay there. So I'm not seeing why it's not in the docs somewhere. Not everyone is going to read all the code just to use Atlantis.
Describe the solution you'd like
Document this feature.
Describe the drawbacks of your solution
People will know about it and therefore might ask you questions or complain about it.
Describe alternatives you've considered
The alternative would be not telling anyone that this exists and yes I've considered it. I suppose it's not good enough because I tend to want to help people even when it has no benefit to me.
I think this feature is super useful, and for some use cases the benefit, plus the reduced need for a completely custom repo-level config, will far outweigh the costs of documenting this relatively benign and simple feature and dealing with a few people who are too confused to use it properly.
runatlantis/atlantis (GitHub)
08/02/2025, 9:11 PM
08/05/2025, 9:18 AM
Re-opening a closed PR allows `atlantis apply` to be run while in an invalid state. While the command does then fail, this may cause confusion, as the error message simply states `stat /home/atlantis/.data/repos/{org}/{repo}/{pr-num}: no such file or directory`.
I would expect Atlantis to re-run the `plan` check when a PR is re-opened, treating it as though it were freshly opened: acquiring all necessary locks and planning the changes.
### Reproduction Steps
1. Open a PR that modifies some Atlantis-managed infrastructure
2. Wait for the `plan` check to pass, and approve the PR
3. Close the PR
   • Note: Atlantis will delete the lock & plan associated with the PR
4. Re-open the PR
   • Note: the `plan` check will not be invalidated, even though Atlantis deleted the plan file; it is also not re-run
5. Comment `atlantis apply`
   • Failure! The `apply` will fail with the message `stat /home/atlantis/.data/repos/{org}/{repo}/{pr-num}: no such file or directory`, as the resources were deleted when the PR was closed
### Logs
{
"level": "error",
"ts": "2025-08-04T13:28:31.652Z",
"caller": "events/instrumented_project_command_builder.go:75",
"msg": "Error building apply commands: stat /home/atlantis/.data/repos/transferwise/REDACTED/1688: no such file or directory",
"json": {},
"stacktrace": "github.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).buildAndEmitStats\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:75\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandBuilder).BuildApplyCommands\n\tgithub.com/runatlantis/atlantis/server/events/instrumented_project_command_builder.go:17\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:116\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:383"
}
{
"level": "error",
"ts": "2025-08-04T13:28:32.103Z",
"caller": "events/pull_updater.go:18",
"msg": "stat /home/atlantis/.data/repos/transferwise/REDACTED/1688: no such file or directory",
"json": {
"repo": "transferwise/REDACTED",
"pull": "1688"
},
"stacktrace": "github.com/runatlantis/atlantis/server/events.(*PullUpdater).updatePull\n\tgithub.com/runatlantis/atlantis/server/events/pull_updater.go:18\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\tgithub.com/runatlantis/atlantis/server/events/apply_command_runner.go:122\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\tgithub.com/runatlantis/atlantis/server/events/command_runner.go:383"
}
### Environment details
• Atlantis version: v0.32.0
• Deployment method: AWS ECS
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: no, but the current handler implementation suggests this is still an issue
### Additional Context
• https://www.runatlantis.io/docs/autoplanning does not exclude re-opening a PR
• https://github.com/runatlantis/atlantis/blob/main/server/events/event_parser.go#L554-L567 does not handle the `reopened` event type
• I suspect this issue can be resolved by handling this event type the same way as the `opened` or `ready_for_review` events, as Atlantis deletes its state on close
runatlantis/atlantis (GitHub)
08/05/2025, 2:26 PM
• Startup latency – Large repositories with many historical plans could slow container boot time while the index is rebuilt.
• Metadata drift – If a plan file exists but its corresponding PR or commit has been deleted, the UI might surface “orphaned” entries. Additional validation logic would be required.
• Concurrency complexity – Multiple containers running the discovery simultaneously may race to write identical metadata into Redis or memory. Coordination (e.g., Redis transactions or leader election) will be needed.
• Maintenance overhead – Future changes to plan file formats or storage paths would need matching migration logic in the re-hydration code.
Describe alternatives you've considered
1. Separate “Archived Plans” tab
• Keep the current Jobs list ephemeral, but add a new tab that lists plans discovered on disk.
• Drawback: Two nearly identical views can confuse users; reviewers may not know where to look first.
2. Persist job metadata in Redis (or DynamoDB) instead of on-disk scanning
• Write a small record to Redis each time a plan completes; at startup, rebuild the UI from Redis keys.
• Drawback: Introduces a second persistence strategy (plans on EFS, metadata in Redis); if the cache is flushed, the index is lost while files remain.
3. Force containers to run in “sticky” mode (no rolling restarts)
• Disable automatic task replacement so that UI state is never lost.
• Drawback: Removes the main benefit of ECS—automatic updates and rescheduling—so is not viable operationally.
Given these trade-offs, re-hydrating the Jobs list directly from EFS strikes the best balance between user experience and architectural simplicity, staying aligned with how plan artifacts are already stored today.
runatlantis/atlantis (GitHub)
08/06/2025, 3:58 PM
running git clone --depth=1 --branch test_v035 --single-branch https://atlantis%40acme.net:<redacted>@bitbucket.org/acme/atlantis-demo.git /home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default: Cloning into '/home/atlantis/.atlantis/repos/acme/atlantis-demo/39/default'...
remote: You may not have access to this repository or it no longer exists in this workspace. If you think this repository exists and you have access, make sure you are authenticated.
fatal: Authentication failed for 'https://bitbucket.org/acme/atlantis-demo.git/'
: exit status 128
# Conversely, with username:
git clone https://atlantis-devops:API_TOKEN@bitbucket.org/acme/atlantis-demo.git
Cloning into 'atlantis-demo'...
remote: Enumerating objects: 171, done.
...
Resolving deltas: 100% (75/75), done.
# CURL API call with username:
curl -u "atlantis-devops:API_TOKEN" -H "Content-Type: application/json" -X POST -d '{"content": {"raw": "Test comment"}}' "https://api.bitbucket.org/2.0/repositories/org/repo/pullrequests/28/comments"
# Response:
{"error": {"message": "Unauthorized"}}
### Environment details
### Additional Context
Reference docs:
https://support.atlassian.com/bitbucket-cloud/docs/using-api-tokens/
https://support.atlassian.com/bitbucket-cloud/docs/using-app-passwords/
runatlantis/atlantis (GitHub)
08/06/2025, 11:16 PM
statuses, resp, err := g.Client.Commits.GetCommitStatuses(mr.ProjectID, commit, nil)
if resp != nil {
	logger.Debug("GET /projects/%d/commits/%s/statuses returned: %d", mr.ProjectID, commit, resp.StatusCode)
}
if err != nil {
	return false, err
}
for _, status := range statuses {
	// Ignore any commit statuses with 'atlantis/apply' as prefix
	if strings.HasPrefix(status.Name, fmt.Sprintf("%s/%s", vcsstatusname, command.Apply.String())) {
		continue
	}
	if !status.AllowFailure && project.OnlyAllowMergeIfPipelineSucceeds && status.Status != "success" {
		return false, nil
	}
}
In this GitLab client code that checks the mergeability of a review, an API call is made to get pipeline statuses for the commits in the review, and it specifically checks the statuses of the latest commit. If a commit is used in more than one review, it may have statuses across reviews, and this code does not filter out statuses from prior reviews. I believe this creates scenarios where failures on a commit in a prior review are 'brought forward' and block a newer review.
### Reproduction Steps
1. Create a GitLab review on a terraform project that is configured to work with Atlantis, with a change that will succeed to plan but fail to apply.
2. Attempt to apply the plan, see it fail.
3. Open a new review with the same commit.
4. Observe that no plan pipeline runs, and attempting to apply fails immediately (even if all signals show the review is approved and mergeable)
### Logs
Project name has been replaced with `$MY_PROJECT` to avoid leaking information about my company's repos.
{"level":"debug","ts":"2025-08-06T18:34:16.197Z","caller":"vcs/gitlab_client.go:282","msg":"GET /projects/$MY_PROJECT/merge_requests/755/approvals returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:16.197Z","caller":"vcs/gitlab_client.go:307","msg":"Checking if GitLab merge request 755 is mergeable","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:16.950Z","caller":"vcs/gitlab_client.go:310","msg":"GET /projects/$MY_PROJECT/merge_requests/755 returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:17.081Z","caller":"vcs/gitlab_client.go:328","msg":"GET /projects/5409 returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
{"level":"debug","ts":"2025-08-06T18:34:17.210Z","caller":"vcs/gitlab_client.go:337","msg":"GET /projects/5409/commits/53d0402a25653f55336219ad2dde8dbed4600c0f/statuses returned: 200","json":{"repo":"$MY_PROJECT","pull":"755"}}
...
{"level":"error","ts":"2025-08-06T18:34:19.531Z","caller":"events/instrumented_project_command_runner.go:84","msg":"Failure running apply operation: Pull request must be mergeable before running apply.","json":{"repo":"$MY_PROJECT","pull":"755"},"stacktrace":"github.com/runatlantis/atlantis/server/events.RunAndEmitStats\n\t/atlantis/server/events/instrumented_project_command_runner.go:84\ngithub.com/runatlantis/atlantis/server/events.(*InstrumentedProjectCommandRunner).Apply\n\t/atlantis/server/events/instrumented_project_command_runner.go:46\ngithub.com/runatlantis/atlantis/server/events.runProjectCmds\n\t/atlantis/server/events/project_command_pool_executor.go:48\ngithub.com/runatlantis/atlantis/server/events.(*ApplyCommandRunner).Run\n\t/atlantis/server/events/apply_command_runner.go:163\ngithub.com/runatlantis/atlantis/server/events.(*DefaultCommandRunner).RunCommentCommand\n\t/atlantis/server/events/command_runner.go:401"}
### Environment details
• Atlantis version: 0.33.0
• Deployment method: `kubectl apply` 😅
• If not running the latest Atlantis version have you tried to reproduce this issue on the latest version: we have not, but I validated via diff on Github that the code I have linked to and provided above has not changed between the two versions. That said, I will look into scheduling an update to the latest version, as maybe some other change indirectly addresses this.
• Atlantis flags:
ATLANTIS_LOG_LEVEL="debug"
ATLANTIS_CHECKOUT_DEPTH="25"
ATLANTIS_CHECKOUT_STRATEGY="merge"
ATLANTIS_CONFIG="/config/files/config.yaml"
ATLANTIS_DATA_DIR="/atlantis"
ATLANTIS_DEFAULT_TF_VERSION="0.12.31"
ATLANTIS_ENABLE_POLICY_CHECKS="true"
ATLANTIS_FAIL_ON_PRE_WORKFLOW_HOOK_ERROR="true"
ATLANTIS_PORT="4141"
Atlantis server-side config file: I can't provide this, it has a ton of stuff that I'm not allowed to put in a public github issue.
Repo atlantis.yaml file: Same as above
Any other information you can provide about the environment/deployment (efs/nfs, aws/gcp, k8s/fargate, etc):
Nothing much... it runs in a k8s cluster as a StatefulSet.
### Additional Context
• Our Gitlab instance is self-hosted, on version 17.11
• We have multiple terraform projects, but this behavior seems to happen almost entirely to the one project the logs come from. I'm wondering if there are any configurations that could come into play here.
runatlantis/atlantisGitHub
08/07/2025, 6:43 PMGitHub
08/11/2025, 11:00 AMGitHub
08/13/2025, 4:53 AMGitHub
08/13/2025, 4:50 PMThe default workspace at path foo is currently locked by another command that is running for this pull request.
This was triggered by rapidly pushing back-to-back commits that caused an autoplan. After this, the lock never cleared. I did an atlantis unlock comment and tried to plan again. It still claimed that the one directory was locked. I discarded the plan and locks from the web UI. Same effect. /api/locks shows the lock gone, but trying a plan says it is locked, and then it does show up in /api/locks again. I unlocked once more to remove it from the locks, then ran strings on the atlantis.db file; it shows that PR with that directory with a status of 5, while other directories in that PR that did plan show status 1.
### Reproduction Steps
Hard to say, because I have had this happen before where someone blocks themselves with rapid commits pushed, but it resolves itself.
### Environment details
Running Atlantis 0.35.0 on EKS.
### Additional Context
runatlantis/atlantisGitHub
08/18/2025, 2:41 PMatlantis.yaml file, and a schema file available to validate that changes are correct, for example using pre-commit hooks before a file is pushed
Describe the solution you'd like
I would like a JSON schema published of the fields one can set for the atlantis.yaml file
Describe the drawbacks of your solution
Maintaining and updating the schema file would rely on somehow getting all the schemas out of server/core/config/raw and into a JSON file: https://github.com/runatlantis/atlantis/tree/fb91fafcb8db44f1f4416027128e5be8957c4914/server/core/config/raw
Any updates to this file would mean backwards compatibility is potentially broken, so some form of versioning would need to be maintained. It could follow the release cycle of Atlantis
Describe alternatives you've considered
Feeding a bunch of Atlantis files to some AI and having it produce a schema file for me, but I'd like to see a first party solution
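For illustration, a hand-written fragment of what such a schema could look like. Field names follow commonly documented atlantis.yaml keys; this is a sketch, not an official schema:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Illustrative fragment of a repo-level atlantis.yaml schema",
  "type": "object",
  "properties": {
    "version": { "type": "integer", "enum": [2, 3] },
    "automerge": { "type": "boolean" },
    "parallel_plan": { "type": "boolean" },
    "parallel_apply": { "type": "boolean" },
    "projects": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["dir"],
        "properties": {
          "name": { "type": "string" },
          "dir": { "type": "string" },
          "workspace": { "type": "string" },
          "terraform_version": { "type": "string" },
          "autoplan": {
            "type": "object",
            "properties": {
              "enabled": { "type": "boolean" },
              "when_modified": { "type": "array", "items": { "type": "string" } }
            }
          }
        }
      }
    }
  }
}
```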
runatlantis/atlantisGitHub
08/20/2025, 10:43 AMparallel_plan
and parallel_apply
are both disabled.
This has nothing to do with commits in quick succession.
### Environment details
• Atlantis version: 0.35.1
• Deployment method: Azure App Service
### Additional Context
It seems that something has changed in very recent Atlantis versions, and this has exposed the fact that locks are not really isolated by project name. I know that an attempt was made to include the project name in the lock, but from what I can see it is not actually being included. See #4192 (comment)
runatlantis/atlantisGitHub
08/21/2025, 9:54 AMError running policy_check operation: unable to unmarshal conftest output
Only if we enable debug logging it shows the output:
Treating custom policy tool error exit code as a policy failure. Error output: running 'sh -c' 'custom-policy-check.sh' in '/atlantis/data/repos/owner/repo/41/default/smoke-test': exit status 1: running "custom-policy-check.sh" in "/atlantis/data/repos/owner/repo/41/default/smoke-test":
some test output for policy check
another line
### Reproduction Steps
Example script `custom-policy-check.sh`:
echo "some test output for policy check"
echo "another line"
exit 1
running as the policy_check:
policy_check:
  steps:
    - run: custom-policy-check.sh
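A minimal sketch of why the step fails, under the assumption (suggested by the error message above) that Atlantis parses policy-check output as a JSON array of conftest results, which plain-text script output cannot satisfy:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// tryUnmarshal mimics the suspected decode step: policy-check output
// is parsed as a JSON array of conftest results, so the plain-text
// output of custom-policy-check.sh fails to decode.
func tryUnmarshal(out string) error {
	var results []map[string]any
	return json.Unmarshal([]byte(out), &results)
}

func main() {
	plain := "some test output for policy check\nanother line\n"
	if err := tryUnmarshal(plain); err != nil {
		fmt.Println("unable to unmarshal conftest output:", err)
	}
	// Well-formed conftest-style JSON decodes without error.
	fmt.Println(tryUnmarshal(`[{"filename":"main.tf","successes":1}]`))
}
```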
### Logs
Logs
...
"2025-08-20T09:50:30.091Z","Updating GitHub Check status for 'atlantis/policy_check' to 'failure'"
"2025-08-20T09:50:28.420Z","Error running policy_check operation: unable to unmarshal conftest output"
"2025-08-20T09:50:28.420Z","running 'sh -c' 'custom-policy-check.sh' in '/atlantis/data/repos/zendesk/sca-canary-boiled-frog/41/default/smoke-test': exit status 1"
"2025-08-20T09:50:28.411Z","Acquired lock with id 'zendesk/sca-canary-boiled-frog/smoke-test/default'"
"2025-08-20T09:50:28.402Z","Running policy_checks in parallel"
"2025-08-20T09:50:27.644Z","Updating GitHub Check status for 'atlantis/policy_check' to 'pending'"
"2025-08-20T09:50:27.643Z","Running policy check for 'plan'"
"2025-08-20T09:50:26.868Z","Updating GitHub Check status for 'atlantis/plan' to 'success'"
...
### Environment details
Running the latest Atlantis version.
### Additional Context
runatlantis/atlantisGitHub
08/21/2025, 1:07 PMGitHub
08/25/2025, 1:18 PMGitHub App handles the webhook calls by itself, hence there is no need to create webhooks separately. If webhooks were created manually, those should be removed when using GitHub App. Otherwise, there would be 2 calls to Atlantis resulting in locking errors on path/workspace.
Then also here
Webhooks must be created manually for repositories that trigger Atlantis.
This is somewhat confusing as the actual behavior is as follows:
The GitHub App can be configured to manage webhooks or not. If the app is configured to manage webhooks, the user should not create or manage the webhooks manually as this will cause locking errors and general instability.
I propose a new, combined note which explains this behavior in a more direct and clear way.
### Community Note
• Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
• Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
• If you are interested in working on this issue or have submitted a pull request, please leave a comment.
---
### Overview of the Issue
### Reproduction Steps
### Logs
### Environment details
### Additional Context
runatlantis/atlantisGitHub
08/26/2025, 1:36 AM--markdown-template-overrides-dir
flag appears to be non-functional. Despite having correctly configured custom markdown templates with proper file structure, permissions, and content, Atlantis continues to use the built-in default templates instead of the custom overrides.
I have verified:
• ✅ The flag is correctly set and appears in the process arguments
• ✅ Template files exist at the specified path with correct permissions (644, owned by atlantis:atlantis)
• ✅ All template dependencies are included (no missing template references)
• ✅ Template syntax is valid (exact copies of default templates with minimal modifications)
• ✅ Files are regular files (not symlinks) to avoid any parsing issues
• ✅ All 39 templates are present (complete override set)
The custom templates are never used, and there are no error messages in the logs indicating template parsing failures.
### Reproduction Steps
1. Configure Atlantis with custom template directory:
atlantis server --markdown-template-overrides-dir="/home/atlantis/templates"
2. Create custom templates with test content:
# Example: single_project_plan_success.tmpl
{{ define "singleProjectPlanSuccess" -}}
TEST TEMPLATE WORKING
{{ $result := index .Results 0 -}}
Walked {{ .Command }} for {{ if $result.ProjectName }}project: {{ $result.ProjectName }}
{{ end }}dir: {{ $result.RepoRelDir }}
workspace: {{ $result.Workspace }}
# ... rest of template identical to default
3. Verify template files are accessible:
# Files exist and are readable
$ ls -la /home/atlantis/templates/
-rw-r--r-- 1 atlantis atlantis 641 single_project_plan_success.tmpl
# ... 38 other templates
# Content is correct
$ head -3 /home/atlantis/templates/single_project_plan_success.tmpl
{{ define "singleProjectPlanSuccess" -}}
TEST TEMPLATE WORKING
{{ $result := index .Results 0 -}}
4. Trigger Atlantis plan command
5. Expected: See "TEST TEMPLATE WORKING" in the output
Actual: Default template is used, no custom content appears
### Logs
Atlantis Server Startup Logs
{"level":"info","ts":"2025-08-26T00:45:23.697Z","caller":"server/server.go:342","msg":"Supported VCS Hosts: Github","json":{}}
{"level":"info","ts":"2025-08-26T00:45:23.969Z","caller":"server/server.go:503","msg":"Utilizing BoltDB","json":{}}
{"level":"info","ts":"2025-08-26T00:45:24.020Z","caller":"vcs/git_cred_writer.go:29","msg":"wrote git credentials to /home/atlantis/.git-credentials","json":{}}
{"level":"info","ts":"2025-08-26T00:45:24.023Z","caller":"vcs/git_cred_writer.go:71","msg":"successfully ran git config --global credential.helper store","json":{}}
{"level":"info","ts":"2025-08-26T00:45:24.026Z","caller":"vcs/git_cred_writer.go:77","msg":"successfully ran git config --global url.<https://x-access-token@github.com.insteadOf> <ssh://git@github.com%22,%22json%22:{}}|ssh://git@github.com","json":{}}>
{"level":"info","ts":"2025-08-26T00:45:24.026Z","caller":"policy/conftest_client.go:168","msg":"failed to get default conftest version. Will attempt request scoped lazy loads DEFAULT_CONFTEST_VERSION not set","json":{}}
{"level":"info","ts":"2025-08-26T00:45:24.027Z","caller":"scheduled/executor_service.go:51","msg":"Scheduled Executor Service started","json":{}}
{"level":"info","ts":"2025-08-26T00:45:24.027Z","caller":"server/server.go:1111","msg":"Atlantis started - listening on port 4141","json":{}}
Process Arguments Verification
$ ps aux | grep atlantis
atlantis server --disable-markdown-folding --markdown-template-overrides-dir="/home/atlantis/templates"
Template Directory Contents
$ ls -la /home/atlantis/templates/ | wc -l
39 # All 39 templates present
$ find /home/atlantis/templates -name "*.tmpl" | wc -l
39 # All files have .tmpl extension
$ grep -c "define \"" /home/atlantis/templates/*.tmpl | wc -l
39 # All templates have valid define statements
Notable: no template-related error messages appear in the logs when using grep -i "template\|error\|fail\|parse".
### Environment details
• Atlantis version: v0.35.1 (commit: e987e33) (build date: 2025-07-31T00:30:00.639Z)
• Deployment method: Kubernetes/Helm
• Running latest version: Yes (v0.35.1)
• Atlantis flags: --disable-markdown-folding --markdown-template-overrides-dir="/home/atlantis/templates"
Atlantis server-side config file:
# Using Helm chart with extraArgs configuration
extraArgs:
  - --disable-markdown-folding
  - --markdown-template-overrides-dir="/home/atlantis/templates"
# Additional settings
enableDiffMarkdownFormat: true
hidePrevPlanComments: false
Repo atlantis.yaml
file:
# No repo-level configuration (using global defaults)
Additional deployment details:
• Kubernetes cluster on Linux
• Templates mounted via initContainer copying from ConfigMap to emptyDir volume
• Files verified as regular files (not symlinks) with correct ownership
• No storage constraints or permission issues
### Additional Context
Investigation performed:
1. Verified template loading code in server/events/markdown_renderer.go
lines 158-162
2. Confirmed feature exists via PR #2647 (merged Nov 2022)
3. Tested multiple approaches: minimal templates, comprehensive templates, different mount strategies
4. Verified all dependencies: included all 39 templates to prevent silent parsing failures
5. Added test strings to ALL plan-related templates: Added "TEST TEMPLATE WORKING" messages to singleProjectPlanSuccess
, planSuccessUnwrapped
, planSuccessWrapped
, singleProjectPlanUnsuccessful
, multiProjectPlan
, multiProjectHeader
, and multiProjectPlanFooter
templates to definitively test if ANY template overrides work - none appeared in output
Code analysis suggests the issue may be:
• Silent failure in template.ParseGlob()
on line 159 of markdown_renderer.go
• No error logging when template parsing fails
• Fallback to embedded templates without indication
Related issues/PRs:
• PR #2647 - Original implementation
• PR #2541 - Earlier attempt (closed)
The lack of any error messages or debug output makes it difficult to determine if templates are being parsed at all or if there's a silent failure in the override mechanism.
runatlantis/atlantisGitHub
08/26/2025, 2:27 PM--gh-team-allowlist='*:plan, *:unlock, sre:apply'
, and we wanted to enable conftest policy checking, so we configured it and added *:approve_policies
to our team allowlist so that policy owners are allowed to run the command (everyone is, but if they're not a policy owner they'll get rejected later down the line).
In our testing, this worked on autoplans, but not on atlantis plan
commands. Autoplans would run policy checks, but plan commands wouldn't.
### Reproduction Steps
1. Run Atlantis with conftest policy checking enabled and with --gh-team-allowlist='*:plan, *:unlock, *:approve_policies, *:apply'.
2. Raise a PR to trigger an autoplan, it will plan and run the policy checks.
3. Now comment atlantis plan
, it will plan but it will NOT run the policy checks.
### Solution
After digging through the code, we found this bit over here which makes sure that whoever is running the plan
command, is also allowed to run the policy_check
command, which is not a command per se in documentation, but is treated as such for the purposes of allowlist evaluation.
And sure thing, we added *:policy_check
to our allowlist, and now policy checks always run, as expected.
As far as I could tell, this is not documented anywhere, and given that policy_check
is not a command, it's pretty unintuitive that it has to be allowlisted for it to work. This is extra confusing because autoplans do work (autoplans don't have a user associated with them, and thus always pass allowlist evaluation even for the policy_check
command).
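For reference, a sketch of the allowlist that made policy checks run for comment-triggered plans in this setup, per the finding above; the wildcard team entries are illustrative:

```
atlantis server \
  --gh-team-allowlist='*:plan, *:unlock, *:apply, *:approve_policies, *:policy_check'
```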
runatlantis/atlantisGitHub
08/26/2025, 3:34 PMfeat: Implement branch matching in repo-level config by @0x416e746f6e in <https://github.com/runatlantis/atlantis/pull/2522>
. The comparison for the branches also shows this change arriving in 0.21.0: v0.20.1...v0.21.0
### Reproduction Steps
Documentation issue, nothing to really reproduce.
### Logs
Documentation issue, no logs.
### Environment details
Documentation issue, environment is not impactful.
### Additional Context
No additional context aside from links above.
runatlantis/atlantisGitHub
08/28/2025, 4:21 AMversion: "3.7"
services:
  atlantis:
    image: ghcr.io/runatlantis/atlantis:latest
    container_name: "atlantis"
    restart: unless-stopped
    command:
      # Server Settings
      - server
      - --atlantis-url=https://atlantis.mydomain.com # SSL enabled on Traefik only
      - --repo-allowlist=gitlab.com/mygroup/*
      # Gitlab Settings
      - --gitlab-user="${GL_USER}"
      - --gitlab-token="${GL_ATLANTIS}"
      - --gitlab-webhook-secret="${WEBHOOK}"
    # Traefik won't assign labels until the healthcheck completes; without this the container is always in 'starting' state
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4141/healthz"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s
.env
# GitLab Token
GL_ATLANTIS=glpat-somevalue # full perms token
GL_USER=<username value from> # https://gitlab.com/api/v4/user
WEBHOOK=secretvalueshere
https://gitlab.com/api/v4/version works fine authenticated from the browser, but not without auth
docker-compose -f atlantis-docker-compose.yml run atlantis curl --header "PRIVATE-TOKEN: glpat-thevalue" "https://gitlab.com/api/v4/version"
works fine and retrieves the relevant json
docker-compose -f atlantis-docker-compose.yml config
shows the correct .env values being passed to the cli
---
### Reproduction Steps
docker-compose -f atlantis-docker-compose.yml up -d
# using above config
---
### Logs
Log output (via portainer)
No files found in /docker-entrypoint.d/, skipping
Error: initializing server: GET https://gitlab.com/api/v4/version: 401 {message: 401 Unauthorized}
---
### Environment details
Gitlab public cloud (Free Tier)
Public Repository
Group Repo (I am admin of the group)
Synology Host
Traefik Load Balancer with working SSL (v3.5.1)
Other working containers on the host
:latest tag for Atlantis image (currently v0.35.1)
• Atlantis version: atlantis v0.35.1 (commit: e987e33) (build date: 2025-07-31T00:30:00.639Z)
• Deployment method: atlantis-docker-compose.yml (synology docker engine)
Atlantis server-side config file: none, all cli flags
---
### Additional Context
Running using bitbucket credentials works as expected against bitbucket repos and tokens
runatlantis/atlantisGitHub
09/01/2025, 1:50 PMGetting modified files for Gitea pull request 18
I can see in the access logs for Gitea that page one of changed files is requested repeatedly:
...
2025/09/01 13:18:59 HTTPRequest [I] router: completed GET /api/v1/repos/repo/Terraform/pulls/18/files?limit=30&page=1 for 1.2.3.4:0, 200 OK in 546.0ms @ repo/pull.go:1517(repo.GetPullRequestFiles)
2025/09/01 13:18:59 HTTPRequest [I] router: completed GET /api/v1/repos/repo/Terraform/pulls/18/files?limit=30&page=1 for 1.2.3.4:0, 200 OK in 554.9ms @ repo/pull.go:1517(repo.GetPullRequestFiles)
2025/09/01 13:19:00 HTTPRequest [I] router: completed GET /api/v1/repos/repo/Terraform/pulls/18/files?limit=30&page=1 for 1.2.3.4:0, 200 OK in 551.8ms @ repo/pull.go:1517(repo.GetPullRequestFiles)
2025/09/01 13:19:00 HTTPRequest [I] router: completed GET /api/v1/repos/repo/Terraform/pulls/18/files?limit=30&page=1 for 1.2.3.4:0, 200 OK in 562.8ms @ repo/pull.go:1517(repo.GetPullRequestFiles)
2025/09/01 13:19:01 HTTPRequest [I] router: completed GET /api/v1/repos/repo/Terraform/pulls/18/files?limit=30&page=1 for 1.2.3.4:0, 200 OK in 554.0ms @ repo/pull.go:1517(repo.GetPullRequestFiles)
...
Looking at the related code in Atlantis, it seems it fails to increment beyond page one. From what I can gather, line 113 should be page += 1 instead.
atlantis/server/events/vcs/gitea/client.go, lines 112 to 114 at commit 4f19e54:
    for page < nextPage {
        page = +1
        listOptions.ListOptions.Page = page
When the page size was increased enough to fit all changes on one page, the observed issue went away.
runatlantis/atlantis