# ask-community-for-troubleshooting
v
I'm trying to install Airbyte on an EC2 instance using an external DB (AWS RDS), but I'm getting the following error: `ERROR Failed to install airbyte/airbyte Helm Chart`, `ERROR Unable to install Airbyte locally`, `ERROR unable to install airbyte chart: bootloader failed`. I saw in the docs that this is a known issue, but I haven't been able to troubleshoot or fix it. There is connectivity between the EC2 instance and the RDS.
k
The error you’re seeing—`bootloader failed` during Airbyte installation with an external database (AWS RDS)—is a known and recurring issue, especially when using Helm or `abctl` for deployment. According to the official Airbyte documentation and multiple user reports, this error typically means the Airbyte bootloader cannot connect to the database, even if network connectivity is confirmed.

### Key Troubleshooting Points

1. **Bootloader Fails to Start**
   The Airbyte bootloader is the first component to start and must connect to the database to initialize Airbyte. If it cannot, the installation fails with the error you described. The documentation confirms this is a common problem and is still under investigation by the Airbyte team. The logs printed by recent versions of `abctl` or Helm should provide more details about the root cause, often pointing to database connectivity or configuration issues [Airbyte Bootloader failed to start](https://docs.airbyte.com/platform/deploying-airbyte/troubleshoot-deploy).

2. **Configuration Issues**
   Even if your EC2 instance can reach the RDS instance, the problem may be with how the database credentials and connection details are passed to Airbyte. There are several reports where environment variables or Kubernetes secrets were not set up correctly, causing the bootloader to fail. For example, the secret containing the database credentials must exist before installing the Helm chart, and the values in your `values.yaml` must match the secret keys [Cannot connect to External RDS instance](https://github.com/airbytehq/airbyte/issues/48854).

3. **Firewall/Security Groups**
   Some users have found that even with apparent connectivity, AWS security groups or firewalls were blocking some packets, causing intermittent or silent failures. Double-check that your RDS instance allows inbound connections from your EC2 instance on the correct port [Error deploying Airbyte with an external Postgres database (AWS RDS)](https://discuss.airbyte.io/t/2179).

4. **Helm Chart Version**
   There have been bugs in certain Helm chart versions related to external database configuration. Upgrading to the latest chart version or, in some cases, downgrading to a known stable version (e.g., 1.7.2) has resolved the issue for some users [Helm chart broken with abctl 0.30.1](https://github.com/airbytehq/airbyte/issues/65129).

### What You Can Do

- **Check Bootloader Logs:** Look at the logs for the bootloader pod. They often contain more specific error messages about what is failing (e.g., authentication, network, or configuration).
- **Verify Kubernetes Secrets:** Ensure the secret with your database credentials exists and matches the keys referenced in your Helm values.
- **Review Security Groups:** Double-check AWS security group rules for both EC2 and RDS.
- **Try a Different Chart Version:** If you’re on the latest Helm chart, try downgrading to 1.7.2, as some users have found this resolves the issue.
- **Consult the Docs:** The Airbyte documentation acknowledges this is a known issue and recommends checking the above points [Airbyte Bootloader failed to start](https://docs.airbyte.com/platform/deploying-airbyte/troubleshoot-deploy).

If after these steps the issue persists, you may be encountering a bug that is still under investigation by the Airbyte team. In that case, providing detailed bootloader logs when opening a GitHub issue or forum post will help the maintainers assist you further.
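The secret/values coupling described above can be sketched as follows. The secret name, key names, and `values.yaml` paths below are illustrative assumptions, not chart-mandated names—check them against the external-database section of the chart version you are installing:

```shell
# Create the credentials secret BEFORE installing the Helm chart.
# Secret name and key names here are illustrative; they must match
# whatever your values.yaml references.
RESULT=$(kubectl create secret generic airbyte-config-secrets \
  -n airbyte \
  --from-literal=database-user=airbyte \
  --from-literal=database-password='REPLACE_ME' 2>/dev/null \
  || echo "secret-not-created")
echo "$RESULT"

# The matching values.yaml section would then look roughly like:
#   global:
#     database:
#       secretName: airbyte-config-secrets
#       userSecretKey: database-user
#       passwordSecretKey: database-password
```

If the key names in the secret and the `*SecretKey` values disagree, the bootloader starts with empty credentials and fails exactly as described above.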
---

**Summary:** This is a known and sometimes unresolved issue. The most common causes are misconfigured secrets, environment variables, or subtle network/firewall issues, even when basic connectivity is confirmed. Reviewing bootloader logs and double-checking your configuration are the best next steps. If you need more targeted help, please share the full bootloader logs and your Helm values (with sensitive data redacted).
v
@kapa.ai The logs printed by recent versions of `abctl` or Helm should provide more details about the root cause, often pointing to database connectivity or configuration issues (Airbyte Bootloader failed to start). How can I read these logs?
k
A message from kapa.ai
To read the logs from the Airbyte bootloader pod when deploying with `abctl` or Helm, use the `kubectl logs` command. The documentation provides the following example:

```bash
kubectl logs -n <namespace> <bootloader-pod-name>
```
If you are using the default namespace for `abctl`, it is usually `airbyte-abctl`. For Helm-based installations, the namespace might be `airbyte` or another value you specified. For example, to view the logs for the bootloader pod in the `airbyte-abctl` namespace:

```bash
kubectl logs -n airbyte-abctl airbyte-abctl-airbyte-bootloader
```
Or, if your namespace is `airbyte`:
```bash
kubectl logs -n airbyte airbyte-airbyte-bootloader
```
This command will print the logs from the bootloader pod, which should help you identify the root cause of the installation failure. The documentation recommends this approach for debugging deployment issues [Developing Locally - Viewing logs](https://docs.airbyte.com/platform/1.8/contributing-to-airbyte/developing-locally#prerequisites).
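Pod names vary with the release name and chart version, so rather than hard-coding one, you can look it up first. A small sketch, assuming `abctl`'s default `airbyte-abctl` namespace:

```shell
# Find the bootloader pod by name match, then tail its logs.
NS=airbyte-abctl
POD=$(kubectl get pods -n "$NS" -o name 2>/dev/null | grep bootloader | head -n1)
if [ -n "$POD" ]; then
  kubectl logs -n "$NS" "$POD" --tail=200
else
  echo "no bootloader pod found in namespace $NS (is the cluster up?)"
fi
```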
v
@kapa.ai When I run

```bash
kubectl logs -n airbyte-abctl airbyte-abctl-airbyte-bootloader
```

I'm getting:

```
E1101 08:03:59.896246   66723 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp 127.0.0.1:8080: connect: connection refused"
```
k
A message from kapa.ai
The error message you are seeing:

```
couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
```
indicates that `kubectl` is unable to connect to the Kubernetes API server on your local machine. This is a "connection refused" error, which is a common issue documented in the Airbyte troubleshooting guides. It means that either the Kubernetes cluster is not running, or your `kubectl` context is not set up correctly to communicate with the cluster [Connection Refused](https://docs.airbyte.com/platform/deploying-airbyte/troubleshoot-deploy#connection-refused).

What you should check:

1. **Is your Kubernetes cluster running?** Make sure your cluster is up and running. If you are using a local cluster (like kind, minikube, or Docker Desktop), ensure it is started.
2. **Is your `kubectl` context correct?** Run `kubectl config get-contexts` and verify that you are using the context for the cluster where Airbyte is deployed.
3. **Can you access the cluster?** Try running `kubectl get pods -A` to see if you get a response. If you get the same connection refused error, the issue is with your cluster or configuration, not Airbyte specifically.
4. **If using `abctl` or Docker-based clusters:** Ensure the containers or VMs running your Kubernetes control plane are up and healthy.

This error is not specific to Airbyte, but rather to your Kubernetes setup. Once you resolve the connection to the Kubernetes API server, you should be able to run the `kubectl logs` command to view the bootloader logs as intended [Connection Refused](https://docs.airbyte.com/platform/deploying-airbyte/troubleshoot-deploy#connection-refused).
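The numbered checks can be condensed into a couple of commands; a sketch (the fallback messages are mine, not kubectl's):

```shell
# 1. Which context is kubectl using, if any?
CTX=$(kubectl config current-context 2>/dev/null || echo "none")
echo "current context: $CTX"

# 2. Does the API server answer at all?
if kubectl get pods -A >/dev/null 2>&1; then
  echo "cluster reachable"
else
  echo "cluster unreachable: start it or fix your kubeconfig"
fi
```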
v
@kapa.ai There is no cluster started
k
A message from kapa.ai
If there is no Kubernetes cluster started, you will not be able to use `kubectl` to view Airbyte pod logs, as the command requires a running cluster and a valid connection to the Kubernetes API server. The "connection refused" error you encountered is expected in this situation, as documented in the Airbyte troubleshooting guides for "connection refused" errors when communicating with Kubernetes [Connection Refused](https://docs.airbyte.com/platform/deploying-airbyte/troubleshoot-deploy#connection-refused). To resolve this, you need to start your Kubernetes cluster (for example, using `kind`, `minikube`, or your cloud provider) before attempting to access logs or interact with Airbyte pods. Once the cluster is running and your `kubectl` context is configured correctly, you will be able to retrieve the logs as described in the documentation.
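If you originally deployed with `abctl`, it manages its own embedded kind cluster. A sketch of how you might check and, if needed, recreate it—this assumes the `abctl local status` and `abctl local install` subcommands from recent abctl releases:

```shell
# Check whether abctl's embedded cluster exists and is healthy.
STATUS=$(abctl local status 2>/dev/null || echo "abctl-unavailable")
echo "$STATUS"

# Re-running the install recreates the cluster if it is gone
# (flags are illustrative; reuse the ones from your original install).
# abctl local install --values ./values.yaml
```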
b
There's a section in the EC2 config where you can attach an RDS to it. Also, you may need to modify the EC2 max hops property. Just guessing though, this might not help you at all @Vasil Boshnakov