# all-things-deployment
c
I have set up the AWS ALB for k8s, but now I want to remove it from my setup. What's the best way to do that?
e
You can set datahub-frontend.ingress.enabled to false!
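If you'd rather not edit values.yaml, the same toggle can be passed inline; a minimal sketch, assuming the release and chart names used later in this thread:

```
helm upgrade --install datahub datahub/datahub \
  --values datahub/values.yaml \
  --set datahub-frontend.ingress.enabled=false \
  -n datahub
```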
c
This is what I have set, but it errors on the upgrade.
```yaml
datahub-frontend:
  enabled: true
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.26"
  ingress:
    enabled: false
```
e
Which error do you see?
c
```
helm upgrade --install datahub datahub/datahub --values datahub/values.yaml -n datahub
Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition
```
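Worth knowing here: helm waits on pre-upgrade hook jobs for 5 minutes by default, so any slow setup job surfaces as exactly this error. A sketch of retrying with a longer hook timeout:

```
# Helm's default wait is 5m; give the hook jobs more time to finish
helm upgrade --install datahub datahub/datahub \
  --values datahub/values.yaml -n datahub --timeout 10m
```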
e
This seems unrelated to the ingress.
Can you see if any of your setup jobs or the upgrade job failed?
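A sketch of how to check; the job names are the datahub-* ones that appear in the output below:

```
# List the chart's setup/upgrade jobs and their completion state
kubectl get jobs -n datahub

# Drill into one job and read its pod's logs
kubectl describe job datahub-elasticsearch-setup-job -n datahub
kubectl logs -n datahub job/datahub-elasticsearch-setup-job
```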
c
Not seeing anything.
```
./kubectl get all -n datahub
NAME                                                   READY   STATUS      RESTARTS   AGE
pod/datahub-acryl-datahub-actions-5bbc8c8dcd-kr4mv     1/1     Running     0          8d
pod/datahub-datahub-frontend-6566c5c47c-6fs2q          1/1     Running     0          8d
pod/datahub-datahub-gms-7c8584f7c-8bsnx                1/1     Running     0          8d
pod/datahub-datahub-upgrade-job-29ppz                  0/1     Completed   0          8d
pod/datahub-elasticsearch-setup-job-lxglw              0/1     Completed   0          36m
pod/datahub-kafka-setup-job-x4sp5                      0/1     Completed   0          112m
pod/datahub-mysql-setup-job-9xr45                      0/1     Completed   0          8d
pod/elasticsearch-master-0                             1/1     Running     0          17d
pod/elasticsearch-master-1                             1/1     Running     0          17d
pod/elasticsearch-master-2                             1/1     Running     0          17d
pod/prerequisites-cp-schema-registry-cf79bfccf-gpzpk   2/2     Running     0          17d
pod/prerequisites-kafka-0                              1/1     Running     0          17d
pod/prerequisites-mysql-0                              1/1     Running     0          17d
pod/prerequisites-neo4j-community-0                    1/1     Running     0          17d
pod/prerequisites-zookeeper-0                          1/1     Running     0          17d

NAME                                       TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)                      AGE
service/datahub-acryl-datahub-actions      ClusterIP      10.100.83.199    <none>                                                                    9093/TCP                     8d
service/datahub-datahub-frontend           LoadBalancer   10.100.52.27     ...us-east-1.elb.amazonaws.com                                            9002:30189/TCP               17d
service/datahub-datahub-gms                LoadBalancer   10.100.18.4      ...us-east-1.elb.amazonaws.com                                            8080:31994/TCP               17d
service/elasticsearch-master               ClusterIP      10.100.114.131   <none>                                                                    9200/TCP,9300/TCP            17d
service/elasticsearch-master-headless      ClusterIP      None             <none>                                                                    9200/TCP,9300/TCP            17d
service/prerequisites-cp-schema-registry   ClusterIP      10.100.21.13     <none>                                                                    8081/TCP,5556/TCP            17d
service/prerequisites-kafka                ClusterIP      10.100.69.229    <none>                                                                    9092/TCP                     17d
service/prerequisites-kafka-headless       ClusterIP      None             <none>                                                                    9092/TCP,9093/TCP            17d
service/prerequisites-mysql                ClusterIP      10.100.242.6     <none>                                                                    3306/TCP                     17d
service/prerequisites-mysql-headless       ClusterIP      None             <none>                                                                    3306/TCP                     17d
service/prerequisites-neo4j-community      ClusterIP      None             <none>                                                                    7474/TCP,7687/TCP            17d
service/prerequisites-zookeeper            ClusterIP      10.100.78.17     <none>                                                                    2181/TCP,2888/TCP,3888/TCP   17d
service/prerequisites-zookeeper-headless   ClusterIP      None             <none>                                                                    2181/TCP,2888/TCP,3888/TCP   17d

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/datahub-acryl-datahub-actions      1/1     1            1           8d
deployment.apps/datahub-datahub-frontend           1/1     1            1           17d
deployment.apps/datahub-datahub-gms                1/1     1            1           17d
deployment.apps/prerequisites-cp-schema-registry   1/1     1            1           17d

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/datahub-acryl-datahub-actions-5bbc8c8dcd     1         1         1       8d
replicaset.apps/datahub-datahub-frontend-6566c5c47c          1         1         1       8d
replicaset.apps/datahub-datahub-frontend-966db755b           0         0         0       10d
replicaset.apps/datahub-datahub-frontend-bc5649c99           0         0         0       17d
replicaset.apps/datahub-datahub-gms-577775c459               0         0         0       10d
replicaset.apps/datahub-datahub-gms-58fbbcbc9b               0         0         0       17d
replicaset.apps/datahub-datahub-gms-7c8584f7c                1         1         1       8d
replicaset.apps/prerequisites-cp-schema-registry-cf79bfccf   1         1         1       17d

NAME                                             READY   AGE
statefulset.apps/elasticsearch-master            3/3     17d
statefulset.apps/prerequisites-kafka             1/1     17d
statefulset.apps/prerequisites-mysql             1/1     17d
statefulset.apps/prerequisites-neo4j-community   1/1     17d
statefulset.apps/prerequisites-zookeeper         1/1     17d

NAME                                                         SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/datahub-datahub-cleanup-job-template           * * * * *   True      0        <none>          17d
cronjob.batch/datahub-datahub-restore-indices-job-template   * * * * *   True      0        <none>          17d

NAME                                        COMPLETIONS   DURATION   AGE
job.batch/datahub-datahub-upgrade-job       1/1           2m23s      8d
job.batch/datahub-elasticsearch-setup-job   1/1           31m        36m
job.batch/datahub-kafka-setup-job           1/1           32m        112m
job.batch/datahub-mysql-setup-job           1/1           3s         8d
```
e
You are using helm to deploy, right? If you look at ^, the mysql-setup-job hasn't re-run (it's still 8d old) and the kafka-setup-job ran more than an hour before the elasticsearch-setup-job.
Something is getting stuck in between.
Could it be that these pods sit in Pending because you are running out of resources on your nodes?
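A quick sketch for checking that: list any Pending pods, then read the scheduler's events on one of them:

```
# Show only pods stuck in Pending
kubectl get pods -n datahub --field-selector=status.phase=Pending

# The Events section explains why the scheduler can't place the pod
kubectl describe pod <pending-pod-name> -n datahub
```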
c
How would I check that?
Yes, I am using Helm.
Is there a way to see the chart that is used?
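For the chart question, helm itself can report which chart and values a release was deployed with; a sketch:

```
# Show the release, chart name, and chart version
helm list -n datahub

# Show the values the release was installed with
helm get values datahub -n datahub

# Render the full manifest helm applied
helm get manifest datahub -n datahub
```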
e
Can you try helm upgrade again? While that is running…
c
yup
e
…see how the setup job pods are behaving: do they start right away, or does it take a while for them to spawn?
You could also run
```
kubectl describe nodes
```
to see if the requested CPU or memory has exceeded the nodes' capacity.
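On a busy cluster that output is long; the part that matters is each node's "Allocated resources" section. A sketch for narrowing it down:

```
# Compare allocated CPU/memory requests against each node's capacity
kubectl describe nodes | grep -A 8 "Allocated resources"
```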
c
The nodes seem fine.
```
NAME                                        COMPLETIONS   DURATION   AGE
job.batch/datahub-datahub-upgrade-job       1/1           2m23s      8d
job.batch/datahub-elasticsearch-setup-job   0/1           35s        35s
job.batch/datahub-kafka-setup-job           1/1           32m        142m
job.batch/datahub-mysql-setup-job           1/1           3s         8d
```
```
NAME                                                   READY   STATUS             RESTARTS   AGE
pod/datahub-acryl-datahub-actions-5bbc8c8dcd-kr4mv     1/1     Running            0          8d
pod/datahub-datahub-frontend-6566c5c47c-6fs2q          1/1     Running            0          8d
pod/datahub-datahub-gms-7c8584f7c-8bsnx                1/1     Running            0          8d
pod/datahub-datahub-upgrade-job-29ppz                  0/1     Completed          0          8d
pod/datahub-elasticsearch-setup-job-5dw97              0/1     ImagePullBackOff   0          81s
pod/datahub-kafka-setup-job-x4sp5                      0/1     Completed          0          143m
pod/datahub-mysql-setup-job-9xr45                      0/1     Completed          0          8d
pod/elasticsearch-master-0                             1/1     Running            0          17d
pod/elasticsearch-master-1                             1/1     Running            0          17d
pod/elasticsearch-master-2                             1/1     Running            0          17d
pod/prerequisites-cp-schema-registry-cf79bfccf-gpzpk   2/2     Running            0          17d
pod/prerequisites-kafka-0                              1/1     Running            0          17d
pod/prerequisites-mysql-0                              1/1     Running            0          17d
pod/prerequisites-neo4j-community-0                    1/1     Running            0          17d
pod/prerequisites-zookeeper-0                          1/1     Running            0          17d
```
The pods show the same. Seems it's not pulling the image:
```
NAME                                                   READY   STATUS         RESTARTS   AGE
pod/datahub-datahub-upgrade-job-29ppz                  0/1     Completed      0          8d
pod/datahub-elasticsearch-setup-job-hnqkr              0/1     ErrImagePull   0          35s
```
Seems I am hitting rate limiting for Docker Hub.
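The events below presumably come from describing the failing pod; a sketch of the command:

```
# The Events section at the bottom shows the image pull failures
kubectl describe pod datahub-elasticsearch-setup-job-hnqkr -n datahub
```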
```
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m18s                  default-scheduler  Successfully assigned datahub/datahub-elasticsearch-setup-job-hnqkr to ip-192-168-62-38.ec2.internal
  Warning  Failed     3m37s (x2 over 4m3s)   kubelet            Failed to pull image "linkedin/datahub-elasticsearch-setup:v0.8.26": rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m55s (x4 over 4m18s)  kubelet            Pulling image "linkedin/datahub-elasticsearch-setup:v0.8.26"
  Warning  Failed     2m54s (x2 over 4m18s)  kubelet            Failed to pull image "linkedin/datahub-elasticsearch-setup:v0.8.26": rpc error: code = Unknown desc = toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     2m54s (x4 over 4m18s)  kubelet            Error: ErrImagePull
  Normal   BackOff    2m43s (x6 over 4m17s)  kubelet            Back-off pulling image "linkedin/datahub-elasticsearch-setup:v0.8.26"
  Warning  Failed     2m28s (x7 over 4m17s)  kubelet            Error: ImagePullBackOff
```
e
Yeah, I've seen a lot of these as well, unfortunately.
One way is to set imagePullPolicy to "IfNotPresent" for all the pods.
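A sketch of what that could look like in values.yaml; the exact key layout depends on the chart version, so treat the pullPolicy key as an assumption to verify:

```yaml
# Sketch: pull the image only if it isn't already cached on the node.
# The block mirrors the datahub-frontend snippet quoted earlier in the
# thread, but the pullPolicy key name is an assumption -- check your chart.
datahub-frontend:
  image:
    repository: linkedin/datahub-frontend-react
    tag: "v0.8.26"
    pullPolicy: IfNotPresent
```

The other standard fix is authenticating pulls so they aren't anonymously rate-limited, e.g. a `kubectl create secret docker-registry` secret referenced via imagePullSecrets.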
c
I was able to deploy, but even with the ingress set to false, the load balancer external IPs are still CNAMEs.
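One thing visible in the earlier kubectl get all output: datahub-datahub-frontend and datahub-datahub-gms are Services of type LoadBalancer, and services of that type provision ELBs on their own, independently of any Ingress. A sketch for checking; whether the chart exposes a service.type value to switch them to ClusterIP is an assumption to verify:

```
# LoadBalancer-type services keep their ELBs regardless of ingress settings
kubectl get svc -n datahub
```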
e
Are you using any software like external-dns to set up the DNS records automatically?
The ingress controls whether or not the load balancer (under EC2) is created, but not the DNS records themselves. Once you confirm that the LB is not there anymore, you should manually delete the DNS record.
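A sketch for confirming on the AWS side; the Route 53 zone ID is a placeholder:

```
# List remaining ALBs/NLBs (classic ELBs are under `aws elb` instead)
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].{Name:LoadBalancerName,DNS:DNSName}'

# Then find and delete the stale record in Route 53
aws route53 list-resource-record-sets --hosted-zone-id <ZONE_ID>
```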
c
Do I have to manually delete the LB? I thought that would be removed when I removed the ingress.
e
The LB should be automatically removed.
When you run kubectl get ingress in your namespace, do you see anything?
c
No, but I ran kubectl delete ingress -n datahub to remove it earlier.
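As a final check after deleting the ingress directly, a sketch for confirming nothing in the namespace still owns a load balancer:

```
# No ingresses and no LoadBalancer-type services means nothing here
# should be keeping an AWS load balancer alive
kubectl get ingress,svc -n datahub
```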