silly-fish-85029
02/06/2023, 9:46 AM
datahub delete commands but it didn't clean up the old metadata.

salmon-jordan-53958
02/07/2023, 3:30 PM

witty-motorcycle-52108
02/07/2023, 7:44 PM
acryldata/datahub-postgres-setup:v0.9.6.1 image tagged on Docker Hub. Was that an intentional omission, or an unintentional one? I see v0.9.6.4, but that's not an official release on GitHub. What version should we be using for the acryldata/datahub-* images on Docker Hub that's consistent across all the images (minus actions)?

wide-laptop-97072
02/08/2023, 2:44 AM
I am deploying datahub onto AWS EKS, but I am not able to get the prerequisites deployed and see CrashLoopBackOff when I run kubectl get pods. Checking the schema-registry logs (following some similar threads), I see that Kafka did not deploy successfully. Any pointers to resolve this are appreciated.
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
[main] ERROR io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Brokers found [].
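[Annotation] "Expected 1 brokers but found only 0" means the schema registry cannot see a registered Kafka broker yet. A hedged diagnostic sketch; the pod names are taken from the prerequisites output later in this thread, and the namespace is an assumption:

```shell
# Is the Kafka broker pod itself running and ready?
kubectl get pods -n datahub | grep -E 'kafka|zookeeper'

# Broker logs: Kafka commonly crash-loops here when it cannot reach ZooKeeper.
kubectl logs -n datahub prerequisites-kafka-0

# ZooKeeper must be healthy before the broker can register itself,
# which is what produces "Brokers found []" in the schema-registry log.
kubectl logs -n datahub prerequisites-zookeeper-0
```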
creamy-van-28626
02/08/2023, 6:06 PM

fierce-baker-1392
02/09/2023, 8:25 AM

important-rainbow-77301
02/09/2023, 9:59 AM

elegant-article-21703
02/09/2023, 1:38 PM
Upgraded from v0.9.6.1 to v0.10.0 and I got the following message:
$ helm upgrade -n datahub --atomic --debug datahub datahub ./my-folder/datahub
upgrade.go:121: [debug] preparing upgrade for datahub
upgrade.go:129: [debug] performing update for datahub
upgrade.go:301: [debug] creating upgraded release for datahub
client.go:255: [debug] Starting delete for "datahub-datahub-system-update-job" Job
client.go:109: [debug] creating 1 resource(s)
W0209 17:32:01.749531 11644 warnings.go:67] spec.template.spec.containers[0].env[27].name: duplicate name "DATAHUB_UPGRADE_HISTORY_TOPIC_NAME"
W0209 17:32:01.749531 11644 warnings.go:67] spec.template.spec.containers[0].env[29].name: duplicate name "ENTITY_REGISTRY_CONFIG_PATH"
W0209 17:32:01.750525 11644 warnings.go:67] spec.template.spec.containers[0].env[30].name: duplicate name "EBEAN_DATASOURCE_USERNAME"
W0209 17:32:01.750525 11644 warnings.go:67] spec.template.spec.containers[0].env[31].name: duplicate name "EBEAN_DATASOURCE_PASSWORD"
W0209 17:32:01.750525 11644 warnings.go:67] spec.template.spec.containers[0].env[32].name: duplicate name "EBEAN_DATASOURCE_HOST"
W0209 17:32:01.750525 11644 warnings.go:67] spec.template.spec.containers[0].env[33].name: duplicate name "EBEAN_DATASOURCE_URL"
W0209 17:32:01.751522 11644 warnings.go:67] spec.template.spec.containers[0].env[34].name: duplicate name "EBEAN_DATASOURCE_DRIVER"
W0209 17:32:01.751522 11644 warnings.go:67] spec.template.spec.containers[0].env[35].name: duplicate name "KAFKA_BOOTSTRAP_SERVER"
W0209 17:32:01.752524 11644 warnings.go:67] spec.template.spec.containers[0].env[36].name: duplicate name "KAFKA_SCHEMAREGISTRY_URL"
W0209 17:32:01.752524 11644 warnings.go:67] spec.template.spec.containers[0].env[38].name: duplicate name "ELASTICSEARCH_HOST"
W0209 17:32:01.753524 11644 warnings.go:67] spec.template.spec.containers[0].env[39].name: duplicate name "ELASTICSEARCH_PORT"
W0209 17:32:01.753524 11644 warnings.go:67] spec.template.spec.containers[0].env[40].name: duplicate name "SKIP_ELASTICSEARCH_CHECK"
W0209 17:32:01.760523 11644 warnings.go:67] spec.template.spec.containers[0].env[41].name: duplicate name "ELASTICSEARCH_USE_SSL"
W0209 17:32:01.765522 11644 warnings.go:67] spec.template.spec.containers[0].env[45].name: duplicate name "GRAPH_SERVICE_IMPL"
client.go:464: [debug] Watching for changes to Job datahub-datahub-system-update-job with timeout of 5m0s
client.go:492: [debug] Add/Modify event for datahub-datahub-system-update-job: ADDED
client.go:531: [debug] datahub-datahub-system-update-job: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:492: [debug] Add/Modify event for datahub-datahub-system-update-job: MODIFIED
client.go:174: [debug] checking 13 resources for changes
client.go:437: [debug] Looks like there are no changes for Secret "datahub-auth-secrets"
client.go:437: [debug] Looks like there are no changes for Secret "datahub-encryption-secrets"
W0209 17:33:15.230558 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[10].name: duplicate name "EBEAN_DATASOURCE_USERNAME"
W0209 17:33:15.236558 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[11].name: duplicate name "EBEAN_DATASOURCE_PASSWORD"
W0209 17:33:15.241559 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[12].name: duplicate name "EBEAN_DATASOURCE_HOST"
W0209 17:33:15.245558 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[13].name: duplicate name "EBEAN_DATASOURCE_URL"
W0209 17:33:15.249560 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[14].name: duplicate name "EBEAN_DATASOURCE_DRIVER"
wait.go:53: [debug] beginning wait for 13 resources with timeout of 5m0s
wait.go:225: [debug] Deployment is not ready: datahub/datahub-acryl-datahub-actions. 0 out of 1 expected pods are ready
wait.go:225: [debug] Deployment is not ready: datahub/datahub-datahub-gms. 0 out of 1 expected pods are ready
client.go:255: [debug] Starting delete for "datahub-nocode-migration-job" Job
client.go:284: [debug] jobs.batch "datahub-nocode-migration-job" not found
client.go:109: [debug] creating 1 resource(s)
client.go:464: [debug] Watching for changes to Job datahub-nocode-migration-job with timeout of 5m0s
client.go:492: [debug] Add/Modify event for datahub-nocode-migration-job: ADDED
client.go:531: [debug] datahub-nocode-migration-job: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:492: [debug] Add/Modify event for datahub-nocode-migration-job: MODIFIED
client.go:531: [debug] datahub-nocode-migration-job: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
upgrade.go:360: [debug] warning: Upgrade "datahub" failed: post-upgrade hooks failed: timed out waiting for the condition
upgrade.go:378: [debug] Upgrade failed and atomic is set, rolling back to last successful release
history.go:53: [debug] getting history for release datahub
rollback.go:64: [debug] preparing rollback of datahub
rollback.go:112: [debug] rolling back datahub (current: v19, target: v18)
rollback.go:71: [debug] creating rolled back release for datahub
rollback.go:77: [debug] performing rollback of datahub
client.go:174: [debug] checking 13 resources for changes
client.go:437: [debug] Looks like there are no changes for Secret "datahub-auth-secrets"
client.go:437: [debug] Looks like there are no changes for Secret "datahub-encryption-secrets"
W0209 17:42:46.507031 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[10].name: duplicate name "EBEAN_DATASOURCE_USERNAME"
W0209 17:42:46.507031 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[11].name: duplicate name "EBEAN_DATASOURCE_PASSWORD"
W0209 17:42:46.507031 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[12].name: duplicate name "EBEAN_DATASOURCE_HOST"
W0209 17:42:46.507031 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[13].name: duplicate name "EBEAN_DATASOURCE_URL"
W0209 17:42:46.508031 11644 warnings.go:67] spec.jobTemplate.spec.template.spec.containers[0].env[14].name: duplicate name "EBEAN_DATASOURCE_DRIVER"
wait.go:53: [debug] beginning wait for 13 resources with timeout of 5m0s
rollback.go:223: [debug] superseding previous deployment 18
rollback.go:83: [debug] updating status for rolled back release for datahub
Error: UPGRADE FAILED: release datahub failed, and has been rolled back due to atomic being set: post-upgrade hooks failed: timed out waiting for the condition
helm.go:81: [debug] post-upgrade hooks failed: timed out waiting for the condition
release datahub failed, and has been rolled back due to atomic being set
helm.sh/helm/v3/pkg/action.(*Upgrade).failRelease
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:410
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:341
helm.sh/helm/v3/pkg/action.(*Upgrade).Run
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:130
main.newUpgradeCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:154
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
UPGRADE FAILED
main.newUpgradeCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:156
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:80
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
$ kubectl get pods -n datahub
NAME READY STATUS RESTARTS AGE
datahub-acryl-datahub-actions-79f678dc-ntcmv 1/1 Running 0 13d
datahub-datahub-frontend-79c7949c69-ftptf 1/1 Running 0 13d
datahub-datahub-gms-698cb7d7-4wsvj 1/1 Running 0 13d
datahub-datahub-system-update-job--1-zvbw4 1/1 Running 0 4m46s
datahub-datahub-upgrade-job--1-n68k7 0/1 Completed 0 13d
datahub-elasticsearch-setup-job--1-djdvl 0/1 Completed 0 6m37s
datahub-kafka-setup-job--1-cjgjf 0/1 Completed 0 6m28s
datahub-mysql-setup-job--1-vkdr9 0/1 Completed 0 4m52s
elasticsearch-master-0 0/1 Running 1 (61s ago) 15d
elasticsearch-master-1 1/1 Running 1 (13d ago) 15d
elasticsearch-master-2 1/1 Running 0 15d
prerequisites-cp-schema-registry-7d489cfc6d-swp2d 2/2 Running 0 15d
prerequisites-kafka-0 1/1 Running 0 15d
prerequisites-mysql-0 1/1 Running 0 15d
prerequisites-neo4j-community-0 1/1 Running 0 169d
prerequisites-zookeeper-0 1/1 Running 0 15d
Does anyone have any idea of what I'm missing here? I've seen that there are duplicate environment variables such as:
DATAHUB_UPGRADE_HISTORY_TOPIC_NAME
ENTITY_REGISTRY_CONFIG_PATH
EBEAN_DATASOURCE_USERNAME
EBEAN_DATASOURCE_PASSWORD
EBEAN_DATASOURCE_HOST
EBEAN_DATASOURCE_PORT
EBEAN_DATASOURCE_DBNAME
Thank you all in advance!

brainy-tent-14503
02/10/2023, 12:49 AM
datahub-datahub-system-update-job--1-zvbw4 1/1 Running 0 4m46s
Wait for this system-update job to complete and re-run the command. The atomic flag might be interrupting it on timeout, so either try without atomic or let that job run and increase the timeout; depending on your data size and hardware it may take a while. Refer to this doc.

billions-family-12217
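[Annotation] That advice (let the system-update job finish, then retry without the atomic rollback and with a longer timeout) can be sketched as follows; the job and release names are taken from the thread, and the 30m timeout is an arbitrary example:

```shell
# Wait for the system-update job instead of letting --atomic roll it back.
kubectl wait -n datahub --for=condition=complete \
  job/datahub-datahub-system-update-job --timeout=30m

# Then retry the upgrade without --atomic and with a longer timeout.
helm upgrade -n datahub --debug --timeout 30m datahub ./my-folder/datahub
```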
02/10/2023, 7:13 AM

billions-family-12217
02/10/2023, 7:14 AM

fierce-baker-1392
02/10/2023, 11:00 AM

powerful-memory-77948
02/10/2023, 9:22 PM

fierce-baker-1392
02/12/2023, 12:44 PM

fierce-baker-1392
02/12/2023, 12:47 PM

billions-twilight-48559
02/13/2023, 11:51 AM

billions-twilight-48559
02/13/2023, 11:51 AM

microscopic-mechanic-13766
02/13/2023, 1:36 PM
METADATA_SERVICE_AUTH_ENABLED enabled, but I have a "problem": the creation of the tokens has to be done in DataHub. Is there any existing way to make DataHub check the validity of a token against third-party software like Apache Knox?
The aim of this is to have a centralized site to manage the application tokens.
Thanks in advance!!

little-megabyte-1074
witty-motorcycle-52108
02/14/2023, 5:05 AM

witty-motorcycle-52108
02/14/2023, 6:23 AM
-u SystemUpdate as specified in helm, and I'm getting logs with Caused by: java.lang.IllegalArgumentException: No upgrade with id SystemUpdate could be found. Aborting... in them, which does not make any sense to me. I also tried running the container with no -u arg; it still threw errors.
Attaching screenshots of some logs.
I also don't understand why it's saying 2023-02-14 06:09:37.349 INFO 1 --- [ main] c.l.g.f.k.s.AwsGlueSchemaRegistryFactory : Creating AWS Glue registry when I have SCHEMA_REGISTRY_TYPE set to kafka.
What services need to be running in order for an upgrade to take place? All? None?
GMS is bootlooping due to
2023-02-14 06:20:12,838 [ThreadPoolTaskExecutor-1] ERROR o.a.k.c.c.i.ConsumerCoordinator:283 - [Consumer clientId=consumer-generic-duhe-consumer-job-client-1, groupId=generic-duhe-consumer-job-client] User provided listener org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerConsumerRebalanceListener failed on invocation of onPartitionsAssigned for partitions [DataHubUpgradeHistory_v1-2]
and the MAE consumer is bootlooping because GMS is not available, but the upgrade task seems to have hostnames for both of those based on the helm chart? Is there some circular dependency here that's causing issues?

powerful-cat-68806
02/14/2023, 8:09 AM
The datahub-datahub-gms-xxxx pod is failing with the error
org.postgresql.util.PSQLException: ERROR: relation "metadata_aspect_v2" does not exist
I'm using my own pgSQL db and configured its values in the chart.
I understand that the Postgres setup, in the deployment, should create this relation, but it's not.
Please advise.
Cc: @incalculable-ocean-74010 @astonishing-answer-96712

microscopic-mechanic-13766
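[Annotation] For the metadata_aspect_v2 question above, one hedged way to confirm whether the postgres-setup job ever created the table; the connection values are placeholders for your own pgSQL instance, not values from the thread:

```shell
# Placeholder host/user/dbname; substitute your own connection details.
psql "host=YOUR_PG_HOST user=datahub dbname=datahub" \
  -c "SELECT to_regclass('public.metadata_aspect_v2');"
# A NULL result means the table was never created, i.e. the
# datahub-postgres-setup job did not run against this database.
```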
02/14/2023, 8:25 AM

shy-dog-84302
02/14/2023, 11:05 AM

white-horse-97256
02/14/2023, 5:53 PM

witty-motorcycle-52108
02/14/2023, 9:58 PM

rapid-crowd-46218
02/15/2023, 1:36 AM

billions-family-12217
02/15/2023, 6:28 AM

billions-family-12217
02/15/2023, 6:28 AM

billions-family-12217
02/15/2023, 6:47 AM