fierce-monkey-46092
04/06/2023, 10:42 AM

astonishing-animal-7168
04/06/2023, 2:50 PM

bumpy-musician-39948
04/07/2023, 2:07 AM

cuddly-arm-8412
04/07/2023, 7:46 AM

creamy-van-28626
04/10/2023, 1:23 PM

loud-hospital-37195
04/10/2023, 3:07 PM
kubectl get pods
NAME                                                READY   STATUS             RESTARTS           AGE
datahub-elasticsearch-setup-job-4zh9g               0/1     Error              0                  28m
datahub-elasticsearch-setup-job-7d58l               0/1     Error              0                  34m
datahub-elasticsearch-setup-job-d9698               0/1     Error              0                  36m
datahub-elasticsearch-setup-job-g58tr               0/1     Error              0                  31m
datahub-elasticsearch-setup-job-hnj9n               0/1     Error              0                  38m
datahub-elasticsearch-setup-job-r8r9d               0/1     Error              0                  25m
datahub-elasticsearch-setup-job-t657v               0/1     Error              0                  20m
elasticsearch-master-0                              0/1     Pending            0                  10d
elasticsearch-master-1                              0/1     Pending            0                  10d
elasticsearch-master-2                              0/1     Pending            0                  10d
prerequisites-cp-schema-registry-5f89dd4974-sff65   1/2     CrashLoopBackOff   2691 (4m37s ago)   10d
prerequisites-kafka-0                               0/1     Pending            0                  10d
prerequisites-mysql-0                               0/1     Pending            0                  10d
prerequisites-zookeeper-0                           0/1     Pending            0                  10d
numerous-refrigerator-15664
04/11/2023, 5:51 AM
When I run datahub docker quickstart, it seems my server can't reach GitHub due to my org's policy, so I'm about to try datahub docker quickstart --quickstart-compose-file <path to compose file>.
I can see there are two files on GitHub:
1. docker-compose-without-neo4j.quickstart.yml
2. docker-compose.quickstart.yml
When I tried datahub docker quickstart, it fetched the first one. If my server already has neo4j on it, would it be better to use the second one? And if I use the first one, does that mean I can't use the graphical lineage view?
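[Editor's note] Not from the thread, just a sketch of the workaround being described: fetch the compose file on a machine that can reach GitHub, copy it across, and point the CLI at the local copy. The URL and paths below are assumptions; check them against your DataHub version.

```shell
# On a machine WITH GitHub access (file name per the thread; URL is an assumption):
curl -L -o docker-compose.quickstart.yml \
  https://raw.githubusercontent.com/datahub-project/datahub/master/docker/quickstart/docker-compose.quickstart.yml

# Copy it to the restricted server (destination path is an example):
scp docker-compose.quickstart.yml user@server:/opt/datahub/

# On the server, point quickstart at the local file instead of GitHub:
datahub docker quickstart --quickstart-compose-file /opt/datahub/docker-compose.quickstart.yml
```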
bland-gold-64386
04/11/2023, 10:37 AM
1. I already have Kafka, ZooKeeper, MySQL, and Schema Registry running on my server. Can I point the DataHub docker YAML file at these existing services instead of the bundled ones?
2. Can I use a Postgres RDS instance instead of MySQL/MariaDB?
Please let me know if anyone has any ideas. Thanks.
rapid-crowd-46218
04/11/2023, 2:40 PM
kubectl create secret generic mysql-secrets --from-literal=mysql-root-password=<<password>>
This password is my DB password in AWS MySQL.
sql:
  datasource:
    host: "<<rds-endpoint>>:3306"
    hostForMysqlClient: "<<rds-endpoint>>"
    port: "3306"
    url: "jdbc:mysql://<<rds-endpoint>>:3306/datahub?verifyServerCertificate=false&useSSL=true&useUnicode=yes&characterEncoding=UTF-8"
    driver: "com.mysql.jdbc.Driver"
    username: "root"
    password:
      secretRef: mysql-secrets
      secretKey: mysql-root-password
<<rds-endpoint>> is the writer endpoint of the AWS MySQL instance that I created.
I also changed the username to another user ID that I created instead of using "root". Do I have to use "root" as the username value?
Could you please tell me the correct way to connect AWS RDS to DataHub? Thank you in advance.
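[Editor's note] A sketch, not an official recipe: "root" is not required. Any MySQL user with privileges on the datahub database should work, provided the secretRef/secretKey in the values match a secret that actually exists. Assuming a user named datahub_user and a hypothetical key name mysql-datahub-password:

```yaml
# First: kubectl create secret generic mysql-secrets \
#          --from-literal=mysql-datahub-password=<<password>>
sql:
  datasource:
    username: "datahub_user"          # example non-root user, created on RDS beforehand
    password:
      secretRef: mysql-secrets
      secretKey: mysql-datahub-password   # hypothetical key name; must match the secret
```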
witty-motorcycle-52108
04/11/2023, 3:39 PM
DataHubUpgradeHistory_v1 kafka topic before it continues initializing? This is for 0.10.0 to 0.10.1.
bland-orange-13353
04/11/2023, 7:39 PM

able-city-76673
04/12/2023, 6:00 AM

gentle-dinner-85202
04/12/2023, 11:05 AM

cuddly-arm-8412
04/12/2023, 3:11 PM
docker pull acryldata/datahub-upgrade:head && docker run --env-file /Users/work/docker.env acryldata/datahub-upgrade:head -u NoCodeDataMigration
It reports an error:
ERROR SpringApplication Application run failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'upgradeCli': Unsatisfied dependency expressed through field 'noCodeUpgrade';
  nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'entityRegistryFactory': Unsatisfied dependency expressed through field 'configEntityRegistry';
  nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'configEntityRegistry' defined in class path resource [com/linkedin/gms/factory/entityregistry/ConfigEntityRegistryFactory.class]: Bean instantiation via factory method failed;
  nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.linkedin.metadata.models.registry.ConfigEntityRegistry]: Factory method 'getInstance' threw exception;
  nested exception is java.io.FileNotFoundException: ../../metadata-models/src/main/resources/entity-registry.yml (No such file or directory)
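[Editor's note] Not part of the thread, but that FileNotFoundException shows the registry path resolving to a source-tree-relative location (../../metadata-models/...), which only exists when running from a checkout. In the published images the registry file ships inside the container and its location is read from the ENTITY_REGISTRY_CONFIG_PATH environment variable; a hedged sketch, with the in-container path being an assumption to verify against your image:

```shell
# Point the upgrade job at the entity registry bundled in the image.
# The path below is an assumption -- inspect the image first, e.g.:
#   docker run --rm --entrypoint ls acryldata/datahub-upgrade:head /datahub/datahub-gms/resources
docker run --env-file /Users/work/docker.env \
  -e ENTITY_REGISTRY_CONFIG_PATH=/datahub/datahub-gms/resources/entity-registry.yml \
  acryldata/datahub-upgrade:head -u NoCodeDataMigration
```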
adventurous-nightfall-90271
04/13/2023, 8:23 AM

high-hospital-85984
04/13/2023, 11:03 AM
unrecognized node type: 380
in the database log. Has anyone seen this before?
victorious-spoon-76468
04/13/2023, 2:00 PM

bland-gold-64386
04/14/2023, 7:11 AM

swift-art-14128
04/14/2023, 2:13 PM
ERROR: Could not find a version that satisfies the requirement acryl-datahub[clickhouse,datahub-kafka,datahub-rest]==0.10.2
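[Editor's note] A few generic checks for this kind of pip failure, sketched without claiming which one applies here (the 0.10.2 release may simply not exist for the interpreter in use):

```shell
# Which versions does PyPI actually offer for your Python/pip combination?
# (`pip index` is experimental and needs pip >= 21.2)
pip index versions acryl-datahub

# An old pip can also fail to resolve newer releases:
pip install --upgrade pip

# Quote the extras so the shell doesn't glob the brackets:
pip install 'acryl-datahub[clickhouse,datahub-kafka,datahub-rest]==0.10.2'
```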
great-monkey-52307
04/14/2023, 11:19 PM
client.go:770: [debug] datahub-datahub-system-update-job: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
upgrade.go:436: [debug] warning: Upgrade "datahub" failed: pre-upgrade hooks failed: timed out waiting for the condition
Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition
When I ran kubectl describe pod <name of the pod>, I saw that datahub-auth-secrets is not created automatically, and it throws:
Error: secret "datahub-auth-secrets" not found
The Helm values have secret provisioning set to true:
metadata_service_authentication:
  enabled: true
  systemClientId: "__datahub_system"
  systemClientSecret:
    secretRef: "datahub-auth-secrets"
    secretKey: "token_service_signing_key"
  tokenService:
    signingKey:
      secretRef: "datahub-auth-secrets"
      secretKey: "token_service_signing_key"
    salt:
      secretRef: "datahub-auth-secrets"
      secretKey: "token_service_salt"
  # Set to false if you'd like to provide your own auth secrets
  provisionSecrets:
    enabled: true
    autoGenerate: true
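[Editor's note] One workaround (a sketch, not the official fix): create the secret by hand before upgrading, so the pre-upgrade hook can mount it. The key names must match the secretKey values referenced above; the generated literals below are placeholders:

```shell
# Manually provision the auth secret the chart is expecting.
# Key names mirror the secretKey fields in the values; values are random placeholders.
kubectl create secret generic datahub-auth-secrets \
  --from-literal=token_service_signing_key="$(openssl rand -base64 32)" \
  --from-literal=token_service_salt="$(openssl rand -base64 16)"
```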
proud-dusk-671
04/17/2023, 9:25 AM

cuddly-arm-8412
04/17/2023, 9:36 AM
[com.linkedin.metadata.models.registry.ConfigEntityRegistry]: Factory method 'getInstance' threw exception; nested exception is java.lang.IllegalArgumentException: Aspect queryKey does not exist
0.8.44 ->
[com.linkedin.metadata.models.registry.ConfigEntityRegistry]: Factory method 'getInstance' threw exception; nested exception is java.lang.IllegalArgumentException: Aspect embed does not exist
I found that the biggest change is still the index change. The logic for creating the indexes has been moved into the datahub-upgrade docker image; previously, I remember, the program also created them when it started. So:
1. Is there a way to directly create the latest indexes and then re-ingest the data?
2. Is the datahub-upgrade docker image the only channel for initializing the Elasticsearch indexes?
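[Editor's note] For what it's worth, a sketch of invoking the index-related upgrade tasks directly; the -u task names below exist in recent datahub-upgrade images, but verify them against your exact version before relying on this:

```shell
# Create/upgrade the Elasticsearch indexes (same env file as GMS assumed):
docker run --env-file docker.env acryldata/datahub-upgrade:head -u SystemUpdate

# Then repopulate index documents from the SQL store instead of re-ingesting:
docker run --env-file docker.env acryldata/datahub-upgrade:head -u RestoreIndices
```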
creamy-van-28626
04/17/2023, 3:34 PM

rapid-spoon-75609
04/17/2023, 10:07 PM
(datahub) password found here. While it did update the password, it looks like the user is no longer a super/admin user. Did I miss something here? How do I ensure the datahub user is still an admin user?
limited-forest-73733
04/18/2023, 8:02 AM

bland-orange-13353
04/18/2023, 8:31 AM

bland-gold-64386
04/18/2023, 11:18 AM

icy-caravan-72551
04/18/2023, 2:04 PM

square-solstice-69079
04/18/2023, 4:38 PM