little-waitress-21103
02/15/2023, 7:38 AM
billions-family-12217
02/15/2023, 10:10 AM
magnificent-engine-69382
02/15/2023, 11:44 AM
most-jackal-69587
02/15/2023, 12:51 PM
white-horse-97256
02/15/2023, 11:32 PM
stocky-plumber-3084
02/16/2023, 3:03 AM
flat-painter-78331
02/16/2023, 6:16 AM
square-football-37770
02/16/2023, 6:55 AM
datahub docker quickstart
to NOT download and overwrite the existing docker-compose.yaml
file or, alternatively, how would I set/change ENV VARS using Docker Desktop on a Mac? Thanks
square-football-37770
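One possible approach (a sketch, not a confirmed answer; verify flag and variable names against your DataHub CLI version) is to point the quickstart at your own compose file so it is not re-downloaded, and to export environment variables in the shell that launches it:

```shell
# Point the quickstart at a locally edited compose file so the CLI does
# not download and overwrite it (check the exact flag name with
# `datahub docker quickstart --help` for your CLI version):
datahub docker quickstart --quickstart-compose-file ./docker-compose.yaml

# On macOS, Docker Desktop has no global env-var UI, so export variables
# in the shell that launches the quickstart. DATAHUB_MAPPED_GMS_PORT is
# an example variable; check the quickstart docs for the full list:
export DATAHUB_MAPPED_GMS_PORT=8080
datahub docker quickstart
```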
02/16/2023, 9:17 AM
datahub
and it worked fine. Then I deleted the cluster and recreated everything; now I keep getting
"configure-sysctl" in pod "elasticsearch-master-0" not found for default/elasticsearch-master-0 (configure-sysctl)
when installing the prerequisites. Any idea what might be going on?
fierce-baker-1392
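For init-container failures like the one above, a first debugging pass might look like this (standard kubectl commands; the container name is taken from the error message, and the PVC name is an assumption based on the Elasticsearch chart's usual defaults):

```shell
# Inspect the pod's events and init-container status:
kubectl describe pod elasticsearch-master-0

# Fetch logs from the init container named in the error:
kubectl logs elasticsearch-master-0 -c configure-sysctl

# Leftover PersistentVolumeClaims from the deleted cluster can leave the
# new StatefulSet in a broken state. The PVC name below is the usual
# default for the Elasticsearch chart; verify with `kubectl get pvc`
# before deleting anything:
kubectl get pvc
kubectl delete pvc elasticsearch-master-elasticsearch-master-0
```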
02/16/2023, 3:02 PM
white-horse-97256
02/16/2023, 8:03 PM
astonishing-cartoon-6079
02/17/2023, 8:26 AM
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose -p datahub build
, but it fails.
fierce-baker-1392
02/17/2023, 11:02 AM
victorious-spoon-76468
02/17/2023, 1:35 PM
postgresql-setup-job
runs just fine and creates the metadata_aspect_v2
table in the database, but when the GMS pod runs it crashes with the following error:
13:23:22.927 [pool-9-thread-1] ERROR c.d.authorization.DataHubAuthorizer:229 - Failed to retrieve policy urns! Skipping updating policy cache until next refresh. start: 0, count: 30
javax.persistence.PersistenceException: Query threw SQLException:ERROR: relation "metadata_aspect_v2" does not exist
Position: 94 Bind values:[urn:li:dataHubPolicy:7, dataHubPolicyKey, 0, urn:li:dataHubPolicy:7, dataHubPolicyInfo, 0] Query was:select urn, aspect, version, metadata, systemMetadata, createdOn, createdBy, createdFor FROM metadata_aspect_v2 WHERE urn = ? AND aspect = ? AND version = ? UNION ALL SELECT urn, aspect, version, metadata, systemMetadata, createdOn, createdBy, createdFor FROM metadata_aspect_v2 WHERE urn = ? AND aspect = ? AND version = ?
at io.ebean.config.dbplatform.SqlCodeTranslator.translate(SqlCodeTranslator.java:52)
at io.ebean.config.dbplatform.DatabasePlatform.translate(DatabasePlatform.java:219)
at io.ebeaninternal.server.query.CQueryEngine.translate(CQueryEngine.java:149)
at io.ebeaninternal.server.query.DefaultOrmQueryEngine.translate(DefaultOrmQueryEngine.java:43)
at io.ebeaninternal.server.core.OrmQueryRequest.translate(OrmQueryRequest.java:102)
at io.ebeaninternal.server.query.CQuery.createPersistenceException(CQuery.java:702)
at io.ebeaninternal.server.query.CQueryEngine.findMany(CQueryEngine.java:411)
at io.ebeaninternal.server.query.DefaultOrmQueryEngine.findMany(DefaultOrmQueryEngine.java:133)
at io.ebeaninternal.server.core.OrmQueryRequest.findList(OrmQueryRequest.java:459)
at io.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1596)
at io.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1574)
at io.ebeaninternal.server.querydefn.DefaultOrmQuery.findList(DefaultOrmQuery.java:1481)
at com.linkedin.metadata.entity.ebean.EbeanAspectDao.batchGetUnion(EbeanAspectDao.java:360)
at com.linkedin.metadata.entity.ebean.EbeanAspectDao.batchGet(EbeanAspectDao.java:280)
at com.linkedin.metadata.entity.ebean.EbeanAspectDao.batchGet(EbeanAspectDao.java:261)
at com.linkedin.metadata.entity.EntityService.exists(EntityService.java:1624)
at com.linkedin.metadata.shared.ValidationUtils.lambda$validateSearchResult$0(ValidationUtils.java:34)
at java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:176)
at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at com.linkedin.metadata.shared.ValidationUtils.validateSearchResult(ValidationUtils.java:35)
at com.linkedin.metadata.client.JavaEntityClient.search(JavaEntityClient.java:300)
at com.datahub.authorization.PolicyFetcher.fetchPolicies(PolicyFetcher.java:50)
at com.datahub.authorization.PolicyFetcher.fetchPolicies(PolicyFetcher.java:42)
at com.datahub.authorization.DataHubAuthorizer$PolicyRefreshRunnable.run(DataHubAuthorizer.java:222)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "metadata_aspect_v2" does not exist
Position: 94
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:166)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:118)
at io.ebean.datasource.pool.ExtendedPreparedStatement.executeQuery(ExtendedPreparedStatement.java:136)
at io.ebeaninternal.server.query.CQuery.prepareResultSet(CQuery.java:376)
at io.ebeaninternal.server.query.CQuery.prepareBindExecuteQueryWithOption(CQuery.java:324)
at io.ebeaninternal.server.query.CQuery.prepareBindExecuteQuery(CQuery.java:319)
at io.ebeaninternal.server.query.CQueryEngine.findMany(CQueryEngine.java:384)
... 29 common frames omitted
Any idea why this might be happening?
miniature-xylophone-2277
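One way to narrow down a `relation "metadata_aspect_v2" does not exist` error when the setup job apparently succeeded is to confirm that both pods talk to the same database and schema (a debugging sketch; hosts, pod names, and credentials below are placeholders):

```shell
# Check which schema the table actually lives in
# (host/db/user are placeholders for your GMS datasource settings):
psql -h <postgres-host> -U datahub -d datahub \
  -c '\dt *.metadata_aspect_v2'

# PostgreSQL resolves unqualified table names via search_path; if the
# setup job created the table in a schema GMS does not search, the
# query above will show it under that other schema:
psql -h <postgres-host> -U datahub -d datahub -c 'SHOW search_path;'

# Also compare the datasource settings of the GMS pod with those of the
# setup job to rule out pointing at different databases:
kubectl describe pod <gms-pod> | grep -i -A1 EBEAN_DATASOURCE
```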
02/17/2023, 6:07 PM
datahub-actions
container is not down. So I was wondering how I can trace back and debug what the issue is.
Thanks in advance for your help.
white-horse-97256
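For tracing a misbehaving datahub-actions container in a quickstart-style deployment, a few standard docker commands usually give a first answer (the container name is taken from the quickstart compose file; adjust if yours differs):

```shell
# Check whether the container is running and how it last exited:
docker ps -a --filter name=datahub-actions

# Tail its logs to see the failing action or ingestion run:
docker logs --tail 200 -f datahub-actions

# If it keeps restarting, inspect the exit code and whether it was
# killed for exceeding its memory limit:
docker inspect datahub-actions --format '{{.State.ExitCode}} {{.State.OOMKilled}}'
```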
02/17/2023, 11:00 PM
helm install prerequisites datahub/datahub-prerequisites
With this command, is there a way we can configure DataHub to use a MySQL server hosted on-prem in our org?
red-waitress-53338
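A common pattern (a sketch; key names should be verified against the values.yaml of your datahub-helm chart version, and the hostname below is a placeholder) is to disable the bundled MySQL in the prerequisites chart and point the main chart at the external server:

```shell
# Disable the MySQL that the prerequisites chart would otherwise deploy:
helm install prerequisites datahub/datahub-prerequisites \
  --set mysql.enabled=false

# Point the main chart at the on-prem server
# (mysql.internal.example.org is a placeholder hostname):
helm install datahub datahub/datahub \
  --set global.sql.datasource.host="mysql.internal.example.org:3306" \
  --set global.sql.datasource.hostForMysqlClient="mysql.internal.example.org" \
  --set global.sql.datasource.url="jdbc:mysql://mysql.internal.example.org:3306/datahub" \
  --set global.sql.datasource.username="datahub"
```

Passwords are typically supplied via a Kubernetes secret reference rather than `--set`, so check the chart's documented secret keys as well.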
02/18/2023, 11:15 PM
bright-receptionist-94235
02/21/2023, 8:45 AM
gifted-diamond-19544
02/21/2023, 11:24 AM
fierce-guitar-16421
02/21/2023, 11:57 AM
docker
fails and complains that the flag --load
is unknown. It looks like the Palantir docker plugin translates the task option load(true)
into the flag --load
for the underlying docker CLI, but then the underlying docker does not recognize it. (See following pic.)
Any thoughts on whether this could be fixed, or am I doing something wrong? Thanks!
My setup:
• Machine: Mac 2019 Intel
• Docker version: 20.10.23
shy-dog-84302
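For context (a guess based on the error, not a confirmed fix): `--load` is a `docker buildx build` flag, so a plain `docker build` rejects it unless buildx is acting as the default build backend. One way to check and switch:

```shell
# Verify buildx is available (it ships with Docker Desktop >= 19.03):
docker buildx version

# Make buildx the default backend for `docker build`, so flags like
# --load are accepted when the plugin shells out to the CLI:
docker buildx install

# Or invoke buildx explicitly to confirm the flag works at all
# (image tag is just an example):
docker buildx build --load -t myimage:dev .
```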
02/21/2023, 7:05 PM
METADATA_CHANGE_LOG_TIMESERIES_TOPIC_NAME=MetadataChangeLog_Timeseries_v1
) referred here, unlike the other Kafka topics (which is 7 days)?
And what is the consequence of also limiting this one to 7 days?
eager-electrician-64984
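Topic retention can be inspected and changed per topic with the standard Kafka tooling (a sketch; the script name and path vary by Kafka distribution, and the broker address is a placeholder):

```shell
# Inspect the topic's current retention override, if any:
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name MetadataChangeLog_Timeseries_v1 \
  --describe

# Set retention to 7 days (7*24*60*60*1000 ms = 604800000):
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name MetadataChangeLog_Timeseries_v1 \
  --alter --add-config retention.ms=604800000
```

Note that shortening retention only bounds how long unconsumed change-log events survive; consumers that fall more than 7 days behind would lose those events.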
02/22/2023, 8:16 AM
blue-microphone-24514
02/23/2023, 3:45 PM
datahub-frontend.oidcAuthentification.scope
is explicitly set to the recommended openid profile email ...
blue-microphone-24514
02/23/2023, 3:48 PM
blue-microphone-24514
02/23/2023, 3:49 PM
brief-oyster-50637
02/23/2023, 10:55 PM
quickstart
, in order to validate DataHub.
Now we want to gradually improve this infrastructure to be more reliable, so we thought of keeping most of the quickstart setup and starting off by just migrating the database to a managed MySQL service (Google Cloud SQL). So we’ve configured the docker-compose file to point to this instance instead of running the mysql container in the same VM. However, we’ve been experiencing some issues in the db initialization (e.g. it doesn’t create the datahub users, not even the user “datahub”).
But instead of asking how to solve this initialization problem, I’d like to take a step back and ask if anyone has already tried this type of deployment in production: quickstart + managed MySQL service. Does it make sense to try to make this setup work for a light production deployment? Are there any major concerns related to this setup? Thank you!
rough-island-44285
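On the initialization issue mentioned above: the quickstart's mysql-setup container runs its init script against the bundled MySQL, so an external Cloud SQL instance typically needs the same objects created by hand. A minimal sketch, assuming the quickstart's default database/user names (the host is a placeholder, and the password should obviously not stay at the default):

```shell
# Create the database and application user that the quickstart images
# expect (names follow the quickstart defaults; adjust to taste):
mysql -h <cloud-sql-host> -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS datahub CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
CREATE USER IF NOT EXISTS 'datahub'@'%' IDENTIFIED BY 'datahub';
GRANT ALL PRIVILEGES ON datahub.* TO 'datahub'@'%';
FLUSH PRIVILEGES;
SQL
```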
02/24/2023, 5:59 AM
./gradlew build
trying to get DataHub deployed locally. But it has been running for more than an hour; can anyone share how long it would normally take? Any help would be much appreciated.
limited-forest-73733
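A full first `./gradlew build` compiles every module and downloads all dependencies, so well over an hour on a laptop is not unusual; subsequent builds are much faster thanks to Gradle's incremental caching. Some standard ways to cut the time (module path below is an example, not a recommendation from the thread):

```shell
# Skip tests and build modules concurrently:
./gradlew build -x test --parallel

# Or build only the module you need, e.g. the GMS war; list available
# modules first with `./gradlew projects`:
./gradlew :metadata-service:war:build -x test
```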
02/24/2023, 6:21 AM
flat-painter-78331
02/24/2023, 8:09 AM
gifted-diamond-19544
02/24/2023, 9:19 AM