Thread
#getting-started

    acoustic-printer-83045

    1 year ago
    However, I'm unable to see anything in the DataHub UI, i.e. no datasets are present. I've run my dbt ingestion toolchain plus a Postgres ingestion setup I created to tweak the dbt metadata ingest. Thanks a bunch!

    gray-shoe-75895

    1 year ago
    Huh, I just tried this locally and was unable to reproduce it. The kafka-topics-ui and schema-registry-ui containers were running in my case, although I agree that it likely shouldn't have made a difference.
    Can you check if the records successfully made it to the MySQL database? I tend to use mycli, but any SQL client should work:
    ❯ mycli mysql://datahub:datahub@127.0.0.1/datahub
    MySQL datahub@127.0.0.1:datahub> SELECT * FROM metadata_aspect;
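    If you don't have mycli installed, the same check can be run with the stock mysql client inside the quickstart's MySQL container - a minimal sketch, assuming the container is named mysql (check docker ps for the actual name on your setup):
    # Count the rows that ingestion wrote into the aspect table
    docker exec mysql \
      mysql -u datahub -pdatahub datahub \
      -e "SELECT COUNT(*) FROM metadata_aspect;"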

    acoustic-printer-83045

    1 year ago
    Apologies, that platform is shut down; I probably won't be able to poke at it until near EOD. Will look then.

    big-carpet-38439

    1 year ago
    Hey Gary - Were you able to give this another shot?

    acoustic-printer-83045

    1 year ago
    Unfortunately not today 😞 I'll carve out some time tomorrow to take another look.

    big-carpet-38439

    1 year ago
    Of course - just want to make sure we can help you get it figured out 🙂

    acoustic-printer-83045

    1 year ago
    Still not working in the UI. However, I can query the DB as Harshal suggested and I see records.
    I see rows in the aspect table, but not in the index table - not sure if that's meaningful.
    That was starting from updating to latest master, running quickstart, and then running ingest.
    I'm getting intermittent Elasticsearch and Kibana error logs:
    elasticsearch             | {"type": "server", "timestamp": "2021-04-28T05:19:04,014Z", "level": "ERROR", "component": "o.e.x.i.IndexLifecycleRunner", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "policy [kibana-event-log-policy] for index [.kibana-event-log-7.9.3-000001] failed on step [{\"phase\":\"hot\",\"action\":\"rollover\",\"name\":\"check-rollover-ready\"}]. Moving to ERROR step", "cluster.uuid": "hhOxH3oLQEqw9QprYAeHfQ", "node.id": "WI4z8In_QjiGrEQEuMgang" , 
    elasticsearch             | "stacktrace": ["org.elasticsearch.cluster.block.ClusterBlockException: index [.kibana-event-log-7.9.3-000001] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];",
    elasticsearch             | "at org.elasticsearch.cluster.block.ClusterBlocks.indicesBlockedException(ClusterBlocks.java:222) ~[elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.checkBlock(TransportRolloverAction.java:93) ~[elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.admin.indices.rollover.TransportRolloverAction.checkBlock(TransportRolloverAction.java:60) ~[elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:142) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:133) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:110) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:59) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:179) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.ActionFilter$Simple.apply(ActionFilter.java:53) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:177) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:155) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:83) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:409) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin(ClientHelper.java:109) [x-pack-core-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.core.ClientHelper.executeWithHeadersAsync(ClientHelper.java:170) [x-pack-core-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.ilm.LifecyclePolicySecurityClient.doExecute(LifecyclePolicySecurityClient.java:51) [x-pack-ilm-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:409) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1274) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.rolloverIndex(AbstractClient.java:1786) [elasticsearch-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:141) [x-pack-core-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:174) [x-pack-ilm-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:329) [x-pack-ilm-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:267) [x-pack-ilm-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183) [x-pack-core-7.9.3.jar:7.9.3]",
    elasticsearch             | "at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:211) [x-pack-core-7.9.3.jar:7.9.3]",
    elasticsearch             | "at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]",
    elasticsearch             | "at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]",
    elasticsearch             | "at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]",
    elasticsearch             | "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]",
    elasticsearch             | "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]",
    elasticsearch             | "at java.lang.Thread.run(Thread.java:832) [?:?]"] }
    elasticsearch             | {"type": "server", "timestamp": "2021-04-28T05:19:04,018Z", "level": "ERROR", "component": "o.e.x.i.IndexLifecycleRunner", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "policy [ilm-history-ilm-policy] for index [ilm-history-2-000001] failed on step [{\"phase\":\"hot\",\"action\":\"rollover\",\"name\":\"check-rollover-ready\"}]. Moving to ERROR step", "cluster.uuid": "hhOxH3oLQEqw9QprYAeHfQ", "node.id": "WI4z8In_QjiGrEQEuMgang" , 
    elasticsearch             | "stacktrace": ["org.elasticsearch.cluster.block.ClusterBlockException: index [ilm-history-2-000001] blocked by: [TOO_MANY_REQUESTS/12/disk usage exceeded flood-stage watermark, index has read-only-allow-delete block];",
    kibana                    | {"type":"log","@timestamp":"2021-04-28T05:19:35Z","tags":["error","plugins","taskManager","taskManager"],"pid":7,"message":"Failed to poll for work: Error: Request Timeout after 30000ms"}
    mysql                     | 2021-04-28T05:19:49.697620Z 111 [Note] Bad handshake
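    For reference, the flood-stage watermark message above means Elasticsearch has marked its indices read-only-allow-delete because the disk it lives on is nearly full. A minimal sketch of how to confirm that from the host, assuming the quickstart's default Elasticsearch port 9200:
    # Show disk usage per node as Elasticsearch sees it
    curl -s 'http://localhost:9200/_cat/allocation?v'
    # List indices that currently carry the read-only-allow-delete block
    curl -s 'http://localhost:9200/_all/_settings/index.blocks.read_only_allow_delete?pretty'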

    gray-shoe-75895

    1 year ago
    I don’t really know what’s going on there. cc @early-lamp-41924 - have you seen this before?

    early-lamp-41924

    1 year ago
    Ah, so this happened to me last time when I ran out of Docker disk space. Can you run
    docker system prune
    and then restart the elasticsearch container?
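    A sketch of that sequence on the host, assuming the quickstart container name seen in the logs above (elasticsearch) and the default port 9200; the last step forces the block off, though Elasticsearch 7.4+ should clear it on its own once disk pressure drops:
    # Reclaim space from unused containers, images, and networks
    docker system prune
    # Restart the Elasticsearch container from the quickstart
    docker restart elasticsearch
    # Optionally clear the read-only-allow-delete block left by the flood-stage watermark
    curl -s -X PUT 'http://localhost:9200/_all/_settings' \
      -H 'Content-Type: application/json' \
      -d '{"index.blocks.read_only_allow_delete": null}'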

    acoustic-printer-83045

    1 year ago
    I'm running on Linux, so I don't think there are limits on resource usage. But agreed, it looks like a system resource issue. I'll poke at it from that angle this evening. Thanks!
    Yup, it was resources. Docker on Fedora defaults to using /var/lib/docker, which on my machine defaulted to 64 GB. My other primary machine is an OS X laptop, where Docker installs in the home folder. Thanks, and apologies.
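    For anyone hitting the same thing, a quick way to check whether Docker's storage location is the culprit - a sketch assuming the default Fedora layout mentioned above:
    # How much space Docker itself thinks it is using
    docker system df
    # Free space on the filesystem that backs /var/lib/docker
    df -h /var/lib/docker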

    early-lamp-41924

    1 year ago
    Glad it worked out! I struggled for a few days last time with the same issue as well.