faint-school-61982
06/07/2023, 3:27 PM
straight-psychiatrist-62825
06/07/2023, 4:21 PM
proud-dusk-671
06/08/2023, 5:49 AM
boundless-piano-94348
06/08/2023, 8:56 AM
1. The default graph_service_impl changed from neo4j to elasticsearch starting from v0.10.0, while the default value in the subcharts is still neo4j. Is there a reason for the change? Also, the docs mention that neo4j is still the default for backward compatibility. What is the recommended graph_service_impl going forward?
2. In what situations does Neo4j have an advantage over ES? In which specific features and scenarios is Neo4j more beneficial?
Another question: what is the recommended schema registry between INTERNAL and Kafka, and what are the advantages and disadvantages of each?
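For readers following along: the setting in question lives under the chart's global values. A minimal sketch, assuming the current datahub-helm key name:

global:
  graph_service_impl: elasticsearch  # the other accepted value is neo4j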
red-kilobyte-70424
06/08/2023, 8:57 AM
chilly-boots-22585
06/08/2023, 12:14 PM
chilly-boots-22585
06/08/2023, 1:00 PM
chilly-boots-22585
06/08/2023, 2:13 PM
straight-psychiatrist-62825
06/08/2023, 10:00 PM
bland-gigabyte-28270
06/09/2023, 7:22 AM
With the INTERNAL schema registry, the system update job and the GMS pod fail.
System update job log:
2023-06-09 07:16:51,099 [main] INFO c.l.d.u.impl.DefaultUpgradeReport:16 - Executing Step 4/5: DataHubStartupStep...
org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.io.IOException: No schema registered under subject!
GMS pod cannot connect to itself:
2023-06-09 07:21:57,361 [R2 Nio Event Loop-1-1] WARN c.l.r.t.h.c.c.ChannelPoolLifecycle:139 - Failed to create channel, remote=localhost/127.0.0.1:8080
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8080
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:829)
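A note on why these two failures may travel together: with the INTERNAL registry, GMS itself serves the schema-registry API, so the system update job and Kafka consumers can only resolve schemas once GMS is reachable. A minimal sketch of the relevant values, assuming the datahub-helm key names (the GMS service name is an example):

global:
  kafka:
    schemaregistry:
      type: INTERNAL
      # schema lookups go through GMS itself, so GMS must be up first
      url: http://datahub-datahub-gms:8080/schema-registry/api/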
handsome-park-80602
06/09/2023, 4:14 PM
magnificent-honey-40185
06/09/2023, 9:29 PM
cuddly-arm-8412
06/12/2023, 2:07 PM
bitter-waitress-17567
06/13/2023, 10:37 AM
/usr/local/lib/python3.10/site-packages/pyspark/jars/commons-text-1.6.jar
future-yak-13169
06/13/2023, 2:28 PM
2023-06-13 12:29:26,953 [ThreadPoolTaskExecutor-1] ERROR o.s.k.l.KafkaMessageListenerContainer:149 - Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'SerializationException's directly; please consider configuring an 'ErrorHandlingDeserializer' in the value and/or key deserializer
at org.springframework.kafka.listener.DefaultErrorHandler.handleOtherException(DefaultErrorHandler.java:151)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1815)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1303)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition MetadataChangeLog_Versioned_v1-0 at offset 11251397. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro unknown schema for id 5
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
We installed the prerequisites with the latest chart and Kafka 3.4.0 and tried installing 10.3 again, but it still failed with the same message.
Please advise on the correct working combination; our application is currently down.
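One way to narrow down the "Connection refused" above is to confirm what the consumers can actually reach in-cluster. A debugging sketch, assuming a Confluent registry service named prerequisites-cp-schema-registry on port 8081 (substitute your own service and namespace):

# Hit the registry's REST API from a throwaway pod; "connection refused"
# here points at the registry service/URL config rather than DataHub itself.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv http://prerequisites-cp-schema-registry:8081/subjects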
few-sugar-84064
06/14/2023, 2:45 AM
cuddly-arm-8412
06/14/2023, 6:40 AM
proud-dusk-671
06/14/2023, 10:45 AM
fancy-crayon-39356
06/14/2023, 2:39 PM
We have had v0.10.2 running in production (k8s deployment) for quite a while now. However, we are implementing basic auth for the schema registry, and I would like to know whether DataHub supports that. I've tried setting the following in `values.yaml`:
credentialsAndCertsSecrets:
  name: my-secret
  secureEnv:
    schema.registry.basic.auth.user.info: schema_registry_basic_auth_user_info
This resulted in the creation of the SPRING_KAFKA_PROPERTIES_SCHEMA_REGISTRY_BASIC_AUTH_USER_INFO env variable in every DataHub component, pointing to the right secret and secret key. However, I still get 401s from our Schema Registry, meaning basic auth was not applied.
If it is supported, how can we define it correctly?
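A configuration that has worked for Confluent-style registries pairs the registry-client property basic.auth.user.info with a basic.auth.credentials.source override; without the latter, the client never sends credentials, which would explain the 401s. A sketch assuming the datahub-helm conventions, reusing my-secret and the schema_registry_basic_auth_user_info key from the snippet above:

springKafkaConfigurationOverrides:
  basic.auth.credentials.source: USER_INFO
credentialsAndCertsSecrets:
  name: my-secret
  secureEnv:
    # secret value formatted as user:password
    basic.auth.user.info: schema_registry_basic_auth_user_info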
straight-psychiatrist-62825
06/14/2023, 3:17 PM
best-wire-59738
06/15/2023, 4:36 AM
./gradlew build.
When I tried to build the frontend image using the Dockerfile below, the image build gets stuck at the gradle build step and doesn't move forward. I checked the logs using --debug mode but couldn't figure out the actual issue. I have also attached the logs for reference. Could you please help me with this issue?
docker buildx build . -t datahub --platform=linux/arm64
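A debugging sketch for a buildx build that appears hung: plain progress streams the full gradle output instead of the collapsed status lines, and disabling the cache rules out a stale layer. Note that cross-building linux/arm64 on an x86 host runs gradle under QEMU emulation, which can be slow enough to look like a hang:

# stream full build output and skip the layer cache
docker buildx build . -t datahub --platform=linux/arm64 \
  --progress=plain --no-cache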
cold-tent-85599
06/15/2023, 6:50 AM
stocky-guitar-68560
06/15/2023, 9:36 AM
orange-gpu-90973
06/15/2023, 2:57 PM
cuddly-arm-8412
06/19/2023, 9:36 AM
creamy-van-28626
06/19/2023, 8:04 PM
brainy-teacher-89198
06/20/2023, 12:45 AM
I'm setting up the datahub-rest-default connection from Airflow to the GMS deployed to Kubernetes. I'm facing the following error (likely to do with the ingress proxy?); any guidance would be appreciated!
raise InvalidSchema(f"No connection adapters were found for {url!r}")
requests.exceptions.InvalidSchema: No connection adapters were found for '{my domain:gms service is populated here}:8080/aspects?action=ingestProposal'
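For what it's worth, requests raises InvalidSchema precisely when the URL lacks an http:// or https:// scheme, which matches the bare domain:port in the error above, so the usual fix is to include the scheme in the Airflow connection host. A sketch (the host is a placeholder, and the exact conn-type should be verified against the DataHub Airflow docs):

airflow connections add 'datahub_rest_default' \
  --conn-type 'datahub_rest' \
  --conn-host 'http://datahub-gms.example.com:8080'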
flat-engineer-75197
06/20/2023, 6:31 PM
Is there a plan to update acryldata/datahub-actions in the helm chart? It's currently on v0.0.11, which has the old 0.10.0.6 CLI.
Ref: https://github.com/acryldata/datahub-helm/blob/master/charts/datahub/values.yaml#L47
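As a stopgap, the image tag can be overridden in your own values without waiting for a chart bump; a minimal sketch, assuming the acryl-datahub-actions subchart key from the linked values.yaml (the tag shown is illustrative, not a recommendation):

acryl-datahub-actions:
  image:
    repository: acryldata/datahub-actions
    tag: v0.0.13  # hypothetical newer tag carrying a newer CLI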
refined-energy-76018
06/21/2023, 2:31 AM
Each of our version upgrades triggered a reindex via the datahub-system-update-job, that is, v0.9.3 -> v0.10.0 -> v0.10.1 -> v0.10.3. Is this expected? https://datahubproject.io/docs/how/updating-datahub/ says only v0.10.0 should have caused downtime. Is this related to the now-fixed retention bug in the DataHubUpgradeHistory_v1 topic? What confuses me is that when I made changes but kept the DataHub version the same, it wouldn't trigger a reindex even past the previous DataHubUpgradeHistory_v1 default retention period of 7 days.
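On the retention angle: the upgrade history topic is meant to be retained indefinitely, so one concrete check is the topic's effective retention. A sketch using the stock Kafka CLI (the broker address is a placeholder):

# Show per-topic config overrides currently in effect
kafka-configs.sh --bootstrap-server broker:9092 --entity-type topics \
  --entity-name DataHubUpgradeHistory_v1 --describe
# Set infinite retention so upgrade markers are never expired
kafka-configs.sh --bootstrap-server broker:9092 --entity-type topics \
  --entity-name DataHubUpgradeHistory_v1 --alter --add-config retention.ms=-1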
fierce-agent-11572
06/21/2023, 3:12 PM