# ui
w
I’m seeing odd behavior with v0.7.1 in which the gms is seeing requests that are too long:
2021-05-18 14:59:52.109:WARN:oejh.HttpParser:qtp544724190-13: URI is too large >8192
This throws a 414 response and causes the frontend to error out:
Caused by: com.linkedin.r2.RemoteInvocationException: com.linkedin.r2.RemoteInvocationException: Received error 414 from server for URI <http://datahub-datahub-gms:8080/datasets>
. I am trying to figure out how to increase the logging to see the actual urls being requested. Any thoughts?
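One way to see the full request lines, assuming the gms container's Jetty logging goes through Jetty's built-in StdErrLog (if it routes through logback instead, raise the level for `org.eclipse.jetty.http` in the logback config), is a sketch like:

```shell
# Sketch, not verified against this chart: bump Jetty's logger to DEBUG so
# HttpParser prints the full request line instead of the truncated WARN.
# JAVA_OPTS is an assumed hook for passing JVM flags into the gms container.
export JAVA_OPTS="$JAVA_OPTS -Dorg.eclipse.jetty.LEVEL=DEBUG"
echo "$JAVA_OPTS"
```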
FYI @early-lamp-41924
e
cc @green-football-43791 @big-carpet-38439 any ideas?
b
hmm - which query are you sending? is this a batch retrieve?
w
yeah
16:36:18 [Thread-599] WARN  n.g.e.SimpleDataFetcherExceptionHandler - Exception while fetching data (/dataset/upstreamLineage/entities[3]/entity/downstreamLineage/entities[15]/entity) : java.lang.RuntimeException: Failed to retrieve entities of type Dataset
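That lineage fan-out suggests a batch-get whose URI carries one URL-encoded URN per entity. A back-of-the-envelope check (the URN and endpoint shape below are made-up samples, not the actual request) shows how quickly that crosses Jetty's default 8192-byte cap:

```shell
# Hypothetical batch-get URI: each URL-encoded dataset URN adds ~70 bytes,
# so a lineage fan-out of a hundred-plus entities blows past 8192.
base='http://datahub-datahub-gms:8080/datasets?ids=List('
urn='urn%3Ali%3Adataset%3A(urn%3Ali%3AdataPlatform%3Ahive,SampleTable,PROD)'
uri="$base"
i=0
while [ "$i" -lt 120 ]; do
  uri="${uri}${urn},"
  i=$((i + 1))
done
echo "URI length: ${#uri}"   # exceeds the 8192-byte default
```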
g
Hey Zack - this issue was likely fixed in a commit after 0.7.1
w
hmmm ok, should I revert to 0.7.0 or is that also affected?
g
would you be open to running on the latest hash?
w
sure if there aren’t any significant changes to the data models or anything
b
Yep this should have been fixed
g
nope- there shouldn't have been any backwards incompatible changes since then
w
am I just using the latest hash for the frontend, or the latest hash for the gms, frontend, mce, and mae?
b
should just be frontend, right @green-football-43791 ?
g
I would use the latest hash for everything
its possible that you could get away with just the frontend being the latest hash, but you might see some strange behavior
for example, some features may not work or may behave in strange ways.
w
ok, I’ll update the hashes for the gms, mce, and mae
I updated to the latest hash for the frontend but the deployment threw an error (
Configuration error: Configuration error[datahub-frontend/conf/application.conf: 143: Could not resolve substitution to a value: ${KAFKA_BOOTSTRAP_SERVER}]
) I’m guessing I need to add that as an env var to the chart?
similar to the way it’s set up for the gms?
g
hmm- @big-carpet-38439 any idea why a default wouldn't be picked up?
Zack you can try setting that to 9092, but I'm surprised you got that error to begin with
b
If you are deploying with Helm, yes you'll need to introduce that as an env var on the chart... Let me provide the value we use
g
or rather broker:29092
b
Here are the new configs for the datahub-frontend container:
```
# Required Kafka Producer Configs
KAFKA_BOOTSTRAP_SERVER=broker:29092
DATAHUB_TRACKING_TOPIC=DataHubUsageEvent_v1

# Required Elastic Client Configuration (Analytics)
ELASTIC_CLIENT_HOST=elasticsearch
ELASTIC_CLIENT_PORT=9200
```
w
ah so the analytics that @early-lamp-41924 referred to before are in these hashes. This means I need to create the topic for that. Is it just that single topic, `DataHubUsageEvent_v1`?
e
Hmm, I set this in the helm charts. Checking
@white-beach-27328 you are using the latest helm charts right?
I see
```yaml
- name: KAFKA_BOOTSTRAP_SERVER
  value: "{{ .Values.global.kafka.bootstrap.server }}"
- name: ELASTIC_CLIENT_HOST
  value: "{{ .Values.global.elasticsearch.host }}"
- name: ELASTIC_CLIENT_PORT
  value: "{{ .Values.global.elasticsearch.port }}"
- name: DATAHUB_TRACKING_TOPIC
  value: "DataHubUsageEvent_v1"
- name: DATAHUB_ANALYTICS_ENABLED
  value: "{{ .Values.global.datahub_analytics_enabled }}"
```
w
no I’m using the charts from 0.7.1
e
ah that must be why
w
but I didn’t see the `KAFKA_BOOTSTRAP_SERVER` on `master` either
w
yeah must have missed it or been on a different tag
e
Can you set global.datahub_analytics_enabled to false after merging with latest master?
You can set it to true once you have DataHubUsageEvent_v1 topic set up!
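In values terms, the toggle could look like this sketch (key name taken from the chart; this assumes the chart wires it through `global.*`, and is not the full file):

```yaml
# Sketch of the relevant values.yaml key (placeholder, not the full values file)
global:
  datahub_analytics_enabled: false  # set to true once DataHubUsageEvent_v1 exists
```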
w
ok, any reason I shouldn’t have compaction on this topic?
e
Nope
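For reference, creating that topic could look something like the following sketch (broker address taken from the configs earlier in the thread; partition and replication counts are placeholders to tune for your cluster, and the compacted cleanup policy reflects the question above):

```
# Sketch: create the usage-event topic before re-enabling analytics.
kafka-topics.sh --create \
  --bootstrap-server broker:29092 \
  --topic DataHubUsageEvent_v1 \
  --partitions 1 \
  --replication-factor 1 \
  --config cleanup.policy=compact
```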
w
cool, btw the setup jobs are missing the imagePullSecrets, should I fix that in a PR or are these K8s jobs getting replaced with something else?
e
Please fix! Thanks for finding these missing pieces!
w
happy to help where I can