prehistoric-room-17640
02/14/2022, 2:10 PM

prehistoric-room-17640
02/14/2022, 2:14 PM
14:10:14.862 [Thread-3020] WARN org.elasticsearch.client.RestClient:65 - request [POST http://elasticsearch-master:9200/*index_v2/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true] returned 2 warnings: [299 Elasticsearch-7.16.2-2b937c44140b6559905130a8650c64dbd0879cfb "Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.16/security-minimal-setup.html to enable security."],[299 Elasticsearch-7.16.2-2b937c44140b6559905130a8650c64dbd0879cfb "[ignore_throttled] parameter is deprecated because frozen indices have been deprecated. Consider cold or frozen tiers in place of frozen indices."]
brave-businessperson-3969
02/14/2022, 8:37 PM

rich-policeman-92383
02/17/2022, 7:29 AM

damp-minister-31834
02/18/2022, 3:40 AM

gifted-piano-21322
02/21/2022, 10:13 AM

broad-thailand-41358
02/22/2022, 6:49 PM

able-rain-74449
03/01/2022, 2:08 PM
prerequisites-cp-schema-registry
not sure if that's Kafka not connecting.
➜ 01pre-req kubectl logs datahub-prerequisites-cp-schema-registry-65d8777cc8-m88mn cp-schema-registry-server
===> User
uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
===> Configuring ...
===> Running preflight checks ...
===> Check if Kafka is healthy ...
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [z-1.datahub-demo-cluster-......................OMITTED:9092]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 6.1.0-ccs
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 5496d92defc9bbe4
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1646143502378
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.TimeoutException: Call(callName=fetchMetadata, deadlineMs=1646143532389, tries=1, nextAllowedTryMs=1646143532490) timed out at 1646143532390 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call. Call: fetchMetadata
[main] ERROR io.confluent.admin.utils.ClusterStatus - Error while getting broker list.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1646143542388, tries=1, nextAllowedTryMs=1646143542489) timed out at 1646143542389 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:149)
at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:150)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listNodes, deadlineMs=1646143542388, tries=1, nextAllowedTryMs=1646143542489) timed out at 1646143542389 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
[main] INFO io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ...
[main] ERROR io.confluent.admin.utils.ClusterStatus - Expected 1 brokers but found only 0. Brokers found [].
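The listNodes/fetchMetadata timeouts above mean the AdminClient never reached any broker at all, which is usually network-level (security group, wrong endpoint, wrong port) rather than Kafka itself. One detail worth double-checking: on Amazon MSK, broker endpoints are prefixed b-1., b-2., … while z-1. hosts are the ZooKeeper nodes, which do not listen on 9092. A raw TCP reachability check is a quick way to tell the two failure modes apart; this is a sketch with a hypothetical helper, where BROKER_HOST stands in for the omitted endpoint:

```shell
# Minimal TCP probe: succeeds only if host:port accepts a connection within 5s.
# Uses bash's /dev/tcp pseudo-device, so it needs no extra tools in the image.
check_tcp() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# To run it from inside the schema-registry pod (pod name from the logs above):
#   kubectl exec -n datahub datahub-prerequisites-cp-schema-registry-65d8777cc8-m88mn \
#     -c cp-schema-registry-server -- timeout 5 bash -c 'exec 3<>/dev/tcp/BROKER_HOST/9092'
check_tcp BROKER_HOST 9092 && echo reachable || echo unreachable
```

If the probe succeeds but the AdminClient still times out, the problem is more likely a listener/advertised-listener or TLS mismatch than plain connectivity.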
able-rain-74449
03/01/2022, 2:09 PM

miniature-account-72792
03/01/2022, 2:39 PM
values.yaml
of the prerequisites?

able-rain-74449
03/01/2022, 2:42 PM

able-rain-74449
03/01/2022, 2:43 PM
---
# Source: datahub-prerequisites/charts/cp-helm-charts/charts/cp-schema-registry/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: datahub-prerequisites-cp-schema-registry
  namespace: datahub
  labels:
    app: cp-schema-registry
    chart: cp-schema-registry-0.1.0
    release: datahub-prerequisites
    heritage: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cp-schema-registry
      release: datahub-prerequisites
  template:
    metadata:
      labels:
        app: cp-schema-registry
        release: datahub-prerequisites
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "5556"
    spec:
      containers:
        - name: prometheus-jmx-exporter
          image: "solsson/kafka-prometheus-jmx-exporter@sha256:6f82e2b0464f50da8104acd7363fb9b995001ddff77d248379f8788e78946143"
          imagePullPolicy: "IfNotPresent"
          command:
            - java
            - -XX:+UnlockExperimentalVMOptions
            - -XX:+UseCGroupMemoryLimitForHeap
            - -XX:MaxRAMFraction=1
            - -XshowSettings:vm
            - -jar
            - jmx_prometheus_httpserver.jar
            - "5556"
            - /etc/jmx-schema-registry/jmx-schema-registry-prometheus.yml
          ports:
            - containerPort: 5556
          resources:
            {}
          volumeMounts:
            - name: jmx-config
              mountPath: /etc/jmx-schema-registry
        - name: cp-schema-registry-server
          image: "confluentinc/cp-schema-registry:6.1.0"
          imagePullPolicy: "IfNotPresent"
          ports:
            - name: schema-registry
              containerPort: 8081
              protocol: TCP
            - containerPort: 5555
              name: jmx
          resources:
            {}
          env:
            - name: SCHEMA_REGISTRY_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SCHEMA_REGISTRY_LISTENERS
              value: http://0.0.0.0:8081
            - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
              value: z-1.datahub-demo-cluster-1..............OMITTED..........:9092 #:9092
            - name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
              value: datahub-prerequisites
            - name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
              value: "true"
            - name: SCHEMA_REGISTRY_HEAP_OPTS
              value: "-Xms512M -Xmx512M"
            - name: JMX_PORT
              value: "5555"
      volumes:
        - name: jmx-config
          configMap:
            name: datahub-prerequisites-cp-schema-registry-jmx-configmap
able-rain-74449
03/01/2022, 2:44 PM
---
# Source: datahub-prerequisites/charts/cp-helm-charts/charts/cp-schema-registry/templates/jmx-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: datahub-prerequisites-cp-schema-registry-jmx-configmap
  namespace: datahub
  labels:
    app: cp-schema-registry
    chart: cp-schema-registry-0.1.0
    release: datahub-prerequisites
    heritage: Helm
data:
  jmx-schema-registry-prometheus.yml: |+
    jmxUrl: service:jmx:rmi:///jndi/rmi://localhost:5555/jmxrmi
    lowercaseOutputName: true
    lowercaseOutputLabelNames: true
    ssl: false
    whitelistObjectNames:
      - kafka.schema.registry:type=jetty-metrics
      - kafka.schema.registry:type=master-slave-role
      - kafka.schema.registry:type=jersey-metrics
    rules:
      - pattern: 'kafka.schema.registry<type=jetty-metrics>([^:]+):'
        name: "cp_kafka_schema_registry_jetty_metrics_$1"
      - pattern: 'kafka.schema.registry<type=master-slave-role>([^:]+):'
        name: "cp_kafka_schema_registry_master_slave_role"
      - pattern: 'kafka.schema.registry<type=jersey-metrics>([^:]+):'
        name: "cp_kafka_schema_registry_jersey_metrics_$1"
able-rain-74449
03/01/2022, 2:45 PM
---
# Source: datahub-prerequisites/charts/cp-helm-charts/charts/cp-schema-registry/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: datahub-prerequisites-cp-schema-registry
  namespace: datahub
  labels:
    app: cp-schema-registry
    chart: cp-schema-registry-0.1.0
    release: datahub-prerequisites
    heritage: Helm
spec:
  ports:
    - name: schema-registry
      port: 8081
    - name: metrics
      port: 5556
  selector:
    app: cp-schema-registry
    release: datahub-prerequisites
able-rain-74449
03/01/2022, 2:49 PM
datahub-elasticsearch-master-2
not ready 🤔

red-napkin-59945
03/01/2022, 9:30 PM

miniature-account-72792
03/02/2022, 7:12 AM
datahub-upgrade-job
is failing with the following error:
Cannot connect to GMS at host datahub-datahub-gms port 8080. Make sure GMS is on the latest version and is running at that host before starting the migration.
Is this also related to the fact that I use certificates?
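For the datahub-upgrade-job error above, it can help to confirm GMS is actually serving on 8080 before rerunning the job. A sketch, assuming the service name from the error message (datahub-datahub-gms) and GMS's /health endpoint; if TLS certificates are involved, the curl flags would need the truststore's CA (or -k for a quick test):

```shell
# Probe an HTTP health endpoint; succeeds only on a 2xx/3xx response within 5s.
probe_health() {
  curl -sf --max-time 5 "$1" >/dev/null
}

# With a port-forward running in another terminal:
#   kubectl -n datahub port-forward svc/datahub-datahub-gms 8080:8080
probe_health http://localhost:8080/health && echo "GMS is up" || echo "GMS not reachable"
```

If the probe fails through the port-forward, the upgrade job's connection settings are not the problem; GMS itself is down or still starting.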
bland-orange-95847
03/02/2022, 9:50 AM
Message: "Failed to construct checkpoint's config from checkpoint aspect."
Arguments: (ConfigurationError('BigQuery project ids are globally unique. You do not need to specify a platform instance.'),)
I think there is something off with the platform instances, as they are not supported by the BigQuery source.
red-napkin-59945
03/03/2022, 5:25 PM
FACET_FIELDS
rhythmic-bear-20384
03/04/2022, 5:17 AM

gorgeous-dinner-4055
03/16/2022, 6:00 AM
getAutoCompleteMultipleResults
function is called, and the searchable types are registered for autocomplete:
.dataFetcher("autoCompleteForMultiple", new AuthenticatedResolver<>(
    new AutoCompleteForMultipleResolver(searchableTypes)))
early-midnight-66457
03/16/2022, 7:58 AM

early-midnight-66457
03/16/2022, 7:59 AM

early-midnight-66457
03/16/2022, 7:59 AM

fierce-author-36990
03/16/2022, 10:10 AM

high-family-71209
03/18/2022, 12:15 PM

little-salesmen-55578
03/23/2022, 4:59 PM

bulky-intern-2942
03/30/2022, 7:40 PM

sticky-dawn-95000
04/01/2022, 7:22 AM

brief-businessperson-12356
04/04/2022, 11:12 AM
kubectl create configmap truststore-configmap --from-file=newTruststore