# troubleshooting
d
Hello again 😬 How can I set up S3 as deep storage while using the Helm chart? I tried adding the configs from this article to `controller.extra.configs`, but every time I do, the Controller starts responding with `502 Bad Gateway` and I can't see anything wrong in the logs. Results from `helm template` are in the thread.
# Source: pinot/templates/controller/configmap.yaml
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#

apiVersion: v1
kind: ConfigMap
metadata:
  name: responses-pinot-controller-config
data:
  pinot-controller.conf: |-
    controller.helix.cluster.name=responses
    controller.port=9000
    controller.data.dir=s3://<redacted>
    controller.zk.str=responses-pinot-zookeeper:2181
    pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
    pinot.controller.storage.factory.s3.region=eu-west-1
    controller.realtime.segment.deepStoreUploadRetryEnabled=true
    pinot.controller.segment.fetcher.protocols=file,http,s3
    pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
# Source: pinot/templates/controller/statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: responses-pinot-controller
  labels:
    helm.sh/chart: pinot-0.2.6-SNAPSHOT
    app: pinot
    release: responses-pinot
    app.kubernetes.io/version: "0.2.6-SNAPSHOT"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  selector:
    matchLabels:
      app: pinot
      release: responses-pinot
      component: controller
  serviceName: responses-pinot-controller-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.6-SNAPSHOT
        app: pinot
        release: responses-pinot
        app.kubernetes.io/version: "0.2.6-SNAPSHOT"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: controller
      annotations:
        iam.amazonaws.com/role: <redacted>
        prometheus.io/path: /metrics
        prometheus.io/port: "5556"
        prometheus.io/scrape: "true"
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: responses-pinot
      securityContext:
        {}
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      containers:
      - name: controller
        securityContext:
          {}
        image: "dianaarnos/pinot:0.9.3-with-ingress"
        imagePullPolicy: IfNotPresent
        args: [ "StartController", "-configFileName", "/var/pinot/controller/config/pinot-controller.conf" ]
        env:
          - name: JAVA_OPTS
            value: "-Xms1G -Xmx4G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml -Dlog4j2.configurationFile=/opt/pinot/conf/log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
        envFrom:
          []
        ports:
          - containerPort: 9000
            protocol: TCP
            name: controller
        volumeMounts:
          - name: config
            mountPath: /var/pinot/controller/config
          - name: data
            mountPath: "/var/pinot/controller/data"
        resources:
            limits:
              cpu: 1
              memory: 4G
            requests:
              cpu: 1
              memory: 4G
      restartPolicy: Always
      volumes:
      - name: config
        configMap:
          name: responses-pinot-controller-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "1G"
# Source: pinot/templates/server/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: responses-pinot-server-config
data:
  pinot-server.conf: |-
    pinot.server.netty.port=8098
    pinot.server.adminapi.port=8097
    pinot.server.instance.dataDir=/var/pinot/server/data/index
    pinot.server.instance.segmentTarDir=/var/pinot/server/data/segment
    pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
    pinot.server.storage.factory.s3.region=eu-west-1
    pinot.server.segment.fetcher.protocols=file,http,s3
    pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
# Source: pinot/templates/server/statefulset.yml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: responses-pinot-server
  labels:
    helm.sh/chart: pinot-0.2.5-SNAPSHOT
    app: pinot
    chart: pinot-0.2.5-SNAPSHOT
    release: responses-pinot
    app.kubernetes.io/version: "0.2.5-SNAPSHOT"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: server
spec:
  selector:
    matchLabels:
      app: pinot
      chart: pinot-0.2.5-SNAPSHOT
      release: responses-pinot
      component: server
  serviceName: responses-pinot-server-headless
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.5-SNAPSHOT
        app: pinot
        chart: pinot-0.2.5-SNAPSHOT
        release: responses-pinot
        app.kubernetes.io/version: "0.2.5-SNAPSHOT"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: server
      annotations:
        iam.amazonaws.com/role: <redacted>
        prometheus.io/path: /metrics
        prometheus.io/port: "5556"
        prometheus.io/scrape: "true"
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: responses-pinot
      securityContext:
        {}
      nodeSelector:
        {}
      affinity:
        {}
      tolerations:
        []
      containers:
      - name: server
        securityContext:
          {}
        image: "registry.vox.dev/dockerhub/dianaarnos/pinot:0.9.3-with-ingress"
        imagePullPolicy: IfNotPresent
        args: [
          "StartServer",
          "-clusterName", "responses",
          "-zkAddress", "responses-pinot-zookeeper:2181",
          "-configFileName", "/var/pinot/server/config/pinot-server.conf"
        ]
        env:
          - name: JAVA_OPTS
            value: "-Xms2G -Xmx7G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml -Dlog4j2.configurationFile=/opt/pinot/conf/log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
        envFrom:
          []
        ports:
          - containerPort: 8098
            protocol: TCP
            name: netty
          - containerPort: 8097
            protocol: TCP
            name: admin
        volumeMounts:
          - name: config
            mountPath: /var/pinot/server/config
          - name: data
            mountPath: "/var/pinot/server/data"
        resources:
            limits:
              cpu: 4
              memory: 16G
            requests:
              cpu: 4
              memory: 16G
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: responses-pinot-server-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 4G
m
This config should work
I think it was @Diogo Baeder who shared that
d
Now I realize that format is something we use internally; there's an app we have that converts it to Helm templates and then publishes them as k8s manifests
But the configs there are relevant, yes
d
We are using ingress and our config works until we add the `extra.configs` part. Then the controller pod has nothing listening on port 9000, which is the port the ingress rule uses
root@responses-pinot-controller-0:/opt/pinot# ss -tnlp
State        Recv-Q        Send-Q               Local Address:Port               Peer Address:Port       Process
LISTEN       0             3                          0.0.0.0:5556                    0.0.0.0:*           users:(("java",pid=1,fd=30))
We don't set up an AWS access key because we use KIAM containers to manage permissions. This is the config that works (these are staging/dev values, so don't mind the resources):
image:
  repository: dianaarnos/pinot
  tag: 0.9.3-with-ingress

metadata:
  labels:
    app: responses-pinot
    app.kubernetes.io/instance: responses-pinot
    app.kubernetes.io/name: responses-pinot
  name: responses-pinot

cluster:
  name: responses

controller:
  external:
    enabled: false
    jvmOpts: "-Xms1G -Xmx4G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  podAnnotations:
    iam.amazonaws.com/role: <redacted>
    prometheus.io/scrape: "true"
    prometheus.io/port: "5556"
    prometheus.io/path: /metrics
  resources:
    limits:
      cpu: 1
      memory: 4G
    requests:
      cpu: 1
      memory: 4G
  ingress:
    v1beta1:
      enabled: true
    annotations:
      kubernetes.io/ingress.class: internal
    tls: { }
    path: /
    hosts:
      - <redacted>

broker:
  external:
    enabled: false
    jvmOpts: "-Xms1G -Xmx4G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
    podAnnotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "5556"
      prometheus.io/path: /metrics
      iam.amazonaws.com/role: <redacted>
    resources:
      limits:
        cpu: 1
        memory: 4G
      requests:
        cpu: 1
        memory: 4G
  ingress:
    v1beta1:
      enabled: true
    annotations:
      kubernetes.io/ingress.class: internal
    tls: { }
    path: /
    hosts:
      - <redacted>

server:
  replicaCount: 2
  jvmOpts: "-Xms2G -Xmx7G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "5556"
    prometheus.io/path: /metrics
    iam.amazonaws.com/role: <redacted>
  resources:
    limits:
      cpu: 4
      memory: 16G
    requests:
      cpu: 4
      memory: 16G
This is the config that makes the Controller stop listening on port 9000 and return a 502:
image:
  repository: dianaarnos/pinot
  tag: 0.9.3-with-ingress

metadata:
  labels:
    app: responses-pinot
    app.kubernetes.io/instance: responses-pinot
    app.kubernetes.io/name: responses-pinot
  name: responses-pinot

cluster:
  name: responses

controller:
  external:
    enabled: false
    jvmOpts: "-Xms1G -Xmx4G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  persistence:
    enabled: false
  data:
    dir: s3://<redacted>
  extra:
    configs: |-
      pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
      pinot.controller.storage.factory.s3.region=<redacted>
      pinot.controller.segment.fetcher.protocols=file,http,s3
      pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
  podAnnotations:
    iam.amazonaws.com/role: <redacted>
    prometheus.io/scrape: "true"
    prometheus.io/port: "5556"
    prometheus.io/path: /metrics
  resources:
    limits:
      cpu: 1
      memory: 4G
    requests:
      cpu: 1
      memory: 4G
  ingress:
    v1beta1:
      enabled: true
    annotations:
      kubernetes.io/ingress.class: internal
    tls: { }
    path: /
    hosts:
      - <redacted>

broker:
  external:
    enabled: false
    jvmOpts: "-Xms1G -Xmx4G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
    podAnnotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "5556"
      prometheus.io/path: /metrics
      iam.amazonaws.com/role: <redacted>
    resources:
      limits:
        cpu: 1
        memory: 4G
      requests:
        cpu: 1
        memory: 4G
  ingress:
    v1beta1:
      enabled: true
    annotations:
      kubernetes.io/ingress.class: internal
    tls: { }
    path: /
    hosts:
      - <redacted>

server:
  replicaCount: 2
  jvmOpts: "-Xms2G -Xmx7G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "5556"
    prometheus.io/path: /metrics
    iam.amazonaws.com/role: <redacted>
  resources:
    limits:
      cpu: 4
      memory: 16G
    requests:
      cpu: 4
      memory: 16G
  extra:
    configs: |-
      pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
      pinot.server.storage.factory.s3.region=<redacted>
      pinot.server.segment.fetcher.protocols=file,http,s3
      pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
      pinot.server.instance.segment.store.uri=s3://<redacted>
These are the configs inside `pinot-controller.conf`:
controller.helix.cluster.name=responses
controller.port=9000
controller.data.dir=s3://<redacted>
controller.zk.str=responses-pinot-zookeeper:2181
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=eu-west-1
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
pinot.set.instance.id.to.hostname=true
controller.task.scheduler.enabled=true
I was able to find a working config after I noticed that @Diogo Baeder had a couple of extra configuration lines for both the server and the controller. Those configs don't appear in the "Use S3 as Deep Storage for Pinot" doc. For the controller:
extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      controller.task.scheduler.enabled=true
      controller.local.temp.dir=/tmp/pinot/
For the server:
extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      pinot.server.instance.realtime.alloc.offheap=true
      pinot.server.instance.currentDataTableVersion=2
      pinot.server.instance.dataDir=/var/pinot/server/data/index
      pinot.server.instance.segmentTarDir=/var/pinot/server/data/segment
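For what it's worth, of the controller lines above, `controller.local.temp.dir` looks like the most plausible fix for the 502: once `controller.data.dir` points at a remote URI, the controller still needs local disk to stage segment uploads and downloads. A minimal sketch of that pairing (the bucket path is a placeholder):

```properties
# controller.data.dir is remote (S3), so the controller needs a local
# scratch directory for staging segments.
controller.data.dir=s3://your-bucket/pinot-data
controller.local.temp.dir=/tmp/pinot/
```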
m
do you know which of those make things work?
d
Ah, nice, good to know it worked in the end šŸ™‚
d
@Mark Needham, I didn't try each separately yet, since I needed to get a working config approved for our staging environment first 😬 After I'm able to query our table I can try that
a
can someone please explain why deep storage is needed?
i have mongodb > kafka .. now to use pinot do i need deep storage as well?
m
the deep store is the permanent storage of pinot segments
m
so you do need to configure it
a
ahh thank u
do we support gcp storage or is it just s3?
m
gcp should work yeh
or hdfs
a
okie awesome, thank u let me look into this
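In case it helps: GCS follows the same PinotFS pattern as the S3 lines earlier in the thread, via the `pinot-gcs` plugin. A sketch for the controller side only, with placeholder bucket/project/key values (mirror it with the `pinot.server.*` prefix for the servers):

```properties
# Hypothetical GCS deep-storage config; bucket, project and key path are placeholders.
controller.data.dir=gs://your-bucket/pinot-data
pinot.controller.storage.factory.class.gs=org.apache.pinot.plugin.filesystem.GcsPinotFS
pinot.controller.storage.factory.gs.projectId=your-gcp-project
pinot.controller.storage.factory.gs.gcpKey=/path/to/service-account-key.json
pinot.controller.segment.fetcher.protocols=file,http,gs
pinot.controller.segment.fetcher.gs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```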
d
Now I have another problem 🤣 Whenever I try to create a schema:
> POST /schemas
{
	"_code": 500,
	"_error": null
}
Starting TaskMetricsEmitter with running frequency of 300 seconds.
[TaskRequestId: auto] Start running task: TaskMetricsEmitter
[TaskRequestId: auto] Finish running task: TaskMetricsEmitter in 4ms
Server error: 
java.lang.NullPointerException: null
    at java.util.Objects.requireNonNull(Objects.java:221) ~[?:?]
    at java.util.Optional.<init>(Optional.java:107) ~[?:?]
    at java.util.Optional.of(Optional.java:120) ~[?:?]
    at org.apache.pinot.controller.api.access.AccessControlUtils.validatePermission(AccessControlUtils.java:48) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.apache.pinot.controller.api.resources.PinotSchemaRestletResource.addSchema(PinotSchemaRestletResource.java:215) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) ~[pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:353) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:200) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549) [pinot-all-0.9.3-jar-with-dependencies.jar:0.9.3-de4cb2cc01d611c92ff5d1fc6a49ee9f8b113192]
    at java.lang.Thread.run(Thread.java:829) [?:?]
Ok, it stopped... all by itself. Magic.
a
Diana, could you please share the yaml file for s3 that fixed the issue?
d
Please note I'm deploying via Helm, and this is the `values-production.yaml` file that we use. I'm using a custom image that contains Pinot 0.9.3 (latest release) with a cherry-pick of the ingress commit. The important part for this issue is adding all those `extra.configs` to both the server and the controller.
image:
  repository: dianaarnos/pinot
  tag: 0.9.3-with-ingress

controller:
  replicaCount: 2
  data:
    dir: s3://<redacted>
  jvmOpts: "-Xms2G -Xmx4G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      controller.task.scheduler.enabled=true
      controller.local.temp.dir=/tmp/pinot/
      pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
      pinot.controller.segment.fetcher.protocols=file,http,s3
      pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
      pinot.controller.storage.factory.s3.region=<redacted>
      controller.allow.hlc.tables=false
      controller.enable.split.commit=true
      controller.realtime.segment.deepStoreUploadRetryEnabled=true
  ingress:
    v1beta1:
      enabled: true
    annotations:
      kubernetes.io/ingress.class: internal
    tls: { }
    path: /
    hosts:
      - <redacted>
  podAnnotations:
    iam.amazonaws.com/role: <redacted>
  resources:
    limits:
      cpu: 2
      memory: 8G
    requests:
      cpu: 2
      memory: 8G

broker:
  jvmOpts: "-Xms4G -Xmx8G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  replicaCount: 2
  ingress:
    v1beta1:
      enabled: true
    annotations:
      kubernetes.io/ingress.class: internal
    tls: { }
    path: /
    hosts:
      - <redacted>
  podAnnotations:
    iam.amazonaws.com/role: <redacted>
  resources:
    limits:
      cpu: 4
      memory: 16G
    requests:
      cpu: 4
      memory: 16G

server:
  replicaCount: 3
  jvmOpts: "-Xms16G -Xmx16G -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=5556:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      pinot.server.instance.realtime.alloc.offheap=true
      pinot.server.instance.currentDataTableVersion=2
      pinot.server.instance.dataDir=/var/pinot/server/data/index
      pinot.server.instance.segmentTarDir=/var/pinot/server/data/segment
      pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
      pinot.server.segment.fetcher.protocols=file,http,s3
      pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
      pinot.server.instance.enable.split.commit=true
      pinot.server.storage.factory.s3.region=<redacted>
      pinot.server.instance.segment.store.uri=s3://<redacted>
  podAnnotations:
    iam.amazonaws.com/role: <redacted>
  resources:
    limits:
      cpu: 4
      memory: 32G
    requests:
      cpu: 4
      memory: 32G

zookeeper:
  persistence:
     size: "50Gi"
a
Diana, do you mind sharing the Dockerfile for the custom image you made? I'm still new to devops, and thank you for your help, really appreciate it
a
Hi @ahsen m, @Diana Arnos: In the above example, did you configure the S3 bucket to have put access?
d
@ahsen m I made the custom image by using the original one locally. I had the pinot repo cloned locally, cherry-picked the commit with ingress, tagged the new image and pushed it to Docker Hub. Something like this: https://www.sentinelone.com/blog/create-docker-image/ But you don't need to create your own: the latest release `0.10.0` already contains the ingress and new features.
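With 0.10.0 available, the custom image should presumably no longer be needed; the values file can point at the official Docker Hub image instead. A sketch (repository/tag assume the official `apachepinot/pinot` image):

```yaml
image:
  repository: apachepinot/pinot   # official image; 0.10.0 includes the ingress support
  tag: 0.10.0
```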
šŸ™ 2
a
Thank you @Diana Arnos