# troubleshooting
a
Hello, any pointers on using the Pinot Helm chart on AWS? Specifically, how do I make the Pinot controller's load balancer URL accessible only from inside our VPN when deploying on AWS?
m
@Xiang Fu ^^
a
Hi @Xiang Fu: any help/pointers on the above would be much appreciated
x
For the LB/VPN requirement you can configure the external service or just use an ingress; either one should work. For example:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: tomcatinfra
  type: LoadBalancer
```
If you just need an AWS-internal LB, you can set those annotations on the external service exposed in service-external.yaml
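(For reference, a minimal sketch of how that could look in the chart's values.yaml, mirroring the broker `external` block shown further down this thread — the exact annotation set depends on which AWS load balancer controller the cluster runs:)
```yaml
controller:
  external:
    enabled: true
    type: LoadBalancer
    port: 9000
    annotations:
      # Provision an internal (VPC-only) LB instead of an internet-facing one:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
```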
a
Thanks, will give this a try!
m
@Mark Needham this seems like good info to document?
a
@Xiang Fu: Follow-up question. If I do end up creating a load balancer service and have it point at `pinot-controller`, do you recommend setting `external` to false? https://github.com/apache/pinot/blob/master/kubernetes/helm/pinot/values.yaml#L108
x
You can set it to true and make the AWS ELB internal via the annotations
the `enabled` flag is just for creating the LB service
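(A quick sanity check after applying this — service name follows the chart's `-external` convention; for an in-tree internal classic ELB the hostname usually starts with `internal-`:)
```sh
# Confirm the external controller service carries the annotations
# and was assigned an internal LB hostname:
kubectl get svc pinot-controller-external -n pinot-dev-ns -o wide
kubectl describe svc pinot-controller-external -n pinot-dev-ns
```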
a
@Xiang Fu: The annotations will go in the service.yaml, right? Or should I add them under the Pinot controller section here: https://github.com/apache/pinot/blob/master/kubernetes/helm/pinot/values.yaml#L111
x
In values.yaml
a
Okies. My LB service yaml looks something like this:
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: pinot-dev-ns
  name: pinot-controller-lb-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "alb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-subnets: <>
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-security-groups: <>
spec:
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  selector:
    app: pinot-controller
  ports:
    - name: http
      port: 80
      targetPort: 9000
  type: LoadBalancer
```
Is that correct? Basically LB service <-> Pinot controller.
Just for context, I am trying to create an internal load balancer here.
Never mind, got it working by adding those annotations to the values.yaml
x
Yes, there are just one or two annotations to set, following the wiki :)
a
@Xiang Fu: My Pinot controller is reachable in the browser via the internal LB URL, but the broker is not reachable when I curl `/query/sql`. Is there something I need to enable to open up the broker? When I query the tables via the console it all works, but the HTTP curl requests fail:
```sh
$ nc -vz <controller-lb> 9000
Connection to <controller-lb> port 9000 [tcp/cslistener] succeeded!

$ nc -vz <broker-lb> 8099
Connection to <broker-lb> 8099 (tcp) failed: Operation timed out
```
x
is the broker up?
what's the output of `kubectl get svc -n pinot`?
my feeling is your broker load balancer is not configured to talk to the underlying broker pods
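(One way to check that hypothesis — the namespace is taken from the output below; if the external service's selector matches no pods, its endpoints list will be empty and the LB has nothing to route to:)
```sh
# Does the external broker service have backing endpoints (pod IPs)?
kubectl get endpoints pinot-broker-external -n pinot-dev-ns
# Compare the service selector against the actual broker pod labels:
kubectl get svc pinot-broker-external -n pinot-dev-ns -o jsonpath='{.spec.selector}'
kubectl get pods -n pinot-dev-ns --show-labels
```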
a
broker is up
```
[ec2-user@ip-10-220-3-94 ~]$ kubectl get svc -A
NAMESPACE      NAME                                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
default        kubernetes                             ClusterIP      172.20.0.1       <none>        443/TCP                      15d
default        pinot-data-dog-datadog-cluster-agent   ClusterIP      172.20.222.251   <none>        5005/TCP                     15d
default        pinot-data-dog-kube-state-metrics      ClusterIP      172.20.89.78     <none>        8080/TCP                     15d
kube-system    kube-dns                               ClusterIP      172.20.0.10      <none>        53/UDP,53/TCP                15d
pinot-dev-ns   pinot-broker                           ClusterIP      172.20.164.226   <none>        8099/TCP                     15d
pinot-dev-ns   pinot-broker-external                  LoadBalancer   172.20.122.210   <redacted>    8099:30197/TCP               15d
pinot-dev-ns   pinot-broker-headless                  ClusterIP      None             <none>        8099/TCP                     15d
pinot-dev-ns   pinot-controller                       ClusterIP      172.20.12.46     <none>        9000/TCP                     15d
pinot-dev-ns   pinot-controller-external              LoadBalancer   172.20.227.115   <redacted>    9000:31775/TCP               15d
pinot-dev-ns   pinot-controller-headless              ClusterIP      None             <none>        9000/TCP                     15d
pinot-dev-ns   pinot-minion                           ClusterIP      172.20.63.89     <none>        9514/TCP                     15d
pinot-dev-ns   pinot-minion-headless                  ClusterIP      None             <none>        9514/TCP                     15d
pinot-dev-ns   pinot-server                           ClusterIP      172.20.168.253   <none>        8098/TCP,80/TCP              15d
pinot-dev-ns   pinot-server-headless                  ClusterIP      None             <none>        8098/TCP,80/TCP              15d
pinot-dev-ns   pinot-zookeeper                        ClusterIP      172.20.188.78    <none>        2181/TCP,2888/TCP,3888/TCP   15d
pinot-dev-ns   pinot-zookeeper-headless               ClusterIP      None             <none>        2181/TCP,2888/TCP,3888/TCP   15d
```
These are the broker Helm values:
```yaml
broker:
  name: broker
  replicaCount: 3
  podManagementPolicy: Parallel
  podSecurityContext: {}
    # fsGroup: 2000
  securityContext: {}

  jvmOpts: "-Xms4G -Xmx60G -Xlog:gc*:file=/opt/pinot/gc-pinot-broker.log"

  log4j2ConfFile: /opt/pinot/conf/log4j2.xml
  pluginsDir: /opt/pinot/plugins

  routingTable:
    builderClass: random

  probes:
    endpoint: "/health"
    livenessEnabled: true
    readinessEnabled: true

  service:
    annotations: {}
    clusterIP: "None"
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    type: ClusterIP
    protocol: TCP
    port: 8099
    name: broker
    nodePort: ""

  external:
    enabled: true
    type: LoadBalancer
    port: 8099
    # For example, in a private GKE cluster, you might add cloud.google.com/load-balancer-type: Internal
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "alb"
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "<redacted>"
      service.beta.kubernetes.io/aws-load-balancer-security-groups: "<redacted>"
  resources:
    limits:
      cpu: 2
      memory: 80G
    requests:
      cpu: 2
      memory: 80G

  nodeSelector: {}

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: component
                operator: In
                values:
                  - broker

  tolerations: []

  podAnnotations: {}

  updateStrategy:
    type: RollingUpdate

  # Use envFrom to define all of the ConfigMap or Secret data as container environment variables.
  # ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
  # ref: https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#configure-all-key-value-pairs-in-a-secret-as-container-environment-variables
  envFrom: []
  #  - configMapRef:
  #      name: special-config
  #  - secretRef:
  #      name: test.json-secret

  # Use extraEnv to add individual key value pairs as container environment variables.
  # ref: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
  extraEnv: []
  #  - name: PINOT_CUSTOM_ENV
  #    value: custom-value

  # Extra configs will be appended to pinot-broker.conf file
  extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
```
@Xiang Fu: Do you see anything wrong with the above config? It's the same set of annotations that worked for the controller.
x
I don’t see anything wrong
are you able to do port-forwarding for the broker service?
a
Haven't tried it, will do.
Is there a page/doc that has the commands for hitting the broker LB URL?
x
typically if the controller works then the broker should work as well. can you try `broker-url:8099/help`?
the broker has no default page
it only supports the query endpoint and swagger
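(For reference, a direct query against the broker's standard endpoint looks like this — the table name is illustrative:)
```sh
# POST a SQL query to the broker; any JSON response means the broker is reachable:
curl -s -X POST "http://<broker-lb>:8099/query/sql" \
  -H 'Content-Type: application/json' \
  -d '{"sql": "SELECT COUNT(*) FROM myTable"}'
```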
a
yeah I tried that page. Since the domain is not reachable, none of the paths work. Maybe it's something to do with AWS EKS, where only the controller's port 9000 is exposed outside the EKS cluster and the broker is reachable only via the controller pod. That would explain why queries from the controller UI console work while the SDK/curl option does not.
x
port-forwarding will help you check whether the broker endpoint is up from Pinot's standpoint
if it's up, then it's an EKS/AWS issue with exposing the port
a
I tried this:
```sh
kubectl port-forward service/pinot-broker 8099:8099 -n <redacted> > /dev/null &
```
That seemed to work:
```json
{
  "exceptions": [
    {
      "errorCode": 190,
      "message": "TableDoesNotExistError"
    }
  ],
  "numServersQueried": 0,
  "numServersResponded": 0,
  "numSegmentsQueried": 0,
  "numSegmentsProcessed": 0,
  "numSegmentsMatched": 0,
  "numConsumingSegmentsQueried": 0,
  "numDocsScanned": 0,
  "numEntriesScannedInFilter": 0,
  "numEntriesScannedPostFilter": 0,
  "numGroupsLimitReached": false,
  "totalDocs": 0,
  "timeUsedMs": 0,
  "offlineThreadCpuTimeNs": 0,
  "realtimeThreadCpuTimeNs": 0,
  "offlineSystemActivitiesCpuTimeNs": 0,
  "realtimeSystemActivitiesCpuTimeNs": 0,
  "offlineResponseSerializationCpuTimeNs": 0,
  "realtimeResponseSerializationCpuTimeNs": 0,
  "offlineTotalCpuTimeNs": 0,
  "realtimeTotalCpuTimeNs": 0,
  "segmentStatistics": [],
  "traceInfo": {},
  "numRowsResultSet": 0,
  "minConsumingFreshnessTimeMs": 0
}
```
Ignore the table-does-not-exist error. The point is that with port forwarding the client is able to talk to the broker.
x
yeah
so it's an issue with the LB
maybe delete it and recreate it?
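(A sketch of that, assuming the service came from the Helm chart — release and repo names are illustrative; the upgrade recreates the deleted service from the chart templates:)
```sh
# Drop the broken external broker service, then let Helm recreate it:
kubectl delete svc pinot-broker-external -n pinot-dev-ns
helm upgrade pinot pinot/pinot -n pinot-dev-ns -f values.yaml
```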
a
Will give that a try. Thanks for your help! 🙂