
Oguzhan Mangir

04/10/2021, 9:37 AM
server:
  name: server

  ports:
    netty: 8098
    admin: 8097

  replicaCount: 1

  dataDir: /var/pinot/server/data/index
  segmentTarDir: /var/pinot/server/data/segment

  persistence:
    enabled: true
    accessMode: ReadWriteOnce
    size: "30Gi"
    mountPath: /var/pinot/server/data
    storageClass: "alicloud-disk-available"
    #storageClass: "ssd"

  jvmOpts: "-Xms512M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/pinot/gc-pinot-server.log -Dplugins.include=pinot-hdfs"

  log4j2ConfFile: /opt/pinot/conf/pinot-server-log4j2.xml
  pluginsDir: /opt/pinot/plugins

  service:
    annotations: {}
    clusterIP: ""
    externalIPs: []
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    type: ClusterIP
    port: 8098
    nodePort: ""

  resources: {}

  nodeSelector: {}

  affinity: {}

  tolerations: []

  podAnnotations: {}

  updateStrategy:
    type: RollingUpdate

  # Extra configs will be appended to pinot-server.conf file
  extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      pinot.server.instance.realtime.alloc.offheap=true
      pinot.server.instance.enable.split.commit=true
      pinot.server.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
      pinot.server.storage.factory.hdfs.hadoop.conf.path=/opt/hadoop/conf/
      pinot.server.segment.fetcher.protocols=file,http,hdfs
      pinot.server.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
here is my server config. I want to use HDFS as deep storage. I put the related HDFS jars under the Pinot directory, but the server pod is not starting when I add
-Dplugins.include=pinot-hdfs

Xiang Fu

04/10/2021, 9:42 AM
by default it should load all the plugins
also
pinot.server.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
should have the same indentation as the previous configs
if you specify
-Dplugins.include=pinot-hdfs
then you may also need to specify all the other plugins, e.g.,
-Dplugins.include=pinot-hdfs,pinot-avro,pinot-parquet
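For illustration, a sketch of what the full flag could look like in the jvmOpts from the values file above (the plugin list is a placeholder, so include whichever plugins your tables actually use; the GC-logging flags from the original line are trimmed for brevity):

jvmOpts: "-Xms512M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Dplugins.include=pinot-hdfs,pinot-avro,pinot-parquet"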
what are the exceptions in the logs?

Oguzhan Mangir

04/10/2021, 9:47 AM
so, we do not need to pass -Dplugins.include? The indentation seems correct, if I am not wrong. I cannot see any exception, because the server pod is crashing and restarting continuously
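Since the pod is crash-looping, here is a sketch of how the exception could still be pulled out (assuming a standard Kubernetes setup; the pod name and namespace are placeholders):

# logs from the previous, crashed container of the server pod
kubectl logs pinot-server-0 --previous -n pinot
# events such as OOMKilled or failed volume mounts
kubectl describe pod pinot-server-0 -n pinot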

Xiang Fu

04/10/2021, 9:48 AM
indentation is correct, it’s my browser issue 😛
try not to pass
-Dplugins.include
and see what you get from the server log
might be some lib conflict, I feel

Oguzhan Mangir

04/10/2021, 9:53 AM
ok, I'm trying

Xiang Fu

04/10/2021, 9:53 AM
then we may need to re-shade the pinot-hdfs jar
internally pinot-hdfs refers to hadoop-common 2.7.0
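A sketch of one way to spot such a conflict (the paths assume the standard Pinot image layout and are illustrative):

# hadoop jars bundled inside the pinot-hdfs plugin
ls /opt/pinot/plugins/pinot-file-system/pinot-hdfs/ | grep hadoop
# hadoop jars that were added to the main Pinot classpath
ls /opt/pinot/lib/ | grep hadoop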

Oguzhan Mangir

04/10/2021, 10:00 AM
actually, I can run the ingestion job with HDFS deep storage. I'm not sure it's a lib conflict
it worked now, I don't know how I fixed it 😄
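For reference, an illustrative sketch of the kind of standalone ingestion job spec that reads from and pushes segments to HDFS (hostnames, paths, the input format, and the table name are placeholders):

executionFrameworkSpec:
  name: 'standalone'
  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
jobType: SegmentCreationAndTarPush
inputDirURI: 'hdfs://namenode:8020/data/rawdata/'
outputDirURI: 'hdfs://namenode:8020/pinot/segments/'
pinotFSSpecs:
  - scheme: hdfs
    className: 'org.apache.pinot.plugin.filesystem.HadoopPinotFS'
    configs:
      'hadoop.conf.path': '/opt/hadoop/conf/'
recordReaderSpec:
  dataFormat: 'avro'
  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
tableSpec:
  tableName: 'myTable'
pinotClusterSpecs:
  - controllerURI: 'http://pinot-controller:9000'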

Xiang Fu

04/10/2021, 10:37 AM
😛
so I guess the restart did the trick
please keep the server log and share it next time (hope there is no next time as well)

Daniel Lavoie

04/10/2021, 1:35 PM
All plugins are loaded by default, the plugin dir doesn’t need to be customized
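Per that point, a minimal sketch of the jvmOpts from the values file above with -Dplugins.include dropped entirely, so the server loads every bundled plugin by default (GC-logging flags trimmed for brevity):

jvmOpts: "-Xms512M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:/opt/pinot/gc-pinot-server.log"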