# troubleshooting
  • Kishore G

    05/29/2020, 8:58 PM
    cast?
  • Alex

    05/29/2020, 9:04 PM
    Query select a, count(*) from b where operation_ts > 1.5907862199449532E9 group by a limit 10 encountered exception org.apache.pinot.common.response.broker.QueryProcessingException@6ceba6a6 with query "select a, count(*) from b where operation_ts > 1.5907862199449532E9 group by a limit 10"
  • Alex

    05/29/2020, 9:08 PM
    and cast is not working in the presto-pinot connector. we are looking into it
  • Elon

    05/29/2020, 9:09 PM
    Will have an update shortly. Users are trying to select the most recent events. Need to push down the filter correctly
  • Elon

    05/29/2020, 9:09 PM
    Is 0.4.0 coming soon? The recent commits look pretty amazing:) (brb, need to fix this 🙂 )
  • Kishore G

    05/29/2020, 9:12 PM
    operation_ts > 1.5907862199449532E9
  • Kishore G

    05/29/2020, 9:12 PM
    would love to see use of Range Index here
    👍 1
  • Elon

    05/29/2020, 9:35 PM
    That didn't work; operation_ts is a long
  • Elon

    05/29/2020, 11:04 PM
    @Alex got a workaround: in Superset just use Jinja to transform the now() - 5 mins timestamp to an int.
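For reference, the Jinja workaround amounts to replacing the float literal in the filter (1.5907862199449532E9) with an integer epoch that matches the long column. A minimal Python sketch of the same conversion (variable names are illustrative, not Superset internals):

```python
from datetime import datetime, timedelta

# Compute "now() - 5 minutes" as an integer epoch, matching a long column.
cutoff = datetime.now() - timedelta(minutes=5)
cutoff_epoch = int(cutoff.timestamp())

# The failing query compared operation_ts against a float literal;
# truncating it to an int gives a value the long column can be compared to.
float_literal = 1.5907862199449532E9
print(int(float_literal))  # 1590786219
```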
  • Elon

    06/01/2020, 5:32 PM
    We noticed that while redeploying pinot to k8s, when we run
    select count(*)
    data is unavailable while a server is starting, even though replicas are set to 3, we have 3 servers and 2 brokers, and we upgrade one pod at a time: rolling update, pod disruption budget maxUnavailable of 1. Is there any config to keep data available when not all servers are up?
  • Kishore G

    06/01/2020, 5:38 PM
    there is no pinot config for that, are you sure k8s is doing a rolling upgrade?
  • Kishore G

    06/01/2020, 5:38 PM
    also, make sure that you wait for the health check to return true before moving on to the next server
  • Alex

    06/01/2020, 9:14 PM
    @Elon are we shutting it down gracefully? @Kishore G i bet we just rely on kube sending kill signal. Do we need to execute some shutdown call first?
    👍 1
  • Kishore G

    06/01/2020, 9:16 PM
    yes, there is a stop command that waits for a bunch of things
  • Alex

    06/01/2020, 9:30 PM
    @Elon ^^
  • Elon

    06/01/2020, 9:44 PM
    Is there an endpoint for the stop command?
  • Kishore G

    06/01/2020, 9:48 PM
    how are you starting/stopping?
  • Elon

    06/01/2020, 11:51 PM
    We start by deploying in k8s and run the default startup commands. For stopping we just run an upgrade, which deletes the old pods and replaces them with the new ones.
  • Dan Hill

    06/02/2020, 12:13 AM
    Tarring segment from: /tmp/pinot-1591053945331/output/metrics_OFFLINE_1590965807139_1590965807139_0 to: /tmp/pinot-1591053945331/output/metrics_OFFLINE_1590965807139_1590965807139_0.tar.gz
    Got exception to kick off standalone data ingestion job - 
    java.lang.RuntimeException: Caught exception during running - org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:121) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:94) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:123) [pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:156) [pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:168) [pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    Caused by: java.lang.RuntimeException: entry size '14879990781' is too big ( > 8589934591 )
    	at org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.failForBigNumber(TarArchiveOutputStream.java:623) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.failForBigNumbers(TarArchiveOutputStream.java:608) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.putArchiveEntry(TarArchiveOutputStream.java:286) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.common.utils.TarGzCompressionUtils.addFileToTarGz(TarGzCompressionUtils.java:125) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.common.utils.TarGzCompressionUtils.addFileToTarGz(TarGzCompressionUtils.java:138) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.common.utils.TarGzCompressionUtils.addFileToTarGz(TarGzCompressionUtils.java:138) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.common.utils.TarGzCompressionUtils.createTarGzOfDirectory(TarGzCompressionUtils.java:85) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.common.utils.TarGzCompressionUtils.createTarGzOfDirectory(TarGzCompressionUtils.java:72) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner.run(SegmentGenerationJobRunner.java:197) ~[pinot-batch-ingestion-standalone-0.4.0-SNAPSHOT-shaded.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:119) ~[pinot-all-0.4.0-SNAPSHOT-jar-with-dependencies.jar:0.4.0-SNAPSHOT-ed26e8589fe5f91d2876d417aebf23575010cc76]
    	... 4 more
    Exception caught: 
    java.lang.RuntimeException: Caught exception during running - org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner
    (stack trace identical to the one above)
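For context on the "entry size is too big" error above: 8589934591 is 2^33 - 1 bytes (8 GiB - 1), the largest value that fits in the 11 octal digits of a classic ustar tar header's size field, which is why commons-compress rejects the ~14.9 GB segment directory unless a big-number mode (POSIX/GNU tar extensions) is enabled on the archive stream. A quick arithmetic check:

```python
# A ustar tar header stores the entry size as 12 bytes: 11 octal digits
# plus a terminator, so the largest encodable entry size is 0o77777777777.
max_tar_entry = int("7" * 11, 8)
print(max_tar_entry)                 # 8589934591, i.e. 2**33 - 1 (8 GiB - 1)
print(14879990781 > max_tar_entry)   # True: this segment exceeds the cap
```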
  • Shounak Kulkarni

    06/04/2020, 12:18 PM
    where can i find implementation details for using adls?
  • Kenny Bastani

    06/04/2020, 2:24 PM
    We are working on a guide. Unfortunately, right now our documentation (GitBook) is down. I'll provide you links to the page where it will be when the docs are back online.
    👍 1
  • Kenny Bastani

    06/04/2020, 3:50 PM
    @Shounak Kulkarni Here is a link to the guide we're working on. @Kishore G do you have an existing resource for ADLS? https://docs.pinot.apache.org/basics/data-import/pinot-file-system/import-from-adls-azure
  • Kishore G

    06/04/2020, 4:05 PM
    @Seunghyun ^^
  • Seunghyun

    06/04/2020, 4:07 PM
    @Shounak Kulkarni https://github.com/apache/incubator-pinot/blob/8ff155a2a0bad5784d125d9e188fdf015acf5ec1/pinot-plugins/pinot-file-system/pinot-adls/src/main/java/org/apache/pinot/plugin/filesystem/ADLSGen2PinotFS.java
  • Shounak Kulkarni

    06/04/2020, 4:24 PM
    Thanks a lot @Kenny Bastani and @Seunghyun!
    👍 1
  • Subbu Subramaniam

    06/04/2020, 5:24 PM
    @Shounak Kulkarni in a large-scale system, with a reasonably uniform distribution of events across several partitions, it is very likely that the partitions will complete at the same time.
  • Subbu Subramaniam

    06/04/2020, 5:24 PM
    There is a setting that you can use to limit the number of segment builds in realtime streams.
  • Subbu Subramaniam

    06/04/2020, 5:25 PM
    let me look up the config for you
  • Subbu Subramaniam

    06/04/2020, 5:28 PM
    pinot.server.instance.realtime.max.parallel.segment.builds
    The default value is 0, meaning as many parallel builds are started as requested. You can experiment with this value to see what works for your use case
    👍 1
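Semantically, that setting behaves like a counting semaphore around segment builds: 0 means unbounded, N > 0 caps concurrency at N. A rough Python sketch of that behavior (names and structure are hypothetical, not Pinot code):

```python
import threading

MAX_PARALLEL_SEGMENT_BUILDS = 2  # 0 would mean "no limit"

# A semaphore gates how many builds may run at once; None disables the gate.
build_gate = (threading.Semaphore(MAX_PARALLEL_SEGMENT_BUILDS)
              if MAX_PARALLEL_SEGMENT_BUILDS > 0 else None)

def build_segment(segment_name):
    if build_gate:
        build_gate.acquire()
    try:
        # ... the memory-hungry segment build would happen here ...
        return f"built {segment_name}"
    finally:
        if build_gate:
            build_gate.release()

print(build_segment("metrics__0__42"))
```

Capping the number of simultaneous builds trades some latency for a bound on peak memory, which is the "extra memory to keep for segment creation" discussed below.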
  • Shounak Kulkarni

    06/04/2020, 7:11 PM
    @Subbu Subramaniam this is very helpful and will give us confidence about the extra memory to keep for segment creation. Thanks a lot!