# troubleshooting
  • Sukesh Boggavarapu
    07/21/2022, 8:39 PM
    Attachment: image.png
  • Gerrit van Doorn
    07/21/2022, 11:30 PM
    Hi team, I have a few fields that are arrays of byte arrays (BYTES). I made them multi-value by setting `"singleValueField": false` in the schema, but I’m confronted by the following error in the server:
    java.lang.UnsupportedOperationException: null
            at org.apache.pinot.segment.local.realtime.impl.dictionary.BytesOnHeapMutableDictionary.index(BytesOnHeapMutableDictionary.java:45) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
            at org.apache.pinot.segment.local.indexsegment.mutable.MutableSegmentImpl.updateDictionary(MutableSegmentImpl.java:538) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
            at org.apache.pinot.segment.local.indexsegment.mutable.MutableSegmentImpl.index(MutableSegmentImpl.java:486) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
            at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:550) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
            at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.consumeLoop(LLRealtimeSegmentDataManager.java:420) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
            at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:598) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
            at java.lang.Thread.run(Thread.java:829) [?:?]
    Is this related to https://github.com/apache/pinot/issues/8635? NOTE: I’m not ingesting JSON.
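    For reference, a minimal sketch of the field shape in question (hypothetical names; built with the pinot-spi schema builder, which is equivalent to "singleValueField": false in the JSON schema). The stack trace ends in BytesOnHeapMutableDictionary.index(), which suggests the mutable (consuming-segment) dictionary in 0.10.0 has no multi-value BYTES support, so a field declared this way fails at ingestion time:

    import org.apache.pinot.spi.data.FieldSpec.DataType;
    import org.apache.pinot.spi.data.Schema;

    public class MultiValueBytesSchema {
      public static void main(String[] args) {
        // A multi-value BYTES dimension, i.e. a dimensionFieldSpec with
        // dataType BYTES and "singleValueField": false in the JSON form.
        Schema schema = new Schema.SchemaBuilder()
            .setSchemaName("myTable")
            .addMultiValueDimension("myBytesArrays", DataType.BYTES)
            .build();
        System.out.println(schema.toSingleLineJsonString());
      }
    }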
  • Alexander Vivas
    07/22/2022, 9:21 AM
    @here good morning guys, can you recommend a tool to migrate Pinot’s ZooKeeper data from one cluster to another, ideally one that can also do some text replacements along the way?
  • Alice
    07/22/2022, 10:20 AM
    Hey team, we monitor some Pinot metrics, e.g. pinot_controller_segmentsInErrorState_Value, in Grafana, and noticed that some metric data (a large negative number) is still reported after the related table has been deleted in Pinot. Is there any way to filter out deleted tables’ metric data?
  • Lars-Kristian Svenøy
    07/22/2022, 1:24 PM
    Hello team 👋 Does reloading an offline table after adding a star-tree index work?
  • Sukesh Boggavarapu
    07/23/2022, 1:38 AM
    We are planning to use Spark ingestion with the Pinot 0.10.0 release. It ships with the Spark 2.4 library, but we have Spark 2.2. Will those jars work with 2.2 as well?
  • Gerrit van Doorn
    07/25/2022, 4:54 AM
    I’m trying to set up Pinot with a custom PinotFS for deep store usage. It’s currently unimplemented and just logs the params to see what happens. I’m trying to configure Pinot to decouple the controller from the data path. For testing purposes I have set `realtime.segment.flush.threshold.rows` to a low value. My assumption was that a server would try to copy data to the deep store once a segment is completed. However, I’m not seeing the custom PinotFS being used in the server, other than init(..) being called. I do see it being called in the controller once I hit `realtime.segment.flush.threshold.rows`. Shouldn’t I see the server doing this instead of the controller?
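    For anyone following along, a minimal sketch of the logging-only PinotFS described here (class name hypothetical, method set abridged; the real BasePinotFS has more abstract methods that would need the same treatment before this compiles):

    import java.io.File;
    import java.net.URI;
    import org.apache.pinot.spi.env.PinotConfiguration;
    import org.apache.pinot.spi.filesystem.BasePinotFS;

    // Logs every call so you can observe which component (controller vs.
    // server) actually touches the deep store, and when.
    public class LoggingPinotFS extends BasePinotFS {
      @Override
      public void init(PinotConfiguration config) {
        System.out.println("LoggingPinotFS.init(" + config + ")");
      }

      @Override
      public boolean copyFromLocalFile(File srcFile, URI dstUri) throws Exception {
        System.out.println("copyFromLocalFile(" + srcFile + ", " + dstUri + ")");
        return true;
      }

      // ... mkdir, delete, doMove, copyDir, exists, length, listFiles, etc.,
      // each logging its arguments the same way.
    }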
  • Sukesh Boggavarapu
    07/25/2022, 5:21 AM
    Hey guys, I am getting this exception
  • Sukesh Boggavarapu
    07/25/2022, 5:21 AM
    at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
            at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
            at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
            at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:128)
            at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:154)
            at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:107)
            at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:162)
            at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:91)
            at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
            at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
            at software.amazon.awssdk.services.s3.DefaultS3Client.putObject(DefaultS3Client.java:8123)
            at org.apache.pinot.plugin.filesystem.S3PinotFS.mkdir(S3PinotFS.java:305)
            ... 23 more
    Caused by: java.lang.invoke.LambdaConversionException: Invalid receiver type interface org.apache.http.Header; not a subtype of implementation type interface org.apache.http.NameValuePair
            at java.lang.invoke.AbstractValidatingLambdaMetafactory.validateMetafactoryArgs(AbstractValidatingLambdaMetafactory.java:233)
            at java.lang.invoke.LambdaMetafactory.metafactory(LambdaMetafactory.java:303)
            at java.lang.invoke.CallSite.makeSite(CallSite.java:302)
            ... 69 more
    java.lang.RuntimeException: Caught exception during running - org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner
            at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:148)
            at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:117)
            at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.execute(LaunchDataIngestionJobCommand.java:121)
            at org.apache.pinot.tools.Command.call(Command.java:33)
            at org.apache.pinot.tools.Command.call(Command.java:29)
            at picocli.CommandLine.executeUserObject(CommandLine.java:1953)
            at picocli.CommandLine.access$1300(CommandLine.java:145)
            at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2352)
            at picocli.CommandLine$RunLast.handle(CommandLine.java:2346)
            at picocli.CommandLine$RunLast.handle(CommandLine.java:2311)
            at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179)
            at picocli.CommandLine.execute(CommandLine.java:2078)
            at org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand.main(LaunchDataIngestionJobCommand.java:153)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
            at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
            at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
            at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
            at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.io.IOException: java.lang.BootstrapMethodError: call site initialization exception
            at org.apache.pinot.plugin.filesystem.S3PinotFS.mkdir(S3PinotFS.java:308)
            at org.apache.pinot.plugin.ingestion.batch.spark.SparkSegmentGenerationJobRunner.run(SparkSegmentGenerationJobRunner.java:145)
            at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:146)
            ... 21 more
    Caused by: java.lang.BootstrapMethodError: call site initialization exception
            at java.lang.invoke.CallSite.makeSite(CallSite.java:341)
            at java.lang.invoke.MethodHandleNatives.linkCallSiteImpl(MethodHandleNatives.java:307)
            at java.lang.invoke.MethodHandleNatives.linkCallSite(MethodHandleNatives.java:297)
            at software.amazon.awssdk.http.apache.ApacheHttpClient.transformHeaders(ApacheHttpClient.java:289)
            at software.amazon.awssdk.http.apache.ApacheHttpClient.createResponse(ApacheHttpClient.java:274)
            at software.amazon.awssdk.http.apache.ApacheHttpClient.execute(ApacheHttpClient.java:254)
            at software.amazon.awssdk.http.apache.ApacheHttpClient.access$500(ApacheHttpClient.java:106)
            at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:232)
            at software.amazon.awssdk.http.apache.ApacheHttpClient$1.call(ApacheHttpClient.java:229)
            at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:64)
            at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.executeHttpRequest(MakeHttpRequestStage.java:76)
            at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:55)
            at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeHttpRequestStage.execute(MakeHttpRequestStage.java:39)
            at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
            at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
            at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
  • Sukesh Boggavarapu
    07/25/2022, 5:22 AM
    This happens when trying to run an offline job using Spark. I tried both the 0.10.0 release version and the 0.11.0-SNAPSHOT version.
  • Sukesh Boggavarapu
    07/25/2022, 5:24 AM
    Any idea what this error could be? Searching online for `Invalid receiver type interface org.apache.http.Header; not a subtype of implementation type interface org.apache.http.NameValuePair` suggests it could be a version mismatch between the AWS SDK and httpclient, but I am not sure if that is the case.
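    A generic way to check for that kind of clash (a hedged debugging sketch, not Pinot-specific) is to print where the two interfaces are actually loaded from:

    // If org.apache.http.Header and org.apache.http.NameValuePair resolve to
    // different httpclient/httpcore jars on the classpath, the two printed
    // locations will disagree, confirming a version mismatch.
    public class ClasspathCheck {
      public static void main(String[] args) {
        System.out.println(org.apache.http.Header.class
            .getProtectionDomain().getCodeSource().getLocation());
        System.out.println(org.apache.http.NameValuePair.class
            .getProtectionDomain().getCodeSource().getLocation());
      }
    }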
  • Jatin Kumar
    07/25/2022, 6:46 PM
    Hello, one question: in the controller we define `controller.data.dir` to point to an S3 location; is there no need to define a similar property in the server config? How does the server know where to push segments to the deep store? On this page the server properties don’t include an S3 location: https://docs.pinot.apache.org/users/tutorials/use-s3-as-deep-store-for-pinot
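    For context, the linked tutorial does give the server its own PinotFS settings, but only for fetching: the deep-store location itself is defined once by `controller.data.dir`, and completed segments are uploaded through the controller by default. A hedged sketch of those server properties in Java form (property names as given on that docs page; values are placeholders):

    import java.util.Properties;

    public class ServerDeepStoreConf {
      public static void main(String[] args) {
        Properties serverConf = new Properties();
        // The server registers an S3 PinotFS and fetcher so it can download
        // segments from the deep store; it has no data-dir property of its own.
        serverConf.put("pinot.server.storage.factory.class.s3",
            "org.apache.pinot.plugin.filesystem.S3PinotFS");
        serverConf.put("pinot.server.storage.factory.s3.region", "us-west-2");
        serverConf.put("pinot.server.segment.fetcher.protocols", "file,http,s3");
        serverConf.put("pinot.server.segment.fetcher.s3.class",
            "org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher");
        serverConf.forEach((k, v) -> System.out.println(k + "=" + v));
      }
    }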
  • Daniel
    07/25/2022, 10:33 PM
    Hi. I am trying to submit a Spark job for the offline batch ingestion process. I am running Spark 2.2, and executors are being added and removed continuously without any work being done. Here is a gist of the log output: https://gist.github.com/drojas1/50556945f0491306dcf24c2ccd499446 Can I get any advice about this issue?
  • Alice
    07/26/2022, 2:37 AM
    Hi team, I have a question about 434 rows of data being lost. In my case, Flink is used to do data transformation and then send the data to Kafka, with transactions enabled. Therefore, ‘read_committed’ is enabled in our table config. During data validation, we found some records were not ingested by Pinot, and there is no error in the server log. To trace the root cause, we created a new table with the same table config and found the missing records in this new table. So my question is: what is the possible reason for the data going missing the first time?
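    For background, a minimal sketch of the consumer-side semantics involved (plain Kafka consumer API, not Pinot’s stream config keys; broker address and group id are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ReadCommittedConsumer {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "validation-check");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        // With read_committed, only records from committed transactions are
        // visible, and consumption stalls at the first still-open transaction
        // (the last stable offset); a common source of rows that appear
        // "missing" until the producer transaction commits or aborts.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
          // subscribe and poll as usual ...
        }
      }
    }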
  • Jeff Behl
    07/26/2022, 9:41 AM
    hi all - trying to use the RealtimeProvisioningHelper:
    bin/pinot-admin.sh RealtimeProvisioningHelper -tableConfigFile /tmp/table.json -numPartitions 1 -numHosts 8,6,10 -numHours 6,12,18,24 -sampleCompletedSegmentDir /var/pinot/server/data/index/aws_flowlogs_REALTIME/aws_flowlogs__2__3__20220725T2132Z/ -ingestionRate 100000 -maxUsableHostMemory 20G -retentionHours 72
    which seems to be doing its thing until:
    Allocating 32768 bytes for: aws_flowlogs__2__3__20220725T2132Z:srcport.dict
    Allocating 32768 bytes for: aws_flowlogs__2__3__20220725T2132Z:dstport.dict
    Trying to destroy segment : aws_flowlogs__2__3__20220725T2132Z
    Trying to close RealtimeSegmentImpl : aws_flowlogs__2__3__20220725T2132Z
    Segment used 45236440 bytes of memory for 337500 rows consumed in 4 seconds
    Allocating byte array store buffer of size 71680 for: aws_flowlogs__2__3__20220725T2132Z:srcaddr.dict
    Allocating -4 bytes for: aws_flowlogs__2__3__20220725T2132Z:srcaddr.sv.unsorted.fwd
    java.lang.IllegalArgumentException: Illegal memory allocation -4 for segment aws_flowlogs__2__3__20220725T2132Z column aws_flowlogs__2__3__20220725T2132Z:srcaddr.sv.unsorted.fwd
    	at shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:122)
    	at org.apache.pinot.segment.local.io.readerwriter.RealtimeIndexOffHeapMemoryManager.allocate(RealtimeIndexOffHeapMemoryManager.java:78)
    	at org.apache.pinot.segment.local.realtime.impl.forward.FixedByteSVMutableForwardIndex.addBuffer(FixedByteSVMutableForwardIndex.java:208)
    	at org.apache.pinot.segment.local.realtime.impl.forward.FixedByteSVMutableForwardIndex.<init>(FixedByteSVMutableForwardIndex.java:77)
    	at org.apache.pinot.segment.local.indexsegment.mutable.MutableSegmentImpl.<init>(MutableSegmentImpl.java:310)
    	at org.apache.pinot.controller.recommender.realtime.provisioning.MemoryEstimator.getMemoryForConsumingSegmentPerPartition(MemoryEstimator.java:339)
    	at org.apache.pinot.controller.recommender.realtime.provisioning.MemoryEstimator.estimateMemoryUsed(MemoryEstimator.java:272)
    ....
    I’m not clear on what the cause of this is.
  • Jeff Behl
    07/26/2022, 9:41 AM
    thanks in advance
  • Jeff Behl
    07/26/2022, 9:43 AM
    Also not clear if this is the preferred approach, or whether to use the REST route `/tables/recommender` instead.
  • Rajan Garg
    07/26/2022, 3:10 PM
    I am trying to create a Pinot schema using the Swagger API, deployed on a K8s cluster, but I am getting this error:
    {
      "code": 403,
      "error": "Permission is denied for access type 'READ' to the endpoint '<http://dev-pie-pinot.aws.phenom.local/schemas>'"
    }
    Can someone help me out?
  • James Kelleher
    07/26/2022, 3:34 PM
    Hello, I am trying to follow the streaming ingestion tutorial to the best of my ability. I am able to use the console consumer to read from my topic, and I am able to connect my Pinot table to Kafka, but Pinot is unable to read data. I am attaching the schema file and the runbook I am using to set up the cluster.
    Attachments: transcript-table-realtime.json, docker_runbook.sh
  • Mugdha Goel
    07/26/2022, 4:27 PM
    Hello, I have a scenario with 3 HYBRID tables. One of them, the biggest, has 16 columns, a few of which are JSON columns. I have set them up with a deep store in a GCS bucket, and Pinot runs in a Kubernetes Autopilot cluster with 4 server pods. Of the 3 tables, the biggest (Transaction_records) is always scheduled onto the same Pinot server. I am also ingesting realtime data from Kafka and have minion tasks set up to take 1 day’s worth of data and create offline segments. This works perfectly for the other 2 tables (called balance and records in my case), but transactions always ends up ingesting segments that are displayed as bad segments. The UI shows the following error, and when I look at the pod logs for this server I see the following exceptions:
    2022-07-26T16:19:09.950643570Z java.lang.RuntimeException: java.lang.IllegalStateException
        at org.apache.pinot.core.data.manager.realtime.RealtimeTableDataManager.replaceLLSegment(RealtimeTableDataManager.java:535) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.core.data.manager.realtime.RealtimeTableDataManager.untarAndMoveSegment(RealtimeTableDataManager.java:483) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.core.data.manager.realtime.RealtimeTableDataManager.downloadSegmentFromDeepStore(RealtimeTableDataManager.java:459) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.core.data.manager.realtime.RealtimeTableDataManager.downloadAndReplaceSegment(RealtimeTableDataManager.java:432) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.downloadSegmentAndReplace(LLRealtimeSegmentDataManager.java:1126) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.goOnlineFromConsuming(LLRealtimeSegmentDataManager.java:1069) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.server.starter.helix.SegmentOnlineOfflineStateModelFactory$SegmentOnlineOfflineStateModel.onBecomeOnlineFromConsuming(SegmentOnlineOfflineStateModelFactory.java:115) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at jdk.internal.reflect.GeneratedMethodAccessor381.invoke(Unknown Source) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
        at org.apache.helix.messaging.handling.HelixStateTransitionHandler.invoke(HelixStateTransitionHandler.java:404) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.helix.messaging.handling.HelixStateTransitionHandler.handleMessage(HelixStateTransitionHandler.java:331) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.helix.messaging.handling.HelixTask.call(HelixTask.java:97) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.helix.messaging.handling.HelixTask.call(HelixTask.java:49) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:829) [?:?]
    Caused by: java.lang.IllegalStateException
        at shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:429) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.local.segment.index.readers.forward.BaseChunkSVForwardIndexReader.<init>(BaseChunkSVForwardIndexReader.java:72) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.local.segment.index.readers.forward.FixedByteChunkMVForwardIndexReader.<init>(FixedByteChunkMVForwardIndexReader.java:40) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.local.segment.index.readers.DefaultIndexReaderProvider.newForwardIndexReader(DefaultIndexReaderProvider.java:104) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.spi.index.IndexingOverrides$Default.newForwardIndexReader(IndexingOverrides.java:205) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.local.segment.index.column.PhysicalColumnIndexContainer.<init>(PhysicalColumnIndexContainer.java:166) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader.load(ImmutableSegmentLoader.java:181) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader.load(ImmutableSegmentLoader.java:121) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.segment.local.indexsegment.immutable.ImmutableSegmentLoader.load(ImmutableSegmentLoader.java:91) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        at org.apache.pinot.core.data.manager.realtime.RealtimeTableDataManager.replaceLLSegment(RealtimeTableDataManager.java:533) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-078c711d35769be2dc4e4b7e235e06744cf0bba7]
        ... 17 more
  • Luis Fernandez
    07/26/2022, 5:07 PM
    If I use a reserved keyword like timestamp as a column name, can I still query it? If so, how would I do it?
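    For what it’s worth, a small sketch (table and column names hypothetical): in Pinot’s SQL dialect, wrapping an identifier in double quotes keeps a column named timestamp from being parsed as the keyword.

    public class ReservedKeywordQuery {
      public static void main(String[] args) {
        // Double quotes escape identifiers, so "timestamp" here is a column
        // reference rather than the TIMESTAMP keyword.
        String query = "SELECT \"timestamp\", COUNT(*) FROM myTable "
            + "GROUP BY \"timestamp\" LIMIT 10";
        System.out.println(query);
      }
    }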
  • Sukesh Boggavarapu
    07/26/2022, 5:24 PM
    @Kartik Khare Thank you so much for helping me figure out the Spark issues and get my offline ingestion working.
  • Sukesh Boggavarapu
    07/26/2022, 5:24 PM
    Appreciate it a lot.
  • Priyank Bagrecha
    07/26/2022, 10:23 PM
    How do I configure Swagger to use https instead of http? I am using the community-provided Helm charts to deploy Pinot on a k8s cluster. If it matters, the k8s cluster uses Istio for ingress. I am not sure where to look, hence the question.
  • Priyank Bagrecha
    07/26/2022, 11:03 PM
    Another question: I am deploying Pinot via the community-provided Helm charts, and it seems like the server just fails to restart, but I am not sure why.
    Attachment: server_logs.txt
  • Gerrit van Doorn
    07/27/2022, 2:41 AM
    I’m implementing similar semantics to S3PinotFS and in a unit test noticed that normalizeToDirectoryUri() might contain a bug. Testing that line in isolation, like this:
    URI foo = new URI("foo", "my-host", "data/dir_2/", null);
    results in:
    java.net.URISyntaxException: Relative path in absolute URI: foo://my-hostdata/dir_2/
    In my code I replaced this with:
    return new URI(uri.getScheme(), uri.getHost(), DELIMITER + sanitizePath(uri.getPath() + DELIMITER), null);
    This code is only used by `copyDir`, which is used in `copy` of PinotFS, and by `doMove`, which in turn is used by `move` of BasePinotFS.
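    A standalone reproduction of the constructor behavior, independent of Pinot, for anyone who wants to verify:

    import java.net.URI;
    import java.net.URISyntaxException;

    public class UriNormalizationDemo {
      public static void main(String[] args) throws URISyntaxException {
        // The multi-argument URI constructor treats a path that does not
        // start with '/' as relative, so combining it with a host fails.
        try {
          new URI("foo", "my-host", "data/dir_2/", null);
        } catch (URISyntaxException e) {
          // Relative path in absolute URI: foo://my-hostdata/dir_2/
          System.out.println(e.getMessage());
        }

        // Prepending the delimiter, as in the replacement above, yields a
        // valid URI ("/" stands in for the DELIMITER constant).
        URI fixed = new URI("foo", "my-host", "/" + "data/dir_2/", null);
        System.out.println(fixed); // foo://my-host/data/dir_2/
      }
    }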
  • Cheguri Vinay Goud
    07/26/2022, 7:38 AM
    Hello, I'm getting the below error while executing a RealtimeToOfflineSegmentsTask. Can someone please help me with this?
    Caused by: org.apache.pinot.common.exception.HttpErrorStatusException: Got error status code: 403 (Forbidden) with reason: "Permission is denied for access type 'READ' to the endpoint 'http://pinot-controller-0.pinot-controller-headless.pie-ml.svc.cluster.local:9000/segments/ts_transcript/ts_transcript__0__142__20220725T1207Z' for table 'ts_transcript'" while sending request: http://10.1.93.246:9000/segments/ts_transcript/ts_transcript__0__142__20220725T1207Z to controller: pinot-controller-0.pinot-controller-headless.pie-ml.svc.cluster.local, version: Unknown
    	at org.apache.pinot.common.utils.FileUploadDownloadClient.downloadFile(FileUploadDownloadClient.java:1148) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.common.utils.FileUploadDownloadClient.downloadFile(FileUploadDownloadClient.java:1217) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.common.utils.fetcher.HttpSegmentFetcher.lambda$fetchSegmentToLocal$0(HttpSegmentFetcher.java:66) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    I'm attaching the REALTIME table config for reference.
    Attachment: realtime.json
  • Cheguri Vinay Goud
    07/27/2022, 11:00 AM
    Hello, can someone please help me with the below error while executing a RealtimeToOfflineSegmentsTask?
    Caught exception while executing task: Task_RealtimeToOfflineSegmentsTask_1658916300778_0
    java.lang.IllegalStateException: RealtimeToOfflineSegmentsTaskMetadata ZNRecord for table: ts_demo_REALTIME should not be null. Exiting task.
    	at shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:518) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.plugin.minion.tasks.realtimetoofflinesegments.RealtimeToOfflineSegmentsTaskExecutor.preProcess(RealtimeToOfflineSegmentsTaskExecutor.java:94) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.plugin.minion.tasks.BaseMultipleSegmentsConversionExecutor.executeTask(BaseMultipleSegmentsConversionExecutor.java:125) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.plugin.minion.tasks.BaseMultipleSegmentsConversionExecutor.executeTask(BaseMultipleSegmentsConversionExecutor.java:60) ~[pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.runInternal(TaskFactoryRegistry.java:111) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.pinot.minion.taskfactory.TaskFactoryRegistry$1.run(TaskFactoryRegistry.java:88) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at org.apache.helix.task.TaskRunner.run(TaskRunner.java:71) [pinot-all-0.10.0-jar-with-dependencies.jar:0.10.0-30c4635bfeee88f88aa9c9f63b93bcd4a650607f]
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    	at java.lang.Thread.run(Thread.java:829) [?:?]
    Task: Task_RealtimeToOfflineSegmentsTask_1658916300778_0 completed in: 88ms
  • Rajkumar Samayanathan
    07/27/2022, 4:48 PM
    Hi team, I need some help. I was able to connect Pinot and the table, but I couldn't get it working in "Live" mode, so I switched from "Live" to "Extract" and worked with that. That works fine, but when I refresh the data source it shows this error: Unexpected Error. Failed to execute query: SELECT WelcomeEffortsReport.CallMade AS CallMade, CAST(WelcomeEffortsReport.ContactedDate AS DATE) AS ContactedDate, WelcomeEffortsReport.DidnotReachPt AS DidnotReachPt, WelcomeEffortsReport.TrainingProvided AS TrainingProvided FROM WelcomeEffortsReport LIMIT 100. Error Code: FAB9A2C5. Unable to create extract.
  • Priyank Bagrecha
    07/27/2022, 5:22 PM
    How are folks deploying ZooKeeper for their Pinot cluster in production? Are you using the Helm charts as-is? Are you customizing them? Or are you using Helm charts for the other components and deploying ZooKeeper your own way?