# general
  • Kishore G

    01/15/2020, 9:16 PM
    Amazing blog by @User on achieving Full SQL on top of Pinot by integrating it with Presto. https://eng.uber.com/engineering-sql-support-on-apache-pinot/
    👏 1
    👍 7
  • Haibo Wang

    02/07/2020, 7:55 PM
    We will be hosting a Pinot meetup on March 17th, Tuesday, at the Uber Palo Alto office. Please RSVP at https://www.meetup.com/apache-pinot/events/268542155/
    👍 5
    👏 1
  • Giorgio Zoppi

    02/10/2020, 5:02 PM
    I am typically an interop guy, so I might write clients.
  • Kishore G

    02/11/2020, 2:03 AM
    @User did you add the Task reminder app?
  • Seunghyun

    02/13/2020, 12:15 AM
    By the way, one issue that I see with our workspace is that it only stores messages up to the 10k limit, and the thread discussions are not stored.
  • Harshini Elath

    02/14/2020, 5:26 PM
    Hi, I am trying to delete a Pinot segment via the REST API, but it doesn't get deleted. Why would that happen?
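For context, segment deletion goes through the controller's REST API; a minimal sketch, assuming the controller runs on localhost:9000 and an offline table named myTable (both assumptions — check the exact path in your controller's Swagger page). Note also that deletion is asynchronous: the segment is moved aside and cleaned up later by the retention manager, so it may not disappear immediately.

```shell
# Assumption: controller at localhost:9000, offline table "myTable".
# Verify the exact endpoint in the controller's Swagger UI first.
curl -X DELETE "http://localhost:9000/segments/myTable/myTable_segment_0?type=OFFLINE"
```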
  • lsabi

    02/14/2020, 10:07 PM
    Hi community, since I don't directly follow the development of Pinot, would anyone mind helping me write the release notes? https://apache-pinot.gitbook.io/apache-pinot-cookbook/releases/1.0.0 Thanks (note: you can also write the features/bug fixes below and I'll report them in the release notes)
  • Subbu Subramaniam

    02/14/2020, 10:25 PM
    I did the release notes for 0.2.0, and compiled them by going through the PRs submitted between 0.1.0 and 0.2.0. A bit painful, but it should not be as bad between 0.2.0 and 1.0.0, I think.
  • Harshini Elath

    02/14/2020, 10:52 PM
    { "selectionResults": { "columns": [ "audit_date" ], "results": [] }, "exceptions": [ { "errorCode": 200, "message": "QueryExecutionError\njava.lang.RuntimeException Caught exception while building data table.\n\tat org.apache.pinot.core.operator.blocks.InstanceResponseBlock.<init>(InstanceResponseBlock.java:46)\n\tat org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:37)\n\tat org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:26)\n\tat org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:48)\n\tat org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:48)\n\tat org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:213)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:152)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.lambda$createQueryFutureTask$0(QueryScheduler.java:136)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat shaded.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)\n\tat shaded.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)\n\tat shaded.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)" }, { "errorCode": 200, "message": "QueryExecutionError\njava.lang.RuntimeException Caught exception while building data table.\n\tat org.apache.pinot.core.operator.blocks.InstanceResponseBlock.<init>(InstanceResponseBlock.java:46)\n\tat org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:37)\n\tat 
org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:26)\n\tat org.apache.pinot.core.operator.BaseOperator.nextBlock(BaseOperator.java:48)\n\tat org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:48)\n\tat org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:213)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:152)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.lambda$createQueryFutureTask$0(QueryScheduler.java:136)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat shaded.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)\n\tat shaded.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)\n\tat shaded.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)" } ], "numServersQueried": 2, "numServersResponded": 2, "numSegmentsQueried": 394, "numSegmentsProcessed": 0, "numSegmentsMatched": 0, "numConsumingSegmentsQueried": 0, "numDocsScanned": 0, "numEntriesScannedInFilter": 0, "numEntriesScannedPostFilter": 0, "numGroupsLimitReached": false, "totalDocs": 0, "timeUsedMs": 7, "segmentStatistics": [], "traceInfo": {}, "minConsumingFreshnessTimeMs": 0 }
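Responses like the one above bury the useful signal in escaped stack traces. A small sketch (not part of Pinot; purely illustrative) that pulls the error codes and the first line of each server exception out of such a broker response:

```python
# Trimmed-down copy of the broker response pasted above; only the
# fields this sketch touches are kept.
response = {
    "exceptions": [
        {"errorCode": 200,
         "message": "QueryExecutionError\njava.lang.RuntimeException Caught exception while building data table."},
        {"errorCode": 200,
         "message": "QueryExecutionError\njava.lang.RuntimeException Caught exception while building data table."},
    ],
    "numServersQueried": 2,
    "numServersResponded": 2,
}

def summarize_exceptions(resp):
    """Return (errorCode, first message line) for each server exception."""
    return [(e["errorCode"], e["message"].splitlines()[0])
            for e in resp.get("exceptions", [])]

print(summarize_exceptions(response))
# → [(200, 'QueryExecutionError'), (200, 'QueryExecutionError')]
```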
  • Sidd

    02/14/2020, 11:21 PM
    Caught exception while building data table.\n\tat org.apache.pinot.core.operator.blocks.InstanceResponseBlock.<init>(InstanceResponseBlock.java:46)\n\tat org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:37)\n\tat org.apache.pinot.core.operator.InstanceResponseOperator.getNextBlock(InstanceResponseOperator.java:26)\n\tat
  • Alex

    02/17/2020, 6:36 PM
    What do you usually do when indexes change? Recreate segments outside of the cluster using deep storage and load them? Or rebuild those segments using Pinot segment files? Or something else?
  • Jackie

    02/20/2020, 1:06 AM
    <!here> As we have made several enhancements and optimizations to the star-tree index (learn more about star-tree here: https://pinot.readthedocs.io/en/latest/star-tree/star-tree.html), we plan to remove support for the old star-tree index. If you have use cases with the old star-tree index (generated with StarTreeIndexSpec), please let us know and we can figure out a way to migrate to the new one.
  • Sidd

    02/20/2020, 11:10 AM
    You can actually use the maxLength property in FieldSpec.
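A minimal schema fragment showing where that property lives (the column name and length here are made up for illustration; by default, STRING values longer than the configured maxLength get truncated):

```json
{
  "dimensionFieldSpecs": [
    {
      "name": "description",
      "dataType": "STRING",
      "maxLength": 2048
    }
  ]
}
```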
  • Suraj

    02/20/2020, 10:56 PM
    Hello, how do we reset the offset of a Pinot table (current segment, using the simple consumer) to, say, the latest?
  • Seunghyun

    02/21/2020, 6:14 PM
    @User Does pinot-distribution's quick-start script work on your env?
    ~/workspace/pinot/pinot-distribution/target/apache-pinot-incubating-0.3.0-SNAPSHOT-bin/apache-pinot-incubating-0.3.0-SNAPSHOT-bin/bin master
    ❯ ./quick-start-offline.sh
    Error: Could not find or load main class org.apache.pinot.tools.Quickstart
    
    ~/workspace/pinot/pinot-distribution/target/apache-pinot-incubating-0.3.0-SNAPSHOT-bin/apache-pinot-incubating-0.3.0-SNAPSHOT-bin/lib master
    ❯ ll
    total 163968
    -rw-r--r--  1 snlee  LINKEDIN\eng    77M Feb 20 23:39 pinot-all-0.3.0-SNAPSHOT-jar-with-dependencies.jar
  • Xiang Fu

    02/24/2020, 9:35 AM
    let me check
  • Subbu Subramaniam

    02/25/2020, 7:32 PM
    Good to go, except that the docs are in the wrong place. Please update the docs in GitBook.
  • Mayank

    02/26/2020, 11:14 PM
    Star-tree indexes have to be generated at segment index generation time and, unlike inverted indexes, currently cannot be created dynamically at load time.
  • Kishore G

    03/05/2020, 5:02 PM
    Is there a star-tree.bin or something like that under the segment directory?
  • Neha Pawar

    03/05/2020, 5:20 PM
    @User you need to put that config inside starTreeIndexConfigs. I just updated the doc to reflect that: https://apache-pinot.gitbook.io/apache-pinot-cookbook/indexing#example
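For reference, a sketch of what the linked doc describes: the star-tree config nests under tableIndexConfig.starTreeIndexConfigs (the column names here are invented; the field names follow the doc):

```json
{
  "tableIndexConfig": {
    "starTreeIndexConfigs": [
      {
        "dimensionsSplitOrder": ["country", "browser"],
        "skipStarNodeCreationForDimensions": [],
        "functionColumnPairs": ["SUM__impressions"],
        "maxLeafRecords": 10000
      }
    ]
  }
}
```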
  • Mayank

    03/05/2020, 5:24 PM
    I was looking at the config above and was expecting an enableStarTree: true config. And when I looked at the code, I saw the same thing you posted. And was contemplating that a tool to validate the config would have helped here.
    ➕ 1
  • veera vissa

    03/05/2020, 6:09 PM
    After updating the table config with starTreeIndexConfigs, we are getting the below error
  • Neha Pawar

    03/06/2020, 9:50 PM
    Will take a look. Do you see anything in pinotController.log?
  • Dan Hill

    03/08/2020, 12:38 AM
    I'm working on an ad network. I'm evaluating using Pinot to aggregate ad events (impressions, clicks). One of the use cases is to integrate it with the management API (e.g. list campaigns ordered by clicks desc).
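For the "campaigns ordered by clicks" case, the query shape would be a plain aggregation with group-by ordering; a hedged sketch (table and column names invented, and ordering on a group-by aggregate assumes the SQL endpoint rather than legacy PQL):

```sql
SELECT campaignId, COUNT(*) AS clicks
FROM adEvents
WHERE eventType = 'click'
GROUP BY campaignId
ORDER BY clicks DESC
LIMIT 100
```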
  • Dan Hill

    03/08/2020, 5:29 PM
    Is there an easy way to transform the incoming records by specifying a function in the configs? Just checking.
  • Kishore G

    03/08/2020, 5:35 PM
    It’s WIP. There is a concept of applying functions during ingestion.
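A sketch of the shape this ingestion-time transform work eventually took in the table config (field names assumed from later Pinot docs, not from this thread; epochMillis and dateStr are invented columns):

```json
{
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "epochMillis",
        "transformFunction": "fromDateTime(dateStr, 'yyyy-MM-dd')"
      }
    ]
  }
}
```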
  • Dan Hill

    03/08/2020, 8:06 PM
    Has anyone implemented a Kinesis stream ingestor?
  • Dan Hill

    03/10/2020, 4:46 AM
    Is there an easy way to expose the Kafka started in that container? I tried adding both -p 19092:19092 and -p 9092:9092 on that command line, but it didn't work. It looks like the Dockerfile doesn't expose it.
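A sketch of one common approach, with assumed image name and env var: beyond mapping the port, Kafka has to advertise a listener reachable from the host, otherwise clients get back the container-internal address on first contact:

```shell
# Assumptions: a Kafka image that honors KAFKA_ADVERTISED_LISTENERS,
# broker listening on 9092. A -p mapping alone is not enough, because
# the broker hands clients its advertised address, which by default
# is only resolvable inside the container network.
docker run -p 9092:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  <kafka-image>
```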
  • AnishKanth

    03/10/2020, 9:22 PM
    @User We tried to create segments through Spark for an offline table. We set the configurations 'exclude.sequence.id=true' and 'segment.name.generator.type=normalizedDate', and the segments created did not have a sequence id (TABLE_2020-03-11_2020-03-12.tar.gz). But when we create the segments with the 'exclude.sequence.id=true' configuration alone (without 'segment.name.generator.type=normalizedDate'), the segments are created with a sequence id (TABLE_18025_18027_155). We want to create segments without the sequence id (TABLE_18025_18027). Could you please help? Also, for the config segment.name.generator.type, can we give normalizedSeconds? We are dealing with hourly data, so we would like segment names like Table_2020-03-11-000000_2020-03-11-040000.
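As a sanity check on those segment names: the 18025_18027 parts look like a time range in days since epoch (an assumption based on the values), which the normalizedDate generator would render as calendar dates. A quick sketch of that conversion:

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def epoch_days_to_date(days):
    """Convert a days-since-epoch value (as seen in raw segment
    names like TABLE_18025_18027_155) to a calendar date."""
    return EPOCH + timedelta(days=days)

start, end = epoch_days_to_date(18025), epoch_days_to_date(18027)
print(start, end)  # 2019-05-09 2019-05-11
```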
  • Xiang Fu

    03/12/2020, 5:06 PM
    It's in your ingestion job