# troubleshooting
  • Kishore G (07/13/2021, 6:41 AM)
    inverted index will not help
  • Jonathan Meyer (07/13/2021, 3:25 PM)
    Hello. Is it possible to atomically replace multiple segments? I remember reading about that being an upcoming feature, but I can see these 2 endpoints already exist. Is it possible to use these when ingesting data via the CLI?
  • Saurabh Dwivedy (07/14/2021, 1:55 PM)
    Okay, my problem is solved. The path I was providing to the batch-job-spec.yml was not correct.
  • Kamal Chavda (07/14/2021, 2:38 PM)
    Hi All, I saw this thread about deleting tables/segments (https://apache-pinot.slack.com/archives/C011C9JHN7R/p1623800839135400) and wanted to confirm: to delete a table in a dev environment, I can use the API to delete the table, manually delete the segments in my deep store (S3), and that would be it?
  • Luiz Gabriel Lima Pinheiro (07/14/2021, 4:09 PM)
    @Kishore G, this is for production
  • Bruce Ritchie (07/14/2021, 8:48 PM)
    Has anyone recently used the Spark batch ingestion to ingest Parquet files? If so, what version of Spark did you use? Do you have timestamp columns in your data? I'm having no end of issues actually getting it to ingest the data. Issues I've encountered so far:
      • JDK 11 issue with an old Apache commons-lang3 version. Workaround: update the dependency in Pinot.
      • Parquet version mismatch between EMR 6.3.0 (Spark 3.1.1) and Pinot master causing methodNotFoundException issues. Workaround: rev parquet and avro to newer versions and shade them in Pinot.
      • INT96 timestamp type unsupported in the parquet-avro integration. Workaround attempts include using the native Parquet reader (fails as below) and trying conf.set("parquet.avro.readInt96AsFixed", "true"), which reads the timestamp as bytes but fails in DataTypeTransformer/PinotDataType when attempting to parse it as a long.
      • Native Parquet reader fails with odd errors: FileNotFoundException: File does not exist: /mnt/yarn/usercache/hadoop/appcache/application_1626274417373_0005/container_1626274417373_0005_01_000004/tmp/pinot-f6020dd1-9bdf-4ac1-b1b8-343bb1af5a50/input/part-29508-674459c7-acf4-42b7-84f4-1752dd3ac7bd.c000.snappy.parquet -- no clue as to the cause of this one.
      • Columns in the path (/data/TransactionDateYear=2016/TransactionDateMonth=02/someparquetfile.parquet) are not detected. I was hoping the native Parquet reader might be smart enough to detect those, but I think it's failing before then.
  • Deepak Mishra (07/15/2021, 11:27 AM)
    Hello everyone, I am ingesting realtime data via Kafka with 55 records and have set "maxNumRecordsPerSegment": 10, expecting 5 segments to be generated, but it is showing only 1 segment generated.
  • Azri Jamil (07/15/2021, 12:55 PM)
    For some reason I had to restart the server, and later when it starts I'm getting this error; it seems like it cannot find this record in Zookeeper. What is the best workaround for this?
    Copy code
    WARN [ZkBaseDataAccessor] [ZkClient-EventThread-17-pinot-zookeeper:2181] Fail to read record for paths: {/mdm-analytic/INSTANCES/Server_pinot-server-0.pinot-server-headless.default.svc.cluster.local_8098/MESSAGES/219cd205-8c0c-455b-b509-d26b525dfa7c=-101, /mdm-analytic/INSTANCES/Server_pinot-server-0.pinot-server-headless.default.svc.cluster.local_8098/MESSAGES/269cb90d-0d58-473c-b8ca-7ef892d69da5=-101, /mdm-analytic/INSTANCES/Server_pinot-server-0.pinot-server-headless.default.svc.cluster.local_8098/MESSAGES/0142c30b-740d-4f80-ab61-87110deb8783=-101}
  • Laxman Ch (07/16/2021, 6:11 PM)
    Seeing some query failures, with the following errors appearing intermittently on the Pinot server:
    Copy code
    2021/07/16 18:00:42.408 INFO [QueryScheduler] [pqr-1] Processed requestId=55381,table=domainEventView_REALTIME,segments(queried/processed/matched/consuming)=20/14/2/3,schedulerWaitMs=0,reqDeserMs=0,totalExecMs=1,resSerMs=0,totalTimeMs=2,minConsumingFreshnessMs=1626458256041,broker=Broker_pinot-broker-1.pinot-broker.traceable.svc.cluster.local_8099,numDocsScanned=209,scanInFilter=940,scanPostFilter=426,sched=fcfs
    2021/07/16 18:00:42.407 ERROR [ServerQueryExecutorV1Impl] [pqr-0] Exception processing requestId 55381
    java.lang.RuntimeException: Caught exception while running CombinePlanNode.
    	at org.apache.pinot.core.plan.CombinePlanNode.run(CombinePlanNode.java:157) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at org.apache.pinot.core.plan.InstanceResponsePlanNode.run(InstanceResponsePlanNode.java:33) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:45) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:294) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:215) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at org.apache.pinot.core.query.executor.QueryExecutor.processQuery(QueryExecutor.java:60) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:157) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at org.apache.pinot.core.query.scheduler.QueryScheduler.lambda$createQueryFutureTask$0(QueryScheduler.java:141) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at java.util.concurrent.FutureTask.run(Unknown Source) [?:?]
    	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) [?:?]
    	at shaded.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111) [pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at shaded.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58) [pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at shaded.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75) [pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
    	at java.lang.Thread.run(Unknown Source) [?:?]
    Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
    	at java.util.concurrent.FutureTask.report(Unknown Source) ~[?:?]
    	at java.util.concurrent.FutureTask.get(Unknown Source) ~[?:?]
    	at org.apache.pinot.core.plan.CombinePlanNode.run(CombinePlanNode.java:147) ~[pinot-all-hypertrace-0.7.1-5-shaded.jar:0.7.1-24ebc6d4cd25b1b21225f48c4e7438919246ffe3]
    	... 15 more
    Caused by: java.lang.NullPointerException
  • Ken Krugler (07/16/2021, 9:59 PM)
    I’m trying to understand an interesting anomaly with the results of a query. I do `select sum(metric), key, min(date) as firstSeen, max(date) as lastSeen from table where date >= <lowDate> AND date <= <highDate> group by key order by firstSeen desc limit 1`. I get a single row as expected, but with a `firstSeen` and `lastSeen` both equal to `<highDate>`. I was expecting the `firstSeen` result to be equal to `<lowDate>`. If I then run the exact same query, but add in `AND key = '<key value from the previous result>'`, I get a single row with the requested key value, but now the `firstSeen` result is equal to `<lowDate>` (as expected), and `sum(metric)` is larger (also as expected). Any ideas what is going on?
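    For reference, a minimal sketch of the two query shapes being compared, with placeholder table/column names and illustrative literal dates standing in for the elided `<lowDate>`/`<highDate>` values (none of these identifiers are taken from the thread):
    ```sql
    -- Query 1: group by key, order by the MIN(date) alias, keep only the top group
    select sum(metric), key, min(date) as firstSeen, max(date) as lastSeen
    from table
    where date >= 20210101 and date <= 20210716
    group by key
    order by firstSeen desc
    limit 1;

    -- Query 2: the same query restricted to the key returned by Query 1;
    -- per the report, firstSeen now comes back as the low date and sum(metric) is larger
    select sum(metric), key, min(date) as firstSeen, max(date) as lastSeen
    from table
    where date >= 20210101 and date <= 20210716 and key = 'someKeyFromQuery1'
    group by key
    order by firstSeen desc
    limit 1;
    ```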
  • Jonathan Meyer (07/19/2021, 5:00 PM)
    Hello. Is there any way to get the percentile of an aggregation? Say, we aggregate in some way (per group), then get the percentile of these aggregated groups? It sounds a lot like a subquery, which isn't supported (yet), but maybe there's another way?
  • Charles (07/20/2021, 12:26 AM)
    org.jkiss.dbeaver.model.sql.DBSQLException: SQL Error [65536]: Query failed (#20210720_001825_00003_844up): null value in entry: Server_sj1-pinot-server-25_8098=null
    	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
    	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:513)
    	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:444)
    	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
    	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:431)
    	at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:816)
    	at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3435)
    	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:118)
    	at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:171)
    	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:116)
    	at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4686)
    	at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
    	at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
    Caused by: java.sql.SQLException: Query failed (#20210720_001825_00003_844up): null value in entry: Server_sj1-pinot-server-25_8098=null
    	at com.facebook.presto.jdbc.PrestoResultSet.resultsException(PrestoResultSet.java:1841)
    	at com.facebook.presto.jdbc.PrestoResultSet.getColumns(PrestoResultSet.java:1751)
    	at com.facebook.presto.jdbc.PrestoResultSet.<init>(PrestoResultSet.java:121)
    	at com.facebook.presto.jdbc.PrestoStatement.internalExecute(PrestoStatement.java:272)
    	at com.facebook.presto.jdbc.PrestoStatement.execute(PrestoStatement.java:230)
    	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:330)
    	at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:130)
    	... 12 more
    Caused by: java.lang.NullPointerException: null value in entry: Server_sj1-pinot-server-25_8098=null
    	at com.google.common.collect.CollectPreconditions.checkEntryNotNull(CollectPreconditions.java:32)
    	at com.google.common.collect.SingletonImmutableBiMap.<init>(SingletonImmutableBiMap.java:42)
    	at com.google.common.collect.ImmutableBiMap.of(ImmutableBiMap.java:71)
    	at com.google.common.collect.ImmutableMap.of(ImmutableMap.java:124)
    	at com.google.common.collect.ImmutableMap.copyOf(ImmutableMap.java:459)
    	at com.google.common.collect.ImmutableMap.copyOf(ImmutableMap.java:438)
  • Yash Agarwal (07/21/2021, 10:03 AM)
    Hey team, I am seeing a huge variation in performance of the following queries.
    Copy code
    select distinct DATETIMECONVERT(transaction_date, '1:DAYS:EPOCH', '1:DAYS:SIMPLE_DATE_FORMAT:yyyy-MM-dd', '1:DAYS') from transactions limit 1000 -- 80+ seconds
    select distinct transaction_date from transactions limit 1000 -- 3.5 seconds
    Can you help with how to optimize this? In the meantime, we have added another column in the `yyyy-MM-dd` format to support the same.
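    A rough sketch of the workaround mentioned above, assuming the derived column is named `transaction_date_str` (hypothetical name) and is populated at ingestion time with the already-formatted value, so the per-row DATETIMECONVERT work is avoided at query time:
    ```sql
    -- transaction_date_str is a hypothetical pre-materialized yyyy-MM-dd string column
    select distinct transaction_date_str
    from transactions
    limit 1000;
    ```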
  • Jonathan Meyer (07/21/2021, 9:27 PM)
    Hello. Is it possible to bucket per week, but starting on an arbitrary weekday?
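    One possible approach (a sketch, not confirmed in the thread): shift the timestamp before bucketing and shift it back afterwards. Epoch millisecond 0 falls on a Thursday, so plain `7:DAYS` buckets start on Thursdays; adding 3 days before bucketing and subtracting them afterwards moves the bucket start to Monday. The column name `eventTimeMs` is hypothetical, and this assumes your Pinot version accepts expressions inside DATETIMECONVERT:
    ```sql
    -- bucket epoch-millis timestamps into weeks that start on Monday
    -- (if GROUP BY on the alias is not supported in your version, repeat the full expression)
    select datetimeconvert(eventTimeMs + 3 * 86400000,
                           '1:MILLISECONDS:EPOCH',
                           '1:MILLISECONDS:EPOCH',
                           '7:DAYS') - 3 * 86400000 as weekStartMs,
           count(*)
    from myTable
    group by weekStartMs
    order by weekStartMs
    limit 100;
    ```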
  • Sadim Nadeem (07/22/2021, 4:51 PM)
    In a realtime table: I went to cluster manager > tables > mytable > edit schema and added one string column => the newly added column was not reflected in the SQL query editor (query: select * from mytable) on the Pinot controller UI. I then went to cluster manager > tables > mytable > reload all segments => now the column shows in the SQL query editor, with all existing rows having null for this particular column, which is OK. The issue is that newly added records also get null, while the JSON posted on the Kafka topic for the table has a non-null value for the new column. Also, when I fire a query with the newly added column like select * from mytable where newColumn = 'null', it returns 10 rows with the newly added column as null, but when I fire select * from mytable where newColumn != 'null', no rows are returned. Likewise, select newColumn from mytable where newColumn != 'null' returns no rows, while select newColumn from mytable where newColumn = 'null' returns 10 rows with all newColumn values as null.
  • RK (07/23/2021, 6:25 AM)
    Hi Team, I am pushing some segments to an offline table. I am able to push the segments successfully and the data is also showing in the table, but the segment status always shows Bad.
  • Deepak Mishra (07/23/2021, 12:03 PM)
    Hi, I am looking into managed offline flows using the minion and set "bucketTimePeriod": "1h" and "bufferTimePeriod": "2h" in the realtime config file, and "segmentIngestionFrequency": "HOURLY" in the batch ingestion config, so that realtime data will move into the offline table on an hourly basis. While executing I found this error, even though the realtime segment data is more than 3-4 hours old.
  • Syed Akram (07/23/2021, 3:51 PM)
    Hi, assume I have created a table schema/config and ingested some data. After some time I updated the same table with a star-tree index using Swagger and reloaded the segments, but the star-tree index has not been created after updating the table config and reloading the segments (I checked the index dir); only the old index is available. Please check this and do the needful @Jackie @Mayank
  • Deepak Mishra (07/25/2021, 3:42 AM)
    Hi, when I execute a simple query on the Pinot query console - select * from transcript where 1=1 - it gives an error like: org.apache.pinot.pql.parsers.Pql2CompilationException: Comparison between two constants is not supported
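    A trivial sketch of the usual workaround, grounded only in the error text above (the PQL2 compiler in this version rejects constant-to-constant comparisons): drop the constant predicate or filter on a real column.
    ```sql
    -- the WHERE 1=1 predicate is what triggers the compilation error; omitting it works
    select * from transcript limit 10;
    ```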
  • Carlos Domínguez (07/26/2021, 2:00 PM)
    Hi folks, we are using KafkaJsonSchemaSerializer on the Kafka producer side, but we aren't able to decode the payload during Pinot ingestion. There are some unreadable chars at the beginning of the payload. I've tried different configurations without any success. BTW we are using Confluent Schema Registry.
  • II (07/26/2021, 8:46 PM)
    Hi folks, is it possible to do a `backward incompatible schema` update (like changing a data type) on an OFFLINE table without downtime?
  • suraj kamath (07/28/2021, 2:20 PM)
    Hi folks, I was trying out the below command to upload a segment from a realtime table to an OFFLINE table (without using the minion):
    Copy code
    bin/pinot-admin.sh UploadSegment -controllerHost localhost -controllerPort 9000 -segmentDir ${PINOT_ROOT}/pinot/server/data/index/transcript_REALTIME/transcript__0__4__20210720T0957Z -tableName transcript_OFFLINE
    
    Executing command: UploadSegment -controllerProtocol http -controllerHost localhost -controllerPort 9000 -segmentDir ${PINOT_ROOT}/pinot/server/data/index/transcript_REALTIME/transcript__0__4__20210720T0957Z
    Compressing segment: v3
    Uploading segment tar file: /var/folders/fp/dgy2h98d2639nyd4q_v2bwt40000gn/T/segmentUploader4462261101415652801.tmp/v3.tar.gz
    Sending request: <http://localhost:9000/v2/segments?tableName=transcript_OFFLINE> to controller: 192.168.0.105, version: Unknown
    With this I am able to see the segment under the OFFLINE table, and query the table for data as well. However, the segment metadata seems to be improper/corrupt and I see the error (refer to the attached pic). Am I missing something here? Please let me know. Thanks. PS: I have a hybrid table setup, and am exploring uploading segments directly from realtime to offline tables as part of a POC.
  • Deepak Mishra (07/29/2021, 5:12 AM)
    Hello everyone, I am looking into segmentMergeConfig and following this doc: https://docs.google.com/document/d/1zoklHjbli-HIy0JAiBITABNsBOuC2jFthcjwGqsFVFQ/edit#heading=h.3eldj09ucqrn and found that I am not able to update the realtime config with this 'segmentMergeConfig' parameter. Can anyone help with this issue?
  • beerus (07/29/2021, 12:30 PM)
    Hello everyone, a query is not working for me: select * from table_name ORDER BY column1 limit 2000. It's breaking somewhere after 1500 rows with a merging issue. It seems a schema migration was done recently, but the older data and new data have different dimensions, so the db is unable to merge the final result set and throws a merging error. Getting this error: [ { "errorCode": 500, "message": "MergeResponseError:\nData schema mismatch between merged block: [adUnitTypeId(STRING)........] and block to merge: [adUnitTypeId(STRING).....], drop block to merge" } ]
  • Kamal Chavda (07/29/2021, 8:33 PM)
    Hi all, I'm on 0.7.1 so can't take advantage of the JSON data type, but am working with a table that has a STRING column containing `{"name":"value"}`. I'm trying to get the "value" by passing in the "name". I tried the JSON transformations but keep getting errors. I didn't see any string-to-json transformations on the 'supported transformations' page. Any other documentation with relevant information or suggestions?
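    One function that may help here (worth verifying it is present in the 0.7.1 function list) is `jsonExtractScalar`, which reads a JSON path out of a STRING column. A sketch with a hypothetical column name `payload`:
    ```sql
    -- extract the value stored under "name" from a STRING column holding {"name":"value"}
    select jsonExtractScalar(payload, '$.name', 'STRING') as extractedValue
    from myTable
    limit 10;
    ```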
  • Kenneth Koo (07/30/2021, 12:40 AM)
    Hello everyone. I can see garbled node data like in the attached image while using Pinot. Do you know the cause of this phenomenon?
  • Dunith Dhanushka (07/30/2021, 3:45 AM)
    Hi all, I'm new here. I'm using 0.7.1. I have a timestamp field that is formatted as epoch, and I'm trying to query all the records that are older than one day. Here's the query I wrote for it: select * from steps where loggedAt > now() - cast(60*60*24*1000 as long). I have records that fall into this bucket, but this query returns nothing. Where did I go wrong?
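    For comparison, a sketch of the two filter directions with the day length written out as a literal (60*60*24*1000 = 86400000 ms), assuming `loggedAt` is in epoch milliseconds; if arithmetic on `now()` inside the WHERE clause is not supported on your version, the cutoff can be computed client-side and passed in as a literal instead:
    ```sql
    -- records from the last day (loggedAt within the last 24 hours)
    select * from steps where loggedAt > now() - 86400000 limit 10;

    -- records older than one day
    select * from steps where loggedAt < now() - 86400000 limit 10;
    ```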
  • Deepak Mishra (07/30/2021, 4:25 AM)
    I set a new config controller.task.frequencyPeriod=3600, but it looks like it's not a valid parameter; when starting the controller, it gives
  • khush (07/30/2021, 10:41 AM)
    Hi, It appears that we can only add those fields in order by which are part of the select statement. Any leads here? Query: SELECT appName FROM requests WHERE request_ts >= '1628380800000' AND request_ts <= '1628726400000' AND testRequest = 'false' GROUP BY appName ORDER BY requestDate ASC Error: [ { "errorCode": 700, "message": "QueryValidationError\njava.lang.UnsupportedOperationException ORDER By should be only on some/all of the columns passed as arguments to DISTINCT\n\tat org.apache.pinot.broker.requesthandler.BaseBrokerRequestHandler.validateRequest(BaseBrokerRequestHandler.java:1249)\n\tat org.apache.pinot.broker.requesthandler.BaseBrokerRequestHandler.handleRequest(BaseBrokerRequestHandler.java:303)\n\tat org.apache.pinot.broker.api.resources.PinotClientRequest.processSqlQueryPost(PinotClientRequest.java:175)\n\tat sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)\n\tat org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.lambda$apply$0(ResourceMethodInvoker.java:381)\n\tat org.glassfish.jersey.server.ServerRuntime$AsyncResponder$2$1.run(ServerRuntime.java:819)" } ]
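    Since the query has a GROUP BY with no aggregation, Pinot appears to treat it like a DISTINCT (per the error text above), so the ORDER BY column has to appear among the selected columns. A hedged sketch of one possible rewrite that instead orders the groups by an aggregate of `requestDate`:
    ```sql
    -- order the appName groups by their earliest requestDate
    SELECT appName, MIN(requestDate) AS firstRequestDate
    FROM requests
    WHERE request_ts >= '1628380800000'
      AND request_ts <= '1628726400000'
      AND testRequest = 'false'
    GROUP BY appName
    ORDER BY MIN(requestDate) ASC
    LIMIT 100
    ```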
  • Map (07/30/2021, 2:31 PM)
    Has anyone seen this error when running Trino with Pinot? I googled the error but didn't find anything useful.
    Copy code
    trino:default> select count(*) from table1 group by MSGTIME limit 10;
    
    Query 20210730_141801_00012_uani3, FAILED, 1 node
    Splits: 51 total, 0 done (0.00%)
    0.16 [0 rows, 0B] [0 rows/s, 0B/s]
    
    Query 20210730_141801_00012_uani3 failed: Segment query returned '50001' rows per split, maximum allowed is '50000' rows. with query "SELECT MSGTIME FROM table1_REALTIME LIMIT 50001"