# troubleshooting
  • a

    Ali Atıl

    01/25/2022, 12:20 PM
    Hi everyone, I have a table growing over time and it needs upsert functionality. Does upsert work for hybrid tables? What are the advantages of hybrid tables over realtime tables? Would it be too costly to have a realtime-only table instead of a hybrid table? It would be great if you could share your knowledge with me. Thanks
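    For reference, upsert in Pinot is enabled per realtime table through an upsertConfig block in the table config (a later message in this archive shows one in full). A minimal sketch, assuming a REALTIME table and the strictReplicaGroup routing that upsert requires; values are illustrative only:
    "upsertConfig": {
      "mode": "FULL"
    },
    "routing": {
      "instanceSelectorType": "strictReplicaGroup"
    }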
  • m

    Mark Needham

    01/25/2022, 2:46 PM
    This might be more of a SQL query, but how would you go about filtering based on an aggregated value? e.g. I have this query:
    select competitorId, max(distance) AS distanceCovered
    from parkrun
    group by competitorId
    order by distanceCovered DESC
    And I want to only return records where distanceCovered is greater than, say, 1,000. But it doesn't like that. What's the proper way to solve this type of problem?
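    For a question like this, the standard SQL answer is a HAVING clause, which filters on the aggregated value after GROUP BY. A minimal sketch against the same query (note that the aggregate expression, rather than the alias, is repeated in HAVING):
    select competitorId, max(distance) AS distanceCovered
    from parkrun
    group by competitorId
    having max(distance) > 1000
    order by distanceCovered DESC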
  • s

    Sahar

    01/25/2022, 3:28 PM
    Hi 👋 I'm trying to find the best way to deal with deleted records. I have a MySQL + Debezium + Kafka + Pinot setup. When a record is deleted from the source db, the after part of the payload on Kafka will be null, as expected. I am extracting the fields from $.after of the payload (attached my table def and schema def). But when a record is deleted, I want to mark it with a deleted_at column and extract the fields from the $.before part of the payload. I haven't used Groovy before and I'm not sure if I'm doing it right; as a test I'm trying to set the my_id field to payload.after.id if after is not null, and otherwise set it to payload.before.id:
    {
      "columnName": "my_id",
      "transformFunction": "Groovy({JSONPATHLONG(payload, '$.after.id', '0') == 0 ? JSONPATHLONG(payload, '$.before.id', '0') : JSONPATHLONG(payload, '$.after.id', '0')}, payload)"
    },
    but it fails to create the table with a 400 invalid table config error. Any help/hints would be appreciated, or how others deal with deleted records coming from the source.
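    One possible shape for that transform, sketched here purely as an illustration: keep the whole ternary inside the Groovy script and parse the payload with Groovy's own JsonSlurper instead of calling Pinot transform functions (like JSONPATHLONG) from inside the Groovy script. This assumes the Debezium envelope arrives as a string column named payload and that JsonSlurper is usable in the Groovy transform sandbox:
    {
      "columnName": "my_id",
      "transformFunction": "Groovy({def p = new groovy.json.JsonSlurper().parseText(payload); p.after != null ? p.after.id : p.before.id}, payload)"
    }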
  • x

    xtrntr

    01/26/2022, 6:22 AM
    Pinot seems to be truncating the number of groups returned by my group-by query, but I'm not seeing groupLimitReached=true in my query responses. And this is also with the option pinot.server.query.executor.num.groups.limit=N, where N is 10M; the number of groups I'm expecting to see is between 5-8M but it's around 1M instead.
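    For context, pinot.server.query.executor.num.groups.limit is a server-level setting, so it belongs in the server configuration and only takes effect after a restart; group counts can also be trimmed per segment/server before the broker merge, independently of this limit. A sketch of where the property would sit (value illustrative):
    # pinot-server.conf
    pinot.server.query.executor.num.groups.limit=10000000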
  • v

    Vibhor Jaiswal

    01/26/2022, 10:55 PM
    Hi guys. Need some guidance on MMAP vs HEAP and a correction in the Pinot documentation, as written here: https://github.com/apache/pinot/issues/3454
  • v

    Vibhor Jaiswal

    01/26/2022, 11:00 PM
    It was suggested back in 2019 that the documentation would get fixed soon. Also, we are testing a 20 million record insert, a 50 million record Kafka upsert, and 1 big aggregation join query at the same time. We would like to know which one to use in this case: MMAP or HEAP? But given the current state of the documentation we need to check the codebase. Any guidance is welcome.
  • s

    Sadim Nadeem

    01/27/2022, 6:07 AM
    is there some plan to support real time tables consuming from pulsar topic?
  • d

    Diogo Baeder

    01/27/2022, 2:36 PM
    Hi guys! Hey, I was publishing a lot of old logs I have to Pinot, and it stopped being able to commit segments because the Controller ran out of space, even though I have 20G in it - I reserved far more space in the Server instances though. How much space am I supposed to have for the Controller instance(s)? Why does it use so much space?
  • a

    Alexander Vivas

    01/27/2022, 2:49 PM
    Hi guys, good afternoon, I’m gonna need to delete a consuming segment because it got stuck and is no longer streaming data from kafka, what would be the steps to create it again?
  • a

    Aditya

    01/27/2022, 3:02 PM
    Hi, I am using the pinot go client. Is there a way to create something resembling prepared statements, either in the go client or the broker REST API? The JDBC client has support for prepared statements. Does the pinot broker support prepared statements?
  • j

    Julien Picard

    01/27/2022, 8:27 PM
    Hello! We are trying pinot, we have installed it with the helm chart and we are seeing this log in the controller :
    Server: Server_pinot-server-0.pinot-server-headless.pinot.svc.cluster.local_8098 returned error: 404
    I am wondering why there is a "_" in front of the pod name. I don't think this could be resolved. Looks like the reason for the 404 error. Do you know anything about it? EDIT: just saw there is another underscore before the port number so maybe that's normal.
  • j

    Jeff Moszuti

    01/27/2022, 8:49 PM
    I’m kicking the tyres (tires) of the Configuration Recommendation Engine but I get back the following error:
    {
      "_code": 400,
      "_error": "java.lang.IllegalArgumentException: Time column can be only INT or LONG: TIMESTAMP"
    }
    My input json is:
    {
      "schema": {
        "dimensionFieldSpecs": [
          {
            "averageLength": 36,
            "cardinality": 10000,
            "dataType": "STRING",
            "name": "event_id"
          },
          {
            "averageLength": 36,
            "cardinality": 10000,
            "dataType": "STRING",
            "name": "app_id"
          },
          {
            "averageLength": 36,
            "cardinality": 10000,
            "dataType": "STRING",
            "name": "user_id"
          }
        ],
        "dateTimeFieldSpecs": [
          {
            "cardinality": 10000,
            "dataType": "TIMESTAMP",
            "format": "1:MILLISECONDS:EPOCH",
            "granularity": "1:MILLISECONDS",
            "name": "event_at"
          }
        ],
        "metricFieldSpecs": [],
        "schemaName": "app_downloads"
      },
      "queriesWithWeights": {
        "  select count(event_id) as num_downloads from app_downloads where event_at between '2021-01-01 00:00:00' and '2021-01-31 00:00:00' ": 1
      },
      "tableType": "OFFLINE",
      "numRecordsPerPush": 10000,
      "qps": 5,
      "latencySLA": 5000,
      "rulesToExecute": {
        "recommendRealtimeProvisioning": false
      }
    }
    What could be the problem?
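    Going by the error text alone, the recommendation engine appears to accept only INT or LONG time columns, so one hedged workaround is to describe event_at as LONG epoch millis in the recommender input (the real table schema can keep TIMESTAMP); a sketch of just the changed block:
    "dateTimeFieldSpecs": [
      {
        "cardinality": 10000,
        "dataType": "LONG",
        "format": "1:MILLISECONDS:EPOCH",
        "granularity": "1:MILLISECONDS",
        "name": "event_at"
      }
    ]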
  • a

    Aditya

    01/28/2022, 2:39 PM
    Hi Folks, Has anyone faced random issues while querying with tracing? Ran a simple query with tracing from pinot ui
    select sum(amount) as amt from reward
    where user_id = 'some_id'
    Following exception occurred
    [
      {
        "message": "InternalError:\njava.lang.NullPointerException\n\tat org.apache.pinot.core.util.trace.TraceContext.getTraceInfo(TraceContext.java:188)\n\tat org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:223)\n\tat org.apache.pinot.core.query.executor.QueryExecutor.processQuery(QueryExecutor.java:60)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:151)",
        "errorCode": 450
      },
      {
        "message": "InternalError:\njava.lang.NullPointerException\n\tat org.apache.pinot.core.util.trace.TraceContext.getTraceInfo(TraceContext.java:188)\n\tat org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:223)\n\tat org.apache.pinot.core.query.executor.QueryExecutor.processQuery(QueryExecutor.java:60)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:151)",
        "errorCode": 450
      },
      {
        "message": "1 servers [11.4.7.172_O] not responded",
        "errorCode": 427
      }
    ]
    What's more interesting, the servers become unstable and randomly time out with "errorCode": 427, and queries (without tracing) are randomly processed successfully or time out. Timeout log in server:
    ERROR [BaseCombineOperator] [pqr-1] Timed out while polling results block, numBlocksMerged: 0 (query: QueryContext{_tableName='reward_OFFLINE', _selectExpressions=[*], _aliasList=[null], _filter=null, _groupByExpressions=null, _havingFilter=null, _orderByExpressions=null, _limit=10, _offset=0, _queryOptions={responseFormat=sql, groupByMode=sql, timeoutMs=10000}, _debugOptions=null, _brokerRequest=BrokerRequest(querySource:QuerySource(tableName:reward_OFFLINE), pinotQuery:PinotQuery(dataSource:DataSource(tableName:reward_OFFLINE), selectList:[Expression(type:IDENTIFIER, identifier:Identifier(name:*))], orderByList:[], limit:10, queryOptions:{responseFormat=sql, groupByMode=sql, timeoutMs=10000}))})
    Restarting all the servers fixed the issue. Using a recent nightly docker image (digest a6c14285abf4).
  • d

    Diogo Baeder

    01/28/2022, 6:56 PM
    Hi guys, I tried setting up S3 deep store for my Pinot cluster, and used this part of config in the pinot.yaml file, in the controller config, for deploying via the official Helm chart:
    extra:
        # Note: Extra configs will be appended to pinot-controller.conf file
        configs: |-
          pinot.set.instance.id.to.hostname=true
          controller.task.scheduler.enabled=true
          # Note: change this to the real bucket, after creating it in S3
          controller.data.dir=s3://<redacted>
          controller.local.temp.dir=/tmp/pinot-tmp-data/
          pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
          pinot.controller.storage.factory.s3.region=<redacted>
          pinot.controller.segment.fetcher.protocols=file,http,s3
          pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
    Is this the correct configuration to do - to append these configs to the existing ones? Or should the approach have been different? I'm asking this because I found a weird directory in one of the Controller local filesystems:
    root@pinot-controller-1:/opt/pinot# du -sh /var/pinot/controller/data\,s3\:/
    0	/var/pinot/controller/data,s3:/
    By the looks of it, it seems like a configuration mistake somewhere. [SOLVED! - SOLUTION:] Because the official Helm chart already defines that configuration, it ends up concatenating the one from the extras and the default one when loading the final file, because that setting ends up defined twice in pinot-controller.conf inside the Controller container. The solution is, instead of putting controller.data.dir as a one-line string in the extra configs, to define that setting starting from the controller options in that YAML file, then data instead of extra, then dir, so that the option replaces the default value.
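    To illustrate the solution described above, the overriding value would be set under the controller block of the Helm values rather than in the free-form extra configs string (bucket name is a placeholder):
    controller:
      data:
        dir: s3://<redacted>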
  • s

    Syed Akram

    01/31/2022, 5:09 AM
    we have only one server, where that table has 35B rows
  • a

    Alexander Vivas

    01/31/2022, 8:32 AM
    @here, good morning guys. I'm currently spawning a cluster for Pinot 0.9.1 but I have a strange situation here: I've been adding resources little by little as I set Pinot to stream from several Kafka topics, and now I have 3 server instances but only server-0 consumes data from Kafka. Am I missing a configuration anywhere?
  • p

    Prashant Korade

    01/31/2022, 8:09 PM
    Hi guys, need guidance with an issue we are facing. We are testing an upsert use case on a realtime table, where the realtime table has 20 million records and we are publishing an additional 50 million records. We have a query loop running which basically validates the total record count while the upsert is in progress; we expect this query to return a total count of 20 million consistently. The realtime table has replication factor 2. We noticed that if one of our servers goes down (we have 5 servers), Pinot doesn't return results (i.e. our count(*) query returns numberofSegmentsmatched=0, totaldocs=0, segmentqueried=0, etc.) for about 30 secs before giving the expected result, i.e. 20 million. Also noticed that when the server comes back up, the total count is inconsistent for some time before returning the expected result. The table remains in BAD state but keeps serving queries with the expected result after some time. Our table config:
    {
      "REALTIME": {
        "tableName": "sometable",
        "tableType": "REALTIME",
        "segmentsConfig": {
          "schemaName": "someschema",
          "timeColumnName": "AuditDateTimeUTC",
          "allowNullTimeValue": false,
          "replication": "1",
          "replicasPerPartition": "2"
        },
        "tenants": {
          "broker": "DefaultTenant",
          "server": "DefaultTenant",
          "tagOverrideConfig": {}
        },
        "tableIndexConfig": {
          "invertedIndexColumns": [],
          "rangeIndexColumns": [],
          "rangeIndexVersion": 1,
          "autoGeneratedInvertedIndex": false,
          "createInvertedIndexDuringSegmentGeneration": false,
          "sortedColumn": [],
          "bloomFilterColumns": [],
          "loadMode": "MMAP",
          "noDictionaryColumns": [some columns],
          "onHeapDictionaryColumns": [],
          "varLengthDictionaryColumns": [],
          "enableDefaultStarTree": false,
          "enableDynamicStarTreeCreation": false,
          "aggregateMetrics": false,
          "nullHandlingEnabled": false,
          "streamConfigs": {
            "streamType": "kafka",
            "stream.kafka.topic.name": "sometopic",
            "stream.kafka.broker.list": "host:9092",
            "stream.kafka.consumer.type": "lowlevel",
            "stream.kafka.hlc.bootstrap.server": "host:9092",
            "stream.kafka.consumer.prop.auto.offset.reset": "largest",
            "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
            "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
            "realtime.segment.flush.threshold.rows": "0",
            "realtime.segment.flush.threshold.size": "0",
            "realtime.segment.flush.autotune.initialRows": "3000000",
            "realtime.segment.flush.threshold.time": "24h",
            "realtime.segment.flush.threshold.segment.size": "500M"
          }
        },
        "metadata": {},
        "quota": {},
        "routing": {
          "instanceSelectorType": "strictReplicaGroup"
        },
        "query": {},
        "upsertConfig": {
          "mode": "FULL",
          "comparisonColumn": "AuditDateTimeUTC",
          "hashFunction": "NONE"
        },
        "ingestionConfig": {},
        "isDimTable": false
      }
    }
  • l

    Luis Fernandez

    01/31/2022, 9:33 PM
    Besides information about servers and configurations, what else resides in Zookeeper? I have seen Zookeeper's space usage increasing in our cluster; we have a 5 GiB persistent volume and I'm worried it may get filled up, and I'd like to know how to increase the space when that happens.
  • a

    Anish Nair

    02/01/2022, 2:21 PM
    Hi Team, we are observing very high latency when querying with lookup. Without lookup: 500 ms. With lookup: 10 secs. Can someone help please? Query:
    SELECT
    lookUp('dim_tabl1', 'display_name', 'dim_joinkey', fact_joinkey),
    sum(metric1) AS sum_1
    FROM reporting_aggregations
    WHERE stats_date_hour between '2022012000' and '2022012223'
    GROUP BY lookUp('dim_tabl1', 'display_name', 'dim_joinkey', fact_joinkey)
    ORDER BY sum(metric1)
    DESC LIMIT 10000
  • a

    Aditya

    02/01/2022, 4:45 PM
    Hi Team, I am running a load test on our test Pinot cluster, experimenting with getting high QPS on the same hardware (3 servers * 4 cores, 16 GB RAM). Data size is 100M rows, distributed over a year. The filter column is from_user_id (string), with a sorted index.
    Query: select sum(amount) as amt from transactions where from_user_id = 'some id'
    Initially created the table with 13 monthly segments and achieved 3.37K qps, p99 143 ms. Further partitioned the data into 4 partitions on column from_user_id (13 * 4 segments) and achieved 3.50K qps, p99 146 ms. There is not much difference in throughput. Is the data size too small to realise the benefit of partitioning? Any thoughts on further tuning?
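    For reference, partitioning only prunes segments at query time if the table config declares the partition function and the routing enables the partition pruner; a hedged sketch of the relevant pieces (function name and partition count are assumptions matching the description above, and the Kafka data must actually be partitioned the same way):
    "tableIndexConfig": {
      "segmentPartitionConfig": {
        "columnPartitionMap": {
          "from_user_id": {
            "functionName": "Murmur",
            "numPartitions": 4
          }
        }
      }
    },
    "routing": {
      "segmentPrunerTypes": ["partition"]
    }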
  • s

    Sowmiya

    02/02/2022, 2:27 PM
    Hi all, we have installed Superset, Docker and Python, but we don't know how to interconnect them all. Also, we don't know where to run the 'Make latest' command or what needs to be changed in the Makefile. How do we connect Superset with Pinot? Please help us @Mayank @Mark Needham
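    For what it's worth, Superset talks to Pinot through the pinotdb SQLAlchemy driver, so (assuming that driver is installed in the Superset environment) the database connection string typically looks something like the following, with hostnames and ports adjusted to your setup:
    pinot://<broker-host>:8099/query/sql?controller=http://<controller-host>:9000/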
  • d

    Diogo Baeder

    02/02/2022, 10:17 PM
    Hi there, friggin' amazing Pinot community! (Sorry for the words, I'm just way too excited with this thing... hahaha) I'm having a bit of an issue with timestamps; I'm publishing data to Pinot with millisecond timestamps (since Epoch), but when I try to convert with toDateTime I'm getting unexpected times, like 7 hours ago. When I saw this, I thought it was an issue with the published timestamps, but then I fetched some data without doing the conversion, pasted a timestamp into my Python shell, converted it to a timezone-naive datetime object, and there it's just fine, it's the expected datetime. I tried to use 'UTC' as the third argument for toDateTime, but to no avail, it still brings me the wrong datetime. This is not a super critical issue, since the vast majority of the analyses we'll do will be based off of a Python application, where that conversion will be done, but still, it would be nice to get more expected values right out of the Controller web UI. Any ideas what I can do to achieve that? [SOLVED] I was mistakenly using hh to convert the hours, while I should have been using HH instead.
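    To illustrate the fix: in the date-format pattern, HH is hour-of-day (0-23) while hh is clock-hour (1-12), which would explain the offset-looking values. A sketch with a hypothetical column and table name:
    select toDateTime(created_at_ms, 'yyyy-MM-dd HH:mm:ss') as created_at
    from my_table
    limit 10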
  • s

    Salai Vasan

    02/03/2022, 5:04 AM
    Hi all, I installed Pinot on Ubuntu and started the Pinot components (Zookeeper, controller and server). The issue is that when the server doesn't get any input or is idle, the Pinot service stops, and it's annoying to restart the components on a regular basis. Can somebody guide me?
  • s

    Shadab Anwar

    02/03/2022, 7:02 AM
    My pinot dashboard is not showing broker, server and tables. Broker and Server are running though
  • a

    Anish Nair

    02/03/2022, 8:18 AM
    Hi Team, facing an issue while running a batch ingestion job. Got this issue after upgrading to the latest nightly build, 0.10. Here is the log; can someone check?
    2022/02/03 00:11:06.064 INFO [CrcUtils] [pool-6-thread-1] Computed crc = 1828318080, based on files [/tmp/pinot-b263f2fa-8bad-4a49-9511-508fc14c50e2/output/dim_testtable_OFFLINE_0/v3/columns.psf, /tmp/pinot-b263f2fa-8bad-4a49-9511-508fc14c50e2/output/dim_testtable_OFFLINE_0/v3/index_map, /tmp/pinot-b263f2fa-8bad-4a49-9511-508fc14c50e2/output/dim_testtable_OFFLINE_0/v3/metadata.properties]
    2022/02/03 00:11:06.065 INFO [SegmentIndexCreationDriverImpl] [pool-6-thread-1] Driver, record read time : 13
    2022/02/03 00:11:06.065 INFO [SegmentIndexCreationDriverImpl] [pool-6-thread-1] Driver, stats collector time : 0
    2022/02/03 00:11:06.065 INFO [SegmentIndexCreationDriverImpl] [pool-6-thread-1] Driver, indexing time : 8
    2022/02/03 00:11:06.065 INFO [SegmentGenerationJobRunner] [pool-6-thread-1] Tarring segment from: /tmp/pinot-b263f2fa-8bad-4a49-9511-508fc14c50e2/output/dim_testtable_OFFLINE_0 to: /tmp/pinot-b263f2fa-8bad-4a49-9511-508fc14c50e2/output/dim_testtable_OFFLINE_0.tar.gz
    2022/02/03 00:11:06.090 INFO [SegmentGenerationJobRunner] [pool-6-thread-1] Size for segment: dim_testtable_OFFLINE_0, uncompressed: 217.24K, compressed: 70.93K
    2022/02/03 00:11:06.618 INFO [IngestionJobLauncher] [main] Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner
    2022/02/03 00:11:06.619 INFO [PinotFSFactory] [main] Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS
    2022/02/03 00:11:06.620 INFO [PinotFSFactory] [main] Initializing PinotFS for scheme hdfs, classname org.apache.pinot.plugin.filesystem.HadoopPinotFS
    2022/02/03 00:11:06.632 INFO [HadoopPinotFS] [main] successfully initialized HadoopPinotFS
    2022/02/03 00:11:06.819 INFO [SegmentPushUtils] [main] Start pushing segments: [hdfs://nameservice1/data/max/poc/pinot-ingestion/dimension_segments/dim_testtable/dim_testtable_OFFLINE_0.tar.gz]... to locations: [org.apache.pinot.spi.ingestion.batch.spec.PinotClusterSpec@51827393] for table dim_testtable
    2022/02/03 00:11:06.819 INFO [SegmentPushUtils] [main] Pushing segment: dim_testtable_OFFLINE_0 to location: http://d9-max-insert-2.srv.net:9000 for table dim_testtable
    2022/02/03 00:11:07.164 INFO [FileUploadDownloadClient] [main] Sending request: http://d9-max-insert-2.srv.net:9000/v2/segments?tableName=dim_testtable&tableName=dim_testtable&tableType=OFFLINE to controller: d9-max-insert-2.srv.net, version: Unknown
    2022/02/03 00:11:07.168 WARN [SegmentPushUtils] [main] Caught temporary exception while pushing table: dim_testtable segment: dim_testtable_OFFLINE_0 to http://d9-max-insert-2.srv.net:9000, will retry
    org.apache.pinot.common.exception.HttpErrorStatusException: Got error status code: 500 (Internal Server Error) with reason: "Exception while uploading segment: null" while sending request: http://d9-max-insert-2.srv.net:9000/v2/segments?tableName=dim_testtable&tableName=dim_testtable&tableType=OFFLINE to controller: d9-max-insert-2.srv.net, version: Unknown
            at org.apache.pinot.common.utils.FileUploadDownloadClient.sendRequest(FileUploadDownloadClient.java:531) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-ea2f0aa641e17301293662c8e79dfd94d8568438]
            at org.apache.pinot.common.utils.FileUploadDownloadClient.uploadSegment(FileUploadDownloadClient.java:838) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-ea2f0aa641e17301293662c8e79dfd94d8568438]
            at org.apache.pinot.segment.local.utils.SegmentPushUtils.lambda$pushSegments$0(SegmentPushUtils.java:122) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-ea2f0aa641e17301293662c8e79dfd94d8568438]
            at org.apache.pinot.spi.utils.retry.BaseRetryPolicy.attempt(BaseRetryPolicy.java:50) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-ea2f0aa641e17301293662c8e79dfd94d8568438]
            at org.apache.pinot.segment.local.utils.SegmentPushUtils.pushSegments(SegmentPushUtils.java:119) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-ea2f0aa641e17301293662c8e79dfd94d8568438]
            at org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner.run(SegmentTarPushJobRunner.java:88) [pinot-batch-ingestion-standalone-0.10.0-SNAPSHOT-shaded.jar:0.10.0-SNAPSHOT-ea2f0aa641e17301293662c8e79dfd94d8568438]
            at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.kickoffIngestionJob(IngestionJobLauncher.java:146) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-ea2f0aa641e17301293662c8e79dfd94d8568438]
            at org.apache.pinot.spi.ingestion.batch.IngestionJobLauncher.runIngestionJob(IngestionJobLauncher.java:118)
  • a

    Aditya

    02/03/2022, 1:34 PM
    Hi, I am trying the JSON index in Pinot. tag is a JSON column containing a list called Tags, e.g.:
    {"Tags":["TAG1","TAG3", "TAG2"]}
    Running a query like the one below:
    select sum(amount) as amt from transactions
    where from_user_id = 'some id'
    and JSON_MATCH(tag, '"$.Tags[*]"=''FD''')
    Created a sorted index on from_user_id (string) and a JSON index on tag. The explain plan shows both indexes are used. The queries take ~500 ms; is there a way to improve this? Is there some obvious optimisation I am missing?
  • p

    Peter Pringle

    02/04/2022, 7:01 AM
    I have enabled controller basic authentication using controller.admin.access.control.principals and now the controller prompts for login; however, the swagger endpoint doesn't seem to get passed these credentials and returns 403 for all operations. How do we pass the credentials into swagger?
  • a

    Ali Atıl

    02/07/2022, 8:00 AM
    Hi everyone, does pinot have a truncate (delete all data) functionality?
  • a

    André Siefken

    02/07/2022, 9:08 AM
    Hi folks, I am writing TransformFunction implementations for geospatial support functionality. Many of them are designed to compare a given static value against table column values. Their current implementations allow passing the static parameter in either of the two arguments of the function signature. My question is: is there an easy way to identify a static value passed as an argument to a TransformFunction, e.g. from a ProjectionBlock?
  • a

    Anish Nair

    02/07/2022, 12:22 PM
    Hi Team, this is regarding queries fired by Superset over Pinot. We observed that charts are loading with high latency, but when hitting SQL queries directly it's fast. Upon checking the query fired by Superset, we found that Superset is using DATETIMECONVERT in the select statement (for timeseries charts). Can someone advise?