# troubleshooting
  • Tanmay Movva
    10/05/2021, 2:11 PM
    Hello, we are trying to connect Pinot with Trino and we are getting this error:
    No valid brokers found for 'backendentityview'
    We found out this is because the trino-pinot connector doesn't support mixed-case table names. Is there anything planned to support mixed-case table names in the connector?
  • Kamal Chavda
    10/05/2021, 7:18 PM
    Hi all, I'm pushing a table from Postgres to Kafka (using Debezium) to Pinot. The table has a few geography columns. When creating the realtime table, however, I am getting the error below on Pinot.
    java.lang.IllegalStateException: Cannot read single-value from Collection: [AQEAACDmEAAA5no2BviTXcB1T2ijhAxBQA==, 4326] for column: point
    	at shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:721) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at org.apache.pinot.core.data.recordtransformer.DataTypeTransformer.standardizeCollection(DataTypeTransformer.java:193) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at org.apache.pinot.core.data.recordtransformer.DataTypeTransformer.standardize(DataTypeTransformer.java:138) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at org.apache.pinot.core.data.recordtransformer.DataTypeTransformer.transform(DataTypeTransformer.java:88) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at org.apache.pinot.core.data.recordtransformer.CompositeTransformer.transform(CompositeTransformer.java:82) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:491) [pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.consumeLoop(LLRealtimeSegmentDataManager.java:402) [pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:538) [pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
    The point column has this value:
    "point" : {
          "wkb" : "AQEAACDmEAAA5no2BviTXcB1T2ijhAxBQA==",
          "srid" : 4326
        },
    Any suggestions on how to resolve this? I have the column as a string in the Pinot table schema.
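    One possible direction (a sketch, not verified against this exact Debezium payload): since the geography value arrives as a nested record rather than a single value, flatten it during ingestion with a JSON-path transform so Pinot receives one string. Here point_wkb is a hypothetical destination column, and jsonPathString is Pinot's built-in JSON-path scalar function:
    "ingestionConfig": {
      "transformConfigs": [
        {
          "columnName": "point_wkb",
          "transformFunction": "jsonPathString(point, '$.wkb')"
        }
      ]
    }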
  • beerus
    10/06/2021, 10:29 AM
    Can we update the transformation function of fields in Pinot?
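    For context, a sketch of how a transform would normally be changed, assuming it lives in the table config's ingestionConfig (ingestion transforms only apply to newly ingested data, not to segments that already exist):
    curl -X PUT -H "Content-Type: application/json" \
      -d @tableConfig.json \
      "http://<controller>:9000/tables/<tableName>"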
  • Deepak Mishra
    10/06/2021, 2:07 PM
    Hi all, I am working on a Spark ingestion job to push the previous day's data every day. It works fine locally using this command:
    bin/pinot-ingestion-job.sh -jobSpecFile ${PINOT_DIR}/ingestionJobSpec.yaml -values date=`date -v-1d +%F`
    where 'date' is set under the includeFileNamePattern parameter: includeFileNamePattern: 'glob:**/{date}/*.avro'. When executing the Spark submit job with this command:
    $SPARK_HOME/bin/spark-submit --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand --master "local[2]" \
      --deploy-mode client \
      --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins \
      -Dlog4j2.configurationFile=${PINOT_DISTRIBUTION_DIR}/conf/pinot-ingestion-job-log4j2.xml" \
      --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar:/Users/deemish2/apache-pinot-0.8.0-bin/plugins/pinot-batch-ingestion/pinot-batch-ingestion-spark/pinot-batch-ingestion-spark-0.8.0-shaded.jar:/Users/deemish2/apache-pinot-0.8.0-bin/plugins/pinot-file-system/pinot-hdfs/pinot-hdfs-0.8.0-shaded.jar" \
      local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar \
      -jobSpecFile ${PINOT_DIR}/SparkingestionJobSpec.yaml -values date=`date -v-1d +%F`
    it fails with: Caused by: java.lang.IllegalArgumentException: Positive number of partitions required. It looks like the -values date=`date -v-1d +%F` argument only works with bin/pinot-ingestion-job.sh. Please help me execute this Spark ingestion job to push the previous day's data into Pinot.
  • Luis Fernandez
    10/06/2021, 7:27 PM
    Can I add partitioning to a table that already exists and is ingesting data? And can partitioning work by itself, without replica groups? Would it speed up queries?
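    For reference, a sketch of where partitioning is declared in the table config (the column, hash function, and partition count here are placeholders):
    "tableIndexConfig": {
      "segmentPartitionConfig": {
        "columnPartitionMap": {
          "user_id": {
            "functionName": "Murmur",
            "numPartitions": 8
          }
        }
      }
    }
    Partition-based pruning also needs "routing": {"segmentPrunerTypes": ["partition"]} in the table config, and it only helps queries that filter on the partitioned column.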
  • Will Gan
    10/06/2021, 7:49 PM
    Hi, I tried to kick off a rebalance for one of my tables (moving tenant), but afterward I saw that the idealstate wasn't correct. While before I had two replicas for each segment on a set of servers, now each segment only had 1 replica on the same set of servers (not even the new servers I was trying to move them to), with the exception of the most recent segment that got moved and has the correct number of replicas. Does anyone know what the issue might be? FYI this table is being actively queried.
  • Sadim Nadeem
    10/07/2021, 6:08 AM
    https://github.com/grafana/grafana/issues/20141
  • Manish Soni
    10/07/2021, 7:04 AM
    Hi team, we are running a hybrid table setup, with data present in the REALTIME table and no data in the OFFLINE table. We have configured a task on the REALTIME table so that data can be moved from REALTIME to OFFLINE. However, in the broker logs I am seeing the warning below continuously. Is this because there is no data in the OFFLINE table, and will it go away once the OFFLINE table has data?
    2021-10-07 06:41:20.000 WARN [BaseBrokerRequestHandler] [jersey-server-managed-async-executor-15] Failed to find time boundary info for hybrid table:
  • beerus
    10/07/2021, 9:55 AM
    Fetch offset 8220025 is out of range for partition pinot_request_table-0, resetting offset
  • Arpita Bajpai
    10/07/2021, 10:16 AM
    Hi team, we have hybrid tables in our Pinot 0.8.0 cluster and we have deleted some of them, but we can still see minion metadata for the deleted tables in the Pinot Incubator UI. Will it be deleted automatically, or is this a bug? Can we safely delete the existing metadata for the deleted tables from the Pinot explorer?
  • Ilya Yatsishin
    10/07/2021, 11:03 AM
    Hi! I'm trying to ingest data into Pinot 0.8.0 and get an unclear message. I can't see any errors in this output and did not find other logs with related output, but the output looks different from the sample output in the docs and no segments are pushed to Pinot. Can you please help me find where to look for the error?
    Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner
    Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS
    Creating an executor service with 1 threads(Job parallelism: 1, available cores: 80.)
    Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner
    Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS
    Start pushing segments: []... to locations: [org.apache.pinot.spi.ingestion.batch.spec.PinotClusterSpec@78de58ea] for table trips_OFFLINE
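    A reading of this output (an assumption, not a confirmed diagnosis): Start pushing segments: [] means the generation phase produced zero segments, which usually happens when inputDirURI or includeFileNamePattern in the job spec matches no files. A minimal sketch of the fields worth double-checking (paths and pattern are placeholders):
    inputDirURI: '/path/to/rawdata'
    includeFileNamePattern: 'glob:**/*.csv'
    outputDirURI: '/path/to/segments'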
  • Luis Fernandez
    10/08/2021, 1:14 PM
    Hey, I have this query that I'm issuing in Pinot:
    select * from ads_metrics where user_id=x and serve_time >= 1633651200
    When I use the query like this, numEntriesScannedInFilter shoots up quite considerably; if I don't use serve_time I get 0. Anyone know why that may be? I currently have a range index on the serve_time column and an inverted index + partitioning on the user_id column.
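    For reference, a sketch of how those indexes are declared in the table config (a range index only takes effect on segments after they are reloaded, which may be worth verifying here):
    "tableIndexConfig": {
      "rangeIndexColumns": ["serve_time"],
      "invertedIndexColumns": ["user_id"]
    }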
  • Bowen Wan
    10/09/2021, 12:43 AM
    Hi. How do I know if a star-tree index is ready? There seems to be no improvement; numDocsScanned remains the same. My index config and query are as follows:
    "starTreeIndexConfigs": [
            {
              "dimensionsSplitOrder": [
                "id",
                "A",
                "B",
                "C",
                "D"
              ],
              "functionColumnPairs": [
                "DISTINCT_COUNT_HLL__id"
              ],
              "maxLeafRecords": 10000
            }
          ]
    Query:
    SELECT DISTINCTCOUNTHLL(id), A FROM MyTable WHERE B = 'a' GROUP BY A ORDER BY DISTINCTCOUNTHLL(id) DESC LIMIT 20
  • Zsolt Takacs
    10/11/2021, 7:54 AM
    We are using the offlineSegmentDelayHours metric to monitor whether the RealtimeToOffline task is stuck, and since upgrading to 0.8.0 we see stale values for it. Prior to 0.8.0 the metrics were present on only one controller, but now they can be on multiple controllers. I've found that 0.8.0 enables the lead controller resource by default, so tables can have different controllers as leaders. We couldn't find a metric to decide which controller is the leader for a table, so we can't filter out the stale metric for alerts. IMO these metrics should be removed for the table once leadership is lost, or there should be a gauge that can be used to decide whether a controller is the leader for a table.
  • Deepak Mishra
    10/11/2021, 9:08 AM
    Hello, we are using the Spark batch ingestion job to push data into a Pinot offline table using pinot-0.8.0, and we are getting this kind of exception: Caused by: groovy.lang.MissingPropertyException: No such property: date for class: SimpleTemplateScript1\n\tat org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:66)
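    A note on what this usually indicates (an assumption based on the SimpleTemplateScript reference): the job spec is templated through Groovy's SimpleTemplateEngine, which resolves ${date}, so this exception is thrown when the spec references the date variable but no value for it reaches the job. A sketch of the pairing (the literal date is a placeholder):
    # in the job spec
    includeFileNamePattern: 'glob:**/${date}/*.avro'
    # passed on the command line
    -values date=2021-10-10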
  • suraj kamath
    10/12/2021, 9:35 AM
    Hi folks, we are trying to construct a tabular view from data in Pinot. E.g.: get the list of top 10 userIds from Table A, then get the names of those users using a lookup from Table B. Is this supported using lookup?
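    A sketch of the lookup UDF shape for this, assuming tableB is configured as a dimension table with userId as its primary key, and a hypothetical name column:
    SELECT userId,
           LOOKUP('tableB', 'name', 'userId', userId) AS userName,
           COUNT(*) AS cnt
    FROM tableA
    GROUP BY userId, LOOKUP('tableB', 'name', 'userId', userId)
    ORDER BY cnt DESC
    LIMIT 10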
  • Dunith Dhanushka
    10/13/2021, 8:20 AM
    Then my ingestion job failed with this:
    Failed to generate Pinot segment for file - file:/Users/dunith/Projects/streamlit/rawdata/uber-raw-data-sep14.csv
    java.lang.IllegalArgumentException: Invalid format: "null"
    at org.joda.time.format.DateTimeParserBucket.doParseMillis(DateTimeParserBucket.java:187) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
    at org.joda.time.format.DateTimeFormatter.parseMillis(DateTimeFormatter.java:826) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
    at org.apache.pinot.segment.local.segment.creator.impl.SegmentColumnarIndexCreator.writeMetadata(SegmentColumnarIndexCreator.java:552) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
    at org.apache.pinot.segment.local.segment.creator.impl.SegmentColumnarIndexCreator.seal(SegmentColumnarIndexCreator.java:512) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
    at org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl.handlePostCreation(SegmentIndexCreationDriverImpl.java:284) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
    at org.apache.pinot.segment.local.segment.creator.impl.SegmentIndexCreationDriverImpl.build(SegmentIndexCreationDriverImpl.java:257) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
    at org.apache.pinot.plugin.ingestion.batch.common.SegmentGenerationTaskRunner.run(SegmentGenerationTaskRunner.java:111) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
    at org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner.lambda$submitSegmentGenTask$1(SegmentGenerationJobRunner.java:263) ~[pinot-batch-ingestion-standalone-0.8.0-shaded.jar:0.8.0-9a0f41bc24243ff74315723b0153b534c2596e30]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
    at java.lang.Thread.run(Thread.java:834) [?:?]
  • Dunith Dhanushka
    10/13/2021, 8:22 AM
    I can see the schema and table created in the data explorer, but I'm not sure what went wrong. I guess it's something to do with the time formatting?
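    A sketch of the place to check (an assumption: Invalid format: "null" thrown from writeMetadata typically means the time column contains empty values, or its declared format does not match the raw CSV values). The dateTimeFieldSpec in the schema would look roughly like this, where the column name and pattern are placeholders that must match the CSV exactly:
    "dateTimeFieldSpecs": [
      {
        "name": "pickup_datetime",
        "dataType": "STRING",
        "format": "1:MILLISECONDS:SIMPLE_DATE_FORMAT:M/d/yyyy H:mm:ss",
        "granularity": "1:MILLISECONDS"
      }
    ]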
  • Kamal Chavda
    10/14/2021, 8:51 PM
    Hi all, has anyone run into the error
    Metrics aggregation and upsert cannot be enabled together
    when creating a realtime table? Will add the log and schema in the thread.
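    For context, a sketch of the two table-config pieces this check refuses to combine (as far as I can tell, the fix is to drop one of the two):
    "tableIndexConfig": {
      "aggregateMetrics": true
    },
    "upsertConfig": {
      "mode": "FULL"
    }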
  • Deepak Mishra
    10/15/2021, 9:50 AM
    Hi all, if there is a gap in the overlapping data between the realtime and offline tables (e.g. the batch ingestion job for the offline table crashed), is the data still available in the hybrid table or not?
  • Tony Requist
    10/15/2021, 2:16 PM
    I am trying to configure S3 storage to use server-side encryption and have
    pinot.controller.storage.factory.s3.serverSideEncryption=aws:kms
    pinot.controller.storage.factory.s3.ssekmsKeyId=KEY
    and I get the rather odd error message
    Unknown value 'aws:kms' for S3PinotFS config: 'serverSideEncryption'. Supported values are: [AES256, aws:kms]
  • Kamal Chavda
    10/15/2021, 5:12 PM
    Anyone using Pinot 0.8.0 and Superset? I upgraded Pinot to use the TIMESTAMP data type, but now Superset gives me an error when trying to create a dataset. Below is the error:
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]: ERROR:root:'timestamp'
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]: Traceback (most recent call last):
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/flask_appbuilder/api/__init__.py", line 84, in wraps
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     return f(self, *args, **kwargs)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/views/base_api.py", line 80, in wraps
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     duration, response = time_function(f, self, *args, **kwargs)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/utils/core.py", line 1368, in time_function
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     response = func(*args, **kwargs)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/utils/log.py", line 224, in wrapper
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     value = f(*args, **kwargs)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/datasets/api.py", line 236, in post
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     new_model = CreateDatasetCommand(g.user, item).run()
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/datasets/commands/create.py", line 47, in run
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     self.validate()
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/datasets/commands/create.py", line 87, in validate
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     if database and not DatasetDAO.validate_table_exists(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/datasets/dao.py", line 81, in validate_table_exists
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     database.get_table(table_name, schema=schema)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/superset/superset/models/core.py", line 603, in get_table
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     return Table(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "<string>", line 2, in __new__
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/util/deprecations.py", line 139, in warned
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     return fn(*args, **kwargs)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 560, in __new__
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     metadata._remove_table(name, schema)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     compat.raise_(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     raise exception
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 555, in __new__
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     table._init(name, metadata, *args, **kw)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 644, in _init
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     self._autoload(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/sql/schema.py", line 667, in _autoload
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     autoload_with.run_callable(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2212, in run_callable
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     return conn.run_callable(callable_, *args, **kwargs)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1653, in run_callable
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     return callable_(self, *args, **kwargs)
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 469, in reflecttable
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     return insp.reflecttable(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/engine/reflection.py", line 664, in reflecttable
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     for col_d in self.get_columns(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/sqlalchemy/engine/reflection.py", line 390, in get_columns
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     col_defs = self.dialect.get_columns(
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/pinotdb-0.3.6-py3.8.egg/pinotdb/sqlalchemy.py", line 390, in get_columns
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     columns = [
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/pinotdb-0.3.6-py3.8.egg/pinotdb/sqlalchemy.py", line 393, in <listcomp>
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     "type": get_type(spec["dataType"], spec.get("fieldSize")),
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:   File "/app/superset/lib/python3.8/site-packages/pinotdb-0.3.6-py3.8.egg/pinotdb/sqlalchemy.py", line 458, in get_type
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]:     return type_map[data_type.lower()]
    Oct 15 17:11:23 ip-10-0-7-125 superset[1849861]: KeyError: 'timestamp'
    I've upgraded to the latest pinotdb driver.
  • Vibhor Jain
    10/18/2021, 1:58 PM
    Hi team, as part of handling duplicates in our hybrid table, we thought of using "mergeType": "dedup" when moving data from the realtime to the offline table. The problem we are facing is that one of our columns stores an encrypted value, and even for duplicate rows this value changes every time. Since "dedup" works on the entire row, it's not removing the duplicates. Is there a way to perform "dedup" on a subset of columns when moving data to the offline table via the minion?
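    For reference, a sketch of where the merge type sits in the RealtimeToOfflineSegmentsTask config (the period values are placeholders):
    "task": {
      "taskTypeConfigsMap": {
        "RealtimeToOfflineSegmentsTask": {
          "bucketTimePeriod": "1d",
          "bufferTimePeriod": "1d",
          "mergeType": "dedup"
        }
      }
    }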
  • Deepak Mishra
    10/19/2021, 4:30 AM
    Hi team, there is a workaround to fix the data gap issue temporarily (https://github.com/apache/pinot/issues/6988): update the watermark manually to the point from which data is available in the realtime segments. It will move data from the realtime table to the offline table when the task is scheduled. Is this a feasible approach?
  • Mahesh babu
    10/20/2021, 8:40 AM
    Hi, can we remove old data (more than one week old) from a Pinot table? If yes, how?
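    A sketch of the usual answer, via the retention settings in the table config's segmentsConfig (values are illustrative for a one-week window):
    "segmentsConfig": {
      "retentionTimeUnit": "DAYS",
      "retentionTimeValue": "7"
    }
    The controller's retention manager then periodically deletes segments whose end time falls outside this window.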
  • Manish Soni
    10/20/2021, 10:19 AM
    Hi team, we are trying to understand the Pinot metrics exposed to Prometheus. Looking into the segment error metric "pinot_controller_segmentsInErrorState_Value", which is described as "Number of segments in error state": we see that some of our segments are in a bad state, but this is not reflected in the Prometheus graph. The count shows 0.
  • Saad Khan
    10/20/2021, 6:30 PM
    Hi team, from the auth settings I was able to enable user credentials following the instructions here, but queries via the console are not going through; they fail with a READ error. As per the instructions, the broker and controller have the same admin username:pwd.
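    For comparison, a sketch of the basic-auth properties as documented around 0.8 (property names are worth re-checking against the docs for your version; a missing READ permission on the principal the console uses is one thing this error can point to):
    # controller.conf
    controller.admin.access.control.factory.class=org.apache.pinot.controller.api.access.BasicAuthAccessControlFactory
    controller.admin.access.control.principals=admin
    controller.admin.access.control.principals.admin.password=verysecret
    # broker.conf
    pinot.broker.access.control.class=org.apache.pinot.broker.broker.BasicAuthAccessControlFactory
    pinot.broker.access.control.principals=admin
    pinot.broker.access.control.principals.admin.password=verysecret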
  • Piyush Chauhan
    10/21/2021, 6:54 AM
    I am facing an issue using the JDBC client of Pinot. I am able to run queries via Postman against the broker, but I am getting the following problem: Failed to connect to url : jdbc:pinot://<broker-url> java.util.concurrent.ExecutionException: org.apache.pinot.client.PinotClientException: Pinot returned HTTP status 308, expected 200. I am using version 0.8.0 and following this guide: https://docs.pinot.apache.org/users/clients/jdbc
  • suraj kamath
    10/21/2021, 7:25 AM
    Hi folks, in a case where the lookup join returns null values for some rows, how can we filter out the nulls? I am trying something like:
    select lookup('tableB', 'username', 'orgId', orgId, 'userId', userId) as username  from tableA where username is not null limit 10
    But I see an error:
    Unsupported predicate type: IS_NOT_NULL
    Full error screenshot attached.
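    One possible workaround sketch, under a loud assumption: that missing lookups surface as Pinot's default null placeholder for STRING columns, the literal string 'null'. This is not confirmed behavior of the lookup UDF and is worth verifying before relying on it:
    select lookup('tableB', 'username', 'orgId', orgId, 'userId', userId) as username
    from tableA
    where lookup('tableB', 'username', 'orgId', orgId, 'userId', userId) != 'null'
    limit 10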
  • eywek
    10/21/2021, 2:04 PM
    Hello, in the documentation (https://docs.pinot.apache.org/basics/indexing/text-search-support#co-existence-with-other-indexes) you explain that text indexes aren't supported on dictionary-encoded columns. Do you know when we will be able to do this? I would like to use an inverted index + a text index. Thank you.