# troubleshooting

    Diogo Baeder

    06/25/2022, 8:15 PM
    Hi guys, how can I cast a value from string to float? For some reason in the JSON results I'm getting strings as results from my `sum()`s, and I need them to be converted to floats, but I'd like to avoid having to reprocess this.
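    A minimal sketch of the kind of cast being asked about, assuming a hypothetical table myTable with a numeric-as-string column amount (not the poster's actual schema); casting inside the aggregation makes SUM operate on doubles, so the value comes back numeric in the JSON response:
    -- hypothetical table and column names
    SELECT SUM(CAST(amount AS DOUBLE)) AS total_amount
    FROM myTable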

    Alice

    06/26/2022, 12:16 PM
    Hi team, my table stopped consuming stream data and the state of the consuming segments is "consumerState": "NOT_CONSUMING". Is there anything I can do to make it continue consuming again?

    ahmed

    06/26/2022, 11:54 PM
    Hi guys, in the documentation they mention that we can use inbuilt functions as transform functions. Is there any way to use a UDF as a transform function? If not, can we use a multi-line Groovy script instead? I tried to write the script on one line with ";" between the lines and it gives me an error:
    "ingestionConfig": {
            "transformConfigs": [{
              "columnName": "fcm_token",
              "transformFunction": "Groovy({import javax.crypto.Cipher;import javax.crypto.spec.SecretKeySpec;Cipher cipher = Cipher.getInstance('AES/ECB/PKCS5Padding');cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec('1234567812345678'.getBytes('UTF-8'), 'AES'));cipher.doFinal(fcm_token_raw.getBytes('UTF-8')).encodeBase64())).encodeBase64()},fcm_token_raw)"
            }]
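    For reference, the single-line inline Groovy syntax documented for transform functions looks like the sketch below (the column names are taken from the docs example, not from this table):
    {
      "columnName": "fullName",
      "transformFunction": "Groovy({firstName + ' ' + lastName}, firstName, lastName)"
    }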

    Sowmya Gowda

    06/27/2022, 6:38 AM
    Hi Team, I'm facing an issue while loading data from S3 into a Pinot offline table. I'm using EC2 for the Pinot setup. I'm able to access my S3 files from the EC2 instance, but data is not loading into the Pinot table. I also set up my minion job and enabled the scheduler in the controller config with
    controller.task.scheduler.enabled=true
    No luck here; when I check the scheduler information with the API, it returns this
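    Separately, for context, the kind of scheduled minion ingestion task being described usually sits under the table's task config. A minimal sketch, with a hypothetical bucket path and input format (not the poster's actual config):
    "task": {
      "taskTypeConfigsMap": {
        "SegmentGenerationAndPushTask": {
          "schedule": "0 */10 * * * ?",
          "inputDirURI": "s3://my-bucket/rawdata/",
          "inputFormat": "csv"
        }
      }
    }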

    Sevvy Yusuf

    06/27/2022, 1:37 PM
    Hi team 👋🏼 Can anyone here advise whether we can configure component tags using the Pinot Helm charts? Currently we create the brokers and servers in our cluster using Helm, then add the tags to them using the controller endpoint, but ideally we'd like it all done through Helm. Is this possible? Thanks in advance.

    Kevin Liu

    06/27/2022, 2:22 PM
    Hi team, I encountered a situation where I configured UPSERT in a REALTIME table, configured RealtimeToOfflineSegmentsTask at the same time, and turned on "mergeType": "dedup" in RealtimeToOfflineSegmentsTask. When creating the table I got the prompt: RealtimeToOfflineTask doesn't support UPSERT table. I checked the documentation and found that UPSERT cannot be applied to OFFLINE tables. Is there any way to enable UPSERT and deduplication in OFFLINE? Also, it seems that HYBRID tables can be used to deduplicate (turn on UPSERT and RealtimeToOfflineSegmentsTask) - can that be done now? I don't see how to create a HYBRID table.
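    A minimal sketch of the task config combination being described (the time values are placeholders, not the poster's actual settings):
    "task": {
      "taskTypeConfigsMap": {
        "RealtimeToOfflineSegmentsTask": {
          "bucketTimePeriod": "6h",
          "bufferTimePeriod": "12h",
          "mergeType": "dedup"
        }
      }
    }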

    harnoor

    06/27/2022, 3:14 PM
    Hi folks. We are using metrics as per https://docs.pinot.apache.org/configuration-reference/monitoring-metrics. Shouldn't the metric value
    max(pinot_broker_queryExecution_95thPercentile{table="$table"})
    be greater than
    max(pinot_server_totalQueryTime_95thPercentile{table="$table"})?

    Rakesh Bobbala

    06/27/2022, 5:12 PM
    Hi Team, my real-time table has "retentionTimeValue": "3", and after three days the files got moved to "Deleted_Segments/" in S3. So I copied the files back to the actual folder and reloaded the segments in the Pinot UI, but the data is not loading. Am I missing something here?

    Ming Dai

    06/28/2022, 4:25 AM
    Hi team, I have two questions about the star-tree index. Can anyone help me? Thank you in advance. The streaming table has a star-tree index config as below:
    "starTreeIndexConfigs": [{
      "dimensionsSplitOrder": ["event", "epoch_minute", "metric_source", "tenant", "topic"],
      "skipStarNodeCreationForDimensions": [],
      "functionColumnPairs": ["SUM__m1_rate", "MAX__p50"],
      "maxLeafRecords": 5000000
    }]
    The column "epoch_minute" is a dimension column with data type "long", derived from the table timestamp column with the transformation "ToEpochMinutes(clock)".
    1. If the query includes a range condition, the star-tree index does not kick in (an EXPLAIN statement shows the star-tree index is not triggered, although data is returned when the statement runs; if the epoch_minute range condition is removed, EXPLAIN shows the star-tree index is triggered). Here is the SQL statement:
    select event, epoch_minute, topic, tenant from testtable where event='myevent' and (epoch_minute > 27606367) and (epoch_minute < 27606372) GROUP BY event, epoch_minute, topic, tenant ORDER BY event, epoch_minute LIMIT 500000
    My first question is: why does the range condition block the selection of the star-tree index?
    2. In the case above, although the star-tree index is not triggered during execution, the segment selection result shows that the epoch_minute range condition filtered out some segments before the real query ran: its numSegmentsProcessed response parameter is much smaller than for the statement below:
    select event, epoch_minute, topic, tenant from testtable where event='myevent' GROUP BY event, epoch_minute, topic, tenant ORDER BY event, epoch_minute LIMIT 500000
    My second question is: how does this segment selection work? As far as I know, the Pinot server uses the timestamp column to filter segments, but epoch_minute is a dimension column rather than the timestamp column, so why does it work in the segment selection phase? I am looking forward to help from a Pinot expert. Thank you very much. Regards, Ming

    Eric Song

    06/28/2022, 5:17 AM
    Hi team, I just encountered some build problems. I built Pinot 0.10 on a Mac M1 with Maven in IDEA two months ago; at that time there were some compile errors in pinot-pulsar (the details are in this issue), and in the end I thought it was something wrong with Rosetta - it built successfully after I updated Rosetta 2. This week I tried to build Pinot 0.11, but the compile error shows up again. The detailed log is here:
    /Library/Java/JavaVirtualMachines/jdk-11.0.13.jdk/Contents/Home/bin/java -Dmaven.multiModuleProjectDirectory=/Users/.../Downloads/pinot-0.11.0/pinot-plugins/pinot-stream-ingestion/pinot-pulsar -Dmaven.home=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3 -Dclassworlds.conf=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/bin/m2.conf -Dmaven.ext.class.path=/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven-event-listener.jar -javaagent:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar=52165:/Applications/IntelliJ IDEA CE.app/Contents/bin -Dfile.encoding=UTF-8 -classpath /Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/boot/plexus-classworlds.license:/Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3/boot/plexus-classworlds-2.6.0.jar org.codehaus.classworlds.Launcher -Didea.version=2022.1 clean install package -DskipTests -Pbin-dist -X -P apple-silicon,other-jdk-maven-compiler-plugin
    Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
    Maven home: /Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven3
    Java version: 11.0.13, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk-11.0.13.jdk/Contents/Home
    Default locale: zh_CN_#Hans, platform encoding: UTF-8
    OS name: "mac os x", version: "11.3", arch: "x86_64", family: "mac"
    [DEBUG]   Included /Applications/IntelliJ IDEA CE.app/Contents/plugins/maven/lib/maven-event-listener.jar
    .........................
    .........................
    [DEBUG] incrementalBuildHelper#beforeRebuildExecution
    [INFO] Compiling 11 source files to /Users/.../Downloads/pinot-0.11.0/pinot-plugins/pinot-stream-ingestion/pinot-pulsar/target/classes
    [DEBUG] incrementalBuildHelper#afterRebuildExecution
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time:  14.325 s
    [INFO] Finished at: 2022-06-28T11:30:06+08:00
    [INFO] ------------------------------------------------------------------------
    [WARNING] The requested profile "bin-dist" could not be activated because it does not exist.
    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) on project pinot-pulsar: Compilation failure -> [Help 1]
    org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) on project pinot-pulsar: Compilation failure
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
        at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
        at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
        at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
        at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
        at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
        at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
        at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
        at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:566)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
        at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
        at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
        at org.codehaus.classworlds.Launcher.main (Launcher.java:47)
    Caused by: org.apache.maven.plugin.compiler.CompilationFailureException: Compilation failure
        at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute (AbstractCompilerMojo.java:1219)
        at org.apache.maven.plugin.compiler.CompilerMojo.execute (CompilerMojo.java:188)
        at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:210)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:156)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:148)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
        at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
        at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
        at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
        at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
        at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
        at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
        at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
        at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
        at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke (Method.java:566)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
        at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
        at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
        at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
        at org.codehaus.classworlds.Launcher.main (Launcher.java:47)
    [ERROR] 
    [ERROR] 
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
    I can't resolve it even though I updated Rosetta 2 again. At the same time, I can still build Pinot 0.10 successfully; it prints a log like this. I guess there is something wrong in the modules, but I don't know how to get more useful info to locate it. Appreciate any advice!

    Harish Bohara

    06/28/2022, 8:11 AM
    I am facing an issue with a table. I observe that when I create the table, data is ingested, but after 1-2 days new data is not visible; I don't see the latest data. If I delete and re-create the table, it shows the new data. This is an upsert table.
    "routing": {
            "instanceSelectorType": "strictReplicaGroup"
        },
    "query": {},
    "upsertConfig": {
        "mode": "PARTIAL",
        "partialUpsertStrategies": {
            "status": "OVERWRITE",
            "tenant_name": "OVERWRITE",
            "sub_tenant_name": "OVERWRITE"
        },
        "defaultPartialUpsertStrategy": "OVERWRITE",
        "hashFunction": "NONE"
    },

    Kevin Liu

    06/28/2022, 10:53 AM
    Hi folks. I tested the 'Stream Ingestion with Dedup' feature under the latest 0.11.0-SNAPSHOT version. I found that data deduplication was not happening: when creating the table, "dedupEnabled": true was added, but after the table is created the dedupConfig configuration is not found in the table's config file. What is going on here?
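    For reference, the dedup settings are expected as their own block in the table config, along the lines of this sketch from the Stream Ingestion with Dedup docs (the hashFunction value is a placeholder):
    "dedupConfig": {
      "dedupEnabled": true,
      "hashFunction": "NONE"
    }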

    Nirmaljeet Singh

    06/28/2022, 3:00 PM
    Hi Folks, I am facing issues due to spikes in latency in Pinot. I have observed that the Pinot server's memory fluctuates between specific values, and at the same time latency decreases and increases with the same pattern. I just wanted to know: is it due to heap memory or GC? Or what values should I keep? Current specifications: number of servers - 3; heap - 5 GB; memory min/max - 8 GB and 10 GB; CPU min/max - 4 and 6. I have set up Pinot using the Helm chart given in the official documentation - https://docs.pinot.apache.org/basics/getting-started/kubernetes-quickstart. Attaching screenshots for latency and memory.

    Abhay Rawat

    06/28/2022, 5:03 PM
    Hi Pinot team, I am facing dependency conflict issues while running Spark ingestion for both Spark 2.4 and 3.2 (different errors for each). I am using the latest Pinot version (0.11.0-SNAPSHOT). Error on Spark 2.4:
    Caused by: java.lang.NoSuchMethodError: 'org.apache.pinot.shaded.org.apache.commons.configuration.PropertiesConfiguration org.apache.pinot.spi.env.CommonsConfigurationUtils.fromFile(java.io.File)'
    	at org.apache.pinot.segment.spi.index.metadata.SegmentMetadataImpl.getPropertiesConfiguration(SegmentMetadataImpl.java:161)
    Error on Spark 3.2:
    Exception in thread "main" java.lang.ExceptionInInitializerError
    Caused by: java.lang.NullPointerException
    	at org.apache.commons.lang3.SystemUtils.isJavaVersionAtLeast(SystemUtils.java:1626)
    Both on JDK 11, on AWS EMRs. I think it's this particular combination (Pinot, Spark, S3, Parquet) that's not working. I am trying to remove some of them to narrow down the problem. Just wanted to know if this has worked for anyone.

    Mohamed Emad

    06/28/2022, 11:50 PM
    Hi, I created a real-time table containing about 44M records. Each time I run a query to get the total count of rows ("select count(*) from table_Name") I get a different number. For example, I ran it just now and got 4M rows, then ran it again one second later and got 9M rows, then ran it a third time one second after that and got 4M rows.

    Michael Latta

    06/28/2022, 11:55 PM
    Look in your server logs for failure-to-create-segment messages.

    Alice

    06/29/2022, 3:27 AM
    Hi, has anyone got any suggestions on how to make backfill jobs easier? Any tool or doc would help. Thanks.

    Laxman Ch

    06/29/2022, 6:27 AM
    Hi, is there a way to control the replication for CONSUMING and ONLINE segments separately for an RT table? For example, configuring a replication factor of 1 for ONLINE segments and 2 for CONSUMING segments.

    Kevin Liu

    06/29/2022, 6:45 AM
    Hi Folks, in the pom.xml of the Pinot 0.11.0-SNAPSHOT version, jackson-annotations 2.12.6.1 and jackson-core 2.12.6.1 do not have jars in the central repository. Where can I download them?

    Alice

    06/29/2022, 1:35 PM
    Hi team, I’ve a question about r2o task and backfill. If there’s only one segment for a time window, it will be much easier to prepare data and generate a segment. But what if r2o task generated 3 segments for a 2-hour window realtime data, namely table_name_starttimestamp_endtimestamp_0, table_name_starttimestamp_endtimestamp_1, table_name_starttimestamp_endtimestamp_2. If I need to backfill the second segment, table_name_starttimestamp_endtimestamp_1, how can I prepare the data to backfill this segment and how to configure the job spec file to generate the exact name? I feel it’s a little complex to prepare many more rows to backfill only a few rows.🤣 Is there any easier way to backfill data?

    Alice

    06/29/2022, 2:53 PM
    Another question about the r2o task. I found count(*) is much larger than the actual number in my table when the r2o task is enabled and "segmentPushFrequency" is set to "HOURLY". For example, the same SQL, like the following one, is run two times: the first run is before the offline segment is generated and the second run is after the offline segment is generated. How does the difference come about?
    select count(*), count(request_id), distinctcount(request_id) from table_test where "timestamp" >= FromDateTime('2022-06-29T141500', 'yyyy-MM-dd''T''HHmmss') and "timestamp" < FromDateTime('2022-06-29T142300', 'yyyy-MM-dd''T''HHmmss')

    Karin Wolok

    06/29/2022, 4:55 PM
    There's a question that was posted in the PrestoDB Slack. Can anyone help?
    Ritwik [3:04 AM]: Hi everyone, we are using the presto-pinot connector to query data stored in Apache Pinot. The same query, when run in Presto (using the Pinot catalog) and in Pinot (directly), gives different results; the number of output records is different in the two cases. Query: select id, count(id) from table_name group by user_id HAVING COUNT(id) = 39 LIMIT 500000; Number of records in Presto = 14, number of records in Pinot = 4051. It seems like there is some issue with the presto-pinot connector. Does anyone have any idea about this?
    If you are in the Presto Slack, here is the link to the post: https://prestodb.slack.com/archives/C07JH9WMQ/p1655719450882479

    Tao Hu

    06/29/2022, 8:08 PM
    Hi Pinot Community. It seems the CASE WHEN query has a bug in 0.10.0 (it was working in 0.9.3). I have created an issue: https://github.com/apache/pinot/issues/8996, and I am also posting here to keep everyone updated.

    Alice

    06/30/2022, 2:49 AM
    Hi team, S3 is used for the deep store in my case. I noticed that when the r2o task is running, it downloads segments from S3 and it's slow. Can the minion download data from the Pinot server instead of from S3?

    Kevin Xu

    06/30/2022, 9:17 AM
    Hi team, I tried to use TLSv1.2 with Pinot. Now I want to know what parameters should be added to the JDBC connection if I use a self-signed cert. Can anyone help?

    Vinay Krishnamurthy

    06/30/2022, 4:43 PM
    Hi, I have a question about the tagOverrideConfig option. I understand that this option allows moving completed/consumed segments from realtime servers to offline servers, and the movement logic kicks in every hour or so. Is there an option available to configure the time interval for the segment movement too? Also, I noticed that the segment gets dropped from the realtime server as part of the movement - is there a way to delay/defer this behavior (the realtime server dropping the segment) until the offline server has the segment?
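    For reference, a minimal sketch of the config in question, with placeholder tenant tags rather than the poster's actual tenants:
    "tenants": {
      "broker": "DefaultTenant",
      "server": "DefaultTenant",
      "tagOverrideConfig": {
        "realtimeConsuming": "DefaultTenant_REALTIME",
        "realtimeCompleted": "DefaultTenant_OFFLINE"
      }
    }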

    Seunghyun

    06/30/2022, 9:26 PM
    Has anyone faced the issue where the downloaded segments on the server are not consistent? https://github.com/apache/pinot/issues/9003

    harnoor

    07/01/2022, 1:51 PM
    Hi. We want to replace the regexp_like operator with TEXT_MATCH. Column in schema:
    {
          "name": "backend_name",
          "dataType": "STRING",
          "defaultNullValue": ""
        },
    We have added the fields below. Table config:
    "tableIndexConfig": {
          "noDictionaryColumns": [
            "backend_name"
          ],
    "fieldConfigList": [
          {
            "name": "backend_name",
            "encodingType": "RAW",
            "indexType": "TEXT",
            "indexTypes": [
              "TEXT"
            ]
          }
        ],
    After clicking on "reload all segments" in the UI, we are unable to see any effect and cannot run TEXT_MATCH queries. We tried reloading from the Swagger API too. Version: 0.9.1.
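    A minimal sketch of the kind of query being attempted, with a hypothetical table name and search term; it only returns results once the text index has actually been built on the column:
    SELECT *
    FROM myTable
    WHERE TEXT_MATCH(backend_name, 'foo')
    LIMIT 10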

    Anish Nair

    07/02/2022, 7:40 AM
    Hi Team, this is regarding Pinot servers getting OOM. We have a server with the following spec: server memory: 64 GB; Xmx allocated to the Pinot server: 55 GB. • The JVM usage for this Pinot server sometimes goes up to 49 GB (screenshot for reference, server name: max-pinot1.srv.media.net), and at the same time Pinot usage goes up to 60 GB (screenshot for reference). • We have a real-time table with upsert enabled; this server has 90 million primary keys (long type). • Is this expected? Is there any way to know the bifurcation of how each Pinot component is using the memory?

    Abdullah Jaffer

    07/03/2022, 2:34 PM
    Hello everyone, I am trying to ingest some data locally into Pinot, but my date field keeps getting set to null; all other fields are ingested properly. Relevant schema section:
    "dateTimeFieldSpecs": [
        {
            "name": "orderingDate",
            "dataType": "STRING",
            "format": "1:DAYS:SIMPLE_DATE_FORMAT:yyyy-MM-dd",
            "granularity": "1:DAYS"
          }
      ]
    Table config:
    {
      "tableName": "sales_by_order_table",
      "segmentsConfig": {
        "timeColumnName": "orderingDate",
        "timeType": "DAYS",
        "replication": "1",
        "schemaName": "sales_by_order"
      },
      "tableIndexConfig": {
        "invertedIndexColumns": [],
        "loadMode": "MMAP"
      },
      "tenants": {
        "broker": "DefaultTenant",
        "server": "DefaultTenant"
      },
      "tableType": "OFFLINE",
      "metadata": {}
    }