# troubleshooting
j
Hello, I am trying to follow the streaming ingestion tutorial to the best of my ability. I am able to use the console consumer to read from my topic, and I am able to connect my Pinot table to Kafka, but Pinot is unable to read any data. I am attaching the schema file and the runbook I am using to set up the cluster.
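In case it helps, the relevant steps from my runbook look roughly like this; the topic name, broker address, and file paths are assumptions from my local setup rather than the exact attachments:

```bash
# Sanity check that messages are actually on the Kafka topic (this part works
# for me); "transcript-topic" and localhost:9092 are assumptions from my setup.
bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic transcript-topic \
  --from-beginning

# Create the realtime table; the table config's streamConfigs section is what
# points Pinot at the topic (stream.kafka.topic.name, stream.kafka.broker.list).
bin/pinot-admin.sh AddTable \
  -schemaFile /tmp/pinot-quick-start/transcript-schema.json \
  -tableConfigFile /tmp/pinot-quick-start/transcript-table-realtime.json \
  -exec
```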
m
Was the table created successfully? And if so, can you share the ideal-state (from ZK browser in the UI)?
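(If the ZK browser is fiddly, the controller REST API should return the same ideal state; this assumes the controller is running on localhost:9000 and the table is named transcript.)

```bash
# Fetch the ideal state for the table via the controller API
# (controller assumed to be at localhost:9000):
curl -s http://localhost:9000/tables/transcript/idealstate
```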
j
The table is created successfully. The ZK browser should be at http://localhost:2181/, right?
sorry, misunderstood what you meant! I assumed ZooKeeper itself had a UI you could load in the browser. looking now
```json
{
  "id": "transcript_REALTIME",
  "simpleFields": {
    "BATCH_MESSAGE_MODE": "false",
    "IDEAL_STATE_MODE": "CUSTOMIZED",
    "INSTANCE_GROUP_TAG": "transcript_REALTIME",
    "MAX_PARTITIONS_PER_INSTANCE": "1",
    "NUM_PARTITIONS": "0",
    "REBALANCE_MODE": "CUSTOMIZED",
    "REPLICAS": "1",
    "STATE_MODEL_DEF_REF": "SegmentOnlineOfflineStateModel",
    "STATE_MODEL_FACTORY_NAME": "DEFAULT"
  },
  "mapFields": {
    "transcript__0__0__20220726T1553Z": {
      "Server_172.18.0.5_8098": "CONSUMING"
    }
  },
  "listFields": {}
}
```
m
Ok, this means that the consumer started. Please check the external view as well. And if that also shows “CONSUMING” status, check the table debug endpoint in swagger, and then the server log for any errors.
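(For the external view and the debug endpoint, something along these lines should work, assuming the controller is on localhost:9000:)

```bash
# External view: what the servers actually report, to compare with the ideal state.
curl -s http://localhost:9000/tables/transcript/externalview

# Table debug endpoint (the same one exposed in Swagger under "Debug"):
curl -s http://localhost:9000/debug/tables/transcript
```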
j
externalview.json.js
table_debug.json.js
I’m seeing this in the controller logs:
java.lang.NoSuchMethodException: Could not find a suitable constructor in javax.servlet.ServletConfig class.
But no errors in the server or the broker
Ah interesting… trying out the example provided here, I thought I was having the same issue, because when I look at the table in Cluster Manager it says 0 bytes for reported and estimated size. But when I look at the Query Console, I can see the rows
Ok, looks like that was the issue: I was looking at the reported/estimated sizes, which were 0 bytes even though there were rows loaded into the table. Is that a bug? Or was it at 0 just because there was not yet much data in the table?
n
It is 0 because the size reporter doesn’t count CONSUMING segments. You should see that number go up when the segment converts to ONLINE.
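(If you want to watch it, the table size endpoint is where those numbers come from; assuming the controller is at localhost:9000, it should report a non-zero size once the first segment commits.)

```bash
# Reported/estimated table size; stays at 0 while all segments are still
# CONSUMING and grows once segments are committed to ONLINE
# (controller assumed at localhost:9000):
curl -s http://localhost:9000/tables/transcript/size
```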