# troubleshooting
b
Q on consuming from kafka - if the table is created through the UI, is there some endpoint that must be hit to start the consumer? As far as I can tell I've created the table correctly: it validated in the UI and said it could connect to kafka, but it just doesn't seem to consume any messages. I've verified that I can connect to kafka from the controller/broker machine and that there are messages in the topic. The broker and controller logs don't seem to contain anything related to kafka, however.
My config for the realtime table is:
"streamConfigs": {
        "streamType": "kafka",
        "stream.kafka.topic.name": "test_pinot",
        "stream.kafka.broker.list": "10.150.1.248:9092",
        "stream.kafka.consumer.type": "lowlevel",
        "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
        "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
        "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
        "realtime.segment.flush.threshold.rows": "100000",
        "realtime.segment.flush.threshold.time": "24h",
        "realtime.segment.flush.segment.size": "100M"
      },
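For anyone landing here later: creating the REALTIME table is normally enough to start consumption, so there shouldn't be a separate endpoint to hit. A quick sanity check is to ask the controller for the consuming-segment status. A minimal sketch, assuming a controller reachable at localhost:9000 and a Pinot version that exposes the consumingSegmentsInfo endpoint (both are assumptions, adjust for your cluster):

```python
# Sketch: check whether the realtime table's partitions are actually consuming.
# Controller address and endpoint availability are assumptions.
import json
import urllib.request

CONTROLLER = "http://localhost:9000"   # assumption: controller host/port
TABLE = "transaction"                  # table name from this thread

def get(path):
    with urllib.request.urlopen(f"{CONTROLLER}{path}") as resp:
        return json.loads(resp.read())

# Reports per-partition consumer state and offsets for the CONSUMING segments.
info = get(f"/tables/{TABLE}/consumingSegmentsInfo")
print(json.dumps(info, indent=2))
```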
m
What's the tenant name you used for servers, and are there servers available in your cluster with that tenant name?
b
"tenants": {
      "broker": "DefaultTenant",
      "server": "DefaultTenant",
      "tagOverrideConfig": {}
    },
The UI lists one server out of the 5 for that tenant.
That server is enabled, but the log on that server hasn't been updated since yesterday.
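A way to cross-check the tenant tagging outside the UI is to ask the controller which server instances carry the tenant tag. A minimal sketch, assuming a controller at localhost:9000 and that the /tenants/{name}?type=server endpoint is available (both assumptions):

```python
# Sketch: list server instances tagged with the table's server tenant.
# Controller address and exact endpoint shape are assumptions.
import json
import urllib.request

CONTROLLER = "http://localhost:9000"   # assumption: controller host/port
TENANT = "DefaultTenant"

with urllib.request.urlopen(f"{CONTROLLER}/tenants/{TENANT}?type=server") as resp:
    print(json.dumps(json.loads(resp.read()), indent=2))
```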
m
Could you share the ideal state and external view of the table from the ZK browser?
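If the ZK browser is awkward to use, the same information can usually be pulled through the controller REST API and diffed. A minimal sketch, assuming a controller at localhost:9000 and that the /tables/{name}/idealstate and /tables/{name}/externalview endpoints are available (both assumptions):

```python
# Sketch: fetch ideal state and external view for the table via the controller.
# Controller address and endpoint paths are assumptions.
import json
import urllib.request

CONTROLLER = "http://localhost:9000"   # assumption: controller host/port
TABLE = "transaction"

def get(path):
    with urllib.request.urlopen(f"{CONTROLLER}{path}") as resp:
        return json.loads(resp.read())

ideal_state = get(f"/tables/{TABLE}/idealstate")
external_view = get(f"/tables/{TABLE}/externalview")

# A segment missing from the external view, or stuck in a state the ideal
# state doesn't expect, usually points at the assigned server.
print(json.dumps(ideal_state, indent=2))
print(json.dumps(external_view, indent=2))
```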
b
{
  "id": "transaction_REALTIME",
  "simpleFields": {
    "BATCH_MESSAGE_MODE": "false",
    "BUCKET_SIZE": "0",
    "HELIX_ENABLED": "true",
    "IDEAL_STATE_MODE": "CUSTOMIZED",
    "INSTANCE_GROUP_TAG": "transaction_REALTIME",
    "MAX_PARTITIONS_PER_INSTANCE": "1",
    "NUM_PARTITIONS": "0",
    "REBALANCE_MODE": "CUSTOMIZED",
    "REPLICAS": "1",
    "STATE_MODEL_DEF_REF": "SegmentOnlineOfflineStateModel",
    "STATE_MODEL_FACTORY_NAME": "DEFAULT"
  },
  "mapFields": {
    "transaction__0__0__20210801T1539Z": {
      "Server_ip-10-150-4-116.ec2.internal_8098": "CONSUMING"
    }
  },
  "listFields": {}
}
ideal state
{
  "id": "transaction_REALTIME",
  "simpleFields": {
    "BATCH_MESSAGE_MODE": "false",
    "HELIX_ENABLED": "true",
    "IDEAL_STATE_MODE": "CUSTOMIZED",
    "INSTANCE_GROUP_TAG": "transaction_REALTIME",
    "MAX_PARTITIONS_PER_INSTANCE": "1",
    "NUM_PARTITIONS": "0",
    "REBALANCE_MODE": "CUSTOMIZED",
    "REPLICAS": "1",
    "STATE_MODEL_DEF_REF": "SegmentOnlineOfflineStateModel",
    "STATE_MODEL_FACTORY_NAME": "DEFAULT"
  },
  "mapFields": {
    "transaction__0__0__20210801T1539Z": {
      "Server_ip-10-150-4-116.ec2.internal_8098": "CONSUMING"
    }
  },
  "listFields": {}
}
That server is not the same server as the one for the tenant. Its logs have a few messages related to the table but nothing else.
The journal service on that machine is going nuts though, checking in on it.
Ah, there it is. Parse error with the timestamp.
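For anyone hitting the same symptom (segment stuck in CONSUMING, nothing ingested, parse errors only in the server log): one way to reproduce the failure outside Pinot is to run a sample message through the timestamp format the schema declares. Everything below (the sample message, field name, and format) is a hypothetical stand-in, since the actual schema isn't shown in this thread:

```python
# Sketch: reproduce a timestamp parse failure outside Pinot.
# The message body, field name, and expected format are hypothetical.
import json
from datetime import datetime

sample = '{"txn_id": "abc", "ts": "2021-08-01 15:39:00"}'   # hypothetical message
record = json.loads(sample)

expected_format = "%Y-%m-%dT%H:%M:%SZ"   # hypothetical format from the schema
try:
    datetime.strptime(record["ts"], expected_format)
except ValueError as e:
    # This kind of mismatch surfaces as a parse/transform error in the server
    # log while the segment just sits in CONSUMING with no rows ingested.
    print(f"timestamp does not match declared format: {e}")
```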
m
👍