# general
b
From the documentation, the data mainly comes from offline batch ingestion (Spark/Hadoop) or streaming with Kafka. What if my data is persisted in a database? Should I trigger a Kafka event for every database write and have Pinot consume that stream? Alternatively, could Pinot read a daily snapshot dump from the database?
m
At the moment, those are the only options in open source. Another possibility is CDC via Debezium + Kafka.
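For context, with the Debezium + Kafka approach, Pinot would consume the change events through a REALTIME table whose stream config points at the Debezium-produced topic. A minimal sketch of such a table config follows; the table name, schema name, topic name, and broker address are all assumptions for illustration:

```json
{
  "tableName": "orders",
  "tableType": "REALTIME",
  "segmentsConfig": {
    "timeColumnName": "updated_at",
    "schemaName": "orders",
    "replication": "1"
  },
  "tableIndexConfig": {
    "streamConfigs": {
      "streamType": "kafka",
      "stream.kafka.topic.name": "dbserver1.public.orders",
      "stream.kafka.broker.list": "localhost:9092",
      "stream.kafka.consumer.type": "lowlevel",
      "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
      "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder"
    }
  },
  "tenants": {},
  "metadata": {}
}
```

Note that Debezium emits change events in an envelope (with `before`/`after` fields), so in practice you would either flatten the payload with a Kafka Connect transform before it reaches the topic, or configure the decoder accordingly.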
b
Thanks for the quick answer! Is this the recommended way to use pinot?
m
Yes. May I ask what’s your use case?
b
It's mainly for a user-facing analytical dashboard. Potentially we also want to use Pinot to power the recommendation system (feeding it the signals we use for ranking).
m
That’s a great use case (LinkedIn feed is already using Pinot for that).
b
Thanks for the answers! Let me give it a try!
👍 1