From the documentation, the data mainly comes from offline batch ingestion (Spark/Hadoop) or streaming with Kafka. What if my data is persisted in a database? Should I publish a Kafka event for every database write and have Pinot consume that stream? Or could Pinot instead read a daily snapshot dump from the database?
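For context, here is a minimal sketch of the "Kafka event per database write" idea I'm describing: each write is wrapped as a JSON change event and published to a topic that a Pinot realtime table consumes. The function name, topic name, and event fields below are illustrative assumptions, not a fixed Pinot schema, and the producer part assumes the kafka-python client.

```python
import json
import time

def row_to_event(table, op, row):
    """Wrap one database write as a JSON-encoded change event.

    `table`, `op`, and `ts` are illustrative metadata fields;
    the row's own columns are merged in at the top level.
    """
    return json.dumps({
        "table": table,
        "op": op,                       # e.g. "insert" or "update"
        "ts": int(time.time() * 1000),  # event time in epoch millis
        **row,
    }).encode("utf-8")

event = row_to_event("orders", "insert", {"order_id": 42, "amount": 9.99})

# With kafka-python and a running broker, the event would then be
# published to the topic a Pinot realtime table is configured to read:
#
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="localhost:9092")
# producer.send("orders-events", event)
```

In practice a CDC tool such as Debezium can emit events like this from the database's change log, instead of instrumenting every application write path by hand.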