Hi 👋 we are thinking of using Pinot as our user-facing reporting data store. Our plan so far had been to take data from the source and dump it into Pinot (a 1:1 mapping between source and Pinot tables). This means we'd need PrestoDB/Trino on top of Pinot to handle complex JOINs. We are now considering remodeling the data: denormalizing and maybe aggregating before pushing it to Pinot. To denormalize the data before it lands in Pinot, we'd need a stream processing framework such as Flink sitting between Kafka and Pinot, right? A minimal sketch of what we have in mind is below.
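
For illustration only, here's roughly the shape of the Flink job we're picturing (topic names, the orders/users schemas, and the broker address are all made-up placeholders): it reads a fact stream and a dimension stream from Kafka, joins them into flattened rows, and writes the result to another Kafka topic that a Pinot realtime table would then consume.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class DenormalizeJob {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Fact stream from Kafka (hypothetical topic and fields)
        tableEnv.executeSql(
            "CREATE TABLE orders (" +
            "  order_id STRING, user_id STRING, amount DOUBLE, order_time TIMESTAMP(3)" +
            ") WITH (" +
            "  'connector' = 'kafka', 'topic' = 'orders'," +
            "  'properties.bootstrap.servers' = 'kafka:9092'," +
            "  'format' = 'json', 'scan.startup.mode' = 'latest-offset')");

        // Dimension stream from Kafka (hypothetical)
        tableEnv.executeSql(
            "CREATE TABLE users (" +
            "  user_id STRING, region STRING, plan STRING" +
            ") WITH (" +
            "  'connector' = 'kafka', 'topic' = 'users'," +
            "  'properties.bootstrap.servers' = 'kafka:9092'," +
            "  'format' = 'json', 'scan.startup.mode' = 'earliest-offset')");

        // Denormalized output topic that a Pinot realtime table would ingest
        tableEnv.executeSql(
            "CREATE TABLE orders_enriched (" +
            "  order_id STRING, user_id STRING, amount DOUBLE," +
            "  region STRING, plan STRING, order_time TIMESTAMP(3)" +
            ") WITH (" +
            "  'connector' = 'kafka', 'topic' = 'orders_enriched'," +
            "  'properties.bootstrap.servers' = 'kafka:9092'," +
            "  'format' = 'json')");

        // Flatten: join the fact stream with the user dimension and write back to Kafka.
        // Note: a regular streaming join keeps both sides in state indefinitely unless
        // state TTL is configured, so this is a sketch, not a production setup.
        tableEnv.executeSql(
            "INSERT INTO orders_enriched " +
            "SELECT o.order_id, o.user_id, o.amount, u.region, u.plan, o.order_time " +
            "FROM orders o JOIN users u ON o.user_id = u.user_id");
    }
}
```

Is this Kafka → Flink → Kafka → Pinot pattern the standard way people do pre-ingestion denormalization, or is there a simpler option we're missing?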