# general
k
Hello All, I started looking into Apache Pinot for a company use case. We would like to read rows from Cassandra tables and insert them into Pinot from within Apache Flink. From what I have read in the documentation so far, it seems like I would have to write a custom batch segment writer. Is there any way I can do this without writing a custom writer and instead push into Pinot directly, for example using JDBC insert statements?
m
If you can stream output to Kafka from your Flink job, Pinot can consume from there
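For example, a minimal sketch of the Flink side, assuming a JSON payload and using Flink's Kafka connector (the topic name, broker address, and payloads below are placeholders, not anything Pinot prescribes):

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class CassandraToPinotViaKafka {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // Placeholder source: in practice these would be rows read from Cassandra,
    // serialized to JSON matching the Pinot table schema.
    DataStream<String> rows = env.fromElements(
        "{\"id\": 1, \"name\": \"a\"}",
        "{\"id\": 2, \"name\": \"b\"}");

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker

    // A Pinot realtime table configured against this topic would consume the rows.
    rows.addSink(new FlinkKafkaProducer<>("pinot-events", new SimpleStringSchema(), props));

    env.execute("cassandra-to-pinot-via-kafka");
  }
}
```

The Pinot realtime table's streamConfigs would then point at that topic.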
k
We don't have Kafka, but Solace instead, so I guess I would need to write a custom stream ingester, since I think Pinot currently only supports Kafka?
m
You don’t have to write a custom segment writer. You can also write to a format like ORC/Avro/Parquet, and Pinot has utilities to read those.
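A rough sketch of that route, using Flink's StreamingFileSink to write Parquet via Avro reflection (the Row POJO and output path are placeholders, and this assumes the flink-parquet dependency is on the classpath):

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class RowsToParquet {
  // POJO mirroring the Pinot table schema; fields are placeholders.
  public static class Row {
    public long id;
    public String name;
    public Row() {}
    public Row(long id, String name) { this.id = id; this.name = name; }
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.enableCheckpointing(60_000); // bulk formats roll files on checkpoint

    // Placeholder source: in practice, rows read from Cassandra.
    DataStream<Row> rows = env.fromElements(new Row(1, "a"), new Row(2, "b"));

    StreamingFileSink<Row> sink = StreamingFileSink
        .forBulkFormat(new Path("/data/pinot-input"), // placeholder output dir
            ParquetAvroWriters.forReflectRecord(Row.class))
        .build();

    rows.addSink(sink);
    env.execute("rows-to-parquet");
  }
}
```

The resulting Parquet files could then be turned into segments and pushed with Pinot's batch ingestion job (e.g. LaunchDataIngestionJob with a file-based job spec).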
k
I would like to avoid having to set up additional infrastructure for Hadoop or Kafka and instead use what we currently have, which is Solace.
m
In that case, another option I can think of is writing a connector for Solace. We have abstracted out the real-time stream ingestion APIs, so it should be doable to write that connector. FYI, we currently have connectors for flavors of Kafka, and are in the process of writing one for Kinesis using this abstraction.
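Very roughly, that means implementing the SPI in org.apache.pinot.spi.stream. A skeleton of the shape is below; the exact abstract-method signatures vary by Pinot release, so treat them as assumptions to verify against the version you run, and nothing here is a real Solace implementation:

```java
import java.util.Set;

import org.apache.pinot.spi.stream.PartitionLevelConsumer;
import org.apache.pinot.spi.stream.StreamConsumerFactory;
import org.apache.pinot.spi.stream.StreamLevelConsumer;
import org.apache.pinot.spi.stream.StreamMetadataProvider;

// Skeleton of a Solace connector against Pinot's stream ingestion SPI.
// NOTE: method signatures are approximate and differ across Pinot versions;
// this shows the shape of the work, not a drop-in implementation.
public class SolaceConsumerFactory extends StreamConsumerFactory {

  @Override
  public PartitionLevelConsumer createPartitionLevelConsumer(String clientId, int partition) {
    // Would wrap a Solace queue/flow receiver and return fetched messages
    // as a MessageBatch for the given partition.
    throw new UnsupportedOperationException("not implemented in this sketch");
  }

  @Override
  public StreamLevelConsumer createStreamLevelConsumer(String clientId, String tableName,
      Set<String> fieldsToRead, String groupId) {
    throw new UnsupportedOperationException("not implemented in this sketch");
  }

  @Override
  public StreamMetadataProvider createPartitionMetadataProvider(String clientId, int partition) {
    throw new UnsupportedOperationException("not implemented in this sketch");
  }

  @Override
  public StreamMetadataProvider createStreamMetadataProvider(String clientId) {
    // Would report partition counts/offsets; Solace has no native partitions,
    // so the connector would need its own partition-mapping convention.
    throw new UnsupportedOperationException("not implemented in this sketch");
  }
}
```

The table's streamConfigs would then reference this factory class, the same way the Kafka connector is wired up.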
k
ok I see, thank you
k
we don't have a write API for Pinot. There is a Flink sink that is WIP. @Yupeng Fu @Marta Paes can provide more info here.
y
The Flink sink is not available now. Perhaps you can consider Spark.