# advice-metadata-modeling
Hello, I have a Scala streaming pipeline that reads from Kafka and writes back to Kafka. In between it performs some mappings (which involve lookups against Cassandra). Is that a case that can be represented as a DataJob? And inside the DataJob (i.e. the pipeline), can I show what the mappings are by creating some custom datasets?
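
For context, here is a rough sketch of how I was imagining it, using DataHub's Python SDK (`acryl-datahub`); the orchestrator, topic, table, and job names are just placeholders:

```python
# Rough sketch: model the pipeline as a DataJob whose inputs are the source
# Kafka topic plus the Cassandra mapping table, and whose output is the
# destination Kafka topic. All names here are placeholders.
from datahub.emitter.mce_builder import make_data_job_urn, make_dataset_urn
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import DataJobInputOutputClass

# The pipeline itself as a DataJob (orchestrator/flow/job ids are made up).
job_urn = make_data_job_urn(
    orchestrator="spark", flow_id="my_streaming_flow", job_id="mapping_job"
)

# Inputs: the source Kafka topic and the Cassandra table used for the mappings.
inputs = [
    make_dataset_urn(platform="kafka", name="input_topic", env="PROD"),
    make_dataset_urn(platform="cassandra", name="keyspace.mapping_table", env="PROD"),
]
# Output: the destination Kafka topic.
outputs = [make_dataset_urn(platform="kafka", name="output_topic", env="PROD")]

# Attach the input/output lineage to the DataJob and emit it to DataHub.
mcp = MetadataChangeProposalWrapper(
    entityUrn=job_urn,
    aspect=DataJobInputOutputClass(inputDatasets=inputs, outputDatasets=outputs),
)
DatahubRestEmitter(gms_server="http://localhost:8080").emit(mcp)
```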
@big-carpet-38439 Would love your help here!