# random
My question above was a little too specific, so let me try again, with a bit more background info for context 🙂 I have some wide tables where transformations in the pipeline add new columns. I'd like to build this pipeline without having to spell out the same schema over and over again. During the transformations this isn't really a problem, because I can do

```sql
select *, newColumn
```

without having to specify the new schema explicitly. However, sinking this data into downstream systems (or a Parquet archive) is trickier, because there I do need to define the schema explicitly. Is there a way to take a table and write it directly to a Kafka topic or Parquet file without specifying the schema, while still allowing sink settings like partitioning?
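To make it concrete, here's roughly what I'm hoping exists. This is a rough sketch loosely based on Flink SQL's `CREATE TABLE ... LIKE` clause (my actual setup may differ), and all table/path names are made up:

```sql
-- Hypothetical sketch: derive the sink's schema from the existing wide
-- table instead of restating every column by hand.
CREATE TABLE parquet_archive
PARTITIONED BY (event_date)  -- keep sink settings like partitioning
WITH (
  'connector' = 'filesystem',
  'path'      = 's3://some-bucket/archive',  -- made-up path
  'format'    = 'parquet'
)
LIKE wide_table (EXCLUDING OPTIONS);  -- copy schema, drop source options

-- The write itself then needs no schema either:
INSERT INTO parquet_archive
SELECT * FROM wide_table;
```

A Kafka sink would presumably look the same with `'connector' = 'kafka'` and a `'topic'` option instead of the filesystem settings, but I haven't verified whether `PARTITIONED BY` can be combined with `LIKE` like this, which is basically my question.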