# ask-for-help
j
Hi Ariel
a
Hi
j
The format is newline-delimited JSON (one JSON record per line).
JSON Files - Spark 3.3.1 Documentation (apache.org)
In detail, it depends on what your tech stack looks like:
• If your team already has analytical pipelines, it is recommended to use a tool like Fluent Bit to forward the monitoring data logs to your data warehouse.
• If you just want to try out BentoML's monitoring feature, you can load the data manually, as in the example below.
Here is an example of reading the data into a pandas DataFrame:
import glob

import pandas as pd

# PATH is a placeholder for your BentoML working directory; monitoring logs
# are written under monitoring/<monitor_name>/data/ inside it.
files = glob.glob(f"{PATH}/monitoring/iris_classifier_prediction/data/*")
frames = [
    pd.read_json(file_name, lines=True)  # each file is newline-delimited JSON
    for file_name in files
]
dataset = pd.concat(frames)
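As a self-contained illustration of the same pattern, the sketch below writes two small newline-delimited JSON files into a temporary directory and reads them back with glob plus `pd.read_json(lines=True)`. The directory layout and record fields here are invented for the demo, not BentoML's actual log schema:

```python
import glob
import json
import os
import tempfile

import pandas as pd

# A throwaway directory standing in for monitoring/<monitor_name>/data/.
data_dir = os.path.join(tempfile.mkdtemp(), "data")
os.makedirs(data_dir)

# Hypothetical prediction records, one file per "request batch".
batches = [
    [{"pred": "setosa", "score": 0.98}],
    [{"pred": "virginica", "score": 0.91}],
]
for i, rows in enumerate(batches):
    with open(os.path.join(data_dir, f"{i}.log"), "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")  # one JSON object per line

# Same pattern as the example above: glob the files, parse each as
# JSON Lines, then concatenate into a single DataFrame.
files = glob.glob(f"{data_dir}/*")
frames = [pd.read_json(file_name, lines=True) for file_name in files]
dataset = pd.concat(frames, ignore_index=True)
print(len(dataset))  # one row per record written
```

`pd.concat` with `ignore_index=True` gives the combined frame a fresh 0..n-1 index instead of repeating each file's local index.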
a
Thanks for the detailed reply!