# opal
a
Hi @Jack Geek, these logs are propagated to stderr (this is the loguru default; you can read about that here) and that could be the reason. We are using Datadog for log aggregation and there is a way to configure log processing on their end; I assume GKE has a similar log processing feature.
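For context, this is roughly what re-pointing loguru at stdout would look like (a minimal sketch, assuming nothing else depends on the default stderr sink):
```python
import sys
from loguru import logger

# loguru writes to stderr by default; GKE tends to classify anything
# on stderr as ERROR regardless of the actual log level.
logger.remove()                          # drop the default stderr sink
logger.add(sys.stdout, serialize=True)   # emit one JSON object per line

logger.info("server started")            # now lands on stdout as JSON
```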
o
This seems relevant: https://stackoverflow.com/questions/71158475/gcp-log-explorer-shows-wrong-severity-level-of-log-records
The simplest fix would be to add a severity field to the structured logs.
j
Hi @Or Weis: "add a severity field to the structured logs", on the OPAL server side?
o
Yes, either that or adding another log-enriching solution in between. Should be a pretty easy PR though (add a flag to add the severity field, and add it when logging based on the actual log level). @Asaf Cohen WDYT?
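Roughly what that PR could look like (a sketch only; the flag name, sink wiring, and level mapping are assumptions, not OPAL's actual config):
```python
import json
import sys

from loguru import logger

# Hypothetical flag; OPAL's real config variable may look different.
LOG_ADD_SEVERITY = True

# Map loguru level names onto the severity names GCP's Log Explorer expects.
LEVEL_TO_SEVERITY = {
    "TRACE": "DEBUG",
    "DEBUG": "DEBUG",
    "INFO": "INFO",
    "SUCCESS": "INFO",
    "WARNING": "WARNING",
    "ERROR": "ERROR",
    "CRITICAL": "CRITICAL",
}

def json_sink(message):
    # loguru passes each log as a Message whose .record holds the details.
    record = message.record
    entry = {
        "message": record["message"],
        "timestamp": record["time"].isoformat(),
        "level": record["level"].name,
    }
    if LOG_ADD_SEVERITY:
        entry["severity"] = LEVEL_TO_SEVERITY[record["level"].name]
    print(json.dumps(entry), file=sys.stdout)

logger.remove()
logger.add(json_sink)
```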
a
OPAL already has a severity field in the logs (the log level): info, warning, error, critical, etc.
why add another one?
OPAL can output the logs as json, simply take the log level and map it correctly in GKE
Hi @Jack Geek, from what I understand, you ingest logs in one of several ways, and you can also ingest logs with fluentd. If you do use fluentd, I found how to transform json logs; maybe that's helpful? https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/write https://docs.fluentd.org/filter/record_transformer We can also add a configuration variable to OPAL that lets you control the json format of the logs (without actually duplicating the severity field).
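For the fluentd route, a minimal record_transformer filter could look like this (the tag pattern and the source field name are assumptions about your pipeline):
```
<filter opal.**>
  @type record_transformer
  <record>
    # copy the log level into the field GCP reads as LogEntry.severity
    severity ${record["level"]}
  </record>
</filter>
```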
@Ori Shavit wdyt about adding a configurable json format for opal logs?
o
Another format or something customizable?
a
a customizable json format
is that possible?
o
I feel this could be difficult from a UX standpoint. How do users specify the format? Some kind of schema? Not sure how this would look.
o
Yeah, we can use confi.model() with a Pydantic schema
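Something like this, maybe (a sketch; the field names are illustrative, and I'm assuming confi.model() accepts a Pydantic model the way this thread describes):
```python
from pydantic import BaseModel

# Illustrative schema for a user-configurable JSON log format;
# these fields are assumptions, not OPAL's actual config surface.
class JsonLogFormat(BaseModel):
    severity_key: str = "severity"    # key GCP/GKE reads for the log level
    message_key: str = "message"
    timestamp_key: str = "timestamp"
    include_logger_name: bool = False

# Per this thread, the model would then be exposed through OPAL's confi
# mechanism (confi.model(...)) so users can override it via env vars.
```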