# ask-ai
j
Random question about log storage. On a serious note, why would you not store log data in a time-series DB and then apply some sort of visualisation tool on top of it? How do others store logs and then query / visualise / monitor those logs?
c
Hi Jins! It's a good question. To be honest, we just haven't gotten to that feature yet. Exporting the logs to some sort of Elasticsearch-style DB or some such is definitely on the roadmap. Right now we dump to files and truncate the logs when they get too big.
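The dump-to-files-and-truncate approach described here can be sketched with Python's stdlib `logging` module; this is a minimal illustration (the logger name and format are made up, not anything from this conversation), using size-based rotation rather than plain truncation so old lines survive in numbered backups:

```python
import logging
from logging.handlers import RotatingFileHandler

def make_rotating_logger(path, max_bytes=1_000_000, backups=3):
    """Write logs to a file and roll over when it gets too big,
    keeping a fixed number of old copies (app.log.1, app.log.2, ...)."""
    logger = logging.getLogger("app")  # hypothetical logger name
    logger.setLevel(logging.INFO)
    handler = RotatingFileHandler(path, maxBytes=max_bytes, backupCount=backups)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    return logger
```

Rotation keeps disk usage bounded the same way truncation does, but the backup files give you a short window to export or inspect recent history before it disappears.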
I'd be curious to hear how you think about the priority of this from your point of view. We definitely will do this, but next to some other features it's not seeming too high on the list right now.
But if we get feedback saying that it should be higher priority or it is the next most important thing to you we would be willing to adjust.
j
Log data can be highly unstructured, and is usually centralised in an operational intelligence tool like Datadog, New Relic, etc. that has the mechanisms to parse it, interpret it, correlate it, and generate alerts. A basic integration with CloudWatch Logs / Datadog would be a good way to start, as these tools allow alerts on errors out of the box 🙂
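The CloudWatch Logs integration mentioned here mostly amounts to shaping log lines into events and sending them in batches. A hedged sketch of the batching side, assuming the documented `PutLogEvents` limits (10,000 events and ~1 MiB per call, 26 bytes of per-event overhead); the `to_event` record shape is illustrative, and the actual send would go through `boto3.client('logs').put_log_events(...)`:

```python
import json
import time

# CloudWatch Logs caps a PutLogEvents call at 10,000 events and ~1 MiB;
# each event carries 26 bytes of overhead on top of the message bytes.
MAX_EVENTS = 10_000
MAX_BATCH_BYTES = 1_048_576
EVENT_OVERHEAD = 26

def to_event(record: dict) -> dict:
    """Shape one structured log record into a CloudWatch log event."""
    return {
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "message": json.dumps(record),
    }

def batch_events(events):
    """Split events into batches that respect the PutLogEvents limits.
    Each yielded batch would be passed as the logEvents argument of
    boto3.client('logs').put_log_events(...)."""
    batch, size = [], 0
    for ev in events:
        ev_size = len(ev["message"].encode()) + EVENT_OVERHEAD
        if batch and (len(batch) >= MAX_EVENTS or size + ev_size > MAX_BATCH_BYTES):
            yield batch
            batch, size = [], 0
        batch.append(ev)
        size += ev_size
    if batch:
        yield batch
```

Once the logs land in CloudWatch, a metric filter on `"level": "ERROR"`-style patterns gives the out-of-the-box error alerting mentioned above, or the log group can be forwarded on to Datadog.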
c
yup. agreed!
j
@Joao Correia Yeah, agree. Well, my thoughts are more architectural. I ask because at AgriDigital we are in the process of improving our logging approach, and I was discussing the key aspects of logging:
1. Storage of the log data somewhere
2. Query / visualisation of the log data
If I think about the issues I've experienced with logging, it's that:
• logs are poorly co-located when you have distributed systems or micro-services
• log data comes in broadly two types - system generated vs developer generated (with the latter more "readable" and potentially more useful)
• trying to query log data usually sucks
Any thoughts on how to store logs (e.g. save on S3 vs a time-series DB vs local storage on the server)? Any thoughts on how to query (e.g. using log analysis tools like Datadog vs hooking up to a SQL BI tool)?
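One common answer to "querying log data usually sucks" is to make the developer-generated logs structured from the start: one JSON object per line, with a service field so logs from distributed services can be co-located and filtered. A minimal sketch using Python's stdlib `logging` (the field names and the `service` attribute are illustrative assumptions, not anything from this thread):

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line, so logs from many
    services can be centralised and queried on fields instead of grepped."""
    def format(self, record):
        line = {
            "ts": round(time.time(), 3),             # epoch seconds
            "level": record.levelname,
            "service": getattr(record, "service", "unknown"),
            "message": record.getMessage(),
        }
        return json.dumps(line)
```

JSON-per-line logs work equally well dumped to S3 (queryable later with something like Athena) or fed straight into a tool like Datadog or CloudWatch Logs Insights, so the storage decision can be deferred without losing queryability.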
j
save them to a NoSQL db or better yet CloudWatch
have you seen Datadog, @Jins Kadwood? You just feed the logs to Datadog
or to CloudWatch, then to Datadog
j
Yeah, we have used Datadog. Sorry, I forgot to mention cost as a decision driver. Datadog becomes very expensive as they charge per host, and they define a host as a thing that is being monitored. When you start to have 100s of micro-services, the cost of Datadog becomes unbearable.
j
@Jins Kadwood Datadog doesn't price logs as hosts; they are actually one of the cheapest for logs. We monitor all our infra (K8s, GCP, AWS, hosts, and logs) and pay around 200 a month.