# docker
l
Hi, folks! How are y’all shipping your file-based logs (application.log, exception.log, etc.) from your containers to your log aggregator service?
I’ve got STDOUT logging covered, but not the file-based logs.
Specifically, I’m using docker-commandbox and deploying to a Docker Swarm.
My log aggregator has an agent that runs as a system service to ship logs from files, but that doesn’t really make sense in Docker-land.
q
I pipe ALL my logs to either stdout or stderr. Since the log names show up in the console, the log service can split them up.
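# symlink the app's log files to the container's stdout/stderr so the Docker log driver picks them up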
RUN ln -sf /dev/stdout /opt/lucee/web/logs/application.log && ln -sf /dev/stderr /opt/lucee/web/logs/exception.log
RUN ln -sf /dev/stdout /opt/lucee/server/lucee-server/context/logs/application.log && ln -sf /dev/stdout /opt/lucee/server/lucee-server/context/logs/out.log
l
Thanks, @quetwo!
Can I ask how the log names show up in the console? My app (quite rationally) doesn’t print the log file name on each log line.
q
I guess looking at it further, they don't actually say the old log file names. But the formats of catalina.log and application.log are different. And by pointing exception.log at /dev/stderr, that stream is easy to tell apart as well.
l
That makes sense. Thanks for letting me know! I’m using your /dev/stdout trick.
j
i use filebeat and logstash. i have separate handling for the tomcat stuff and the standardized lucee logging. (i parse all the regular lucee logs as CSV.) they all get shipped to elasticsearch.
i never thought to send to /dev/stdout (or stderr). i didn't even know that was a thing you could do. i think the problem in doing so may be that the different log types have different parsing needs (as @quetwo alluded to), which is why my approach seems to work fine.
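roughly the kind of logstash filter i mean, as a sketch -- the column names assume the stock lucee CSV layout and the date pattern will probably need adjusting for your own files:

filter {
  csv {
    # stock lucee logs are CSV with these columns (assumption -- check your files)
    columns => ["severity", "threadid", "date", "time", "application", "message"]
  }
  mutate {
    # stitch the separate date and time columns back together
    add_field => { "timestamp" => "%{date} %{time}" }
  }
  date {
    # parse the combined field into @timestamp
    match => ["timestamp", "MM/dd/yyyy HH:mm:ss"]
  }
}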
q
Jamie -- one thing to keep in mind as you evolve your docker deployments: most of the tools /expect/ to read logs from stdout. AWS (CloudWatch), Azure, etc. all do their log parsing from there. There are ways to get the logs coming from log4j to include the filename in the prefix to make that work better.
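Purely as a generic log4j 1.x illustration (not Lucee-specific -- whether Lucee picks up a custom config is a separate battle), a %c in the pattern prefixes every line with the logger name, which can stand in for the old per-file log name once everything is on stdout:

# generic log4j 1.x properties example
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=[%c] %d{ISO8601} %-5p %m%n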
j
yeah i like your idea if i can get the wrinkles ironed out. however, i haven't had any luck influencing the output of lucee logs with log4j configuration. there are some details on my (failed) attempt at doing so here: https://dev.lucee.org/t/logging-in-json-format/3753/8
speaking of logs, i really want this: https://luceeserver.atlassian.net/browse/LDEV-101 -- it's not easy to correlate logs without it
l
We wrote our own logger.cfc and we just use that everywhere so we have full control over the output. It uses FileWrite instead of WriteLog.
We’ve been including a request ID for a while now. It’s very helpful!
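Roughly like this -- a stripped-down sketch, not our actual component, and the paths and format here are placeholders:

// logger.cfc -- simplified sketch
component {

    public void function info( required string message, string file = "application" ) {
        // one ID per request so lines can be correlated across log files
        if ( !structKeyExists( request, "logId" ) ) {
            request.logId = createUUID();
        }

        var line = "#dateTimeFormat( now(), 'yyyy-mm-dd HH:nn:ss' )# [INFO] [#request.logId#] #arguments.message#";

        // the file functions instead of writeLog(), so we fully control the output
        var logFile = fileOpen( expandPath( "/logs/#arguments.file#.log" ), "append", "utf-8" );
        fileWriteLine( logFile, line );
        fileClose( logFile );
    }
}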
j
hmm, not a bad idea. sounds easy to implement but can you share the source?
j
For file-based logs, Filebeat is a great option. You can point it at a directory and specify the logging patterns separately for different directories or file names. You just need an Elasticsearch server. https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html
Then you can configure index lifecycle policies for retention, etc.
The nice thing is that they become easily searchable in Kibana (or StacheBox).
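A minimal filebeat.yml along those lines -- the paths, names, and policy here are just examples for a Lucee/Tomcat container, so adjust for your setup:

# filebeat.yml -- example only
filebeat.inputs:
  - type: log
    paths:
      - /opt/lucee/web/logs/*.log
    fields:
      app: lucee
  - type: log
    paths:
      - /usr/local/tomcat/logs/catalina.*
    fields:
      app: tomcat
    # stack-trace lines don't start with a date, so glue them onto the previous event
    multiline.pattern: '^\d{2}-'
    multiline.negate: true
    multiline.match: after

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]

# retention via an index lifecycle policy
setup.ilm.enabled: true
setup.ilm.policy_name: "my-log-retention"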