Saurabh Kumar
07/10/2024, 11:16 AM
Saurabh Kumar
07/10/2024, 11:17 AM
Saurabh Kumar
07/10/2024, 11:17 AM
Saurabh Kumar
07/10/2024, 11:18 AM
Saurabh Kumar
07/10/2024, 11:18 AM
Pat
07/10/2024, 11:37 AM
gampuero
07/31/2024, 8:00 PM
[PARSER]
    Name    k3s
    Format  regex
    Regex   ^(?<log_level>[A-Z]\d{4}) (?<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+(?<pid>\d+)\s+(?<log_file>[^:]+):(?<log_line>\d+)\]\s+(?<msg>.+)$

[PARSER]
    Name    trace_parser_0
    Format  regex
    Regex   ^Trace\[(?<trace_id>\d+)\]: ---"(?<msg>[^"]+)" count:(?<count>\d+) (?<duration>\d+ms) \((?<time>\d{2}:\d{2}:\d{2}\.\d{3})\)$

[PARSER]
    Name    trace_parser_1
    Format  regex
    Regex   ^Trace\[(?<trace_id>\d+)\]: \[(?<duration>\d+\.\d+ms)\] \[(?<total_duration>\d+\.\d+ms)\] (?<status>\w+)$

[INPUT]
    Name              tail
    Alias             logs_k3s
    Path              /var/log/k3s-service.log
    Tag               k3s.*
    Mem_Buf_Limit     200MB
    Skip_Long_Lines   Off
    Buffer_Max_Size   10M
    Path_Key          log.file.path
    Offset_Key        log.offset
    Refresh_Interval  1
    Rotate_Wait       30

[FILTER]
    Name          parser
    Match         k3s.*
    Key_name      log
    Parser        k3s
    Parser        logfmt
    Reserve_Data  On
    Preserve_Key  On

[FILTER]
    Name          parser
    Match         k3s.*
    Key_name      log
    Parser        trace_parser_0
    Parser        trace_parser_1
    Reserve_Data  On
    Preserve_Key  On

[OUTPUT]
    Name                es
    Alias               k3s_logs
    Match               k3s.*
    Host                ${FLUENT_ELASTICSEARCH_HOST}
    Port                ${FLUENT_ELASTICSEARCH_PORT}
    HTTP_User           ${FLUENT_ELASTICSEARCH_USER}
    HTTP_Passwd         ${FLUENT_ELASTICSEARCH_PASSWORD}
    Type                _doc
    Index               logs-bksl-a
    Replace_Dots        On
    Retry_Limit         False
    TLS                 On
    TLS.verify          Off
    Suppress_Type_Name  On
These log lines:
Trace[1351576716]: ---\"Writing http response done\" 4021ms (19:32:20.860)
Trace[1351576716]: [4.025415827s] [4.025415827s] END
are parsed by trace_parser_0 and trace_parser_1, but for some reason the end result has unwanted additional boolean fields for each word in the log line, like this:
{"Trace[1351576716]:":true,"---":true,"Writing":true,"http":true,"response":true,"done":true,"4021ms":true,"(19:32:20.860)":true,"log.file.path":"/var/log/k3s-service.log","log.offset":32042866,"log":"Trace[1351576716]: ---\"Writing http response done\" 4021ms (19:32:20.860)"}
{"Trace[1351576716]:":true,"[4.025415827s]":true,"END":true,"log.file.path":"/var/log/k3s-service.log","log.offset":32042939,"log":"Trace[1351576716]: [4.025415827s] [4.025415827s] END"}
I'm having no issues with the custom "k3s" parser; only trace_parser_0 and trace_parser_1 are having this problem. Why are the boolean fields getting added?
Is there any option I can turn on to avoid this, or am I doing something wrong?
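One likely suspect is the first parser filter rather than the trace parsers themselves: that filter also lists Parser logfmt, and logfmt reads a bare token with no =value as a key whose value is boolean true, which is exactly the shape of the "word": true fields shown above. A minimal isolation sketch, assuming the same parsers file is loaded; the dummy input, the sample line and the k3s.test tag are made up purely for the test:

# Feed one trace line through only the two trace parsers, bypassing the
# earlier filter that can fall back to logfmt.
[INPUT]
    Name   dummy
    Tag    k3s.test
    Dummy  {"log": "Trace[1351576716]: [4.025415827s] [4.025415827s] END"}

[FILTER]
    Name          parser
    Match         k3s.test
    Key_Name      log
    Parser        trace_parser_0
    Parser        trace_parser_1
    Reserve_Data  On
    Preserve_Key  On

[OUTPUT]
    Name   stdout
    Match  k3s.test

If the record printed here has no boolean keys, the extra fields are coming from the logfmt fallback in the first [FILTER], which gets a chance at every line the k3s regex does not match.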
eduardo
07/31/2024, 11:12 PM
Ashish_1797
09/17/2024, 6:12 PM
2024-09-11 03:22:04 +0000 [warn]: #0 [containers.log] /var/log/containers/c3po-859f746fff-5nkcm_c3po-prod_c3po-0a124b2250fc00e882de6f803d990ea22105972ce6cc9ed1dcb2f6ef3acb1423.log unreadable. It is excluded and would be examined next time.
Angelos Naoum
09/20/2024, 1:36 PM
Angelos Naoum
09/20/2024, 1:37 PM
Mohamed Rasvi
10/13/2024, 8:34 PM
Mohamed Rasvi
10/13/2024, 8:34 PM
Mohamed Rasvi
10/13/2024, 8:35 PM
VP
10/20/2024, 12:49 PM
Pat
10/22/2024, 2:45 PM
Augusto Santos
12/17/2024, 3:08 PM
Swastik Gowda
12/20/2024, 6:19 AM
I'm using ${kubernetes.pod_name} in my fluentbit cloudwatch plugin output and it isn't being resolved; any idea why?
I am getting this warning - [warn] [env] variable ${kubernetes.pod_name} is used but not set
Any help would be appreciated!
Here is the config -
config:
  service: |
    [SERVICE]
        Flush         1
        Daemon        Off
        Log_Level     info
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020
  inputs: |
    [INPUT]
        Name             tail
        Path             /var/log/containers/*.log
        Parser           docker
        Tag              kube.*
        Mem_Buf_Limit    50MB
        Skip_Long_Lines  On
    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail  On
  filters: |
    [FILTER]
        Name                 kubernetes
        Match                kube.*
        Merge_Log            On
        Merge_Log_Key        log_processed
        Labels               On
        Annotations          On
        K8S-Logging.Parser   On
        K8S-Logging.Exclude  On
  outputs: |
    [OUTPUT]
        Name               cloudwatch_logs
        Match              kube.*
        region             ${DEV_PNAP_AWS_REGION}
        log_group_name     ${DEV_PNAP_AWS_LOG_GROUP_NAME}
        log_stream_name    ${kubernetes.pod_name}
        auto_create_group  On
But the log which is being sent to cloudwatch has all the necessary details.
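The warning points at the cause: ${...} in a Fluent Bit config is expanded from environment variables when the configuration is loaded, not from the record, so ${kubernetes.pod_name} can never pick up the pod name of an individual log line. For per-pod stream names the cloudwatch_logs output supports record-accessor templating instead; a sketch of just that output section, keeping the region/group variables from the config above (the fallback-stream- prefix is only an example value):

  outputs: |
    [OUTPUT]
        Name                 cloudwatch_logs
        Match                kube.*
        region               ${DEV_PNAP_AWS_REGION}
        log_group_name       ${DEV_PNAP_AWS_LOG_GROUP_NAME}
        # resolved per record from the kubernetes filter metadata
        log_stream_template  $kubernetes['pod_name']
        # used when the template cannot be resolved for a record
        log_stream_prefix    fallback-stream-
        auto_create_group    On

This is essentially what the follow-up config later in this thread switches to.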
Swastik Gowda
01/10/2025, 12:48 PM
[ warn] [record accessor] translation failed, root key=kubernetes
It seems like it's unable to enrich the log or unable to get the Kubernetes metadata; this happens only sometimes.
This is our config -
inputs: |
  [INPUT]
      Name             tail
      Path             /var/log/containers/*.log
      Parser           docker
      Tag              kube.*
      Mem_Buf_Limit    50MB
      Skip_Long_Lines  On
  [INPUT]
      Name            systemd
      Tag             host.*
      Systemd_Filter  _SYSTEMD_UNIT=kubelet.service
      Read_From_Tail  On
filters: |
  [FILTER]
      Name                 kubernetes
      Match                kube.*
      Merge_Log            On
      Merge_Log_Key        log_processed
      Labels               On
      Annotations          On
      K8S-Logging.Parser   On
      K8S-Logging.Exclude  On
  [FILTER]
      Name    record_modifier
      Match   kube.*
      Record  infra true
outputs: |
  [OUTPUT]
      Name                 cloudwatch_logs
      Match                kube.*
      region               region
      log_group_name       log_group_name
      log_stream_prefix    fallback-stream-
      log_stream_template  $kubernetes['pod_name']
      auto_create_group    On
Any help solving this would be really appreciated.
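That warning means the records in question reach the output with no kubernetes key at all, so $kubernetes['pod_name'] cannot be translated and those events fall back to fallback-stream-. One way to see which records are affected is Fluent Bit's expect filter, added to the existing filters: block right after the kubernetes filter; a sketch (with action warn it only logs, it does not drop or modify records):

  [FILTER]
      Name        expect
      Match       kube.*
      key_exists  kubernetes
      action      warn

If the expect warnings line up with the record accessor warnings, that at least confirms the metadata enrichment is what is intermittently missing, rather than the stream-name templating.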
Pat
01/10/2025, 1:14 PM
Pat
01/10/2025, 1:14 PM
Pat
01/10/2025, 1:15 PM
Swastik Gowda
01/13/2025, 10:07 AM
Daniel Ngo
02/17/2025, 4:04 AM
filters: |
  [FILTER]
      Name   modify
      Match  *
      Set    index idx_test
inputs: |
  [INPUT]
      Name              tail
      Path              /var/log/containers/*.log
      multiline.parser  docker, cri
      Tag               kube.*
      Mem_Buf_Limit     5MB
      Skip_Long_Lines   On
outputs: |
  [OUTPUT]
      Name             splunk
      Match            *
      Splunk_Send_Raw  On
      Port             8088
      tls              On
      tls.verify       On
I am trying to send the logs under the index "idx_test". However, it just adds index="idx_test" to the log message as a field value, rather than changing the index the log will be sent to.
At the moment, I have tried changing between Set/Add, index/event_index, Splunk_Send_Raw On/Off, but nothing has worked.
Thanks in advance!
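A couple of hedged observations on the Splunk side: the plugin only turns an index into HEC routing metadata when it builds the HEC envelope itself; with Splunk_Send_Raw On the record is sent as the payload as-is, so it would need to already look like a full HEC event (metadata such as index at the top level, the data nested under an event key), and a flat index field added by a modify filter otherwise just travels along as event data, which matches what is described above. A sketch of the variant where the plugin builds the envelope and the index is set on the output itself; Host and Splunk_Token are placeholders, and the HEC token also has to be permitted to write to idx_test:

  outputs: |
    [OUTPUT]
        Name             splunk
        Match            *
        # placeholder endpoint and token: adjust to the real environment
        Host             splunk-hec.example.com
        Port             8088
        Splunk_Token     ${SPLUNK_HEC_TOKEN}
        # let the plugin build the HEC envelope and set the target index
        Splunk_Send_Raw  Off
        event_index      idx_test
        tls              On
        tls.verify       On

Since event_index has apparently been tried already, the other thing worth double-checking is whether the HEC token's allowed-indexes list includes idx_test; if it does not, Splunk will not accept events for that index regardless of what the payload says.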
Fatih Sarhan
02/22/2025, 11:27 AM
Rav2001
03/06/2025, 6:08 AM
Padma
03/11/2025, 8:47 PM
Pat
03/24/2025, 9:29 AM
It isn't the output of kubectl logs that gets parsed but the actual log files on disk.
https://chronosphere.io/learn/fluent-bit-kubernetes-filter/
The actual files on disk are what you are parsing and I bet you need to follow the docs per:
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    multiline.parser  docker, cri
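For completeness, a tail input like that is normally paired with the kubernetes filter so the kube.* records carry pod metadata, along the lines of the filter blocks in the earlier messages above; a minimal sketch:

[FILTER]
    Name                 kubernetes
    Match                kube.*
    Merge_Log            On
    K8S-Logging.Parser   On
    K8S-Logging.Exclude  On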
Padma
04/02/2025, 6:49 AM
William
04/29/2025, 8:49 AM