# k8s
  • s

    Saurabh Kumar

    07/10/2024, 11:16 AM
    šŸ‘‹ Hello, team!
  • s

    Saurabh Kumar

    07/10/2024, 11:17 AM
    I deployed Fluent Bit 3.0.6 on Kubernetes with OpenSearch
  • s

    Saurabh Kumar

    07/10/2024, 11:17 AM
    I am facing an error.
  • s

    Saurabh Kumar

    07/10/2024, 11:18 AM
    output-elasticsearch.conf: |
    Copy code
        [OUTPUT]
            Name            es
            Match           *
            Host            ${FLUENT_ELASTICSEARCH_HOST}
            Port            ${FLUENT_ELASTICSEARCH_PORT}
            HTTP_User       xxx
            HTTP_Passwd     xxx
            Index           fluentbit
            Logstash_Format On
            Logstash_Prefix logstash
            tls             On
            tls.verify      Off
            Replace_Dots    On
            Retry_Limit     False
            #Type _doc
  • s

    Saurabh Kumar

    07/10/2024, 11:18 AM
    [2024/07/10 11:13:06] [ warn] [engine] failed to flush chunk '1-1720609975.875768777.flb', retry in 21 seconds: task_id=5, input=tail.0 > output=es.0 (out_id=0) [2024/07/10 11:13:06] [error] [output:es.0] HTTP status=400 URI=/_bulk, response: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"}],"type":"illegal_argument_exception","reason":"Action/metadata line [1] contains an unknown parameter [_type]"},"status":400}
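    The 400 above is the cluster rejecting the legacy _type parameter that the es output still adds to /_bulk metadata by default. A minimal sketch of the usual fix (assuming an ES 8.x / OpenSearch 2.x-era cluster that no longer accepts _type): leave Type commented out and enable Suppress_Type_Name.
    Copy code
        [OUTPUT]
            Name               es
            Match              *
            Host               ${FLUENT_ELASTICSEARCH_HOST}
            Port               ${FLUENT_ELASTICSEARCH_PORT}
            # stop sending the deprecated _type field in bulk requests
            Suppress_Type_Name On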
  • p

    Pat

    07/10/2024, 11:37 AM
    best to link the posts rather than duplicate: https://fluent-all.slack.com/archives/C0CTQGHKJ/p1720611344685389
  • g

    gampuero

    07/31/2024, 8:00 PM
    Hi, I'm having a little trouble with regex parsing in Fluent Bit. I'm currently parsing a log that goes through four different parsers: three custom regex ones and logfmt. My configuration looks like this:
    Copy code
        [PARSER]
            Name   k3s
            Format regex
            Regex ^(?<log_level>[A-Z]\d{4}) (?<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+(?<pid>\d+)\s+(?<log_file>[^:]+):(?<log_line>\d+)\]\s+(?<msg>.+)$
    
        [PARSER]
            Name   trace_parser_0
            Format regex
            Regex ^Trace\[(?<trace_id>\d+)\]: ---"(?<msg>[^"]+)" count:(?<count>\d+) (?<duration>\d+ms) \((?<time>\d{2}:\d{2}:\d{2}\.\d{3})\)$
    
        [PARSER]
            Name   trace_parser_1
            Format regex
            Regex ^Trace\[(?<trace_id>\d+)\]: \[(?<duration>\d+\.\d+ms)\] \[(?<total_duration>\d+\.\d+ms)\] (?<status>\w+)$     
    
        [INPUT]
            Name tail
            Alias logs_k3s
            Path /var/log/k3s-service.log
            Tag k3s.*
            Mem_Buf_Limit 200MB
            Skip_Long_Lines Off
            Buffer_Max_Size 10M
            Path_Key log.file.path
            Offset_Key log.offset
            Refresh_Interval 1
            Rotate_Wait 30
    
        [FILTER]
            Name parser
            Match k3s.*
            Key_name log
            Parser k3s
            Parser logfmt
            Reserve_Data On
            Preserve_Key On
    
        [FILTER]
            Name parser
            Match k3s.*
            Key_name log
            Parser trace_parser_0
            Parser trace_parser_1
            Reserve_Data On
            Preserve_Key On
    
        [OUTPUT]
            Name            es
            Alias           k3s_logs
            Match           k3s.*
            Host            ${FLUENT_ELASTICSEARCH_HOST}
            Port            ${FLUENT_ELASTICSEARCH_PORT}
            HTTP_User       ${FLUENT_ELASTICSEARCH_USER}
            HTTP_Passwd     ${FLUENT_ELASTICSEARCH_PASSWORD}
            Type            _doc
            Index           logs-bksl-a
            Replace_Dots    On
            Retry_Limit     False
            TLS             On
            TLS.verify      Off
            Suppress_Type_Name On
    These log lines:
    Copy code
    Trace[1351576716]: ---\"Writing http response done\" 4021ms (19:32:20.860)
    Trace[1351576716]: [4.025415827s] [4.025415827s] END
    are parsed by trace_parser_0 and trace_parser_1; for some reason the end result has unwanted additional boolean fields for each word in the log line, like this:
    Copy code
    {"Trace[1351576716]:":true,"---":true,"Writing":true,"http":true,"response":true,"done":true,"4021ms":true,"(19:32:20.860)":true,"log.file.path":"/var/log/k3s-service.log","log.offset":32042866,"log":"Trace[1351576716]: ---\"Writing http response done\" 4021ms (19:32:20.860)"}
    {"Trace[1351576716]:":true,"[4.025415827s]":true,"END":true,"log.file.path":"/var/log/k3s-service.log","log.offset":32042939,"log":"Trace[1351576716]: [4.025415827s] [4.025415827s] END"}
    I'm having no issues with the custom "k3s" parser; only trace_parser_0 and trace_parser_1 have this problem. Why are the boolean fields getting added? Is there an option I can turn on to avoid this, or am I doing something wrong?
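    One likely explanation (hedged, inferred from the config alone): the boolean fields come from the logfmt parser in the first filter, not from the trace parsers. Parsers listed in a parser filter are tried in order, and logfmt matches almost any line, turning each bare token without an = into a key with the value true - which is exactly the output shown. The trace regexes, meanwhile, never match the sample lines: trace_parser_0 requires a count: field and trace_parser_1 expects ms durations, while the sample has 4.025415827s. A minimal sketch of one restructuring (the regex fixes are left to you; untested):
    Copy code
        [FILTER]
            Name parser
            Match k3s.*
            Key_name log
            # specific regex parsers first; the catch-all logfmt parser
            # goes last so it cannot swallow the trace lines
            Parser k3s
            Parser trace_parser_0
            Parser trace_parser_1
            Parser logfmt
            Reserve_Data On
            Preserve_Key On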
  • e

    eduardo

    07/31/2024, 11:12 PM
    [announcement / Town Hall session] Hey Folks! Tomorrow (Thursday), I'll be hosting the next live Fluent Bit town hall. For this town hall we'll cover Fluent Bit's metrics collection and processing capabilities. There will be dedicated time for Q&A as well as space to engage during the session. This will be a great opportunity to learn from other Fluent Bit users and maintainers. We look forward to seeing you there! [Sign up here]
  • a

    Ashish_1797

    09/17/2024, 6:12 PM
    šŸ‘‹ Hello, team! Seeing an issue with the fluentd pod in an AWS EKS cluster: the application is running, but fluentd stops picking up its logs even though the logs are present.
    Copy code
    2024-09-11 03:22:04 +0000 [warn]: #0 [containers.log] /var/log/containers/c3po-859f746fff-5nkcm_c3po-prod_c3po-0a124b2250fc00e882de6f803d990ea22105972ce6cc9ed1dcb2f6ef3acb1423.log unreadable. It is excluded and would be examined next time.
  • a

    Angelos Naoum

    09/20/2024, 1:36 PM
    šŸ‘‹ Hello, team! I have configured fluent-bit in a k8s cluster, forwarding the pod logs to ES. I want to extract the log level of the logs so I can then use it as a filter in Kibana dashboards. Does anyone have any further insights about it? Unfortunately, searching around I haven't seen any relevant configuration, which seems pretty odd since I think this is a fairly common feature. Basically, what I want is to have a "Level" field under "Available fields" in the Kibana/ES index.
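    Two common approaches, sketched (hedged; the parser name and regex below are illustrative, and [PARSER] sections belong in the file referenced by Parsers_File): if the containers log JSON, the kubernetes filter's Merge_Log lifts their keys, including a level key, into the record that reaches ES; otherwise a parser filter can capture it with a regex.
    Copy code
        [FILTER]
            Name      kubernetes
            Match     kube.*
            # for JSON app logs: promotes keys such as "level" to fields
            Merge_Log On

        [PARSER]
            Name   loglevel
            Format regex
            Regex  ^(?<level>TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\b(?<message>.*)$

        [FILTER]
            Name         parser
            Match        kube.*
            Key_Name     log
            Parser       loglevel
            Reserve_Data On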
  • a

    Angelos Naoum

    09/20/2024, 1:37 PM
    image.png
  • m

    Mohamed Rasvi

    10/13/2024, 8:34 PM
    Hi guys
  • m

    Mohamed Rasvi

    10/13/2024, 8:34 PM
    I have used fluent-bit on a GKE cluster at one of the largest banks in Brazil
  • m

    Mohamed Rasvi

    10/13/2024, 8:35 PM
    I would like to know how to join your open-source community, so that I can contribute
  • v

    VP

    10/20/2024, 12:49 PM
    In the k8s documentation on node-level logging I found the statement: "By default, if a container restarts, the kubelet keeps one terminated container with its logs. If a pod is evicted from the node, all corresponding containers are also evicted, along with their logs." Can anyone explain what is meant by a pod's logs being evicted along with it? Will it lead to log loss? Once a pod is evicted, will its logs still be present in /var/log/containers so that Fluent Bit can send whatever remains to be sent even after the eviction? I am using Fluent Bit and sending logs to Elasticsearch. In case of pod eviction, will I lose logs?
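    As the quoted docs imply (hedged): once the pod is gone, the kubelet removes its log files, so anything Fluent Bit had not yet read and shipped can be lost. One standard mitigation is filesystem buffering, so chunks already ingested survive restarts and backpressure; the options below are standard Fluent Bit storage settings.
    Copy code
        [SERVICE]
            storage.path /var/log/flb-storage/
            storage.sync normal

        [INPUT]
            Name         tail
            Path         /var/log/containers/*.log
            Tag          kube.*
            # buffer chunks on disk instead of in memory only
            storage.type filesystem

        [OUTPUT]
            Name                     es
            Match                    kube.*
            # cap the disk space queued chunks may use
            storage.total_limit_size 5G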
  • p

    Pat

    10/22/2024, 2:45 PM
    Anyone at KCD UK today? I'm hanging about in a purple shirt!
  • a

    Augusto Santos

    12/17/2024, 3:08 PM
    Hey guys, how are you doing? I'm doing a Fluent Bit installation on Kubernetes using Helm. In the Helm template, I found the option of using a daemonset or a deployment, but I couldn't figure out which one to use. What's your opinion on that?
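    The usual rule of thumb (hedged; check the chart docs for your version): use a DaemonSet when Fluent Bit tails node-local files such as /var/log/containers/*.log, since you need one instance per node, and a Deployment when it only receives data over the network (e.g. forward or http inputs) and can scale like a stateless service. In the fluent/fluent-bit Helm chart this is the kind value:
    Copy code
        # values.yaml for the fluent-bit Helm chart
        # DaemonSet is the default, for node-level log collection;
        # switch to Deployment for a network-fed aggregator.
        kind: DaemonSet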
  • s

    Swastik Gowda

    12/20/2024, 6:19 AM
    Hey guys, for some reason I am not able to access
    ${kubernetes.pod_name}
    in my fluentbit cloudwatch plugin output, any idea why? I am getting this warning -
    [warn] [env] variable ${kubernetes.pod_name} is used but not set
    Any help would be appreciated! Here is the config -
    Copy code
    config:
      service: |
        [SERVICE]
            Flush 1
            Daemon Off
            Log_Level info
            Parsers_File parsers.conf
            HTTP_Server On
            HTTP_Listen 0.0.0.0
            HTTP_Port 2020
    
      inputs: |
        [INPUT]
            Name tail
            Path /var/log/containers/*.log
            Parser docker
            Tag kube.*
            Mem_Buf_Limit 50MB
            Skip_Long_Lines On
    
        [INPUT]
            Name systemd
            Tag host.*
            Systemd_Filter _SYSTEMD_UNIT=kubelet.service
            Read_From_Tail On
    
      filters: |
        [FILTER]
            Name kubernetes
            Match kube.*
            Merge_Log On
            Merge_Log_Key log_processed
            Labels On
            Annotations On
            K8S-Logging.Parser On
            K8S-Logging.Exclude On
    
      outputs: |
        [OUTPUT]
            Name              cloudwatch_logs
            Match             kube.*
            region            ${DEV_PNAP_AWS_REGION}
            log_group_name    ${DEV_PNAP_AWS_LOG_GROUP_NAME}
            log_stream_name   ${kubernetes.pod_name}
            auto_create_group On
    But the log which is being sent to cloudwatch has all the necessary details.
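    Worth noting: ${...} in Fluent Bit configs is environment-variable substitution, not record access, hence the "used but not set" warning. For per-record stream names the cloudwatch_logs output supports record accessor syntax through log_stream_template, with log_stream_prefix as the fallback (a minimal sketch):
    Copy code
        [OUTPUT]
            Name                cloudwatch_logs
            Match               kube.*
            region              ${DEV_PNAP_AWS_REGION}
            log_group_name      ${DEV_PNAP_AWS_LOG_GROUP_NAME}
            # evaluated per record; falls back to the prefix when the
            # kubernetes metadata is missing
            log_stream_template $kubernetes['pod_name']
            log_stream_prefix   fallback-stream-
            auto_create_group   On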
  • s

    Swastik Gowda

    01/10/2025, 12:48 PM
    We are facing this error in our fluentbit -
    Copy code
    [ warn] [record accessor] translation failed, root key=kubernetes
    Seems like it's unable to enrich the log or unable to get Kubernetes metadata; this is happening only sometimes. This is our config -
    Copy code
      inputs: |
        [INPUT]
            Name tail
            Path /var/log/containers/*.log
            Parser docker
            Tag kube.*
            Mem_Buf_Limit 50MB
            Skip_Long_Lines On
    
        [INPUT]
            Name systemd
            Tag host.*
            Systemd_Filter _SYSTEMD_UNIT=kubelet.service
            Read_From_Tail On
    
      filters: |
        [FILTER]
            Name kubernetes
            Match kube.*
            Merge_Log On
            Merge_Log_Key log_processed
            Labels On
            Annotations On
            K8S-Logging.Parser On
            K8S-Logging.Exclude On
    
        [FILTER]
            Name record_modifier
            Match kube.*
            Record infra true
    
      outputs: |
        [OUTPUT]
            Name                cloudwatch_logs
            Match               kube.*
            region              region
            log_group_name      log_group_name
            log_stream_prefix   fallback-stream-
            log_stream_template $kubernetes['pod_name']
            auto_create_group   On
    Any help solving this would be really appreciated
  • p

    Pat

    01/10/2025, 1:14 PM
    some of your records have nothing to do with K8s, i.e. the systemd ones will not have any pod metadata added, so yes, they will fail when you try to look it up in the output
  • p

    Pat

    01/10/2025, 1:14 PM
    ah your match is different but yeah it sounds like they're missing the field for some reason
  • p

    Pat

    01/10/2025, 1:15 PM
    you can add a record modifier/content modifier/etc filter/processor first to ensure that the pod-name key is defaulted if missing
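    A sketch of that idea with an inline Lua filter (hedged: the function name is hypothetical, and it assumes a Fluent Bit version with the lua filter's inline code property; the modify filter cannot set nested keys, which is why Lua is used here):
    Copy code
        [FILTER]
            Name  lua
            Match kube.*
            call  ensure_pod_name
            # default kubernetes.pod_name to "unknown" when enrichment failed
            code  function ensure_pod_name(tag, ts, record) local k = record["kubernetes"] or {} if k["pod_name"] == nil then k["pod_name"] = "unknown" end record["kubernetes"] = k return 2, ts, record end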
  • s

    Swastik Gowda

    01/13/2025, 10:07 AM
    It appears that we are encountering the above-mentioned issue particularly with short-lived pods, specifically those that run for less than 30 seconds.
  • d

    Daniel Ngo

    02/17/2025, 4:04 AM
    Hi, I am having trouble configuring fluentbit to send logs to splunk in the correct index. My config looks something like this:
    Copy code
    filters: |
        [FILTER]
            Name modify
            Match *
            Set index idx_test
      inputs: |
        [INPUT]
            Name tail
            Path /var/log/containers/*.log
            multiline.parser docker, cri
            Tag kube.*
            Mem_Buf_Limit 5MB
            Skip_Long_Lines On
      outputs: |
        [OUTPUT]
            Name        splunk
            Match       *
            Splunk_Send_Raw On
            Port        8088
            tls         On
            tls.verify  On
    I am trying to send it under the index "idx_test". However, it just adds index="idx_test" to the log message as a field value, rather than changing the index the log will be sent to. So far I have tried switching between Set/Add, index/event_index, and Splunk_Send_Raw On/Off, but nothing has worked. Thanks in advance!
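    One thing worth checking (hedged, since only part of the config is shown): with Splunk_Send_Raw On, the record is posted to the raw HEC endpoint as-is, so a top-level index key just becomes event data. With raw mode off, the splunk output has dedicated routing options:
    Copy code
        [OUTPUT]
            Name            splunk
            Match           *
            Port            8088
            tls             On
            tls.verify      On
            Splunk_Send_Raw Off
            # static index for every event...
            event_index     idx_test
            # ...or read it from a record key instead:
            # event_index_key index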
  • r

    Rav2001

    03/06/2025, 6:08 AM
    I'm encountering difficulty configuring the Kubernetes events input plugin to filter and export only events associated with Kubernetes Jobs, and subsequently enrich those events with the Job's labels. I've reviewed the Kubernetes filter documentation, but haven't found a solution. Could you give me some guidance on how to do this?
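    A hedged sketch of the filtering half (untested; the tag is arbitrary): the kubernetes_events input emits one record per event, and the grep filter accepts record accessor patterns, so events can be narrowed to those whose involvedObject.kind is Job. Enriching those events with the Job's labels is not something the kubernetes filter does, so that part likely needs a Lua filter or an external lookup.
    Copy code
        [INPUT]
            Name kubernetes_events
            Tag  k8s_events

        [FILTER]
            Name  grep
            Match k8s_events
            # keep only events whose involved object is a Job
            Regex $involvedObject['kind'] ^Job$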
  • p

    Padma

    03/11/2025, 8:47 PM
    Hi. We are facing an issue of missing logs with fluent-bit. Under a heavy load of data (k8s container logs generated), we see some data is missing. Can someone help with fixing it? Here's the config
  • p

    Pat

    03/24/2025, 9:29 AM
    There's no real useful information to help but I'll make a guess on misconfiguration šŸ™‚ Are your actual kubelet logs in JSON format? That's what you're telling tail they are - not the logs you see from
    kubectl logs
    but the actual log files on disk. https://chronosphere.io/learn/fluent-bit-kubernetes-filter/ The actual files on disk are what you are parsing and I bet you need to follow the docs per:
    Copy code
    [INPUT]
        Name tail
        Tag kube.*
        Path /var/log/containers/*.log
        multiline.parser  docker, cri
  • p

    Padma

    04/02/2025, 6:49 AM
    Hi all. Has anyone faced an issue with chunks not being flushed from the storage path on a specific Kubernetes node? Data is sent out to the destination but not flushed.
  • w

    William

    04/29/2025, 8:49 AM
    Hello. Out of curiosity, has anyone used the Kubernetes filter as an input processor (tail, mostly) with the latest releases?
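    For reference, a minimal sketch of that setup (hedged; processors attached to inputs require the YAML configuration format introduced in the 2.x series):
    Copy code
        pipeline:
          inputs:
            - name: tail
              path: /var/log/containers/*.log
              tag: kube.*
              processors:
                logs:
                  - name: kubernetes
                    merge_log: on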