# fluent-bit
  • Colin Mollenhour

    04/29/2025, 11:21 PM
    Also, I cannot get environment variables to work to save my life...
    env:
      ENVIRONMENT: x
      INSTANCE_UID: x
    
    # Main Fluent Bit Configuration
    service:
      flush: 1
      grace: 0
      daemon: off
      log_level: debug
      parsers_file: parsers.yaml
    
    # Include input configurations
    includes:
      - inputs/*.yaml
    
    # Output to stdout for testing
    pipeline:
      filters:
        - name: modify
          match: "*"
          add:
            - environment ${ENVIRONMENT}
            - instance_uid ${INSTANCE_UID}
    
      outputs:
        - name: stdout
          match: "*"
    If I run this with docker run -e ENVIRONMENT=dev ..., it is still reported as "x".
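    A plausible explanation, assuming variables defined in the YAML env section shadow the real process environment when ${...} is resolved: the "x" defaults would then always win over docker run -e. A minimal sketch with the shadowing block removed:
    # no env: block, so ${ENVIRONMENT} and ${INSTANCE_UID} resolve from the OS environment
    service:
      flush: 1
      grace: 0
      daemon: off
      log_level: debug
    
    pipeline:
      filters:
        - name: modify
          match: "*"
          add:
            - environment ${ENVIRONMENT}
            - instance_uid ${INSTANCE_UID}
      outputs:
        - name: stdout
          match: "*"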
  • Filip Havlíček

    04/30/2025, 8:01 AM
    Hi guys, I have a quick question. How does it work when I have one INPUT in my configuration with Mem_Buf_Limit 16MB and, at the same time, the default max_chunks_up=128? How much memory will Fluent Bit use? 16MB (per the buffer limit), or roughly 256MB (128 chunks × 2MB each)? It's an older version running in AWS (aws-for-fluent-bit), but the most recent one AWS ships:
    Fluent Bit v1.9.10
    * Copyright (C) 2015-2022 The Fluent Bit Authors
    * Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
    * https://fluentbit.io
    
    [2025/04/30 07:44:12] [ info] [fluent bit] version=1.9.10, commit=b4eaeab1ec, pid=1
    [2025/04/30 07:44:12] [ info] [storage] version=1.4.0, type=memory-only, sync=normal, checksum=disabled, max_chunks_up=128
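    For reference, a hedged sketch of where each knob lives: mem_buf_limit is a per-input cap that pauses ingestion once reached, while storage.max_chunks_up only bounds how many chunks are held up in memory when filesystem storage is in play. With type=memory-only, as in the banner above, the 16MB input limit should be the effective number:
    service:
      # only meaningful with filesystem storage; not a ~256MB allowance in memory-only mode
      storage.max_chunks_up: 128
    
    pipeline:
      inputs:
        - name: tail
          path: /var/log/*.log       # placeholder
          # per-input cap: this input is paused once ~16MB of its data sits in memory
          mem_buf_limit: 16MB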
  • Sagar Nikam

    04/30/2025, 12:24 PM
    Are these errors due to the Fluent Bit config or the Loki config?
    [2025/04/30 12:08:22] [error] [output:loki:loki.0] 10.41.3.89:3101, HTTP status=400 Not retrying.
    entry with timestamp 2025-04-30 10:04:03.432559554 +0000 UTC ignored, reason: 'entry too far behind, oldest acceptable timestamp is: 2025-04-30T11:49:07Z',
    user 'fake', total ignored: 1 out of 1 for stream: {controller_revision_hash="55b8d44fd6", k8s_app="kube-proxy", pod_template_generation="4"}
    
    [2025/04/30 12:08:52] [error] [output:loki:loki.0] 10.41.3.89:3101, HTTP status=400 Not retrying.
    entry with timestamp 2025-04-30 10:04:03.345278889 +0000 UTC ignored, reason: 'entry too far behind, oldest acceptable timestamp is: 2025-04-30T11:53:30Z',
    user 'fake', total ignored: 1 out of 1 for stream: {app_kubernetes_io_instance="incident", app_kubernetes_io_name="incident", pod_template_hash="bf9549f95", tags_datadoghq_com_env="dev", tags_datadoghq_com_service="incident", tags_datadoghq_com_version="1.0.0-RESP-MAR2025-R1-1"}
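    Those 400s are generated by Loki rather than by the Fluent Bit side: the entries fall outside Loki's accepted ingestion window. A hedged sketch of the Loki-side knobs that govern this (names from Loki's limits_config; the right combination depends on the Loki version):
    limits_config:
      unordered_writes: true            # accept out-of-order entries within the window
      reject_old_samples: true
      reject_old_samples_max_age: 168h  # widen this if entries legitimately arrive late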
  • William

    04/30/2025, 1:10 PM
    Hello, I'm currently in the process of rewriting a custom output plugin in Go. I've been asking myself about the FLBPluginFlushCtx function that is invoked to flush a chunk of records, and especially about the concurrency policy around it. Per my current understanding, if I define workers: x, where x > 1, I should have a different context for each call to ``FLBPluginFlushCtx`` (which can be retrieved and set with FLBPluginGetContext and FLBPluginSetContext). Is it possible to have concurrent invocations of ``FLBPluginFlushCtx`` with the same context, or will each output worker (using the same plugin) process its chunks sequentially?
  • Chris Athan

    04/30/2025, 1:47 PM
    Hi team! Is there an image for use on EKS with a newer Fluent Bit version than this AWS one (2.32.5.20250327)?
    containers:
          - name: fluent-bit
            image: amazon/aws-for-fluent-bit:latest
    It uses quite an old Fluent Bit version (1.9.10), while 4.0.1 is the latest. My problem is that I cannot do complex parsing for different apps: right now I have to deploy two separate Fluent Bit instances for different log-type parsing, and I still cannot parse the logs of the logstash app, which are multiline SQL and always break between the lines of a message.
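    One hedged option, assuming you do not depend on the AWS-specific plugins bundled in aws-for-fluent-bit: run the upstream image instead, whose tags track Fluent Bit releases directly:
    containers:
      - name: fluent-bit
        # upstream image pinned to a release instead of a floating "latest"
        image: fluent/fluent-bit:4.0.1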
  • Rushi

    04/30/2025, 6:15 PM
    Has anyone tried to use Fluent Bit 4.0 as a sidecar container with the new YAML configuration support instead of the classic format?
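    For anyone trying this: Fluent Bit selects the YAML parser based on the file extension passed to -c/--config, so a sidecar mostly needs its args pointed at a .yaml file. A minimal sketch (the ConfigMap/volume names are made up):
    containers:
      - name: fluent-bit
        image: fluent/fluent-bit:4.0.1
        args: ["--config=/fluent-bit/etc/conf/fluent-bit.yaml"]  # .yaml extension selects the YAML format
        volumeMounts:
          - name: fluent-bit-config    # hypothetical ConfigMap holding fluent-bit.yaml
            mountPath: /fluent-bit/etc/conf/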
  • Max

    05/01/2025, 12:27 PM
    Hey everyone, when using the Tail input I also set the DB parameter. However, I notice the DB keeps growing in size. Why is that? Is there a way to limit its size? Using Fluent Bit v4.0.
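    I'm not aware of a hard size cap for the tail DB, but these real tail options influence how the SQLite file behaves on disk; in particular the default WAL journal mode keeps a growing -wal sidecar file (sketch, paths are placeholders):
    pipeline:
      inputs:
        - name: tail
          path: /var/log/*.log
          db: /var/fluent-bit/tail.db
          db.sync: normal        # SQLite synchronization mode
          db.journal_mode: wal   # the default; WAL maintains a separate -wal file
          db.locking: true       # exclusive access by Fluent Bit only (per the tail docs)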
  • Arjun

    05/01/2025, 2:38 PM
    Hey everyone, can anyone help me resolve this issue? Today, as I was going through some logs, I noticed corrupted entries in my dashboard, and when I checked another Fluent Bit output for cross-verification, I saw the same kind of logs there as well. If anyone has any context on this behaviour of Fluent Bit and how I can resolve it, that would be really helpful. Sample log for reference:
    {
      "time": "2025-04-30T11:00:06.935147966Z",
      "stream": "stdout",
      "_p": "F",
      "log": "\u001b[0m\u001b[31m[error] \u001b[0munsupported data type: 0xc000a92f90",
      "kubernetes": {
        "pod_name": "backend-service-765b7c5fd7-dhnln",
        "namespace_name": "apps",
        "pod_id": "075485da-bee0-4e17-80d7-1390fe7a54ec",
        "labels": {
          "app.kubernetes.io/instance": "backend-service",
          "app.kubernetes.io/name": "backend-service",
          "pod-template-hash": "765b7c5fd7"
        },
        "annotations": {
          "checksum/config": "972d24f4667f0a0e0128f38d1cd73e65330f734408aac20ab776acac04d94cd1",
          "checksum/esecret": "55d3f58e14bc4ba05edb233ddaf134963564b183b0fbccb0c395f43b05a5550c"
        },
        "host": "x.y.z.q",
        "pod_ip": "x.y.z.q",
        "container_name": "backend-service",
        "docker_id": "sgagasg",
        "container_hash": "imagerepo",
        "container_image": "app_image"
      }
    }
  • Valentin Petrov

    05/02/2025, 6:37 AM
    Dear all, I am looking for an equivalent of the Generate_ID option (from the ES output) for the Kafka output, to ensure deduplication. I am considering the Lua filter, but I don't see any cryptographic library or hashing algorithms there to generate unique IDs. Is there one, or how else can I achieve this?
  • Sven

    05/02/2025, 7:46 AM
    Hi, in approximately one out of ten installations of Fluent Bit on AWS EC2 instances, I encounter an error related to the GPG key. Have you come across this issue before?
    2025-05-02T09:01:13+0200 DDEBUG Command: yum install -y fluent-bit-4.0.1-1
    2025-05-02T09:01:13+0200 DDEBUG Installroot: /
    2025-05-02T09:01:13+0200 DDEBUG Releasever: 2023.7.20250414
    2025-05-02T09:01:13+0200 DEBUG cachedir: /var/cache/dnf
    2025-05-02T09:01:13+0200 DDEBUG Base command: install
    2025-05-02T09:01:13+0200 DDEBUG Extra commands: ['install', '-y', 'fluent-bit-4.0.1-1']
    2025-05-02T09:01:13+0200 DEBUG User-Agent: constructed: 'libdnf (Amazon Linux 2023; generic; Linux.x86_64)'
    2025-05-02T09:01:13+0200 DEBUG repo: using cache for: amazonlinux
    2025-05-02T09:01:13+0200 DEBUG amazonlinux: using metadata from Wed Apr  9 19:48:07 2025.
    2025-05-02T09:01:13+0200 DEBUG repo: using cache for: kernel-livepatch
    2025-05-02T09:01:13+0200 DEBUG kernel-livepatch: using metadata from Wed Apr 23 20:16:04 2025.
    2025-05-02T09:01:13+0200 DEBUG repo: downloading from remote: fluent-bit
    2025-05-02T09:01:14+0200 DEBUG fluent-bit: using metadata from Thu Apr 24 01:30:27 2025.
    2025-05-02T09:01:14+0200 DDEBUG timer: sack setup: 931 ms
    2025-05-02T09:01:14+0200 DEBUG Completion plugin: Generating completion cache...
    2025-05-02T09:01:14+0200 DEBUG --> Starting dependency resolution
    2025-05-02T09:01:14+0200 DEBUG ---> Package libpq.x86_64 17.4-1.amzn2023.0.1 will be installed
    2025-05-02T09:01:14+0200 DEBUG ---> Package fluent-bit.x86_64 4.0.1-1 will be installed
    2025-05-02T09:01:14+0200 DEBUG --> Finished dependency resolution
    2025-05-02T09:01:14+0200 DDEBUG timer: depsolve: 50 ms
    2025-05-02T09:01:14+0200 INFO Dependencies resolved.
    2025-05-02T09:01:14+0200 INFO ================================================================================
     Package         Arch        Version                     Repository        Size
    ================================================================================
    Installing:
     fluent-bit      x86_64      4.0.1-1                     fluent-bit       7.7 M
    Installing dependencies:
     libpq           x86_64      17.4-1.amzn2023.0.1         amazonlinux      262 k
    
    Transaction Summary
    ================================================================================
    Install  2 Packages
    
    2025-05-02T09:01:14+0200 INFO Total download size: 7.9 M
    2025-05-02T09:01:14+0200 INFO Installed size: 24 M
    2025-05-02T09:01:14+0200 INFO Downloading Packages:
    2025-05-02T09:01:16+0200 INFO --------------------------------------------------------------------------------
    2025-05-02T09:01:16+0200 INFO Total                                           6.1 MB/s | 7.9 MB     00:01
    2025-05-02T09:01:16+0200 DEBUG Using rpmkeys executable at /usr/bin/rpmkeys to verify signatures
    2025-05-02T09:01:16+0200 CRITICAL Importing GPG key 0x3888C1CD:
     Userid     : "Fluentbit releases (Releases signing key) <releases@fluentbit.io>"
     Fingerprint: C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD
     From       : https://packages.fluentbit.io/fluentbit.key
    2025-05-02T09:01:16+0200 CRITICAL Key import failed (code 2). Failing package is: fluent-bit-4.0.1-1.x86_64
     GPG Keys are configured as: https://packages.fluentbit.io/fluentbit.key
    2025-05-02T09:01:16+0200 DDEBUG Cleaning up.
    2025-05-02T09:01:16+0200 INFO The downloaded packages were saved in cache until the next successful transaction.
    2025-05-02T09:01:16+0200 INFO You can remove cached packages by executing 'yum clean packages'.
    2025-05-02T09:01:16+0200 SUBDEBUG
    Traceback (most recent call last):
      File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 67, in main
        return _main(base, args, cli_class, option_parser_class)
      File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 106, in _main
        return cli_run(cli, base)
      File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 130, in cli_run
        ret = resolving(cli, base)
      File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 176, in resolving
        base.do_transaction(display=displays)
      File "/usr/lib/python3.9/site-packages/dnf/cli/cli.py", line 238, in do_transaction
        self.gpgsigcheck(install_pkgs)
      File "/usr/lib/python3.9/site-packages/dnf/cli/cli.py", line 305, in gpgsigcheck
        raise dnf.exceptions.Error(_("GPG check FAILED"))
    dnf.exceptions.Error: GPG check FAILED
    2025-05-02T09:01:16+0200 CRITICAL Error: GPG check FAILED
    2025-05-02T07:01:27+0000 DDEBUG RPM transaction over.
    2025-05-02T07:01:27+0000 DDEBUG timer: verify transaction: 48 ms
    2025-05-02T07:01:27+0000 DDEBUG timer: transaction: 14805 ms
    2025-05-02T07:01:27+0000 DEBUG Completion plugin: Generating completion cache...
    2025-05-02T09:01:27+0200 INFO Transaction check succeeded.
    2025-05-02T09:01:27+0200 INFO Running transaction test
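    For comparison, the repo definition from the official install docs; the failing import in the log points at the same key URL, so the intermittent failures look like a download/import-time problem rather than a misconfiguration (snippet reproduced from memory; the baseurl path differs per distro):
    [fluent-bit]
    name = Fluent Bit
    baseurl = https://packages.fluentbit.io/amazonlinux/2023/
    gpgcheck = 1
    gpgkey = https://packages.fluentbit.io/fluentbit.key
    enabled = 1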
  • Xinrui Zheng

    05/02/2025, 6:31 PM
    Hi, currently we're using AWS Fluent Bit running in the FireLens container to upload logs to S3. Since the built-in Fluent Bit doesn't support Parquet format conversion for logs right now, does anyone have an idea how to convert the logs to Parquet and compress them with Snappy before Fluent Bit uploads them to S3?
  • shams

    05/03/2025, 5:36 PM
    Hi team, a bug was fixed in the tail plugin, but the fix was kept opt-in in this PR, behind a new tail config option, ignore_active_older_files. 1. I think it should default to true. 2. Also, please add documentation for this new option, alongside ignore_older.
  • Gleb Tyunikov

    05/05/2025, 8:17 AM
    Hi team, I opened an issue a month ago and received no response. Have I missed a step or something in creating it?
  • Preethi Voore

    05/05/2025, 12:04 PM
    Hello Team, I'm currently using Fluent Bit metrics for monitoring and accessing them via the /api/v2/metrics/prometheus endpoint. Is there a way to filter and retrieve only the specific metrics that I require? Thank you!
  • Milen Mladenov

    05/05/2025, 4:59 PM
    Hi team, I'm trying to switch from IRSA to Pod Identity authentication. We need to be able to put logs into a Firehose in another account in the same AWS organisation. We get the following errors:
    [2025/05/05 16:22:14] [error] [engine] chunk '1-1746462127.162873254.flb' cannot be retried: task_id=14, input=emitter_for_multiline.0 > output=kinesis_firehose.0
    [2025/05/05 16:22:14] [error] [aws_credentials] STS assume role request failed
    [2025/05/05 16:22:14] [ warn] [aws_credentials] No cached credentials are available and a credential refresh is already in progress. The current co-routine will retry.
    [2025/05/05 16:22:14] [error] [signv4] Provider returned no credentials, service=firehose
    [2025/05/05 16:22:14] [error] [aws_client] could not sign request
    Can you suggest what may be wrong?
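    In case it helps to narrow it down: the STS failure happens before request signing, so the assume-role configuration on the output is the first thing to check. A sketch of the cross-account shape (ARN, region, and stream name are placeholders; with Pod Identity, the pod's base credentials must be allowed to sts:AssumeRole into the target account):
    pipeline:
      outputs:
        - name: kinesis_firehose
          match: "*"
          region: eu-west-1
          delivery_stream: my-stream
          # cross-account role in the destination account (placeholder ARN)
          role_arn: arn:aws:iam::222222222222:role/log-writer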
  • Matteo Ferraroni

    05/06/2025, 12:41 AM
    Hello, we would like to use Fluent Bit as a dumb log forwarder from syslog to AWS Firehose. Is there a way to bypass log parsing?
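    One hedged way to skip parsing entirely: avoid the syslog input (which requires a parser) and read raw lines over TCP instead, since the tcp input's format none keeps each line unparsed in a single key:
    pipeline:
      inputs:
        - name: tcp
          listen: 0.0.0.0
          port: 5140
          format: none                 # no parsing: the whole line lands in a single "log" key
      outputs:
        - name: kinesis_firehose
          match: "*"
          region: us-east-1            # placeholder
          delivery_stream: syslog-raw  # placeholder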
  • William

    05/06/2025, 8:59 AM
    Hello. I've been trying to use the modify filter (as a processor, but that's not relevant to the issue) with the hard_copy rule. The first parameter, STRING:KEY, should allow a record accessor of the form $key['subKey'], according to the doc: > You can set Record Accessor as STRING:KEY for nested key. However, it fails with the error [error] [filter:modify:modify.0] Unable to create regex(key) from $key['subKey']. Digging into the code, I found the origin of the error [here](https://github.com/fluent/fluent-bit/blob/master/plugins/filter_modify/modify.c#L482). It seems we'll always enter the else block, since for the hard_copy operation rule->key_is_regex is always false. Does anyone have a clue whether I'm misusing the filter, or whether it's a bug, as I suspect?
  • Andrew Longwill

    05/06/2025, 9:15 AM
    Hi, I'm sure this question has been asked many times before, but I couldn't find an answer 🤔 Would appreciate any pointers 🙏 https://github.com/fluent/fluent-bit/discussions/10293
  • Pat

    05/06/2025, 9:36 AM
    hi folks, I'm going to update the list of enterprise providers in the docs, so please point me at any to add: https://fluentbit.io/enterprise/
  • Likhith

    05/06/2025, 12:14 PM
    Hi, I am using the Fluent Bit kubernetes filter to extract labels from namespaces, but the labels come out nested, e.g. kubernetes_namespace.labels.customer. I want to attach this value as a label on both logs and metrics. Do I need a Lua script for that, or is there a direct way to do it? For logs we can use nest to lift those values to the root JSON (see the sketch below), but for metrics I am not sure what needs to be done.
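    For the logs side, a sketch of the nest/lift chaining mentioned above (the kubernetes_namespace key name is an assumption; two lifts are needed because the labels sit two levels deep):
    pipeline:
      filters:
        # first lift: bring kubernetes_namespace.* up to the record root
        - name: nest
          match: "*"
          operation: lift
          nested_under: kubernetes_namespace
          add_prefix: ns_
        # second lift: bring ns_labels.* (including customer) up to the root
        - name: nest
          match: "*"
          operation: lift
          nested_under: ns_labels
          add_prefix: ns_label_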
  • William

    05/06/2025, 12:57 PM
    Hey, using the record_modifier filter (https://docs.fluentbit.io/manual/pipeline/filters/record-modifier) with the following allow_list triggers this YAML parsing error: [error] unable to add key to list map
    - kubernetes_annotation_logs.foo.com/sink-addr
    - kubernetes_annotation_logs.foo.com/token
    - kubernetes_annotation_logs.foo.com/sink-port
    - kubernetes_annotation_syslog.foo.com/enabled
    Underscores are valid, but I have doubts about the / and the dots . in the keys. Has anyone experienced the same? This is unfortunate, since / is often used in Kubernetes annotation keys.
  • Nico

    05/06/2025, 7:46 PM
    Hello! I have a potentially stupid question: what happens when a parser fails to parse a record? Let's say I'm using the built-in json parser and one of the records is a non-JSON log. Will the record be left unchanged and continue its flow through the pipeline?
  • G Smith

    05/06/2025, 8:20 PM
    Is it possible to use the new YAML-based configuration format with the Fluent Bit Helm chart? I tried replacing the contents of my fluent-bit.conf file with YAML, but the Fluent Bit pods failed to start, and the error messages in the log complain that the file doesn't appear to be in the original configuration format. I tried changing the file extension to .yaml, but the Helm chart has a hard-coded reference to the .conf file (i.e. config=/fluent-bit/etc/conf/fluent-bit.conf). I can't believe I'm the first to run into this, so I must be missing some obvious solution. Anyone know how to resolve this?
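    A hedged workaround with the upstream fluent/helm-charts chart: override the container args so --config points at a YAML file shipped through the chart's extraFiles mechanism. The key names below assume a recent chart version and may differ:
    # values.yaml (sketch)
    args:
      - --workdir=/fluent-bit/etc
      - --config=/fluent-bit/etc/conf/fluent-bit.yaml
    
    config:
      extraFiles:
        fluent-bit.yaml: |
          service:
            log_level: info
          pipeline:
            inputs:
              - name: dummy
            outputs:
              - name: stdout
                match: "*"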
  • William

    05/06/2025, 9:39 PM
    The Kubernetes filter allows adding an ownerReferences array to the records. Given that the nest filter only operates on maps (objects), I presume that transforming this array requires advanced filters (Lua or WASM). Wondering if anyone has found a way to extract/lift the fields from, let's say, the first object of the array (index 0) using only "basic" filters? I haven't tried it yet, but the record accessor syntax might allow array indexing?
  • zane

    05/06/2025, 10:36 PM
    hey, what does fluent-bit use netapi32.dll for? Is it only for communication between fluent-bit plugins, or for some other external network usage?
  • Tim Förster

    05/07/2025, 5:14 AM
    Hey everyone, we're in the process of switching from Fluentd to Fluent Bit and encountered some unexpected behavior; it looks like I might be misconfiguring something. The current setup is fairly straightforward: we have a TCP syslog input, followed by a few basic filters (JSON parser, grep, record_modifier, etc.), with OpenSearch as the output. Since OpenSearch handles large bulk requests much more efficiently than many small ones, we previously configured Fluentd with a buffer flush interval of 10 seconds or 4MB. Fluent Bit seems to behave differently. By limiting the number of network connections to one (for easier inspection), I noticed that chunks often don't fill up to the maximum size, and retry routines are triggered frequently. As far as I understand, the connection limiter in Fluent Bit is still relatively simple, so small requests queuing up might be expected. However, I still don't understand why the chunks aren't filling up to the configured 4MB or waiting for the defined flush_interval. Any insights or suggestions would be greatly appreciated!
  • John

    05/07/2025, 5:42 PM
    Hello all, is there any support for Azure Government endpoints (*.azure.us)?
  • G Smith

    05/07/2025, 7:00 PM
    I'm trying to migrate a configuration from the classic mode to the new YAML format and have run into a problem. I believe the following two fragments should be equivalent. Here's the classic mode:
    [FILTER]
           Name modify
           Match *
           Condition  Key_exists     json_log
           Set        DB__structlog  true
    and here's the new YAML format. This fragment is part of a pipeline/inputs/processors block:
    - name: modify
      condition: Key_exists json_log
      set: DB__structlog true
    Fluent Bit (4.0.1) reports the following error messages on startup when the above YAML block is included:
    [2025/05/07 18:48:48] [error] [processor] condition must be a map
    [2025/05/07 18:48:48] [error] failed to set condition for processor 'modify'
    [2025/05/07 18:48:48] [error] failed to load 'logs' processors
    I have also tried reformatting the block like this, with no better luck, even though the examples in the documentation show that specifying a single condition on the same line is acceptable:
    - name: modify
      condition:
        - Key_exists json_log
      set: DB__structlog true
    Can anyone identify the problem?
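    A hedged reading of the error: in 4.0, condition under a processor is reserved for the new conditional-processing feature and must be a map with op and rules, so a scalar (the classic modify Condition syntax) is rejected before the filter ever sees it. A sketch of the map shape; I'm not certain a dedicated key-exists operator is available, so a permissive regex is used here to approximate Key_exists:
    - name: modify
      condition:
        op: and
        rules:
          - field: "$json_log"
            op: regex
            value: ".*"      # matches any value, approximating Key_exists
      set: DB__structlog true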
  • Nithin

    05/08/2025, 6:44 AM
    Hi, I am collecting some K8s node metrics using the Fluent Bit node_exporter_metrics plugin. These nodes have some custom labels which I want to add when writing via Prometheus remote write, so that metrics can be filtered by those labels. How do I achieve this?
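    One direct knob, assuming the labels are static per node: the prometheus_remote_write output supports add_label, and the value can be injected via an environment variable set at node/pod provisioning time (the variable name below is made up):
    pipeline:
      outputs:
        - name: prometheus_remote_write
          match: "node_metrics"                 # assumed tag of the node_exporter_metrics input
          host: prometheus.example.com          # placeholder
          port: 9090
          uri: /api/v1/write
          add_label: customer ${NODE_CUSTOMER}  # hypothetical env var carrying the node label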
  • Karthi Pragasam

    05/08/2025, 1:37 PM
    Hi, we are collecting logs from an Azure Blob container. We do it by creating a volume mount in K8s and then ingesting the data using the Tail plugin. It seems like it recognizes the files but does not read their full content. Any thoughts? Also, some files it doesn't even process.