# fluent-bit
  • Richard

    10/20/2025, 5:39 PM
    Hey, I'd like to ask if you have experience with the error
    could not enqueue records into the ring buffer
    (see https://fossies.org/linux/fluent-bit/src/flb_input_chunk.c line 1922). We are getting this error frequently, along with the less frequent
    broken connection to <aws_s3_bucket_url>
    . We see these errors on both small and large AWS EKS clusters (Fluent Bit running as a DaemonSet). Our output configuration looks like this:
    upload_timeout           5m
    total_file_size          50M
    auto_retry_requests      true
    retry_limit              5
    store_dir                <path_to_store_dir>
    use_put_object           On
    compression              gzip
    store_dir_limit_size     512M
    We tried several recommendations from the Fluent Bit documentation, along with upgrading to the latest stable version. Are we hitting AWS S3 limits, or do we need to change our buffering configuration? According to our metrics we are frequently dropping records. Any help or directions would be appreciated.
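    A sketch of the usual first mitigation, assuming the drops come from output backpressure rather than from S3 itself: enable filesystem buffering so chunks spill to disk instead of being rejected by the in-memory ring buffer. Paths and sizes below are placeholders, not a verified fix:

    [SERVICE]
        storage.path           /var/fluent-bit/storage
        storage.max_chunks_up  128

    [INPUT]
        name          tail
        # chunks beyond max_chunks_up are kept on disk instead of being dropped
        storage.type  filesystem

    Switching the S3 output to multipart uploads (use_put_object Off) may also relieve sustained pressure, since each request can then carry a larger batch.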
  • keerthana

    10/21/2025, 8:24 AM
    Hi, is it possible to use the process exporter metrics plugin and get its output in JSON format?
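    For reference, a minimal sketch of wiring that input up (plugin and option names as in the Fluent Bit docs). Note that process_exporter_metrics emits metric events rather than regular log records, so whether a JSON rendering is available depends on the output plugin; as far as I know, stdout prints metric events as text rather than JSON:

    [INPUT]
        name            process_exporter_metrics
        tag             process_metrics
        scrape_interval 10

    [OUTPUT]
        name   stdout
        match  process_metrics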
  • Eric D. Schabell

    10/21/2025, 1:37 PM
    Digging into the various open issues and PRs for the Fluent Bit docs, so please watch for requests on your PRs as I try to nudge along various conflicts, waiting-on-review, and waiting-on-code-merge status updates!
  • Eric D. Schabell

    10/21/2025, 1:37 PM
    This PR I just pushed is the finishing touch on a community docs update for the output plugins, getting it added to the docs site listing. Looking for a review: https://github.com/fluent/fluent-bit-docs/pull/2106
  • Ofek Ezra

    10/21/2025, 4:29 PM
    Hello, is there an easy way to extract the individual fields from the Message field in Sysmon logs so that they are structured and fully searchable in OpenSearch? Here's an example of the JSON result I currently have (from OpenSearch, after sending logs via Fluent Bit):
    {
      "_index": "windows-2025.10.21",
      "_id": "SGCQB5oBuk1sTIS_aKE-",
      "_version": 1,
      "_score": null,
      "_source": {
        "@timestamp": "2025-10-21T16:17:41.329Z",
        "ProviderName": "Microsoft-Windows-Sysmon",
        "ProviderGuid": "{5770385F-C22A-43E0-BF4C-06F5698FFBD9}",
        "Qualifiers": "",
        "EventID": 7,
        "Version": 3,
        "Level": 4,
        "Task": 7,
        "Opcode": 0,
        "Keywords": "0x8000000000000000",
        "TimeCreated": "2025-10-21 16:17:39 +0000",
        "EventRecordID": 9666,
        "ActivityID": "",
        "RelatedActivityID": "",
        "ProcessID": 5372,
        "ThreadID": 9092,
        "Channel": "Microsoft-Windows-Sysmon/Operational",
        "Computer": "win10",
        "UserID": "NT AUTHORITY\\SYSTEM",
        "Message": "Image loaded:\r\nRuleName: technique_id=T1053,technique_name=Scheduled Task\r\nUtcTime: 2025-10-21 16:17:39.502\r\nProcessGuid: {aa15f251-b20a-68f7-130b-000000000b00}\r\nProcessId: 8412\r\nImage: C:\\Windows\\System32\\sppsvc.exe\r\nImageLoaded: C:\\Windows\\System32\\taskschd.dll\r\nFileVersion: 10.0.19041.3636 (WinBuild.160101.0800)\r\nDescription: Task Scheduler COM API\r\nProduct: Microsoft® Windows® Operating System\r\nCompany: Microsoft Corporation\r\nOriginalFileName: taskschd.dll\r\nHashes: SHA1=633C92F5A5000CDE3235C7FDC306B2B04C501C48,MD5=84F942C1A4BD60C94EBEC6643E0216C9,SHA256=BBE733452E6CE871D5D2064948847AAB2A71F7FEDA2C962BA4002F64E2D198F2,IMPHASH=B7A4477FA36E2E5287EE76AC4AFCB05B\r\nSigned: true\r\nSignature: Microsoft Windows\r\nSignatureStatus: Valid\r\nUser: NT AUTHORITY\\NETWORK SERVICE",
        "StringInserts": [
          "technique_id=T1053,technique_name=Scheduled Task",
          "2025-10-21 16:17:39.502",
          "{AA15F251-B20A-68F7-130B-000000000B00}",
          8412,
          "C:\\Windows\\System32\\sppsvc.exe",
          "C:\\Windows\\System32\\taskschd.dll",
          "10.0.19041.3636 (WinBuild.160101.0800)",
          "Task Scheduler COM API",
          "Microsoft® Windows® Operating System",
          "Microsoft Corporation",
          "taskschd.dll",
          "SHA1=633C92F5A5000CDE3235C7FDC306B2B04C501C48,MD5=84F942C1A4BD60C94EBEC6643E0216C9,SHA256=BBE733452E6CE871D5D2064948847AAB2A71F7FEDA2C962BA4002F64E2D198F2,IMPHASH=B7A4477FA36E2E5287EE76AC4AFCB05B",
          "true",
          "Microsoft Windows",
          "Valid",
          "NT AUTHORITY\\NETWORK SERVICE"
        ]
      },
      "fields": {
        "@timestamp": [
          "2025-10-21T16:17:41.329Z"
        ]
      },
      "sort": [
        1761063461329
      ]
    }
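    One possible approach, sketched under the assumption that the interesting keys are known in advance: run a regex parser over the Message field with the parser filter, keeping the rest of the record via Reserve_Data. The parser name, match pattern, and captured fields below are illustrative, and the [PARSER] block belongs in a parsers file referenced from the service section. The field order here matches image-load events like the one above; other Sysmon event IDs carry different fields, so a Lua filter that splits Message on line breaks and ": " separators is the more general route:

    [FILTER]
        Name         parser
        Match        *
        Key_Name     Message
        Parser       sysmon_message
        Reserve_Data On

    [PARSER]
        Name    sysmon_message
        Format  regex
        Regex   RuleName: (?<RuleName>[^\r\n]+)\r\nUtcTime: (?<UtcTime>[^\r\n]+)\r\nProcessGuid: (?<ProcessGuid>[^\r\n]+)\r\nProcessId: (?<SysmonPid>[^\r\n]+)\r\nImage: (?<Image>[^\r\n]+)\r\nImageLoaded: (?<ImageLoaded>[^\r\n]+)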
  • Eric D. Schabell

    10/22/2025, 6:33 PM
    Pushed PR to docs for updating listing of missing ToC entries: https://github.com/fluent/fluent-bit-docs/pull/2109
  • Deepak

    10/23/2025, 6:32 AM
    Hi Team, with the following Fluent Bit configuration I'm aiming to process approximately 600K events per minute. These events are distributed across 40 files in an NFS directory, generated hourly; some events may still be written to older files. To optimise performance, filesystem storage has been disabled.
    Issue observed: events that were already ingested in real time are being re-ingested on an hourly basis.
    Impact: duplicate events and delayed ingestion, affecting data accuracy and timeliness.
    Could you please review the configuration and let me know if there are any misconfigurations or improvements that can help prevent re-ingestion and ensure consistent performance?
    [SERVICE]
        Flush                       1
        Grace                       15
        Daemon                      Off
        HTTP_Server                 On
        HTTP_Listen                 0.0.0.0
        HTTP_Port                   2020
        Health_Check                On
        Storage.metrics             On
        Storage.sync                Normal
        Storage.checksum            Off
        Storage.path                /data/fluentbit/storage
        Storage.backlog.mem_limit   24576M
        Storage.backlog.chunk_size  1M
        Storage.max_chunks_up       128
        Storage.delete_oldest       On
        storage.delete_irrecoverable_chunks On
        storage.pause_on_chunks_overlimit On
        storage.backlog.flush_on_shutdown On
        Hot_Reload                  On
        Log_Level                   Info
        Log_File                    /mnt/prod/fluentbit/logs/log-2025-10-23.log
        Workers                     12
    
    [INPUT]
        Name                        tail
        Path                        /logs/source/*.ndjson
        Path_Key                    filepath
        Tag                         daily.logs
        DB                          /data/fluentbit/db/fluentbit.db
        DB.Sync                     full
        DB.compare_filename         true
        DB.locking                  true
        Mem_Buf_Limit               24576M
        Buffer_Chunk_Size           1M
        Buffer_Max_Size             4M
        Skip_Long_Lines             On
        Skip_Empty_Lines            On
        Refresh_Interval            2
        Ignore_Older                24h
        Rotate_Wait                 86400
        Read_from_Head              true
        Inotify_Watcher             true
        Exit_On_Eof                 Off
        Multiline                   Off
        threaded                    true
        File_Cache_Advise           Off
    
    [OUTPUT]
        Name                        http
        Match                       daily.logs
        Host                        clickhouse-chproxy.clickhouse-cluster.svc.cluster.local
        Port                        8080
        URI                         /?query=INSERT%20INTO%20events_distributed
        Format                      json
        json_date_key               timestamp
        json_date_format            iso8601
        Header                      Content-Type application/json
        Workers                     12
        Retry_Limit                 no_retries
        net.connect_timeout         20
        net.keepalive               On
        net.keepalive_idle_timeout  300
        net.keepalive_max_recycle   2000
        net.max_worker_connections  0
        storage.total_limit_size    8GB
        Compress                    gzip
    cc: @Pat, @lecaros
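    One avenue worth ruling out, offered as an assumption rather than a confirmed diagnosis: inotify events are generally unreliable on NFS mounts, so the tail input may be re-discovering files on each refresh and, with Read_from_Head enabled, replaying them. Forcing the stat-based watcher and keeping the offset DB on local disk is a low-risk experiment (only the changed keys are shown):

    [INPUT]
        Name             tail
        Path             /logs/source/*.ndjson
        # keep the offset DB on local disk, not on the NFS share
        DB               /data/fluentbit/db/fluentbit.db
        # inotify rarely works over NFS; fall back to stat polling
        Inotify_Watcher  false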
  • Sujay

    10/23/2025, 9:42 AM
    Hi team, I'm getting these logs from Fluent Bit deployed in production. Can anyone help me understand why this happens and what the recommended config is when we see these logs?
    [2025/10/23 09:39:52] [ warn] [input:tail:tail.0] purged rotated file while data ingestion is paused, consider increasing rotate_wait
    [2025/10/23 09:41:31] [ info] [input] tail.0 resume (mem buf overlimit)
    [2025/10/23 09:41:31] [ warn] [input] tail.0 paused (mem buf overlimit)
    [2025/10/23 09:41:31] [ info] [input] pausing tail.0
    [2025/10/23 09:41:31] [ info] [input] resume tail.0
    [2025/10/23 09:41:31] [ info] [input] tail.0 resume (mem buf overlimit)
    Current config:
    [SERVICE]
        Flush        30
        Grace        5
        Daemon       Off
        Log_Level    info
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc-operator/custom-parsers.conf
        Coro_Stack_Size    24576
        HTTP_Server  On
        Listen 0.0.0.0
        HTTP_Port    2020
        storage.path  /buffers
    [INPUT]
        Name         tail
        Buffer_Chunk_Size  100MB
        Buffer_Max_Size  800MB
        DB  /tail-db/tail-containers-state.db
        DB.locking  true
        Ignore_Older  1h
        Mem_Buf_Limit  1000MB
        Parser  cri-log-key
        Path  /var/log/containers/*.log
        Refresh_Interval  5
        Skip_Long_Lines  On
        Tag  kubernetes.*
        storage.pause_on_chunks_overlimit  on
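    A sketch of the usual mitigation, assuming the destination simply cannot keep up at peak: spill chunks to the filesystem instead of pausing the tail input, give rotated files more headroom before they are purged, and use far smaller buffer chunks (the values below are illustrative; 100MB chunks are well above the documented defaults):

    [INPUT]
        Name               tail
        Path               /var/log/containers/*.log
        Buffer_Chunk_Size  1MB
        Buffer_Max_Size    8MB
        # more headroom before a rotated file is purged while ingestion is paused
        Rotate_Wait        30
        # spill to the storage.path configured in [SERVICE] instead of pausing on mem buf overlimit
        storage.type       filesystem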
  • Benjamin Wootton

    10/23/2025, 12:09 PM
    Hi team. I've been having an issue trying to set a service.name field. I've tried everything and would welcome any help!
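    In case it helps as a starting point, a minimal sketch using the modify filter (the match pattern and value are placeholders; if the target is an OpenTelemetry resource attribute rather than a plain record key, the mapping depends on the output plugin in use):

    [FILTER]
        Name   modify
        Match  *
        Add    service.name my-service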
  • Hitendra

    10/24/2025, 4:47 AM
    hi
  • Richard

    10/24/2025, 8:58 AM
    Hey, has nobody hit this issue, or does nobody use an S3 bucket for their outputs?
  • Scott Neagle

    10/24/2025, 1:38 PM
    Hello, I reported https://github.com/fluent/fluent-bit/issues/9290 last year. Your bot keeps trying to close it as stale, but it's not stale, it's an active issue. The related issue https://github.com/fluent/fluent-bit/issues/8787 was automatically closed by the bot, apparently with no investigation. Is someone going to triage these active issues? Or do I need to keep commenting "this is still active" every 90 days?
  • KimJohn Quinn

    10/26/2025, 1:44 PM
    Hello everyone. We just upgraded Fluent Bit using the latest Helm chart on Artifact Hub, and in both of our environments we are seeing this error in our logs:
    [input:node_exporter_metrics:node_metrics] read error, check permissions: /sys/class/hwmon/hwmon*
    It seems to be benign and related to reading "temperature" metrics (we just introduced a node group for Nvidia instances). The configuration for the input is pretty basic:
    [INPUT]
            Alias                           node_metrics
            name                            node_exporter_metrics
            tag                             node_metrics
            scrape_interval                 60
    Can I disable this somehow? It is flooding our logs.
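    If your build supports it, the input's metrics option restricts which collectors run, which would leave the hwmon scrape (and its log line) out entirely. The collector list below is illustrative; trim it to what you actually need:

    [INPUT]
        Alias            node_metrics
        name             node_exporter_metrics
        tag              node_metrics
        scrape_interval  60
        # enable only these collectors; omitting hwmon avoids the /sys/class/hwmon read
        metrics          cpu,cpufreq,meminfo,diskstats,filesystem,uname,stat,time,loadavg,vmstat,netdev,filefd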
  • Gijs

    10/27/2025, 2:20 PM
    Hey guys, I'm new to Fluent Bit and I'm trying to adjust my logs to our required format. However, sadly not a single change I make to my Helm values.yaml file is reflected in my logs. Is there anyone who could help me out? 🙂 Looking forward to it. For those interested, I would like to lift the JSON embedded in my body field into individual key-value pairs. I'm reading directly from my .log files.
    {
      "body": "2025-10-27T13:54:18.47655819Z stderr F {\"timestamp\":\"2025-10-27T13:54:18.475990Z\",\"level\":\"INFO\",\"fields\":{\"message\":\"handling request\",\"workflow_id\":\"0d7ec256-ee9c-4c85-87cf-7e248c1eaccd\",\"function_name\":\"handle_tracing_logs\",\"method\":\"GET\"}}",
      "id": "0iOAZU9FM7mFS1yxuQ1fYdiI1sN",
      "timestamp": 1761573258477877800,
      "attributes": {
        "fluent.tag": "kube.var.log.containers.tracing-logs-7c5c8dbf5b-7q6hr_default_tracing-logs-5b0b870e3ac845ec4198de2c28bad0c270b3e9e14798d490691ae0b2c0e3f005.log"
      },
      "resources": {},
      "scope": {},
      "severity_text": "",
      "severity_number": 0,
      "scope_name": "",
      "scope_version": "",
      "span_id": "",
      "trace_flags": 0,
      "trace_id": ""
    }
    This is an example from this service's log file:
    {
      "timestamp": "2025-10-27T14:09:49.486890Z",
      "level": "INFO",
      "fields": {
        "message": "request completed",
        "workflow_id": "f76293c4-d371-4d15-bb56-48c1d987a944",
        "function_name": "handle_tracing_logs",
        "execution_time_ms": "0",
        "execution_time_ns": "178129",
        "execution_time_us": "178"
      }
    }
  • Johan Lindvall

    10/28/2025, 7:26 AM
    Hi, is there a way to keep keys after the multiline filter? I scrape Kubernetes logs with multiline app logs and when there is a multiline match, the filename key "disappears". Config:
    [INPUT]
      Name tail
      Path /var/log/containers/*.log
      multiline.parser docker, cri
      Tag kube.*
      Mem_Buf_Limit 5MB
      Skip_Long_Lines On
      Path_Key filename
      DB /run/fluent-bit/containers.db
    
    [FILTER]
      Name kubernetes
      Match kube.*
      Owner_References On
      Cache_Use_Docker_Id On
      Use_Kubelet Off
      Buffer_Size 1024k
    
    [FILTER]
      Name multiline
      Match kube.*
      Multiline.Key_Content log
      Multiline.Parser go,python,java
  • Sagi Rosenthal

    10/28/2025, 8:13 AM
    Hi there! I've set up a PR to add a new out_logrotate plugin to Fluent Bit. It is generally based on the out_file code, with extra steps. patrick-stephens started reviewing, but it seems tests are failing on Windows. Any idea how to progress the PR so it can be merged? https://github.com/fluent/fluent-bit/pull/10824
  • Sujay

    10/28/2025, 8:34 AM
    Hi team, I was getting the warning logs below in Fluent Bit:
    [2025/10/23 09:39:52] [ warn] [input:tail:tail.0] purged rotated file while data ingestion is paused, consider increasing rotate_wait
    [2025/10/23 09:41:31] [ warn] [input] tail.0 paused (mem buf overlimit)
    [2025/10/23 09:41:31] [ info] [input] pausing tail.0
    [2025/10/23 09:41:31] [ info] [input] resume tail.0
    [2025/10/23 09:41:31] [ info] [input] tail.0 resume (mem buf overlimit)
    Now I have switched from memory to storage type filesystem. I don't see the overlimit logs anymore, but I observed this log line:
    [/src/fluent-bit/plugins/in_tail/tail_fs_inotify.c:147 errno=2] No such file or directory
    Does this lead to log loss? I'd like help understanding why this happens and what its effect is.
  • Andrew Elwell

    10/29/2025, 3:43 AM
    Is there a way to restrict the Prometheus exporter to only allow access from a CIDR range? https://docs.fluentbit.io/manual/data-pipeline/outputs/prometheus-exporter makes it look unlikely, so do I need to do this at the host-level firewall?
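    As far as I can tell the plugin has no CIDR ACL of its own, so beyond a host firewall the main lever is binding the listener to a specific interface rather than all of them (the address and match pattern below are placeholders):

    [OUTPUT]
        name   prometheus_exporter
        match  metrics.*
        # bind to an internal interface instead of the default 0.0.0.0
        host   10.0.0.5
        port   2021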
  • Louis

    10/29/2025, 5:28 AM
    Hey! Does anyone have experience compiling and running Fluent Bit for Android?
  • Sanath Ramesh

    10/29/2025, 8:12 AM
    Hello team, I am trying to forward logs from event log channels on Windows under the following path:
    Applications and Services Logs -> MyCompany -> SUB -> Monitor -> Operational
    I tried both the winlog and winevtlog plugins, but I am unable to forward these logs; other channels such as Security are flowing as expected. Config:
    [INPUT]
        Name winevtlog
        Tag  windows-mycompany-sub-monitor
        DB   C:\Program Files\New Relic\newrelic-infra\newrelic-integrations\logging\fb.db
        Channels MyCompany/SUB/Monitor/Operational
    
    [FILTER]
        Name  record_modifier
        Match windows-mycompany-sub-monitor
        Record "fb.input" "winlog"
    
    [FILTER]
        Name  lua
        Match windows-mycompany-sub-monitor
        script C:\Windows\SystemTemp\nr_fb_lua_filter2456764623
        call eventIdFilter
    
    [FILTER]
        Name  modify
        Match windows-mycompany-sub-monitor
        Rename EventType WinEventType
        Rename Message message
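    One thing worth ruling out, offered as an assumption rather than a confirmed fix: the path shown in Event Viewer is a display name, and the real channel name usually follows a Vendor-Product-Component/Operational pattern (wevtutil el lists the exact names). A sketch with a hypothetical channel name:

    [INPUT]
        Name     winevtlog
        Tag      windows-mycompany-sub-monitor
        DB       C:\Program Files\New Relic\newrelic-infra\newrelic-integrations\logging\fb.db
        # use the exact name reported by `wevtutil el`; this one is hypothetical
        Channels MyCompany-SUB-Monitor/Operational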
  • Shelby Hagman

    10/29/2025, 3:03 PM
    Just a note that AWS has released:
    • AWS for Fluent Bit 3.0.0 (includes fluent-bit 4.1.1) - https://aws.amazon.com/about-aws/whats-new/2025/10/aws-fluent-bit-3-0-0-based-4-1-0/
    • CloudWatch Observability Addon 4.6.0 (includes AWS for Fluent Bit 3.0.0) - https://github.com/aws-observability/helm-charts/releases/tag/amazon-cloudwatch-observability-4.6.0
  • zane

    10/29/2025, 10:21 PM
    Hey team, our team at Microsoft has been working on running fluent-bit in the Windows Nano Server container (microsoft/windows-nanoserver on Docker Hub). The major concern is how to prevent future breaking changes of fluent-bit on the Nano Server container. I believe there is currently no test for Nano Server container compatibility, but I wonder: what is the existing testing mechanism for fluent-bit on the Windows platform? By leveraging existing tests, we could develop new tests to guard fluent-bit's Nano Server container compatibility.
  • liangxin

    10/30/2025, 7:40 AM
    Hello, I have a requirement as follows: use a tail plugin to collect all files in a directory. This directory contains various types of log files, such as Go, Java, or Nginx logs. For Java logs, for example, I also need multi-line merging for stack traces. Currently, in the tail plugin configuration, I have set up multiple multi-line parsers via multiline.parser to try to achieve this. However, it seems that a single log entry may match multiple multi-line parsers, resulting in the same log line generating multiple event streams and being collected repeatedly. I am currently using Fluent Bit v3.2. After checking the relevant source code, it appears that each parser listed in multiline.parser will attempt to match the log. Has anyone encountered a similar situation?
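    A workaround sketch, assuming the log types can be told apart by file path: give each type its own tail input with a single multiline parser, so no line is ever eligible for more than one parser (paths and tags below are hypothetical):

    [INPUT]
        Name              tail
        Path              /logs/app/java-*.log
        Tag               app.java
        multiline.parser  java

    [INPUT]
        Name              tail
        Path              /logs/app/go-*.log
        Tag               app.go
        multiline.parser  go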
  • Sujay

    10/30/2025, 10:45 AM
    https://docs.fluentbit.io/manual/administration/backpressure
    Mitigate the risk of data loss by configuring secondary storage on the filesystem using the storage.type of filesystem (as described in Buffering and storage). Initially, logs will be buffered to both memory and the filesystem. When the storage.max_chunks_up limit is reached, all new data will be stored in the filesystem.
    I had a question regarding this doc: when we enable storage.type filesystem, is data pushed to the destination from (1) the memory buffer, (2) the filesystem buffer, or (3) both?
  • Daler

    10/30/2025, 3:41 PM
    Hi, I am running Fluent Bit in AWS FireLens using a log router ECS container. When deploying the container in ECS, I get the error below.
    [2025/10/30 14:59:07] [ warn] [config] Read twice. path=/fluent-bit/etc/fluent-bit.conf
    [2025/10/30 14:59:07] [error] configuration file contains errors, aborting.
    FireLens config setting in the container definition:
    "firelensConfiguration" : {
          "type" : "fluentbit",
          "options" : {
            "enable-ecs-log-metadata" : "true",
            "config-file-type" : "file",
            "config-file-value" : "/fluent-bit/etc/fluent-bit-filter.conf"
          }
        }
    fluent-bit-filter.conf file:
    [FILTER]
       Name            grep
       Alias           ignore-healthcheck-message
       Match           *
       Logical_Op      and
       Exclude         level INFO
       Exclude         message (?i)(health[-]?check)
    
    [FILTER]
       Name            grep
       Alias           ignore-healthcheck-data
       Match           *
       Logical_Op      and
       Exclude         level INFO
       Exclude         data (?i)(health[-]?check|ELB[-]?HealthChecker)
    
    [FILTER]
       Name            grep
       Alias           ignore-healthchecks-log
       Match           *
       Logical_Op      and
       Exclude         level INFO
       Exclude         log (?i)(health[-]?check|ELB[-]?HealthChecker)
  • Nitesh B V

    10/30/2025, 9:41 PM
    Hello Team, I'm seeing a lot of log line drops, but no errors from upstream Loki, and I'm not sure if something is wrong with the config. Can anyone help me with this? I'm new to Fluent Bit and am evaluating a few other options if this doesn't work for us. Initially it started working, but later the input bytes started decreasing even though the apps are writing logs.
    apiVersion: v1
    data:
      fluent-bit.conf: |
        [SERVICE]
            Flush        1
            Daemon       Off
            Log_Level    debug
            Parsers_File /fluent-bit/etc/parsers.conf
            HTTP_Server  On
            HTTP_Listen  0.0.0.0
            HTTP_Port    2020
            Storage.path              /fluent-bit/logs
            Storage.sync              normal
            Storage.checksum          off
            Storage.metrics           on
            Storage.backlog.mem_limit 128MB
            Storage.max_chunks_up     2048
    
        [INPUT]
            Name              tail
            Path              /var/log/containers/*.log
            Tag               kube.*
            Parser            cri
            Parser_Fallback   docker_json
            Refresh_Interval  5
            Buffer_Max_Size   64MB
            Buffer_Chunk_Size 2MB
            Mem_Buf_Limit     1GB
            Skip_Long_Lines   On
            DB                /var/log/flb_kube.db
            DB.Sync           full
            Storage.type      filesystem
    
        [FILTER]
            Name                kubernetes
            Match               kube.*
            Kube_URL            https://kubernetes.default.svc:443
            Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
            Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
            Kube_Tag_Prefix     kube.var.log.containers.
            Merge_Log           Off
            Keep_Log            On
            Annotations         Off
            Labels              On
    
        [FILTER]
            Name  modify
            Match *
            Add   project qa
            Add   cluster qa
    
        [OUTPUT]
            Name   stdout
            Match  * 
            Format json_lines
    
        [OUTPUT]
            Name        loki
            Match       kube.*
            Host        test.com
            Port        443
            Tenant_ID   qa
            tls         On
            tls.verify  Off
            Uri         /loki/api/v1/push
            Compress    gzip
            Workers     32
            Retry_Limit 5
            Labels      job=fluent-bit,project=$project,cluster=$cluster,namespace=$kubernetes['namespace_name'],pod=$kubernetes['pod_name'],container=$kubernetes['container_name'],app=$kubernetes['labels']['app.kubernetes.io/name'],tier=$kubernetes['labels']['tier'],release=$kubernetes['labels']['release'],host=$kubernetes['host']
            storage.total_limit_size   8G
            Auto_Kubernetes_Labels On
            net.keepalive             On
            net.connect_timeout       10s
            net.keepalive_idle_timeout 30s
    
      parsers.conf: |
        # Docker JSON (fallback)
        [PARSER]
            Name        docker_json
            Format      json
            Time_Key    time
            Time_Format %Y-%m-%dT%H:%M:%S.%LZ
            Time_Keep   On
    
        # CRI / containerd (primary)
        [PARSER]
            Name        cri
            Format      regex
            Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<flags>[^ ]*) (?<log>.*)$
            Time_Key    time
            Time_Format %Y-%m-%dT%H:%M:%S.%L%z
            Decode_Field_As  escaped_utf8  log  do_next
            Decode_Field_As  json          log
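    Two things in this config worth ruling out first, sketched as an experiment rather than a diagnosis: Log_Level debug is expensive at production volume, and the catch-all stdout output serializes every record a second time, competing with the Loki output:

    [SERVICE]
        # debug logging at high throughput is itself a load source
        Log_Level    info

    # Consider removing the catch-all [OUTPUT] stdout block entirely: each
    # record is currently written twice, once to stdout and once to Loki.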
  • Nitesh B V

    10/30/2025, 9:42 PM
    I've tuned most of the attributes, but no luck so far.
  • Andrew Elwell

    10/31/2025, 12:11 AM
    hmm. build no worky on a SLES box?
    [ 89%] Built target flb-plugin-custom_calyptia
    [ 98%] Built target fluent-bit-shared
    make[2]: *** No rule to make target 'backtrace-prefix/lib/libbacktrace.a', needed by 'bin/fluent-bit'.  Stop.
    make[1]: *** [CMakeFiles/Makefile2:10067: src/CMakeFiles/fluent-bit-bin.dir/all] Error 2
    make: *** [Makefile:156: all] Error 2
    aelwell@joey-02:~/compile/fluent-bit-4.1.1/build>
  • dujas

    11/01/2025, 11:17 AM
    Any suggestion for this issue: https://github.com/fluent/fluent-bit/issues/10898
  • Joao Costa

    11/03/2025, 9:51 AM
    Hi guys, we are using Fluent Bit Operator on AKS and we need to collect Prometheus metrics exposed by application pods across multiple namespaces. Our goal is to scrape /metrics endpoints from all pods that define Prometheus scrape annotations, similar to how Prometheus service discovery works. The problem is that Fluent Bit Operator's ClusterInput CRD only seems to expose the prometheusScrapeMetrics input, which requires static host/port configuration; we basically need to scrape metrics from dynamic workloads across AKS without manually defining each endpoint. Has anyone faced this limitation? Does Fluent Bit have anything we might be missing that could address this? Thanks in advance