# fluent-bit
  • d

    Dean Meehan

    11/07/2025, 2:14 PM
    Are we able to set the OTEL Resource value dynamically, e.g. from a field within our log? For example, setting
    fields.service_name
    to the OTEL Resource attribute
    service.name
    Copy code
    Fluentbit Tail: {"event": "my log message", "fields": {"service_name": "my_service", "datacenter": "eu-west"}}
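    A minimal sketch of what this could look like, assuming the content_modifier processor's otel_resource_attributes context applies to these records (the value here is static; mapping it per record from fields.service_name is not shown and may need a Lua filter or similar):
    Copy code
    # Hedged sketch: attach an OTLP resource attribute before the opentelemetry output.
    pipeline:
      inputs:
        - name: tail
          path: /var/log/app.log        # placeholder path
          tag: app.logs
          processors:
            logs:
              - name: content_modifier
                context: otel_resource_attributes
                action: upsert
                key: service.name
                value: my_service
      outputs:
        - name: opentelemetry
          match: app.logs
          host: otel-collector          # placeholder endpoint
          port: 4318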
  • g

    Gmo1492

    11/07/2025, 4:38 PM
    Hello, is anyone having trouble getting to https://packages.fluentbit.io/?
  • g

    Gmo1492

    11/07/2025, 4:38 PM
    my deployments are failing
  • g

    Gmo1492

    11/07/2025, 4:39 PM
    because it can't reach https://packages.fluentbit.io/fluentbit.key
  • g

    Gmo1492

    11/07/2025, 4:39 PM
    Copy code
    Reading state information... Done
    E: Unable to locate package fluent-bit
    [fluent-bit][error] APT install failed (vendor repo unreachable and Ubuntu archive install failed).
  • g

    Gmo1492

    11/07/2025, 4:39 PM
    something changed in the last 24 hours because I was able to do deployments yesterday
  • g

    Gmo1492

    11/07/2025, 4:42 PM
    I first received the error
    apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
    (my setup was outdated). Once I changed it to follow the latest install docs, it still fails.
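    For reference, a sketch of the keyring-based setup the current install docs describe (the noble codename and keyring path are assumptions for Ubuntu 24.04; it still depends on packages.fluentbit.io being reachable):
    Copy code
    # Hedged sketch of the modern repo setup without apt-key (run as root).
    curl -fsSL https://packages.fluentbit.io/fluentbit.key | \
      gpg --dearmor -o /usr/share/keyrings/fluentbit-keyring.gpg
    echo "deb [signed-by=/usr/share/keyrings/fluentbit-keyring.gpg] https://packages.fluentbit.io/ubuntu/noble noble main" \
      > /etc/apt/sources.list.d/fluent-bit.list
    apt-get update && apt-get install -y fluent-bit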
  • s

    Scott Bisker

    11/07/2025, 4:46 PM
    ^^ According to Cloudflare, the host is not responding on the fluentbit.io side.
  • g

    Gmo1492

    11/07/2025, 4:48 PM
    FWIW, it would be very valuable if https://packages.fluentbit.io had a status page. We could then be notified when internal issues happen.
  • j

    Jason A

    11/07/2025, 5:38 PM
    Hey all, just joined as we're experiencing this packages outage and it's breaking our deployments as well:
    Copy code
    amazon-ebs: E: Failed to fetch <https://packages.fluentbit.io/ubuntu/noble/dists/noble/InRelease>  522 
        amazon-ebs: E: The repository '<https://packages.fluentbit.io/ubuntu/noble> noble InRelease' is no longer signed.
        amazon-ebs: N: Updating from such a repository can't be done securely, and is therefore disabled by default.
        amazon-ebs: N: See apt-secure(8) manpage for repository creation and user configuration details.
    ==> amazon-ebs: Provisioning step had errors: Running the cleanup provis
  • s

    Scott Bisker

    11/07/2025, 6:19 PM
    @Phillip Whelan @eduardo ^^ Apologies for tagging you. Both of you are primary maintainers of fluent-bit. Not sure if anyone on your end is aware of the outage or not.
  • j

    Jason A

    11/07/2025, 6:21 PM
    FYI: someone created an "It's not just you! packages.fluentbit.io is down." issue (#11133) on the fluent-bit GitHub repo about an hour ago as well: https://github.com/fluent/fluent-bit/issues/11133
  • j

    Josh

    11/07/2025, 6:52 PM
    Hi folks, do you know if this site is down? URL: https://packages.fluentbit.io/windows/
  • g

    Gmo1492

    11/07/2025, 7:06 PM
    thank you @lecaros
  • c

    Celalettin

    11/07/2025, 7:23 PM
    I put a 301 redirect to an S3 bucket in place as a quick mitigation. Can you help me confirm whether it mitigates your issues for now?
  • s

    Saksham

    11/10/2025, 8:23 AM
    Hello Guys
  • b

    Bryson Edwards

    11/10/2025, 10:49 PM
    Hi - I'm using the logging operator with Kubernetes and I'm trying to change the JSON schema that gets sent to the destination output. How would I do that? For example, if the default is
    Copy code
    {
      "namespace": "test"
    }
    I would want:
    Copy code
    {
      "some_new_field": "some_new_value",
      "spec": {
        "namespace": "test"
      }
    }
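    A minimal sketch with plain Fluent Bit filters (not the logging-operator CRDs, which wrap these) that nests the existing record under spec and adds a new top-level field:
    Copy code
    # Hedged sketch: reshape each record before it reaches the output.
    pipeline:
      filters:
        - name: nest
          match: '*'
          operation: nest
          wildcard: '*'            # move every existing key...
          nest_under: spec         # ...under a new "spec" map
        - name: modify
          match: '*'
          add: some_new_field some_new_value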
  • m

    Michael Marshall

    11/11/2025, 9:51 PM
    Hello, I am trying to use fluent-bit as a destination from Splunk Edge Processor (EP). I have defined a Splunk HEC destination. Now, I think this should work, but I am actually running fluent-bit 4.1.1 installed on Ubuntu, on the same machine the Splunk Edge instance is running on. There is a specific reason for that, but I'm having problems getting fluent-bit to receive the events from Splunk. On the Splunk end I am getting:
    Copy code
    Post "<http://192.168.141.95:9880/services/collector>": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    I have tried lots of options and configurations, but my current one is cli based:
    Copy code
    root@ip-192-168-141-95:~# /opt/fluent-bit/bin/fluent-bit -i splunk -p port=9880 -p buffer_chunk_size=1024 -p buffer_max_size=32M -p tag=splunk.logs -p net.io_timeout=300s -o stdout -p match=splunk.logs -vv
    which is producing:
    Copy code
    [2025/11/11 21:44:09.381347930] [trace] [io] connection OK
    [2025/11/11 21:44:09.381397730] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.381863699] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.381894442] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382594157] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382625300] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382642132] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382657844] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382674014] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382684183] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382700140] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382710296] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382724162] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382734559] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382748716] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382759216] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382772032] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382782780] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382796156] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382805907] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382818906] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382828814] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382843934] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.382853034] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382863254] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382878776] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382888383] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382908014] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382918664] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382933485] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382943435] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382961527] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.382972431] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.382990641] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383000808] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.383026942] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383042965] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.383060552] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383070761] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.383085467] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383097179] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.383111593] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383120958] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.383137668] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.383146008] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383157180] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.383170359] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383179843] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:09.383193275] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.383203629] [trace] [io coro=(nil)] [net_read] ret=706
    [2025/11/11 21:44:09.383216611] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.431509514] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.681537238] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.681554644] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.879452869] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:09.879531281] [trace] [io coro=(nil)] [net_read] ret=0
    [2025/11/11 21:44:09.879549725] [trace] [downstream] destroy connection #48 to <tcp://192.168.141.95:46304>
    [2025/11/11 21:44:09.879621135] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:09.931509675] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:10.95119333] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:10.95162342] [trace] [io coro=(nil)] [net_read] ret=0
    [2025/11/11 21:44:10.95179536] [trace] [downstream] destroy connection #49 to <tcp://192.168.141.95:46314>
    [2025/11/11 21:44:10.95247475] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:10.181511800] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:10.431508128] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:10.681546565] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:10.681585263] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:10.931508179] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:11.181514100] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:11.431510732] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:11.681539544] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:11.931508704] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.173087049] [trace] [io] connection OK
    [2025/11/11 21:44:12.173199150] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.173810559] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.173841862] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.173872772] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.173883888] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.173898853] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.173909280] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.173923156] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.173933024] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.173946124] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.173955163] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.173967800] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.173977479] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.173989628] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.173999144] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.174096070] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.174110901] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.174203854] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.174395146] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.174415522] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.174426114] [trace] [io coro=(nil)] [net_read] ret=1024
    [2025/11/11 21:44:12.174435379] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.174441221] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.174447781] [trace] [io coro=(nil)] [net_read] ret=314
    [2025/11/11 21:44:12.174457878] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.181508649] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.181507560] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.430735078] [trace] [io coro=(nil)] [net_read] try up to 1024 bytes
    [2025/11/11 21:44:12.430779926] [trace] [io coro=(nil)] [net_read] ret=0
    [2025/11/11 21:44:12.430796710] [trace] [downstream] destroy connection #52 to <tcp://192.168.141.95:46322>
    [2025/11/11 21:44:12.430866695] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.431506047] [trace] [sched] 0 timer coroutines destroyed
    [2025/11/11 21:44:12.681535932] [trace] [sched] 0 timer coroutines destroyed
    Any ideas? When I switched it to the tcp input, I get:
    Copy code
    ______ _                  _    ______ _ _             ___   __
    |  ___| |                | |   | ___ (_) |           /   | /  |
    | |_  | |_   _  ___ _ __ | |_  | |_/ /_| |_  __   __/ /| | `| |
    |  _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / /_| |  | |
    | |   | | |_| |  __/ | | | |_  | |_/ / | |_   \ V /\___  |__| |_
    \_|   |_|\__,_|\___|_| |_|\__| \____/|_|\__|   \_/     |_(_)___/
    
    
    [2025/11/11 21:49:12.454217350] [ info] [fluent bit] version=4.1.1, commit=, pid=7654
    [2025/11/11 21:49:12.454345650] [ info] [storage] ver=1.5.3, type=memory, sync=normal, checksum=off, max_chunks_up=128
    [2025/11/11 21:49:12.454355937] [ info] [simd    ] SSE2
    [2025/11/11 21:49:12.454363428] [ info] [cmetrics] version=1.0.5
    [2025/11/11 21:49:12.454371187] [ info] [ctraces ] version=0.6.6
    [2025/11/11 21:49:12.454441883] [ info] [input:tcp:tcp.0] initializing
    [2025/11/11 21:49:12.454450891] [ info] [input:tcp:tcp.0] storage_strategy='memory' (memory only)
    [2025/11/11 21:49:12.455168829] [ info] [sp] stream processor started
    [2025/11/11 21:49:12.455347140] [ info] [output:stdout:stdout.0] worker #0 started
    [2025/11/11 21:49:12.455396357] [ info] [engine] Shutdown Grace Period=5, Shutdown Input Grace Period=2
    "}] tcp.0: [[1762897752.520261984, {}], {"log"=>"POST /services/collector HTTP/1.1
    "}] tcp.0: [[1762897752.520272277, {}], {"log"=>"Host: 192.168.141.95:9880
    "}] tcp.0: [[1762897752.520273812, {}], {"log"=>"User-Agent: OpenTelemetry Collector Contrib/11f9362e
    "}] tcp.0: [[1762897752.520275124, {}], {"log"=>"Content-Length: 44970
    "}] tcp.0: [[1762897752.520276343, {}], {"log"=>"Authorization: Splunk my_token
    "}] tcp.0: [[1762897752.520277527, {}], {"log"=>"Connection: keep-alive
    "}] tcp.0: [[1762897752.520278816, {}], {"log"=>"Content-Encoding: gzip
    "}] tcp.0: [[1762897752.520280153, {}], {"log"=>"Content-Type: application/json
    "}] tcp.0: [[1762897752.520281350, {}], {"log"=>"__splunk_app_name: OpenTelemetry Collector Contrib
    "}] tcp.0: [[1762897752.520282527, {}], {"log"=>"__splunk_app_version:
    "}]] tcp.0: [[1762897752.520283955, {}], {"log"=>"Accept-Encoding: gzip
    "}]] tcp.0: [[1762897752.520285037, {}], {"log"=>"Connection: close
    "}]] tcp.0: [[1762897752.520286276, {}], {"log"=>"
  • m

    Michael Marshall

    11/11/2025, 9:52 PM
    Oh, and if I use curl to send a test Splunk HEC message, it works fine.
  • v

    Victor Nilsson

    11/12/2025, 2:02 PM
    Hey 🤠 We have a large fleet of servers running fluent-bit as a Docker container, with the systemd input. Whenever we push out changes to fluent-bit's configuration, Ansible replaces the container with a new one. This generates a lot of systemd logs that fluent-bit sends to OpenSearch, and the massive influx of systemd logs looks like fluent-bit may be resending already-processed logs. This is how we have configured the pipeline for systemd logs:
    Copy code
    ---
    pipeline:
      inputs:
        - name: systemd
          tag: systemd.*
          read_from_tail: on
          threaded: true
          lowercase: on
          db: /fluent-bit/db/systemd.db
          storage.type: memory # Filesystem buffering is not needed for tail input since the files are stored locally.
          mem_buf_limit: 250M
          alias: in_systemd
    We have set
    db
    as well as
    read_from_tail: on
    so our thinking was that the fluent-bit container should not resend already-processed logs. Is this true?
  • a

    Andrew Elwell

    11/13/2025, 2:31 AM
    Is that DB contained within the (replaced) container? If so, the fresh container won't necessarily know where the old one was up to.
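    If the DB only lives inside the container, a hedged sketch of persisting it on the host so the replacement container resumes from the same journal cursor (paths, image tag, and journal mounts are assumptions about the setup):
    Copy code
    # Hedged sketch: keep the systemd cursor DB outside the container lifecycle.
    docker run -d --name fluent-bit \
      -v /var/lib/fluent-bit/db:/fluent-bit/db \
      -v /etc/fluent-bit:/fluent-bit/etc:ro \
      -v /var/log/journal:/var/log/journal:ro \
      -v /etc/machine-id:/etc/machine-id:ro \
      fluent/fluent-bit:latest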
  • m

    Michael Marshall

    11/13/2025, 3:27 PM
    Is anyone successfully using the in_splunk plugin to ingest logs from Splunk over HEC? If yes, what version of fluent-bit and what version of Splunk?
  • d

    DennyF

    11/13/2025, 3:44 PM
    hi
  • m

    Megha Aggarwal

    11/13/2025, 7:18 PM
    Hello team! I am playing around with https://docs.fluentbit.io/manual/data-pipeline/inputs/prometheus-scrape-metrics and trying to understand what potential "output" alternatives we have for this. Is there some OTel-native exporter supported for getting the metrics out?
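    A hedged sketch of one combination, assuming the prometheus_scrape input feeding the opentelemetry output (prometheus_exporter and prometheus_remote_write are other metrics-capable outputs; hosts and ports are placeholders):
    Copy code
    # Hedged sketch: scrape a Prometheus endpoint and forward the metrics as OTLP.
    pipeline:
      inputs:
        - name: prometheus_scrape
          host: my-app.internal        # placeholder scrape target
          port: 9100
          metrics_path: /metrics
          scrape_interval: 10s
      outputs:
        - name: opentelemetry
          match: '*'
          host: otel-collector         # placeholder collector
          port: 4318
          metrics_uri: /v1/metrics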
  • g

    Gabriel Alacchi

    11/13/2025, 10:02 PM
    I wasn't quite sure whether to raise a GitHub issue for this just yet, since I don't believe it's a fluent-bit problem per se. Under high traffic volume with
    storage.type=filesystem
    we see a rapid leak of memory in use by the fluent-bit pod in k8s, growing to as much as 16GB after a day or so without a pod restart. Fluent-bit itself is not consuming much memory, maybe a few hundred MB; rather, the kernel slab associated with the container cgroup accounts for all of the excess memory, and
    slabtop
    claims the VFS dentry cache accounts for all of those leaked kernel objects. Since we are buffering a large number of chunks per second, we easily create hundreds of chunk files per second, which leaks dentry entries rather quickly. Even upon file deletion the kernel keeps negative dentries, which cache the non-existence of a file, and they aren't purged from the kernel cache all that easily unless the system is under memory pressure. More context on this topic: https://lwn.net/Articles/894098/
    Is this dentry cache bloat a well-known problem in the fluent-bit community? Are there good solutions / workarounds? Some workarounds we've considered, though we're looking for guidance from maintainers & community:
    1. Raise VFS cache pressure on the nodes (a sketch of this knob follows after this message). I'm not 100% sure how much this changes VFS cache behavior here, nor what perf consequences it has for the rest of the workloads on the node. It's worth experimenting with.
    2. Periodically reboot fluent-bit pods. This resets the pod's memory accounting but doesn't actually clean up the bloat in the dentry cache, since it's a system-wide cache. If the system gets into memory pressure, the sheer volume of dentry entries could lock up the system. Feels like sweeping a bigger problem under the rug.
    3. Periodically migrate the fluent-bit storage directory to another directory and delete the old one. Supposedly when a directory is deleted, a negative dentry is kept for it, but nested entries are pruned since they are now redundant. I think this is the most plausible option, since we can add a script wrapper around fluent-bit to gracefully shut it down, reconfigure, and restart; no code changes are required in fluent-bit itself. But how do we handle periods of backpressure when there is an existing backlog of chunks?
    One idea to improve things within fluent-bit itself would be to re-use chunk file names so those cached dentries can be re-used. Either that, or use larger pre-allocated files with block-arena-like memory management to store FS chunks. This may be more efficient? You can always add more files or extend the block arena if the FS storage buffer needs to grow. CC @Pandu Aji
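    On workaround 1, a hedged sketch of the knob in question (the value 200 is an arbitrary example, not a recommendation; the effect on the rest of the node's workloads would need testing):
    Copy code
    # Hedged sketch: make the kernel reclaim dentries/inodes more aggressively.
    sysctl -w vm.vfs_cache_pressure=200
    # To persist across reboots:
    echo 'vm.vfs_cache_pressure = 200' > /etc/sysctl.d/99-vfs-cache-pressure.conf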
  • r

    Rafael Martinez Guerrero

    11/14/2025, 1:50 PM
    Hello, would it be possible to reopen https://github.com/fluent/fluent-bit/issues/11068 (Fluent-bit crashes with a coredump when running on RHEL10)? The bug is still active, and fluent-bit (4.1.1 and 4.2.0) still crashes with a core dump when using the systemd input.
  • p

    Phil Wilkins

    11/14/2025, 9:43 PM
    @here Will the documentation for 4.2 be released to the main website?
  • a

    Andrew Elwell

    11/16/2025, 10:05 PM
    OK, dumb Q, but what's the advantage of one over the other between using a) the built-in monitoring
    Copy code
    service:
      http_server: on
      http_listen: 0.0.0.0
      http_port: 2020
    https://docs.fluentbit.io/manual/administration/monitoring vs b) the fluentbit_metrics input (https://docs.fluentbit.io/manual/data-pipeline/inputs/fluentbit-metrics) sending those metrics to a prometheus_exporter as described in the docs? Does one expose more, give better coverage of the pipeline, or consume fewer resources? (A sketch of option b follows below.)
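    For comparison, a hedged sketch of option b), roughly as the fluentbit-metrics input docs show it (port 2021 is chosen to avoid colliding with the 2020 http_server and matches the curl in the next message):
    Copy code
    # Hedged sketch: expose Fluent Bit's own metrics through the pipeline instead of
    # (or alongside) the built-in http_server endpoint.
    pipeline:
      inputs:
        - name: fluentbit_metrics
          tag: internal_metrics
          scrape_interval: 2
      outputs:
        - name: prometheus_exporter
          match: internal_metrics
          host: 0.0.0.0
          port: 2021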
  • a

    Andrew Elwell

    11/16/2025, 10:35 PM
    Also, it looks like the prometheus_exporter just ignores anything in the request URI provided it ends in
    /metrics
    - is this expected?
    Copy code
    [aelwell@admiral ~]$ curl -s <http://127.0.0.1:2021/blah/randomshit../../../../../../../../metrics> | head
    # HELP fluentbit_uptime Number of seconds that Fluent Bit has been running.
    # TYPE fluentbit_uptime counter
    fluentbit_uptime{hostname="admiral"} 122936
    # HELP fluentbit_logger_logs_total Total number of logs
    # TYPE fluentbit_logger_logs_total counter
    fluentbit_logger_logs_total{message_type="error"} 0
    fluentbit_logger_logs_total{message_type="warn"} 0
    fluentbit_logger_logs_total{message_type="info"} 20
    fluentbit_logger_logs_total{message_type="debug"} 0
    fluentbit_logger_logs_total{message_type="trace"} 0
    Ah, in the source there's a choice of only:
    Copy code
    static void cb_metrics(mk_request_t *request, void *data)
    static void cb_root(mk_request_t *request, void *data)
  • s

    Sagi Rosenthal

    11/17/2025, 9:30 AM
    Anyone experienced in compiling and testing on a Mac with M2 (macOS, Apple silicon)? I'm running this:
    cd build && cmake .. -DFLB_TESTS_RUNTIME=On && make
    and getting libcrypto issues:
    Copy code
    Undefined symbols for architecture arm64:
      "_EVP_MD_size", referenced from:
          _flb_hmac_init in flb_hmac.c.o
          _flb_hash_init in flb_hash.c.o
      "_EVP_PKEY_size", referenced from:
          _flb_crypto_init in flb_crypto.c.o
    ld: symbol(s) not found for architecture arm64
    clang++: error: linker command failed with exit code 1 (use -v to see invocation)
    make[2]: *** [lib/libfluent-bit.dylib] Error 1
    make[1]: *** [src/CMakeFiles/fluent-bit-shared.dir/all] Error 2
    make: *** [all] Error 2
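    Those undefined _EVP_* symbols often indicate the link step picked up an OpenSSL library that doesn't match the headers used at compile time. A hedged sketch of pointing CMake at Homebrew's OpenSSL 3 instead (the brew prefix is an assumption about the local setup):
    Copy code
    # Hedged sketch: rebuild against Homebrew's OpenSSL on Apple silicon.
    brew install openssl@3
    cd build && cmake .. \
      -DFLB_TESTS_RUNTIME=On \
      -DOPENSSL_ROOT_DIR="$(brew --prefix openssl@3)" && \
      make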