Colin Mollenhour
04/29/2025, 11:21 PM
env:
  ENVIRONMENT: x
  INSTANCE_UID: x

# Main Fluent Bit configuration
service:
  flush: 1
  grace: 0
  daemon: off
  log_level: debug
  parsers_file: parsers.yaml

# Include input configurations
includes:
  - inputs/*.yaml

# Output to stdout for testing
pipeline:
  filters:
    - name: modify
      match: "*"
      add:
        - environment ${ENVIRONMENT}
        - instance_uid ${INSTANCE_UID}
  outputs:
    - name: stdout
      match: "*"
If I run this with docker run -e ENVIRONMENT=dev ...
it is still reported as "x".

Filip Havlíček
04/30/2025, 8:01 AM
Mem_Buf_Limit 16MB
and at the same time there is the default configuration max_chunks_up=128.
How much memory will Fluent Bit use? 16 MB (according to Mem_Buf_Limit), or roughly 256 MB (chunks * 2 MB)?
It's an older version running in AWS (aws-for-fluent-bit), but the most recent one from AWS:
Fluent Bit v1.9.10
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2025/04/30 07:44:12] [ info] [fluent bit] version=1.9.10, commit=b4eaeab1ec, pid=1
[2025/04/30 07:44:12] [ info] [storage] version=1.4.0, type=memory-only, sync=normal, checksum=disabled, max_chunks_up=128
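For the arithmetic in question (the 2 MB per-chunk figure is an assumption matching the "chunks * 2" estimate above, not a value read from the config; note also that Mem_Buf_Limit is a per-input cap, while max_chunks_up is a service-level storage setting):

```python
max_chunks_up = 128      # from the startup log above
chunk_size_mb = 2        # assumed maximum in-memory chunk size
mem_buf_limit_mb = 16    # Mem_Buf_Limit configured on the input

# Upper bound implied by the chunk limit vs. the configured buffer limit.
print(max_chunks_up * chunk_size_mb, "MB via max_chunks_up")
print(mem_buf_limit_mb, "MB via Mem_Buf_Limit")
```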
Sagar Nikam
04/30/2025, 12:24 PM
[2025/04/30 12:08:22] [error] [output:loki:loki.0] 10.41.3.89:3101, HTTP status=400 Not retrying.
entry with timestamp 2025-04-30 10:04:03.432559554 +0000 UTC ignored, reason: 'entry too far behind, oldest acceptable timestamp is: 2025-04-30T11:49:07Z',
user 'fake', total ignored: 1 out of 1 for stream: {controller_revision_hash="55b8d44fd6", k8s_app="kube-proxy", pod_template_generation="4"}
[2025/04/30 12:08:52] [error] [output:loki:loki.0] 10.41.3.89:3101, HTTP status=400 Not retrying.
entry with timestamp 2025-04-30 10:04:03.345278889 +0000 UTC ignored, reason: 'entry too far behind, oldest acceptable timestamp is: 2025-04-30T11:53:30Z',
user 'fake', total ignored: 1 out of 1 for stream: {app_kubernetes_io_instance="incident", app_kubernetes_io_name="incident", pod_template_hash="bf9549f95", tags_datadoghq_com_env="dev", tags_datadoghq_com_service="incident", tags_datadoghq_com_version="1.0.0-RESP-MAR2025-R1-1"}
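A note on the errors above: the "oldest acceptable timestamp" is only a couple of hours ahead of the rejected entries and moves forward between attempts, which looks more like Loki's out-of-order-write protection than the sample-age cutoff. If so, the knobs live on the Loki server, not in Fluent Bit; a hedged sketch of the relevant limits_config (option names from Loki's docs; defaults vary by Loki version):

```yaml
# Loki server configuration (not Fluent Bit).
limits_config:
  # Accept out-of-order entries within the ingestion window
  # (the default in Loki >= 2.4, but worth checking on older deployments).
  unordered_writes: true
  # Age cutoff for genuinely old samples.
  reject_old_samples: true
  reject_old_samples_max_age: 168h
```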
William
04/30/2025, 1:10 PM
A question about the ``FLBPluginFlushCtx`` function that is invoked to flush a chunk of records, and especially the concurrency policy around it.
Per my current understanding, if I define workers: x, where x > 1, I should have a different context for each call to ``FLBPluginFlushCtx`` (which can be retrieved and set with ``FLBPluginGetContext`` and ``FLBPluginSetContext``). Is it possible to have concurrent invocations of ``FLBPluginFlushCtx`` with the same context, or will each output worker (using the same plugin) process its chunks sequentially?

Chris Athan
04/30/2025, 1:47 PM
containers:
  - name: fluent-bit
    image: amazon/aws-for-fluent-bit:latest
Uses quite an old Fluent Bit version (1.9.10), while 4.0.1 is the latest,
and my problem is that I cannot do complex parsing for different apps. Right now I have to deploy 2 separate Fluent Bit instances for different log-type parsing, and I still cannot parse the logs of the logstash app, which are multiline SQL and always break across the lines of the message.

Rushi
04/30/2025, 6:15 PM

Max
05/01/2025, 12:27 PM

Arjun
05/01/2025, 2:38 PM
{
"time": "2025-04-30T11:00:06.935147966Z",
"stream": "stdout",
"_p": "F",
"log": "\u001b[0m\u001b[31m[error] \u001b[0munsupported data type: 0xc000a92f90",
"kubernetes": {
"pod_name": "backend-service-765b7c5fd7-dhnln",
"namespace_name": "apps",
"pod_id": "075485da-bee0-4e17-80d7-1390fe7a54ec",
"labels": {
"<http://app.kubernetes.io/instance|app.kubernetes.io/instance>": "backend-service",
"<http://app.kubernetes.io/name|app.kubernetes.io/name>": "backend-service",
"pod-template-hash": "765b7c5fd7"
},
"annotations": {
"checksum/config": "972d24f4667f0a0e0128f38d1cd73e65330f734408aac20ab776acac04d94cd1",
"checksum/esecret": "55d3f58e14bc4ba05edb233ddaf134963564b183b0fbccb0c395f43b05a5550c"
},
"host": "x.y.z.q",
"pod_ip": "x.y.z.q",
"container_name": "backend-service",
"docker_id": "sgagasg",
"container_hash": "imagerepo",
"container_image": "app_image"
}
}
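The log field in the record above carries ANSI color codes (\u001b[31m and so on). If the intent is to clean those before shipping (an assumption about the goal), the usual regex approach, sketched here in Python rather than as a Fluent Bit filter:

```python
import re

# CSI color sequences such as \x1b[0m or \x1b[31m.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color codes from a log line."""
    return ANSI_RE.sub("", s)

print(strip_ansi("\x1b[0m\x1b[31m[error] \x1b[0munsupported data type: 0xc000a92f90"))
# -> [error] unsupported data type: 0xc000a92f90
```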
Valentin Petrov
05/02/2025, 6:37 AM

Sven
05/02/2025, 7:46 AM
2025-05-02T09:01:13+0200 DDEBUG Command: yum install -y fluent-bit-4.0.1-1
2025-05-02T09:01:13+0200 DDEBUG Installroot: /
2025-05-02T09:01:13+0200 DDEBUG Releasever: 2023.7.20250414
2025-05-02T09:01:13+0200 DEBUG cachedir: /var/cache/dnf
2025-05-02T09:01:13+0200 DDEBUG Base command: install
2025-05-02T09:01:13+0200 DDEBUG Extra commands: ['install', '-y', 'fluent-bit-4.0.1-1']
2025-05-02T09:01:13+0200 DEBUG User-Agent: constructed: 'libdnf (Amazon Linux 2023; generic; Linux.x86_64)'
2025-05-02T09:01:13+0200 DEBUG repo: using cache for: amazonlinux
2025-05-02T09:01:13+0200 DEBUG amazonlinux: using metadata from Wed Apr 9 19:48:07 2025.
2025-05-02T09:01:13+0200 DEBUG repo: using cache for: kernel-livepatch
2025-05-02T09:01:13+0200 DEBUG kernel-livepatch: using metadata from Wed Apr 23 20:16:04 2025.
2025-05-02T09:01:13+0200 DEBUG repo: downloading from remote: fluent-bit
2025-05-02T09:01:14+0200 DEBUG fluent-bit: using metadata from Thu Apr 24 01:30:27 2025.
2025-05-02T09:01:14+0200 DDEBUG timer: sack setup: 931 ms
2025-05-02T09:01:14+0200 DEBUG Completion plugin: Generating completion cache...
2025-05-02T09:01:14+0200 DEBUG --> Starting dependency resolution
2025-05-02T09:01:14+0200 DEBUG ---> Package libpq.x86_64 17.4-1.amzn2023.0.1 will be installed
2025-05-02T09:01:14+0200 DEBUG ---> Package fluent-bit.x86_64 4.0.1-1 will be installed
2025-05-02T09:01:14+0200 DEBUG --> Finished dependency resolution
2025-05-02T09:01:14+0200 DDEBUG timer: depsolve: 50 ms
2025-05-02T09:01:14+0200 INFO Dependencies resolved.
2025-05-02T09:01:14+0200 INFO ================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
fluent-bit x86_64 4.0.1-1 fluent-bit 7.7 M
Installing dependencies:
libpq x86_64 17.4-1.amzn2023.0.1 amazonlinux 262 k
Transaction Summary
================================================================================
Install 2 Packages
2025-05-02T09:01:14+0200 INFO Total download size: 7.9 M
2025-05-02T09:01:14+0200 INFO Installed size: 24 M
2025-05-02T09:01:14+0200 INFO Downloading Packages:
2025-05-02T09:01:16+0200 INFO --------------------------------------------------------------------------------
2025-05-02T09:01:16+0200 INFO Total 6.1 MB/s | 7.9 MB 00:01
2025-05-02T09:01:16+0200 DEBUG Using rpmkeys executable at /usr/bin/rpmkeys to verify signatures
2025-05-02T09:01:16+0200 CRITICAL Importing GPG key 0x3888C1CD:
Userid : "Fluentbit releases (Releases signing key) <releases@fluentbit.io>"
Fingerprint: C3C0 A285 34B9 293E AF51 FABD 9F9D DC08 3888 C1CD
From : https://packages.fluentbit.io/fluentbit.key
2025-05-02T09:01:16+0200 CRITICAL Key import failed (code 2). Failing package is: fluent-bit-4.0.1-1.x86_64
GPG Keys are configured as: https://packages.fluentbit.io/fluentbit.key
2025-05-02T09:01:16+0200 DDEBUG Cleaning up.
2025-05-02T09:01:16+0200 INFO The downloaded packages were saved in cache until the next successful transaction.
2025-05-02T09:01:16+0200 INFO You can remove cached packages by executing 'yum clean packages'.
2025-05-02T09:01:16+0200 SUBDEBUG
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 67, in main
return _main(base, args, cli_class, option_parser_class)
File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 106, in _main
return cli_run(cli, base)
File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 130, in cli_run
ret = resolving(cli, base)
File "/usr/lib/python3.9/site-packages/dnf/cli/main.py", line 176, in resolving
base.do_transaction(display=displays)
File "/usr/lib/python3.9/site-packages/dnf/cli/cli.py", line 238, in do_transaction
self.gpgsigcheck(install_pkgs)
File "/usr/lib/python3.9/site-packages/dnf/cli/cli.py", line 305, in gpgsigcheck
raise dnf.exceptions.Error(_("GPG check FAILED"))
dnf.exceptions.Error: GPG check FAILED
2025-05-02T09:01:16+0200 CRITICAL Error: GPG check FAILED
2025-05-02T07:01:27+0000 DDEBUG RPM transaction over.
2025-05-02T07:01:27+0000 DDEBUG timer: verify transaction: 48 ms
2025-05-02T07:01:27+0000 DDEBUG timer: transaction: 14805 ms
2025-05-02T07:01:27+0000 DEBUG Completion plugin: Generating completion cache...
2025-05-02T09:01:27+0200 INFO Transaction check succeeded.
2025-05-02T09:01:27+0200 INFO Running transaction test
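For the key-import failure above, a sketch of the repo definition per the Fluent Bit install docs (the Amazon Linux 2023 baseurl is an assumption based on the package names in the log; importing the key manually beforehand is a common workaround, and the fingerprint should match the one dnf printed):

```ini
# /etc/yum.repos.d/fluent-bit.repo
[fluent-bit]
name = Fluent Bit
baseurl = https://packages.fluentbit.io/amazonlinux/2023/
gpgcheck = 1
gpgkey = https://packages.fluentbit.io/fluentbit.key
enabled = 1

# Workaround if dnf's automatic import keeps failing with "code 2":
#   sudo rpm --import https://packages.fluentbit.io/fluentbit.key
```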
Xinrui Zheng
05/02/2025, 6:31 PM

shams
05/03/2025, 5:36 PM
Regarding ignore_active_older_files:
1. I think it should default to true.
2. Also, please add documentation for this new config, alongside ignore_older.

Gleb Tyunikov
05/05/2025, 8:17 AM

Preethi Voore
05/05/2025, 12:04 PM
About the /api/v2/metrics/prometheus endpoint: is there a way to filter and retrieve only the specific metrics that I require?
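As far as I know the endpoint itself has no server-side filtering, so the usual approach is to drop unwanted series at the scraper. A sketch assuming a Prometheus scrape job (the job name, target, and metric regex are placeholders, not from the message):

```yaml
scrape_configs:
  - job_name: fluent-bit               # placeholder name
    metrics_path: /api/v2/metrics/prometheus
    static_configs:
      - targets: ["fluent-bit:2020"]   # placeholder host:port
    metric_relabel_configs:
      # Keep only the series you need; everything else is dropped at scrape time.
      - source_labels: [__name__]
        regex: "fluentbit_(input|output).*_records_total"
        action: keep
```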
Thank you!

Milen Mladenov
05/05/2025, 4:59 PM
[2025/05/05 16:22:14] [error] [engine] chunk '1-1746462127.162873254.flb' cannot be retried: task_id=14, input=emitter_for_multiline.0 > output=kinesis_firehose.0
[2025/05/05 16:22:14] [error] [aws_credentials] STS assume role request failed
[2025/05/05 16:22:14] [ warn] [aws_credentials] No cached credentials are available and a credential refresh is already in progress. The current co-routine will retry.
[2025/05/05 16:22:14] [error] [signv4] Provider returned no credentials, service=firehose
[2025/05/05 16:22:14] [error] [aws_client] could not sign request
Can you suggest what may be wrong?

Matteo Ferraroni
05/06/2025, 12:41 AM

William
05/06/2025, 8:59 AM
I'm using the modify filter (as a processor, but that's not relevant to the issue) with the hard_copy rule.
The first parameter STRING:KEY should allow a record accessor of the form $key['subKey'], according to the doc:
> You can set Record Accessor as STRING:KEY for nested key.
However, it fails with the error [error] [filter:modify:modify.0] Unable to create regex(key) from $key['subKey'].
Digging into the code, I found the origin of the error [here](https://github.com/fluent/fluent-bit/blob/master/plugins/filter_modify/modify.c#L482). It seems that we'll always enter the else block, since for the hard_copy operation rule->key_is_regex is always false.
Does anyone have a clue whether I'm misusing the filter, or whether it's a bug as I suspect?

Andrew Longwill
05/06/2025, 9:15 AM

Pat
05/06/2025, 9:36 AM

Likhith
05/06/2025, 12:14 PM

William
05/06/2025, 12:57 PM
Using the record_modifier filter (https://docs.fluentbit.io/manual/pipeline/filters/record-modifier) with the following allow_list triggers the following YAML parsing error:
[error] unable to add key to list map
- kubernetes_annotation_logs.foo.com/sink-addr
- kubernetes_annotation_logs.foo.com/token
- kubernetes_annotation_logs.foo.com/sink-port
- kubernetes_annotation_syslog.foo.com/enabled
Underscores are valid, but I have a doubt regarding the / or the dots . in the keys.
Has anyone experienced the same? This is unfortunate, since / is often used in Kubernetes annotation keys.

Nico
05/06/2025, 7:46 PM
Using a json parser, and one of the records is a non-JSON log: will the record be unchanged and continue its flow in the pipeline?

G Smith
05/06/2025, 8:20 PM
(config=/fluent-bit/etc/conf/fluent-bit.conf). I can't believe I'm the first to run into this, so I must be missing some obvious solution. Anyone know how to resolve this?

William
05/06/2025, 9:39 PM
An ownerReferences array is added to the records.
Given that the nest filter only operates on "maps" (objects), I presume that transforming this array requires advanced filters (Lua or WASM).
Wondering if anyone has found a way to extract/lift the fields from, let's say, the first object of the array at index 0, using only "basic" filters? I haven't tried it yet, but the record accessor syntax might allow array indexing?

zane
05/06/2025, 10:36 PM

Tim Förster
05/07/2025, 5:14 AM
<flush_interval>.
Any insights or suggestions would be greatly appreciated!

John
05/07/2025, 5:42 PM

G Smith
05/07/2025, 7:00 PM
[FILTER]
    Name      modify
    Match     *
    Condition Key_exists json_log
    Set       DB__structlog true
and here's the new YAML format. This fragment is part of a pipeline/inputs/processors block:
- name: modify
  condition: Key_exists json_log
  set: DB__structlog true
Fluent Bit (4.0.1) reports the following error messages on startup when the above yaml block is included.
[2025/05/07 18:48:48] [error] [processor] condition must be a map
[2025/05/07 18:48:48] [error] failed to set condition for processor 'modify'
[2025/05/07 18:48:48] [error] failed to load 'logs' processors
I have also tried reformatting the block like this with no better luck. And the examples in the documentation show that specifying a single condition on the same line is acceptable.
- name: modify
  condition:
    - Key_exists json_log
  set: DB__structlog true
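For what it's worth, the "condition must be a map" error matches the processor-level conditional syntax added in Fluent Bit 4.0, which has a different shape from the modify filter's classic Condition option. A sketch of that map form (using a regex rule as a stand-in for Key_exists, since the processor condition operators are their own set; not verified against 4.0.1):

```yaml
processors:
  logs:
    - name: modify
      condition:
        op: and
        rules:
          - field: "$json_log"
            op: regex
            value: ".*"   # stand-in for "key exists"
      set: DB__structlog true
```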
Can anyone identify the problem?

Nithin
05/08/2025, 6:44 AM

Karthi Pragasam
05/08/2025, 1:37 PM