# fluentd
  • Deds

    01/10/2025, 11:03 PM
    Good evening, is anyone available to assist with the fluent-plugin-http (v1.1.0) output plugin? I am trying to output data via an API key / URL. However, following the current 1.0 documentation, the Fluentd CLI for Windows does not seem to be correct: the documentation indicates using the endpoint parameter, but the CLI explicitly states to use the url parameter, which is not in the documentation. I've tried multiple resources that have done this successfully, and I'm still unable to communicate with or reach the place where I am trying to send the data with Fluentd.
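    (For reference, a minimal sketch of what Fluentd's built-in http output expects; this assumes the core http plugin rather than a third-party gem, and the endpoint URL and header values are placeholders:)

    ```
    <match my.tag>
      @type http
      endpoint https://example.com/api/ingest
      http_method post
      headers {"Authorization":"Bearer YOUR_API_KEY"}
      <format>
        @type json
      </format>
    </match>
    ```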
  • Sebastian

    01/15/2025, 10:25 AM
    Hi, does it make sense to have a <system> section for each Fluentd conf under conf.d, describing workers and log levels separately for each conf?
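    (For context: files under conf.d are merged into a single configuration, and a <system> section configures the whole Fluentd process rather than a single file, so one top-level section is the usual pattern. A minimal sketch:)

    ```
    <system>
      workers 2
      log_level info
    </system>
    ```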
  • Guilherme Zanini

    01/28/2025, 4:17 PM
    Hello everyone, we’re running some performance tests with Fluentd and Postgres, and we’ve noticed that Fluentd inserts entries using INSERT instead of COPY. Is there any particular reason for this? I’m asking because, in theory, COPY is more efficient than INSERT when inserting large numbers of entries.
  • Humza ilyas

    01/28/2025, 7:54 PM
    Hello team, I am trying to deploy Fluentd using Docker Compose on my MacBook. I copy-pasted everything that is in the docs. All other containers work, but Fluentd fails every time with a mounting error: it fails to mount the fluent.conf file to the container's config path.
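    (A minimal sketch of the volume mount in question, assuming fluent.conf sits next to docker-compose.yml and the official fluentd image's default config path; a ./-prefixed host path keeps Compose from treating it as a named volume:)

    ```yaml
    services:
      fluentd:
        image: fluent/fluentd:v1.16-1
        volumes:
          - ./fluent.conf:/fluentd/etc/fluent.conf
        ports:
          - "24224:24224"
    ```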
  • srinivas

    01/29/2025, 7:20 AM
    Hi, I want to push logs from Fluent Bit to Fluentd directly, without any intermediary; both Fluent Bit and Fluentd will run as pods in an EKS cluster. Can you help with the documentation to implement this?
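    (A minimal sketch of the direct Fluent Bit → Fluentd hookup over the forward protocol; the Fluentd service name and namespace here are assumptions:)

    ```
    # Fluent Bit side (fluent-bit.conf)
    [OUTPUT]
        Name   forward
        Match  *
        Host   fluentd.logging.svc.cluster.local
        Port   24224

    # Fluentd side (fluent.conf)
    <source>
      @type forward
      port 24224
      bind 0.0.0.0
    </source>
    ```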
  • Kenny Osiyoye

    02/04/2025, 8:42 PM
    I’m trying to send logs to Splunk HEC and I’m getting this error: Failed to post
  • Adrija Basu

    02/05/2025, 1:10 PM
    Hello Community, we have implemented a robust monitoring system with EFK where Fluentd processes the logs to Elasticsearch. Recently we have noticed that whenever the log size increases we get the error "Worker 0 killed by SIGKILL". Currently we have configured a memory limit of 3072Mi, a memory request of 1024Mi, and a CPU request of 1500m in the DaemonSet file. In the Fluentd config we have set the buffer values flush_interval 2s, flush_thread_count 8, chunk_limit_size 20M, total_limit_size 1G, retry_type exponential_backoff, retry_wait 1s, retry_max_interval 60, retry_forever true, queue_limit_length 20, and overflow_action block. The worker node where the DaemonSet is running has 32 GB of memory and 8 CPU cores. Please suggest what config we can give to avoid this issue.
    Fluentd buffer configuration: @type file, path /var/log/fluentd-buffer, flush_mode interval, flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}", flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}", chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}", queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}", retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}", retry_forever true.
    Fluentd logs: "failed to write data into buffer by buffer overflow action=block" and "Worker 0 exited unexpectedly with signal SIGKILL".
    Note: Fluentd is running as a DaemonSet, and the host is an EC2 instance with 32 GB of memory and an 8-core CPU.
  • Sanjay

    02/05/2025, 2:22 PM
    "instance a plugin name that doesn't exist": this is the error we are seeing when trying to connect with an S3 bucket.
  • Romeo

    02/10/2025, 4:03 PM
    Hi guys, glad to meet you. I tried installing the EFK stack following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-elasticsearch-fluentd-and-kibana-efk-logging-stack-on-kubernetes and ended up facing 2 issues: 1. My fluentd pod prints a lot of backslashes \\\\\\ 2. The index is not found in Kibana. Is there any workaround for this? Thanks in advance.
  • nishant_dhingra

    02/12/2025, 5:26 AM
    I am using Logging Operator version 103.1.1+up4.4.0. I have noticed that duplicate logs are being dumped into ES intermittently. This is random and not for all of the logs. Below is my flow config:
    Name: smnts-semanticsearchbatch-json-applogs
    Namespace: jc-pd
    Labels: app.kubernetes.io/managed-by=Helm
    Annotations: meta.helm.sh/release-name: smnts-semanticsearchbatch-pd
    meta.helm.sh/release-namespace: cattle-logging-system
    API Version: logging.banzaicloud.io/v1beta1
    Kind: Flow
    Metadata:
    Creation Timestamp: 2024-12-20T05:43:07Z
    Generation: 1
    Resource Version: 6359244
    UID: 25ef9de3-5e5d-40ff-8069-c808f8ee0690
    Spec:
    Filters:
    record_modifier:
    Records:
    Platform: application-logs
    Parser:
    Parse:
    Patterns:
    Expression: /^[[^ ]* [(?[^\]])] [(?[^\]])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] (?[^\n](\n^[^\[].|$))/
    Format: regexp
    Format: json
    time_format: %Y-%m-%dT%H:%M:%S.%NZ
    Type: multi_format
    remove_key_name_field: true
    reserve_data: true
    reserve_time: true
    Grep:
    Exclude:
    Key: log
    Pattern: /.+|\s+/
    Key: logType
    Pattern: /business_event/
    Global Output Refs:
    pd-es-output
    Match:
    Select:
    container_names:
    smnts-semanticsearchbatch**
    Below is the ClusterOutput config:
    Name: pd-es-output
    Namespace: cattle-logging-system
    Labels:
    Annotations:
    API Version: logging.banzaicloud.io/v1beta1
    Kind: ClusterOutput
    Metadata:
    Creation Timestamp: 2024-12-20T05:43:54Z
    Generation: 2
    Resource Version: 27785653
    UID: 5c387eb0-e622-4afa-af87-26ab6c883074
    Spec:
    Elasticsearch:
    Buffer:
    chunk_limit_size: 50m
    flush_interval: 10s
    queued_chunks_limit_size: 10240
    retry_max_times: 8
    retry_timeout: 60s
    retry_wait: 5s
    Timekey: 1m
    timekey_use_utc: true
    timekey_wait: 30s
    Host: ******************
    include_timestamp: true
    index_name: ************
    Password:
    Value From:
    Secret Key Ref:
    Key: elastic
    Name: elastic-user
    Port: ******
    reload_connections: true
    reload_on_failure: true
    request_timeout: 120s
    Scheme: http
    suppress_type_name: true
    time_key_format: %Y-%m-%dT%H:%M:%S.%N%z
    User: *******
    Kindly help with this problem.
  • nishant_dhingra

    02/12/2025, 5:26 AM
    Logging Operator Version: 103.1.1+up4.4.0
    Fluentd Image: rancher/mirrored-banzaicloud-fluentd:v1.14.6-alpine-5
    Operator Image: rancher/mirrored-kube-logging-logging-operator:4.4.0
    Fluentbit Image: rancher/mirrored-fluent-fluent-bit:2.2.0
  • Kentaro Hayashi

    02/14/2025, 2:25 AM
    Hi users! (for non-#C01SF2H1MC7 subscribers, reposted here) We have released fluent-package v5.0.6. fluent-package is a stable distribution package of Fluentd (the successor of td-agent v4). This is a maintenance release of the v5.0.x LTS series. The bundled Fluentd was updated to 1.16.7. We recommend upgrading to fluent-package v5.0.6! See the blog announcement for details: https://www.fluentd.org/blog/fluent-package-v5.0.6-has-been-released
  • marzottifabio

    02/19/2025, 5:38 PM
    Hi users, I'm using the Fluentd client on OpenShift 4.14 to send events over HTTP to an external Fluentd server (latest version). The connection crosses 2 different networks with NAT (network address translation) in the middle. Do you know if Fluentd is able to work through NAT? I sometimes see strange TCP resets from the firewall. Thanks!
  • ArunJP

    02/27/2025, 10:33 AM
    Hi, is there a Fluentd input plugin for remotely connecting to Event Viewer logs? Currently I am forced to install the agent on the same server where we collect the data. Most of the time the Ruby service takes up a lot of CPU and the primary application suffers a hit in the production setup.
  • antonio falzarano

    02/28/2025, 3:08 PM
    Hi, has anyone used the parent_key/routing_key fields in Fluentd successfully with OpenSearch/Elasticsearch to create parent/child relationships in documents directly with Fluentd? I have read the documentation but there is a lack of examples.
  • Bonny Rais

    03/04/2025, 2:52 AM
    Hi all, I have what I think is a basic question, but I am not able to resolve it regardless of config changes... Messages arrive from a source with the following format: • time: 2025-03-04 02:38:01.565942831 +0000 • tag: some.app.log • log: a syslog message with the following structure ◦ <135>Mar 3 10:00:20 hostname process[id]: internal code: nested json payload
    I am trying to parse the nested JSON payload and tried using the syslog parser. I am getting errors of the form `Fluent::Plugin::Parser::ParserError error="parse failed invalid time format: value = <135>Mar 3, error_class = ArgumentError, error = invalid date or strptime format - '<135>Mar 3'"`
    If I do not set time_format I get some other error related to time. What I'd like to find out is which timestamp generates this and why. The syslog parser is configured with message_format rfc3164.
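    (A sketch of the parser setup described above, assuming the log field holds the syslog line; with_priority tells the syslog parser to expect the leading <135> PRI field, which otherwise ends up glued to the month and breaks time parsing:)

    ```
    <filter some.app.log>
      @type parser
      key_name log
      <parse>
        @type syslog
        message_format rfc3164
        with_priority true
      </parse>
    </filter>
    ```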
  • Bonny Rais

    03/04/2025, 2:57 AM
    I would like to avoid parsing the entire message with the regexp parser, but that's an option if syslog cannot work for me...
  • ArunJP

    03/07/2025, 1:57 AM
    Hi, is there a SQL Server plugin to pull SQL logs? I found some SQL plugins on GitHub but they are not working; I'm not able to configure them and am getting Ruby errors.
  • Naseer Hussain

    03/14/2025, 6:24 AM
    Hi all, I am using the fluentd and nginx images from Docker. With Fluentd I'm collecting the logs of nginx and storing them in a file, but the buffer files are not automatically flushed; they are flushed only when I restart the Fluentd container. flush_mode immediate is configured. When I checked the Fluentd logs there is an ENOENT error="No such file or directory - fstat", but the .log files are present, and the buffer folder has full permissions, so they should be flushed but they aren't. When I restart the container the logs are flushed, but the Fluentd logs show an error about being unable to purge the flushed .log files. I think Fluentd is unable to recognize the .log buffer files but is able to access the .log.meta files. Why is this happening? Did I configure something wrong? How do I resolve it? (I'll provide more information if you ask what you want to know.)
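    (A minimal sketch of the shape described above, a file output with an immediate-flush file buffer; the tag and paths are assumptions:)

    ```
    <match nginx.**>
      @type file
      path /fluentd/log/nginx
      <buffer>
        @type file
        path /fluentd/buffer/nginx
        flush_mode immediate
      </buffer>
    </match>
    ```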
  • Naseer Hussain

    03/14/2025, 8:46 AM
    Hi everyone, I’m facing the issue explained above with Fluentd and need urgent help. If anyone knows how to solve this, please respond as soon as possible. I need to submit this task soon, and any assistance would be greatly appreciated. Thanks in advance!
  • Phil Wilkins

    03/31/2025, 6:49 PM
    https://blog.mp3monster.org/2025/03/31/fluentd-labels-and-fluent-bit/
  • Justen L.

    04/07/2025, 2:27 PM
    Hey all. I'm running into the permissions issue that was covered in the documentation on GitHub, including adding FLUENT_UID and setting it to 0 in the YAML file... yet my problem still persists. Any advice? I also had to disable TLS for now, as it was throwing an error too.
  • Justen L.

    04/08/2025, 5:10 PM
    Anyone here familiar with shipping logs from Fluentd to Elastic? I'm having the issue above... I removed it as it's for old image versions. I'm running the fluentd-kubernetes-daemonset:v1.18.0-debian-elasticsearch8-1.4 image, as we are running Elasticsearch 8.15, and I'm still seeing continuous errors:
    [error]: #0 unexpected error error_class=Errno::EACCES error="Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos
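    (A sketch of the usual workaround for this .pos permission error, assuming a fluentd-kubernetes-daemonset manifest; running the container as root lets it write the position file under /var/log, and the exact keys should be checked against your chart:)

    ```yaml
    containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.18.0-debian-elasticsearch8-1.4
        securityContext:
          runAsUser: 0
        env:
          - name: FLUENT_UID
            value: "0"
    ```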
  • Erik Bjers

    04/09/2025, 10:52 PM
    Anyone have experience using XML queries for the Event_Query key in the winevtlog source? I'm trying to create a query that gets Event ID 5, but there are no examples in the documentation.
  • Yash Jain

    04/16/2025, 7:43 AM
    Hi guys, hope everyone is well! Do we have any documentation around integrating Fluentd with an Azure Log Analytics workspace?
  • Doug Whitfield

    04/22/2025, 4:10 PM
    I have the following memory profile. It looks strange to me: why would ruby be taking up so much more RAM than fluentd? Using Fluentd 1.16.5
    PID: 1   NAME: tini     VmRSS: 4 kB
    PID: 16  NAME: ruby     VmRSS: 450500 kB
    PID: 23  NAME: sh       VmRSS: 1312 kB
    PID: 7   NAME: fluentd  VmRSS: 48784 kB
  • Srijitha S

    04/26/2025, 6:06 AM
    I'm encountering an error when trying to ship logs from Fluentd (v1.18) to an ECK-managed Elasticsearch cluster (v9). The error occurs in the fluent-plugin-elasticsearch output:
    2025-04-26 06:23:57 +0000 [warn]: #0 fluent/log.rb:383:warn: failed to flush the buffer. retry_times=5 next_retry_time=2025-04-26 06:24:26 +0000 chunk="633a80cbab567f92f93159fdc356b3d5" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"quickstart-v2-es-http\", :port=>9200, :scheme=>\"https\", :user=>\"elastic\", :password=>\"obfuscated\"}): [400] {\"error\":{\"root_cause\":[{\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\"}],\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\",\"caused_by\":{\"type\":\"status_exception\",\"reason\":\"A compatible version is required on both Content-Type and Accept headers if either one has requested a compatible version. Accept=null, Content-Type=application/vnd.elasticsearch+x-ndjson; compatible-with=9\"}},\"status\":400}"
    Current configuration
    <match **>
      @type elasticsearch
      host quickstart-v2-es-http
      port 9200
      scheme https
      user elastic
      password xxxx
    </match>
    What I've tried:
    • Verified network connectivity
    • Confirmed credentials are correct
    • Tried both application/json and the default content types
    Has anyone encountered this header compatibility issue between Fluentd 1.18 and Elasticsearch 9? Any guidance on required configuration changes would be greatly appreciated. Additional info: this is the elasticsearch plugin version:
    elastic-transport (8.4.0)
    elasticsearch (9.0.2)
    elasticsearch-api (9.0.2)