# fluentd
  • Erik Bjers

    04/09/2025, 10:52 PM
    Anyone have experience using XML queries for the Event_Query key in the winevtlog source? I'm trying to create a query that matches Event ID 5, but there are no examples in the documentation.
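    A minimal sketch of the kind of query that typically works here, assuming the key (spelled event_query below; check the plugin's exact name) accepts a standard Windows Event Log XML QueryList. The Application channel is illustrative, not from the thread:
    <source>
      @type windows_eventlog2
      tag winevt.raw
      # assumption: the query key takes standard Windows Event Log XPath/QueryList XML;
      # this selects events with Event ID 5 from the Application channel
      event_query "<QueryList><Query Id='0' Path='Application'><Select Path='Application'>*[System[(EventID=5)]]</Select></Query></QueryList>"
    </source>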
  • Yash Jain

    04/16/2025, 7:43 AM
    Hi guys, hope everyone is well! Do we have any documentation on integrating fluentd with an Azure Log Analytics workspace?
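    One commonly referenced route is the community fluent-plugin-azure-loganalytics output; a minimal sketch, assuming that plugin and its documented parameters (all values are placeholders):
    <match azure.**>
      @type azure-loganalytics
      customer_id WORKSPACE_ID         # Log Analytics workspace ID (placeholder)
      shared_key WORKSPACE_SHARED_KEY  # workspace primary key (placeholder)
      log_type ApplicationLog          # custom log type name (illustrative)
    </match>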
  • Doug Whitfield

    04/22/2025, 4:10 PM
    I have the following memory profile. It looks strange to me. Why would ruby be taking up so much more RAM than fluentd? Using Fluentd 1.16.5
    PID: 1   NAME: tini     VmRSS:      4 kB
    PID: 16  NAME: ruby     VmRSS: 450500 kB
    PID: 23  NAME: sh       VmRSS:   1312 kB
    PID: 7   NAME: fluentd  VmRSS:  48784 kB
  • Srijitha S

    04/26/2025, 6:06 AM
    I'm encountering an error when trying to ship logs from Fluentd (v1.18) to an ECK-managed Elasticsearch cluster (v9). The error occurs in the fluent-plugin-elasticsearch output:
    2025-04-26 06:23:57 +0000 [warn]: #0 fluent/log.rb:383:warn: failed to flush the buffer. retry_times=5 next_retry_time=2025-04-26 06:24:26 +0000 chunk="633a80cbab567f92f93159fdc356b3d5" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"quickstart-v2-es-http\", :port=>9200, :scheme=>\"https\", :user=>\"elastic\", :password=>\"obfuscated\"}): [400] {\"error\":{\"root_cause\":[{\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\"}],\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\",\"caused_by\":{\"type\":\"status_exception\",\"reason\":\"A compatible version is required on both Content-Type and Accept headers if either one has requested a compatible version. Accept=null, Content-Type=application/vnd.elasticsearch+x-ndjson; compatible-with=9\"}},\"status\":400}"
    Current configuration:
    <match **>
      @type elasticsearch
      host quickstart-v2-es-http
      port 9200
      scheme https
      user elastic
      password xxxx
    </match>
    What I've tried:
    • Verified network connectivity
    • Confirmed credentials are correct
    • Tried both application/json and the default content type
    Has anyone encountered this header compatibility issue between Fluentd 1.18 and Elasticsearch 9? Any guidance on required configuration changes would be greatly appreciated. Additional info: these are the Elasticsearch client gem versions:
    elastic-transport (8.4.0)
    elasticsearch (9.0.2)
    elasticsearch-api (9.0.2)
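    The error suggests a compatibility-header mismatch between the Ruby client gems and the server. A possible workaround, offered as an assumption rather than a confirmed fix: pin the client gems back to the 8.x series, which fluent-plugin-elasticsearch has historically supported:
    # hedged sketch: align the client gems with a series the plugin supports
    gem uninstall elasticsearch elasticsearch-api elastic-transport
    gem install elasticsearch -v '~> 8.4'
    (Use fluent-gem instead of gem on td-agent/fluent-package installs.)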
  • Prasanth Ravi

    05/14/2025, 4:27 AM
    <buffer>
      @type file
      path /fluentd/buffer
      flush_mode interval
      flush_thread_count 4
      flush_interval 10s
      retry_forever true
      retry_max_times 3
      retry_max_interval 30s
      overflow_action block
      chunk_limit_size 5MB
      queue_limit_length 512
    </buffer>
  • Prasanth Ravi

    05/14/2025, 4:30 AM
    2025-05-14 04:24:56 +0000 [debug]: #0 [out_es] taking back chunk for errors. chunk="63510e9cf8a74a5561f7568d55d1fe3b" 2025-05-14 04:24:56 +0000 [warn]: #0 [out_es] failed to flush the buffer. retry_times=5 next_retry_time=2025-05-14 04:25:27 +0000 chunk="63510e9cf8a74a5561f7568d55d1fe3b" error_class=Fluent::Plugin::OpenSearchOutput::RecoverableRequestFailure error="could not push logs to OpenSearch cluster ({:host=>\"xxxxxxxe\", :port=>9200, :scheme=>\"https\", :user=>\"admin\", :password=>\"obfuscated\"}): read timeout reached" I am getting this error with the above configuration. My log volume is large, ~400 MB/minute.
  • Prasanth Ravi

    05/14/2025, 4:30 AM
    My fluentd pods have 1 GB of memory each, with 20 replicas (pods).
  • Prasanth Ravi

    05/14/2025, 4:32 AM
    Should I change any values here?
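    A few settings commonly tuned for "read timeout reached" at this volume; a sketch assuming the fluent-plugin-opensearch output, with illustrative values. Note also that retry_forever true causes retry_max_times to be ignored, so the config above effectively retries forever:
    <match **>
      @type opensearch
      # ...existing connection settings...
      request_timeout 30s        # default is 5s; large bulk requests often need more
      <buffer>
        chunk_limit_size 16MB    # fewer, larger bulk requests
        flush_thread_count 8     # more parallel flushes for ~400 MB/min
      </buffer>
    </match>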
  • Alec Holmes

    05/14/2025, 4:20 PM
    @Alec Holmes has left the channel
  • Ahmad Sherif

    05/15/2025, 3:50 PM
    Hello everyone, we started seeing errors from the Fluentd APT repo on Ubuntu Focal. The error we're getting when running
    apt-get update
    is:
    ...
    Get:11 https://packages.treasuredata.com/lts/5/ubuntu/focal focal/contrib all Packages [2,834 B]
    Get:12 https://packages.treasuredata.com/lts/5/ubuntu/focal focal/contrib amd64 Packages [4,599 B]
    Err:12 https://packages.treasuredata.com/lts/5/ubuntu/focal focal/contrib amd64 Packages
      File has unexpected size (4302 != 4599). Mirror sync in progress? [IP: 18.173.166.102 443]
      Hashes of expected file:
       - Filesize:4599 [weak]
       - SHA512:907296f5183eb31a1b490a503c22103d4bd238240f26a2a10a145d81dcb0f65e3606c4f6403161c06f1942217c1c84118a6cdadc549be72cf528656a1151a710
       - SHA256:1f8d6c0e8b58e4bd62b8cea2ca22f5538cd39ddad8fd059f877f76745751e47b
       - SHA1:c1c0ee04e1611d25c12905846e06ab0816e8edc3 [weak]
       - MD5Sum:90a11d00260d4acc5f8268820a242fb3 [weak]
      Release file created at: Thu, 15 May 2025 05:18:39 +0000
    Fetched 10.9 kB in 1s (10.8 kB/s)
    Reading package lists... Done
    E: Failed to fetch https://packages.treasuredata.com/lts/5/ubuntu/focal/dists/focal/contrib/binary-amd64/Packages.bz2  File has unexpected size (4302 != 4599). Mirror sync in progress? [IP: 18.173.166.102 443]
       Hashes of expected file:
        - Filesize:4599 [weak]
        - SHA512:907296f5183eb31a1b490a503c22103d4bd238240f26a2a10a145d81dcb0f65e3606c4f6403161c06f1942217c1c84118a6cdadc549be72cf528656a1151a710
        - SHA256:1f8d6c0e8b58e4bd62b8cea2ca22f5538cd39ddad8fd059f877f76745751e47b
        - SHA1:c1c0ee04e1611d25c12905846e06ab0816e8edc3 [weak]
        - MD5Sum:90a11d00260d4acc5f8268820a242fb3 [weak]
       Release file created at: Thu, 15 May 2025 05:18:39 +0000
    E: Some index files failed to download. They have been ignored, or old ones used instead.
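    The "Mirror sync in progress?" hint usually means the CDN/mirror was caught mid-update, which typically resolves on its own. A sketch of the standard retry steps (generic apt hygiene, not specific to this repo):
    sudo apt-get clean
    sudo rm -rf /var/lib/apt/lists/*
    sudo apt-get update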
  • Ahmad Sherif

    05/15/2025, 3:51 PM
    Has anything changed today for this repo?
  • Harry Peach

    05/21/2025, 10:54 AM
    Hello all! Is there a way to validate that a flush has completed? I'm evaluating Fluentd for managing logs of a test suite that uses many Docker containers, and I want to flush and move the logs to a new location after each test. I'm a bit stuck: I currently have to put a delay between tests to ensure the logs have been flushed to disk before moving them.
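    One documented mechanism that fits this: Fluentd's HTTP RPC endpoint exposes a buffer-flush call, so a test harness can trigger a flush instead of sleeping. A minimal sketch (the port is illustrative):
    <system>
      rpc_endpoint 127.0.0.1:24444
    </system>
    Then, between tests:
    curl http://127.0.0.1:24444/api/plugins.flushBuffers
    Sending SIGUSR1 to the Fluentd process also flushes buffers.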
  • Emil Billberg

    05/21/2025, 7:35 PM
    @Emil Billberg has left the channel
  • Chandan Kumar

    05/27/2025, 7:19 AM
    Subject: Fluentd Integration with EMS (JMS) Queue Support. Hi Slack team, I have a requirement to use Fluentd on a Linux machine to read logs from files and send them to a TIBCO EMS queue. While exploring the official Fluentd documentation, I couldn't find native support for JMS or TIBCO EMS output plugins. Could you please advise: Is there an existing plugin or recommended method to integrate Fluentd with JMS-compatible systems like EMS? If not, what would be the most efficient and reliable approach to achieve this use case? Any guidance or best practices would be greatly appreciated.
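    Absent a native JMS/EMS output plugin, one common pattern is to have Fluentd hand events to a small bridge service that republishes them to the EMS queue over JMS. A sketch using the built-in out_http output (the bridge endpoint is hypothetical):
    <match ems.**>
      @type http
      endpoint http://localhost:8080/ems-bridge   # hypothetical bridge that publishes to EMS
      <format>
        @type json
      </format>
    </match>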
  • eduardo

    05/29/2025, 4:50 PM
    Hey Fluent Bit Community! We're reshaping our Fluent Bit bi-weekly community call into something more intentional: Fluent Bit Office Hours. 🎯 What's new? These Office Hours will go beyond just PR triage. Expect practical demos, insider tips, and Q&A with maintainers. This week's Office Hours #79 on May 29 starts in 10 minutes; more details in the following doc: https://docs.google.com/document/d/1vJvsn8E0SanLO1R0X3RC1qTw0XQK_7q75sZ8IbWAu-g/edit?tab=t.0
  • eduardo

    05/29/2025, 4:56 PM
    We start in 5 minutes. Zoom link: https://chronosphere-io.zoom.us/j/88126788858?pwd=YM6tVHGLZXJRKYFxqRZFQLpL5dTgch.1
  • zane

    06/07/2025, 3:17 PM
    Hey, does anyone know how to start fluentd as a Windows service in a Windows container? I installed fluentd v1.16.5 through RubyGems.
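    Fluentd ships a documented Windows service registration command; a sketch from an elevated prompt inside the container, with illustrative paths:
    fluentd --reg-winsvc i
    fluentd --reg-winsvc-fluentdopt '-c C:/fluent/fluentd.conf -o C:/fluent/fluentd.log'
    net start fluentdwinsvc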
  • eduardo

    06/13/2025, 4:57 PM
    @here Hey folks, in 5 minutes we will start our Office Hours. We will demo Lua scripting and give a preview of the new metadata support. Details to join are here: https://docs.google.com/document/d/1vJvsn8E0SanLO1R0X3RC1qTw0XQK_7q75sZ8IbWAu-g/edit?tab=t.0#heading=h.be1p5tatfwwo Zoom link: https://chronosphere-io.zoom.us/j/87317609300?pwd=kLczOb4hY9vvqGaXqGTXzQc3ZmpkD9.1
  • Anuj Singh

    06/20/2025, 10:48 PM
    Is there a fluentd docker image that does not explicitly define
    USER fluent
    in its Dockerfile?
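    As far as I know the official fluent/fluentd images do set USER fluent, but it can be overridden instead of hunting for a different image; a sketch (the tag is illustrative):
    FROM fluent/fluentd:v1.16-debian-1
    USER root
    Or at run time:
    docker run --user root fluent/fluentd:v1.16-debian-1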
  • MugenOficial

    07/01/2025, 4:30 AM
    @MugenOficial has left the channel
  • Miguel

    07/01/2025, 1:14 PM
    Did anyone encounter any issues with fluentd not flushing the buffer upon restart?
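    One thing worth checking is flush_at_shutdown: it defaults to true for memory buffers but false for persistent (file) buffers, so a file buffer is deliberately kept on disk across restarts rather than flushed. A sketch:
    <buffer>
      <!-- illustrative path -->
      @type file
      path /fluentd/buffer
      flush_at_shutdown true   # force a flush when the worker shuts down
    </buffer>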
  • Elvinas Piliponis

    07/02/2025, 5:22 AM
    @Elvinas Piliponis has left the channel
  • Anton

    07/04/2025, 12:26 PM
    @Anton has left the channel
  • Philipp Noack

    07/05/2025, 11:49 AM
    Hey! I have a problem: I simply want to parse nginx logs. The configuration file is correct, and when I run fluentd via the CLI (`fluentd -c /etc/fluent/fluentd.conf`) it works and the output is stored on the filesystem. But if I start the service via
    systemctl start fluentd.service
    it doesn't work. This is the log after starting via systemctl (redacted):
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: init supervisor logger path="/var/log/fluent/fluentd.log" rotate_age=nil rotate_size=nil
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: parsing config file is succeeded path="/etc/fluent/fluentd.conf"
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluentd' version '1.16.9'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-calyptia-monitoring' version '0.1.3'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-elasticsearch' version '5.4.4'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-flowcounter-simple' version '0.1.0'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-kafka' version '0.19.3'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-metrics-cmetrics' version '0.1.2'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-opensearch' version '1.1.4'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-prometheus' version '2.1.0'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-prometheus_pushgateway' version '0.1.1'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-record-modifier' version '2.1.1'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-s3' version '1.7.2'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-sd-dns' version '0.1.0'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-systemd' version '1.1.0'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-td' version '1.2.0'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-utmpx' version '0.5.0'
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-webhdfs' version '1.5.0'
    2025-07-05 11:48:53 +0000 [debug]: fluent/log.rb:341:debug: No fluent logger for internal event
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: using configuration file: <ROOT>
      <system>
        log_level debug
      </system>
      <match td.*.*>
        @type tdlog
        @id output_td
        apikey xxxxxx
        auto_create_table 
        <buffer>
          @type "file"
          path "/var/log/fluent/buffer/td"
        </buffer>
        <secondary>
          @type "secondary_file"
          directory "/var/log/fluent/failed_records"
        </secondary>
      </match>
      <match debug.**>
        @type stdout
        @id output_stdout
      </match>
      <source>
        @type forward
        @id input_forward
      </source>
      <source>
        @type http
        @id input_http
        port 8888
      </source>
      <source>
        @type debug_agent
        @id input_debug_agent
        bind "127.0.0.1"
        port 24230
      </source>
      <match local.**>
        @type file
        @id output_file
        path "/var/log/fluent/access"
        <buffer time>
          path "/var/log/fluent/access"
        </buffer>
      </match>
      <source>
        @type tail
        @id input_tail
        path "/var/www/domain.com/logs/access.log"
        pos_file "/var/log/fluent/domain.com.access.log.pos"
        tag "local.domain.access"
        <parse>
          @type "nginx"
          unmatched_lines 
        </parse>
      </source>
    </ROOT>
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: starting fluentd-1.16.9 pid=45738 ruby="3.2.8"
    2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: spawn command to main:  cmdline=["/opt/fluent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/fluent/bin/fluentd", "--log", "/var/log/fluent/fluentd.log", "--daemon", "/var/run/fluent/fluentd.pid", "--under-supervisor"]
    2025-07-05 11:48:54 +0000 [info]: #0 fluent/log.rb:362:info: init worker0 logger path="/var/log/fluent/fluentd.log" rotate_age=nil rotate_size=nil
    2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding match pattern="td.*.*" type="tdlog"
    2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding match pattern="debug.**" type="stdout"
    2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding match pattern="local.**" type="file"
    2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="forward"
    2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="http"
    2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="debug_agent"
    2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="tail"
    2025-07-05 11:48:54 +0000 [debug]: #0 fluent/log.rb:341:debug: No fluent logger for internal event
    2025-07-05 11:48:54 +0000 [info]: #0 fluent/log.rb:362:info: starting fluentd worker pid=45747 ppid=45744 worker=0
    2025-07-05 11:48:54 +0000 [debug]: #0 [output_file] buffer started instance=2420 stage_size=0 queue_size=0
    2025-07-05 11:48:54 +0000 [debug]: #0 [output_file] flush_thread actually running
    2025-07-05 11:48:54 +0000 [debug]: #0 [output_td] buffer started instance=2360 stage_size=0 queue_size=0
    2025-07-05 11:48:54 +0000 [debug]: #0 [output_file] enqueue_thread actually running
    2025-07-05 11:48:54 +0000 [debug]: #0 [input_tail] Compacted entries: []
    2025-07-05 11:48:54 +0000 [debug]: #0 [input_tail] Remove missing entries. existing_targets=[] entries_after_removing=[]
    2025-07-05 11:48:54 +0000 [debug]: #0 [input_tail] tailing paths: target =  | existing = 
    2025-07-05 11:48:54 +0000 [info]: #0 [input_debug_agent] listening dRuby uri="<druby://127.0.0.1:24230>" object="Fluent::Engine" worker=0
    2025-07-05 11:48:54 +0000 [debug]: #0 [input_http] listening http bind="0.0.0.0" port=8888
    2025-07-05 11:48:54 +0000 [info]: #0 [input_forward] listening port port=24224 bind="0.0.0.0"
    2025-07-05 11:48:54 +0000 [info]: #0 fluent/log.rb:362:info: fluentd worker is now running worker=0
    2025-07-05 11:48:55 +0000 [debug]: #0 [output_td] flush_thread actually running
    2025-07-05 11:48:55 +0000 [debug]: #0 [output_td] enqueue_thread actually running
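    The telling line is [input_tail] tailing paths: target =  | existing =, i.e. in_tail matched no files when run under systemd. A common cause (an assumption here, not confirmed from the log) is that the packaged unit runs Fluentd as an unprivileged user that cannot read /var/www/domain.com/logs/, while the CLI run used a user that can. A sketch of how to test that theory:
    sudo systemctl edit fluentd.service
    # add, for testing only:
    # [Service]
    # User=root
    # Group=root
    sudo systemctl daemon-reload
    sudo systemctl restart fluentd.service
    If that confirms it, prefer granting the service user read access to the log directory over running as root.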
  • DennyF

    07/10/2025, 8:17 AM
    Hello 🙂
  • DennyF

    07/10/2025, 8:18 AM
    I have a question regarding masking (without any third-party plugin).
  • DennyF

    07/10/2025, 8:18 AM
    We have to migrate from NXLog to fluentd, and we need to mask sensitive fields.
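    A sketch using the built-in record_transformer filter, so no third-party plugin is needed (the tag pattern and the password field name are illustrative):
    <filter app.**>
      @type record_transformer
      enable_ruby true
      <record>
        # replace the field with a fixed mask when present; records without the
        # field end up with a nil placeholder, so adjust to taste
        password ${record["password"] ? "****" : nil}
      </record>
    </filter>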
  • Davidb

    07/10/2025, 9:12 AM
    Hi, I am new to fluentd. I am trying to replace our bad-practice service that is responsible for sending logs from a central NFS to ECK. When deploying into dev (we work on OpenShift), the PVC of the buffer was full and no log was sent to ECK. But while the buffer was full, service1.pos was still being updated by the fluentd pod (adding new log files and advancing the read bytes); I hoped it would stop, but it didn't. Here is the relevant section of the buffer documentation:
    overflow_action controls the buffer behavior when the queue becomes full. Supported modes:
    • throw_exception (default): throws the BufferOverflowError exception to the input plugin. How BufferOverflowError is handled depends on the input plugin; e.g. the tail input stops reading new lines. This action is suitable for streaming.
    • block: stops the input plugin thread until the buffer-full issue is resolved. This action is good for batch-like use cases and is mainly for the in_tail plugin; other input plugins, e.g. socket-based plugins, don't assume this action.
    • The docs do not recommend using the block action just to avoid BufferOverflowError; they suggest improving destination settings to resolve BufferOverflowError, or using the @ERROR label to route overflowed events to another backup destination (or a secondary with a lower retry_limit). Hitting BufferOverflowError frequently means the destination capacity is insufficient for the traffic.
    Both block and throw_exception should stop the tail thread, but my service1.pos file is still updating. Does anyone know why? And is there an option to stop reading the file?
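    One possible explanation, offered as an assumption: overflow_action only fires when the buffer hits its own configured limits, and a full PVC is not by itself a "full buffer", so in_tail keeps reading and the .pos file keeps advancing. A sketch that keeps the buffer's limit below the volume capacity so the overflow path triggers before the disk fills:
    <buffer>
      @type file
      path /fluentd/buffer
      overflow_action block
      total_limit_size 8GB   # illustrative; set comfortably below the PVC size
    </buffer>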
  • Davidb

    07/13/2025, 10:48 AM
    I'm using Fluentd with the in_tail plugin and the elasticsearch_data_stream output. My setup writes logs from many pods to a central NFS; Fluentd tails the log files from that shared NFS and forwards them to Elasticsearch.
    The problem: when Elasticsearch (ECK) is down and Fluentd's buffer fills up (I use @type file, overflow_action block, and retry_forever true), I expect Fluentd to stop reading logs and freeze the .pos file. But Fluentd continues reading and advancing the .pos offsets even though nothing is being flushed. This causes permanent data loss if Fluentd is restarted or crashes.
    Questions:
    • How can I make Fluentd stop reading (and stop .pos from advancing) when the buffer is full, to avoid reading and discarding logs that weren't flushed?
    • How do other people handle log file cleanup in a central NFS scenario?
    My current solution: I run a cronjob that deletes log files after Fluentd finishes reading them (based on .pos state). But since .pos updates before flush, this can cause logs to be deleted before they're successfully delivered.
    My env: fluentd 1.16.9, elasticsearch 8.15.
  • Dan Nelson

    07/14/2025, 12:06 AM
    @Dan Nelson has left the channel