Erik Bjers
04/09/2025, 10:52 PM

Yash Jain
04/16/2025, 7:43 AM

Doug Whitfield
04/22/2025, 4:10 PM
PID: 1 NAME: tini VmRSS: 4 kB
PID: 16 NAME: ruby VmRSS: 450500 kB
PID: 23 NAME: sh VmRSS: 1312 kB
PID: 7 NAME: fluentd VmRSS: 48784 kB
Srijitha S
04/26/2025, 6:06 AM
2025-04-26 06:23:57 +0000 [warn]: #0 fluent/log.rb:383:warn: failed to flush the buffer. retry_times=5 next_retry_time=2025-04-26 06:24:26 +0000 chunk="633a80cbab567f92f93159fdc356b3d5" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"quickstart-v2-es-http\", :port=>9200, :scheme=>\"https\", :user=>\"elastic\", :password=>\"obfuscated\"}): [400] {\"error\":{\"root_cause\":[{\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\"}],\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\",\"caused_by\":{\"type\":\"status_exception\",\"reason\":\"A compatible version is required on both Content-Type and Accept headers if either one has requested a compatible version. Accept=null, Content-Type=application/vnd.elasticsearch+x-ndjson; compatible-with=9\"}},\"status\":400}"
Current configuration
<match **>
@type elasticsearch
host quickstart-v2-es-http
port 9200
scheme https
user elastic
password xxxx
</match>
What I've tried:
• Verified network connectivity
• Confirmed credentials are correct
• Tried both application/json and default content types
Has anyone encountered this header compatibility issue between Fluentd 1.18 and Elasticsearch 9? Any guidance on required configuration changes would be greatly appreciated.
Additional info: these are the Elasticsearch client gem versions in use:
elastic-transport (8.4.0)
elasticsearch (9.0.2)
elasticsearch-api (9.0.2)
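The compatible-with=9 media type in the error comes from the 9.x Ruby client gems, while the Accept header is left unset, which Elasticsearch rejects. One workaround worth trying, assuming fluent-plugin-elasticsearch has not yet caught up with the 9.x client, is to pin the client gems back to the 8.x series (the exact version constraint below is illustrative):

```shell
# Illustrative workaround: pin the Elasticsearch Ruby client to the 8.x
# series so the transport no longer emits the compatible-with=9 media type.
# Verify plugin/client compatibility in a staging environment first.
fluent-gem uninstall elasticsearch elasticsearch-api elastic-transport
fluent-gem install elasticsearch -v '~> 8.0'
```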
Prasanth Ravi
05/14/2025, 4:27 AM
<buffer>
@type file
path /fluentd/buffer
flush_mode interval
flush_thread_count 4
flush_interval 10s
retry_forever true
retry_max_times 3
retry_max_interval 30s
overflow_action block
chunk_limit_size 5MB
queue_limit_length 512
</buffer>
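One note on the buffer block above: in Fluentd, retry_forever true causes retry_max_times to be ignored, so as written the output retries indefinitely. A sketch with bounded retries (assuming that was the intent) would drop retry_forever:

```
<buffer>
  @type file
  path /fluentd/buffer
  flush_mode interval
  flush_thread_count 4
  flush_interval 10s
  # retry_forever true would override retry_max_times, so it is omitted here
  retry_max_times 3
  retry_max_interval 30s
  overflow_action block
  chunk_limit_size 5MB
  queue_limit_length 512
</buffer>
```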
Prasanth Ravi
05/14/2025, 4:30 AM

Prasanth Ravi
05/14/2025, 4:30 AM

Prasanth Ravi
05/14/2025, 4:32 AM

Alec Holmes
05/14/2025, 4:20 PM

Ahmad Sherif
05/15/2025, 3:50 PM
The output of apt-get update is:
...
Get:11 <https://packages.treasuredata.com/lts/5/ubuntu/focal> focal/contrib all Packages [2,834 B]
Get:12 <https://packages.treasuredata.com/lts/5/ubuntu/focal> focal/contrib amd64 Packages [4,599 B]
Err:12 <https://packages.treasuredata.com/lts/5/ubuntu/focal> focal/contrib amd64 Packages
File has unexpected size (4302 != 4599). Mirror sync in progress? [IP: 18.173.166.102 443]
Hashes of expected file:
- Filesize:4599 [weak]
- SHA512:907296f5183eb31a1b490a503c22103d4bd238240f26a2a10a145d81dcb0f65e3606c4f6403161c06f1942217c1c84118a6cdadc549be72cf528656a1151a710
- SHA256:1f8d6c0e8b58e4bd62b8cea2ca22f5538cd39ddad8fd059f877f76745751e47b
- SHA1:c1c0ee04e1611d25c12905846e06ab0816e8edc3 [weak]
- MD5Sum:90a11d00260d4acc5f8268820a242fb3 [weak]
Release file created at: Thu, 15 May 2025 05:18:39 +0000
Fetched 10.9 kB in 1s (10.8 kB/s)
Reading package lists... Done
E: Failed to fetch <https://packages.treasuredata.com/lts/5/ubuntu/focal/dists/focal/contrib/binary-amd64/Packages.bz2> File has unexpected size (4302 != 4599). Mirror sync in progress? [IP: 18.173.166.102 443]
Hashes of expected file:
- Filesize:4599 [weak]
- SHA512:907296f5183eb31a1b490a503c22103d4bd238240f26a2a10a145d81dcb0f65e3606c4f6403161c06f1942217c1c84118a6cdadc549be72cf528656a1151a710
- SHA256:1f8d6c0e8b58e4bd62b8cea2ca22f5538cd39ddad8fd059f877f76745751e47b
- SHA1:c1c0ee04e1611d25c12905846e06ab0816e8edc3 [weak]
- MD5Sum:90a11d00260d4acc5f8268820a242fb3 [weak]
Release file created at: Thu, 15 May 2025 05:18:39 +0000
E: Some index files failed to download. They have been ignored, or old ones used instead.
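The "Mirror sync in progress?" hint is usually accurate: the size/hash mismatch means the CDN node is serving indexes mid-update. Clearing the cached package lists and retrying a little later typically resolves it (the steps below are generic apt hygiene, not specific to packages.treasuredata.com):

```shell
# Remove stale cached indexes, then re-fetch once the mirror has settled.
apt-get clean
rm -rf /var/lib/apt/lists/*
apt-get update
```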
Ahmad Sherif
05/15/2025, 3:51 PM

Harry Peach
05/21/2025, 10:54 AM

Emil Billberg
05/21/2025, 7:35 PM

Chandan Kumar
05/27/2025, 7:19 AM

eduardo
05/29/2025, 4:50 PM

eduardo
05/29/2025, 4:56 PM

zane
06/07/2025, 3:17 PM

eduardo
06/13/2025, 4:57 PM

Anuj Singh
06/20/2025, 10:48 PM
USER fluent
in its Dockerfile?

MugenOficial
07/01/2025, 4:30 AM

Miguel
07/01/2025, 1:14 PM

Elvinas Piliponis
07/02/2025, 5:22 AM

Anton
07/04/2025, 12:26 PM

Philipp Noack
07/05/2025, 11:49 AM
Running systemctl start fluentd.service doesn't work. This is the log after starting via systemctl (redacted):
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: init supervisor logger path="/var/log/fluent/fluentd.log" rotate_age=nil rotate_size=nil
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: parsing config file is succeeded path="/etc/fluent/fluentd.conf"
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluentd' version '1.16.9'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-calyptia-monitoring' version '0.1.3'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-elasticsearch' version '5.4.4'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-flowcounter-simple' version '0.1.0'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-kafka' version '0.19.3'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-metrics-cmetrics' version '0.1.2'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-opensearch' version '1.1.4'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-prometheus' version '2.1.0'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-prometheus_pushgateway' version '0.1.1'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-record-modifier' version '2.1.1'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-s3' version '1.7.2'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-sd-dns' version '0.1.0'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-systemd' version '1.1.0'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-td' version '1.2.0'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-utmpx' version '0.5.0'
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: gem 'fluent-plugin-webhdfs' version '1.5.0'
2025-07-05 11:48:53 +0000 [debug]: fluent/log.rb:341:debug: No fluent logger for internal event
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: using configuration file: <ROOT>
<system>
log_level debug
</system>
<match td.*.*>
@type tdlog
@id output_td
apikey xxxxxx
auto_create_table
<buffer>
@type "file"
path "/var/log/fluent/buffer/td"
</buffer>
<secondary>
@type "secondary_file"
directory "/var/log/fluent/failed_records"
</secondary>
</match>
<match debug.**>
@type stdout
@id output_stdout
</match>
<source>
@type forward
@id input_forward
</source>
<source>
@type http
@id input_http
port 8888
</source>
<source>
@type debug_agent
@id input_debug_agent
bind "127.0.0.1"
port 24230
</source>
<match local.**>
@type file
@id output_file
path "/var/log/fluent/access"
<buffer time>
path "/var/log/fluent/access"
</buffer>
</match>
<source>
@type tail
@id input_tail
path "/var/www/domain.com/logs/access.log"
pos_file "/var/log/fluent/domain.com.access.log.pos"
tag "local.domain.access"
<parse>
@type "nginx"
unmatched_lines
</parse>
</source>
</ROOT>
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: starting fluentd-1.16.9 pid=45738 ruby="3.2.8"
2025-07-05 11:48:53 +0000 [info]: fluent/log.rb:362:info: spawn command to main: cmdline=["/opt/fluent/bin/ruby", "-Eascii-8bit:ascii-8bit", "/opt/fluent/bin/fluentd", "--log", "/var/log/fluent/fluentd.log", "--daemon", "/var/run/fluent/fluentd.pid", "--under-supervisor"]
2025-07-05 11:48:54 +0000 [info]: #0 fluent/log.rb:362:info: init worker0 logger path="/var/log/fluent/fluentd.log" rotate_age=nil rotate_size=nil
2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding match pattern="td.*.*" type="tdlog"
2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding match pattern="debug.**" type="stdout"
2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding match pattern="local.**" type="file"
2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="forward"
2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="http"
2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="debug_agent"
2025-07-05 11:48:54 +0000 [info]: fluent/log.rb:362:info: adding source type="tail"
2025-07-05 11:48:54 +0000 [debug]: #0 fluent/log.rb:341:debug: No fluent logger for internal event
2025-07-05 11:48:54 +0000 [info]: #0 fluent/log.rb:362:info: starting fluentd worker pid=45747 ppid=45744 worker=0
2025-07-05 11:48:54 +0000 [debug]: #0 [output_file] buffer started instance=2420 stage_size=0 queue_size=0
2025-07-05 11:48:54 +0000 [debug]: #0 [output_file] flush_thread actually running
2025-07-05 11:48:54 +0000 [debug]: #0 [output_td] buffer started instance=2360 stage_size=0 queue_size=0
2025-07-05 11:48:54 +0000 [debug]: #0 [output_file] enqueue_thread actually running
2025-07-05 11:48:54 +0000 [debug]: #0 [input_tail] Compacted entries: []
2025-07-05 11:48:54 +0000 [debug]: #0 [input_tail] Remove missing entries. existing_targets=[] entries_after_removing=[]
2025-07-05 11:48:54 +0000 [debug]: #0 [input_tail] tailing paths: target = | existing =
2025-07-05 11:48:54 +0000 [info]: #0 [input_debug_agent] listening dRuby uri="<druby://127.0.0.1:24230>" object="Fluent::Engine" worker=0
2025-07-05 11:48:54 +0000 [debug]: #0 [input_http] listening http bind="0.0.0.0" port=8888
2025-07-05 11:48:54 +0000 [info]: #0 [input_forward] listening port port=24224 bind="0.0.0.0"
2025-07-05 11:48:54 +0000 [info]: #0 fluent/log.rb:362:info: fluentd worker is now running worker=0
2025-07-05 11:48:55 +0000 [debug]: #0 [output_td] flush_thread actually running
2025-07-05 11:48:55 +0000 [debug]: #0 [output_td] enqueue_thread actually running
DennyF
07/10/2025, 8:17 AM

DennyF
07/10/2025, 8:18 AM

DennyF
07/10/2025, 8:18 AM

Davidb
07/10/2025, 9:12 AM
overflow_action controls the buffer behavior when the queue becomes full. Supported modes:
• throw_exception (default): throws the BufferOverflowError exception to the input plugin. How BufferOverflowError is handled depends on the input plugin, e.g. tail input stops reading new lines. This action is suitable for streaming.
• block: stops the input plugin thread until the buffer-full issue is resolved. This action is good for batch-like use cases and is mainly for the in_tail plugin; other input plugins, e.g. socket-based plugins, don't assume this action. Using the block action to avoid BufferOverflowError is not recommended; consider improving destination settings to resolve BufferOverflowError, or use the @ERROR label to route overflowed events to another backup destination (or secondary with a lower retry_limit). If you hit BufferOverflowError frequently, it means your destination capacity is insufficient for your traffic.
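For reference, a minimal in_tail plus buffered-output sketch (paths, tag, and limits are illustrative) wiring up overflow_action block, the mode the docs describe as pausing in_tail while the buffer is full:

```
<source>
  @type tail
  path /var/log/service1.log
  pos_file /var/log/fluent/service1.pos
  tag service1
  <parse>
    @type none
  </parse>
</source>

<match service1>
  @type stdout
  <buffer>
    chunk_limit_size 1MB
    queue_limit_length 4
    overflow_action block   # in_tail pauses reading while the queue is full
  </buffer>
</match>
```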
Both block and throw_exception should stop the tail thread, but my service1.pos file is still updating.
Does anyone know why? And is there an option to stop reading the file?

Davidb
07/13/2025, 10:48 AM

Dan Nelson
07/14/2025, 12:06 AM