Guilherme Zanini (01/28/2025, 4:17 PM):
INSERT instead of COPY. Is there any particular reason for this? I'm asking because, in theory, COPY is more efficient than INSERT when inserting large numbers of entries.
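For context, a rough sketch of the difference in PostgreSQL (table and column names here are hypothetical):

-- Multi-row INSERT: each statement still goes through parse/plan/execute.
INSERT INTO logs (ts, message) VALUES
  ('2025-01-28 16:17:00', 'first entry'),
  ('2025-01-28 16:17:01', 'second entry');

-- COPY streams all rows through a single command, skipping the
-- per-statement overhead, which is why it is usually faster for bulk loads.
COPY logs (ts, message) FROM STDIN WITH (FORMAT csv);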
nishant_dhingra (02/12/2025, 5:26 AM):
Name:         smnts-semanticsearchbatch-json-applogs
Namespace:    jc-pd
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: smnts-semanticsearchbatch-pd
              meta.helm.sh/release-namespace: cattle-logging-system
API Version:  logging.banzaicloud.io/v1beta1
Kind:         Flow
Metadata:
  Creation Timestamp:  2024-12-20T05:43:07Z
  Generation:          1
  Resource Version:    6359244
  UID:                 25ef9de3-5e5d-40ff-8069-c808f8ee0690
Spec:
  Filters:
    record_modifier:
      Records:
        Platform:  application-logs
    Parser:
      Parse:
        Patterns:
          Expression:   /^[[^ ]* [(?[^\]])] [(?[^\]])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] [(?[^ ])] (?[^\n](\n^[^\[].|$))/
          Format:       regexp
          Format:       json
          time_format:  %Y-%m-%dT%H:%M:%S.%NZ
        Type:  multi_format
      remove_key_name_field:  true
      reserve_data:           true
      reserve_time:           true
    Grep:
      Exclude:
        Key:      log
        Pattern:  /.+|\s+/
        Key:      logType
        Pattern:  /business_event/
  Global Output Refs:
    pd-es-output
  Match:
    Select:
      container_names:
        smnts-semanticsearchbatch**
Below is the clusteroutput config.
Name:         pd-es-output
Namespace:    cattle-logging-system
Labels:
Annotations:
API Version:  logging.banzaicloud.io/v1beta1
Kind:         ClusterOutput
Metadata:
  Creation Timestamp:  2024-12-20T05:43:54Z
  Generation:          2
  Resource Version:    27785653
  UID:                 5c387eb0-e622-4afa-af87-26ab6c883074
Spec:
  Elasticsearch:
    Buffer:
      chunk_limit_size:          50m
      flush_interval:            10s
      queued_chunks_limit_size:  10240
      retry_max_times:           8
      retry_timeout:             60s
      retry_wait:                5s
      Timekey:                   1m
      timekey_use_utc:           true
      timekey_wait:              30s
    Host:               ******************
    include_timestamp:  true
    index_name:         ************
    Password:
      Value From:
        Secret Key Ref:
          Key:   elastic
          Name:  elastic-user
    Port:                ******
    reload_connections:  true
    reload_on_failure:   true
    request_timeout:     120s
    Scheme:              http
    suppress_type_name:  true
    time_key_format:     %Y-%m-%dT%H:%M:%S.%N%z
    User:                *******
Kindly help with this problem.

nishant_dhingra (02/12/2025, 5:26 AM):
Logging Operator Version: 103.1.1+up4.4.0
Fluentd Image: rancher/mirrored-banzaicloud-fluentd:v1.14.6-alpine-5
Operator Image: rancher/mirrored-kube-logging-logging-operator:4.4.0
Fluentbit Image: rancher/mirrored-fluent-fluent-bit:2.2.0
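One detail worth checking in the Flow above, though it may or may not be related to the problem: the first Grep Exclude pattern /.+|\s+/ matches any non-empty value, so as written it would exclude essentially every record that carries a log key. If the intent was to drop only blank or whitespace-only lines, a narrower sketch in the same logging-operator Flow syntax might look like this (the surrounding fields are elided):

apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
spec:
  filters:
    - grep:
        exclude:
          - key: log
            pattern: /^\s*$/        # drop only empty or whitespace-only lines
          - key: logType
            pattern: /business_event/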
Bonny Rais (03/04/2025, 2:52 AM):
<135>Mar 3'
If I do not set time_format I get some other error related to time. What I'd like to find out is which timestamp generates this and why. The syslog parser is configured with message_format rfc3164.
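For reference, the <135> prefix is a syslog PRI header (135 = facility 16/local0, severity 7/debug), and rfc3164 timestamps look like "Mar  3 02:52:00", which strptime matches with %b %d %H:%M:%S. A minimal parse sketch of the kind of source being described (the port and tag are assumptions):

<source>
  @type syslog
  port 5140
  tag system                     # hypothetical tag
  <parse>
    @type syslog
    message_format rfc3164
    time_format %b %d %H:%M:%S   # matches timestamps like "Mar  3 02:52:00"
  </parse>
</source>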
Naseer Hussain (03/14/2025, 6:24 AM):
flush_mode immediate is configured. When I checked the fluentd logs, there is an ENOENT error: error="No such file or directory - fstat", but the .log files are present and the buffer folder has full permissions, so the chunks should be flushed, yet they aren't. When I restart the container, the logs are flushed, but fluentd then logs an error that it is unable to purge those flushed .log files. I think fluentd is unable to recognize the .log buffer files but is able to access the .log.meta files. Why is this happening? Did I configure something wrong? How do I resolve it? (I'll provide more information if you let me know what you need.)
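For reference, a minimal sketch of the kind of file-buffer section being described (the output type and paths are hypothetical; the buffer block is the relevant part):

<match **>
  @type file                     # hypothetical output
  path /fluentd/output
  <buffer>
    @type file
    path /fluentd/buffer         # chunks appear here as *.log plus *.log.meta
    flush_mode immediate         # flush each chunk as soon as it is enqueued
  </buffer>
</match>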
Justen L. (04/07/2025, 2:27 PM):
FLUENT_UID and setting it to 0 in the yaml file... yet my problem still persists. Any advice? I also had to disable TLS for now, as it was throwing an error too.

Justen L. (04/08/2025, 5:10 PM):
fluentd-kubernetes-daemonset:v1.18.0-debian-elasticsearch8-1.4 image, as we are running Elasticsearch 8.15.. still seeing continuous errors:
[error]: #0 unexpected error error_class=Errno::EACCES error="Permission denied @ rb_sysopen - /var/log/fluentd-containers.log.pos"
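For reference, FLUENT_UID is the environment variable the fluentd-kubernetes-daemonset entrypoint reads to decide which UID fluentd runs as; setting it to 0 runs fluentd as root so it can read the pos files under /var/log. A minimal DaemonSet fragment (container name and mounts are illustrative, only the relevant fields shown):

containers:
  - name: fluentd
    image: fluent/fluentd-kubernetes-daemonset:v1.18.0-debian-elasticsearch8-1.4
    env:
      - name: FLUENT_UID
        value: "0"               # run as root so /var/log/*.pos is readable
    volumeMounts:
      - name: varlog
        mountPath: /var/log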
Doug Whitfield (04/22/2025, 4:10 PM):
PID: 1 NAME: tini VmRSS: 4 kB
PID: 16 NAME: ruby VmRSS: 450500 kB
PID: 23 NAME: sh VmRSS: 1312 kB
PID: 7 NAME: fluentd VmRSS: 48784 kB
Srijitha S (04/26/2025, 6:06 AM):
2025-04-26 06:23:57 +0000 [warn]: #0 fluent/log.rb:383:warn: failed to flush the buffer. retry_times=5 next_retry_time=2025-04-26 06:24:26 +0000 chunk="633a80cbab567f92f93159fdc356b3d5" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"quickstart-v2-es-http\", :port=>9200, :scheme=>\"https\", :user=>\"elastic\", :password=>\"obfuscated\"}): [400] {\"error\":{\"root_cause\":[{\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\"}],\"type\":\"media_type_header_exception\",\"reason\":\"Invalid media-type value on headers [Accept, Content-Type]\",\"caused_by\":{\"type\":\"status_exception\",\"reason\":\"A compatible version is required on both Content-Type and Accept headers if either one has requested a compatible version. Accept=null, Content-Type=application/vnd.elasticsearch+x-ndjson; compatible-with=9\"}},\"status\":400}"
Current configuration
<match **>
@type elasticsearch
host quickstart-v2-es-http
port 9200
scheme https
user elastic
password xxxx
</match>
What I've tried:
• Verified network connectivity
• Confirmed credentials are correct
• Tried both application/json and default content types
Has anyone encountered this header compatibility issue between Fluentd 1.18 and Elasticsearch 9? Any guidance on required configuration changes would be greatly appreciated.
Additional info: these are the Elasticsearch plugin gem versions:
elastic-transport (8.4.0)
elasticsearch (9.0.2)
elasticsearch-api (9.0.2)
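Not a confirmed fix, but the compatible-with=9 Content-Type in that error comes from the elasticsearch 9.x client gems listed above, and fluent-plugin-elasticsearch generally expects the client gem's major version to match what the cluster accepts. One common workaround (versions are illustrative) is to pin the client gem before installing the plugin when building the image:

# Pin the Elasticsearch client gem to a major version the cluster accepts
# (illustrative versions), then install the plugin against it.
fluent-gem install elasticsearch -v 8.15.0
fluent-gem install fluent-plugin-elasticsearch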