Here is my Fluentd configuration:
<source>
  @type tail
  @id tail_resumeparserLogs
  tag kubernetes.log
  path "/var/log/containers/rchilli-resumeparser-*.log"
  pos_file "/var/log/fluentd-resumeparser.log.pos"
  exclude_path ["/var/log/containers/fluentd-*.log"]
  refresh_interval 30
  read_from_head true
  <parse>
    @type "json"
    time_format "%Y-%m-%dT%H:%M:%S.%NZ"
    time_type string
  </parse>
</source>

<match kubernetes.log>
  @type elasticsearch
  @log_level debug
  host "#{ENV['FLUENT_ELASTICSEARCH_HOST'] || 'elastic-cluster-es-http.elastic-system.svc.cluster.local'}"
  port "#{ENV['FLUENT_ELASTICSEARCH_PORT'] || '9200'}"
  scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
  ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'false'}"
  user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
  password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
  reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
  logstash_format true
  logstash_dateformat "%Y.%m.%d"
  index_name "fluentd-yors"
  type_name "fluentd-yors"
  <buffer>
    flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
    flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
    chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
    queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
    retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
    retry_forever true
  </buffer>
</match>
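One note on the output settings above: with logstash_format true, the fluent-plugin-elasticsearch output derives the index name from logstash_prefix plus the date (so logstash-2024.04.15 by default) and ignores index_name, so any index that does get created would not be named fluentd-yors. A minimal sketch of the lines that control the name in that mode, assuming a fluentd-yors prefix is what is intended:

  # with logstash_format true, index_name is ignored; the index is named
  # "<logstash_prefix>-<date>", e.g. fluentd-yors-2024.04.15
  logstash_format true
  logstash_prefix "fluentd-yors"
  logstash_dateformat "%Y.%m.%d"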
The problem is that it is not creating the index in Elasticsearch.
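To narrow down whether the tail input is picking up events at all, as opposed to the Elasticsearch output failing, a debugging variant of the match block could tee the same events to stdout. This is only a sketch using the built-in copy and stdout plugins; the elasticsearch store keeps the settings shown above:

<match kubernetes.log>
  @type copy
  <store>
    # print every matched event to the fluentd container log
    @type stdout
  </store>
  <store>
    @type elasticsearch
    # ... same elasticsearch and buffer settings as above ...
  </store>
</match>

If nothing appears on stdout, the problem is in the tail/parse stage (path, permissions, or a pos_file that already points past the data) rather than in the Elasticsearch output.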