I am trying to forward my local server log from windows to an elasticsearch server in a linux machine and check these logs in the kibana. This is test environment currently. fuentd on either ends is not showing any issues. But there is no index created in kibana. Not sure what the issue here is. Please find the config files of the two servers below.
One more question is there any way to know where a forwarded log is stored #in destination server.
Note: I have configured Ruby 2.6.3 in my Rhel 7.5 machine. I have installed fluentd through Gem install method and I also installed fluentd-plugin-elasticsearch 3.5.5 and elasticsearch 7.2.1 gems which are compatible with my current elasticsearch version.
Forwarder Windows server
<source>
@type tail
tag server
path C:\sw_logs\server.log
pos_file C:\opt\pos_files\server.log.pos
<parse>
@type json
</parse>
</source>
<match server>a
@type stdout
</match>
<match server>
@type forward
send_timeout 60s
<server>
host x.x.x.154
port 24224
</server>
<buffer>
retry_max_times 3
retry_randomize false
retry_max_interval 32s
retry_timeout 1h
path /var/log/fluentd/forward.log
</buffer>
</match>
Aggreagation and Elasticsearch forward config Rhel 7.5
<source>
@type forward
port 24224
</source>
<match server>
@type copy
<store>
@type elasticsearch
path /var/log/fluentd/forward.log
host x.x.x.154
port 9200
logstash_format false
index_name fluentd.${tag}.%Y%m%d
type_name fluentd
type_name "_doc"
#New Change begin
utc_index true
#End new Change
verify_es_version_at_startup false
default_elasticsearch_version 7.x
max_retry_get_es_version 1
max_retry_putting_template 1
<buffer>
@type file
path /var/log/ge_efk_logdata/buffer/win29.buffer/
# Buffer Parameters
flush_thread_count 3
chunk_limit_size 16MB
total_limit_size 4GB
queue_limit_length
chunk_full_threshold 0.85
compress gzip
retry_timeout
# Flush Parameters
flush_at_shutdown false
#Assuming persistent buffers
flush_mode immediate
#flush_interval 60s
flush_thread_count 2
flush_thread_interval 1.0
flush_thread_burst_interval 1.0
delayed_commit_timeout 60s
overflow_action throw_exception
# Retry Parameters
retry_timeout 1h
retry_forever false
retry_max_times 5
</buffer>
</store>
</match>
Update:
Windows server:
##Forwarder Windows server
<source>
@type tail
tag match.tag
path C:\sw_logs\sample_json.json
read_from_head true
pos_file C:\opt\pos_files\sample_json.json.pos
#@log_level debug
#This is writing the debug data to the destination server.
#if any store condition is given the data is stored in that file. Make sure you disable this option once your setup is complete.
@type json
emit_unmatched_lines true
</source>
<match **>
@type forward
send_timeout 60s
recover_wait 10s
hard_timeout 60s
Details of Fluent Aggreagator server IP and Port
host x.x.x.154
port 24224
Buffer parameters to send Log file from Windows client to Fluent Aggreagator
@type file
path C:\sw_logs\buffer[client_buff.ge](http://client_buff.ge/)
retry_max_times 3
retry_randomize false
retry_max_interval 32s
retry_timeout 1h
</match>
Linux Server:
<source>
@type forward
port 24224
tag match.tag
skip_invalid_event false
</source>
<match match.*>
@type copy
<store>
# @type file
# path /var/log/fluentd/forward.log
@type elasticsearch
host x.x.x.154
port 9200
logstash_format false
index_name fluentd.${tag}.%Y%m%d
type_name fluentd
type_name "_doc"
#New Change begin
utc_index true
#End new Change
verify_es_version_at_startup false
default_elasticsearch_version 7.x
max_retry_get_es_version 1
max_retry_putting_template 1
<buffer>
# Buffer Parameters
flush_thread_count 3
#chunk_limit_size 16MB
total_limit_size 4GB
queue_limit_length
chunk_full_threshold 0.85
compress gzip
retry_timeout
# Flush Parameters
flush_at_shutdown false
#Assuming persistent buffers
flush_mode immediate
#flush_interval 60s
flush_thread_count 2
flush_thread_interval 1.0
flush_thread_burst_interval 1.0
delayed_commit_timeout 60s
overflow_action throw_exception
# Retry Parameters
retry_timeout 1h
retry_forever false
retry_max_times 5
# Write to file incase of failure / race condition(deadlock) or
# exceeds given buffer limit
@type file
path /var/log/ge_efk_logdata/buffer/29_buff.log
</buffer>
</store>
Error: fluent/log.rb:362:error: unexpected error on reading data host="x.x.x.29" port=61349 error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data"