Hi All,
I was able to send logs to Elasticsearch directly from Filebeat with the configuration below.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
  # Authentication credentials - either API key or username/password.
  username: "elastic"
  password: "XXXXXXXXXXXXX"
  # Custom index name, as we do not want the 'filebeat-' prefix that Filebeat uses by default
  index: "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
# The settings below are mandatory when customizing the index name
setup.ilm.enabled: false
setup.template:
  name: 'network'
  pattern: 'network-*'
  enabled: false
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
# ============================= X-Pack Monitoring ==============================
#monitoring.elasticsearch:
monitoring:
  enabled: true
  cluster_uuid: 9ZIXSpCDBASwK5K7K1hqQA
  elasticsearch:
    hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
    username: beats_system
    password: XXXXXXXXXXXXXX
I enabled all the Cisco modules, and they create indices as below:
network-cisco.ios-YYYY.MM.DD
network-cisco.nexus-YYYY.MM.DD
network-cisco.asa-YYYY.MM.DD
network-cisco.ftd-YYYY.MM.DD
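For reference, the modules were enabled through their files under modules.d; below is a representative sketch of my cisco.yml (the syslog listen ports shown here are placeholders, not necessarily the ones from my environment):

```yaml
- module: cisco
  ios:
    enabled: true
    var.input: syslog
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9002
  asa:
    enabled: true
    var.input: syslog
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9001
```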
Up to this point there was no issue, but everything came to a halt when I tried to introduce Logstash between Filebeat and Elasticsearch.
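On the Filebeat side, I commented out output.elasticsearch and pointed Filebeat at the Logstash Beats port instead (the hostname below is a placeholder for my Logstash host):

```yaml
#output.elasticsearch:
#  hosts: ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
output.logstash:
  hosts: ["logstashhost.cluster.com:5046"]
```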
Below is the network.conf file for your analysis.
input {
  beats {
    port => "5046"
  }
}
output {
  if [event.dataset] == "cisco.ios" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.nexus" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.asa" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cisco.ftd" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "cef.log" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  else if [event.dataset] == "panw.panos" {
    elasticsearch {
      hosts => ["http://esnode1.cluster.com:9200","http://esnode2.cluster.com:9200"]
      index => "network-%{[event.dataset]}-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "XXXXXXXXXXXX"
      pipeline => "%{[@metadata][pipeline]}"
      manage_template => "false"
      ilm_enabled => "false"
    }
  }
  stdout { codec => rubydebug }
}
With the above configuration I am unable to get the Filebeat --> Logstash --> Elasticsearch pipeline that I am looking to achieve working.
No data gets added to Elasticsearch, even though stdout produces output when I run Logstash as below:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/network.conf
With --config.test_and_exit the config file tests successfully, and the command above prints JSON event lines to stdout, but in spite of that no documents get added to the existing indices (network-cisco.ios-YYYY.MM.DD, network-cisco.nexus-YYYY.MM.DD, etc.).
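Since the plain rubydebug codec hides the @metadata fields by default, I assume changing the stdout output to the variant below would show whether [@metadata][pipeline] is actually populated on the events reaching Logstash (this is just a debugging idea, not something I have confirmed fixes the issue):

```
stdout { codec => rubydebug { metadata => true } }
```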
When I changed the index name to 'test-%{+yyyy.MM.dd}' and tested with just a single Elasticsearch output, the same execution did create that index.
Also, when I take Logstash out of the equation, Filebeat continues writing to the existing indices without a problem, but that is not happening with the Logstash configuration above.
Any help would be greatly appreciated!
Thanks,
Arun