Hi!
I would like to check that my understanding is correct and that these configurations are sound.
I would like to deploy a 3-node cluster.
First, here is the elasticsearch.yml config for each of the 3 nodes:
Node 1:
cluster.name: ES_cluster
node.name: node-1
network.host: localhost
discovery.zen.ping.unicast.hosts: ["10.1.200.11", "10.1.200.10"] # just examples
discovery.zen.minimum_master_nodes: 2
index.number_of_shards: 3
index.number_of_replicas: 2
Node 2:
cluster.name: ES_cluster
node.name: node-2
network.host: localhost
discovery.zen.ping.unicast.hosts: ["10.1.200.9", "10.1.200.10"] # just examples
discovery.zen.minimum_master_nodes: 2
index.number_of_shards: 3
index.number_of_replicas: 2
Node 3:
cluster.name: ES_cluster
node.name: node-3
network.host: localhost
discovery.zen.ping.unicast.hosts: ["10.1.200.11", "10.1.200.9"] # just examples
discovery.zen.minimum_master_nodes: 2
index.number_of_shards: 3
index.number_of_replicas: 2
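Once the three nodes are started, cluster formation can be verified from any one of them (a sketch, assuming the default HTTP port 9200 and that a node is reachable at 10.1.200.9; both are assumptions):

```shell
# Ask any one node for the overall cluster health.
# "number_of_nodes" should report 3, and "status" should turn green
# once all primary shards and replicas are allocated.
curl -s 'http://10.1.200.9:9200/_cluster/health?pretty'

# List the individual nodes and see which one was elected master.
curl -s 'http://10.1.200.9:9200/_cat/nodes?v'
```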
Then I would like to know: on which node should I put my Logstash instance?
Here is the config file:
input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "server_log" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      # "MMM  d" (two spaces) matches syslog's space-padded single-digit days
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  else if [type] == "apache_access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
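Wherever Logstash ends up, the pipeline file can be syntax-checked before the service is started (a sketch, assuming the file is saved as /etc/logstash/conf.d/beats.conf and a package install; paths vary by install method):

```shell
# Validate the pipeline syntax without actually starting Logstash.
# Note: older 2.x releases use --configtest instead of --config.test_and_exit.
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/beats.conf
```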
For the clients, I'll install Filebeat. Should I put all the node IPs in the config file?
The config file:
############################# Filebeat ######################################
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      paths:
        - /var/log/apache2/*.log
      input_type: log
      document_type: apache_access
    -
      paths:
        - /var/log/*.log
      input_type: log
      document_type: server_log
  registry_file: /var/lib/filebeat/registry

output:
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["10.1.200.9:5044", "10.1.200.10:5044", "10.1.200.11:5044"]

shipper:
  # The name of the shipper that publishes the network data. It can be used to group
  # all the transactions sent by a single shipper in the web interface.
  # If this option is not defined, the hostname is used.
  name: elk_client

logging:
  # To enable logging to files, the to_files option has to be set to true
  files:
    rotateeverybytes: 10485760 # = 10MB
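The Filebeat config can also be checked on each client before shipping anything (assuming the file sits at /etc/filebeat/filebeat.yml; the flag below is from the 1.x/5.x series):

```shell
# Validate the configuration file and exit without sending any events.
filebeat -configtest -c /etc/filebeat/filebeat.yml
# Later versions renamed this to a subcommand:
# filebeat test config -c /etc/filebeat/filebeat.yml
```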
And finally, on which node should I install Kibana?
The goal is for the cluster to always be up and receiving logs.
Thanks for everything!