I have created a docker-compose file for an ELK stack to parse logs.
[root@onw-kwah-2v ELK-compose]# cat docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - "discovery.type=single-node"
      - "xpack.security.enabled=false"
      - "XPACK_MONITORING_ENABLED=false"
    volumes:
      - esdata:/usr/share/elasticsearch/data:rw
    ports:
      - 9200:9200
    networks:
      - elk
    restart: unless-stopped
  kibana:
    depends_on:
      - elasticsearch
    image: docker.elastic.co/kibana/kibana:6.2.4
    networks:
      - elk
    environment:
      - "xpack.security.enabled=false"
      - "XPACK_MONITORING_ENABLED=false"
    ports:
      - 5601:5601
    restart: unless-stopped
  logstash:
    image: ervikrant06/logstashbpimage:6.2.4
    depends_on:
      - elasticsearch
    networks:
      - elk
    environment:
      - "xpack.security.enabled=false"
      - "INPUT1=/var/tmp/log"
      - "XPACK_MONITORING_ENABLED=false"
    restart: unless-stopped
    volumes:
      - /tmp/logstash/:/var/tmp/log
volumes:
  esdata:
    driver: local
networks:
  elk:
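The stack is brought up from the same directory with a plain docker-compose run; nothing beyond the defaults is assumed here:

# from the ELK-compose directory
docker-compose up -d
docker-compose ps   # all three containers should show as Up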
I used this Dockerfile to create my Logstash image. The purpose of the ENV is to take the log directory from the user, depending on where the log files are located on the host system.
# cat Dockerfile
FROM docker.elastic.co/logstash/logstash:6.2.4
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD pipeline/ /usr/share/logstash/pipeline/
ENV INPUT1 ${variable:-/var/tmp/}
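In the Dockerfile, ${variable:-/var/tmp/} simply gives INPUT1 a default of /var/tmp/ since variable is never set; at run time the value is overridden through the environment, which is what the compose file does with INPUT1=/var/tmp/log. As a rough standalone sketch (the image tag is the one used above; the -e and -v values are just examples):

# build the custom Logstash image
docker build -t ervikrant06/logstashbpimage:6.2.4 .
# override the default log location and mount the host directory
docker run -d -e INPUT1=/var/tmp/log -v /tmp/logstash/:/var/tmp/log ervikrant06/logstashbpimage:6.2.4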
My pipeline file is:
[root@onw-kwah-2v logstash]# cat pipeline/bp-filter.conf
input {
  file {
    path => "${INPUT1}/syslog.log*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    match => { "message" => ["%{TIMESTAMP_ISO8601:logdate} %{HOSTNAME:hostname} %{WORD:container_name}: %{GREEDYDATA:[@metadata][messageline]}",
                             "%{TIMESTAMP_ISO8601:logdate} %{HOSTNAME:hostname} %{WORD:container}\[%{INT:haproxy_id}\]: %{GREEDYDATA:[@metadata][messageline]}"] }
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
  mutate {
    remove_field => ["message", "@timestamp"]
  }
  json {
    source => "[@metadata][messageline]"
  }
  if "_jsonparsefailure" in [tags] {
    drop {}
  }
  date {
    match => ["logdate", "yyyy-MM-dd'T'HH:mm:ss.SSSSSSZ"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logs-%{+yyyy-MM-dd}"
    document_type => "applicationlogs"
  }
}
All containers start up successfully with docker-compose. Before bringing up the containers, the following files were placed in the /tmp/logstash directory on the host machine. I was expecting all of them to be parsed by Logstash because of the filename glob I used in the input path.
[root@onw-kwah-2v logstash]# ls
syslog.log syslog.log11thJune syslog.log.gz
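As a sanity check on the glob, all three filenames match syslog.log* when listed from inside the Logstash container via the bind mount (the container name below is just what docker-compose generated on my setup; adjust as needed):

# container name is an example; use the one docker-compose actually created
docker exec -it elkcompose_logstash_1 ls /var/tmp/log/
# expected: syslog.log  syslog.log11thJune  syslog.log.gz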
But it is only parsing the content of the syslog.log file, which contains messages from 8th June. Below is the list of indices present in ES. All of the log files contain similar log messages, so this is not a parsing issue.
# curl 172.19.0.2:9200/_cat/indices?pretty
green open .monitoring-es-6-2018.06.21 Hy-F0j9LSoWqSOx8TZ8hrg 1 0 195 0 140.2kb 140.2kb
yellow open logs-2018-06-08 FdP1XDFyQJ-RClcjtoPlAw 5 1 6 0 83kb 83kb
Can anyone please help me to understand the following points:
- Why is the monitoring index created in ES? I used the option to disable it in my compose file.
- Why is Logstash not able to read all of the input log files?
- If Kibana stores dashboards in the ES .kibana index, how can I save a dashboard so that users can access it after spinning up the ELK stack? I know a volume is used for persistent storage, but I am talking about a scenario in which this docker-compose setup is run on different machines, where storing the dashboard on a volume would not be an option.
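For reference on the last point, by "dashboard" I mean the saved objects that Kibana 6.x keeps in the .kibana index; they can be listed with a query like the one below (the IP is the same one used in the curl above, purely for illustration):

# list the dashboard saved objects Kibana stores in the .kibana index
curl '172.19.0.2:9200/.kibana/_search?q=type:dashboard&pretty'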