Creating a simple pipeline to parse Docker logs

As part of a lab exercise to learn Docker log parsing, I am creating a simple pipeline in which Filebeat sends the data to Logstash, and Logstash then indexes this data in ES by creating a new index.

root@elk-machine1:~/BELK# cat beat/filebeat.yml
filebeat.prospectors:
- type: log
  paths:
   - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  #processors:
  #- add_docker_metadata: ~
output.elasticsearch:
  hosts: ["logstash1:9200"]


root@elk-machine1:~/BELK# cat logstash/pipeline/logstash-docker.conf
input {
  beats {
    port => 5044
    codec => json_lines
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch1:9200"]
    index => "docker-logs"
  }
}

I used these commands to start the containers:

$ docker run -d --rm --name elasticsearch1 --hostname=elasticsearch1 --network=elk -p 9200:9200 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4


$ docker run -d --rm --name logstash1 --hostname=logstash1 --network=elk -v ~/BELK/logstash/pipeline/:/usr/share/logstash/pipeline/  docker.elastic.co/logstash/logstash:6.2.4

$ docker run -d --rm --name filebeat1 --hostname=filebeat1 --network=elk  -v ~/BELK/beat/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:6.2.4

After running the containers, I am not able to see any new index being created in ES. The following are the containers I started in order to generate some log files in the /var/lib/docker/containers/ directory. I even created some containers after starting the whole pipeline, to see whether the "start from beginning" issue was happening here, but no luck.

root@elk-machine1:~/BELK# docker ps -a | grep graphite
c8a7466c5abb        hopsoft/graphite-statsd                               "/sbin/my_init"          23 minutes ago      Up 23 minutes       80/tcp, 2003-2004/tcp, 2023-2024/tcp, 8125-8126/tcp, 8125/udp   graphite4
ed398e7f895c        hopsoft/graphite-statsd                               "/sbin/my_init"          35 minutes ago      Up 35 minutes       80/tcp, 2003-2004/tcp, 2023-2024/tcp, 8125-8126/tcp, 8125/udp   graphite3
2b9676a1cb97        hopsoft/graphite-statsd                               "/sbin/my_init"          About an hour ago   Up About an hour    80/tcp, 2003-2004/tcp, 2023-2024/tcp, 8125-8126/tcp, 8125/udp   graphite2
8251763d3082        hopsoft/graphite-statsd                               "/sbin/my_init"          About an hour ago   Up About an hour    80/tcp, 2003-2004/tcp, 2023-2024/tcp, 8125-8126/tcp, 8125/udp   graphite1

Sample log output from one container:

root@elk-machine1:~/BELK# docker logs 8251763d3082
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/01_conf_init.sh...
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 14
Jun  7 09:15:13 graphite1 syslog-ng[26]: syslog-ng starting up; version='3.5.3'
Jun  7 09:17:01 graphite1 /USR/SBIN/CRON[59]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Jun  7 10:17:01 graphite1 /USR/SBIN/CRON[62]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)

Can someone please help me understand what I am doing wrong here?

Adding some more information. I have verified the docker logs output for all containers and I don't see any issues in them.

root@elk-machine1:~/BELK# docker ps -a | egrep "filebeat|logstash|elastic"
e92ff3bed44c        docker.elastic.co/logstash/logstash:6.2.4             "/usr/local/bin/dock…"   29 minutes ago      Up 29 minutes       5044/tcp, 9600/tcp                                              logstash1
01bcee6e955c        docker.elastic.co/beats/filebeat:6.2.4                "/usr/local/bin/dock…"   32 minutes ago      Up 32 minutes                                                                       filebeat1
188286ec2a23        docker.elastic.co/elasticsearch/elasticsearch:6.2.4   "/usr/local/bin/dock…"   33 minutes ago      Up 33 minutes       0.0.0.0:9200->9200/tcp, 9300/tcp                                elasticsearch1

Output from the filebeat container:

root@elk-machine1:~# docker run --rm --name filebeat1 --hostname=filebeat1 --network=elk -v ~/BELK/beat/filebeat.yml:/usr/share/filebeat/filebeat.yml docker.elastic.co/beats/filebeat:6.2.4
2018-06-07T09:52:31.182Z	INFO	instance/beat.go:468	Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2018-06-07T09:52:31.187Z	INFO	instance/beat.go:475	Beat UUID: d430015a-201e-427b-8d5c-5e9e581ade12
2018-06-07T09:52:31.188Z	INFO	instance/beat.go:213	Setup Beat: filebeat; Version: 6.2.4
2018-06-07T09:52:31.189Z	INFO	elasticsearch/client.go:145	Elasticsearch url: http://logstash1:9200
2018-06-07T09:52:31.189Z	INFO	pipeline/module.go:76	Beat name: filebeat1
2018-06-07T09:52:31.190Z	INFO	instance/beat.go:301	filebeat start running.
2018-06-07T09:52:31.201Z	INFO	registrar/registrar.go:73	No registry file found under: /usr/share/filebeat/data/registry. Creating a new registry file.
2018-06-07T09:52:31.202Z	INFO	[monitoring]	log/log.go:97	Starting metrics logging every 30s
2018-06-07T09:52:31.207Z	INFO	registrar/registrar.go:110	Loading registrar data from /usr/share/filebeat/data/registry
2018-06-07T09:52:31.208Z	INFO	registrar/registrar.go:121	States Loaded from registrar: 0
2018-06-07T09:52:31.208Z	INFO	crawler/crawler.go:48	Loading Prospectors: 1
2018-06-07T09:52:31.209Z	INFO	log/prospector.go:111	Configured paths: [/var/lib/docker/containers/*/*.log]
2018-06-07T09:52:31.209Z	INFO	crawler/crawler.go:82	Loading and starting Prospectors completed. Enabled prospectors: 1
2018-06-07T09:53:01.205Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":0,"time":0},"total":{"ticks":30,"time":30,"value":30},"user":{"ticks":30,"time":30}},"info":{"ephemeral_id":"8ba63587-92a1-47a7-993d-341f49012766","uptime":{"ms":30029}},"memstats":{"gc_next":4473924,"memory_alloc":2786296,"memory_total":2786296,"rss":19984384}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0},"writes":1},"system":{"cpu":{"cores":1},"load":{"1":1.34,"15":0.41,"5":0.56,"norm":{"1":1.34,"15":0.41,"5":0.56}}}}}}
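
One detail worth pulling out of that monitoring line is the filebeat.harvester section, which reports how many files Filebeat is actually reading. A quick sketch of decoding the JSON payload (abbreviated here to the fields of interest):

```python
import json

# The relevant fragment of the "Non-zero metrics" payload above:
payload = json.loads(
    '{"monitoring": {"metrics": {'
    '"filebeat":{"harvester":{"open_files":0,"running":0}},'
    '"registrar":{"states":{"current":0},"writes":1}}}}'
)

harvester = payload["monitoring"]["metrics"]["filebeat"]["harvester"]
print(harvester["open_files"], harvester["running"])  # 0 0
```

Zero open files and zero running harvesters means the prospector has not matched any files at all, so it may be worth checking whether /var/lib/docker/containers is actually visible inside the filebeat container, independently of the output configuration.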

Hi,
As I can see, you are configuring the Elasticsearch output section but giving the Logstash host, while also using a Logstash pipeline.

Could you please check and confirm the scenario so that I can understand the problem?

Regards,

Thanks for pointing it out. I have fixed it. Basically, I want to send the Beat output to Logstash, and then have Logstash create the index and index the data into ES.

Corrected file:

root@elk-machine1:~/BELK# cat beat/filebeat.yml
filebeat.prospectors:
- type: log
  paths:
   - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  #processors:
  #- add_docker_metadata: ~
output.logstash:
  hosts: ["logstash1:9200"]

ES is still not showing me the index name that I mentioned in the Logstash output.

root@elk-machine1:~/BELK# curl localhost:9200/_cat/indices?pretty
green open .monitoring-es-6-2018.06.07 OgiRFGQITC-0HNsQbuP5yw 1 0 2255 2 906.4kb 906.4kb

Hi,
Please check the port setting. Filebeat sends to Logstash on port 5044 (by default), but in your config file you have mentioned port 9200, which is the Elasticsearch port.

Correct one is:

output.logstash:
  hosts: ["logstash1:5044"]

Using port 5044 will send the logs to the Logstash pipeline, and your Logstash pipeline config seems OK. Please change the port and restart your service.

Please let me know if you get any errors.

Regards,

Thanks, it was really a silly mistake.

root@elk-machine1:~/BELK# cat beat/filebeat.yml
filebeat.prospectors:
- type: log
  paths:
   - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  #processors:
  #- add_docker_metadata: ~
output.logstash:
  hosts: ["logstash1:5044"]
root@elk-machine1:~/BELK# cat logstash/pipeline/logstash-docker.conf
input {
  beats {
    port => 5044
    codec => json_lines
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch1:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
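
For reference, that index value uses Logstash's sprintf format: [@metadata][beat] and [@metadata][version] are populated by the beats input, and %{+YYYY.MM.dd} is formatted from the event timestamp. A hypothetical Python sketch of the expansion (the values are illustrative, not taken from a live event):

```python
from datetime import date

# Hypothetical @metadata, as the beats input would populate it:
metadata = {"beat": "filebeat", "version": "6.2.4"}
event_date = date(2018, 6, 7)

# Emulate "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
index = "{}-{}-{}".format(metadata["beat"], metadata["version"],
                          event_date.strftime("%Y.%m.%d"))
print(index)  # filebeat-6.2.4-2018.06.07
```

So if events were flowing, an index like filebeat-6.2.4-2018.06.07 would be expected to show up in _cat/indices.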

I still don't see a new index created in ES. Note: I have changed the name of the index that I want to create in ES.

root@elk-machine1:~/BELK# curl localhost:9200/_cat/indices?pretty
green open .monitoring-es-6-2018.06.07 OgiRFGQITC-0HNsQbuP5yw 1 0 3495 7 1.5mb 1.5mb

I have also tried without the json_lines codec, but the results are the same.

OK,

Could you please share your Filebeat and Logstash logs, and also the full Filebeat yml file? It will help me work out why the index is not being created in Elasticsearch.

Regards,

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.