Filebeat output not reaching Logstash

I have Logstash, Elasticsearch, and Kibana inside docker-compose, and they communicate with each other well. Filebeat, which is installed outside Docker, is responsible for fetching data from a file and sending it to Logstash. Filebeat is harvesting the data fine, as I can print it to the console, and it can also send data to Logstash when Logstash is installed directly on the VM rather than in Docker. My Logstash input is the beats plugin on port 5044, and my Filebeat config file filebeat.yml has:

output.logstash:
  hosts: ["172.17.0.1:5044", "0.0.0.0:5044", "localhost:5044"]

But Logstash receives no input. Why?

I even ran sudo filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' to load the index template.
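For reference, the pipeline config mounted into the Logstash container is essentially just a beats input on 5044 and an elasticsearch output. A rough sketch of what I mean, with my filters stripped out (the index name here is only an example):

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "ptp-corrections-%{+YYYY.MM.dd}"
  }
}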

Listing 0.0.0.0:5044 in the output.logstash.hosts option doesn't make any sense. It might not break anything, but I suggest you remove it anyway.
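Since Filebeat runs on the host and the compose file publishes 5044, a single entry should normally be enough, for example:

output.logstash:
  hosts: ["localhost:5044"]

(or 172.17.0.1:5044, the Docker bridge gateway, if localhost doesn't reach the published port for some reason).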

What does the Filebeat log say? If it has problems connecting to Logstash it'll tell you about it.
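For example, running it in the foreground with debug selectors enabled should show each connection attempt and published batch:

sudo filebeat -e -d "*"

You can also check that the published port is reachable from the machine Filebeat runs on, e.g. with netcat:

nc -zv localhost 5044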

I think it's connecting to Logstash, because if I stop Logstash it gives me an error; otherwise it doesn't.

Yeah, 0.0.0.0 was simply in there because I was trying all possible values.

This is my docker-compose.yml file:

version: '3.6'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
    container_name: elasticsearch
    environment:
      - cluster.name=ptp-corrections
      - path.data=/usr/share/elasticsearch/var/data
      - path.logs=/usr/share/elasticsearch/var/logs
      - path.repo=["/usr/share/elasticsearch/es_backup"]
      - bootstrap.memory_lock=true
      - transport.host=localhost
      - network.host=0.0.0.0
      - transport.tcp.port=9300
      - http.port=9200
      - "ES_JAVA_OPTS=-Xms100m -Xmx100m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elk-network

  logstash:
    image: docker.elastic.co/logstash/logstash-oss:6.2.4
    container_name: logstash
    environment:
      - path.data=/usr/share/logstash/var/data
      - path.logs=/usr/share/logstash/var/log
      - pipeline.batch.size=24
      - config.reload.automatic=true
      - queue.type=persisted
      - queue.page_capacity=4mb
      - queue.max_bytes=2mb
      - pipeline.unsafe_shutdown=true
    volumes:
      - /home/deviks/docker/logstash/pipeline/:/usr/share/logstash/pipeline/
    ports:
      - 9600:9600
      - 5044:5044
    depends_on:
      - elasticsearch
    networks:
      - elk-network

  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.2.4
    container_name: kibana
    environment:
      - server.port=5601
      - server.host="0.0.0.0"
      - elasticsearch.url="http://elasticsearch:9200"
    volumes:
      - kibana-data:/usr/share/kibana/data
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    networks:
      - elk-network

volumes:
  es-data:
    driver: local
  kibana-data:
    driver: local

networks:
  elk-network:
    driver: bridge

I also tried the same thing with Logstash alone in a Docker container, but it still didn't work.

Oh, it seems Logstash and Filebeat are connected after all. I removed my filters and found that Logstash had received 2 lines of the data that was sent, but the rest of the data never reached Logstash; I suppose it's getting lost somewhere in between. What should I do now? Filebeat is sending all the data correctly, because if I use stdout as the output everything is printed to the console, but it's not reaching Logstash.

It's so weird: Logstash always receives just 2 lines of data, the beginning of my file. I even cleared the file and it still shows those earlier starting lines. How can it access them after they've been deleted from the file? Is Filebeat sending the same data again and again? But with stdout as Filebeat's output it behaves normally, so why does this only happen when Logstash is inside Docker?

I had changed the queue type to persisted. When I changed it back to the default value, it worked fine. I think the memory was not enough to use a persistent queue.
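For anyone hitting the same thing: in the compose file above, queue.max_bytes was 2mb, smaller than queue.page_capacity (4mb), so the persisted queue had almost no room. A full persisted queue makes Logstash push back on the beats input, which would match only the first couple of lines getting through, and because a persisted queue survives restarts it would also explain the old lines reappearing after the file was cleared. If you do want to keep a persisted queue, the sizes need to be much larger, e.g. something along the lines of the Logstash defaults (64mb pages, 1024mb total):

    environment:
      - queue.type=persisted
      - queue.page_capacity=64mb
      - queue.max_bytes=1024mb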

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.