SSL/TLS connections between ELK stack components with Docker

Hi,
I have a big problem with my ELK stack (version 8.7.0) running in Docker.
I have set up Elasticsearch, Kibana, Logstash, and Filebeat with docker-compose.

Elasticsearch and Kibana connect via SSL with the enrollment token I created with Elasticsearch; that works fine. But the other services do not work with SSL.

Here is how it should work:
Filebeat reads log files that are mounted into its container via a volume.
Filebeat should send them to Logstash; Logstash filters the data and sends it to Elasticsearch.

The connection from Filebeat to Logstash cannot be established.
I get this message:

Failed to publish events caused by: write tcp 192.168.13.6:49220->192.168.13.4:5044: write: connection reset by peer

The only documentation I have found is for local installations. I tried to adapt it, but it failed every time; I never got a connection over SSL/TLS.

Does anyone have experience with this and can help me?

Does nobody have a solution for this?

You didn't share your docker-compose file; you need to share it.

The error you are getting is unrelated to any tool in the stack; it is a network error. You first need to check whether the containers can talk to each other on the specified ports.
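
For example, you can rule out a basic network problem with a throwaway container on the same Docker network; the image and tool here are just one way to do it, anything that can open a TCP connection works:

# quick reachability check against the Logstash beats port
docker run --rm --network brdo0 busybox telnet 192.168.13.4 5044

If the connection is refused or times out, it is a network or port problem; if it connects, the port is reachable and you can look at the TLS configuration next.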

This is my docker-compose:

---
### Version 1.0.4
services:
### Elasticsearch Installation ########################################        
    sq-docker-elasticsearch:
        image: elasticsearch:8.6.2
        container_name: sq-docker-elasticsearch
        ports:
            - 9200:9200
            - 9300:9300
        volumes:
            - ./elasticsearchdata/data:/usr/share/elasticsearch/data
            - ./elasticsearchlogs:/usr/share/elasticsearch/logs
            - ./elasticsearchconf/config:/usr/share/elasticsearch/config
        networks:
            brdo0:
                ipv4_address: 192.168.13.2
        hostname: elasticsearch
        restart: unless-stopped

### Kibana Installation ################################################
    sq-docker-kibana:
        image: kibana:8.6.2
        container_name: sq-docker-kibana
        ports:
            - 5601:5601
        volumes:
            - ./kibana/config:/usr/share/kibana/config
            - ./kibana_data:/usr/share/kibana/data
            - /etc/timezone:/etc/timezone:ro
            - /etc/localtime:/etc/localtime:ro
        networks:
            brdo0:
                ipv4_address: 192.168.13.3   
        hostname: kibana
        links:
          - sq-docker-elasticsearch:elasticsearch
        restart: unless-stopped

### Logstash Installation ###############################################
    sq-docker-logstash:
        image: logstash:8.6.2
        container_name: sq-docker-logstash
        ports:
            - 9500:9500
            - 9350:5000
            - 9351:5044
        volumes:
            - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
            - ./logstash/pipeline:/usr/share/logstash/pipeline
            - /etc/timezone:/etc/timezone:ro
            - /etc/localtime:/etc/localtime:ro
        networks:
            brdo0:
                ipv4_address: 192.168.13.4
        hostname: logstash
        links:
          - sq-docker-elasticsearch:elasticsearch
        restart: unless-stopped

### Filebeat Installation ################################################
    sq-docker-filebeat:
        image: docker.elastic.co/beats/filebeat:8.6.2
        container_name: sq-docker-filebeat
        volumes:
            - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
            - /var/lib/docker/containers:/var/lib/docker/containers:ro
            - /var/run/docker.sock:/var/run/docker.sock:ro
            - ./storage/jenkins:/usr/share/jenkins_home:ro
            - ./filebeat/prospectors.d/:/usr/share/filebeat/prospectors.d/
            - /etc/timezone:/etc/timezone:ro
            - /etc/localtime:/etc/localtime:ro
        user: root
        environment:
            - strict.perms=false
            - output.elasticsearch.hosts=["sq-docker-elasticsearch:9200"]
        links:
          - sq-docker-elasticsearch:elasticsearch
        networks:
            brdo0:
                ipv4_address: 192.168.13.6
        hostname: filebeat
        restart: unless-stopped

### Network Declaration ######################################################
networks:
  brdo0:
    external: true
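
The brdo0 network already exists on the host; I created it beforehand roughly like this (subnet chosen to match the fixed addresses above):

docker network create --subnet=192.168.13.0/24 brdo0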

You need to share your filebeat.yml as well, plus your Logstash pipeline configuration with the beats input.
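
For reference, a beats input with TLS enabled usually looks roughly like this. The certificate paths are placeholders and have to point at files you mount into the Logstash container, and note that the beats input expects the private key in PKCS#8 format:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/usr/share/logstash/config/certs/logstash.crt"
    ssl_key => "/usr/share/logstash/config/certs/logstash.pkcs8.key"
  }
}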

This does not make much sense: you cannot have an elasticsearch output in Filebeat if you have a logstash output, since Filebeat only supports a single output at a time. I'm not sure why you have this setting.

That's the filebeat.yml:

#filebeat.registry_file: /usr/share/filebeat/data/registry
filebeat.config.inputs:
  # prospectors dynamically loaded from the sub-directory
  path: ${path.config}/prospectors.d/*.yml
  reload.enabled: false
filebeat.modules:
# All data goes to Logstash, which indexes it into Elasticsearch
output.logstash:
  hosts: ["192.168.13.4:5044"]

I copied the environment settings over from the old Docker server. Now I see that they don't make sense.

So I have to delete the line with the elasticsearch output, right?
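
And the TLS settings would then go into the Logstash output of filebeat.yml, something like this? The certificate paths are placeholders for wherever I mount the files, and as far as I understand the client certificate and key are only needed if Logstash is set to verify client certificates:

output.logstash:
  hosts: ["192.168.13.4:5044"]
  # CA that signed the Logstash server certificate
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.crt"]
  # client certificate and key, only for mutual TLS
  ssl.certificate: "/usr/share/filebeat/certs/filebeat.crt"
  ssl.key: "/usr/share/filebeat/certs/filebeat.key"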
