Logstash generating two indices with the same content

I have Elasticsearch, Logstash and Kibana running on Docker.

First I run Elasticsearch and Kibana and confirm that no index exists.

docker-compose-elastic-kibana.yml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
    container_name: elasticsearch-container
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    networks:
      - default
    healthcheck:
      test: ["CMD", "curl","-s" ,"-f", "-u", "elastic:", "http://localhost:9200/_cat/health"]
  kibana:
    image: docker.elastic.co/kibana/kibana:${ELK_VERSION}
    container_name: kibana-container
    environment:
      LS_JAVA_OPTS: "-Xms256m -Xmx256m"
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
    ports:
      - 5601:5601
    networks:
      - default
    depends_on:
      - elasticsearch
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:5601/login"]
      retries: 6
networks:
  default:
    external:
      name: my-network
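
For reference, the empty-cluster check can be done with the cat indices API (a minimal sketch, assuming the 9200:9200 mapping above):

curl -s 'http://localhost:9200/_cat/indices?v'

With no indices this returns only the header row.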

After that, I run Logstash without Filebeat.

docker-compose-logstash.yml
version: '3'
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:${ELK_VERSION}
    container_name: logstash-container
    volumes:
      ### Pipelines
      # File input
      - type: bind
        source: ./logstash/pipeline/logstash-file-sample.conf
        target: /usr/share/logstash/pipeline/logstash-file-sample.conf
        read_only: true
      # Filebeat input
      - type: bind
        source: ./logstash/pipeline/logstash-filebeat-sample.conf
        target: /usr/share/logstash/pipeline/logstash-filebeat-sample.conf
        read_only: true
      ### Logs
      - type: bind
        source: ./fakelogs
        target: /fakelogs
        read_only: true
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "8003:8003"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xms256m -Xmx256m"
    # Automatically reload pipeline files when changed
    command: --config.reload.automatic
    networks:
      - default
networks:
  default:
    external:
      name: my-network
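
Since port 9600 is published, the Logstash node API can be used to inspect what was actually loaded (a diagnostic sketch, assuming the default API settings):

curl -s 'http://localhost:9600/_node/pipelines?pretty'

This lists every pipeline Logstash is running, together with settings such as the number of workers.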

In Logstash I configured the two pipeline files shown below.
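
Note that I do not mount a custom pipelines.yml, so the default one from the image applies. To my knowledge the stock file looks like this (quoted as an assumption about the official image):

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"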

fakelogs\squid.log
1524206424.034   19395 207.96.0.0 TCP_MISS/304 15363 GET http://elastic.co/android-chrome-192x192.gif - DIRECT/10.0.5.120 -
1524206424.145     106 207.96.0.0 TCP_HIT/200 68247 GET http://elastic.co/guide/en/logstash/current/images/logstash.gif - NONE/- image/gif
1524206424.140     106 207.96.0.0 TCP_HIT/200 68240 GET http://elastic.co/guide/en/logstash/current/images/logstash40.gif - NONE/- image/gif

logstash\pipeline\logstash-file-sample.conf

In this pipeline I use a simple file input.

input {
  file {
    path => ["/fakelogs/squid.log"]
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "squid-%{+YYYY.MM.dd}"
    manage_template => true
    template => "/var/lib/logstash/template/squid_mapping.json"
    template_name => "squid_template"
    #user => "elastic"
    #password => "changeme"
  }
}
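
Whether the custom mapping took effect can be checked via the template endpoint (assuming a 7.x stack, where the template_name option above corresponds to the legacy _template API):

curl -s 'http://localhost:9200/_template/squid_template?pretty'
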
logstash\pipeline\logstash-filebeat-sample.conf

In this pipeline I use a Beats input.

input {
  beats {
    port => 5040
  }
}
filter {
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "filebeat-logstash-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

Lastly, I confirm that two indices were created with the same content.

filebeat-logstash-2020.10.22
squid-2020.10.22
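
The duplication shows up directly in the document counts (same 9200 mapping as above):

curl -s 'http://localhost:9200/_cat/indices/squid-*,filebeat-logstash-*?v&h=index,docs.count'

Both indices report the same docs.count, even though nothing is sending to the beats input.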

I tried changing port 5040 in "logstash-filebeat-sample.conf" to other values, but the same thing occurs.

Why is that?
