Logstash unable to create indices

Hello everyone, I am really hoping someone will be able to help me out here, as I am fairly new to the Elastic Stack, and to Linux in general. I have been seeing a strange problem with a Beats > Logstash > Elasticsearch pipeline that I have been unable to resolve for a few days now. The pipeline seems operational on all fronts, but the index is never created in Elasticsearch. The config files are as follows:


filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

setup.kibana:
  host: ""

output.logstash:
  hosts: [""]


logstash.conf:

input {
  beats {
    port => 6969
  }
}

output {
  elasticsearch {
    hosts => ""
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}


elasticsearch.yml:
cluster.name: logmgmt01cluster.log.lab.aginion.net
node.name: elastic01-a.log.lab.aginion.net
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
discovery.seed_hosts: [""]
cluster.initial_master_nodes: ["elastic01-a.log.lab.aginion.net"]

As an additional piece of info: the firewall and SELinux are off, everything runs on CentOS 7, and I can see Filebeat publishing events. On the Logstash side I see traffic coming in via tcpdump -Xni eth0 port 6969:

09:47:17.857497 IP > Flags [S], seq 2576683403, win 29200, options [mss 1460,sackOK,TS val 85653687 ecr 0,nop,wscale 7], length 0

and journalctl -u logstash shows the pipeline to Elasticsearch starting fine:

[logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.0"}
[org.reflections.Reflections] Reflections took 64 ms to scan 1 urls, producing 19 keys and 39 v
[logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :add
[logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"
[logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't
[logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticS
[logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2,...
[logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>""}
[logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_run
[org.logstash.beats.Server] Starting server on port: 6969
[logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
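Since the logs show the output connecting, here are the checks I ran to confirm whether anything is actually arriving (a diagnostic sketch; it assumes Elasticsearch is reachable on localhost:9200 from the Elasticsearch host, adjust host and port to your setup):

```
# On the Elasticsearch host: list every index the cluster knows about
curl -s 'http://localhost:9200/_cat/indices?v'

# Quick cluster health check
curl -s 'http://localhost:9200/_cluster/health?pretty'

# On the Logstash host: confirm the beats input is actually listening
ss -tlnp | grep 6969
```

If _cat/indices never shows a filebeat-* index while the Logstash logs look healthy, the events are being dropped or held somewhere between the output plugin and the cluster.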

However, I have noticed when using a separate server that tcpdump only sees and catches packets sent from one side, but I am not sure if I am barking up the wrong tree.
Any help would be more than appreciated.

This has been resolved: I found out that once a clustered environment is set up, you need to enable ILM (index lifecycle management) in the Logstash Elasticsearch output.
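For anyone hitting the same thing, the Elasticsearch output plugin has ILM-related settings that control this. A minimal sketch of the output block with ILM turned on (the rollover alias and policy name here are examples, not values from my setup; note that when ILM is active the index => option is ignored in favour of the rollover alias):

```
output {
  elasticsearch {
    hosts => ""
    ilm_enabled => true
    ilm_rollover_alias => "filebeat"    # example alias, adjust to your setup
    ilm_pattern => "{now/d}-000001"
    ilm_policy => "my-ilm-policy"       # example policy name
  }
}
```

With ilm_enabled left at its default of auto, the plugin decides based on the cluster version and license, which is where my setup tripped up.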
