Logstash unable to create indices

Hello everyone, I am really hoping someone will be able to help me out here, as I am fairly new to the Elastic Stack and to Linux in general. I have been seeing a strange problem with a Beats > Logstash > Elasticsearch pipeline that I have been unable to resolve for a few days now. The pipeline seems operational on all fronts, but the index is never created in Elasticsearch. The config files are as follows:

Filebeat:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

setup.kibana:
  host: "10.226.10.20:5601"

output.logstash:
  hosts: ["10.226.100.21:6969"]

Logstash:

input {
  beats {
    port => 6969
  }
}

output {
  elasticsearch {
    hosts => "10.226.100.31:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
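
With this output I would expect daily indices named something like filebeat-7.3.0-YYYY.MM.dd to appear. A quick way to confirm whether they are actually being created (just a sketch, run from anywhere that can reach the Elasticsearch node) is the _cat indices API:

  curl 'http://10.226.100.31:9200/_cat/indices?v'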

Elasticsearch:

cluster.name: logmgmt01cluster.log.lab.aginion.net
node.name: elastic01-a.log.lab.aginion.net
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 10.226.100.31
http.port: 9200
discovery.seed_hosts: ["10.226.100.31"]
cluster.initial_master_nodes: ["elastic01-a.log.lab.aginion.net"]

As an additional piece of info: the firewall and SELinux are off, everything is running on CentOS 7, and I can see Filebeat publishing events. On the Logstash side I see traffic coming in via tcpdump -Xni eth0 port 6969:

09:47:17.857497 IP 10.226.100.10.55318 > 10.226.100.21.acmsoda: Flags [S], seq 2576683403, win 29200, options [mss 1460,sackOK,TS val 85653687 ecr 0,nop,wscale 7], length 0

and journalctl -u logstash shows the pipeline to Elasticsearch starting fine:

[logstash.runner ] Starting Logstash {"logstash.version"=>"7.3.0"}
[org.reflections.Reflections] Reflections took 64 ms to scan 1 urls, producing 19 keys and 39 v
[logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :add
[logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.226.100.3
[logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't
[logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticS
[logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2,...
[logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:6969"}
[logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_run
[org.logstash.beats.Server] Starting server on port: 6969
[logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

However, I have noticed that when capturing from a separate server, tcpdump only sees packets sent from 10.226.100.10, but I am not sure if I am barking up the wrong tree.
Any help would be more than appreciated.

This has been resolved: I found out that once a clustered environment is set up, you need to turn on ILM on the Logstash side.
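
For anyone hitting the same thing, the change amounts to enabling ILM in the Logstash elasticsearch output, roughly like this (a sketch only; the rollover alias and pattern values shown here are illustrative and were not part of my original config):

output {
  elasticsearch {
    hosts => "10.226.100.31:9200"
    # turn on index lifecycle management; writes then go through a rollover alias
    ilm_enabled => true
    # optional overrides, the plugin falls back to its defaults if these are omitted
    ilm_rollover_alias => "filebeat"
    ilm_pattern => "{now/d}-000001"
  }
}

If I read the plugin docs correctly, the index setting from my earlier config is ignored once ILM is enabled, since documents are written through the rollover alias instead.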
