Elasticsearch not creating logstash or packetbeat indexes

I have set up elasticsearch, kibana, logstash, and packetbeat. I have an issue with elasticsearch not creating indexes, and therefore no data is getting to it.

The flows I am testing are:

  • packetbeat-> elasticsearch
  • logstash-> elasticsearch

My configuration is as follows:

system 1 (192.168.1.142): ubuntu 16.04.1, elasticsearch 5.1.1, kibana 5.1.1, logstash 5.1.1-1-1. Installed using apt, all default locations used.
system 2 (192.168.1.5): red hat 7.1 with packetbeat 5.5.1-1-1. Installed using yum, all default locations used.

I am trying to get data flowing logstash->elasticsearch and packetbeat->elasticsearch, and visualize it in kibana.

In both the logstash and packetbeat logs I am getting "index_not_found_exception" errors, and thus cannot see any data in elasticsearch (and therefore kibana).

My logstash.yml file:

node.name: hl142
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
http.host: "192.168.1.142"
http.port: 9600-9700
log.level: info
path.logs: /var/log/logstash

The error in the logstash log is:

[WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2016.12.27", :_type=>"logs", :_routing=>nil}, 2016-12-27T01:19:10.244Z hl142.local More data again], :response=>{"index"=>{"_index"=>"logstash-2016.12.27", "_type"=>"logs", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", "resource.type"=>"index_expression", "resource.id"=>"logstash-2016.12.27", "index_uuid"=>"na", "index"=>"logstash-2016.12.27"}}}}

The packetbeat.yml file is as follows:

packetbeat.interfaces.device: any
packetbeat.flows:
  timeout: 30s
  period: 10s
packetbeat.protocols.icmp:
  enabled: true
packetbeat.protocols.amqp:
  ports: [5672]
packetbeat.protocols.cassandra:
  ports: [9042]
packetbeat.protocols.dns:
  ports: [53]
  include_authorities: true
  include_additionals: true
packetbeat.protocols.http:
  ports: [80, 8080, 8000, 5000, 8002]
packetbeat.protocols.memcache:
  ports: [11211]
packetbeat.protocols.mysql:
  ports: [3306]
packetbeat.protocols.pgsql:
  ports: [5432]
packetbeat.protocols.redis:
  ports: [6379]
packetbeat.protocols.thrift:
  ports: [9090]
packetbeat.protocols.mongodb:
  ports: [27017]
packetbeat.protocols.nfs:
  ports: [2049]
output.elasticsearch:
  hosts: ["192.168.1.142:9200"]
  template.name: "packetbeat"
  template.path: "packetbeat.template.json"
  template.overwrite: true
  username: "elastic"
  password: "changeme"
logging.level: info

The error in the packetbeat log is:
WARN Can not index event (status=404): {"type":"index_not_found_exception","reason":"no such index","resource.type":"index_expression","resource.id":"packetbeat-2016.12.27","index_uuid":"na","index":"packetbeat-2016.12.27"}

According to the docs, the above packetbeat configuration is supposed to load the index template automatically, but apparently it did not, presumably because of the above error. I also tried loading the template manually (curl -XPUT 'http://192.168.1.142:9200/_template/packetbeat' -d@/etc/packetbeat/packetbeat.template.json); it returned no output, but it did not give me an error either.
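
One way I know of to double-check whether the template actually made it into the cluster (using the same host and elastic user as above) is to query the template API directly:

curl --user elastic:changeme 'http://192.168.1.142:9200/_template/packetbeat?pretty'

As far as I understand it, an empty { } response means the template was never stored, while getting the template body back means the template itself is fine and the problem is elsewhere.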

When I issue the command curl --user elastic:changeme http://192.168.1.142:9200/_cat/indices?v to view the existing indexes, I get the following:

yellow open .monitoring-kibana-2-2016.12.24 nAWRkNYaQb6PG8DmEqgs0w 1 1 17277 0 3.5mb 3.5mb
yellow open .monitoring-es-2-2016.12.23 r45ientZTpmr_fA-D5Dluw 1 1 67924 48 27.1mb 27.1mb
yellow open .monitoring-kibana-2-2016.12.25 -hEV89KeTtGM1iUyW44rYg 1 1 17275 0 3.4mb 3.4mb
yellow open .monitoring-es-2-2016.12.26 DYZJMLnsSX2R3adFHKOp8Q 1 1 137210 209 58.5mb 58.5mb
yellow open .monitoring-kibana-2-2016.12.23 KlT9KljUQnKYBQMhDhMi2A 1 1 13629 0 2.9mb 2.9mb
green open .security RErg66ENQbC0rX1j_QflyQ 1 0 1 0 4.4kb 4.4kb
yellow open .kibana Su74Qs5LTZWD3nEjdk1ApA 1 1 83 43 187.6kb 187.6kb
yellow open .monitoring-es-2-2016.12.25 VtRAzXOeRr-5RquoRr5Rig 1 1 119460 144 49.3mb 49.3mb
yellow open .monitoring-data-2 H0lC7w_VTJiye6X3kbAFhg 1 1 4 0 14kb 14kb
yellow open .monitoring-es-2-2016.12.24 JHwNiBPVQO-4n4yXdNgqeQ 1 1 102453 84 41.5mb 41.5mb
yellow open .monitoring-es-2-2016.12.27 BQ5j1cnvRXa9BuizfLB-OA 1 1 103416 225 44.6mb 44.6mb
yellow open .monitoring-kibana-2-2016.12.27 8IpkuUiZRe2YLs5GUSs1-A 1 1 11045 0 2.4mb 2.4mb
yellow open .monitoring-kibana-2-2016.12.26 E-K5tQoJTDSK3PqK7FtPLw 1 1 17266 0 3.5mb 3.5mb

The elasticsearch logs do not show any errors, and there is no mention of any problem creating indexes. Running a search against either logstash-* or packetbeat-* returns the same result:

{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
So... any ideas on what I have missed, or other things to test, are welcome. I have used the online documentation but sometimes it seems it is still written for earlier versions of the products.

Hey,

can you share your elasticsearch.yml configuration file? Is it possible that you disabled automatic index creation? See https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#index-creation
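
One quick way to check what the node is actually running with (assuming you are still using the built-in elastic user) is to pull the node settings and look for action.auto_create_index:

curl --user elastic:changeme 'http://192.168.1.142:9200/_nodes/settings?pretty'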

--Alex

Hello,

Thanks for the response! I was about to post how I resolved it when I saw your response. You are correct, the issue was with automatic index creation.

Following the "quick start" steps to configure elasticsearch and kibana, the steps included adding x-pack and adding the action.auto_create_index parm for the x-pack created indexes. HOWEVER the quick start doc did not indicate that, when that parameter line is used, elasticsearch will create indexes ONLY for values that map to the values specified on the parameter. Once I added the packetbeat and logstash masks to that parameter. It created the indexes.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.