Filebeat with Logstash failed to create index in Elasticsearch

Everything was working fine after installing the ELK stack.
But then I deleted my index once, and after restarting all the services it fails to create a new index.
The Logstash log files look normal:

[2017-05-19T08:51:28,007][WARN ][logstash.runner ] SIGTERM received. Shutting down the agent.
[2017-05-19T08:51:28,014][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
[2017-05-19T08:51:33,037][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>28, "name"=>"[main]<beats", "current_call"=>"[...]/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.12-java/lib/logstash/inputs/beats.rb:213:in `run'"}, {"thread_id"=>23, "name"=>"[main]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:136:in `synchronize'"}, {"thread_id"=>24, "name"=>"[main]>worker1", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:136:in `synchronize'"}, {"thread_id"=>25, "name"=>"[main]>worker2", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:124:in `synchronize'"}, {"thread_id"=>26, "name"=>"[main]>worker3", "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:118:in `synchronize'"}]}}
[2017-05-19T08:51:33,038][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2017-05-19T08:51:50,952][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-05-19T08:51:50,956][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-05-19T08:51:51,097][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x8eae1b URL:http://localhost:9200/>}
[2017-05-19T08:51:51,100][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x6f9b267a URL://localhost:9200>]}
[2017-05-19T08:51:51,255][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-05-19T08:51:51,865][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-05-19T08:51:51,902][INFO ][logstash.pipeline ] Pipeline main started
[2017-05-19T08:51:51,980][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-05-19T08:51:56,190][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[http://localhost:9200/], :added=>[http://127.0.0.1:9200/]}}
[2017-05-19T08:51:56,195][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[2017-05-19T08:51:56,218][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x1e08aa6d URL:http://127.0.0.1:9200/>}
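
A quick way to confirm whether the index ever gets created is to list the indices directly against Elasticsearch (a sketch, assuming the default HTTP endpoint on localhost:9200 shown in the log above):

curl 'localhost:9200/_cat/indices?v'

If no filebeat-* index shows up there, nothing is reaching the output stage.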

Please provide the Filebeat and Logstash configurations that you are using.

filebeat.yml

filebeat.prospectors:

- input_type: log
  paths:
    - /home/patagonia/Downloads/trygrok.log

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

02-beats-input.conf

input {
  beats {
    port => 5044
  }
}
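
It is also worth confirming that Logstash is actually listening on the beats port before Filebeat tries to connect (standard Linux tooling, nothing Logstash-specific):

ss -tlnp | grep 5044

The "Starting input listener {:address=>"0.0.0.0:5044"}" line in the Logstash log above suggests the listener did come up.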

10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [message] =~ "^#" {
    drop {}
  }
  else {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{IPORHOST:serverip} %{WORD:method} %{URIPATH:stem} %{NOTSPACE:query} %{USERNAME:username} %{GREEDYDATA:referer} %{NUMBER:status} %{NUMBER:sc-bytes} %{NUMBER:cs-bytes} %{NUMBER:time-taken}" }
    }
    mutate {
      convert => { "status" => "integer" }
    }
  }
}
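
If there is any doubt that the second grok pattern matches the lines in trygrok.log, it can be exercised in isolation with a throwaway stdin-to-stdout pipeline (a sketch; the Logstash install path and the sample line are assumptions, substitute a real line from your file):

echo '2017-05-19 08:50:00 10.0.0.1 GET /index.html - admin http://example.com/ 200 1024 512 15' | /usr/share/logstash/bin/logstash -e '
  input { stdin { } }
  filter {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{IPORHOST:serverip} %{WORD:method} %{URIPATH:stem} %{NOTSPACE:query} %{USERNAME:username} %{GREEDYDATA:referer} %{NUMBER:status} %{NUMBER:sc-bytes} %{NUMBER:cs-bytes} %{NUMBER:time-taken}" }
    }
  }
  output { stdout { codec => rubydebug } }
'

A _grokparsefailure tag in the printed event means the pattern does not match and the expected fields will not be extracted.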

30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
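
Once events flow again, the daily index and its documents can be verified directly (assuming the filebeat-* naming produced by the index setting above):

curl 'localhost:9200/_cat/indices/filebeat-*?v'
curl 'localhost:9200/filebeat-*/_search?size=1&pretty'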

Is there something in the ES logs indicating this? Can you share them? The configs look good to me.

[2017-05-19T09:32:11,017][INFO ][o.e.n.Node ] [xK_xDGK] stopping ...
[2017-05-19T09:32:11,705][INFO ][o.e.n.Node ] [xK_xDGK] stopped
[2017-05-19T09:32:11,705][INFO ][o.e.n.Node ] [xK_xDGK] closing ...
[2017-05-19T09:32:11,858][INFO ][o.e.n.Node ] [xK_xDGK] closed
[2017-05-19T09:32:40,452][INFO ][o.e.n.Node ] [] initializing ...
[2017-05-19T09:32:41,579][INFO ][o.e.e.NodeEnvironment ] [xK_xDGK] using [1] data paths, mounts [[/ (/dev/mapper/ubuntu--vg-root)]], net usable_space [416.6gb], net total_space [452gb], spins? [possibly], types [ext4]
[2017-05-19T09:32:41,579][INFO ][o.e.e.NodeEnvironment ] [xK_xDGK] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-05-19T09:32:41,859][INFO ][o.e.n.Node ] node name [xK_xDGK] derived from node ID [xK_xDGKDRJet3HcD9GOBqA]; set [node.name] to override
[2017-05-19T09:32:41,859][INFO ][o.e.n.Node ] version[5.4.0], pid[20265], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Linux/4.4.0-38-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2017-05-19T09:33:03,454][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [aggs-matrix-stats]
[2017-05-19T09:33:03,456][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [ingest-common]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [lang-expression]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [lang-groovy]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [lang-mustache]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [lang-painless]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [percolator]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [reindex]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [transport-netty3]
[2017-05-19T09:33:03,457][INFO ][o.e.p.PluginsService ] [xK_xDGK] loaded module [transport-netty4]
[2017-05-19T09:33:03,459][INFO ][o.e.p.PluginsService ] [xK_xDGK] no plugins loaded
[2017-05-19T09:33:08,079][INFO ][o.e.d.DiscoveryModule ] [xK_xDGK] using discovery type [zen]
[2017-05-19T09:33:12,040][INFO ][o.e.n.Node ] initialized
[2017-05-19T09:33:12,040][INFO ][o.e.n.Node ] [xK_xDGK] starting ...
[2017-05-19T09:33:12,367][INFO ][o.e.t.TransportService ] [xK_xDGK] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2017-05-19T09:33:14,719][WARN ][o.e.m.j.JvmGcMonitorService] [xK_xDGK] [gc][young][1][2] duration [2.1s], collections [1]/[2.6s], total [2.1s]/[2.4s], memory [254.4mb]->[56mb]/[1.9gb], all_pools {[young] [234.5mb]->[12.8mb]/[266.2mb]}{[survivor] [19.9mb]->[33.2mb]/[33.2mb]}{[old] [0b]->[10mb]/[1.6gb]}
[2017-05-19T09:33:14,722][WARN ][o.e.m.j.JvmGcMonitorService] [xK_xDGK] [gc][1] overhead, spent [2.1s] collecting in the last [2.6s]
[2017-05-19T09:33:15,478][INFO ][o.e.c.s.ClusterService ] [xK_xDGK] new_master {xK_xDGK}{xK_xDGKDRJet3HcD9GOBqA}{Jjx3CwrxRJuXjfs43p1w0Q}{localhost}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-05-19T09:33:15,774][INFO ][o.e.h.n.Netty4HttpServerTransport] [xK_xDGK] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2017-05-19T09:33:15,778][INFO ][o.e.n.Node ] [xK_xDGK] started
[2017-05-19T09:33:16,373][WARN ][o.e.g.DanglingIndicesState] [xK_xDGK] [[.kibana/IoqXczKfSjGuM9Tlgaln6Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2017-05-19T09:33:16,374][INFO ][o.e.g.GatewayService ] [xK_xDGK] recovered [1] indices into cluster_state
[2017-05-19T09:33:16,886][WARN ][o.e.g.DanglingIndicesState] [xK_xDGK] [[.kibana/IoqXczKfSjGuM9Tlgaln6Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
[2017-05-19T09:33:17,802][INFO ][o.e.c.r.a.AllocationService] [xK_xDGK] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
[2017-05-19T09:33:18,029][WARN ][o.e.g.DanglingIndicesState] [xK_xDGK] [[.kibana/IoqXczKfSjGuM9Tlgaln6Q]] can not be imported as a dangling index, as index with same name already exists in cluster metadata
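
The dangling-index warnings concern .kibana only; the overall cluster state can be checked independently to rule out an Elasticsearch-side problem (a sketch, assuming the default HTTP port shown in the log above):

curl 'localhost:9200/_cluster/health?pretty'
curl 'localhost:9200/_cat/shards?v'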

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.