I don't check this server regularly, but I found today that no results are showing in Kibana. There are no errors in the logs.
Elasticsearch log
[2018-01-02 11:45:12,114][INFO ][env ] [Justin Hammer] using [1] data paths, mounts [[/ (/dev/mapper/logstash--vg-root)]], net usable_space [360.2gb], net total_space [440.6gb], spins? [possibly], types [ext4]
[2018-01-02 11:45:12,114][INFO ][env ] [Justin Hammer] heap size [15.9gb], compressed ordinary object pointers [true]
[2018-01-02 11:45:13,786][INFO ][node ] [Justin Hammer] initialized
[2018-01-02 11:45:13,786][INFO ][node ] [Justin Hammer] starting ...
[2018-01-02 11:45:13,877][INFO ][transport ] [Justin Hammer] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-01-02 11:45:13,880][INFO ][discovery ] [Justin Hammer] elasticsearch/Yddzub8nRKOeflGt_oWUYA
[2018-01-02 11:45:16,953][INFO ][cluster.service ] [Justin Hammer] new_master {Justin Hammer}{Yddzub8nRKOeflGt_oWUYA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2018-01-02 11:45:17,019][INFO ][http ] [Justin Hammer] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2018-01-02 11:45:17,020][INFO ][node ] [Justin Hammer] started
[2018-01-02 11:45:17,263][INFO ][gateway ] [Justin Hammer] recovered [17] indices into cluster_state
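To narrow down whether the problem is in Kibana or upstream, one next step is to check whether Elasticsearch is actually receiving documents. A minimal diagnostic sketch, assuming Elasticsearch on localhost:9200 (as the log above shows) and a Filebeat-style index name (the actual beat name comes from %{[@metadata][beat]} in the Logstash output config, so "filebeat" here is an assumption):

```shell
#!/bin/sh
# Diagnostic sketch: verify that today's index exists and has documents.
# "filebeat" is an assumed beat name; substitute your actual beat.
ES=localhost:9200
TODAY_INDEX="filebeat-$(date +%Y.%m.%d)"
echo "Expecting today's index to be: $TODAY_INDEX"

# List all indices with their document counts; a missing or empty index
# for today points at Logstash/Beats rather than Kibana.
curl -s "$ES/_cat/indices?v" || echo "could not reach Elasticsearch"

# Count documents in today's index specifically.
curl -s "$ES/$TODAY_INDEX/_count?pretty" || echo "no count available for $TODAY_INDEX"
```

If today's index is missing or its count stays at zero while events flow, the problem is upstream of Elasticsearch; if documents are arriving, check the Kibana time range and index pattern instead.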
Logstash log
{:timestamp=>"2018-01-02T12:03:13.605000-0500", :message=>#<LogStash::PipelineReporter::Snapshot:0x599acc73 @data={:events_filtered=>856, :events_consumed=>856, :worker_count=>4, :inflight_count=>392, :worker_states=>[{:status=>"run", :alive=>true, :index=>0, :inflight_count=>100}, {:status=>"run", :alive=>true, :index=>1, :inflight_count=>100}, {:status=>"run", :alive=>true, :index=>2, :inflight_count=>101}, {:status=>"run", :alive=>true, :index=>3, :inflight_count=>91}], :output_info=>[{:type=>"elasticsearch", :config=>{"hosts"=>["localhost:9200"], "sniffing"=>"true", "manage_template"=>"false", "index"=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", "document_type"=>"%{[@metadata][type]}"}, :is_multi_worker=>true, :events_received=>856, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:-965130303 [<LogStash::Outputs::ElasticSearch hosts=>["localhost:9200"], sniffing=>true, manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, action=>"index", path=>"/", ssl_certificate_verification=>true, sniffing_delay=>5>, <LogStash::Outputs::ElasticSearch hosts=>["localhost:9200"], sniffing=>true, manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>fals
Initially I did find a CircuitBreaker warning in the Logstash log, and I changed the congestion_threshold to a very high number.
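For reference, a sketch of where that setting lives, assuming the warning came from the beats input plugin (where congestion_threshold is an option in Logstash 2.x); the port is illustrative:

```
input {
  beats {
    port => 5044
    # Seconds before the circuit breaker trips. Raising this only hides
    # back-pressure; the breaker trips because the pipeline or the
    # Elasticsearch output cannot keep up with incoming events.
    congestion_threshold => 40
  }
}
```

Raising the threshold treats the symptom, so the underlying slowdown (often the Elasticsearch output) may still be stalling events.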
I still don't see any results in Kibana, and I'm not sure what the next steps should be.
Thanks,
Wil