No results found - Kibana


(Wil McGilvery) #1

I don't always check this server, but today I found that I am getting no results showing in Kibana.

There are no errors.

Elasticsearch log
[2018-01-02 11:45:12,114][INFO ][env ] [Justin Hammer] using [1] data paths, mounts [[/ (/dev/mapper/logstash--vg-root)]], net usable_space [360.2gb], net total_space [440.6gb], spins? [possibly], types [ext4]
[2018-01-02 11:45:12,114][INFO ][env ] [Justin Hammer] heap size [15.9gb], compressed ordinary object pointers [true]
[2018-01-02 11:45:13,786][INFO ][node ] [Justin Hammer] initialized
[2018-01-02 11:45:13,786][INFO ][node ] [Justin Hammer] starting ...
[2018-01-02 11:45:13,877][INFO ][transport ] [Justin Hammer] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2018-01-02 11:45:13,880][INFO ][discovery ] [Justin Hammer] elasticsearch/Yddzub8nRKOeflGt_oWUYA
[2018-01-02 11:45:16,953][INFO ][cluster.service ] [Justin Hammer] new_master {Justin Hammer}{Yddzub8nRKOeflGt_oWUYA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2018-01-02 11:45:17,019][INFO ][http ] [Justin Hammer] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2018-01-02 11:45:17,020][INFO ][node ] [Justin Hammer] started
[2018-01-02 11:45:17,263][INFO ][gateway ] [Justin Hammer] recovered [17] indices into cluster_state

Logstash log
{:timestamp=>"2018-01-02T12:03:13.605000-0500", :message=>#<LogStash::PipelineReporter::Snapshot:0x599acc73 @data={:events_filtered=>856, :events_consumed=>856, :worker_count=>4, :inflight_count=>392, :worker_states=>[{:status=>"run", :alive=>true, :index=>0, :inflight_count=>100}, {:status=>"run", :alive=>true, :index=>1, :inflight_count=>100}, {:status=>"run", :alive=>true, :index=>2, :inflight_count=>101}, {:status=>"run", :alive=>true, :index=>3, :inflight_count=>91}], :output_info=>[{:type=>"elasticsearch", :config=>{"hosts"=>["localhost:9200"], "sniffing"=>"true", "manage_template"=>"false", "index"=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", "document_type"=>"%{[@metadata][type]}"}, :is_multi_worker=>true, :events_received=>856, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:-965130303 [<LogStash::Outputs::ElasticSearch hosts=>["localhost:9200"], sniffing=>true, manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, action=>"index", path=>"/", ssl_certificate_verification=>true, sniffing_delay=>5>, <LogStash::Outputs::ElasticSearch hosts=>["localhost:9200"], sniffing=>true, manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>fals

Initially I did find a circuit breaker warning in the Logstash log and changed the congestion_threshold to a very high number.

I still don't see any results in Kibana and I am not sure what the next steps should be.
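For context, congestion_threshold is an option of older versions of the logstash-input-beats plugin, which is where that circuit breaker warning originates. A minimal sketch of what that change might look like (the port and threshold value here are assumptions, not taken from the actual config in this thread):

```
# Hypothetical beats input with the circuit-breaker timeout raised.
# congestion_threshold is the number of seconds the input will block
# when the pipeline is backed up before tripping the circuit breaker.
input {
  beats {
    port => 5044
    congestion_threshold => 40
  }
}
```

Raising the threshold only hides back-pressure longer; if events still aren't indexed, the bottleneck is usually downstream (filters or the Elasticsearch output).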

thanks

Wil


(Magnus Bäck) #2

How do you know Logstash has any events to process? What inputs do you have?


(Wil McGilvery) #3

That is a good point. I am collecting event logs from all of our Domain Controllers and Exchange servers, and before it stopped working I was seeing hundreds of messages a day.

Here is a snippet from one of our servers.

2018-01-05T09:23:57-05:00 INFO Total non-zero values: msg_file_cache.SecurityMisses=1 msg_file_cache.SecurityHits=99 msg_file_cache.MSExchange ManagementMisses=1 libbeat.logstash.publish.read_errors=2 msg_file_cache.MSExchange ManagementHits=99 msg_file_cache.ApplicationHits=91 libbeat.logstash.published_but_not_acked_events=200 libbeat.publisher.published_events=400 msg_file_cache.ApplicationMisses=10 msg_file_cache.SystemHits=83 msg_file_cache.SystemMisses=17 libbeat.logstash.call_count.PublishEvents=3 libbeat.logstash.publish.write_bytes=4038
2018-01-05T09:23:57-05:00 INFO Uptime: 6m30.8200781s
2018-01-05T09:23:57-05:00 INFO winlogbeat stopped.

Wil McGilvery
Network Manager
Sofina Foods Inc.


(Magnus Bäck) #4

If Logstash isn't logging any errors, it seems like things might be reaching ES, but maybe not in the index you expect. Use e.g. the "cat indices" API to list the available indexes and how much data they contain. Does that reveal anything interesting? Could it be a Kibana permissions issue, i.e. the data is there but the Kibana user doesn't have sufficient permissions to see those indexes? Unless you're encrypting the traffic from Logstash to ES, snooping on it could give clues.
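The "cat indices" check suggested above can be run directly against the node from the Elasticsearch log in this thread, which binds HTTP to 127.0.0.1:9200. A sketch (the winlogbeat index name is an assumption derived from the %{[@metadata][beat]}-%{+YYYY.MM.dd} pattern in the Logstash output config; it needs a live cluster to run):

```shell
# List all indices with health, doc counts, and sizes (v adds a header row)
curl 'localhost:9200/_cat/indices?v'

# Count documents in one day's winlogbeat index
# (index name assumed from the beat/date pattern in the output config)
curl 'localhost:9200/winlogbeat-2018.01.05/_count?pretty'
```

If the indices exist and the counts are growing, the data is reaching ES and the problem is on the Kibana side (index pattern or time range); if the expected indices are missing, the problem is in the Logstash output.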


(Wil McGilvery) #5

I cannot find any problem with either Logstash or Elasticsearch. Elasticsearch does show it is receiving the Windows event logs.

I will repost in the Kibana forum.

thanks


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.