Logstash stopped logging, but I don't see errors


(ZillaG) #1

I have an ELK stack. Logstash v2.3.4 and Kibana v4.5.4 services are running on one server. I have another box running Elasticsearch v2.4.1 in client mode, and then a 3-box Elasticsearch v2.4.1 cluster running in master and data mode. All was working until yesterday; that is, I was able to see the logs I've been sending to the stack.

For some reason, I don't see any logs today (I filter on "Today" in Kibana). My logstash.stdout file is not increasing in size (I use the stdout plugin in my logstash.conf output section). The only Logstash log entries I see are the ones below (snipped, since I can't post all of it in this message). Via tcpdump, I can tell that I'm receiving logs and sending them out to my Elasticsearch client. What am I missing?

I have restarted my logstash service, and the elasticsearch service on the client box.

{:timestamp=>"2017-02-17T17:10:01.494000+0000", :message=>#<LogStash::PipelineReporter::Snapshot:0x567d9997 @data={:events_filtered=>1292377, :events_consumed=>1292377, :worker_count=>4, :inflight_count=>13, :worker_states=>[{:status=>"run", :alive=>true, :index=>0, :inflight_count=>2}, {:status=>"sleep", :alive=>true, :index=>1, :inflight_count=>4}, {:status=>"sleep", :alive=>true, :index=>2, :inflight_count=>1}, {:status=>"sleep", :alive=>true, :index=>3, :inflight_count=>6}], :output_info=>[{:type=>"stdout", :config=>{"codec"=>"rubydebug", "ALLOW_ENV"=>false}, :is_multi_worker=>false, :events_received=>1292377, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:485794817 [<LogStash::Outputs::Stdout codec=><LogStash::Codecs::RubyDebug metadata=>false>, workers=>1>]>, :busy_workers=>0}, {:type=>"elasticsearch", :config=>{"user"=>"logstash", "password"=>"l0gst@sh", "ssl"=>"true", "ssl_certificate_verification"=>"true", "truststore"=>"/etc/elasticsearch/truststore.jks", "truststore_password"=>"pw", "hosts"=>["https://10.53.162.51:9200"], "index"=>"logstash-eu-%{customer}-%{+YYYY.MM.dd}", "ALLOW_ENV"=>false}, :is_multi_worker=>false, :events_received=>1292377, :workers=><Java::JavaUtilConcurrent::CopyOnWriteArrayList:-1478703875 [<LogStash::Outputs::ElasticSearch user=>"logstash", password=>, ssl=>true, ssl_certificate_verification=>true, truststore=>"/etc/elasticsearch/truststore.jks", truststore_password=>, hosts=>["https://10.53.162.51:9200"], index=>"logstash-eu-%{customer}-%{+YYYY.MM.dd}", codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, flush_size=>500, idle_flush_time=>1, doc_as_upsert=>false, max_retries=>3, script_type=>"inline", script_var_name=>"event", scripted_upsert=>false, retry_max_interval=>2, retry_max_items=>500, retry_on_conflict=>1, (snip)
`pop'"}]}>, :level=>:warn}


(ZillaG) #2

All boxes are 4-core/16GB mem machines BTW, and I set up Elasticsearch to use 50% of the memory on each box where the service is running.
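For reference, on Elasticsearch 2.x the heap is normally pinned with the ES_HEAP_SIZE environment variable in the service's defaults file (the exact path depends on the distro; both common locations are shown as assumptions below). A 50% allocation on a 16 GB box would look like:

```shell
# /etc/default/elasticsearch (Debian/Ubuntu) or /etc/sysconfig/elasticsearch (RPM)
# Pin the heap to a fixed size; 8g is 50% of a 16 GB machine.
ES_HEAP_SIZE=8g
```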


(ZillaG) #3

I restarted the server, and those warning logs no longer accumulate. However, Logstash still stops capturing logs. I have the following input and output sections in my configuration file. I see that /var/log/logstash/logstash.stdout stops growing after a few seconds. To take Elasticsearch out of the equation, I removed it from the output section.

input {
  udp {
    port => 5514
    codec => json
  }
}

filter {
  grok {
    ...
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
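To exercise the UDP input path with a known-good event (independent of whatever is normally sending logs), a small sketch like the following can send a JSON datagram in the shape the udp input with codec => json expects. The port (5514) comes from the config above; the event fields are made up for illustration:

```python
import json
import socket

def send_test_event(host="127.0.0.1", port=5514):
    """Send a single JSON-encoded event over UDP, matching the
    udp input with codec => json in the config above."""
    # Illustrative fields only; any valid JSON object will do.
    event = {"message": "test event", "customer": "example"}
    payload = json.dumps(event).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload

if __name__ == "__main__":
    send_test_event()
```

If a hand-sent event like this shows up in logstash.stdout while the real traffic does not, the network path is fine and the stall is inside the pipeline (for example, a blocked filter worker).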

(Mark Walkom) #4

Is there still data coming in via udp?

Can you upgrade?


(ZillaG) #5

Yes, I still see data coming in, per tcpdump. This was working fine, but stopped working Thursday of last week. Upgrading is my last resort, since I want to understand the problem.


(Mark Walkom) #6

Why is upgrading a last resort? You're running a relatively old version.


(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.