Keep Logstash from Crashing

Been having a hell of a time with Logstash. I set up a few different nginx instances to pump access and error logs into Logstash to funnel into Elasticsearch, but now I'm getting crashes and I have no idea how to fix them:

[2017-06-22T05:37:41,258][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<LogStash::Error: timestamp field is missing>, :backtrace=>["org/logstash/ext/ `sprintf'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:153:in `event_action_params'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:40:in `event_action_tuple'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `multi_receive'", "org/jruby/ `map'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:47:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:407:in `output_batch'", "org/jruby/ `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:406:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:352:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:317:in `start_workers'"]}

Is there a way for Logstash to just flat-out ignore anything that doesn't have a timestamp? I used the NGINX templates from here:


Your log is mangled and doesn't show the full error message. What comes after "An unexpected error occurred! {:error=>#"? Also, what does your Logstash configuration look like?

Fixed up the logged error. The link I posted above contains setups for both nginx access and error logs. The only things I changed were the elasticsearch and port properties.

The problem is that you're using the default index value for your elasticsearch output, "logstash-%{+YYYY.MM.dd}", which requires that the @timestamp field exists.
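If you don't need date-based indices, one way to sidestep the requirement is to set the index option to a static name, which doesn't reference `@timestamp` at all. A minimal sketch (the host and index name here are placeholders, not from the original config):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # A static index name does not require @timestamp to exist on the event
    index => "nginx-logs"
  }
}
```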

Can you temporarily replace your elasticsearch output with a stdout { codec => rubydebug } output so we can see exactly what the events look like? The error indicates that the event that causes the crash lacks a @timestamp field but the date filter should make sure that field exists.
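For reference, the debugging swap described above looks something like this (a minimal sketch of the output section only):

```
output {
  # Print each event to the console instead of sending it to Elasticsearch,
  # so you can inspect whether @timestamp is present on the failing events
  stdout { codec => rubydebug }
}
```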

Ah... very nice :slight_smile: Thanks a lot for that! I just removed the timestamp. You would figure that the timestamp would have been caught or ignored or something.

Just changing the name of the index seemed to work well. Next problem is disk space! :slight_smile:

Sounds like you've resolved this issue. But for other users who encounter this problem, another workaround is to use add_field instead of rename in your mutate filter.

So use this:

mutate {
      add_field => { "read_timestamp" => "%{@timestamp}" }
}

Instead of:

mutate {
      rename => { "@timestamp" => "read_timestamp" }
}

The config in the docs should work with this change.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.