Old logs?

I recently upgraded our Elastic Stack from 2.4 to 5.6.4, which went fairly smoothly.

I'm seeing lots of the following in the Logstash logs, though:

[2017-12-04T22:29:34,620][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"index_closed_exception", "reason"=>"closed", "index_uuid"=>"YXz-fU6nSseZKZPC0z24Nw", "index"=>"logstash-syslog-na-2017.10.13"})
[2017-12-04T22:29:34,620][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}

This eventually causes Logstash to stop sending data to ES entirely (it happened over the weekend). It looks like something is sending me old logs. How can I see the actual logs that are being sent without disrupting or breaking my cluster?

I have Filebeat and nagioscheckbeat (yes, I intend to switch to Metricbeat, but one thing at a time...) sending data to Logstash, which sends it on to Elasticsearch. All of the error messages are similar; they're all for indices like logstash-syslog-*.

I use Curator to close indices over 30 days old and to delete indices over a year old.
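
The Curator action file is roughly along these lines (simplified; the index pattern and unit counts here are illustrative):

actions:
  1:
    action: close
    description: "Close logstash indices older than 30 days"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
  2:
    action: delete_indices
    description: "Delete logstash indices older than 1 year"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 365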

Restarting Logstash seems to "fix" the problem, at least temporarily.

Since you're getting messages like:

retrying failed action with response code: 403 ({"type"=>"index_closed_exception"

I'm going to have to guess that Filebeat is sending older log data, older than your 30-day window. What I can't tell is whether it's duplicate data or data that was previously missed.

You could try the ignore_older option to prevent resending older log data.


That's my assumption too. I was curious whether there's a way to get more detail, i.e. which host is sending what... But good suggestion, I'll try that. Thanks!

I made that change, but the issue persists. Restarting Logstash will "fix" it for a while, but it comes back. Here's the Filebeat config that generates logs for the index in the error message:

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      input_type: log
      fields:
        service: na
        env: dev
        type: syslog
      fields_under_root: true
      ignore_older: 120h
...
output:
  logstash:
    hosts: ["<logstash_host>:5044"]
    ssl:
      certificate_authorities: ["<path_to_cert>"]

Is there any way for me to figure out where these are coming from?
And here's the Logstash input config:

input {
  beats {
    port => 5044
    type => "logs"
    client_inactivity_timeout => 600
    ssl => true
    ssl_certificate => "<path_to_cert>"
    ssl_key => "<path_to_key>"
  }
}

and the output config:

output {
  elasticsearch { 
    hosts => ["<IP_1>:9200","<IP_2>:9200"] 
    index => "logstash-%{type}-%{service}-%{+YYYY.MM.dd}"
  }
}
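
One thing I may try is temporarily teeing these events to a local file from Logstash so I can inspect them without touching Elasticsearch; something roughly like this (illustrative only, and the conditional would need tuning to catch just the stale events):

output {
  # Illustrative sketch: write syslog-type events to a local file so the
  # originating host can be inspected without sending anything extra to ES
  if [type] == "syslog" {
    file {
      path => "/tmp/suspect-syslog-%{+YYYY.MM.dd}.log"
      codec => json_lines
    }
  }
}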

This had stopped occurring but has started again; it happened on Christmas Eve :frowning:

  1. How can I figure out which host these old logs are coming from?
  2. Why does Logstash eventually stop sending all data to ES when this occurs? It's happening right now and I haven't yet restarted Logstash to "fix" it.

OK, I've answered #1 above: I reopened the index that Logstash was complaining about (rough commands below), let the old log line get indexed, then searched for and found it. I looked in the syslog on the host it came from, and lo and behold:

Dec 26 06:17:01 eip-u1-c CRON[31207]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 18 08:17:01 eip-u1-c CRON[31387]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Dec 26 06:25:01 eip-u1-c CRON[31387]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))

WHY???

Still doesn't help me with #2 above.
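
For reference, the reopen-and-search step was roughly the following (the ES host is a placeholder, the index name is the one from the original error, and the query term is just what I happened to grep for):

# Reopen the closed index named in the Logstash error
curl -XPOST 'http://<es_host>:9200/logstash-syslog-na-2017.10.13/_open'

# Once the retried event has been indexed, search for it; beat.hostname
# shows which machine sent it
curl -XGET 'http://<es_host>:9200/logstash-syslog-na-2017.10.13/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "query": { "match": { "message": "run-parts" } },
  "_source": ["@timestamp", "beat.hostname", "message"],
  "size": 5
}'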

https://bugs.launchpad.net/ubuntu/+source/rsyslog/+bug/1392317

I'm running Ubuntu 14.04 and have the affected version of rsyslogd:

jhoff909@eip-u1-c:~$ rsyslogd -version
rsyslogd 7.4.4, compiled with:

And the config:

# Filter duplicated messages
$RepeatedMsgReduction on
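
If that bug really is the cause, one possible workaround (which I haven't verified yet) is to turn off repeated-message reduction and restart rsyslog:

# In /etc/rsyslog.conf: disable the repeated-message reduction setting
# that the bug discussion points at
$RepeatedMsgReduction off

# Then restart rsyslog so the change takes effect
sudo service rsyslog restart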
