Logstash doesn't log to elastic

I've been trying to troubleshoot this for quite some time and have managed to find out the following:

  1. filebeat logs everything seemingly fine
  2. turning on logstash after deleting the indexes I'm logging to SEEMINGLY works fine for a while,
    until at some point logs just stop.
  3. logstash is still logging its own output fine somehow.
  4. making my output stdout just prints every log entry to stdout (as expected)
  5. there is no logstash log file by default in the Docker container, and log4j seems to only output errors to the console, of which there are zero.
    Logstash debug logs: Using bundled JDK: /usr/share/logstash/jdk,OpenJDK 64-Bit Server VM warning: O - Pastebin.com
    Elastic console output: "stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: a - Pastebin.com
    filebeat config: filebeat.modules:- module: system syslog: enabled: true auth: - Pastebin.com
    logstash config: input { beats { client_inactivity_timeout => 1200 port => 5044 } - Pastebin.com
    logstash config I changed but which STILL doesn't work: input { beats { client_inactivity_timeout => 1200 port => 5044 } - Pastebin.com
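For reference, the pipeline shape those configs describe (a beats input with a raised inactivity timeout, forwarding to Elasticsearch) would look roughly like this. This is a sketch only; the hosts and index name are assumptions, since the actual configs are on Pastebin:

```
# Hypothetical reconstruction: only the beats input settings are
# taken from the post; hosts and index are assumptions.
input {
  beats {
    client_inactivity_timeout => 1200
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "filebeat-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  # Since stdout was confirmed to print every event, keeping it on
  # makes it easy to compare what enters the pipeline with what
  # actually lands in Elasticsearch:
  stdout { codec => rubydebug }
}
```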

All indexes green in health.
All hostnames such as "elasticsearch:9200" in the configs resolve, and I can connect via TCP.
What's really frustrating is that SOME log lines mysteriously do seem to occasionally get through, but most do not. For instance, the logstash console output logs somehow make it with no issue.

EDIT: Here's my filebeat log, which has plenty of errors, but they seem unsolvable, and I think they appear even after I delete indexes and logging seems to be working again:
filebeat.log (not pastebin because pastebin thinks it's questionable content somehow)

Maybe this is in the wrong place? Where should I go for help on this?

2021-03-23T01:19:58.017Z ERROR [logstash] logstash/async.go:280 Failed to publish events caused by: read tcp> read: connection reset by peer
2021-03-23T01:19:59.996Z ERROR [logstash] logstash/async.go:280 Failed to publish events caused by: read tcp> read: connection reset by peer

filebeat is sitting in a loop trying to connect to logstash. logstash is not accepting the connection.

You need to look at the logstash logs. What you posted as the logstash logs is just the log4j log. It is likely that logstash is logging an error once a second and you need to know what it is.
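One way to surface more than errors, sketched under the assumption of the stock Docker image (settings under /usr/share/logstash/config), is to raise the log level, either in logstash.yml or at runtime:

```
# logstash.yml -- raise verbosity for the whole process
log.level: debug
```

Logstash also exposes a runtime logging API on port 9600 (no restart needed), e.g. `curl -XPUT 'http://logstash:9600/_node/logging' -H 'Content-Type: application/json' -d '{"logger.logstash.outputs.elasticsearch": "DEBUG"}'`; the hostname here is an assumption based on the container names in the thread.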

If it is an InvalidFrameProtocolException then a common explanation would be a mismatch between the beat and logstash about whether TLS is enabled. That does not appear to be the case. If there is a firewall or load balancer between the beat and logstash it is possible the connection is getting rejected there.

@Badger There's no firewall between the two (they're both Docker containers on the same network) and I can't find any other place with anything logstash might tell me. There's no indication other than that error that there is some problem connecting to logstash. I'm able to connect via TCP with no issue from the filebeat container to the logstash container, and from the logstash container to the elasticsearch container. In fact, if I restart logstash, some logs (specifically the opening stdout for logstash) do show up, but almost everything is still missing.

I recently figured out that switching filebeat's config to output directly to elasticsearch until it works through all my logs, then switching back to logstash, does fix it until it (seemingly randomly) stops working again. The time it takes to break is usually at least a day. Until it does break, ALL logs do seem to be appearing in elastic just fine.

Note I'm using the elastic docker containers for 7.11.2 but this has been a problem since at LEAST 7.9.

I've also noticed that logstash claims:

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties

But no such log directory (or file) exists, and if I create it and restart the container, nothing ever appears in the folder.
ls: cannot access /usr/share/logstash/logs: No such file or directory
find . -name '*.log' run from /usr/share/logstash returns nothing
There is no indication of anything logstash-related in /var/log either.
stdout simply has:

Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2021-03-26T21:45:26,951][ERROR][logstash.monitoring.internalpipelinesource] Monitoring is not available: License information is currently unavailable. Please make sure you have added your production elasticsearch connection info in the xpack.monitoring.elasticsearch settings.
since container creation
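For what it's worth, the log4j2.properties shipped in the official Logstash Docker image routes output to the console appender, which would explain why /usr/share/logstash/logs stays empty: everything ends up in `docker logs <container>` rather than in files. If a file is wanted anyway, one option (a sketch; the paths and service name are assumptions) is to mount a custom log4j2.properties over the bundled one:

```
# docker-compose fragment -- hypothetical, adjust to your setup
services:
  logstash:
    volumes:
      - ./log4j2.properties:/usr/share/logstash/config/log4j2.properties:ro
```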

Also possibly relevant:
[root@e8a40e0b129c filebeat]# timeout 1 bash -c 'cat < /dev/null > /dev/tcp/logstash/5044'
[root@e8a40e0b129c filebeat]# echo $?

To make things even MORE frustrating... new documents appear to be getting created, but I've NO idea what they are or how to find them. Kibana shows nothing in the past 15 minutes no matter how often I refresh, and none for the last 24 hours either, ruling out timezone shenanigans. The document count (in Kibana index management) still goes up, though.

I would try using a ridiculously large time range (all time, if available, otherwise the last 1000 years or something like that). Or else use the Dev tools in Kibana to query the elasticsearch index directly to see what documents are in it.
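In Dev Tools that check could look something like this (the index pattern is an assumption):

```
GET filebeat-*/_search
{
  "size": 5,
  "sort": [ { "@timestamp": "desc" } ],
  "_source": [ "@timestamp", "message", "agent.name", "log.file.path" ]
}
```

That returns the newest documents regardless of Kibana's time picker, so a mismatch between their @timestamp values and the expected range would show up immediately.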

Yep. Setting it to 9999 years ago through 9999 years from now results in the number of hits going up. That doesn't help much, though, because I still have no way I'm aware of to tell why I'm not able to see it with the correct timestamps showing up, or why it's happening all of a sudden after some seemingly random time with no errors or logs I can find.

Actually, I was able to narrow this down by slowly zooming in on the largest bars in the time frame in Kibana, and every new entry appears to just be logstash console output. But the logstash console output only contains 3 lines, so it's just repeatedly logging them to elastic thousands and thousands of times for some reason.

All showing up at the same 2 milliseconds:
Mar 30, 2021 @ 11:32:16.544
Mar 30, 2021 @ 11:32:16.550
The exact same number of entries appears at both times, and the number is rising (for both equally).

This made me think logstash was restarting itself over and over, but I did a ps ax and the PID never changes. Filebeat doesn't appear to change PID either. Nor does the Docker stdout for logstash seem to be accumulating that message. The container ID isn't changing in any of the logs either. I have no idea why the message is appearing over and over constantly. I can confirm that 100% of the thousands of messages in those two timestamps are all logstash.
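A terms aggregation in Dev Tools can produce the same breakdown without zooming; the index pattern and field names here are assumptions, since they depend on the beat and modules in use:

```
GET filebeat-*/_search
{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-1h" } } },
  "aggs": {
    "by_source": { "terms": { "field": "log.file.path" } }
  }
}
```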

Also, still no idea why it all works fine for a short period after letting filebeat send its backlog directly to elastic.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.