Filebeat to Logstash issues

It seems like this is all too common, but I'm having issues sending from Beats to Logstash. Filebeat version: filebeat-5.6.4p0 (the working and non-working hosts run the same version)

Another host successfully sends messages to the same Logstash instance (different kind of log, no JSON), but this host does not. Instead I get:

2018-02-23T15:22:15-05:00 DBG  Run prospector
2018-02-23T15:22:15-05:00 DBG  Start next scan
2018-02-23T15:22:15-05:00 DBG  Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2018-02-23T15:22:15-05:00 DBG  Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 3847925
2018-02-23T15:22:15-05:00 DBG  Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2018-02-23T15:22:15-05:00 DBG  Prospector states cleaned up. Before: 1, After: 1
2018-02-23T15:22:21-05:00 DBG  handle error: write tcp 192.168.17.9:7839->192.168.18.61:9001: i/o timeout
2018-02-23T15:22:21-05:00 DBG  0 events out of 2047 events sent to logstash host elastic:9001:10200. Continue sending
2018-02-23T15:22:21-05:00 DBG  close connection to logstash host elastic:9001:10200
2018-02-23T15:22:21-05:00 DBG  closing
2018-02-23T15:22:21-05:00 ERR Failed to publish events (host: elastic:9001:10200), caused by: write tcp 192.168.17.9:7839->192.168.18.61:9001: i/o timeout
2018-02-23T15:22:21-05:00 INFO Error publishing events (retrying): write tcp 192.168.17.9:7839->192.168.18.61:9001: i/o timeout
2018-02-23T15:22:21-05:00 DBG  close connection to logstash host elastic:9001:10200
2018-02-23T15:22:21-05:00 DBG  send fail
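
To rule out a basic network or firewall problem between the two hosts, one quick check (assuming nc is available on the Filebeat host; hostname and port are the ones from my output config) is to probe the Beats port directly:

# from the failing Filebeat host: -v for verbose output, -z to probe without sending data
nc -vz elastic 9001

The error above is a write timeout on an already-established connection rather than a refused connection, so this is mostly to rule out something in between dropping traffic.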

filebeat.yml:

filebeat.prospectors:

- input_type: log

  paths:
    - /var/ossec/logs/alerts/alerts.json

  fields:
    log_type: ossec
  json.keys_under_root: true

output.logstash:
  hosts: ["elastic:9001"]

output.console:
  pretty: true

logging.level: debug
logging.selectors: ["*"]
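
As a sanity check, the file can be validated and Filebeat run in the foreground with only the relevant debug selectors enabled (standard Filebeat 5.x flags; the config path here is an assumption):

# verify the configuration parses cleanly
filebeat -configtest -c /etc/filebeat/filebeat.yml

# run in the foreground, logging to stderr, with only the logstash and publish debug selectors
filebeat -e -c /etc/filebeat/filebeat.yml -d "logstash,publish"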

I've trimmed a lot out of the Logstash configuration in the hopes of getting this to work; this is what I'm currently running with. Logstash version: logstash-6.2.2-1.noarch
Current Logstash config:

input {
  beats {
    id => "beats_test"
    port => 9001
  }
}

output {

    elasticsearch {
      index => "ossec-%{+YYYY.MM.dd}"
    }

  stdout {
  }

}
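
For what it's worth, the elasticsearch output above leans on the plugin defaults for the connection; written out explicitly it would look roughly like this (the host is an assumption, whatever the local Elasticsearch instance actually is):

    elasticsearch {
      hosts => ["localhost:9200"]   # assumed; the plugin defaults to the local instance
      index => "ossec-%{+YYYY.MM.dd}"
    }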

Have you checked the Logstash logs to see if there are any errors?

Can you try running Logstash with a bare-bones config and see if the i/o timeouts still occur?

input {
  beats {
    port => 9001
  }
}

output {
  stdout { }
}

Using that config, here is what I get:

[root@elastic logstash]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.config /etc/logstash/conf.d/logstash.conf --log.level=debug
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties

No events show up here.

filebeat log:

2018-02-23T21:29:45-05:00 DBG  output worker: publish 2047 events
2018-02-23T21:29:45-05:00 DBG  connect to logstash host elastic:9001:10200
2018-02-23T21:29:45-05:00 DBG  output worker: publish 2047 events
2018-02-23T21:29:45-05:00 DBG  Flushing spooler because spooler full. Events flushed: 2048
2018-02-23T21:29:54-05:00 DBG  Run prospector
2018-02-23T21:29:54-05:00 DBG  Start next scan
2018-02-23T21:29:54-05:00 DBG  Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:29:54-05:00 DBG  Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 3847925
2018-02-23T21:29:54-05:00 DBG  Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:29:54-05:00 DBG  Prospector states cleaned up. Before: 1, After: 1
2018-02-23T21:30:04-05:00 DBG  Run prospector
2018-02-23T21:30:04-05:00 DBG  Start next scan
2018-02-23T21:30:04-05:00 DBG  Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:04-05:00 DBG  Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 3847925
2018-02-23T21:30:04-05:00 DBG  Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:04-05:00 DBG  Prospector states cleaned up. Before: 1, After: 1
2018-02-23T21:30:14-05:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.logstash.call_count.PublishEvents=1 libbeat.publisher.published_events=2047
2018-02-23T21:30:14-05:00 DBG  Run prospector
2018-02-23T21:30:14-05:00 DBG  Start next scan
2018-02-23T21:30:14-05:00 DBG  Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:14-05:00 DBG  Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 3847925
2018-02-23T21:30:14-05:00 DBG  Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:14-05:00 DBG  Prospector states cleaned up. Before: 1, After: 1
2018-02-23T21:30:15-05:00 DBG  handle error: write tcp 192.168.17.9:4934->192.168.18.61:9001: i/o timeout
2018-02-23T21:30:15-05:00 DBG  0 events out of 2047 events sent to logstash host elastic:9001:10200. Continue sending
2018-02-23T21:30:15-05:00 DBG  close connection to logstash host elastic:9001:10200
2018-02-23T21:30:15-05:00 DBG  closing
2018-02-23T21:30:15-05:00 ERR Failed to publish events (host: elastic:9001:10200), caused by: write tcp 192.168.17.9:4934->192.168.18.61:9001: i/o timeout
2018-02-23T21:30:15-05:00 INFO Error publishing events (retrying): write tcp 192.168.17.9:4934->192.168.18.61:9001: i/o timeout
2018-02-23T21:30:15-05:00 DBG  close connection to logstash host elastic:9001:10200
2018-02-23T21:30:15-05:00 DBG  send fail
2018-02-23T21:30:17-05:00 DBG  connect to logstash host elastic:9001:10200

Here is the end of the logstash-plain.log:

[2018-02-23T21:32:58,167][DEBUG][io.netty.handler.ssl.CipherSuiteConverter] Cipher suite mapping: SSL_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
[2018-02-23T21:32:58,168][DEBUG][io.netty.handler.ssl.OpenSsl] Supported protocols (OpenSSL): [[SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]]
[2018-02-23T21:32:58,168][DEBUG][io.netty.handler.ssl.OpenSsl] Default cipher suites (OpenSSL): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA]
[2018-02-23T21:32:58,169][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:9001"}
[2018-02-23T21:32:58,198][DEBUG][io.netty.channel.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 2
[2018-02-23T21:32:58,243][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.noKeySetOptimization: false
[2018-02-23T21:32:58,244][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
[2018-02-23T21:32:58,265][DEBUG][io.netty.util.internal.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
[2018-02-23T21:32:58,308][INFO ][org.logstash.beats.Server] Starting server on port: 9001
[2018-02-23T21:32:58,353][INFO ][logstash.pipeline        ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x229463b1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[2018-02-23T21:32:58,372][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-02-23T21:32:58,382][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.processId: 4522 (auto-detected)
[2018-02-23T21:32:58,388][DEBUG][io.netty.util.NetUtil    ] -Djava.net.preferIPv4Stack: true
[2018-02-23T21:32:58,388][DEBUG][io.netty.util.NetUtil    ] -Djava.net.preferIPv6Addresses: false
[2018-02-23T21:32:58,390][DEBUG][io.netty.util.NetUtil    ] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[2018-02-23T21:32:58,391][DEBUG][io.netty.util.NetUtil    ] /proc/sys/net/core/somaxconn: 128
[2018-02-23T21:32:58,392][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.machineId: 1e:d6:4b:ff:fe:a3:aa:ca (auto-detected)
[2018-02-23T21:33:01,165][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-23T21:33:01,166][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-02-23T21:33:03,366][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x229463b1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}

...

[2018-02-23T21:38:06,622][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-23T21:38:06,622][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-02-23T21:38:08,392][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x229463b1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
[2018-02-23T21:38:11,627][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-23T21:38:11,628][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

I hit the 7000 character limit. :rofl:

The only noticeable error I see is:

[2018-02-23T21:32:57,441][DEBUG][io.netty.util.internal.NativeLibraryLoader] Unable to load the library 'netty_tcnative_linux_x86_64', trying other loading mechanism.
java.lang.UnsatisfiedLinkError: no netty_tcnative_linux_x86_64 in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) ~[?:1.8.0_161]
        at java.lang.Runtime.loadLibrary0(Runtime.java:870) ~[?:1.8.0_161]
        at java.lang.System.loadLibrary(System.java:1122) ~[?:1.8.0_161]

But later in the log I get:

[2018-02-23T21:32:57,517][DEBUG][io.netty.util.internal.NativeLibraryLoader] Successfully loaded the library /tmp/libnetty_tcnative_linux_x86_642658628793936769155.so
[2018-02-23T21:32:57,517][DEBUG][io.netty.handler.ssl.OpenSsl] netty-tcnative using native library: BoringSSL

Sorry if I'm flooding you with a lot of useless information. This has been frustrating, especially since another filebeat host is working (sending regular logs, not json). I just tested the other host again with the basic config you posted and it's still working just fine.

I had to rebuild the system running Filebeat (the one that wasn't working), so here are links to new logs. Hopefully the links work; the site was slow and the logs are large.

logstash-plain.log:
https://paste.fedoraproject.org/paste/BNUs~OiGt8jAAUQ0lEBfHg

filebeat:
https://paste.fedoraproject.org/paste/JHeZO-Bva9AWcy2JGPu~eg

Strangely enough, I see the following in the console output from logstash:

2018-02-28T13:44:42.761Z ix.example.com %{message}
2018-02-28T13:44:42.761Z ix.example.com %{message}
2018-02-28T13:44:42.761Z ix.example.com %{message}
2018-02-28T13:44:42.761Z ix.example.com %{message}
2018-02-28T13:44:42.761Z ix.example.com %{message}
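
If I'm reading that right, the literal %{message} just means the events have no message field for the default plain codec to interpolate (which would make sense if json.keys_under_root is putting the decoded JSON keys at the top level instead of a raw message line). Switching the stdout output to the rubydebug codec should dump every field so I can see what actually arrives; a minimal sketch:

output {
  stdout {
    # rubydebug prints the full event structure instead of the
    # timestamp/host/message line the default codec emits
    codec => rubydebug
  }
}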

I'll try putting the Logstash configuration that sends the messages to Elasticsearch back in place and re-running it.

I wonder if we should move the topic to the LS forum, as this seems more like an issue with LS receiving the events. Unfortunately I was not able to load the second filebeat link for the logs :frowning: Do you still see the sending errors?
