Using that config, here is what I get:
[root@elastic logstash]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash --path.config /etc/logstash/conf.d/logstash.conf --log.level=debug
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
No events ever show up in the Logstash output.
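To rule out my own filter/output section, one thing I can try is a throwaway pipeline that just echoes whatever arrives on the beats input to the console. This is only a sketch on my part, assuming the input stays on port 9001 as logstash-plain.log further down shows:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash -e 'input { beats { port => 9001 } } output { stdout { codec => rubydebug } }'

If events print there, the problem is in my config's filter/output; if nothing prints, the events never reach Logstash at all.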
Filebeat log:
2018-02-23T21:29:45-05:00 DBG output worker: publish 2047 events
2018-02-23T21:29:45-05:00 DBG connect to logstash host elastic:9001:10200
2018-02-23T21:29:45-05:00 DBG output worker: publish 2047 events
2018-02-23T21:29:45-05:00 DBG Flushing spooler because spooler full. Events flushed: 2048
2018-02-23T21:29:54-05:00 DBG Run prospector
2018-02-23T21:29:54-05:00 DBG Start next scan
2018-02-23T21:29:54-05:00 DBG Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:29:54-05:00 DBG Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 3847925
2018-02-23T21:29:54-05:00 DBG Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:29:54-05:00 DBG Prospector states cleaned up. Before: 1, After: 1
2018-02-23T21:30:04-05:00 DBG Run prospector
2018-02-23T21:30:04-05:00 DBG Start next scan
2018-02-23T21:30:04-05:00 DBG Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:04-05:00 DBG Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 3847925
2018-02-23T21:30:04-05:00 DBG Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:04-05:00 DBG Prospector states cleaned up. Before: 1, After: 1
2018-02-23T21:30:14-05:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.logstash.call_count.PublishEvents=1 libbeat.publisher.published_events=2047
2018-02-23T21:30:14-05:00 DBG Run prospector
2018-02-23T21:30:14-05:00 DBG Start next scan
2018-02-23T21:30:14-05:00 DBG Check file for harvesting: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:14-05:00 DBG Update existing file for harvesting: /var/ossec/logs/alerts/alerts.json, offset: 3847925
2018-02-23T21:30:14-05:00 DBG Harvester for file is still running: /var/ossec/logs/alerts/alerts.json
2018-02-23T21:30:14-05:00 DBG Prospector states cleaned up. Before: 1, After: 1
2018-02-23T21:30:15-05:00 DBG handle error: write tcp 192.168.17.9:4934->192.168.18.61:9001: i/o timeout
2018-02-23T21:30:15-05:00 DBG 0 events out of 2047 events sent to logstash host elastic:9001:10200. Continue sending
2018-02-23T21:30:15-05:00 DBG close connection to logstash host elastic:9001:10200
2018-02-23T21:30:15-05:00 DBG closing
2018-02-23T21:30:15-05:00 ERR Failed to publish events (host: elastic:9001:10200), caused by: write tcp 192.168.17.9:4934->192.168.18.61:9001: i/o timeout
2018-02-23T21:30:15-05:00 INFO Error publishing events (retrying): write tcp 192.168.17.9:4934->192.168.18.61:9001: i/o timeout
2018-02-23T21:30:15-05:00 DBG close connection to logstash host elastic:9001:10200
2018-02-23T21:30:15-05:00 DBG send fail
2018-02-23T21:30:17-05:00 DBG connect to logstash host elastic:9001:10200
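Every publish attempt ends in the same write i/o timeout roughly 30 seconds after the connect. Before digging further I want to confirm plain TCP reachability from the Filebeat host to port 9001 on the Logstash box; the hostname below is just the one from the logs, and the commands are generic socket checks, nothing Beats-specific:

nc -vz elastic 9001
# or, if nc is not installed:
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/elastic/9001' && echo open || echo unreachable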
Here is the end of logstash-plain.log:
[2018-02-23T21:32:58,167][DEBUG][io.netty.handler.ssl.CipherSuiteConverter] Cipher suite mapping: SSL_RSA_WITH_3DES_EDE_CBC_SHA => DES-CBC3-SHA
[2018-02-23T21:32:58,168][DEBUG][io.netty.handler.ssl.OpenSsl] Supported protocols (OpenSSL): [[SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]]
[2018-02-23T21:32:58,168][DEBUG][io.netty.handler.ssl.OpenSsl] Default cipher suites (OpenSSL): [TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA]
[2018-02-23T21:32:58,169][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:9001"}
[2018-02-23T21:32:58,198][DEBUG][io.netty.channel.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 2
[2018-02-23T21:32:58,243][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.noKeySetOptimization: false
[2018-02-23T21:32:58,244][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
[2018-02-23T21:32:58,265][DEBUG][io.netty.util.internal.PlatformDependent] org.jctools-core.MpscChunkedArrayQueue: available
[2018-02-23T21:32:58,308][INFO ][org.logstash.beats.Server] Starting server on port: 9001
[2018-02-23T21:32:58,353][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x229463b1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[2018-02-23T21:32:58,372][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-02-23T21:32:58,382][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.processId: 4522 (auto-detected)
[2018-02-23T21:32:58,388][DEBUG][io.netty.util.NetUtil ] -Djava.net.preferIPv4Stack: true
[2018-02-23T21:32:58,388][DEBUG][io.netty.util.NetUtil ] -Djava.net.preferIPv6Addresses: false
[2018-02-23T21:32:58,390][DEBUG][io.netty.util.NetUtil ] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[2018-02-23T21:32:58,391][DEBUG][io.netty.util.NetUtil ] /proc/sys/net/core/somaxconn: 128
[2018-02-23T21:32:58,392][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.machineId: 1e:d6:4b:ff:fe:a3:aa:ca (auto-detected)
[2018-02-23T21:33:01,165][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-23T21:33:01,166][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-02-23T21:33:03,366][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x229463b1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
...
[2018-02-23T21:38:06,622][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-23T21:38:06,622][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-02-23T21:38:08,392][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x229463b1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
[2018-02-23T21:38:11,627][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-23T21:38:11,628][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
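The beats input claims to be listening on 0.0.0.0:9001, and after startup the pipeline just keeps pushing periodic flushes with no events in between. To double-check the listener from the Logstash host itself (an ordinary socket listing, nothing specific to this setup):

ss -tlnp | grep 9001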