Logstash going to sleep without pulling files

log file excerpt:

2017-11-06 11:54:43,333; [LOG_LEVEL=ALWAYS, CMPNT_NM=com.fmr.ifeb.alayer.cache.ehcache.ApplicationCacheImpl, MESSAGE=Initialized cache named 'oscarJdbcDaxCache']

filebeat.yml

---
filebeat.prospectors:
  - input_type: log
    document_type: springlog
    paths:
      - "C:\\Users\\a617744\\Newdata\\data6.log"
    #multiline.pattern: "^\\[[0-9]{4}-[0-9]{2}-[0-9]{2}"
    #multiline.negate: true
    #multiline.match: after
    #tail_files: true
logging.level: debug
output.logstash:
  hosts:
    - "localhost:5044"

Logstash config file:

input {
  beats {
    port => 5044
  }
}
filter {
  mutate {
    gsub => ["message", "\r", ""]
  }
  grok {
    id => "myspringlogfilter"
    # "[" and "]" are regex metacharacters in grok patterns and must be escaped.
    # The sample line has a single space after ";", its level "ALWAYS" is not in
    # the stock LOGLEVEL pattern, and the MESSAGE value is not a quoted string,
    # so WORD and GREEDYDATA are used here instead.
    match => { "message" => ["(?m)^%{TIMESTAMP_ISO8601:timestamp}; \[LOG_LEVEL=%{WORD:log-level}, CMPNT_NM=%{NOTSPACE:component}, MESSAGE=%{GREEDYDATA:restmessage}\]"] }
    overwrite => ["message"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #index => "filebeat"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
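
As a quick way to verify the grok pattern independently of the Beats connection, a minimal throwaway pipeline can read the sample line from stdin. This is just a sketch: save it as a separate config, run bin\logstash -f against it, and paste in the line from the log excerpt above.

input { stdin { } }
filter {
  grok {
    match => { "message" => "^%{TIMESTAMP_ISO8601:timestamp}; \[LOG_LEVEL=%{WORD:log-level}, CMPNT_NM=%{NOTSPACE:component}, MESSAGE=%{GREEDYDATA:restmessage}\]" }
  }
}
output { stdout { codec => rubydebug } }

If the line parses there, the filter is not the problem and attention can turn to the Filebeat-to-Logstash connection.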

The Logstash log looks like:

[2017-12-08T20:11:12,522][DEBUG][io.netty.handler.ssl.CipherSuiteConverter] Cipher suite mapping: TLS_PSK_WITH_RC4_128_SHA => PSK-RC4-SHA
[2017-12-08T20:11:12,522][DEBUG][io.netty.handler.ssl.CipherSuiteConverter] Cipher suite mapping: SSL_PSK_WITH_RC4_128_SHA => PSK-RC4-SHA
[2017-12-08T20:11:12,522][DEBUG][io.netty.handler.ssl.CipherSuiteConverter] Cipher suite mapping: TLS_RSA_WITH_RC4_128_MD5 => RC4-MD5
[2017-12-08T20:11:12,522][DEBUG][io.netty.handler.ssl.CipherSuiteConverter] Cipher suite mapping: SSL_RSA_WITH_RC4_128_MD5 => RC4-MD5
[2017-12-08T20:11:12,532][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-12-08T20:11:12,552][DEBUG][io.netty.channel.MultithreadEventLoopGroup] -Dio.netty.eventLoopThreads: 4
[2017-12-08T20:11:12,645][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.noKeySetOptimization: false
[2017-12-08T20:11:12,647][DEBUG][io.netty.channel.nio.NioEventLoop] -Dio.netty.selectorAutoRebuildThreshold: 512
[2017-12-08T20:11:12,685][INFO ][logstash.pipeline        ] Pipeline started {"pipeline.id"=>"main"}
[2017-12-08T20:11:12,713][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2017-12-08T20:11:12,688][DEBUG][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x35a4651d@C:/Users/a617744/logstash-6.0.0/logstash-core/lib/logstash/pipeline.rb:290 run>"}
[2017-12-08T20:11:12,911][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2017-12-08T20:11:12,916][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.processId: 8956 (auto-detected)
[2017-12-08T20:11:13,042][DEBUG][io.netty.util.NetUtil    ] Loopback interface: lo (Software Loopback Interface 1, 127.0.0.1)
[2017-12-08T20:11:13,044][DEBUG][io.netty.util.NetUtil    ] \proc\sys\net\core\somaxconn: 200 (non-existent)
[2017-12-08T20:11:13,094][DEBUG][io.netty.channel.DefaultChannelId] -Dio.netty.machineId: 00:50:56:ff:fe:b4:1d:75 (auto-detected)
[2017-12-08T20:11:17,692][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x35a4651d@C:/Users/a617744/logstash-6.0.0/logstash-core/lib/logstash/pipeline.rb:290 sleep>"}
[2017-12-08T20:11:22,699][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x35a4651d@C:/Users/a617744/logstash-6.0.0/logstash-core/lib/logstash/pipeline.rb:290 sleep>"}
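
A note on those last two entries: "Pushing flush onto pipeline" with :thread=>"... sleep" is normal housekeeping. Logstash pushes a flush onto the pipeline every five seconds (the two timestamps above are exactly five seconds apart), and "sleep" is just the idle state of the pipeline thread while it waits for events. It does not mean Logstash has gone to sleep; it means nothing is arriving from Filebeat.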

Have you established that Filebeat is reading the files and sending them to Logstash? The Filebeat log should contain clues.

Yes. My guess is that Filebeat is not connecting to 127.0.0.1:5044, because earlier the Filebeat log used to show 'connection refused' until the Logstash server came up and 'connect' once it was. I don't see either message now, and I'm not sure what caused the problem. The Filebeat configuration looks fine to me.
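
One way to confirm whether a connection is even being attempted is to probe the Logstash port from the Filebeat host. A sketch using PowerShell's built-in Test-NetConnection, plus Filebeat 6.0's own output test (the port is the one configured in filebeat.yml above):

# probe the Logstash listener directly
Test-NetConnection -ComputerName localhost -Port 5044
# or ask Filebeat itself to test its configured output
.\filebeat.exe test output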

The Filebeat log looks like this:

2017-12-09T11:04:44+05:30 DBG  Disable stderr logging
2017-12-09T11:04:44+05:30 INFO Home path: [C:\Users\a617744\filebeat-6.0.0-windows-x86_64] Config path: [C:\Users\a617744\filebeat-6.0.0-windows-x86_64] Data path: [C:\Users\a617744\filebeat-6.0.0-windows-x86_64\data] Logs path: [C:\Users\a617744\filebeat-6.0.0-windows-x86_64\logs]
2017-12-09T11:04:44+05:30 DBG  Beat metadata path: C:\Users\a617744\filebeat-6.0.0-windows-x86_64\data\meta.json
2017-12-09T11:04:44+05:30 INFO Metrics logging every 30s
2017-12-09T11:04:44+05:30 INFO Beat UUID: 480f9bcd-2fed-44b1-b63c-0d33d35f6dbe
2017-12-09T11:04:44+05:30 INFO Setup Beat: filebeat; Version: 6.0.0
2017-12-09T11:04:44+05:30 DBG  Initializing output plugins
2017-12-09T11:04:44+05:30 DBG  Processors: 
2017-12-09T11:04:44+05:30 DBG  start pipeline event consumer
2017-12-09T11:04:44+05:30 INFO Beat name: INDV050921
2017-12-09T11:04:44+05:30 INFO filebeat start running.
2017-12-09T11:04:44+05:30 INFO Registry file set to: C:\Users\a617744\filebeat-6.0.0-windows-x86_64\data\registry
2017-12-09T11:04:44+05:30 DBG  Windows is interactive: true
2017-12-09T11:04:44+05:30 INFO Loading registrar data from C:\Users\a617744\filebeat-6.0.0-windows-x86_64\data\registry
2017-12-09T11:04:44+05:30 INFO States Loaded from registrar: 13
2017-12-09T11:04:44+05:30 WARN Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2017-12-09T11:04:44+05:30 INFO Loading Prospectors: 1
2017-12-09T11:04:44+05:30 WARN DEPRECATED: input_type prospector config is deprecated. Use type instead. Will be removed in version: 6.0.0
2017-12-09T11:04:44+05:30 DBG  Processors: 
2017-12-09T11:04:44+05:30 DBG  recursive glob disabled
2017-12-09T11:04:44+05:30 DBG  exclude_files: []
2017-12-09T11:04:44+05:30 DBG  New state added for C:\Users\a617744\Newdata\data6.log
2017-12-09T11:04:44+05:30 INFO Starting Registrar
2017-12-09T11:04:44+05:30 DBG  Processing 1 events
2017-12-09T11:04:44+05:30 DBG  Registrar states cleaned up. Before: 13, After: 13
2017-12-09T11:04:44+05:30 DBG  Write registry file: C:\Users\a617744\filebeat-6.0.0-windows-x86_64\data\registry
2017-12-09T11:04:44+05:30 DBG  Prospector with previous states loaded: 1
2017-12-09T11:04:44+05:30 DBG  File Configs: [C:\Users\a617744\Newdata\data6.log C:\Users\a617744\Newdata\data6.log]
2017-12-09T11:04:44+05:30 INFO Starting prospector of type: log; id: 1015238833121505571 
2017-12-09T11:04:44+05:30 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-12-09T11:04:44+05:30 DBG  Start next scan
2017-12-09T11:04:44+05:30 DBG  Check file for harvesting: C:\Users\a617744\Newdata\data6.log
2017-12-09T11:04:44+05:30 DBG  Update existing file for harvesting: C:\Users\a617744\Newdata\data6.log, offset: 487
2017-12-09T11:04:44+05:30 DBG  File didn't change: C:\Users\a617744\Newdata\data6.log
2017-12-09T11:04:44+05:30 DBG  Prospector states cleaned up. Before: 1, After: 1
2017-12-09T11:04:44+05:30 DBG  Registry file updated. 13 states written.
2017-12-09T11:04:54+05:30 DBG  Run prospector
2017-12-09T11:04:54+05:30 DBG  Start next scan
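
The telling lines are "Update existing file for harvesting: ..., offset: 487" and "File didn't change": the registry already records data6.log as read through to the end, so Filebeat has nothing new to send and just keeps rescanning. A minimal sketch for re-shipping the file from the beginning, assuming a full re-read is actually what you want (stop Filebeat first; the path is the registry location from the log above):

# remove the persisted state, then restart Filebeat
Remove-Item "C:\Users\a617744\filebeat-6.0.0-windows-x86_64\data\registry"
.\filebeat.exe -e -c filebeat.yml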

Filebeat -> Elasticsearch works fine; it is only on the Filebeat -> Logstash -> Elasticsearch path that I'm facing this issue.

It worked after I placed my log files in another directory: the connection was established. But the second time I ran Filebeat and Logstash against that location, the same issue happened. What could possibly be the reason?
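
That behavior fits the registry explanation above: a file Filebeat has no stored state for (a copy at a new path gets a new identity) is harvested from offset 0, while on the next run its state has been persisted and it is skipped as unchanged. For repeated testing, one sketch is to give each run a fresh, empty registry by pointing Filebeat at a throwaway data path (the -path.data flag is standard in Beats 6.x; the directory name here is just an example):

.\filebeat.exe -e -c filebeat.yml -path.data C:\temp\filebeat-test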
