Config file compatibility

Hello,

I would like to ask whether the Logstash 5.0.0 alpha3 configuration is compatible with version 2.x config files.

I am asking because I have copied my old version 2.x config file into /etc/logstash/conf.d/, but in Elasticsearch it does not seem that the information is being parsed according to the config file.

If it's compatible, how can I make it work?

Regards,
Peter

The pipeline file format is the same, but many plugins have changed or deprecated options.
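One quick way to check an old file against the new version is the config test flag (paths assumed; the long option was renamed in 5.x, to --config.test_and_exit if I remember right):

    # Logstash 2.x
    bin/logstash --configtest -f /etc/logstash/conf.d/

    # Logstash 5.x (the short -t form works on both)
    bin/logstash -t -f /etc/logstash/conf.d/

That will at least tell you whether the file still parses and whether any options are no longer accepted.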

What error did you get? How are you starting Logstash?

I don't get any error messages in the log files. I am shipping an nginx access.log with Filebeat and forwarding it to Logstash, where I parse the logs with some grok patterns and apply a geoip filter to the IP address. I have observed that the information arriving in Elasticsearch is stored as-is, without being parsed by the grok and geoip filters.

I am starting Logstash using systemctl.
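For completeness, I check its state like this (assuming the unit is named logstash):

    sudo systemctl status logstash
    sudo journalctl -u logstash --since "10 minutes ago"

and I see no errors there either.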

Regards,
Peter

I'd start by sending to the stdout output first, and see if you can enable verbose logging.
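For example, a minimal test output (rubydebug is easier to read than json):

    output {
      stdout { codec => rubydebug }
    }

Then run Logstash in the foreground so events are printed to your terminal; on 2.x the verbosity flag is --verbose, while 5.x moved this to a log level setting (--log.level, if I remember correctly).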

Running Logstash with verbose logging at startup, I get lots of lines like this:

{"timestamp":"2016-06-02T22:00:43.556000+0000","message":"Adding pattern","RAILS3FOOT":"Completed %{NUMBER:response}%{DATA} in %{NUMBER:totalms}ms %{RAILS3PROFILE}%{GREEDYDATA}","level":"info"} {"timestamp":"2016-06-02T22:00:43.558000+0000","message":"Adding pattern","RAILS3PROFILE":"(?:\\(Views: %{NUMBER:viewms}ms \\| ActiveRecord: %{NUMBER:activerecordms}ms|\\(ActiveRecord: %{NUMBER:activerecordms}ms)?","level":"info"} {"timestamp":"2016-06-02T22:00:43.558000+0000","message":"Adding pattern","RAILS3":"%{RAILS3HEAD}(?:%{RPROCESSING})?(?<context>(?:%{DATA}\\n)*)(?:%{RAILS3FOOT})?","level":"info"} {"timestamp":"2016-06-02T22:00:43.560000+0000","message":"Adding pattern","REDISTIMESTAMP":"%{MONTHDAY} %{MONTH} %{TIME}","level":"info"} {"timestamp":"2016-06-02T22:00:43.561000+0000","message":"Adding pattern","REDISLOG":"\\[%{POSINT:pid}\\] %{REDISTIMESTAMP:timestamp} \\* ","level":"info"} {"timestamp":"2016-06-02T22:00:43.563000+0000","message":"Adding pattern","RUBY_LOGLEVEL":"(?:DEBUG|FATAL|ERROR|WARN|INFO)","level":"info"} {"timestamp":"2016-06-02T22:00:43.564000+0000","message":"Adding pattern","RUBY_LOGGER":"[DFEWI], \\[%{TIMESTAMP_ISO8601:timestamp} #%{POSINT:pid}\\] *%{RUBY_LOGLEVEL:loglevel} -- +%{DATA:progname}: %{GREEDYDATA:message}","level":"info"} {"timestamp":"2016-06-02T22:00:43.596000+0000","message":"Using geoip database","path":"/etc/logstash/GeoLite2-City.mmdb","level":"info"} {"timestamp":"2016-06-02T22:00:43.645000+0000","message":"Starting pipeline","id":"main","pipeline.workers":4,"pipeline.batch.size":125,"pipeline.batch.delay":5,"pipeline.max_inflight":500,"level":"info"} {"timestamp":"2016-06-02T22:00:43.663000+0000","message":"Pipeline main started"}

After the pipeline is started, there is nothing more in the log.
I have also put stdout { codec => json } in the output, but that does not change anything.

Also, the logstash.conf looks like this:

input {
  beats {
    host => "0.0.0.0"
    port => "5400"
  }
}

filter {
  if [type] == "nginx-access" {
    grok {
      match => { 'message' => '%{IPORHOST:clientip} %{USER:ident} %{USER:agent} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{URIPATHPARAM:request}(?: HTTP/%{NUMBER:httpversion})?|)\" %{NUMBER:answer} (?:%{NUMBER:byte}|-) (?:\"(?:%{URI:referrer}|-))\" (?:%{QS:referree}) %{QS:agent}' }
    }
    geoip {
      source => "clientip"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}

output {
  stdout { codec => json }
  elasticsearch {
    hosts => ["localhost:9200"]
    user => someuser
    password => somepass
  }
}

How do you know you're still getting data sent to the beats input?

I have multiple checks: the logstash index is increasing in size, Logstash data shows up in Elasticsearch unprocessed, and tcpdump on the Logstash/Beats port on the server where Filebeat is running shows traffic.
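For reference, the tcpdump check was along these lines (5400 is the port from my beats input):

    sudo tcpdump -ni any port 5400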

Does no one have any idea why I am getting this issue?

I have figured it out. The new Filebeat ships with two config files, a normal one and a full one, and the normal one does not include the document_type option. Since my Logstash config requires the document type to be nginx-access, it did not parse the log files, which arrived with the default document_type: log. Once I configured the full file and renamed it to filebeat.yml, everything started to work correctly.
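For anyone hitting the same issue, the relevant part of my filebeat.yml ended up looking roughly like this (the log path is just an example, using the 1.x/5.x-era prospector syntax):

    filebeat:
      prospectors:
        - paths:
            - /var/log/nginx/access.log
          document_type: nginx-access

The document_type value is what arrives in Logstash as [type], which is why the filter's if [type] == "nginx-access" condition never matched with the default of log.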

However, I am still not able to get the logs on Logstash.

Regards,
Peter