Logstash 6.4.x: reading multiple files in conf.d

Hello

I have run into a very strange problem on Debian 9.5: my conf files in /etc/logstash/conf.d are read correctly on their own, but as soon as there are several of them, they are no longer taken into account.
Example:
Just one file, iptables.conf:
systemctl start logstash.service

new iptables lines appear in:
root@kvm:~# curl -XGET 'localhost:9200/_cat/indices?v'

Just one file, auth.conf:
systemctl start logstash.service

new auth lines appear in:
root@kvm:~# curl -XGET 'localhost:9200/_cat/indices?v'

But when exactly the same two files, iptables.conf and auth.conf, are both in conf.d:

root@kvm:~# curl -XGET 'localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .kibana hnnsF0SfSK6J_Aepw1Oxpw 1 0 5 0 37.5kb 37.5kb

No iptables or auth index is created.
I'm quite discouraged... several days without a solution. I have tried 6.4.0 and 6.4.1, but the issue is the same.

Thank you very much if someone can help me.

Is there anything in the Elasticsearch or Logstash logs? Logstash will concatenate all files in the directory - are you using conditionals to separate the flows? Do the two data types have any fields with conflicting formats that would cause a mapping error if they were written to the same index?
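
To illustrate the concatenation point: with the default single pipeline, all *.conf files in the directory are merged into one configuration, so every event passes through every filter and reaches every output unless conditionals guard them. A minimal sketch with hypothetical files a.conf and b.conf and hypothetical pattern names (not your actual configuration):

# a.conf + b.conf behave as if you had written one single file:
input {
  tcp { port => 5001 tags => ["type_a"] }    # from a.conf
  tcp { port => 5003 tags => ["type_b"] }    # from b.conf
}
filter {
  # both groks run on every event, whichever port it arrived on,
  # so every event is likely to fail at least one of them
  grok { match => { "message" => "%{PATTERN_A:field_a}" } }   # from a.conf
  grok { match => { "message" => "%{PATTERN_B:field_b}" } }   # from b.conf
}
output {
  # without conditionals, every event would be sent to both indices
  elasticsearch { hosts => ["localhost:9200"] index => "index_a" }   # from a.conf
  elasticsearch { hosts => ["localhost:9200"] index => "index_b" }   # from b.conf
}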

Hello
Thank you, Christian, for your interest.

I create two separate indices; my two conf files are:

root@kvm:~# cat /etc/logstash/conf.d/auth.conf
input {
  tcp {
    port => "5001"
    codec => json
    tags => ["syslogauth"]
  }
}

filter {
  grok {
    named_captures_only => false
    break_on_match => true
    match => { "message" => [" New session %{NUMBER} of user %{USERNAME:user}."," Accepted password for %{USERNAME:user} from %{IP:ip} port %{NUMBER} ssh2"," Failed password for %{USERNAME:user} from %{IP} port %{NUMBER} ssh2"," Accepted publickey for %{USERNAME} from %{IP:ip} port %{NUMBER} ssh2"] }
  }

  if "_grokparsefailure" in [tags] {
    drop { }
  }
}

output {
  if "syslogauth" in [tags] {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => "auth"
    }
  }
}

And

root@kvm:~# cat /etc/logstash/conf.d/iptables.conf
input {
  tcp {
    port => "5003"
    codec => "json"
    tags => ["iptables"]
  }
}

# The filter part of this file is commented out to indicate that it is
# optional.
filter {

  # IPTABLES DROP/REJECT/ACCEPT
  grok {
    named_captures_only => false
    break_on_match => true
    patterns_dir => "/etc/logstash/iptables.pattern"
    match => { "message" => "%{IPTABLES}" }
  }

  # convert strings to integers for elasticsearch
  mutate {
    convert => {
      "src_port" => "integer"
      "dst_port" => "integer"
    }
    remove_field => [ "host" ] # kludge to avoid errors in the logs about the "host" field
  }

  geoip {
    source => "src_ip"
  }
  geoip {
    source => "dst_ip"
  }

  if "_grokparsefailure" in [tags] {
    drop { }
  }
}

output {
  if "iptables" in [tags] {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => "iptables"
    }
  }
}

In this configuration the indices are not created in Elasticsearch.
When I remove one of the two files, it works perfectly.

I installed the deb package with no specific configuration.

The Logstash logs seem fine:

[2018-09-21T22:10:16,031][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-09-21T22:10:16,051][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-09-21T22:10:16,069][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-09-21T22:10:16,755][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-09-21T22:10:16,843][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-09-21T22:10:16,957][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5001", :ssl_enable=>"false"}
[2018-09-21T22:10:17,313][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5003", :ssl_enable=>"false"}
[2018-09-21T22:10:17,397][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6c326101 sleep>"}
[2018-09-21T22:10:17,482][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-21T22:10:18,030][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

And Elasticsearch:

[2018-09-22T01:30:00,000][INFO ][o.e.x.m.MlDailyMaintenanceService] triggering scheduled [ML] maintenance tasks
[2018-09-22T01:30:00,035][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [LRjRHtV] Deleting expired data
[2018-09-22T01:30:00,045][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [LRjRHtV] Completed deletion of expired data
[2018-09-22T01:30:00,045][INFO ][o.e.x.m.MlDailyMaintenanceService] Successfully completed [ML] maintenance tasks

tcpdump shows syslog traffic arriving on port 5001 but no data going to 9200:

root@kvm:~# tcpdump -i br0 -nn -vvv -s 0 port 5001 -w -
tcpdump: listening on br0, link-type EN10MB (Ethernet), capture size 262144 bytes
  {"@timestamp":"2018-09-22T09:17:34.864681+02:00","@version":"1","message":" Accepted publickey for arnaud from 192.168.0.60 port 51458 ssh2","sysloghost":"gibson","severity:"info","facility":"authpriv","programname":"sshd","procid":"10520"}

{@timestamp":"2018-09-22T09:17:34.871754+02:00","@version":"1","message":" pam_unix(sshd:session): session opened for user arnaud by (uid=0)","sysloghost":"gibson",etc..."}

and

root@kvm:~# tcpdump -i lo -nn -vvv -s 0 port 9200 -w -
tcpdump: listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes
HEAD / HTTP/1.1
Host: localhost:9200
Content-Length: 0
Connection: keep-alive

HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 493

GET /_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip
Host: localhost:9200
Content-Length: 0
Connection: keep-alive

No syslog data on 9200, but it works fine when only auth.conf is in /etc/logstash/conf.d.

Below are all the logs after starting the Logstash process:

root@kvm:~# tail -f /var/log/logstash/logstash-plain.log
[2018-09-22T09:43:31,110][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-09-22T09:43:36,838][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>31, "name"=>"[main]<tcp", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-tcp-5.0.9-java/lib/logstash/inputs/tcp.rb:180:in `close'"}], ["LogStash::Filters::GeoIP", {"source"=>"dst_ip", "id"=>"04dececbe7c1c2024cf9b2416f8d5eb17a4a6b70658cdc22c220656237625b57"}]=>[{"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:316:in `read_batch'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:316:in `read_batch'"}]}}
[2018-09-22T09:43:36,910][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
[2018-09-22T09:43:41,843][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x6c326101 run>"}

[2018-09-22T09:45:48,536][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.1"}
[2018-09-22T09:45:56,086][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-22T09:45:57,242][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-09-22T09:45:57,266][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-09-22T09:45:57,792][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-09-22T09:45:58,073][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-09-22T09:45:58,079][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-09-22T09:45:58,145][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-09-22T09:45:58,185][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-09-22T09:45:58,248][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-09-22T09:45:58,260][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-09-22T09:45:58,261][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-09-22T09:45:58,324][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-09-22T09:45:58,364][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-09-22T09:45:58,364][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-09-22T09:45:58,392][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-09-22T09:45:58,397][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-09-22T09:45:58,418][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-09-22T09:45:59,305][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-09-22T09:45:59,423][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[2018-09-22T09:45:59,551][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5001", :ssl_enable=>"false"}
[2018-09-22T09:46:00,233][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5003", :ssl_enable=>"false"}
[2018-09-22T09:46:00,315][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x20265a26 run>"}
[2018-09-22T09:46:00,440][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-22T09:46:00,988][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

And for Elasticsearch:

root@kvm:~# tail -f /var/log/elasticsearch/elasticsearch.log

    [2018-09-22T09:44:08,824][INFO ][o.e.n.Node               ] [LRjRHtV] version[6.4.1], pid[26924], build[default/deb/e36acdb/2018-09-13T22:18:07.696808Z], OS[Linux/4.9.0-3-amd64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_181/25.181-b13]
    [2018-09-22T09:44:08,825][INFO ][o.e.n.Node               ] [LRjRHtV] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.a8E4dXc7, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb]


    [2018-09-22T09:44:15,099][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [aggs-matrix-stats]
    [2018-09-22T09:44:15,099][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [analysis-common]
    [2018-09-22T09:44:15,101][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [ingest-common]
    [2018-09-22T09:44:15,101][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [lang-expression]
    [2018-09-22T09:44:15,101][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [lang-mustache]
    [2018-09-22T09:44:15,101][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [lang-painless]
    [2018-09-22T09:44:15,102][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [mapper-extras]
    [2018-09-22T09:44:15,102][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [parent-join]
    [2018-09-22T09:44:15,103][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [percolator]
    [2018-09-22T09:44:15,103][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [rank-eval]
    [2018-09-22T09:44:15,103][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [reindex]
    [2018-09-22T09:44:15,103][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [repository-url]
    [2018-09-22T09:44:15,103][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [transport-netty4]
    [2018-09-22T09:44:15,103][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [tribe]
    [2018-09-22T09:44:15,103][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-core]
    [2018-09-22T09:44:15,104][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-deprecation]
    [2018-09-22T09:44:15,105][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-graph]
    [2018-09-22T09:44:15,105][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-logstash]
    [2018-09-22T09:44:15,106][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-ml]
    [2018-09-22T09:44:15,106][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-monitoring]
    [2018-09-22T09:44:15,106][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-rollup]
    [2018-09-22T09:44:15,106][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-security]
    [2018-09-22T09:44:15,106][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-sql]
    [2018-09-22T09:44:15,106][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-upgrade]
    [2018-09-22T09:44:15,106][INFO ][o.e.p.PluginsService     ] [LRjRHtV] loaded module [x-pack-watcher]
    [2018-09-22T09:44:15,107][INFO ][o.e.p.PluginsService     ] [LRjRHtV] no plugins loaded
    [2018-09-22T09:44:24,722][INFO ][o.e.x.s.a.s.FileRolesStore] [LRjRHtV] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
    [2018-09-22T09:44:26,430][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/26987] [Main.cc@109] controller (64 bit): Version 6.4.1 (Build 1df3104bc26648) Copyright (c) 2018 Elasticsearch BV
    [2018-09-22T09:44:27,335][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
    [2018-09-22T09:44:27,812][INFO ][o.e.d.DiscoveryModule    ] [LRjRHtV] using discovery type [zen]
    [2018-09-22T09:44:29,572][INFO ][o.e.n.Node               ] [LRjRHtV] initialized
    [2018-09-22T09:44:29,573][INFO ][o.e.n.Node               ] [LRjRHtV] starting ...
    [2018-09-22T09:44:30,063][INFO ][o.e.t.TransportService   ] [LRjRHtV] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
    [2018-09-22T09:44:33,234][INFO ][o.e.c.s.MasterService    ] [LRjRHtV] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {LRjRHtV}{LRjRHtV_Rr2sQROudZAXTA}{3qHMEPVlSv-TWYFaoq4bLg}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=3158278144, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
    [2018-09-22T09:44:33,242][INFO ][o.e.c.s.ClusterApplierService] [LRjRHtV] new_master {LRjRHtV}{LRjRHtV_Rr2sQROudZAXTA}{3qHMEPVlSv-TWYFaoq4bLg}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=3158278144, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {LRjRHtV}{LRjRHtV_Rr2sQROudZAXTA}{3qHMEPVlSv-TWYFaoq4bLg}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=3158278144, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
    [2018-09-22T09:44:33,329][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [LRjRHtV] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
    [2018-09-22T09:44:33,329][INFO ][o.e.n.Node               ] [LRjRHtV] started
    [2018-09-22T09:44:34,137][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [LRjRHtV] Failed to clear cache for realms [[]]
    [2018-09-22T09:44:34,248][INFO ][o.e.l.LicenseService     ] [LRjRHtV] license [030a953f-0eb8-4ed8-91a8-5d19a90dfe77] mode [basic] - valid
    [2018-09-22T09:44:34,268][INFO ][o.e.g.GatewayService     ] [LRjRHtV] recovered [0] indices into cluster_state

You are not protecting your filters using conditionals, which means that all events will go through all filters. As you are dropping everything that fails grok, I suspect this may apply to all events as the two types are mutually exclusive and every event would fail at least one of the patterns.

You can easily verify this by writing everything that failed grok to a file instead of just dropping it.

Hi Christian,

I was not able to replace the drop with a file (I don't know why); this:

if "_grokparsefailure" in [tags] {
  file {
      path => "/tmp/auth.log"
      codec => rubydebug
    }
 }

gives me an error.

It's not a problem. I removed this filter just to see, and the indices are created in Elasticsearch, but now all my lines in Kibana begin with:

tags: auth, _grokparsefailure, _geoip_lookup_failure message: .....(as you guessed)

I think the solution is what you said: "You are not protecting your filters using conditionals...".
Sorry, but I don't understand: I use a tag in the input and output to create a specific index, and the filter selects only what I want?
I just want an iptables index for my iptables logs and a second index for all authentications on my system.

You need to check for the tag around your filters as well, just like you do for the outputs, like this:

input {
  tcp {
    port => "5001"
    codec => json
    tags => ["syslogauth"]
  }
}

filter {
  if "syslogauth" in [tags] {
    grok {
      named_captures_only => false
      break_on_match => true
      match => { "message" => [" New session %{NUMBER} of user %{USERNAME:user}."," Accepted password for %{USERNAME:user} from %{IP:ip} port %{NUMBER} ssh2"," Failed password for %{USERNAME:user} from %{IP} port %{NUMBER} ssh2"," Accepted publickey for %{USERNAME} from %{IP:ip} port %{NUMBER} ssh2"] }
    }

    if "_grokparsefailure" in [tags] {
      drop { }
    }
  }
}

output {
  if "syslogauth" in [tags] {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => "auth"
    }
  }
}

If you want to write to file you need to do so in the output block. Write to file if the grok parse failure tag is set and otherwise write to Elasticsearch.
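
A minimal sketch of that output block (the file path is just an example):

output {
  if "_grokparsefailure" in [tags] {
    file {
      path => "/tmp/grok_failures.log"   # example path, any location Logstash can write to
      codec => rubydebug                 # human-readable dump of the failed events
    }
  } else if "syslogauth" in [tags] {
    elasticsearch {
      hosts => [ "localhost:9200" ]
      index => "auth"
    }
  }
}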

Thank you very much, Christian, for your explanations: very clear, and it works very well!

Arno

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.