Logstash-Syslog Configuration in 7.0

(Woody) #1

My current Logstash indexing is not working in 7.0. Has the syntax changed in 7.0?
Btw, all this while it was working in 6.7.
My current configuration is as below:

input {
  tcp {
    port => 5514
    type => syslog
  }
  udp {
    port => 5514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["10.3.3.41:9200"] }
  stdout { codec => rubydebug }
}
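
To rule out a 7.0 syntax problem, the config can be validated without starting the pipeline. A minimal check, assuming the pipeline file lives at /etc/logstash/conf.d/syslog.conf (adjust the paths to your install):

# validate the pipeline configuration and exit without running it
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit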

I can see my input arriving in /var/log/messages:

May  8 15:37:03 z3elk-01 logstash: {
May  8 15:37:03 z3elk-01 logstash: "type" => "syslog",
May  8 15:37:03 z3elk-01 logstash: "@version" => "1",
May  8 15:37:03 z3elk-01 logstash: "@timestamp" => 2019-05-08T07:37:10.000Z,
May  8 15:37:03 z3elk-01 logstash: "syslog_hostname" => "z3leo-r03",
May  8 15:37:03 z3elk-01 logstash: "message" => "<166>May  8 15:37:10 z3leo-r03 IgmpSnooping: %IGMPSNOOPING-6-NO_IGMP_QUERIER: No IGMP querier detected in VLAN 7. IGMP report received from 30.32.7.61 on Ethernet11 for 239.255.255.253",
May  8 15:37:03 z3elk-01 logstash: "syslog_timestamp" => "May  8 15:37:10",
May  8 15:37:03 z3elk-01 logstash: "received_at" => "2019-05-08T07:37:02.972Z",
May  8 15:37:03 z3elk-01 logstash: "host" => "10.3.3.225",
May  8 15:37:03 z3elk-01 logstash: "received_from" => "10.3.3.225",
May  8 15:37:03 z3elk-01 logstash: "syslog_message" => "%IGMPSNOOPING-6-NO_IGMP_QUERIER: No IGMP querier detected in VLAN 7. IGMP report received from 30.32.7.61 on Ethernet11 for 239.255.255.253",
May  8 15:37:03 z3elk-01 logstash: "syslog_program" => "IgmpSnooping"
May  8 15:37:03 z3elk-01 logstash: }

but it is not being indexed or shown in the Kibana console. Can any expert advise?
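
A quick sanity check, assuming curl access to the node from the output config, is to list the indices and see whether a logstash index is being created at all:

# list all indices; look for a logstash-* index or the "logstash" write alias
curl 'http://10.3.3.41:9200/_cat/indices?v'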

(Badger) #2

If either Logstash or Elasticsearch is getting an error, it will log an error message. What do you see in those two logs?
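
For a default package install the logs are usually in these locations (an assumption; adjust if you installed differently):

# default package-install log locations
tail -f /var/log/logstash/logstash-plain.log
tail -f /var/log/elasticsearch/elasticsearch.log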

(Woody) #3

Hi Badger,

I saw these in logstash-plain.log:

[2019-05-08T15:25:03,420][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2019-05-08T15:25:10,165][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
[2019-05-08T15:25:10,360][INFO ][logstash.runner          ] Logstash shut down.
[2019-05-08T15:25:50,513][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.0.1"}
[2019-05-08T15:26:00,783][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.3.3.41:9200/]}}
[2019-05-08T15:26:00,984][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.3.3.41:9200/"}
[2019-05-08T15:26:01,085][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-05-08T15:26:01,091][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-05-08T15:26:01,115][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-05-08T15:26:01,145][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.3.3.41:9200"]}
[2019-05-08T15:26:01,370][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1, "index.lifecycle.name"=>"logstash-policy", "index.lifecycle.rollover_alias"=>"logstash"}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-05-08T15:26:01,448][INFO ][logstash.outputs.elasticsearch] Creating rollover alias <logstash-{now/d}-000001>
[2019-05-08T15:26:01,575][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x34652b63 run>"}
[2019-05-08T15:26:01,853][INFO ][logstash.outputs.elasticsearch] Rollover Alias <logstash-{now/d}-000001> already exists. Skipping
[2019-05-08T15:26:02,074][INFO ][logstash.outputs.elasticsearch] Installing ILM policy {"policy"=>{"phases"=>{"hot"=>{"actions"=>{"rollover"=>{"max_size"=>"50gb", "max_age"=>"30d"}}}}}} to _ilm/policy/logstash-policy
[2019-05-08T15:26:02,362][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-05-08T15:26:02,409][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:5514", :ssl_enable=>"false"}
[2019-05-08T15:26:02,730][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-05-08T15:26:02,836][INFO ][logstash.inputs.udp      ] Starting UDP listener {:address=>"0.0.0.0:5514"}
[2019-05-08T15:26:02,951][INFO ][logstash.inputs.udp      ] UDP listener started {:address=>"0.0.0.0:5514", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2019-05-08T15:26:03,243][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
(Woody) #4

Here are the cluster logs:

[2019-05-09T07:59:52,194][INFO ][o.e.c.m.MetaDataCreateIndexService] [z3elk-01] [.monitoring-kibana-7-2019.05.09] creating index, cause [auto(bulk api)], templates [.monitoring-kibana, replica-counts], shards [1]/[0], mappings [_doc]
[2019-05-09T07:59:52,217][INFO ][o.e.c.r.a.AllocationService] [z3elk-01] updating number_of_replicas to [1] for indices [.monitoring-kibana-7-2019.05.09]
[2019-05-09T07:59:52,381][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T07:59:52,546][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T07:59:52,547][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T07:59:52,670][INFO ][o.e.c.r.a.AllocationService] [z3elk-01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-kibana-7-2019.05.09][0]] ...]).
[2019-05-09T07:59:52,713][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T07:59:59,836][INFO ][o.e.c.m.MetaDataCreateIndexService] [z3elk-01] [.monitoring-es-7-2019.05.09] creating index, cause [auto(bulk api)], templates [.monitoring-es, replica-counts], shards [1]/[0], mappings [_doc]
[2019-05-09T07:59:59,838][INFO ][o.e.c.r.a.AllocationService] [z3elk-01] updating number_of_replicas to [1] for indices [.monitoring-es-7-2019.05.09]
[2019-05-09T07:59:59,938][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T08:00:00,073][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T08:00:00,073][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T08:00:00,164][INFO ][o.e.c.r.a.AllocationService] [z3elk-01] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-7-2019.05.09][0]] ...]).
[2019-05-09T08:00:00,207][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
[2019-05-09T09:00:00,002][INFO ][o.e.x.m.e.l.LocalExporter] [z3elk-01] cleaning up [2] old indices
[2019-05-09T09:00:00,017][INFO ][o.e.c.m.MetaDataDeleteIndexService] [z3elk-01] [.monitoring-kibana-6-2019.05.02/LaWI9-HwQV67M1Frm4g3zw] deleting index
[2019-05-09T09:00:00,017][INFO ][o.e.c.m.MetaDataDeleteIndexService] [z3elk-01] [.monitoring-es-6-2019.05.02/yDO1M-m9RaWNhq-5eaoroA] deleting index
[2019-05-09T09:00:00,202][WARN ][o.e.g.LocalAllocateDangledIndices] [z3elk-01] ignoring dangled index [[.kibana/gAWxTyGBRy-7Q7xkYjXT2A]] on node [{z3elk-02}{CXI1oA2tRu6GjeTvrvrwPw}{B1v2fUUiRVeAbznz4DDiTw}{10.3.3.42}{10.3.3.42:9300}{ml.machine_memory=8203431936, ml.max_open_jobs=20, xpack.installed=true}] due to an existing alias with the same name
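
The repeated warnings point at a dangled .kibana index clashing with an alias of the same name. One way to inspect this, assuming curl access to the cluster:

# in 7.x, .kibana is normally an alias pointing at a versioned index such as .kibana_1;
# these list the alias and any leftover .kibana indices
curl 'http://10.3.3.41:9200/_cat/aliases?v'
curl 'http://10.3.3.41:9200/_cat/indices/.kibana*?v'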

Any ideas?

(Woody) #5

Hi guys, can anyone assist?

#6

Increase the Logstash log level to debug. Use netstat to verify that a network connection to 10.3.3.41:9200 opens when you start Logstash.
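
For example (the logging API call assumes the default Logstash API port 9600, which the log above shows is in use):

# raise the elasticsearch output logger to DEBUG at runtime via the logging API
curl -XPUT 'localhost:9600/_node/logging' -H 'Content-Type: application/json' -d '
{ "logger.logstash.outputs.elasticsearch" : "DEBUG" }'

# confirm an outbound connection to the Elasticsearch node is established
netstat -tn | grep '10.3.3.41:9200'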