Have a couple of questions about Logstash configuration

Hello,

I have one main server with Elasticsearch + Kibana + Logstash installed on it,

and I want to monitor two Apache servers and one firewall with this server.

  1. Is it possible? On each server I would install Logstash-Forwarder, which will send its logs to this main server.

  2. Is it possible to configure Logstash as a syslog server, so I don't have to install another open-source tool like syslog-ng? It should receive the logs from my Juniper firewall on the main server.

Best Regards,
Alek

  1. Yes, that's what Logstash is meant to do. To be clear, you don't have to use logstash-forwarder. Feature-wise it's less capable than Logstash and it only exists because it has a smaller footprint.
  2. Yes, you can use the syslog input to receive syslog messages. Keep in mind that unless you run Logstash as root (which you shouldn't do) you can't listen on the default port of 514. I'm sure you already have a syslog daemon installed so I don't know if you'll actually be maintaining one less piece of software by having Logstash collect syslog messages too.
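Something along these lines should be all you need on the Logstash side (a rough sketch; 5514 is just an arbitrary unprivileged port, point the firewall at whichever port you pick):

input {
  # Receive syslog messages on an unprivileged port instead of 514.
  syslog {
    port => 5514
    type => "syslog"
  }
}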

Thanks for the quick reply.

  1. But when I define the index pattern logstash-* in Kibana, how does it know which server the data came from? I mean:

Server-Apache1 will be the first one sending logs to the main server,
Server-Apache2 will be the second one sending logs to the main server, so how does Kibana know which logstash-* index the data belongs to? Where do I need to configure that?

  2. So I will change the port to something else; I will check it tomorrow morning.

A properly configured Logstash will extract fields with e.g. the hostname from each input log entry, and you can use that to place queries against the data. A Logstash index typically contains data from multiple sources.

Can you give me an example, please?

Thanks,
Alek

https://www.elastic.co/guide/en/logstash/current/config-examples.html#_processing_syslog_messages
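Roughly, the idea is something like this (a trimmed-down sketch of that example; in Kibana you can then filter on e.g. syslog_hostname:"Server-Apache1"):

filter {
  grok {
    # Pull the timestamp, sending host, program and message text out of each syslog line.
    match => ["message", "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"]
  }
}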

Thanks for the syslog example. I mean, is there any example of a correct configuration for multiple Logstash?

I don't know what you mean by "multiple Logstash".

  1. I mean an example of what you wrote:

A properly configured Logstash will extract fields with e.g. the hostname from each input log entry, and you can use that to place queries against the data. A Logstash index typically contains data from multiple sources.

  2. I have successfully configured the firewall to send to my Logstash server, and I can see the information in Kibana,

but I cannot understand how to build a filter for those logs. Let's say I have a message:

I want to have fields like Source IP, Source Port, Destination IP, etc., so that I will be able to build a dashboard,

because right now I have minimal information.

Thanks,
Alek

Okay, but the configuration example I linked to explains exactly how to extract fields from a syslog source which is what you're getting from your firewall. I can see right away that you'll have to adjust the example since the timestamp format is different, but it's the same principle. You'll want to look into the kv filter for parsing your message. It'll take care of most things.
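For the kv filter, something as small as this might already get you the source/destination fields (an untested sketch, assuming the FortiGate portion of the message is a plain series of key=value pairs separated by spaces):

filter {
  kv {
    # Split "srcip=1.2.3.4 dstip=5.6.7.8 ..." style messages into individual fields.
    source => "message"
    field_split => " "
    value_split => "="
  }
}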

Hey, it still doesn't work.

I have created a patterns file at /opt/logstash/patterns/.

The file contains:

FORTIGATE_52BASE <%{NUMBER:syslog_index}>date=%{FORTIDATE:date} time=%{TIME:time} devname=%{HOST:hostname} devid=%{HOST:devid} logid=%{NUMBER:logid} type=%{WORD:type} subtype=%{WORD:subtype} eventtype=%{WORD:eventtype} level=%{WORD:level} vd="%{WORD:vdom}"

FORTIGATE_52BASEV2 <%{NUMBER:syslog_index}>date=%{FORTIDATE:date} time=%{TIME:time} devname=%{HOST:hostname} devid=%{HOST:devid} logid=%{NUMBER:logid} type=%{WORD:type} subtype=%{WORD:subtype} level=%{WORD:level} vd="%{WORD:vdom}"

The file is named fortigate and is owned by the logstash user and group.

My config file:

filter {
  if [type] == "syslog" {
    grok {
      match => ["message", "%{FORTIGATE_52BASE} %{GREEDYDATA:forti_message}"]
    }

    syslog_pri { }

    grok {
      match => [
        "forti_message", "%{FORTIGATE_52BASE}"
        "forti_message", "%{FORTIGATE_52BASEV2}"]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

And I'm getting this:

{:timestamp=>"2015-09-09T01:19:15.695000+0300", :message=>"Error: Expected one of #, {, ,, ] at line 57, column 1 (byte 599) after filter {\n if [type] == "syslog" {\n grok {\nmatch => ["message", "%{FORTIGATE_52BASE} %{GREEDYDATA:forti_message}"]\n } \n\n syslog_pri { }\n\ngrok {\nmatch => [\n"forti_message", "%{FORTIGATE_52BASE}"\n"}
{:timestamp=>"2015-09-09T01:19:15.717000+0300", :message=>"You may be interested in the '--configtest' flag which you can\nuse to validate logstash's configuration before you choose\nto restart a running system."}

Use http://grokconstructor.appspot.com/do/construction to build your filter and extract fields. What I did was set up an rsyslog server and forward all traffic to Logstash, and Logstash gathers all the logs on that connection.
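For example, the forwarding part on each client can be a one-line rsyslog rule (a sketch; the host name and port are placeholders for your Logstash server):

# /etc/rsyslog.d/99-forward.conf on the client
# @@host:port forwards over TCP; a single @ would mean UDP
*.* @@logstash.example.com:5514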

grok {
  match => [
    "forti_message", "%{FORTIGATE_52BASE}"
    "forti_message", "%{FORTIGATE_52BASEV2}"]
}

There's a comma missing after "forti_message", "%{FORTIGATE_52BASE}".
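In other words, the match array should read (with the comma added):

grok {
  match => [
    "forti_message", "%{FORTIGATE_52BASE}",
    "forti_message", "%{FORTIGATE_52BASEV2}"]
}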

I repeat my suggestion to use the kv filter.

Thanks, niraj_kumar.

Well, I tried to use the kv filter and it extracted some fields, but in a strange way and with incorrect data.

I want to try to make my own filter,

so I did a fresh configuration.

I have created a pattern file with this syntax:

After that I have created a conf file with this syntax:

filter {
  if [type] == "syslog" {
    grok {
      match => [ "message", "%{FORTIGATE_52BASEV2}"]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

From what I understood, it should extract fields like src, dst, etc.

I have started the Logstash service and I get messages containing src and dst fields, but they are not extracted. Why? Where did I go wrong?

These are the fields I have:

Does someone know what could be wrong?

Your FORTIGATE_52BASEV2 pattern doesn't match the message. For example, the message begins with <117> but you're matching a plain integer (without angle brackets) followed by two spaces. Perhaps it would be helpful for you to use http://grokconstructor.appspot.com/ to construct your pattern.
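As a hypothetical illustration (the actual pattern file isn't shown here), the start of the pattern would have to consume that prefix, along the lines of:

FORTIGATE_52BASEV2 <%{NONNEGINT:syslog_index}>date=%{FORTIDATE:date} time=%{TIME:time} devname=%{HOST:hostname} ...

with the rest of the key=value fields following as before.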

Well, I can see in Settings that there are new fields created, but I still cannot see them in Discover.

Well, in the end I used the kv filter as you suggested in the first place; it's working and extracting the fields I wanted, and I changed all my configuration.

  1. But it's extracting a lot of time values for me, why?

  2. How can I change the name of the index pattern for Kibana, instead of the default logstash-*?

Best Regards,
Alek

  1. It looks like you're creating fields whose names are the timestamp values.
  2. That can be changed in the Kibana settings. I don't recall exactly where.
  1. This is my configuration:

input {
  file {
    path => ["/var/log/network.log"]
    sincedb_path => "/var/log/logstash"
    start_position => "beginning"
    type => "syslog"
    tags => [ "netsyslog" ]
  }
}

filter {
  kv {
    field_split => ","
  }
}

output {
  elasticsearch {
    protocol => "node"
    host => "localhost"
    cluster => "elasticsearch"
  }
}

  2. I mean: how can I make Logstash send data to Kibana from two different devices with two different index patterns?

Example:
a) The Apache server sends logs and the index pattern will be apache-*
b) The Fortigate sends logs and the index pattern will be fortigate-*
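One way to get that (a sketch only; the "apache" type and the daily date suffix are assumptions, adjust them to your setup) is to pick the index name per event in the output, based on the type or tags set on each input, and then create apache-* and fortigate-* index patterns in Kibana:

output {
  if "netsyslog" in [tags] {
    elasticsearch {
      protocol => "node"
      host => "localhost"
      cluster => "elasticsearch"
      # One index family per device type
      index => "fortigate-%{+YYYY.MM.dd}"
    }
  } else if [type] == "apache" {
    elasticsearch {
      protocol => "node"
      host => "localhost"
      cluster => "elasticsearch"
      index => "apache-%{+YYYY.MM.dd}"
    }
  }
}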