[Simple] Logstash config file

I've got these lines in my log:

1524708231.794259376  8  c0:4a:00:40:e6:0e  c0:4a:00:40:AA:AA   -45
1524708231.829154447  6  3c:08:f6:e6:AA:AA  3c:08:f6:e6:AA:AA   -71 

1st item is Unix timestamp
2nd item is channel
3rd item is MAC
4th item is MAC
5th item is RSSI

What is the correct logstash config to output this to Elasticsearch?
I am running everything on localhost, default settings, latest version (6.2.4) with X-Pack.

Should it be something like

input {
  file {
    path => "/my/log/file.log"
    type => "syslog"
  }
}

filter {
  # ???
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
}

What is your question here @Kevin_Csuka :cake: :slight_smile:

Sorry, I pushed the tab key and then pressed space, but it saved my message instead of inserting an actual tab...
I edited the first post. Thanks.

@JKhondhu

@Kevin_Csuka
Have a look here with regard to getting started with grok filters for syslog:
https://www.elastic.co/guide/en/logstash/current/config-examples.html
https://www.elastic.co/guide/en/kibana/6.2/xpack-grokdebugger.html

No new index is created ....

➜  logstash cat /etc/logstash/conf.d/test.conf
input {
  file {
    path => "/var/log/test.log"
  }
}

filter {
  grok {
    match => { "message" => "%{NUMBER:timestamp} %{NUMBER:channel} %{MAC:client_mac} %{MAC:mac} %{INT:rssi}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "all"
    password => "x"
  }
  stdout { codec => rubydebug }
}
➜  logstash cat /var/log/test.log
1524708231.794259376  8  c0:4a:00:40:AA:AA  c0:4a:00:40:AA:AA   -45
1524708231.829154447  6  3c:08:f6:e6:AA:AA  3c:08:f6:e6:AA:AA   -71 
➜  logstash 

Log file of logstash: https://pastebin.com/rs30KkJE

@JKhondhu

When there is a single space between the fields in the message you are looking to ingest:

bin/logstash -e 'input { stdin{} } filter { grok { match => { "message" => "%{NUMBER:timestamp} %{NUMBER:channel} %{MAC:client_mac} %{MAC:mac} %{INT:rssi}" } }} output { stdout { codec => rubydebug }}'

1524708231.794259376 8 c0:4a:00:40:AA:AA c0:4a:00:40:AA:AA -45

It works:

[2018-04-26T11:10:00,812][DEBUG][logstash.pipeline        ] output received {"event"=>{"client_mac"=>"c0:4a:00:40:AA:AA", "timestamp"=>"1524708231.794259376", "channel"=>"8", "message"=>"1524708231.794259376 8 c0:4a:00:40:AA:AA c0:4a:00:40:AA:AA -45", "mac"=>"c0:4a:00:40:AA:AA", "@version"=>"1", "@timestamp"=>2018-04-26T10:10:00.687Z, "host"=>"khondhu", "rssi"=>"-45"}}
{
    "client_mac" => "c0:4a:00:40:AA:AA",
     "timestamp" => "1524708231.794259376",
       "channel" => "8",
       "message" => "1524708231.794259376 8 c0:4a:00:40:AA:AA c0:4a:00:40:AA:AA -45",
           "mac" => "c0:4a:00:40:AA:AA",
      "@version" => "1",
    "@timestamp" => 2018-04-26T10:10:00.687Z,
          "host" => "khondhu",
          "rssi" => "-45"
}

Perhaps we need a grok pattern that allows for multiple spaces between the fields.
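One way to handle the variable whitespace (a sketch, not tested against your exact logs) is to put `%{SPACE}` between the fields instead of literal single spaces, since the stock `SPACE` pattern matches any run of whitespace:

```
filter {
  grok {
    # %{SPACE} expands to \s*, so one or many spaces between fields both match
    match => { "message" => "%{NUMBER:timestamp}%{SPACE}%{NUMBER:channel}%{SPACE}%{MAC:client_mac}%{SPACE}%{MAC:mac}%{SPACE}%{INT:rssi}" }
  }
}
```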

@JKhondhu

Seems like the data is being picked up.
However, I see that rssi is indexed as a string, even though I matched it with %{INT}.

For example, I want to be able to create pie charts of the most-used channels. Right now that's not possible.

Use a mutate filter to convert it?
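Alternatively, grok can cast captures inline by appending `:int` to the semantic name, which avoids a separate mutate filter (a sketch based on the pattern above; only `int` and `float` casts are supported):

```
filter {
  grok {
    # the trailing :int casts the capture to an integer at match time
    match => { "message" => "%{NUMBER:timestamp} %{NUMBER:channel:int} %{MAC:mac} %{INT:rssi:int}" }
  }
}
```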

Final logstash config:

➜  logstash cat /etc/logstash/conf.d/test.conf 
input {
  file {
    path => "/var/log/test.log"
  }
}

filter {
  grok {
    match => { "message" => "%{NUMBER:timestamp} %{NUMBER:channel} %{MAC:mac} %{INT:rssi}" }
  }

  date {
    match => [ "timestamp", "UNIX" ]
    locale => "en"
  }

  mutate {
    convert => {
      "channel" => "integer"
      "rssi" => "integer"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "all"
    password => "x"
  }
  stdout { codec => rubydebug }
}

Raw data:

1524671826.853290121 10 64:70:02:cb:9d:44 -51
1524671826.854767295 10 66:70:02:cb:9d:44 -51
1524671826.859872377 10 64:70:02:cb:9d:44 -51

Also note: once Logstash is started, I have to manually append an extra line to the raw data file and save it, otherwise Logstash doesn't pick up the file. I don't know why.
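That behaviour is most likely the file input's tailing mode: by default it starts reading at the end of the file and remembers its position in a sincedb file, so pre-existing lines are never read until something new is appended. A sketch of an input that reads the file from the start (the `sincedb_path => "/dev/null"` trick is for testing only, since it makes Logstash forget its position and re-read the whole file on every restart):

```
input {
  file {
    path => "/var/log/test.log"
    # read existing content instead of only tailing new lines
    start_position => "beginning"
    # discard the recorded read position (testing only, not production)
    sincedb_path => "/dev/null"
  }
}
```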

Thanks.

@Jenni

How can I make MAC addresses searchable and visualize them with Kibana?
The OUI filter isn't applicable to my version.

I think that depends on the searches and visualizations you plan to do. You'll have to configure your Elasticsearch mapping with the right field types and analyzers to match your requirements.
If you want to do aggregations based on the MAC address, you'll need a 'keyword' mapping. For text searches this might help: https://stackoverflow.com/questions/17839149/elasticsearch-mac-address-search-mapping
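For the aggregation case, a minimal sketch of such a mapping (the index name, type name, and field name are assumptions; adjust them to your own index, and note that 6.x indices still require a mapping type such as `doc`):

```
PUT my-index
{
  "mappings": {
    "doc": {
      "properties": {
        "mac": { "type": "keyword" }
      }
    }
  }
}
```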

A really simple visualization, only the total count.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.