Beginner's help: grok and checking if data is making it into Elasticsearch

Hi, I have just started to play with the Elastic Stack for the first time. My objective is to create a simple-to-use system for firewall rule log aggregation and searching.

I have followed the basic guide to set up Logstash, Elasticsearch (with X-Pack), and Kibana (with X-Pack), all on the same host. I have been using http://grokdebug.herokuapp.com/ to develop my grok parser/filter (whatever you call it), but I am failing to make the system work end to end, and I can't even verify whether my Logstash configuration is working correctly.

This is currently what my only Logstash config file looks like:

input {
  tcp {
    port => "55514"
    type => "syslog-F5"
  }
  udp {
    port => "55514"
    type => "syslog-F5"
  }
}
filter {
  if [type] =~ "syslog-F5" {
    grok {
      match => { "message" => "^%{TIMESTAMP_ISO8601:SYSLOG_TIME} %{IPV4:DEVICEIP} %{HOSTNAME:HOSTNAME}|%{UNIXPATH:CONTEXT_NAME}|(?<CONTEXT_TYPE>[a-zA-Z_ ]*)|%{POSINT:RD}|%{UNIXPATH:ACL_NAME}|%{WORD:INUSE}|%{UNIXPATH:RULENAME}|%{WORD:ACTION}|((?<DROP_REASON>[a-zA-Z_ ]+))?|%{IPV4:SRC_IP}|%{NUMBER:SRC_PORT}|%{IPV4:DST_IP}|%{NUMBER:DST_PORT}|%{WORD:PROTOCOL}|%{UNIXPATH:VLANID}|%{GREEDYDATA:RAW_DATE}" }
    }
    date {
      match => [ "RAW_DATE", "MMM dd yyyy HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => localhost
    user => elastic
    password => changeme
  }
  file {
    codec => plain
    path => "/mnt/data/logstash/test.log"
  }
}
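To at least confirm that a config file like this parses before starting the pipeline, something like the following can be run (a sketch; the install and config paths are assumptions for a typical package install):

```
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/f5.conf --config.test_and_exit
```

This only validates the syntax; it won't tell you whether the grok pattern actually matches your events.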

Here is an example log line taken from the output log file:

2017-11-01T14:57:00.536Z 10.107.20.10 test2.mydomain.com.location|/Common/N82-Prod-Prod|Route Domain|1|/Comm-Prod|Enforced|/Common/Prod-Prod-Outside:Users_to_Prod_VRF-ip|Accept||10.200.5.5|16121|10.101.40.2|1967|UDP|/Common/VL_Prod-Prod-Outside_F5|Nov 01 2017 14:56:28

That log line is decoded fine by my grok pattern on http://grokdebug.herokuapp.com/, but after that is where my confusion begins.

  1. With my file output setting, shouldn't I be seeing the key/value pairs in it? Currently I only see unprocessed messages, looking just as they do coming from syslog. If that's expected, how do I check whether my grok filter is working?

  2. How do I check whether data is making it from Logstash into Elasticsearch? I am not seeing any data in Kibana and get the "can't find index pattern logstash-*" error.

My Logstash log looks like:

[2017-11-02T01:54:09,473][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[2017-11-02T01:54:09,474][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@localhost:9200/, :path=>"/"}
[2017-11-02T01:54:09,652][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[2017-11-02T01:54:09,675][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-11-02T01:54:09,680][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>["//localhost"]}
[2017-11-02T01:54:09,708][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-11-02T01:55:05,938][INFO ][logstash.pipeline ] Pipeline main started
[2017-11-02T01:55:05,939][INFO ][logstash.inputs.udp ] Starting UDP listener {:address=>"0.0.0.0:55514"}
[2017-11-02T01:55:05,947][INFO ][logstash.inputs.udp ] UDP listener started {:address=>"0.0.0.0:55514", :receive_buffer_bytes=>"62464", :queue_size=>"2000"}
[2017-11-02T01:55:05,957][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2017-11-02T01:55:06,183][INFO ][logstash.outputs.file ] Opening file {:path=>"/mnt/data/logstash/test.json"}

And my Elasticsearch log looks like:

[2017-11-02T00:47:35,820][INFO ][o.e.c.m.MetaDataMappingService] [SSnuhdW] [logstash-2017.11.01/i8SUUAl3TDOKGJCtYBZN0w] create_mapping [syslog-F5]
[2017-11-02T01:12:38,416][INFO ][o.e.c.m.MetaDataMappingService] [SSnuhdW] [logstash-2017.11.01/i8SUUAl3TDOKGJCtYBZN0w] update_mapping [syslog-F5]
[2017-11-02T01:30:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] triggering scheduled [ML] maintenance tasks
[2017-11-02T01:30:00,002][INFO ][o.e.x.m.a.DeleteExpiredDataAction$TransportAction] [SSnuhdW] Deleting expired data
[2017-11-02T11:00:00,262][INFO ][o.e.c.m.MetaDataCreateIndexService] [SSnuhdW] [logstash-2017.11.02] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [default]
[2017-11-02T11:00:00,310][INFO ][o.e.c.m.MetaDataMappingService] [SSnuhdW] [logstash-2017.11.02/OV1XNcv4TH2AARaFN3lZFw] create_mapping [syslog-F5]
[2017-11-02T11:00:05,587][INFO ][o.e.c.m.MetaDataCreateIndexService] [SSnuhdW] [.monitoring-es-6-2017.11.02] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2017-11-02T11:00:08,758][INFO ][o.e.c.m.MetaDataCreateIndexService] [SSnuhdW] [.monitoring-kibana-6-2017.11.02] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[1], mappings [doc]
[2017-11-02T11:00:46,128][INFO ][o.e.c.m.MetaDataCreateIndexService] [SSnuhdW] [.watcher-history-6-2017.11.02] creating index, cause [auto(bulk api)], templates [.watch-history-6], shards [1]/[1], mappings [doc]
[2017-11-02T11:00:46,156][INFO ][o.e.c.m.MetaDataMappingService] [SSnuhdW] [.watcher-history-6-2017.11.02/sWXegmLzTh2o-mPXpck2mA] update_mapping [doc]
[2017-11-02T11:00:46,199][INFO ][o.e.c.m.MetaDataMappingService] [SSnuhdW] [.watcher-history-6-2017.11.02/sWXegmLzTh2o-mPXpck2mA] update_mapping [doc]

I would appreciate any help :slight_smile:

With my file output setting, shouldn't I be seeing the key/value pairs in it?

Not with the plain codec that you've configured for your file output. To debug inputs and filters, I always recommend using the rubydebug codec.
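For example, swapping the codec in the file output (same path as in your config) would look like this:

```
output {
  file {
    codec => rubydebug
    path => "/mnt/data/logstash/test.log"
  }
}
```

Each event is then written as a pretty-printed hash, so you can see at a glance whether the grok fields were extracted or whether the event was tagged with _grokparsefailure.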

How do I check whether data is making it from Logstash into Elasticsearch? I am not seeing any data in Kibana and get the "can't find index pattern logstash-*" error.

Your ES logs prove that you're getting data. Perhaps it's a permissions issue, i.e. the user you're logged into Kibana as doesn't have permission to read the logstash-2017.11.02 index?
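One quick way to test that theory (the credentials below are placeholders) is to query the index directly as the same user you're logged into Kibana as:

```
curl -u someuser:somepassword 'http://localhost:9200/logstash-*/_search?size=1&pretty'
```

A security exception here, but a hit when run with superuser credentials, would confirm it's a permissions problem rather than missing data.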

Thanks for getting back to me.

On the first point, my build suffers a JRuby failure when I set the rubydebug codec, so I can't use it. Also, interestingly, the data displayed in that log has additional data prepended compared to the actual payload, so I was getting grok failures and simply had to remove the first two pattern matches.

On the second point, you are 100% right; this was my issue. I was logged into Kibana as the kibana user, and I had to log in as the elastic user.

Issuing curl -XGET 'http://elastic:changeme@localhost:9200/_cat/indices' | grep log
showed that the data was getting into the Elastic Stack and the document counts were growing.

I was able to customize the Elasticsearch index template easily enough following:

https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
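For example (a sketch for ES 6.x; the template name and field choices here are my own, based on the grok fields above), a higher-order template can override the mapping for specific fields:

```
curl -XPUT -H 'Content-Type: application/json' \
  'http://elastic:changeme@localhost:9200/_template/logstash-f5' -d '
{
  "index_patterns": ["logstash-*"],
  "order": 1,
  "mappings": {
    "syslog-F5": {
      "properties": {
        "SRC_IP": { "type": "ip" },
        "DST_IP": { "type": "ip" }
      }
    }
  }
}'
```

Mapping the address fields as type ip (instead of the default text/keyword) is what makes range-style queries on them behave sensibly.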

So it's all looking pretty good. I know this next question is a little off topic, but I haven't been able to find a current answer: do CIDR-based lookups work in Kibana? I have only been able to use a range-style query so far.

On the first point, my build suffers a JRuby failure when I set the rubydebug codec, so I can't use it.

I've never heard of that before.

Do CIDR based lookups work in Kibana?

I don't believe so, no.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.