Hi, I am trying to get my head around the ELK stack "thing" but I'm being defeated!
I have installed the latest v5.x components (Elasticsearch, Logstash, Kibana) on CentOS 7.
Now, I have tonnes of 100 MB files being FTP'd to the server, which I want Logstash to index into Elasticsearch, so I can browse/search/analyze in Kibana.
Specifically, they come from a content filter (an IronPort appliance, hence the type below), so they're web access logs, but the data is pretty much CSV format from what I can tell.
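To give a flavour, a row looks something like this (a made-up example rather than a real line, since I can't paste the actual logs here):

1490015250.724,192.0.2.10,GET,http://www.example.com/index.html,ALLOW

i.e. roughly timestamp, client IP, method, URL, verdict.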
I have created a Logstash conf file to index one of the many files, for testing:
input {
  # Read the single test file from the FTP landing directory
  file {
    path => "/data/incomingdata/accesslogs.@20170320T130730.s"
    type => "Ironport"
  }
}

filter {
  # Parse each line into comma-delimited columns
  csv {
    columns => ["col1","col2","col3"]
    separator => ","
  }
  # Throw away lines whose raw message starts with '#'
  if [message] =~ /^#/ {
    drop {}
  }
}

output {
  # Send parsed events to the local Elasticsearch instance
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "idx_accesslogs"
  }
  # Also echo events to the console for debugging
  stdout { codec => rubydebug }
}
So, as I understand it, this reads the file in, parses the first three comma-delimited columns, and skips any lines beginning with '#'.
Am I understanding this right?
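In case it matters, here's how I was planning to sanity-check the filter part on its own, by piping test lines in over stdin instead of reading the file. This is just a throwaway sketch; test-stdin.conf is a name I made up, and the bin path assumes the standard RPM layout under /usr/share/logstash:

# test-stdin.conf -- same filter as above, but stdin in and stdout out only
input { stdin {} }
filter {
  csv {
    columns => ["col1","col2","col3"]
    separator => ","
  }
  if [message] =~ /^#/ {
    drop {}
  }
}
output { stdout { codec => rubydebug } }

Then:

printf '#some comment\none,two,three\n' | /usr/share/logstash/bin/logstash -f test-stdin.conf

If I've read the docs right, the '#' line should be dropped and the second line should come out as an event with col1/col2/col3 fields.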
My problem is that Logstash restarts successfully, but no index gets created:
[root@svr-h003349 logstash]# curl 'localhost:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana 9gCH5oUTS4W3RBSs_b9TlQ 1 1 1 0 3.1kb 3.1kb
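In case it's useful, here's what I was planning to try next (paths are my guess at where the RPM puts things; my conf file is conf.d/wsa-h002606.conf). First, check the config actually parses:

[root@svr-h003349 logstash]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/wsa-h002606.conf --config.test_and_exit

Second, I spotted in the file input docs that it defaults to tailing (start_position => "end"), so it may only pick up lines appended after Logstash starts, and it remembers its position in a sincedb file between runs. So I was going to try this variant of the input for testing and run Logstash in the foreground to watch the rubydebug output:

input {
  file {
    path => "/data/incomingdata/accesslogs.@20170320T130730.s"
    type => "Ironport"
    start_position => "beginning"   # read the existing file from the top, not just new lines
    sincedb_path => "/dev/null"     # don't remember read positions between test runs
  }
}

Does that sound like it could be the cause, or am I off track?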
I'm a total newbie with ELK, so I need some hand-holding! How can I proceed, please?
Thank you!
Best Regards,
Elliot