I am absolutely new to the ELK stack, but as described in my other thread I have had an ELK stack running since yesterday, and I am absolutely amazed by what is possible with it.
Since yesterday I have also been sending my TMG access log files (W3C format) via Filebeat to Logstash, and they are now visible in Kibana.
I have already heard and read about how to grok such log files, but I don't know how to start. I made one attempt yesterday, but nothing happened.
My log looks like this:
10.10.10.10 domain\user Microsoft Office/15.0 (Windows NT 6.3; Microsoft Outlook 15.0.4849; Pro) 2016-09-21 23:51:45 TMG - subdomain.domain.com 172.16.0.1 443 47 883 4446 https POST http://subdomain.domain.com/autodiscover/autodiscover.xml text/xml; charset=utf-8 Inet 200 Outlook Anywhere Req ID: 0a61611c; Compression: client=No, server=No, compress rate=0% decompress rate=0% ; FBA cookie: exists=no, valid=no, updated=yes, logged off=no, client type=unknown, user activity=yes Perimeter Local Host 0x600 Allowed - Allowed - - - - - - 0 - 0 - 172.16.100.1 - Web Proxy subdomain.domain.com 41461 -
Could you explain to me how it works?
I would also like to know how to install the GeoIP plugin and how to use it.
10.10.10.10 domain\user Mozilla/5.0 (iPad; CPU OS 9_3_5 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13G36 Safari/601.1 2016-09-28 06:39:10 servername - sub.domain.local
And this is my logstash conf:
input {
  eventlog {
    type => 'Win32-EventLog'
    logfile => ["Application", "Security", "System"]
  }
  syslog {
    type => "syslog"
    port => 514
  }
}
I can't see that you're defining the timestamp field anywhere. Instead of capturing the individual timestamp pieces you should be able to use the TIMESTAMP_ISO8601 pattern and capture the whole timestamp into the timestamp field.
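For example, something along these lines (just a sketch of the timestamp part; grok doesn't anchor the pattern, so it will find the timestamp inside the message, and the rest of your expression stays as it is):

filter {
  grok {
    # capture the whole ISO8601 timestamp into a single field
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}" }
  }
}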
The remove_field option accepts an array of field names, not a string with comma-separated field names. Hence, change to remove_field => ["year", "monthnum", ...].
Unless you're running Logstash 1.x (and why would you do that?) the elasticsearch output needs to be modified to remove the protocol option and to use hosts instead of host.
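In other words, something like this (assuming Elasticsearch is running on the same machine on the default port):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}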
I have also added
beats {
  port => 5044
}
to the logstash.conf - is that needed?
Are you going to use one of the Beats programs to send data to port 5044 on the Logstash server?
When you add that configuration, is Logstash actually listening on port 5044? Can you connect from the Filebeat machine via telnet? Is there a firewall on the Logstash machine that might be blocking access? Is there a firewall on the Filebeat machine that might be blocking access?
No, it's not listening so I can't connect by telnet. Connection refused.
I also tried to connect from another machine via telnet - same problem.
Firewall is turned off.
I have the feeling that something in my logstash.conf or the beats plugin is not OK, because when I have both the syslog and the beats plugin configured, syslog stops working as well.
As soon as I delete the beats plugin again, syslog works again.
With only beats as the input, port 5044 is still not opened.
Syslog on port 514 is immediately open when I save the conf and restart services.
I had a look in the Windows Event Viewer: when the beats plugin is added to logstash.conf, the service is killed every 20 seconds and then restarts, over and over. When I delete it and put in only the syslog plugin, it works and does not keep restarting.
Show us what the events look like when they leave Logstash. Either use a stdout { codec => rubydebug } output or copy/paste from the event's JSON tab in Kibana. No screenshots.
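For example, something like this in your output section, which prints each event to the console in a readable form:

output {
  stdout { codec => rubydebug }
}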
How can I see whether Logstash generates an error message?
Read the Logstash log. Its location is given by the -l (or --log) Logstash startup option. If absent, Logstash will log to stderr (or is it stdout?).
Shame on me... While changing the config and starting Logstash on the command line instead of as the installed service, I found the problem...
I followed a tutorial to install the ELK stack that does not use the default config (logstash.conf) but another one in the bin folder, logstash.json, which the authors of the tutorial supplied.
The service was started directly with that config, so none of my changes were ever applied.
You showed me the right way... thank you!
Now the filter is working and all the fields I need are separated. Only the timestamp is not replaced as it should be.
Do you have any idea what to correct to get this working?
Your date filter isn't working because you have no timestamp field to parse. The TIMESTAMP_ISO8601 pattern should work for you, and then you capture the timestamp into a single field that you can feed to the date filter.
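A sketch of what I mean, assuming the grok filter has captured the value into a field named timestamp:

date {
  match => ["timestamp", "ISO8601"]
  remove_field => ["timestamp"]
}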
Oh, you have a tab between the date and the time. In that case you can capture each date and time component into a field of its own (%{YEAR:year} etc) and merge those fields into a single field with add_field:
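Something along these lines (a sketch showing only the timestamp-related part; the tab is matched with \t, and the field names are just examples):

filter {
  grok {
    # date and time are separated by a tab in the TMG log
    match => {
      "message" => "%{YEAR:year}-%{MONTHNUM:monthnum}-%{MONTHDAY:monthday}\t%{HOUR:hour}:%{MINUTE:minute}:%{SECOND:second}"
    }
  }
  mutate {
    # merge the pieces into a single timestamp field
    add_field => {
      "timestamp" => "%{year}-%{monthnum}-%{monthday} %{hour}:%{minute}:%{second}"
    }
  }
  date {
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
    remove_field => ["timestamp", "year", "monthnum", "monthday", "hour", "minute", "second"]
  }
}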
You didn't follow this part of my previous advice: "In that case you can capture each date and time component into a field of its own (%{YEAR:year} etc)".