Those are my statistics. When I tcpdump port 5044 I see traffic coming from the host where I have Winlogbeat running, and when I tcpdump port 9200 on the server I see a lot of traffic. So I suppose data is reaching Elasticsearch. I also see the number of documents increasing on the dashboard, but I cannot get Kibana to show me any actual event logs. What am I missing?
At the Dashboard it says I need an Index Pattern. I installed a "Custom Windows Event Logs" integration, but I am not really sure what that is.
Go to your Kibana UI --> Stack Management --> Kibana --> Index Patterns.
This is where you need to create an index pattern for the indices coming from Winlogbeat.
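With a default Winlogbeat setup the indices are named with a `winlogbeat-` prefix (plus a version/date suffix), so the pattern to enter is usually:

```
winlogbeat-*
```

(This assumes you kept the default index name. If you overrode `index` in your output configuration, the pattern has to match that custom name instead.)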
When I do so, it says "Ready to try Kibana? First you need data." and asks me to add an integration. Somehow the data is not recognized, or I need to configure something differently, but I don't know what to look for.
Usually the log integration flow is:
Filebeat/Winlogbeat --> Logstash --> Elasticsearch --> Kibana
Make sure your Beat is running as an instance on every node. It pushes the logs to the Logstash collector, then to the Logstash indexer, on towards Elasticsearch, and finally to Kibana where you view the logs.
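For reference, the piece of winlogbeat.yml that wires up the first hop of that flow is the Logstash output section. A minimal sketch, assuming Logstash listens on the default port 5044 (the hostname here is a placeholder, replace it with yours):

```yaml
# winlogbeat.yml -- output section (hypothetical host, default Beats port 5044)
output.logstash:
  hosts: ["logstash.example.local:5044"]

# Beats allow only ONE output at a time, so the Elasticsearch output
# must stay disabled when shipping via Logstash:
#output.elasticsearch:
#  hosts: ["localhost:9200"]
```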
Okay, now I see some indices. However, these look like default ones. There is nothing that seems to be related to the winlogbeat data I want to process.
In /etc/logstash/conf.d/30-elasticsearch-output.conf I have defined
index => "testindex"
but in Kibana I see that the index has not been created.
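For comparison, here is a minimal sketch of what that output file typically looks like, assuming Elasticsearch on localhost:9200 and your `testindex` name (adjust the hosts to your setup):

```
# /etc/logstash/conf.d/30-elasticsearch-output.conf -- minimal sketch
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "testindex"
  }
}
```

Note that the index is only created once the first event actually reaches this output, so no traffic on 9200 also means no index.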
When running "tcpdump dst port 9200" I no longer see any traffic.
I actually wanted to check the API port. Anyhow, I think I got closer to the issue: a tcpdump on port 5044 shows only keepalive packets, but no log data being sent. In my Winlogbeat config I have set Logstash on port 5044 as my output target.
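If only keepalives show up on 5044, it is also worth double-checking that Logstash actually has a Beats input listening on that port. Something like the following (a sketch; the file name is hypothetical and may differ in your setup):

```
# e.g. /etc/logstash/conf.d/02-beats-input.conf -- hypothetical filename
input {
  beats {
    port => 5044
  }
}
```

On the Windows host, `winlogbeat test output` should also report whether the connection to Logstash can be established.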
Please check the logs for Winlogbeat and Logstash. Also, is there any kind of Redis running that might be caching the logs? (just out of curiosity)
Also, if you feel your configurations are fine, you can restart all the nodes, just to check whether you receive any new logs or not.
Actually, I think my configs were right, even before the weekend. I restarted all nodes as you suggested and now I can access the data. Thank you very much for your support!