For starters, you could take Kibana out of the equation and test whether the data is in Elasticsearch. To do this, check whether the logstash- indices you expect are actually present in Elasticsearch by running:
curl http://localhost:9200/_cat/indices
If the indices you expect to see show up, run a search on them to check whether they contain the expected data:
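For example, such a search could look like this (the index name logstash-2015.10.21 is just an illustration; substitute whatever _cat/indices actually reported):

```
# Fetch one document from the index to confirm data is arriving.
# The index name here is an example; use one listed by _cat/indices.
curl 'http://localhost:9200/logstash-2015.10.21/_search?size=1&pretty'
```

If this returns hits, the data made it to Elasticsearch and the problem is on the Kibana side; if it returns nothing, look at the shipping side.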
Apparently the indices I'm waiting for are not there:

administrator@ELKibana4:~$ curl http://localhost:9200/_cat/indices
yellow open .kibana 1 1 104 1 101kb 101kb
yellow open blog    5 1   1 0 3.6kb 3.6kb
Okay, so then something's not correct on the shipping side as the documents don't even seem to be getting to Elasticsearch. I'm moving this to the Logstash category as that is more appropriate at this point.
You can't use the lumberjack input to receive data over the syslog protocol. Use a syslog, udp, or tcp input (depending on how the sender sends the data).
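As a minimal sketch, a syslog input could be configured like this (port 5514 is an arbitrary unprivileged example port; adjust to match what the sender targets):

```
input {
  # The syslog input listens for RFC3164-style messages.
  # 5514 is an example port chosen to stay above 1024.
  syslog {
    port => 5514
  }
}
```

If the sender emits raw lines rather than proper syslog messages, a plain tcp or udp input on the same port is the safer choice.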
Unless you run Logstash as root or take other special measures you won't be able to listen on a port below 1024.
Could you explain how to receive logs on a port below 1024 without running Logstash as root?
You can use iptables to redirect the port, and you may be able to adjust the process's capabilities (though I recall that being problematic with the JVM). I don't have any details, as I've never done it myself.
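As a sketch, the iptables redirect could look like this (assuming senders target UDP port 514 and Logstash listens on 5514; both port numbers are examples):

```
# Redirect inbound UDP 514 to 5514, where the unprivileged Logstash
# process is listening. Setting the rule requires root, but Logstash
# itself can then run as a normal user.
iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-port 5514
```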
I've set this up, but I still don't have any indices:
Be systematic. Forget about Kibana and ES for now. Comment out the elasticsearch output and focus on the stdout output. Does that make a difference? Does anything happen if you send stuff to the listening port with telnet or netcat?
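As a sketch of that debugging setup (the rubydebug codec and port 5514 are just examples):

```
output {
  # Temporarily disable the elasticsearch output and print every
  # event to the console instead:
  # elasticsearch { ... }
  stdout {
    codec => rubydebug
  }
}
```

Then send a test line to the listening port, e.g. `echo test | nc localhost 5514`, and watch whether the event shows up on stdout. If it does, Logstash's input side works and the problem is in the elasticsearch output; if not, the sender or input configuration is at fault.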