Currently I have my syslog-ng --> logstash --> elasticsearch1 &
elasticsearch2 setup working pretty well. It's accepting over 300 events per
second and hasn't bogged the systems down at all. However, I'm running into
two issues that I don't quite understand.
When viewing the information in Kibana, the "all events" view appears to be
anywhere from 15 minutes to an hour behind. Sometimes when I search for new
logs they show up correctly, but overall Elasticsearch seems to be lagging
behind what Logstash is sending it. That being the case, I'm concerned that
logs are being dropped without my knowing it. Are there any commands I can
use to validate this, or anything I can do to make sure Elasticsearch/Kibana
is keeping up?
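(One way to sanity-check whether Elasticsearch is keeping up is to poll
cluster health and the stats for the current Logstash index and watch the
document counts climb between runs. A minimal sketch, assuming ES listens on
the default localhost:9200 and Logstash writes to the default daily
logstash-YYYY.MM.DD indices:

    # overall cluster status, node count, and any unassigned shards
    curl -s 'http://localhost:9200/_cluster/health?pretty'

    # stats for today's index - the indexing section shows how many docs
    # have been indexed so far; run it twice and compare the totals
    curl -s 'http://localhost:9200/logstash-2013.12.20/_stats?pretty'

    # raw document count for the same index
    curl -s 'http://localhost:9200/logstash-2013.12.20/_count?pretty'

If the counts keep rising while Kibana lags, the data is arriving and the
delay is more likely a query/refresh issue than dropped events.)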
I've had to restart Elasticsearch a few times, and every time I do, it
completely breaks things. Once it starts back up it doesn't continue to
show the logs in Kibana correctly, and when I run a health check it says
there are unassigned shards. I've not been able to fix this, and in the past
I've always just had to delete them and start from scratch again.
Any idea what is going on with this, or how I can more cleanly restart or
reboot the servers and recover from it?
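(To see where the unassigned shards actually are, the health and state APIs
can be queried per index. Again just a sketch, assuming the default port:

    # per-index health - shows which indices own the unassigned shards
    curl -s 'http://localhost:9200/_cluster/health?level=indices&pretty'

    # full cluster state, including the routing table with each shard's
    # current state (STARTED, INITIALIZING, UNASSIGNED, ...)
    curl -s 'http://localhost:9200/_cluster/state?pretty'

That at least narrows down whether the unassigned shards are old indices or
the ones currently being written to.)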
I think I've made my situation even worse. I tried deleting the shards and
starting over, and now Elasticsearch isn't even creating the
/etc/elasticsearch/data/my-cluster/node folder.
Did you install ES via an rpm/deb or using the zip? I ask because your data
store directory is custom.
Check out these plugins for monitoring - elastichq, kopf, bigdesk. They
will give you an overview of your cluster and might give you insight into
where your problem lies. The other best place to check is the ES logs.
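(For reference, site plugins like these are normally installed with the
bin/plugin script and then served straight off the node. Sketch only - the
GitHub coordinates below are from memory and the path assumes the rpm/deb
layout, so double-check them:

    /usr/share/elasticsearch/bin/plugin -install royrusso/elasticsearch-HQ
    /usr/share/elasticsearch/bin/plugin -install lmenezes/elasticsearch-kopf
    /usr/share/elasticsearch/bin/plugin -install lukas-vlcek/bigdesk

Each should then show up in a browser under http://<node>:9200/_plugin/HQ/,
/_plugin/kopf/ and /_plugin/bigdesk/ respectively.)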
I used the rpm install. I'll take a look at the plugins. Thanks.
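(With the rpm package the defaults are config in
/etc/elasticsearch/elasticsearch.yml, data in /var/lib/elasticsearch and logs
in /var/log/elasticsearch, which is probably why the data directory under
/etc/elasticsearch/data looked unusual. An illustrative elasticsearch.yml
fragment - the values here are examples, not your actual settings:

    cluster.name: my-cluster
    path.data: /var/lib/elasticsearch    # rpm default; a custom path is fine, but better kept out of /etc
    path.logs: /var/log/elasticsearch
)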
I got the initial issue fixed and I'm getting data back again. However, I
still don't understand how to fix the unassigned shards issue or how to
properly restart Elasticsearch without it complaining.
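(A common approach to a cleaner restart is to tell the cluster not to
reallocate shards while a node is down, restart the node, then turn
allocation back on. A rough sketch, assuming a 0.90.x cluster and the rpm
service script; on 1.0+ the setting is cluster.routing.allocation.enable
instead:

    # 1. stop shard reallocation
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.disable_allocation": true }
    }'

    # 2. restart the node
    sudo service elasticsearch restart

    # 3. re-enable allocation once the node has rejoined the cluster
    curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
      "transient": { "cluster.routing.allocation.disable_allocation": false }
    }'

    # 4. watch recovery until the status goes back to green
    curl -s 'http://localhost:9200/_cluster/health?pretty'
)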