Elasticsearch issue with some indices not populating data

I am new to Elasticsearch and have a problem. I have 5 indices. At first
all of them were running without issue. However, over the last 2 weeks,
all but one have stopped generating data. I have run a tcpdump on the
logstash server and confirmed that logging packets are getting to the
server. I have also looked into the server's health. I issued the
following to check on the cluster:

root@logstash:/# curl -XGET 'localhost:9200/_cluster/health?pretty=true'
{
"cluster_name" : "es-logstash",
"status" : "yellow",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 2791,
"active_shards" : 2791,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 2791
}
root@logstash:/#

Can someone please point me in the right direction on troubleshooting this?
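A quick way to see which of the five indices are still receiving documents (assuming Elasticsearch answers on localhost:9200, as in the health call above) is the _cat API:

root@logstash:/# curl -XGET 'localhost:9200/_cat/indices/logstash-*?v'

The docs.count column, compared day over day, shows which daily indices have gone quiet, and whether today's indices were created at all.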


From an Elasticsearch point of view, I don't see anything wrong.
You have way too many shards for sure, so you might hit OOM exceptions or other trouble.

So to answer your question, check your Elasticsearch logs, and if nothing looks wrong, check Logstash.

Just adding that Elasticsearch does not generate data, so you probably meant that Logstash stopped generating data, right?

HTH

David
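To put a rough number on "way too many shards" (illustrative math, not from the thread): with Logstash's default of 5 primary shards plus 1 replica per daily index, five index families create roughly 50 shards a day, which reaches the ~2,800 primaries shown above in a few months on a single node. A template along these lines (the template name is just an example) keeps newly created daily indices much lighter:

root@logstash:/# curl -XPUT 'localhost:9200/_template/logstash-lowshard' -d '
{
  "template" : "logstash-*",
  "settings" : {
    "number_of_shards" : 1,
    "number_of_replicas" : 0
  }
}'

Existing indices keep their current shard count; only indices created after the template is installed pick up the new settings.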


Thanks for taking the time to answer, David.

Again, I've got my training wheels on with an ELK stack, so I will do my best to answer.

Here is an example. The one index that is working has a fresh directory with today's date in the elasticsearch data directory. The ones that are not working do not have a directory.

Logstash and Elasticsearch are running, with the logs not generating much information as far as pointing to any error:

log4j, [2015-04-19T13:41:44.723] WARN: org.elasticsearch.transport.netty: [logstash-logstash-3170-2032] Message not fully read (request) for [2] and action [internal:discovery/zen/unicast_gte_1_4], resetting
log4j, [2015-04-19T13:41:49.569] WARN: org.elasticsearch.transport.netty: [logstash-logstash-3170-2032] Message not fully read (request) for [5] and action [internal:discovery/zen/unicast_gte_1_4], resetting
log4j, [2015-04-19T13:41:54.572] WARN: org.elasticsearch.transport.netty: [logstash-logstash-3170-2032] Message not fully read (request) for [10] and action [internal:discovery/zen/unicast_gte_1_4], resetting

Don Pich | Jedi Master (aka System Administrator 2)


Are you using the same exact JVM version?
Where do those logs come from? Logstash? Elasticsearch?

Could you try the same with a clean Elasticsearch, I mean with no data?
My suspicion is that you have too many shards allocated on a single (tiny?) node.

What is your node size, BTW (memory / heap size)?

David
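For reference, a few one-liners that answer those questions, assuming the node still listens on localhost:9200 as in the earlier calls:

root@logstash:/# curl -XGET 'localhost:9200/?pretty'                  # Elasticsearch version of this node
root@logstash:/# curl -XGET 'localhost:9200/_nodes/jvm?pretty'        # JVM version and arguments per node
root@logstash:/# curl -XGET 'localhost:9200/_nodes/stats/jvm?pretty'  # heap used vs. heap max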


Hello David,

I found this online, and it made my cluster go 'green':
http://blog.trifork.com/2013/10/24/how-to-avoid-the-split-brain-problem-in-elasticsearch/
I don't know for certain whether that was 100% of the problem, but there are no longer any unassigned shards.

root@logstash:/# curl -XGET 'localhost:9200/_cluster/health?pretty=true'
{
"cluster_name" : "es-logstash",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 2792,
"active_shards" : 5584,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
root@logstash:/#

However, the root of my problem still exists. I did restart the forwarders, and tcpdump does show that traffic is indeed hitting the server. But my indices folder does not contain fresh data except for one source.
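For reference, the heart of the split-brain fix in that article is the minimum_master_nodes setting; with two master-eligible nodes (the value follows the article's rule of (master-eligible nodes / 2) + 1) it goes in elasticsearch.yml on both nodes as:

discovery.zen.minimum_master_nodes: 2

It needs revisiting whenever master-eligible nodes are added or removed.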


Having unassigned shards is perfectly fine on a one-node cluster.
The fact that your cluster was yellow does not mean it was not behaving correctly.

--
David Pilato - Developer | Evangelist

@dadoonet https://twitter.com/dadoonet | @elasticsearchfr https://twitter.com/elasticsearchfr | @scrutmydocs https://twitter.com/scrutmydocs
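As a side note on the earlier single-node setup: another way to reach green without adding a node is to drop replicas on the existing indices, since a lone node can never host a replica of its own primary. A sketch, assuming all indices should go replica-free:

root@logstash:/# curl -XPUT 'localhost:9200/_settings' -d '{ "index" : { "number_of_replicas" : 0 } }'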


Thanks for that info. Again, training wheels... :-)

So below is my Logstash config. If I do a tcpdump on port 5044, I see all of my forwarders communicating with the logstash server. However, if I do a tcpdump on port 9300, I do not see any traffic. This leads me to believe that I have a problem in my output.

input
{
    lumberjack    # comes from logstash-forwarder, we sent ALL formats and types through this and control logType and logFormat on the client
    {
        # The port to listen on
        port => 5044
        host => "192.168.1.72"

        # The paths to your ssl cert and key
        ssl_certificate => "/opt/logstash-1.4.2/ssl/certs/lumberjack.crt"    # new cert needed for latest v of lumberjack-pusher
        ssl_key => "/opt/logstash-1.4.2/ssl/private/lumberjack.key"
    }

    tcp
    {
        # Remember with nxlog we're automatically converting our windows xml to JSON
        ssl_cert => "/opt/logstash-1.4.2/ssl/certs/logstash-forwarder.crt"
        ssl_key => "/opt/logstash-1.4.2/ssl/private/logstash-forwarder.key"
        ssl_enable => true
        debug => true
        type => "windowsEventLog"
        port => 3515
        codec => "line"
        add_field => { "logType" => "windowsEventLog" }
    }
    tcp
    {
        # Remember with nxlog we're automatically converting our windows xml to JSON
        # used for NFSServer which apparently cannot connect via SSL :(
        type => "windowsEventLog"
        port => 3516
        codec => "line"
        add_field => { "logType" => "windowsEventLog" }
    }
}

filter
{
    if [logFormat] == "nginxLog"
    {
        mutate { add_field => ["receivedAt","%{@timestamp}"] }    # preserve when we received this
        grok
        {
            break_on_match => false
            match => ["message","%{IP:visitor_ip}|[^|]+|%{TIMESTAMP_ISO8601:entryDateTime}|%{URIPATH:url}%{URIPARAM:query_string}?|%{INT:http_response}|%{INT:response_length}|(?<http_referrer>[^|]+)|(?<user_agent>[^|]+)|%{BASE16FLOAT:request_time}|%{BASE16FLOAT:upstream_response_time}"]
            match => ["url",".(?(?:.(?!.))+)$"]
        }
        date
        {
            match => ["entryDateTime","ISO8601"]
            remove_field => ["entryDateTime"]
        }
    }
    else if [logFormat] == "exim4"
    {
        mutate { add_field => ["receivedAt","%{@timestamp}"] }    # preserve when we received this
        grok
        {
            break_on_match => false
            match => ["message","(?[^ ]+ [^ ]+) [(?.)] (?.)"]
        }
        date
        {
            match => ["entryDateTime","YYYY-MM-dd HH:mm:ss"]
        }
    }
    else if [logFormat] == "proftpd"
    {
        grok
        {
            break_on_match => false
            match => ["message","(?[^ ]+) (?[^ ]+) (?[^ ]+) [(?.)] (?".") (?[^ ]+) (?".") (?[^ ]+)"]
            add_field => ["receivedAt","%{@timestamp}"]    # preserve now before date overwrites
        }
        date
        {
            match => ["entryDateTime","dd/MMM/YYYY:HH:mm:ss Z"]
            #target => "testDate"
        }
    }
    else if [logFormat] == "debiansyslog"
    {
        # linux sysLog
        grok
        {
            break_on_match => false
            match => ["message","(?[a-zA-Z]{3} [ 0-9]+ [^ ]+) (?[^ ]+) (?[^:]+):(?.)"]
            add_field => ["receivedAt","%{@timestamp}"]    # preserve NOW before date overwrites
        }
        date
        {
            # Mar 2 02:21:28 primaryweb-wheezy logstash-forwarder[754]: 2015/03/02 02:21:28.607445 Registrar received 348 events
            match => ["entryDateTime","MMM dd HH:mm:ss","MMM d HH:mm:ss"]    # problems with jodatime and missing leading 0 on days, we can supply multiple patterns :)
        }
    }
    else if [type] == "windowsEventLog"
    {
        json { source => "message" }    # set our source to the entire message as its JSON
        mutate
        {
            add_field => ["receivedAt","%{@timestamp}"]
        }
        if [SourceModuleName] == "eventlog"
        {
            # use the date/time of the entry and not physical time so viewing acts as expected
            date
            {
                match => ["EventTime","YYYY-MM-dd HH:mm:ss"]
            }

            # message defaults to the entire message. Since we have json data for all properties, copy the event message into it instead
            mutate
            {
                replace => [ "message", "%{Message}" ]
            }
            mutate
            {
                remove_field => [ "Message" ]
            }
        }
    }
}
output
{
    if [logType] == "webLog"
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-weblog-events-%{+YYYY.MM.dd}"
        }
    }
    else if [logType] == "mailLog"
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-mail-events-%{+YYYY.MM.dd}"
        }
    }
    else if [type] == "windowsEventLog"
    {
        #file{
        #    path => "/var/log/logstash/snarf.txt"
        #}
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-windows-events%{+YYYY.MM.dd}"
        }
    }
    else if [logType] == "proftpd"
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-ftp-events-%{+YYYY.MM.dd}"
        }
    }
    else if [logType] == "sysLog" or [logType] == "authLog"
    {
        #file { path => "/var/log/logstash/sysLog" }
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-syslog-events-%{+YYYY.MM.dd}"
        }
    }
    else
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
        }
    }
}
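One thing that may be worth ruling out here (an assumption, since the thread never states which protocol the elasticsearch output uses): in Logstash 1.4.x the elasticsearch output defaults to joining the cluster over the transport layer on port 9300, and a cluster-name or version mismatch on that path can fail quietly. Temporarily switching one of the outputs to the HTTP protocol on port 9200 bypasses that discovery path entirely, for example:

    elasticsearch
    {
        protocol => "http"
        host     => "127.0.0.1"
        port     => 9200
        index    => "logstash-weblog-events-%{+YYYY.MM.dd}"
    }

If documents start flowing over HTTP, the problem sits between the Logstash client node and the es-logstash cluster rather than in the filters.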


Also, sanity check:

root@logstash:/var/log/logstash# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination

Chain FORWARD (policy ACCEPT)
target prot opt source destination

Chain OUTPUT (policy ACCEPT)
target prot opt source destination
root@logstash:/var/log/logstash#
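Besides the firewall, it can be worth confirming that Elasticsearch is actually listening on both ports locally (a quick sanity check, assuming net-tools is available):

root@logstash:/var/log/logstash# netstat -lntp | grep -E ':(9200|9300)'

Port 9200 is the HTTP API used by the curl calls above; 9300 is the transport port the Logstash output is expected to hit.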

Don Pich | Jedi Master (aka System Administrator 2) | O: 701-952-5925

3320 Westrac Drive South, Suite A * Fargo, ND 58103

Facebook http://www.facebook.com/RealTruck | Youtube
http://www.youtube.com/realtruckcom| Twitter
http://twitter.com/realtruck | Google+ https://google.com/+Realtruck |
Instagram http://instagram.com/realtruckcom | Linkedin
http://www.linkedin.com/company/realtruck | Our Guiding Principles
http://www.realtruck.com/our-guiding-principles/
“If it goes on a truck we got it, if it’s fun we do it” – RealTruck.com
http://realtruck.com/

On Mon, Apr 20, 2015 at 9:38 AM, Don Pich dpich@realtruck.com wrote:

Thanks for that info. Again, training wheels... :slight_smile:

So below is my logstash config. If I do a tcpdump on port 5044, I see all
of my forwarders communicating with the logstash server. However, if I do
a tcpdump on port 9300, I do not see any traffic. This leads me to believe
that I have a problem in my output.

input
{
lumberjack # comes from logstash-forwarder, we sent ALL formats and
types through this and control logType and logFormat on the client
{
# The port to listen on
port => 5044
host => "192.168.1.72"

   # The paths to your ssl cert and key
   ssl_certificate => "/opt/logstash-1.4.2/ssl/certs/lumberjack.crt" #

new cert needed for latest v of lumberjack-pusher
ssl_key => "/opt/logstash-1.4.2/ssl/private/lumberjack.key"
}

tcp
{
   # Remember with nxlog we're automatically converting our windows

xml to JSON
ssl_cert => "/opt/logstash-1.4.2/ssl/certs/logstash-forwarder.crt"
ssl_key => "/opt/logstash-1.4.2/ssl/private/logstash-forwarder.key"
ssl_enable => true
debug=>true
type => "windowsEventLog"
port => 3515
codec => "line"
add_field=>{"logType"=>"windowsEventLog"}
}
tcp
{
# Remember with nxlog we're automatically converting our windows
xml to JSON
# used for NFSServer which apparently cannot connect via SSL :frowning:
type => "windowsEventLog"
port => 3516
codec => "line"
add_field=>{"logType"=>"windowsEventLog"}
}

}

filter
{
if [logFormat] == "nginxLog"
{
mutate{add_field => ["receivedAt","%{@timestamp}"]} #preserve when
we received this
grok
{
break_on_match => false
match =>
["message","%{IP:visitor_ip}|[^|]+|%{TIMESTAMP_ISO8601:entryDateTime}|%{URIPATH:url}%{URIPARAM:query_string}?|%{INT:http_response}|%{INT:response_length}|(?<http_referrer>[^|]+)|(?<user_agent>[^|]+)|%{BASE16FLOAT:request_time}|%{BASE16FLOAT:upstream_response_time}"]
match => ["url",".(?(?:.(?!.))+)$"]
}
date
{
match => ["entryDateTime","ISO8601"]
remove_field => ["entryDateTime"]
}
}
else if [logFormat] == "exim4"
{
mutate{add_field => ["receivedAt","%{@timestamp}"]} #preserve when
we received this
grok
{
break_on_match => false
match => ["message","(?[^ ]+ [^ ]+)
[(?.)] (?.)"]
}
date
{
match => ["entryDateTime","YYYY-MM-dd HH:mm:ss"]
}
}
else if [logFormat]=="proftpd"
{
grok
{
break_on_match => false
match => ["message","(?[^ ]+) (?[^
]+) (?[^ ]+) [(?.)] (?".")
(?[^ ]+) (?".") (?[^ ]+)"]
add_field => ["receivedAt","%{@timestamp}"] # preserve now
before date overwrites
}
date
{
match => ["entryDateTime","dd/MMM/YYYY:HH:mm:ss Z"]
#target => "testDate"
}
}
else if [logFormat] == "debiansyslog"
{
# linux sysLog
grok
{
break_on_match => false
match => ["message","(?[a-zA-Z]{3} [ 0-9]+ [^
]+) (?[^ ]+) (?[^:]+):(?.
)"]
add_field => ["receivedAt","%{@timestamp}"] # preserve NOW
before date overwrites
}
date
{
# Mar 2 02:21:28 primaryweb-wheezy logstash-forwarder[754]:
2015/03/02 02:21:28.607445 Registrar received 348 events
match => ["entryDateTime","MMM dd HH:mm:ss","MMM d
HH:mm:ss"] # problems with jodatime and missing leading 0 on days, we can
supply multiple patterns :slight_smile:
}
}
else if [type] == "windowsEventLog"
{
json{ source => "message" } # set our source to the entire message
as its JSON
mutate
{
add_field => ["receivedAt","%{@timestamp}"]
}
if [SourceModuleName] == "eventlog"
{
# use the date/time of the entry and not physical time so viewing
acts as expected
date
{
match => ["EventTime","YYYY-MM-dd HH:mm:ss"]
}

     # message defaults to the entire message. Since we have json data

for all properties, copy the event message into it instead
mutate
{
replace => [ "message", "%{Message}" ]
}
mutate
{
remove_field => [ "Message" ]
}
}
}
}
output
{
if [logType] == "webLog"
{
elasticsearch
{
host=>"127.0.0.1"
port=>9300
cluster => "es-logstash"
#node_name => "es-logstash-n1"
index => "logstash-weblog-events-%{+YYYY.MM.dd}"
}
}
else if [logType] == "mailLog"
{
elasticsearch
{
host=>"127.0.0.1"
port=>9300
cluster => "es-logstash"
#node_name => "es-logstash-n1"
index => "logstash-mail-events-%{+YYYY.MM.dd}"
}
}
else if [type] == "windowsEventLog"
{
#file{
# path => "/var/log/logstash/snarf.txt"
#}
elasticsearch
{
host=>"127.0.0.1"
port=>9300
cluster => "es-logstash"
#node_name => "es-logstash-n1"
index => "logstash-windows-events%{+YYYY.MM.dd}"
}
}
else if [logType] == "proftpd"
{
elasticsearch
{
host=>"127.0.0.1"
port=>9300
cluster => "es-logstash"
#node_name => "es-logstash-n1"
index => "logstash-ftp-events-%{+YYYY.MM.dd}"
}
}
else if [logType] == "sysLog" or [logType] == "authLog"
{
#file { path => "/var/log/logstash/sysLog"}
elasticsearch
{
host=>"127.0.0.1"
port=>9300
cluster => "es-logstash"
#node_name => "es-logstash-n1"
index => "logstash-syslog-events-%{+YYYY.MM.dd}"
}
}
else
{

    elasticsearch
    {
        host=>"127.0.0.1"
        port=>9300
        cluster => "es-logstash"
        #node_name => "es-logstash-n1"
    }
}

}

Don Pich | Jedi Master (aka System Administrator 2) | O: 701-952-5925

3320 Westrac Drive South, Suite A * Fargo, ND 58103

Facebook http://www.facebook.com/RealTruck | Youtube
http://www.youtube.com/realtruckcom| Twitter
http://twitter.com/realtruck | Google+ https://google.com/+Realtruck
| Instagram http://instagram.com/realtruckcom | Linkedin
http://www.linkedin.com/company/realtruck | Our Guiding Principles
http://www.realtruck.com/our-guiding-principles/
“If it goes on a truck we got it, if it’s fun we do it” – RealTruck.com
http://realtruck.com/

On Mon, Apr 20, 2015 at 9:17 AM, David Pilato david@pilato.fr wrote:

Having unassigned shards is perfectly fine on a one node cluster.
The fact that your cluster were yellow does not mean your cluster was not
behaving correctly.

--
David Pilato - Developer | Evangelist
elastic.co http://elastic.co
@dadoonet https://twitter.com/dadoonet | @elasticsearchfr
https://twitter.com/elasticsearchfr | @scrutmydocs
https://twitter.com/scrutmydocs

Le 20 avr. 2015 à 15:54, Don Pich dpich@realtruck.com a écrit :

Hello David,

I found and this online that made my cluster go 'green'.
Trifork Blog - Keep updated on the technical solutions Trifork is working on!
I don't know for certain if that was 100% of the problem, but there are no
longer unassigned shards.

root@logstash:/# curl -XGET 'localhost:9200/_cluster/health?pretty=true'
{
"cluster_name" : "es-logstash",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 2792,
"active_shards" : 5584,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
root@logstash:/#

However, the root of my problem still exists. I did restart the
forwarders, and TCP dump does show that traffic is indeed hitting the
server. But my indicies folder does not contain fresh data except for one
source.

Don Pich | Jedi Master (aka System Administrator 2) | O: 701-952-5925
3320 Westrac Drive South, Suite A * Fargo, ND 58103
Facebook http://www.facebook.com/RealTruck | Youtube
http://www.youtube.com/realtruckcom| Twitter
http://twitter.com/realtruck | Google+ https://google.com/+Realtruck
| Instagram http://instagram.com/realtruckcom | Linkedin
http://www.linkedin.com/company/realtruck | Our Guiding Principles
http://www.realtruck.com/our-guiding-principles/
“If it goes on a truck we got it, if it’s fun we do it” – RealTruck.com
http://realtruck.com/

On Sun, Apr 19, 2015 at 10:04 PM, David Pilato david@pilato.fr wrote:

Are you using the same exact JVM version?
Where do those logs come from? LS ? ES ?

Could you try the same with a cleaned Elasticsearch ? I mean with no
data ?
My suspicion is that you have too many shards allocated on a single
(tiny?) node.

What is your node size BTW (memory / heap size)?

David

Le 19 avr. 2015 à 23:09, Don Pich dpich@realtruck.com a écrit :

Thanks for taking the time to answer David.

Again, got my training wheels on with an ELK stack so I will do my best
to answer.

Here is an example. The one indecy that is working has a fresh
directory with todays date in the elasticsearch directory. The ones that
are not working do not have a directory.

Logstash and Elastisearch are running with the logs not generating much
information as far as pointing to any error.

log4j, [2015-04-19T13:41:44.723] WARN:
org.elasticsearch.transport.netty: [logstash-logstash-3170-2032] Message
not fully read (request) for [2] and action
[internal:discovery/zen/unicast_gte_1_4], resetting
log4j, [2015-04-19T13:41:49.569] WARN:
org.elasticsearch.transport.netty: [logstash-logstash-3170-2032] Message
not fully read (request) for [5] and action
[internal:discovery/zen/unicast_gte_1_4], resetting
log4j, [2015-04-19T13:41:54.572] WARN:
org.elasticsearch.transport.netty: [logstash-logstash-3170-2032] Message
not fully read (request) for [10] and action
[internal:discovery/zen/unicast_gte_1_4], resetting

Don Pich | Jedi Master (aka System Administrator 2) | O: 701-952-5925
3320 Westrac Drive South, Suite A * Fargo, ND 58103
Facebook http://www.facebook.com/RealTruck | Youtube
http://www.youtube.com/realtruckcom| Twitter
http://twitter.com/realtruck | Google+ https://google.com/+Realtruck
| Instagram http://instagram.com/realtruckcom | Linkedin
http://www.linkedin.com/company/realtruck | Our Guiding Principles
http://www.realtruck.com/our-guiding-principles/
“If it goes on a truck we got it, if it’s fun we do it” – RealTruck.com
http://realtruck.com/


Might be. But you should ask this on the logstash mailing list.
I think that Elasticsearch is working fine here, as you did not see any trouble in the logs.

That said, I'd use:

elasticsearch {
    protocol => "http"
    host => "localhost"
}

That is, using the REST port (9200).

You can also add this output to make sure something is actually being sent to Elasticsearch:

output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        protocol => "http"
        host => "localhost"
    }
}
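If it helps, the stdout output is easiest to watch by running Logstash in the foreground for a minute or two (the config path below is an assumption about this install):

/opt/logstash-1.4.2/bin/logstash agent -f /etc/logstash/conf.d/

Events that the rubydebug codec prints but that never show up in Elasticsearch point at the elasticsearch output; no events printed at all points back at the inputs and filters.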


On 20 Apr 2015, at 16:38, Don Pich dpich@realtruck.com wrote:

Thanks for that info. Again, training wheels... :)

So below is my logstash config. If I do a tcpdump on port 5044, I see all of my forwarders communicating with the logstash server. However, if I do a tcpdump on port 9300, I do not see any traffic. This leads me to believe that I have a problem in my output.
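For reference, the kind of capture being described might look like this (interface names are assumptions about this host):

tcpdump -nn -i eth0 port 5044               # lumberjack traffic arriving from the forwarders
tcpdump -nn -i lo port 9300 or port 9200    # traffic from Logstash to Elasticsearch (transport vs. REST)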

input
{
    lumberjack # comes from logstash-forwarder, we sent ALL formats and types through this and control logType and logFormat on the client
    {
        # The port to listen on
        port => 5044
        host => "192.168.1.72"

        # The paths to your ssl cert and key
        ssl_certificate => "/opt/logstash-1.4.2/ssl/certs/lumberjack.crt" # new cert needed for latest v of lumberjack-pusher
        ssl_key         => "/opt/logstash-1.4.2/ssl/private/lumberjack.key"
    }

    tcp
    {
        # Remember with nxlog we're automatically converting our windows xml to JSON
        ssl_cert => "/opt/logstash-1.4.2/ssl/certs/logstash-forwarder.crt"
        ssl_key  => "/opt/logstash-1.4.2/ssl/private/logstash-forwarder.key"
        ssl_enable => true
        debug => true
        type => "windowsEventLog"
        port => 3515
        codec => "line"
        add_field => {"logType" => "windowsEventLog"}
    }
    tcp
    {
        # Remember with nxlog we're automatically converting our windows xml to JSON
        # used for NFSServer which apparently cannot connect via SSL :(
        type => "windowsEventLog"
        port => 3516
        codec => "line"
        add_field => {"logType" => "windowsEventLog"}
    }
}

filter
{
    if [logFormat] == "nginxLog"
    {
        mutate{add_field => ["receivedAt","%{@timestamp}"]} #preserve when we received this
        grok
        {
            break_on_match => false
            match => ["message","%{IP:visitor_ip}|[^|]+|%{TIMESTAMP_ISO8601:entryDateTime}|%{URIPATH:url}%{URIPARAM:query_string}?|%{INT:http_response}|%{INT:response_length}|(?<http_referrer>[^|]+)|(?<user_agent>[^|]+)|%{BASE16FLOAT:request_time}|%{BASE16FLOAT:upstream_response_time}"]
            match => ["url",".(?(?:.(?!.))+)$"]
        }
        date
        {
            match => ["entryDateTime","ISO8601"]
            remove_field => ["entryDateTime"]
        }
    }
    else if [logFormat] == "exim4"
    {
        mutate{add_field => ["receivedAt","%{@timestamp}"]} #preserve when we received this
        grok
        {
            break_on_match => false
            match => ["message","(?[^ ]+ [^ ]+) [(?.)] (?.)"]
        }
        date
        {
            match => ["entryDateTime","YYYY-MM-dd HH:mm:ss"]
        }
    }
    else if [logFormat] == "proftpd"
    {
        grok
        {
            break_on_match => false
            match => ["message","(?[^ ]+) (?[^ ]+) (?[^ ]+) [(?.)] (?".") (?[^ ]+) (?".") (?[^ ]+)"]
            add_field => ["receivedAt","%{@timestamp}"] # preserve now before date overwrites
        }
        date
        {
            match => ["entryDateTime","dd/MMM/YYYY:HH:mm:ss Z"]
            #target => "testDate"
        }
    }
    else if [logFormat] == "debiansyslog"
    {
        # linux sysLog
        grok
        {
            break_on_match => false
            match => ["message","(?[a-zA-Z]{3} [ 0-9]+ [^ ]+) (?[^ ]+) (?[^:]+):(?.)"]
            add_field => ["receivedAt","%{@timestamp}"] # preserve NOW before date overwrites
        }
        date
        {
            # Mar 2 02:21:28 primaryweb-wheezy logstash-forwarder[754]: 2015/03/02 02:21:28.607445 Registrar received 348 events
            match => ["entryDateTime","MMM dd HH:mm:ss","MMM d HH:mm:ss"] # problems with jodatime and missing leading 0 on days, we can supply multiple patterns :)
        }
    }
    else if [type] == "windowsEventLog"
    {
        json{ source => "message" } # set our source to the entire message as its JSON
        mutate
        {
            add_field => ["receivedAt","%{@timestamp}"]
        }
        if [SourceModuleName] == "eventlog"
        {
            # use the date/time of the entry and not physical time so viewing acts as expected
            date
            {
                match => ["EventTime","YYYY-MM-dd HH:mm:ss"]
            }

            # message defaults to the entire message. Since we have json data for all properties, copy the event message into it instead
            mutate
            {
                replace => [ "message", "%{Message}" ]
            }
            mutate
            {
                remove_field => [ "Message" ]
            }
        }
    }
}
output
{
    if [logType] == "webLog"
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-weblog-events-%{+YYYY.MM.dd}"
        }
    }
    else if [logType] == "mailLog"
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-mail-events-%{+YYYY.MM.dd}"
        }
    }
    else if [type] == "windowsEventLog"
    {
        #file{
        #    path => "/var/log/logstash/snarf.txt"
        #}
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-windows-events%{+YYYY.MM.dd}"
        }
    }
    else if [logType] == "proftpd"
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-ftp-events-%{+YYYY.MM.dd}"
        }
    }
    else if [logType] == "sysLog" or [logType] == "authLog"
    {
        #file { path => "/var/log/logstash/sysLog"}
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
            index => "logstash-syslog-events-%{+YYYY.MM.dd}"
        }
    }
    else
    {
        elasticsearch
        {
            host => "127.0.0.1"
            port => 9300
            cluster => "es-logstash"
            #node_name => "es-logstash-n1"
        }
    }
}
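Following David's suggestion above, any one of these elasticsearch blocks switched to the REST interface would look roughly like this (the index name is just carried over from the webLog block as an example; the cluster setting should not be needed over HTTP):

elasticsearch
{
    protocol => "http"
    host     => "127.0.0.1"
    port     => 9200
    index    => "logstash-weblog-events-%{+YYYY.MM.dd}"
}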


On Mon, Apr 20, 2015 at 9:17 AM, David Pilato <david@pilato.fr> wrote:
Having unassigned shards is perfectly fine on a one-node cluster.
The fact that your cluster was yellow does not mean it was not behaving correctly.


On 20 Apr 2015, at 15:54, Don Pich <dpich@realtruck.com> wrote:

Hello David,

I found this online, and it made my cluster go 'green': http://blog.trifork.com/2013/10/24/how-to-avoid-the-split-brain-problem-in-elasticsearch/ (a Trifork blog post on avoiding the split-brain problem in Elasticsearch). I don't know for certain if that was 100% of the problem, but there are no longer unassigned shards.

root@logstash:/# curl -XGET 'localhost:9200/_cluster/health?pretty=true'
{
"cluster_name" : "es-logstash",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 2,
"number_of_data_nodes" : 2,
"active_primary_shards" : 2792,
"active_shards" : 5584,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
root@logstash:/#

However, the root of my problem still exists. I did restart the forwarders, and tcpdump does show that traffic is indeed hitting the server. But my indices folder does not contain fresh data except for one source.
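The same check can be made through the REST API rather than the data directory; per-index document counts make it obvious which indices have stopped growing:

curl 'localhost:9200/_cat/indices/logstash-*?v'

Today's indices that are missing, or whose docs.count never increases, are the ones the corresponding output block is not reaching.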



Thanks David. I will move over to logstash, as I agree that is starting to feel like where the problem is.

I appreciate your help!!



Hi,

Having read through the thread it sounds like your configuration has been working in the past. Is that correct?

If this is the case I would reiterate David's initial questions about your node's RAM and heap size, as the number of shards looks quite large for a single node. Could you please provide information about this?

Best regards,

Christian


Hey Christian,

8 GB of RAM
-Xms6g -Xmx6g
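For context, a 6 GB heap on an 8 GB box leaves relatively little memory for the operating system's file cache, which Lucene leans on heavily. If Elasticsearch was installed from the Debian package, the heap is normally set via ES_HEAP_SIZE (the file path below is an assumption about this install):

# /etc/default/elasticsearch
ES_HEAP_SIZE=6g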



Hi,

That sounds like a very large number of shards for a node that size, and this is most likely the source of your problems. Each shard in Elasticsearch corresponds to a Lucene instance and carries a certain amount of overhead, so you do not want your shards to be too small. For logging use cases a common shard size is at least a few GB.

If you are using daily indices and the default 5 shards per index, you may want to consider reducing the shard count for each of your indices and/or switching to weekly or perhaps monthly indices, in order to reduce the number of shards created each day and increase the average shard size going forward.
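A sketch of both adjustments, with illustrative values (the template name, the single shard, and the weekly date pattern are choices to adapt, not requirements):

# index template so that new logstash-* indices get 1 shard instead of the default 5
curl -XPUT 'localhost:9200/_template/logstash_shards' -d '{
  "template": "logstash-*",
  "order": 1,
  "settings": { "number_of_shards": 1 }
}'

# in each Logstash elasticsearch output, a weekly index instead of a daily one
index => "logstash-weblog-events-%{+xxxx.ww}"

The template only affects indices created after it is put in place; existing indices keep their shard count.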

In order to get the instance working again you may also need to start closing the older indices, in order to bring down the number of active shards, and/or upgrade the node to get more RAM.
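Closing can be done per index or with a wildcard; the index name and date below are only an example of the naming pattern used in this thread:

curl -XPOST 'localhost:9200/logstash-weblog-events-2015.01.*/_close'

Closed indices stay on disk and can be reopened later with _open, but they no longer keep their shards loaded.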

Best regards,

Christian
