Hourly Shards Elasticsearch/Kibana

Hello All,

I have a question about hourly index rotation with either logstash or fluentd.
We are (or will be) using a setup called FLEKZ, so I am trying to
integrate logstash and fluentd together, which work well with each
other. However, I have a business requirement for rolling 24-hour index
deletion.

When I add

logstash_dateformat %Y.%m.%d.%H

in fluentd and

index => "logstash-%{+YYYY.MM.dd.HH}"

in logstash, Elasticsearch can no longer find the indices. In Kibana
nothing shows up, and I am also unable to search any of the indices
through the API. If I switch both back to the plain Y.m.d format, the
data reappears. Is there something I am doing wrong, or something
missing in my config files?
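For context, these settings sit in my configs roughly like this (trimmed to the relevant bits; hosts and other surrounding options omitted -- the fluentd option names come from fluent-plugin-elasticsearch):

```
# fluentd (fluent-plugin-elasticsearch), trimmed:
<match **>
  type elasticsearch
  logstash_format true
  logstash_dateformat %Y.%m.%d.%H
</match>

# logstash elasticsearch output, trimmed:
output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd.HH}"
  }
}
```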

Thank you for your help,

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/a0dc08e6-c570-4305-bc0b-808937551f54%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Hey There,

Did you remember to change the "Timestamping" setting in Kibana so that it
knows you are using an hourly index? Go to the index configuration screen to
check that.
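In Kibana 3 that setting lives in the dashboard's index configuration; the saved dashboard JSON looks roughly like this (a sketch -- the pattern shown is an assumption matching the hourly format above):

```
{
  "index": {
    "pattern": "[logstash-]YYYY.MM.DD.HH",
    "interval": "hour"
  }
}
```

With "interval" left at "day", Kibana keeps querying daily index names and finds nothing once you switch to hourly indices.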

Also, since you have a 24-hour rollout requirement, did you try enabling _ttl
(http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-ttl-field.html)
on your indices? That way, docs older than the specified time are
automatically deleted.
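If you do go the _ttl route on ES 1.x, one way to turn it on for all new logstash indices is an index template; a minimal sketch (the template name and the 24h default are assumptions, not required values):

```
PUT /_template/logstash-ttl
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "_ttl": { "enabled": true, "default": "24h" }
    }
  }
}
```

Note that _ttl deletes individual documents, not the indices themselves, so the empty hourly indices would still need cleanup eventually.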


TTL isn't the best idea, as it consumes a lot of resources. You're better
off getting your hourly indices working.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


Hey Mark,

What do you call a "lot of resources"? And how do you go about detecting
it?
I'm currently using TTLs to roll old logs out of my cluster. It's pretty
small right now (about 40 GB of data), but as it gets bigger I want to know
whether it will pose a problem.

Thanks


I thought I replied to this yesterday... Anyway, it was the Kibana setting.
Thank you for that.


It depends on a few factors: document size, index size, and so on.

If you are using ES for logging data, then best practice is to use
timestamped indices and simply drop old ones as needed using curator.
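The retention logic curator automates boils down to parsing the timestamp out of each index name and deleting everything past the window. A rough sketch (a hypothetical helper, not curator's actual code; index names follow the hourly pattern discussed above):

```python
from datetime import datetime, timedelta

def expired_hourly_indices(index_names, now, hours_to_keep=24, prefix="logstash-"):
    """Return the index names whose hour bucket is older than the retention window."""
    cutoff = now - timedelta(hours=hours_to_keep)
    expired = []
    for name in index_names:
        try:
            stamp = datetime.strptime(name, prefix + "%Y.%m.%d.%H")
        except ValueError:
            continue  # skip indices that don't match the hourly pattern
        if stamp < cutoff:
            expired.append(name)  # candidate for DELETE /<name>
    return expired
```

Each returned name would then be removed with a plain `DELETE /<index>` call, which is far cheaper than per-document TTL deletes because dropping a whole index is a near-instant operation.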

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com
