Cross-datacenter cluster for logstash backend

Trying to work out the best way to end up with a single search portal
(Kibana) for logstash with an elasticsearch backend. We have several
datacenters and I've been asked to keep the cross-datacenter traffic
down, which means I'd like to be able to jail logstash output to a
particular datacenter. Could I solve this by tagging nodes per
datacenter and then using index templates to specify which nodes should
store the data? If I can't do this, I'll probably need to push back on
the restriction against pushing log data between datacenters.

Something like this:

# elasticsearch.yml on each node in DC1
node.datacenter: DC1

# index template: pin any DC1_logstash* index to nodes tagged datacenter=DC1
curl -XPUT http://localhost:9200/_template/datacenter_per_index -d '
{
  "template" : "DC1_logstash*",
  "settings" : {
    "index.routing.allocation.include.datacenter" : "DC1"
  }
}'
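
The same pattern would repeat per datacenter, e.g. a second template
(with its own name) for nodes tagged node.datacenter: DC2. A
hypothetical sketch:

curl -XPUT http://localhost:9200/_template/datacenter_per_index_dc2 -d '
{
  "template" : "DC2_logstash*",
  "settings" : {
    "index.routing.allocation.include.datacenter" : "DC2"
  }
}'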

Then I can just tell logstash at each location to output to the index
'DC1_logstash-%{+YYYY.MM.dd}' and make Kibana aware of each of the
index prefixes.
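
Roughly what I have in mind on the logstash side (untested sketch,
assuming the elasticsearch output's host and index options; the
hostname is a placeholder):

output {
  elasticsearch {
    host  => "localhost"
    # Prefix the daily index with this datacenter's tag so the
    # template above keeps its shards on DC1 nodes.
    index => "DC1_logstash-%{+YYYY.MM.dd}"
  }
}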

--

Hello Paul,

Yep, index shard allocation should solve the problem on the ES side.
And Kibana seems to let you configure multiple index patterns, so it
should work as well.
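
If you want to double-check that the template applied, you can fetch
the settings of a freshly created daily index (just a sketch, with an
example index name):

# should show index.routing.allocation.include.datacenter : DC1
curl -XGET http://localhost:9200/DC1_logstash-2012.11.09/_settings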

But if you encounter any issues you can always post here :)

Best regards,
Radu

http://sematext.com/ -- Elasticsearch -- Solr -- Lucene
