It's probably down to not using the correct terminology when searching this
group, but what's the recommended way to handle the situation where I need
the same data source in 2 different Elasticsearch clusters?
ES1 data sources:
syslogs
firewall logs
webserver logs

ES2 data sources:
Twitter
flat file
webserver logs (same as ES1)
The idea would be that each Elasticsearch cluster would have its own Kibana:
one would be used by sysadmins and the other would be used for more business
analysis purposes.
Sounds reasonable. Yeah, at the moment, Kibana pointing to a single cluster
will share all the dashboards among everybody who has access to it. If you
don't want the dashboard sharing, you need 2 separate ES clusters for now,
each with its own Kibana.
However, there are some ideas in this post that you might be able to use to
configure a proxy behind 2 Kibanas (for instance) pointing to a single ES
cluster:
Thanks for that, but it wasn't so much the Kibana side of things I was wondering about, as I would expect to have to use separate Kibanas anyway. It's more: what's the best way to set things up so that the same data source is available in 2 different ES clusters, without each cluster also carrying the extra sources required by the other?
Yes it could - although test it to see if it is acceptable to you. If it
becomes a problem, then you can always run multiple LS feeders one per ES
cluster and then just separate the config outputs individually.
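For illustration, a single Logstash instance can feed the shared source to both clusters simply by listing two elasticsearch outputs in its config. A minimal sketch, assuming hypothetical hostnames and log path; exact option names vary between Logstash versions:

```conf
# One Logstash instance feeding both clusters (hypothetical hosts/paths).
input {
  file {
    path => "/var/log/apache2/access.log"
  }
}

output {
  # Every event is sent to every output block, so the shared
  # webserver logs land in both ES1 and ES2 from a single reader.
  elasticsearch {
    hosts => ["http://es1.example.com:9200"]
  }
  elasticsearch {
    hosts => ["http://es2.example.com:9200"]
  }
}
```

The caveat Binh raises applies here: if one cluster is slow or down it can back-pressure the whole pipeline, which is when splitting into one feeder per cluster starts to make sense.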
Forgive me, but when you say feeders, do you mean the LS instance actually
processing the log? Can you run multiple LS instances on the same log without
having them trip over each other, or end up with data missing because it was
read by the other LS first?
On Wednesday, March 12, 2014 3:12:04 PM UTC, Binh Ly wrote:
> Yes it could - although test it to see if it is acceptable to you. If it
> becomes a problem, then you can always run multiple LS feeders one per ES
> cluster and then just separate the config outputs individually.
You can run different instances of LS, each with its own config file. When
you define your file input, just point each instance at a unique sincedb_path
(different for each instance).
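A rough sketch of that two-instance setup, with hypothetical hosts and paths. The key is that each instance keeps its own read bookmark via the file input's `sincedb_path` option, so both can tail the same file independently without either one "stealing" lines from the other:

```conf
# Instance A, feeding ES1 (run as, e.g.: bin/logstash -f es1.conf)
input {
  file {
    path         => "/var/log/apache2/access.log"
    sincedb_path => "/var/lib/logstash/sincedb-es1"   # A's own position bookmark
  }
}
output {
  elasticsearch { hosts => ["http://es1.example.com:9200"] }
}

# Instance B, feeding ES2 (separate config file, e.g. es2.conf)
input {
  file {
    path         => "/var/log/apache2/access.log"
    sincedb_path => "/var/lib/logstash/sincedb-es2"   # B's own position bookmark
  }
}
output {
  elasticsearch { hosts => ["http://es2.example.com:9200"] }
}
```

Each instance reads the full file at its own pace; nothing is consumed destructively, so no data goes missing from either cluster.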