ELK Stack restrict access to some data


(Alexander Yakovlev) #1

I've just started working with the ELK stack for log collection, and I've also installed X-Pack for Kibana and Elasticsearch. How can I restrict access to some groups of logs (certain hosts) for certain people? For log collection I'm using Logstash listening on a UDP port; clients send logs with rsyslog. It's critical for me not to change the clients' sending settings (rsyslog only).


(Magnus Bäck) #2

This isn't really a Logstash question. I suggest you edit your post and move it to the X-Pack category.


(Alexander Yakovlev) #3

Hi Magnus. Do you have any ideas about this? X-Pack adds security features to Kibana. Ideally I would split one input into two or more indices and give each team its own account.


(Christian Dahlqvist) #4

You can use the role-based access controls that X-Pack provides to control who has access to what. This can be done either directly at the index level, assuming you are storing different categories of data in different indices, or using document level security if you tend to have all data in a single index and want to differentiate data based on the content.
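For example, a role granting read-only access to one category of indices might look like this. This is only a sketch: the role name (logs_reader) and index pattern are placeholders, and on X-Pack 5.x you would PUT it to the security role API (/_xpack/security/role/logs_reader):

{
  "indices": [
    {
      "names": [ "team1-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}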


(Alexander Yakovlev) #5

Hi Christian. Document level security is a Platinum feature; I can only get Gold.


(Christian Dahlqvist) #6

If you have a limited number of log types you need to secure differently, you can store them in different indices. What level of granularity do you need?


(Alexander Yakovlev) #7

I have only one type. My input block looks something like this:
input {
  udp {
    port => 514
    type => syslog
  }
}

For example, we have 3 teams and 4 server groups. Team1 must be able to see logs from grsrv2 and grsrv3,
team2 only from grsrv1, and
team3 from all server groups.
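Assuming each server group's logs end up in indices named after the group (e.g. grsrv1-*), I imagine the mapping could be expressed with X-Pack roles, e.g. in roles.yml. A sketch only; the role names and index patterns are placeholders:

team1:
  indices:
    - names: [ 'grsrv2-*', 'grsrv3-*' ]
      privileges: [ 'read' ]
team2:
  indices:
    - names: [ 'grsrv1-*' ]
      privileges: [ 'read' ]
team3:
  indices:
    - names: [ 'grsrv*' ]
      privileges: [ 'read' ]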


(Alexander Yakovlev) #8

I'm not sure, but something like this should work.... or not :slight_smile:
input {
  if "%{host}" == "any_host_ip" {
    udp {
      port => any_port
      type => "index_one"
    }
  } else {
    udp {
      port => any_port
      type => "index_two"
    }
  }
}

filter {
  if [type] == "%{type}" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:%{type}_timestamp} %{SYSLOGHOST:%{type}_hostname} %{DATA:%{type}_program}(?:\[%{POSINT:%{type}_pid}\])?: %{GREEDYDATA:%{type}_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "%{type}_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{type}-%{+YYYY.MM.dd}"
    document_type => "system_logs"
    user => "anyuser"
    password => "anypassword"
  }
}


(Christian Dahlqvist) #9

You cannot have conditionals in the input block, but you can set the index to write to based on the value of the host field, which indicates where the event comes from. You can do this using a simple conditional or through the translate plugin.


(Alexander Yakovlev) #10

In grok I add a host field:
add_field => [ "received_from", "%{host}" ]

So I can route events to an Elasticsearch index with a host filter?
index => "any_index-%{host}"
Something like this?
For example, that would create an index any_index-host_name? Is that right?


(Christian Dahlqvist) #11

You may not want the host in the index name, but if you e.g. wanted to send events from 1.1.1.1 to a separate index you could do something like this:

if [host] == "1.1.1.1" {
  mutate {
    add_field => [ "[@metadata][index]", "indexA" ]
  }
} else {
  mutate {
    add_field => [ "[@metadata][index]", "indexB" ]
  }
}

Then use the [@metadata][index] field as index prefix in the elasticsearch output plugin.

elasticsearch {
  ...
  index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"
  ...
}

If you have more complex requirements when it comes to mapping hosts to indices, you can use the translate plugin.
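With the translate filter, the host-to-index mapping becomes a dictionary lookup. A sketch only; the IP addresses and index names here are placeholders, and the fallback value catches unmapped hosts:

filter {
  translate {
    field => "host"
    destination => "[@metadata][index]"
    dictionary => {
      "1.1.1.1" => "grsrv1"
      "1.1.1.2" => "grsrv2"
      "1.1.1.3" => "grsrv3"
    }
    fallback => "unknown"
  }
}

The elasticsearch output can then use "%{[@metadata][index]}-%{+YYYY.MM.dd}" as before, and for a large mapping the dictionary can be moved into an external file via the dictionary_path option.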


(Alexander Yakovlev) #12

Thank you, Christian!


(Mark Walkom) #13

Also FYI we’ve renamed ELK to the Elastic Stack, otherwise Beats feels left out :wink:


(system) #14

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.