Thank you very much for your support around this topic.
The last problem was due to xpack.security. I disabled it and everything seems to be fine now; re-enabling it properly will be a later step.
As a summary for others, I had to deal with:
- /tmp/ settings, because the default temp directory was read-only and not writable for Java (see the sketch below):
  => For elasticsearch: adapt Environment=ES_TMPDIR in /etc/systemd/system/elasticsearch.service.d/override.conf to point to a directory where the elasticsearch user has read/write access.
  => For logstash: adapt the I/O temp directory -Djava.io.tmpdir= in /etc/logstash/jvm.options.
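For reference, a minimal sketch of both changes, assuming /usr/share/elasticsearch/tmp and /usr/share/logstash/tmp as the writable directories (example paths only, not from my setup; any existing directory the respective user can write to works):

In /etc/systemd/system/elasticsearch.service.d/override.conf:
[Service]
Environment=ES_TMPDIR=/usr/share/elasticsearch/tmp

In /etc/logstash/jvm.options:
-Djava.io.tmpdir=/usr/share/logstash/tmp

After editing the systemd override, a systemctl daemon-reload is needed before restarting elasticsearch.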
- Modify access rights for the logstash and elasticsearch users on the tmp and log directories, since those directories were created as root:root:
  chown -R root:elasticsearch <path>
  chmod 775 <path>
  chown -R root:logstash <other_path>
  chmod 775 <other_path>
- Prefer IPv4 in the jvm.options files /etc/elasticsearch/jvm.options and /etc/logstash/jvm.options:
  -Djava.net.preferIPv4Stack=true
- Configure /etc/logstash/conf.d/syslog.conf (full file below; a quick syntax check is sketched right after it).
  => This configuration came from the F5 forums and seemed to match my needs for BIG-IP ASM logs.
  => The input and output sections below are working. The grok filter is still a problem, but that will be another topic.
  => The geoip filter as I found it did not include the target line, which broke the service, so I added the target line found in another thread.
input {
  syslog {
    port => 5140
    codec => plain {
      # charset => "ISO-8859-1"
    }
  }
}

filter {
  # One pattern per ASM key="value" pair; break_on_match => false lets them all apply.
  grok {
    match => {
      "message" => [
        ",attack_type=\"%{DATA:attack_type}\"",
        ",blocking_exception_reason=\"%{DATA:blocking_exception_reason}\"",
        ",bot_anomalies=\"%{DATA:bot_anomalies}\"",
        ",bot_category=\"%{DATA:bot_category}\"",
        ",bot_signature_name=\"%{DATA:bot_signature_name}\"",
        ",client_application=\"%{DATA:client_application}\"",
        ",client_application_version=\"%{DATA:client_application_version}\"",
        ",client_class=\"%{DATA:client_class}\"",
        ",date_time=\"%{DATA:date_time}\"",
        ",dest_port=\"%{DATA:dest_port}\"",
        ",enforced_bot_anomalies=\"%{DATA:enforced_bot_anomalies}\"",
        ",grpc_method=\"%{DATA:grpc_method}\"",
        ",grpc_service=\"%{DATA:grpc_service}\"",
        ",ip_client=\"%{DATA:ip_client}\"",
        ",is_truncated=\"%{DATA:is_truncated}\"",
        ",method=\"%{DATA:method}\"",
        ",outcome=\"%{DATA:outcome}\"",
        ",outcome_reason=\"%{DATA:outcome_reason}\"",
        ",policy_name=\"%{DATA:policy_name}\"",
        ",protocol=\"%{DATA:protocol}\"",
        ",request_status=\"%{DATA:request_status}\"",
        ",request=\"%{DATA:request}\"",
        ",request_body_base64=\"%{DATA:request_body_base64}\"",
        ",response_code=\"%{DATA:response_code}\"",
        ",severity=\"%{DATA:severity}\"",
        ",sig_cves=\"%{DATA:sig_cves}\"",
        ",sig_ids=\"%{DATA:sig_ids}\"",
        ",sig_names=\"%{DATA:sig_names}\"",
        ",sig_set_names=\"%{DATA:sig_set_names}\"",
        ",src_port=\"%{DATA:src_port}\"",
        ",staged_sig_cves=\"%{DATA:staged_sig_cves}\"",
        ",staged_sig_ids=\"%{DATA:staged_sig_ids}\"",
        ",staged_sig_names=\"%{DATA:staged_sig_names}\"",
        ",staged_threat_campaign_names=\"%{DATA:staged_threat_campaign_names}\"",
        ",sub_violations=\"%{DATA:sub_violations}\"",
        ",support_id=\"%{DATA:support_id}\"",
        ",threat_campaign_names=\"%{DATA:threat_campaign_names}\"",
        ",unit_hostname=\"%{DATA:unit_hostname}\"",
        ",uri=\"%{DATA:uri}\"",
        ",violations=\"%{DATA:violations}\"",
        ",violation_details=\"%{DATA:violation_details_xml}\"",
        ",violation_rating=\"%{DATA:violation_rating}\"",
        ",vs_name=\"%{DATA:vs_name}\"",
        ",x_forwarded_for_header_value=\"%{DATA:x_forwarded_for_header_value}\""
      ]
    }
    break_on_match => false
  }

  # violation_details arrives as an XML string; parse it unless the field is "N/A".
  if [violation_details_xml] != "N/A" {
    xml {
      source => "violation_details_xml"
      target => "violation_details"
    }
  }

  # Turn the comma-separated list fields into arrays and drop fields that are no longer needed.
  mutate {
    split => { "attack_type" => "," }
    split => { "sig_cves" => "," }
    split => { "sig_ids" => "," }
    split => { "sig_names" => "," }
    split => { "sig_set_names" => "," }
    split => { "staged_sig_cves" => "," }
    split => { "staged_sig_ids" => "," }
    split => { "staged_sig_names" => "," }
    split => { "staged_threat_campaign_names" => "," }
    split => { "sub_violations" => "," }
    split => { "threat_campaign_names" => "," }
    split => { "violations" => "," }
    remove_field => [
      "[violation_details][violation_masks]",
      "violation_details_xml",
      "message"
    ]
  }

  # Use the X-Forwarded-For value when present, otherwise fall back to the client IP.
  if [x_forwarded_for_header_value] != "N/A" {
    mutate { add_field => { "source_host" => "%{x_forwarded_for_header_value}" } }
  } else {
    mutate { add_field => { "source_host" => "%{ip_client}" } }
  }

  # The explicit target line is what keeps this geoip filter from breaking the service.
  geoip {
    source => "source_host"
    target => "source_geo"
  }

  # Recursively walk the parsed violation_details and base64-decode the values that
  # decode cleanly, storing the results under violation_details_b64decoded.
  ruby {
    code => "
      require 'base64';
      data = event.get('[violation_details]');
      def check64(value)
        value.is_a?(String) && Base64.strict_encode64(Base64.decode64(value)) == value;
      end
      def iterate(key, i, event)
        if i.is_a?(Hash)
          i.each do |k, v|
            if v.is_a?(Hash) || v.is_a?(Array)
              newkey = key + '[' + k + ']';
              iterate(newkey, v, event)
            end
          end
        elsif i.is_a?(Array)
          i.each do |v|
            iterate(key, v, event)
          end
        else
          if check64(i)
            event.set(key, Base64.decode64(i))
          end
        end
      end
      iterate('[violation_details_b64decoded]', data, event)
    "
  }
}

output {
  elasticsearch {
    hosts => ["http://server_ip:9200"]
    user => "some_user"
    password => "some_password"
    index => "logs-waf-dcb"
  }
}
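Before restarting logstash on this file, the pipeline syntax can be checked from the command line first (paths below assume a standard package install; adjust if yours differ):
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
It parses the configuration, reports whether it is valid, and exits without starting the pipeline.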
- Disable xpack.security in /etc/elasticsearch/elasticsearch.yml. I consider this a temporary setting.
  xpack.security.enabled: false
  xpack.security.http.ssl:
    enabled: false
  xpack.security.transport.ssl:
    enabled: false
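After restarting elasticsearch, a quick check that security is really off is to query the cluster without credentials over plain HTTP (same server_ip as in the logstash output above):
curl http://server_ip:9200/_cluster/health?pretty
It should answer with the cluster status instead of an authentication error.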
- Change the URL used by kibana to access elasticsearch in /etc/kibana/kibana.yml to use http and not https. I consider this a temporary setting.
  elasticsearch.hosts: ['http://elasticsearchserver:9200']
Hope this helps.
Feel free to provide any complementary information.