Hi! I'm facing an issue while moving from an Elasticsearch ingest pipeline to a Logstash pipeline. I had correctly set up Filebeat for IIS logs and configured the Elasticsearch ingest pipeline as per the documentation (with Logstash calling the pipeline in its output). I then had to move the processing into Logstash for the dns reverse plugin, but once I remove the pipeline option from the Logstash elasticsearch output (keeping the same Filebeat alias as the index), nothing gets into Elasticsearch. It works only on a brand-new index. I don't think it's a parsing issue, because it works on a new index, but I'm out of ideas. Any help is appreciated.
Hi @Daniele_Stefanucci welcome to the community.
Perhaps post your Logstash config so we can see it. Please format it with the </> option above.
Question: are you using a custom ingest pipeline you created, or the one that is provided with the Filebeat IIS module?
I was originally using the one provided with the Filebeat IIS module, then more or less rewrote it for Logstash. Here's the current pipeline:
filter {
  if "TMS" in [tags] {
    grok {
      break_on_match => true
      match => [
"message", "%{TIMESTAMP_ISO8601:iis.access.time} (?:-|%{NOTSPACE:iis.access.site_name}) (?:-|%{NOTSPACE:iis.access.server_name}) (?:-|%{IPORHOST:destination.address}) (?:-|%{WORD:http.request.method}) (?:-|%{NOTSPACE:url.path}) (?:-|%{NOTSPACE:url.query}) (?:-|%{NUMBER:destination.port:long}) (?:-|%{NOTSPACE:user.name}) (?:-|%{IP:netscaler_address}) (?:-|HTTP/%{NUMBER:http.version}) (?:-|%{NOTSPACE:user_agent.original}) (?:-|%{NOTSPACE:iis.access.cookie}) (?:-|%{NOTSPACE:http.request.referrer}) (?:-|%{NOTSPACE:destination.domain}) (?:-|%{NUMBER:http.response.status_code:long}) (?:-|%{NUMBER:iis.access.sub_status:long}) (?:-|%{NUMBER:iis.access.win32_status:long}) (?:-|%{NUMBER:http.response.body.bytes:long}) (?:-|%{NUMBER:http.request.body.bytes:long}) (?:-|%{NUMBER:temp.duration:long}) (?:-|%{IPORHOST:source.address})",
"message", "%{TIMESTAMP_ISO8601:iis.access.time} (?:-|%{NOTSPACE:iis.access.site_name}) (?:-|%{WORD:http.request.method}) (?:-|%{NOTSPACE:url.path}) (?:-|%{NOTSPACE:url.query}) (?:-|%{NUMBER:destination.port:long}) (?:-|%{NOTSPACE:user.name}) (?:-|%{IPORHOST:netscaler_address}) (?:-|%{NOTSPACE:user_agent.original}) (?:-|%{NOTSPACE:iis.access.cookie}) (?:-|%{NOTSPACE:http.request.referrer}) (?:-|%{NOTSPACE:destination.domain}) (?:-|%{NUMBER:http.response.status_code:long}) (?:-|%{NUMBER:iis.access.sub_status:long}) (?:-|%{NUMBER:iis.access.win32_status:long}) (?:-|%{NUMBER:http.response.body.bytes:long}) (?:-|%{NUMBER:http.request.body.bytes:long}) (?:-|%{NUMBER:temp.duration:long}) (?:-|%{IP:source.address})",
"message", "%{TIMESTAMP_ISO8601:iis.access.time} \\[%{IPORHOST:destination.address}\\]\\(http://%{IPORHOST:destination.address}\\) (?:-|%{WORD:http.request.method}) (?:-|%{NOTSPACE:url.path}) (?:-|%{NOTSPACE:url.query}) (?:-|%{NUMBER:destination.port:long}) (?:-|%{NOTSPACE:user.name}) \\[%{IPORHOST:netscaler_address}\\]\\(http://%{IPORHOST:netscaler_address}\\) (?:-|%{NOTSPACE:user_agent.original}) (?:-|%{NUMBER:http.response.status_code:long}) (?:-|%{NUMBER:iis.access.sub_status:long}) (?:-|%{NUMBER:iis.access.win32_status:long}) (?:-|%{NUMBER:temp.duration:long}) (?:-|%{IP:source.address})",
"message", "%{TIMESTAMP_ISO8601:iis.access.time} (?:-|%{IPORHOST:destination.address}) (?:-|%{WORD:http.request.method}) (?:-|%{NOTSPACE:url.path}) (?:-|%{NOTSPACE:url.query}) (?:-|%{NUMBER:destination.port:long}) (?:-|%{NOTSPACE:user.name}) (?:-|%{IP:netscaler_address}) (?:-|%{NOTSPACE:user_agent.original}) (?:-|%{NOTSPACE:http.request.referrer}) (?:-|%{NUMBER:http.response.status_code:long}) (?:-|%{NUMBER:iis.access.sub_status:long}) (?:-|%{NUMBER:iis.access.win32_status:long}) (?:-|%{NUMBER:temp.duration:long}) (?:-|%{IP:source.address})",
"message", "%{TIMESTAMP_ISO8601:iis.access.time} (?:-|%{IPORHOST:destination.address}) (?:-|%{WORD:http.request.method}) (?:-|%{NOTSPACE:url.path}) (?:-|%{NOTSPACE:url.query}) (?:-|%{NUMBER:destination.port:long}) (?:-|%{NOTSPACE:user.name}) (?:-|%{IP:netscaler_address}) (?:-|%{NOTSPACE:user_agent.original}) (?:-|%{NUMBER:http.response.status_code:long}) (?:-|%{NUMBER:iis.access.sub_status:long}) (?:-|%{NUMBER:iis.access.win32_status:long}) (?:-|%{NUMBER:temp.duration:long}) (?:-|%{IP:source.address})"
      ]
    }
    mutate {
      rename => [ "@timestamp", "[event][created]" ]
    }
    date {
      match => [ "[iis][access][time]", "yyyy-MM-dd HH:mm:ss" ]
    }
    mutate {
      remove_field => [ "[iis][access][time]" ]
    }
    urldecode {
      field => "[user_agent][original]"
    }
    useragent {
      source => "[user_agent][original]"
    }
    geoip {
      source => "[source][address]"
      target => "[source][geo]"
    }
    mutate {
      add_field => { "[event][kind]" => "event" }
    }
    if [netscaler_address] =~ /^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$/ {
      dns {
        nameserver => {
          address => ["192.168.1.6", "192.168.1.7"]
          search => ["domain.msft"]
        }
        reverse => [ "netscaler_address" ]
      }
    }
  }
}
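One thing worth noting for debugging: if none of the grok patterns above match, grok tags the event with _grokparsefailure by default. A sketch (not part of my running config) of a conditional that surfaces those events instead of silently dropping them on the floor:

```
output {
  # Sketch: print any event whose message matched none of the grok patterns,
  # so parsing failures are visible on the Logstash console
  if "_grokparsefailure" in [tags] {
    stdout { codec => rubydebug }
  }
}
```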
Here's the output section:
if "TMS" in [tags] {
  elasticsearch {
    ssl => true
    ssl_certificate_verification => true
    cacert => '/etc/logstash/logstash.pem'
    hosts => ["https://srvelkmaster1:9200", "https://srvelkmaster2:9200", "https://srvelkmaster3:9200"]
    index => "filebeat-rollover-alias"
  }
}
Pointing the index setting to a new alias works like a charm.
A couple of questions. Apologies, I am trying to understand the "steps and state" of the issue. You don't need to answer all of these, but I'm trying to understand where the problem is; perhaps the questions will lead you to find something as well.
What version of the stack are you on?
Did you start with the Filebeat IIS module originally and use the default pipeline or did you write your own ingest pipeline from the beginning?
What does your input section look like? Is it just the normal port-5044 beats input?
Is your intention to have the current config write to the normal Filebeat index via the index alias? For example, the alias filebeat-7.8.1, which then points to filebeat-7.8.1-2020.08.16-000007.
Make sure the alias is pointing to a writeable index ... somewhere along the line it may not be.
When you say a new alias, is that alias pointing to a new index or an existing writable index? What does that alias look like, and what index does it point to?
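As a sketch of how to check this (the alias name below matches your config; the concrete index name is illustrative), you can ask Elasticsearch which index the alias resolves to and whether it is flagged as the write index, e.g. from Kibana Dev Tools:

```
GET _alias/filebeat-rollover-alias

# Illustrative response shape; writes through the alias only succeed
# when exactly one backing index has "is_write_index": true
# {
#   "filebeat-7.8.1-2020.08.16-000007": {
#     "aliases": {
#       "filebeat-rollover-alias": { "is_write_index": true }
#     }
#   }
# }
```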
Are you getting any Logstash errors when you are using this pipeline? It may not be a parsing issue, but if you are trying to write into the Filebeat index with its existing mapping, perhaps something is breaking there.
If you put stdout { codec => rubydebug } in your output, do you see output?
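For example, a temporary debug output alongside your elasticsearch output might look like this (a sketch; the conditional mirrors the one in your config):

```
output {
  if "TMS" in [tags] {
    # Temporary: print each processed event to the Logstash console
    # to confirm events are reaching the output stage at all
    stdout { codec => rubydebug }
  }
}
```

If events show up here but never appear in Elasticsearch, the problem is on the indexing side (alias, mapping, or permissions) rather than in the filter.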
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.