Bluecoat Grok filter

Hello,
I am running the ELK stack on a server and want to parse the Bluecoat Proxy logs.
I have created a grok filter, and it works when I test it in the Grok Debugger (http://grokdebug.herokuapp.com/), but when I add the filter to logstash.json and run it, it doesn't create the fields in Kibana for me.

The full log line just ends up in the message field.

This is my logstash.json file

input {
  beats {
    port => 5044
    type => "log"
  }
}

filter {
  if ([message] =~ /^#/) {
    drop { }
  }
  if [type] == "bluecoat" {
    grok {
      match => { "message" => "^%{TIMESTAMP_ISO8601:date} %{NUMBER:time_taken} %{IP:c_ip} %{USER:cs_username} %{NOTSPACE:cs_auth_group} %{NOTSPACE:s_supplier_name} %{NOTSPACE:s_supplier_ip} %{NOTSPACE:s_supplier_country} %{NOTSPACE:s_supplier_failures} %{NOTSPACE:x_exception_id} %{NOTSPACE:sc_filter_result} %{QUOTEDSTRING:cs_categories} %{NOTSPACE:cs_Referer} %{NOTSPACE:cs_status} %{NOTSPACE:s_action} %{NOTSPACE:cs_method} %{NOTSPACE:rs_Content-Type} %{NOTSPACE:cs_uri_scheme} %{NOTSPACE:cs_host} %{NOTSPACE:cs_uri_port} %{NOTSPACE:cs_uri_path} %{NOTSPACE:cs_uri_query} %{NOTSPACE:cs_uri_extension} %{QUOTEDSTRING:cs_User_Agent} %{IP:s_ip} %{NUMBER:sc_bytes} %{NUMBER:cs_bytes} %{NOTSPACE:x_virus_id} %{QUOTEDSTRING:x_bluecoat_application_name} %{QUOTEDSTRING:x_bluecoat_application_operation} %{NUMBER:cs_threat_risk} %{NOTSPACE:x_bluecoat_transaction_uuid} %{NOTSPACE:x_icap_reqmod_header_XICAPMetadata} %{NOTSPACE:x_icap_respmod_header_XICAPMetadata} %{NOTSPACE:cs_auth_type} %{NOTSPACE:x_auth_credential_type}" }
    }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {}
}
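
Since the fields never show up, one quick sanity check before blaming Kibana is to validate the pipeline file; Logstash 5.x can test a config without starting the pipeline. A sketch, assuming Logstash is run from its install directory and the pipeline file is the logstash.json above:

# Sketch: parse and validate the pipeline config, then exit (Logstash 5.x).
bin/logstash -f logstash.json --config.test_and_exit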

And this is the Filebeat config related to this file:

- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\LOGS\BC\*.log.gz
  document_type: bluecoat
  encoding: utf-8
  scan_frequency: 5s

I don't understand why it ignores the new fields... is this a Kibana configuration issue?

I suspect type => "log" overrides the type assignment in the Filebeat configuration.
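
In practice that suggestion amounts to dropping the type option from the beats input so whatever Filebeat sets (document_type: bluecoat) is left alone. A minimal sketch (port taken from the config above):

input {
  beats {
    # No "type" set here, to rule out any chance of it clobbering the
    # type that filebeat.yml assigns via document_type: bluecoat.
    port => 5044
  }
}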

Thanks (tack)! I removed the line from the Logstash config file, restarted Logstash, and created a new log, but there are still no new fields available :frowning:

What does an event look like then? Copy/paste from Kibana's JSON tab so we can see the raw JSON.

I took one event:

{
  "_index": "filebeat-2017.08.31",
  "_type": "doc",
  "_id": "AV44TaLGzwkFvvGLDiXa",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2017-08-31T12:39:23.355Z",
    "beat": {
      "hostname": "SERVER01",
      "name": "SERVER01",
      "version": "5.5.2"
    },
    "input_type": "log",
    "message": "2017-08-30 08:25:01 78 192.168.10.1 user123 DOMAIN\\InternetUsers sub.domain.com 192.168.11.1 Sweden - - OBSERVED \"Web Ads/Analytics\" - 200 TCP_NC_MISS GET image/gif http sub.domain.com 80 /src/media/cookie.aspx ?xid=JUNga1BqQ7j9twX7R_y1vJUi aspx \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; Trident/7.0; rv:11.0) like Gecko\" 192.168.12.11 383 761 - \"none\" \"none\" 2 80c18e0c98d2dd54-000000007012b0b5-0000000059a6765c - \"{ %22expect_sandbox%22: false }\" Digest Kerberos",
    "offset": 4278,
    "source": "D:\\LOGS\\BC\\SG_main__7.log.gz",
    "type": "bluecoat"
  },
  "fields": {
    "@timestamp": [
      1504183163355
    ]
  },
  "sort": [
    1504183163355
  ]
}

I don't know what's going on here. Have you actually configured Filebeat to send to Logstash rather than directly to Elasticsearch?

Hi,
No, it doesn't seem like it. This was the config I had in filebeat.yml:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
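
Switching to the Logstash output presumably just means flipping the comments, so that output.logstash is active and output.elasticsearch is not, roughly like this (a sketch; host and port are taken from the commented-out lines above):

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]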

I switched that around, so output.logstash is now active (and output.elasticsearch is not), but now new files don't arrive in Kibana. Do I need to configure any special input in the Logstash config?

It looks like this at the moment:

input {
  beats {
    port => 5044
    type => "log"
  }
}

However, the index setting from the output section (index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}") does seem to have been applied when I look at events in Kibana; _index has the value "filebeat-2017.08.31", for example.

What am I missing?

Logstash has no idea whether the files Filebeat should read are new or old, so if old data arrives in Logstash and Elasticsearch as expected but new files aren't picked up, then you have a Filebeat problem.
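
To make that concrete: which files get discovered and shipped is decided entirely on the Filebeat side. A sketch of the settings involved, reusing the prospector from earlier in the thread (the registry_file line is an assumption, shown only to illustrate where Filebeat remembers what it has already read):

filebeat.prospectors:
- input_type: log
  paths:
    - D:\LOGS\BC\*.log.gz   # glob that decides which files are picked up
  scan_frequency: 5s        # how often the path is rescanned for new files
  document_type: bluecoat

# Filebeat tracks files it has already harvested (and how far it got) in its
# registry file, so previously shipped data is not resent; the location can be
# changed with filebeat.registry_file.
#filebeat.registry_file: registry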

I solved this: I updated the Beat agent config and also uploaded the Beat template to Elasticsearch, which I had missed. I am new to this, but I'm trying to convince my workplace not to buy Splunk if we can solve our needs with the ELK stack.
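
For anyone hitting the same thing: when Filebeat ships to Logstash instead of directly to Elasticsearch, the Filebeat index template is not loaded automatically, so it has to be installed by hand. A sketch of one way to do that with the template file that ships with Filebeat 5.x (host and file path are assumptions for this setup):

# Sketch: upload the index template that ships with Filebeat 5.x.
curl -XPUT 'http://localhost:9200/_template/filebeat' -H 'Content-Type: application/json' -d@filebeat.template.json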

A lot of help in the community, and I appreciate that I get assistance when I get stuck.

Many thanks (stort tack), Magnus!
