Filebeat -> Logstash -> Kibana


(Robin) #1

Hi,

I send my Apache logs through Filebeat -> Logstash.

In Kibana I see this:

{ "@timestamp": "2017-05-22T09:23:33.000Z", "apache2": { "access": { "referrer": "-", "response_code": "200", "remote_ip": "80.12.110.201", "geoip": { "timezone": "Europe/Paris", "ip": "80.12.110.201", "latitude": 48.8582, "country_code2": "FR", "country_name": "France", "continent_code": "EU", "country_code3": "FR", "location": [ 2.3387000000000002, 48.8582 ], "longitude": 2.3387000000000002 }, "method": "GET", "user_name": "-", "http_version": "1.1", "body_sent": { "bytes": "67017" }, "url": "/", "user_agent": 

And I want to see my log like this:

apache2.access.remote_ip:80.12.110.201 apache2.access.geoip.timezone:Europe/Paris apache2.access.geoip.ip:80.12.110.201 apache2.access.geoip.latitude:48.8582 apache2.access.geoip.country_code2:FR apache2.access.geoip.country_name:France apache2.access.geoip.continent_code:EU apache2.access.geoip.country_code3:FR apache2.access.geoip.location:2.3387000000000002, 48.8582
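As far as I can tell, one way to get dotted top-level keys like that would be a ruby filter that walks the nested apache2 hash and re-emits every leaf under a dotted name. A minimal, untested sketch (the flatten helper and its recursion are my own assumption, not part of any module):

filter {
   ruby {
      # walk the nested [apache2] hash and re-emit every leaf as a
      # literal top-level key like "apache2.access.remote_ip"
      code => '
         flatten = lambda do |prefix, value|
            if value.is_a?(Hash)
               value.each { |k, v| flatten.call("#{prefix}.#{k}", v) }
            else
               event.set(prefix, value)
            end
         end
         if event.get("apache2")
            flatten.call("apache2", event.get("apache2"))
            event.remove("apache2")
         end
      '
   }
}

Note that event.set with a dotted string creates a literal top-level key, but Elasticsearch maps dotted field names back into an object hierarchy anyway, which is presumably why the nested form is the default.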

Here are my Filebeat and Logstash config files:

filebeat:

filebeat.prospectors:
- input_type: log
  # becomes the event's "type" field, seen as [type] in Logstash
  document_type: apache
  fields_under_root: true
  paths:
    - /var/log/apache2/access.log*
  #document_type: json
  #json.message_key: log
  #json.keys_under_root: true
  #json.overwrite_keys: true
  # skip already-rotated, compressed logs
  exclude_files: [".gz$"]

output.logstash:
  hosts: ["my-ip:5044"]

logstash:

input {
  beats {
    port => 5044
  }
}

filter {
   grok {
      match => { "message" => ["%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \[%{HTTPDATE:[apache2][access][time]}\] \"%{WORD:[apache2][access][method]} %{DATA:[apache2][access][url]} HTTP/%{NUMBER:[apache2][access][http_version]}\" %{NUMBER:[apache2][access][response_code]} %{NUMBER:[apache2][access][body_sent][bytes]}( \"%{DATA:[apache2][access][referrer]}\")?( \"%{DATA:[apache2][access][agent]}\")?",
        "%{IPORHOST:[apache2][access][remote_ip]} - %{DATA:[apache2][access][user_name]} \\[%{HTTPDATE:[apache2][access][time]}\\] \"-\" %{NUMBER:[apache2][access][response_code]} -" ] }
      remove_field => "message"
   }
   mutate {
      add_field => { "read_timestamp" => "%{@timestamp}" }
      remove_field => [ "timestamp", "beat", "fields", "input_type", "tags", "count", "@version", "log", "offset", "type" ]
   }
   date {
      match => [ "[apache2][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
      remove_field => "[apache2][access][time]"
   }
   useragent {
      source => "[apache2][access][agent]"
      target => "[apache2][access][user_agent]"
      remove_field => "[apache2][access][agent]"
   }
   geoip {
      source => "[apache2][access][remote_ip]"
      target => "[apache2][access][geoip]"
   }
}

output {
  elasticsearch {
    hosts => "localhost"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
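To see what the filters actually emit before anything reaches Elasticsearch, a second output with the rubydebug codec can be added temporarily; a standard debugging sketch:

output {
  stdout { codec => rubydebug }
}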

How can I do this?


(Andrew Kroh) #2

I don't understand what you want to have indexed into Elasticsearch. Is the above one field that is a string? What's wrong with the format you have now? It's pretty close to what most people use.


(system) #3

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.