Multiple Index Patterns

Hello Team,

I am using ELK 6.4.0 and Filebeat 6.4.0. My architecture is Filebeat -> Logstash -> Elasticsearch -> Kibana.

Currently I have two index patterns, metricbeat-* and filebeat-*, in Kibana. In the metricbeat index I am getting our application logs as well as the nginx, syslog, and auth logs.

Now we want to create a separate index for each log type so we can search our logs more easily.

I have tried the config below in Logstash but with no success. Here is my Logstash config:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/pki/tls/ca.crt"]
    ssl_certificate => "/etc/pki/tls/server.crt"
    ssl_key => "/etc/pki/tls/server.key"
    ssl_verify_mode => "peer"
    tls_min_version => "1.2"
  }
}
filter {
  grok {
    match => {
      "message" => [
        "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}",
        "\I\,\s\[(?<date-time>[\d\-\w\:\.]+)\s\#(?<pid>\d+)\]\s+(?<loglevel>\w+)\s\-+\s\:\s\[(?<request-id>[\d\w\-]+)\]\s(?<method>[\w\s]+)\s\"(?<path>[\w\/\.]+)\"\s(?<mlp-message>.*)",
        "\I\,\s\[(?<date-time>[\d\-\w\:\.]+)\s\#(?<pid>[\d]+)\]\s\s(?<loglevel>[\w]+)\s\--\s\:\s\[(?<request-id>[\d\-\w]+)\]\s(?:[cC]urrent\s)?[dD]evice[\s:]+(?<device-id>[\w\s\:]+)",
        "\I\,\s\[(?<date-time>[\d\-\w\:\.]+)\s\#(?<pid>\d+)\]\s+(?<loglevel>\w+)\s\-+\s\:\s\[(?<request-id>[\d\w\-]+)\]\s(?<mlp-message>.*)",
        "\w\,\s\[(?<date-time>[\w\-\:\.]+)\s\#(?<pid>\d+)\]\s+(?<loglevel>\w+)\s(?<mlp-message>.*)"
      ]
    }
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{host}" ]
  }
}
output {

  if [source] in ["/var/log/nginx/access.log", "/var/log/nginx/error.log"] {
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      sniffing => true
#     manage_template => false
      index => "nginx-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else if [source] =~ "/var/log/syslog"{
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      sniffing => true
#     manage_template => false
      index => "syslog-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else if [source] =~ "/var/log/auth.log"{
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      sniffing => true
#     manage_template => false
      index => "access-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }

  }
  else {
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      sniffing => true
      manage_template => false
#     index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      index => "application-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}

I am using the Filebeat system and nginx modules.
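Since those modules add fileset.module and fileset.name to each event, I assume I could also route on those fields instead of the source path. An untested sketch of that idea (same hosts as above, branch names are just illustrative):

output {
  if [fileset][module] == "nginx" {
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      index => "nginx-%{+YYYY.MM.dd}"
    }
  } else if [fileset][name] == "syslog" {
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
    }
  } else if [fileset][name] == "auth" {
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      index => "access-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      index => "application-%{+YYYY.MM.dd}"
    }
  }
}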

Please help me to troubleshoot the issue.

Thanks in advance.

In what way is it not working? What does an example incorrectly processed event look like (copy/paste from Kibana's JSON tab)?

Hello Magnus,

Thank you for your response.

After applying that configuration and restarting the Logstash service, no index patterns were created in Kibana. I even tried to create them manually, but with no success.

Please find the JSON tab output for a syslog event:

{
  "_index": "filebeat-6.4.0-2018.09.08",
  "_type": "doc",
  "_id": "_U0vt2UBnaXLlJpwLeRE",
  "_version": 1,
  "_score": null,
  "_source": {
    "input": {
      "type": "log"
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "fileset": {
      "module": "system",
      "name": "syslog"
    },
    "syslog_program": "do-agent",
    "source": "/var/log/syslog",
    "syslog_message": "2018/09/08 03:17:22 Checking for newer version of do-agent",
    "received_from": "{\"name\":\"xyz"}",
    "received_at": "2018-09-08T03:17:26.475Z",
    "prospector": {
      "type": "log"
    },
    "syslog_pid": "1725",
    "@timestamp": "2018-09-08T03:17:26.475Z",
    "@version": "1",
    "message": "Sep  8 03:17:22 xyz do-agent[1725]: 2018/09/08 03:17:22 Checking for newer version of do-agent",
    "beat": {
      "name": "xyz",
      "hostname": "xyz",
      "version": "6.4.0"
    },
    "syslog_timestamp": "Sep  8 03:17:22",
    "offset": 20739,
    "syslog_hostname": "xyz",
    "host": {
      "name": "xyz"
    }
  },
  "fields": {
    "@timestamp": [
      "2018-09-08T03:17:26.475Z"
    ]
  },
  "highlight": {
    "source": [
      "@kibana-highlighted-field@/var/log/syslog@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1536376646475
  ]
}

Please let me know where I need to make changes in my Logstash configuration.

Note: I have replaced the hostname with xyz.

Thanks.

After applying that configuration and restarting the Logstash service, no index patterns were created in Kibana. I even tried to create them manually, but with no success.

It's not entirely clear what you mean. Logstash creates indexes in Elasticsearch, but the index patterns that you see in Kibana are created by hand.
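If you want to check whether Logstash is creating the indices at all, one way is to list them directly in Elasticsearch (adjust the host to your setup):

curl 'http://10.133.58.12:9200/_cat/indices/nginx-*,syslog-*,access-*,application-*?v'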

Please find the JSON tab output for a syslog event:

That document can't possibly have been created by the configuration you posted (because the index name doesn't match the configuration of any of your elasticsearch outputs). I can't help you if you don't post configuration that's consistent with the evidence.

Hello Magnus,

Thank you for your response.

I may have been unable to provide data that is useful for understanding the problem.

However, I have now fixed the issue by defining a type field in the Filebeat inputs and using that field in the Logstash output conditionals.
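Roughly, it looks like the sketch below (the log_type field name, its values, and the paths are placeholders, not my exact config):

# filebeat.yml (sketch; "log_type" and the path are placeholders)
filebeat.inputs:
  - type: log
    paths:
      - /path/to/application/*.log
    fields:
      log_type: application
    fields_under_root: true    # put log_type at the top level of the event

# logstash output (sketch)
output {
  if [log_type] == "application" {
    elasticsearch {
      hosts => ["10.133.58.12:9200"]
      index => "application-%{+YYYY.MM.dd}"
    }
  }
  # ...one branch per log_type value...
}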

Thanks.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.