Need to define a different index name

I am running an ELK cluster on version 5.6.16.

My setup is as below:
Filebeat --> Logstash --> Elasticsearch (3 servers in a cluster) --> Kibana

I have a few servers where many web vhosts are configured. On one such server, I have configured different doc_types (abc-access, efg-access & xyz-access) in filebeat.yml and defined the index abc in the Filebeat configuration. So all logs coming through that server go into the same index, i.e.

abc

On the same server, I have defined a new doc_type, i.e. mno-access, and I want everything matching that doc_type to go to a new index.

i.e. mno

I searched a lot and didn't find anything saying we can define a new index name in the filebeat.yml configuration.

So I decided to handle this through Logstash. For that, I set up a filter in Logstash and, using an if condition, tried to define a new index name for events matching the mno-access doc_type. But the _index meta-field doesn't change. I also tried the overwrite and remove_field options, but that didn't work. Kindly help: how do I define a different index name here?

  • Logstash output section:

output {
  if [type] =~ "mno-access" {
    elasticsearch {
      hosts => ["192.168.2.121:9200", "192.168.2.122:9200", "192.168.2.123:9200"]
      sniffing => true
      manage_template => true
      index => "mno-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else {
    elasticsearch {
      hosts => ["192.168.2.121:9200", "192.168.2.122:9200", "192.168.2.123:9200"]
      sniffing => true
      manage_template => true
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
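
For reference, the same routing can also be done with a single elasticsearch output by putting the target index name into @metadata in a filter and reading it back in the output; a minimal sketch, assuming the Filebeat document_type arrives in the type field and using an illustrative [@metadata][target_index] field name:

filter {
  if [type] == "mno-access" {
    # @metadata fields are visible inside the pipeline but are never indexed
    mutate { add_field => { "[@metadata][target_index]" => "mno-%{+YYYY.MM.dd}" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" } }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.2.121:9200", "192.168.2.122:9200", "192.168.2.123:9200"]
    sniffing => true
    manage_template => true
    index => "%{[@metadata][target_index]}"
    document_type => "%{[@metadata][type]}"
  }
}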

Have you solved this?

Not yet. I am still getting the abc index name for doc_type mno-access.

Can you post your filebeat conf?

PS: We can't remove the index name defined in the Filebeat configuration, as it was set up a year ago and many scripts already depend on that name. That's why we are looking for a solution on the Logstash side.

Filebeat configuration :

filebeat.prospectors:

- input_type: log
  paths:
    - /var/log/nginx/www.access.log
  document_type: abc-access

- input_type: log
  paths:
    - /var/log/nginx/akbingbot_website.access.log
  document_type: mno-access

- input_type: log
  paths:
    - /var/log/nginx/akamai-wap.access.log
  document_type: xyz-access

- input_type: log
  paths:
    - /var/log/php-fpm/wap-error.log
  include_lines: ['Fatal error']
  document_type: wap-fpm-error

- input_type: log
  paths:
    - /var/log/php-fpm/www-error.log
  include_lines: ['Fatal error']
  document_type: php-fpm-error

output.logstash:
  hosts: ["192.168.2.122:5044", "192.168.2.121:5044"]
  loadbalance: true
  index: abc
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt", "/etc/pki/tls/certs/logstash-forwarder_122.crt"]

Try to use fields instead of document_type:

fields:
  document_type: mno-access
fields_under_root: true

In the Logstash conf, use this in the if:

if "mno-access" in [fields][document_type] ...

Is my understanding correct?

- input_type: log
  paths:
    - /var/log/nginx/www.access.log
  document_type: abc-access

- input_type: log
  paths:
    - /var/log/nginx/akbingbot_website.access.log
      fields:
      document_type: mno-access
      fields_under_root: true

- input_type: log
  paths:
    - /var/log/nginx/akamai-wap.access.log
  document_type: xyz-access

Logstash config :

output {

  # if [type] =~ "mno-access" {

  if "mno-access" in [field][document_type] {
    elasticsearch {
      hosts => ["192.168.2.121:9200", "192.168.2.122:9200", "192.168.2.123:9200"]
      sniffing => true
      manage_template => true
      index => "mno-%{+YYYY.MM.dd}"
      document_type => "mno-access"
    }
  }
  else {
    elasticsearch {
      hosts => ["192.168.2.121:9200", "192.168.2.122:9200", "192.168.2.123:9200"]
      sniffing => true
      manage_template => true
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}

Yes... is it working?

Nope. I got an error in filebeat.yml.

Jun 20 14:21:49 12-108-IDC filebeat: filebeat2019/06/20 08:51:49.985148 beat.go:346: CRIT Exiting: error loading config file: yaml: line 28: did not find expected '-' indicator
Jun 20 14:21:49 12-108-IDC filebeat: Exiting: error loading config file: yaml: line 28: did not find expected '-' indicator
Jun 20 14:21:49 12-108-IDC systemd: filebeat.service: main process exited, code=exited, status=1/FAILURE

- input_type: log
  paths:
    - /var/log/nginx/akbingbot_website.access.log ### - Line no 28.
      fields:
      document_type: mno-access
      fields_under_root: true

The Filebeat service started working after making the changes below, but the logs are not reaching Kibana.

- input_type: log
  paths:
    - /var/log/nginx/akbingbot_website.access.log
  fields:
    document_type: mno-access
  fields_under_root: true

I have enabled debug on Filebeat and found that the logs are getting pushed to the Logstash server, but they don't get inserted into Elasticsearch.

2019/06/20 13:01:18.117779 spooler.go:89: DBG  Flushing spooler because of timeout. Events flushed: 4
2019/06/20 13:01:18.118206 client.go:214: DBG  Publish: {
  "@timestamp": "2019-06-20T13:01:13.118Z",
  "beat": {
    "hostname": "12-108-IDC.justdial.com",
    "name": "12-108-IDC.justdial.com",
    "version": "5.6.16"
  },
  "document_type": "mno-access",
  "input_type": "log",
  "message": "40.77.25.145, 67.220.142.5 - - [04/Jun/2019:23:02:05 +0530] \"GET /Chandigarh/Hotels HTTP/1.1\" 200 314832 \"-\" \"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36 (BingLocalSearch)\" \"REMOTE_ADDR : 67.220.142.5\" \"TRUE_CLIENT : 40.77.25.145\" \"AKAXFF : -\" www.justdial.com 0.491 0.393 US \"AKBINGBT : yes\" .",
  "offset": 47470,
  "source": "/var/log/nginx/akbingbot_website.access.log",
  "type": "log"
}
2019/06/20 13:01:18.118349 client.go:214: DBG  Publish: {
  "@timestamp": "2019-06-20T13:01:13.118Z",
  "beat": {
    "hostname": "12-108-IDC.justdial.com",
    "name": "12-108-IDC.justdial.com",
    "version": "5.6.16"
  },
  "document_type": "mno-access",
  "input_type": "log",
  "message": "40.77.25.145, 67.220.142.5 - - [04/Jun/2019:23:02:08 +0530] \"GET /Delhi/mtnl-in-jamia-nagar HTTP/1.1\" 302 5 \"-\" \"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36 (BingLocalSearch)\" \"REMOTE_ADDR : 67.220.142.5\" \"TRUE_CLIENT : 40.77.25.145\" \"AKAXFF : -\" www.justdial.com 0.542 0.542 US \"AKBINGBT : yes\" .",
  "offset": 47833,
  "source": "/var/log/nginx/akbingbot_website.access.log",
  "type": "log"
}
2019/06/20 13:01:18.118550 client.go:214: DBG  Publish: {
  "@timestamp": "2019-06-20T13:01:13.119Z",
  "beat": {
    "hostname": "12-108-IDC.justdial.com",
    "name": "12-108-IDC.justdial.com",
    "version": "5.6.16"
  },
  "document_type": "mno-access",
  "input_type": "log",
  "message": "40.77.25.145, 69.174.30.168 - - [04/Jun/2019:23:02:09 +0530] \"GET /Delhi/search?q=mtnl HTTP/1.1\" 200 285678 \"-\" \"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36 (BingLocalSearch)\" \"REMOTE_ADDR : 69.174.30.168\" \"TRUE_CLIENT : 40.77.25.145\" \"AKAXFF : -\" www.justdial.com 0.612 0.612 US \"AKBINGBT : yes\" .",
  "offset": 48197,
  "source": "/var/log/nginx/akbingbot_website.access.log",
  "type": "log"
}
2019/06/20 13:01:18.118634 output.go:109: DBG  output worker: publish 3 events
2019/06/20 13:01:18.118661 context.go:93: DBG  forwards msg with attempts=-1
2019/06/20 13:01:18.118733 context.go:98: DBG  message forwarded
2019/06/20 13:01:18.118754 context.go:138: DBG  events from worker worker queue
2019/06/20 13:01:18.120342 sync.go:96: DBG  3 events out of 3 events sent to logstash host 192.168.2.122:5044:10200. Continue sending

Can anyone help out here?

If you have set fields_under_root: true in filebeat.yml, then you should be testing [document_type] in your conditional.
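
With fields_under_root: true kept in filebeat.yml, the output section would then look something like this (a minimal sketch based on the config above):

output {
  # document_type sits at the event root because of fields_under_root: true
  if [document_type] == "mno-access" {
    elasticsearch {
      hosts => ["192.168.2.121:9200", "192.168.2.122:9200", "192.168.2.123:9200"]
      sniffing => true
      manage_template => true
      index => "mno-%{+YYYY.MM.dd}"
      document_type => "mno-access"
    }
  }
  else {
    elasticsearch {
      hosts => ["192.168.2.121:9200", "192.168.2.122:9200", "192.168.2.123:9200"]
      sniffing => true
      manage_template => true
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}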
