Elasticsearch is not creating an index even though there are no errors

I have installed Elastic Stack 6.2.4, with Filebeat on one node and Elasticsearch, Kibana, and Logstash together on another node. Now I am trying to send a server's logs to Logstash via Filebeat, using the config file below, located in /etc/logstash/conf.d:

input {
  beats {
    port => "5044"
    host => "xxxxxxxx"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { hosts => ["xxxxxxxx:9200"]
    hosts => "xxxxxxxxx:9200"
    user => "elastic"
    password => "changeme"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I am getting no errors in any of the service logs, and Filebeat is also sending logs to my Logstash, but I cannot see any indices being created by Elasticsearch:

http://xxxxxxxxx:9200/_cat/indices?v

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

Below are my Logstash logs:

[INFO ] 2018-11-26 06:24:47.959 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"10.1.20.140:5044"}
[INFO ] 2018-11-26 06:24:48.025 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2cb14725@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[INFO ] 2018-11-26 06:24:48.031 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2018-11-26 06:24:48.047 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}

And my Filebeat logs:

2018-11-26T12:29:20.164Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":20},"total":{"ticks":20,"time":36,"value":20},"user":{"ticks":10,"time":16}},"info":{"ephemeral_id":"58504565-c112-4c4e-81c8-f1d29149bbf8","uptime":{"ms":240008}},"memstats":{"gc_next":4194304,"memory_alloc":1624952,"memory_total":4349504}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":16.88,"15":17.41,"5":17.2,"norm":{"1":2.11,"15":2.1763,"5":2.15}}}}}}
2018-11-26T12:29:50.165Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":22},"total":{"ticks":30,"time":38,"value":30},"user":{"ticks":10,"time":16}},"info":{"ephemeral_id":"58504565-c112-4c4e-81c8-f1d29149bbf8","uptime":{"ms":270007}},"memstats":{"gc_next":4194304,"memory_alloc":1732568,"memory_total":4457120}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":17.26,"15":17.42,"5":17.26,"norm":{"1":2.1575,"15":2.1775,"5":2.1575}}}}}}

Is something wrong with the Logstash conf file? Am I missing something?

Why do you have two hosts fields in the output? Also remove the document_type parameter, as it is deprecated, and try changing the index name to something simpler for now, like:

index => "filebeat-%{+YYYY.MM.dd}"

Also, I assume you have a paid subscription, since you are using user and password? If so, are you using https in the hosts output? X-Pack now requires TLS by default.

How did you determine that Filebeat is actually able to send data to Logstash?
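One quick way to check, for example, is to query Logstash's monitoring API (a sketch, assuming the API is running on its default port 9600 on the Logstash node) and see whether the event counters are actually moving:

```shell
# Ask Logstash how many events it has received (in), filtered, and emitted (out).
# If "in" stays at 0, Filebeat is not actually delivering anything.
curl -s 'http://localhost:9600/_node/stats/events?pretty'
```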

Yes, I changed them, but the index is still not created:

input {
  beats {
    port => "5044"
    host => "10.1.20.140"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["10.1.20.140:9200"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}

Is there still anything I am missing?

I am getting the Filebeat logs below, which I took to mean it is sending to Logstash:

2018-11-26T12:29:20.164Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":20},"total":{"ticks":20,"time":36,"value":20},"user":{"ticks":10,"time":16}},"info":{"ephemeral_id":"58504565-c112-4c4e-81c8-f1d29149bbf8","uptime":{"ms":240008}},"memstats":{"gc_next":4194304,"memory_alloc":1624952,"memory_total":4349504}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":16.88,"15":17.41,"5":17.2,"norm":{"1":2.11,"15":2.1763,"5":2.15}}}}}}
2018-11-26T12:29:50.165Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":22},"total":{"ticks":30,"time":38,"value":30},"user":{"ticks":10,"time":16}},"info":{"ephemeral_id":"58504565-c112-4c4e-81c8-f1d29149bbf8","uptime":{"ms":270007}},"memstats":{"gc_next":4194304,"memory_alloc":1732568,"memory_total":4457120}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":17.26,"15":17.42,"5":17.26,"norm":{"1":2.1575,"15":2.1775,"5":2.1575}}}}}}

Don't these mean that?

Those look like monitoring metrics to me, and they seem to show that no data has been sent: the harvester counts and pipeline events are all 0.

OK!! I will check the config on the Filebeat node. But the index should get created even before that, right?

The index usually gets created the first time data is indexed into it, so if no data is reaching Logstash, no index will be created.
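You can see this for yourself (a sketch, assuming Elasticsearch is reachable on its default port and automatic index creation has not been disabled): index one document by hand and the index appears.

```shell
# Indexing the first document auto-creates the index.
curl -s -H 'Content-Type: application/json' \
  -XPOST 'http://localhost:9200/test-index/doc' \
  -d '{"message": "hello"}'

# The new index now shows up in the listing.
curl -s 'http://localhost:9200/_cat/indices?v'
```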

Below is the Filebeat config I am using on the Filebeat node:

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/rhsm/*.log
    - /var/log/filebeat/*
    - /var/log/secure
    - /var/log/messages

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.1.20.140:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  template.name: "filebeat"
  template.path: "filebeat.template.json"
  template.overwrite: false
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

The service is also running, there are no errors in the Filebeat logs, and this node is able to ping the Elasticsearch/Logstash node.

This is what I am getting when I restart the Filebeat service on the Filebeat node:

2018-11-26T20:44:09.940Z        INFO    registrar/registrar.go:110      Loading registrar data from /var/lib/filebeat/registry
2018-11-26T20:44:09.941Z        INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-11-26T20:44:09.941Z        INFO    registrar/registrar.go:121      States Loaded from registrar: 80
2018-11-26T20:44:09.941Z        WARN    beater/filebeat.go:261  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-11-26T20:44:09.941Z        INFO    crawler/crawler.go:48   Loading Prospectors: 1
2018-11-26T20:44:09.941Z        INFO    crawler/crawler.go:82   Loading and starting Prospectors completed. Enabled prospectors: 0
2018-11-26T20:44:09.941Z        INFO    cfgfile/reload.go:127   Config reloader started
2018-11-26T20:44:09.941Z        INFO    cfgfile/reload.go:219   Loading of config files completed.
2018-11-26T20:44:39.943Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":12},"total":{"ticks":10,"time":18,"value":10},"user":{"ticks":0,"time":6}},"info":{"ephemeral_id":"a18e08f9-43d2-4af2-97d9-a578de482e50","uptime":{"ms":30007}},"memstats":{"gc_next":4473924,"memory_alloc":2933656,"memory_total":2933656,"rss":11927552}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":8},"load":{"1":17.87,"15":17.64,"5":17.72,"norm":{"1":2.2338,"15":2.205,"5":2.215}}}}}}

It looks like you do not have any enabled prospectors (enabled: false in your config, and the startup log says "Enabled prospectors: 0"), which would explain why no logs are being collected.
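For reference, a minimal sketch of the relevant fragment of filebeat.yml with the prospector switched on:

```yaml
filebeat.prospectors:
- type: log
  # Was false, so the prospector was ignored
  # ("Enabled prospectors: 0" in the startup log).
  enabled: true
  paths:
    - /var/log/secure
    - /var/log/messages
```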

Bang on, that worked! Now I can go ahead and integrate with X-Pack for ML. Thanks a lot for the help.

One thing: if I want to run Logstash using the custom config file, I am using

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf

If I run it as a daemon, it doesn't pick up the file. Do I need to mention it anywhere in my config files?
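In case it helps: when Logstash 6.x runs as a service, it ignores -f and instead reads /etc/logstash/pipelines.yml, which by default points at /etc/logstash/conf.d/*.conf, so a file in that directory should normally be picked up automatically. To target the file explicitly, a sketch (assuming the default install layout):

```yaml
# /etc/logstash/pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/syslog.conf"
```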

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.