Filebeat not working

@andrewkroh

Step 1:

I have installed Elasticsearch 2.3, Kibana 4.5, Logstash 2.3, and Shield 2.3 on a single server.

Using the esusers command line I have created one admin user with the admin role and another user with the user role.
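I.e. something along these lines, with the real usernames and passwords redacted:

# run from the Elasticsearch install directory (path may differ for package installs);
# user names and passwords below are placeholders
bin/shield/esusers useradd es_admin -r admin -p xxxx
bin/shield/esusers useradd kibana_user -r user -p xxxx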

Right now I am able to log in to Kibana with the user credentials I created.

Step 2:

Loaded the Kibana dashboards
Loaded the Filebeat index template in Elasticsearch
Installed Filebeat and configured it on the client server

Step 3: Tried to test the Filebeat installation and I get the error below.

curl -XGET "http://ipaddress:9200/filebeat-*/_search?pretty" -u admin -p
Enter host password for user 'admin':xxx
{
  "error" : {
    "root_cause" : [ {
      "type" : "index_not_found_exception",
      "reason" : "no such index",
      "index" : "[filebeat-*]"
    } ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "index" : "[filebeat-*]"
  },
  "status" : 404
}

Please help me fix this issue.

Can you share your Filebeat and Logstash configurations and any logs from those services?

Since you are using Shield, you will need to ensure that you provide the appropriate credentials to Logstash's elasticsearch output.
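A minimal sketch of what that could look like, assuming a Shield user (here called "logstash_writer") with write access to the filebeat-* indices; the user, password, and host are placeholders:

output {
  elasticsearch {
    hosts => ["elkserver_private_ip:9200"]
    user => "logstash_writer"      # placeholder Shield user with write privileges
    password => "xxxx"             # placeholder password
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}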

@andrewkroh

ELK server configuration of input, filter and output

[ELKserver]# pwd
/etc/logstash/conf.d

In /etc/logstash/conf.d, please find the conf files and their configuration:

[ELKserver]# ll

total 16
-rw-r--r--. 1 root root  185 Jun 26 05:52 02-beats-input.conf
-rw-r--r--. 1 root root  456 Jun 26 05:54 10-syslog-filter.conf
-rw-r--r--. 1 root root  214 Jul  7 06:01 30-elasticsearch-output.conf
-rw-r--r--. 1 root root 1002 Jul  6 14:18 logstash.conf

[ELKserver]# vi 02-beats-input.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "absolute path of certificate"
    ssl_key => "absolute path of key"
  }
}

vi 10-syslog-filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

vi 30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["elkserver_private_ip:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

vi logstash.conf

input {
    beats {
        host => "0.0.0.0"
        port => "5400"
    }
}
filter {
    if [type] == "ERROR" {
        grok {
            match => { 'message' => '%{IPORHOST:clientip} %{USER:ident} %{USER:agent} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{URIPATHPARAM:request}(?: HTTP/%{NUMBER:httpversion})?|)\" %{NUMBER:answer} (?:%{NUMBER:byte}|-) (?:\"(?:%{URI:referrer}|-))\" (?:%{QS:referree}) %{QS:agent}' }
         }
         geoip {
             source => "clientip"
             target => "geoip"
             database => "/etc/logstash/GeoLiteCity.dat"
             add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
             add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
        mutate {
             convert => [ "[geoip][coordinates]", "float" ]
       }
   }
}
output {
    stdout {
        codec => rubydebug
    }
    elasticsearch {
        hosts => ["elkserver_private_ip:9200"]
        user => "beats"
        password => "xxxx"
    }
}

Adding Filebeat and Kibana configuration

****** filebeat configuration on client server **************************

[root@omsappbuild filebeat]# cat filebeat.yml

################### Filebeat Configuration Example #########################

############################# Filebeat ######################################
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
        #- c:\programdata\elasticsearch\logs\*

      # Configure the file encoding for reading files with international characters
      # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
      # Some sample encodings:
      #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
      #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
      #encoding: plain

      # Type of the files. Based on this the way the file is read is decided.
      # The different types cannot be mixed in one prospector
      #
      # Possible options are:
      # * log: Reads every line of the log file (default)
      # * stdin: Reads the standard in
      input_type: syslog


      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      # exclude_files: [".gz$"]

      # Optional additional fields. These field can be freely picked
      # to add additional information to the crawled log files for filtering
      #fields:
      #  level: debug
      #  review: 1

      # Set to true to store the additional fields as top level fields instead
      # of under the "fields" sub-dictionary. In case of name conflicts with the
      # fields added by Filebeat itself, the custom fields overwrite the default
      # fields.
      #fields_under_root: false

      # Ignore files which were modified more than the defined timespan in the past.
      # In case all files on your system must be read you can set this value very large.
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      #ignore_older: 0

      # Close older closes the file handler for files which were not modified
      # for longer than close_older
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      #close_older: 1h

      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in. Default: log
      #document_type: log

      # Scan frequency in seconds.
      # How often these files should be checked for changes. In case it is set
      # to 0s, it is done as often as possible. Default: 10s
      #scan_frequency: 10s

      # Defines the buffer size every harvester uses when fetching the file
      #harvester_buffer_size: 16384

****** Kibana configuration ******

[root@logmonitoring config]# cat kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
# server.port: 5601

# This setting specifies the IP address of the back end server.
 server.host: "elkserver_private_ip"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This setting
# cannot end in a slash.
# server.basePath: ""


# The URL of the Elasticsearch instance to use for all your queries.
 elasticsearch.url: "http://elkserver_private_ip:9200"


# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
 elasticsearch.username: "kibana4-server"
 elasticsearch.password: "xxxx"

# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
 server.ssl.cert: absolute path of certificate
 server.ssl.key: absolute path of key



 shield.encryptionKey: 'xxxx'

Please let us know if you need further info, and help me get this issue resolved.

@andrewkroh did you get a chance to look at this? I am stuck at the point where I cannot load the index template in Elasticsearch.

Can you please help me?

Did you already run Filebeat or only load the template? Do you get any errors when running Filebeat?

@Ruflin I ran Filebeat on the client servers whose logs are to be monitored. I got a few errors related to the YAML file (a spacing issue), but I fixed them and was able to restart Filebeat.

E.g. error message: Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: line 212: did not find expected key. Exiting.

I even tried with Topbeat and I get the same error. Not sure why.

curl -XGET 'http://privateip:9200/filebeat-*/_search?pretty' -u es_admin
Enter host password for user 'es_admin':
{
  "error" : {
    "root_cause" : [ {
      "type" : "index_not_found_exception",
      "reason" : "no such index",
      "index" : "[filebeat-*]"
    } ],
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "index" : "[filebeat-*]"
  },
  "status" : 404
}
Note: I am using Shield 2.3 trial version

The sequence I followed:

Step 1: Download the Filebeat index template on the Elasticsearch server

curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json

Step 2: Load the template on the Elasticsearch server

curl -XPUT 'http://localhost:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json -u es_admin -p

Enter host password for user 'es_admin':
{"acknowledged":true}

Step 3: Install and configure Filebeat on client servers

Able to restart the filebeat service

Step 4: Test the Filebeat installation on the Elasticsearch server

curl -XGET "http://ipaddress:9200/filebeat-*/_search?pretty" -u es_admin -p
Enter host password for user 'admin':xxx

It fails

{
"error" : {
"root_cause" : [ {
"type" : "index_not_found_exception",
"reason" : "no such index",
"index" : "[filebeat-]"
} ],
"type" : "index_not_found_exception",
"reason" : "no such index",
"index" : "[filebeat-
]"
},
"status" : 404
}

I see a few potential issues.

  1. I don't see any logstash output configured in your Filebeat configuration file (a minimal sketch is included after this list).
  2. You have two elasticsearch outputs in your Logstash configuration, but only one of them is configured with your Shield username and password.
  3. The elasticsearch output that is configured with a username and password is missing options like index and document_type.
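
For the first point, a minimal sketch of a logstash output section for filebeat.yml (Filebeat 1.x style), assuming the same host placeholder as your Logstash config and the port 5044 from 02-beats-input.conf:

output:
  logstash:
    # host/port of the Logstash beats input; the hostname is a placeholder
    hosts: ["elkserver_private_ip:5044"]
    # enable TLS only after the plain connection works; the CA path is a placeholder
    #tls:
    #  certificate_authorities: ["/path/to/certificate"]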

If this is your first time setting up Beats -> Logstash -> Elasticsearch, I really recommend starting simple and incrementally building up the configuration. It will be much easier to debug.

Set up Filebeat to Logstash in isolation, without TLS. Disable any elasticsearch outputs and use only the stdout output. Once you are seeing data on the Logstash console, add in TLS between Filebeat and Logstash, and then add in your filters.
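
For that first step, a minimal pair of configs could look like this (the hostname is a placeholder; port 5044 matches the beats input above):

# filebeat.yml on the client server
output:
  logstash:
    hosts: ["elkserver_private_ip:5044"]

# Logstash pipeline on the ELK server
input {
  beats {
    port => 5044
  }
}
output {
  stdout {
    codec => rubydebug
  }
}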

Then, with Filebeat to Logstash working, set up your elasticsearch output from Logstash.

After it's all working, you can stop Filebeat, delete the Filebeat registry, delete any indices created in Elasticsearch, and then restart Filebeat to reindex the logs with your final setup.
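
A sketch of that cleanup, assuming the registry location used by the Filebeat 1.x RPM/DEB packages and the es_admin user from earlier (paths and credentials are placeholders; adjust if your registry_file setting differs):

sudo service filebeat stop
# default registry path for the packaged Filebeat 1.x (assumption)
sudo rm /var/lib/filebeat/registry
# wildcard delete of the Filebeat indices; needs your Shield admin credentials
curl -XDELETE 'http://localhost:9200/filebeat-*' -u es_admin
sudo service filebeat start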

This topic was automatically closed after 21 days. New replies are no longer allowed.