Unable to fetch mapping while configuring an index pattern

Greetings, guys!

I recently installed the Elastic Stack on CentOS and ran into a problem: I cannot configure an index pattern in Kibana.

Basically I have Kibana, Elasticsearch, Logstash, and Filebeat, all installed locally.

 netstat -plntu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1117/nginx: master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1003/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1087/master
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      667/node
tcp6       0      0 ::1:9200                :::*                    LISTEN      955/java
tcp6       0      0 127.0.0.1:9200          :::*                    LISTEN      955/java
tcp6       0      0 ::1:9300                :::*                    LISTEN      955/java
tcp6       0      0 127.0.0.1:9300          :::*                    LISTEN      955/java
tcp6       0      0 :::22                   :::*                    LISTEN      1003/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      1087/master
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      668/java
tcp6       0      0 :::5443                 :::*                    LISTEN      668/java
udp        0      0 0.0.0.0:514             0.0.0.0:*                           942/rsyslogd
udp6       0      0 :::514                  :::*                                942/rsyslogd

CURL output

curl 'localhost:9200/_cat/indices?v'
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana ugg_qU1KQeejsdheoemj5w   1   1          1            0      3.2kb          3.2kb

Looks like I can't get patterns and I don't know why. I have checked the status of the services and they all look just fine :frowning:

Elastic

[root@qwsqws]# service elasticsearch status
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-07-07 16:44:54 MSK; 29min ago
     Docs: http://www.elastic.co
  Process: 945 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
 Main PID: 955 (java)
   CGroup: /system.slice/elasticsearch.service
           └─955 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExpl...
Jul 07 16:44:54 s001is-wflogs.sibur.local systemd[1]: Starting Elasticsearch...
Jul 07 16:44:54 s001is-wflogs.sibur.local systemd[1]: Started Elasticsearch.

Logstash

[root@qwsqws]# service logstash status
Redirecting to /bin/systemctl status  logstash.service
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-07-07 16:44:45 MSK; 29min ago
 Main PID: 668 (java)
   CGroup: /system.slice/logstash.service
           └─668 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+Disa...

Jul 07 16:44:45 s001is-wflogs.sibur.local systemd[1]: Started logstash.
Jul 07 16:44:45 s001is-wflogs.sibur.local systemd[1]: Starting logstash...
Jul 07 16:45:40 s001is-wflogs.sibur.local logstash[668]: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: loggin...onsole.
Jul 07 16:46:03 s001is-wflogs.sibur.local logstash[668]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Jul 07 16:46:26 s001is-wflogs.sibur.local logstash[668]: log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.Request...Cache).
Jul 07 16:46:26 s001is-wflogs.sibur.local logstash[668]: log4j:WARN Please initialize the log4j system properly.
Jul 07 16:46:26 s001is-wflogs.sibur.local logstash[668]: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Hint: Some lines were ellipsized, use -l to show in full.

Filebeat

[root@qwsqws ~]# service filebeat status
● filebeat.service - filebeat
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2017-07-07 17:00:02 MSK; 14min ago
     Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
 Main PID: 1568 (filebeat)
   CGroup: /system.slice/filebeat.service
           └─1568 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var...

Jul 07 17:00:02 s001is-wflogs.sibur.local systemd[1]: Started filebeat.
Jul 07 17:00:02 s001is-wflogs.sibur.local systemd[1]: Starting filebeat...

Can't find anything in the logs.

Sincerely.

hi @amakarenkov,

so apart from the .kibana index, you have no indices whatsoever in your cluster?

Hi, Thomas!

Yes, and I have no clue what I did wrong. If I am correct, there is no need to configure indices manually.

thx @amakarenkov

I'm stumped too. This doesn't seem to be a Kibana issue, so I'd suggest you move this question to the Beats forum; they might have better insight into why your data isn't loading at all: https://discuss.elastic.co/c/beats. But if this turns out to be a Kibana issue, we can pick it up further here.

sorry for the run-around :disappointed:


Thank you anyway, Thomas. Have a nice day :slight_smile:

Please share the Filebeat config you are using and the logs from the service (/var/log/filebeat/filebeat).

Greetings, Andrew!

#---- Filebeat Prospectors -----#
filebeat.prospectors:
    - input_type: log
      paths:
        - /var/log/10.3.201.12/*
      fields: {log_type: firewall}
      tail_files: true
      
    - input_type: log
      paths:
        - /var/log/10.3.201.11/*
      fields: {log_type: ise}
      tail_files: true

#---- Logstash output -----------#
    output.logstash:
      hosts: ["localhost:5443"]


# Other settings are defaults

Log

2017-07-07T17:00:02+03:00 INFO Setup Beat: filebeat; Version: 5.5.0
2017-07-07T17:00:02+03:00 INFO Max Retries set to: 3
2017-07-07T17:00:02+03:00 INFO Activated logstash as output plugin.
2017-07-07T17:00:02+03:00 INFO Publisher name: s001is-wflogs.sibur.local
2017-07-07T17:00:02+03:00 INFO Flush Interval set to: 1s
2017-07-07T17:00:02+03:00 INFO Max Bulk Size set to: 2048
2017-07-07T17:00:02+03:00 INFO filebeat start running.
2017-07-07T17:00:02+03:00 INFO Registry file set to: /var/lib/filebeat/registry
2017-07-07T17:00:02+03:00 INFO Loading registrar data from /var/lib/filebeat/registry
2017-07-07T17:00:02+03:00 INFO States Loaded from registrar: 0
2017-07-07T17:00:02+03:00 INFO Loading Prospectors: 2
2017-07-07T17:00:02+03:00 INFO Starting Registrar
2017-07-07T17:00:02+03:00 INFO Start sending events to output
2017-07-07T17:00:02+03:00 INFO Prospector with previous states loaded: 0
2017-07-07T17:00:02+03:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-07-07T17:00:02+03:00 INFO Starting prospector of type: log; id: 18345352851072542191 
2017-07-07T17:00:02+03:00 INFO Prospector with previous states loaded: 0
2017-07-07T17:00:02+03:00 INFO Starting prospector of type: log; id: 17635311746074770904 
2017-07-07T17:00:02+03:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 2
2017-07-07T17:00:32+03:00 INFO Non-zero metrics in the last 30s: publish.events=1 registrar.states.current=1 registrar.states.update=1 registrar.writes=1
2017-07-07T17:01:02+03:00 INFO No non-zero metrics in the last 30s
2017-07-07T17:01:32+03:00 INFO No non-zero metrics in the last 30s
2017-07-07T17:02:02+03:00 INFO No non-zero metrics in the last 30s
2017-07-07T17:02:32+03:00 INFO No non-zero metrics in the last 30s

No one can help? :cry:

I recommend removing tail_files: true from your config. With tail_files enabled, Filebeat starts reading each new file from the end, so any lines already present in the logs are skipped.

The output.logstash config looks like it is indented incorrectly (this may just be a copy/paste issue, since the Beat won't start if this is wrong). It should be:

#---- Filebeat Prospectors -----#
filebeat.prospectors:
    - input_type: log
      paths:
        - /var/log/10.3.201.12/*
      fields: {log_type: firewall}
      tail_files: true
      
    - input_type: log
      paths:
        - /var/log/10.3.201.11/*
      fields: {log_type: ise}
      tail_files: true

#---- Logstash output -----------#
output.logstash:
  hosts: ["localhost:5443"]

The Beat data is going to Logstash. So please share your Logstash config.
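For reference (this is an assumption about your setup, not your actual config), a minimal Logstash pipeline matching the Filebeat output above would need a Beats input on port 5443 and an Elasticsearch output; the index name below is just the Logstash default:

```
input {
  beats {
    port => 5443
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```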

What indices are present in your Elasticsearch? curl http://localhost:9200/_cat/indices?v
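It may also be worth checking whether Logstash is receiving events at all. Logstash exposes a monitoring API on port 9600 (visible in your netstat output); for example:

```
curl 'localhost:9600/_node/stats/pipeline?pretty'
```

If the pipeline's events.in counter stays at zero, Filebeat isn't delivering anything to Logstash.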

Hi Andrew!

For some reason everything started working fine. After observing normal behaviour, I also changed the Filebeat configuration as you advised and restarted the machine.

[root@s001is-wflogs ~]# curl http://localhost:9200/_cat/indices?v
health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana             ugg_qU1KQeejsdheoemj5w   1   1          4            0     21.8kb         21.8kb
yellow open   logstash-2017.07.07 i-AYv-UNS8-cPudjxKoDFg   5   1          2            0     24.5kb         24.5kb
yellow open   logstash-2017.07.11 08f9BQUcT06CrH4HyQZ7Kw   5   1       5238            0      3.3mb          3.3mb

Thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.