Kibana dashboards: Could not locate that index-pattern (id: packetbeat-*)

Hi, I'm new to ELK, so sorry if this is posted in the wrong place.
I have a new installation with two VMs. #1 is running the full ELK stack, version 6.1.2, and the indices are created:

[root@elkhost bin]# curl -XGET 'localhost:9200/_cat/indices?v&pretty'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .monitoring-kibana-2-2018.01.16 1MfgIJKaRi2Voq8jLQHtFg 1 1 3283 0 821.5kb 821.5kb
yellow open .kibana AzB9Jda6Sw-o-U1ZyfZKjA 1 1 209 3 174.9kb 174.9kb
yellow open packetbeat-2018.01.19 eHxBnLViSAi4oaHiOqmBeg 5 1 53738 0 18.3mb 18.3mb
yellow open .monitoring-data-2 axS2ZI24SPCCG4Mn2ae-kw 1 1 3 0 7.2kb 7.2kb
yellow open netflow-2018.01.17 4hoRj6VDQ0qCUwOIts8rxA 5 1 16664 0 18.9mb 18.9mb
yellow open .monitoring-es-2-2018.01.15 trSXQLm-T_KPPcm88YfDlA 1 1 405 8 314.9kb 314.9kb
yellow open .monitoring-kibana-2-2018.01.15 zOAnzyjHQ160yDoavyeJCw 1 1 76 0 53.9kb 53.9kb
yellow open .monitoring-es-2-2018.01.16 rPc8fTBtRFGoGlaTvN_KfQ 1 1 23421 72 9.2mb 9.2mb
[root@elkhost bin]#

I can access the Kibana interface and see the indices. On this instance of Logstash I have the netflow module active.

VM #2 has Packetbeat and Logstash running, and everything seems to be normal.

My problem is that when I go into Kibana and try to view anything, I get a message telling me to reindex both the netflow and the packetbeat indices.

I've re-created the indices with and without timestamps and get the same message, so I'm thinking there must be something more fundamentally wrong with my installation.

What more can I provide here to help you help me?

Thanks in advance
Ken

Newer versions of Kibana assign a randomly generated ID to index pattern objects instead of using the index pattern name as the ID. How did you originally create these dashboards and index patterns? Are they from an older Kibana install, or are you using an older version of Packetbeat perhaps?
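If it helps to compare, you can see exactly which IDs your index patterns were saved with by querying the .kibana index directly. A rough sketch, assuming the default 6.x .kibana index where saved objects carry a type field:

    curl -XGET 'localhost:9200/.kibana/_search?q=type:index-pattern&pretty'

If the dashboards reference packetbeat-* as an ID but the index pattern document was saved with a random ID, that mismatch would explain the error.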

Hi Bargs, thanks for responding. This is all a new install of 6.1.2. For the netflow module I used:

bin/logstash --modules netflow --setup -M netflow.var.input.udp.port=2055

For Packetbeat I enabled the automatic loading of dashboards via the .yml config file:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.0.1.13:5601"

Packetbeat is version 6.1.2.

Hey, I'm sorry this has lingered so long without a reply. I checked with the Beats team and they're not aware of an existing issue with the pre-made dashboards in 6.1.2. I've been meaning to see if I can reproduce the problem but I just haven't had the bandwidth. I'll see if a Beats team member can take a look at this thread, but if you don't get a response I'd recommend either re-posting the question in the Beats forum or filing a ticket for this issue on the Beats repo if you suspect this is a bug.

I tried to reproduce this with Packetbeat, but couldn't. Can you try loading the dashboards again by running:

./packetbeat setup

Note that you should not define the index pattern manually, because Packetbeat creates an index pattern automatically on the setup command.
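If you want to reload only specific pieces, the setup command also takes flags for that; for example (6.x flags, just a sketch of the usual invocations):

    ./packetbeat setup --dashboards   # load only the Kibana dashboards
    ./packetbeat setup --template     # load only the Elasticsearch index template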

I've done a couple of things. First, I removed Kibana and Logstash from host 1 and re-added them, started Kibana, and then started Logstash with --modules netflow --setup; that cleared up the netflow dashboards.
I then ran packetbeat with setup --dashboards on host 2, and that cleared up the packetbeat dashboards.

However, on some of the dashboards (DNS Tunneling, for example) I get a couple of errors about saved fields not being valid.

I see the exact same thing: the Visualize: "field" is a required parameter and Saved "field" parameter is now invalid. Please select a new field. messages.

I've tried purging all dashboards and the packetbeat-* index pattern and running packetbeat --setup again. It's just not working.

I can tell you why it failed. The default template does not hard-map an @timestamp field, and "date_detection" is set to false, so @timestamp ends up mapped as a keyword instead of a date, and the visualizations can't work without a date-mapped @timestamp field. Just pull the mapping and you'll see @timestamp is mapped as a keyword.
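For example, with the standard _mapping API (any of the packetbeat indices from the listing above will do):

    curl -XGET 'localhost:9200/packetbeat-*/_mapping?pretty'

The relevant fragment of the response is: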

        "date_detection": false,
        "properties": {
          "@timestamp": {
            "type": "keyword",
            "ignore_above": 1024
          },

Confirmed fix: updating the mapping template to make @timestamp a date resolves it.
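For comparison, a sketch of how that fragment should read once the template maps @timestamp as a date (the rest of the template left as shipped):

        "date_detection": false,
        "properties": {
          "@timestamp": {
            "type": "date"
          },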

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.