Hi, I'm new to ELK, so sorry if this is posted in the wrong place.
I have a new installation with two VMs. VM #1 is running the full ELK stack (version 6.1.2), and the indices are created:
[root@elkhost bin]# curl -XGET 'localhost:9200/_cat/indices?v&pretty'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .monitoring-kibana-2-2018.01.16 1MfgIJKaRi2Voq8jLQHtFg 1 1 3283 0 821.5kb 821.5kb
yellow open .kibana AzB9Jda6Sw-o-U1ZyfZKjA 1 1 209 3 174.9kb 174.9kb
yellow open packetbeat-2018.01.19 eHxBnLViSAi4oaHiOqmBeg 5 1 53738 0 18.3mb 18.3mb
yellow open .monitoring-data-2 axS2ZI24SPCCG4Mn2ae-kw 1 1 3 0 7.2kb 7.2kb
yellow open netflow-2018.01.17 4hoRj6VDQ0qCUwOIts8rxA 5 1 16664 0 18.9mb 18.9mb
yellow open .monitoring-es-2-2018.01.15 trSXQLm-T_KPPcm88YfDlA 1 1 405 8 314.9kb 314.9kb
yellow open .monitoring-kibana-2-2018.01.15 zOAnzyjHQ160yDoavyeJCw 1 1 76 0 53.9kb 53.9kb
yellow open .monitoring-es-2-2018.01.16 rPc8fTBtRFGoGlaTvN_KfQ 1 1 23421 72 9.2mb 9.2mb
[root@elkhost bin]#
I can access the Kibana interface and see the indices. On this instance of Logstash I have the Netflow module active.
VM #2 has Packetbeat and Logstash running, and everything seems to be normal.
My problem is that when I go into Kibana and try to view anything, I get a message telling me to reindex both the Netflow and the Packetbeat indices.
I've re-created the indices with and without timestamps and get the same message, so I'm thinking there must be something more fundamentally wrong with my installation.
What more can I provide here to help you help me?
Newer versions of Kibana assign a randomly generated ID to index pattern objects instead of using the index pattern name as the ID. How did you originally create these dashboards and index patterns? Are they from an older Kibana install, or are you using an older version of Packetbeat perhaps?
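If you want to check what IDs your index patterns actually ended up with, you can query the .kibana index directly; something like this should work (assuming the default .kibana index name):

curl -XGET 'localhost:9200/.kibana/_search?q=type:index-pattern&pretty'

If the document IDs are the index pattern names rather than random strings, the patterns were likely created by an older Kibana or older Beats setup tooling.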
Hey, I'm sorry this has lingered so long without a reply. I checked with the Beats team and they're not aware of an existing issue with the pre-made dashboards in 6.1.2. I've been meaning to see if I can reproduce the problem but I just haven't had the bandwidth. I'll see if a Beats team member can take a look at this thread, but if you don't get a response I'd recommend either re-posting the question in the Beats forum or filing a ticket for this issue on the Beats repo if you suspect this is a bug.
I've done a couple of things. First, I removed Kibana and Logstash from host 1 and re-added them, started Kibana, and then started Logstash with --modules netflow --setup; that cleared up the Netflow dashboards.
I then ran packetbeat setup --dashboards on host 2, and that cleared up the Packetbeat dashboards.
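For reference, the commands were roughly these (paths and install locations will vary depending on how you installed the stack):

# on host 1: reload the Netflow module dashboards and index pattern
bin/logstash --modules netflow --setup

# on host 2: re-import the Packetbeat dashboards into Kibana
packetbeat setup --dashboards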
However, on some of the dashboards (DNS Tunneling, for example) I get a couple of errors about saved fields not being valid.
I see the exact same thing: the "Visualize: 'field' is a required parameter" and "Saved 'field' parameter is now invalid. Please select a new field." messages.
I can tell you why it failed. The default template does not have a hard-mapped @timestamp field, and "date_detection" is set to false, so @timestamp never gets mapped as a date. In other words, the visualizations can't work, because they need a date-mapped @timestamp field. Just get the mapping and you'll see @timestamp is mapped as a keyword.
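You can confirm this yourself by pulling the mapping for one of the indices, and a template along these lines would force @timestamp to be mapped as a date on newly created indices. This is just a minimal sketch: the template name here is a placeholder, and the "doc" mapping type and "netflow-*" pattern are assumptions you should adjust to match your setup:

# check how @timestamp is currently mapped
curl -XGET 'localhost:9200/netflow-2018.01.17/_mapping?pretty'

# example template forcing a date mapping for @timestamp on new netflow-* indices
curl -XPUT 'localhost:9200/_template/netflow-timestamp-fix' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["netflow-*"],
  "mappings": {
    "doc": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}'

Keep in mind a template only applies to indices created after it exists, so the existing indices would still need to be reindexed (or deleted and re-created) to pick up the date mapping.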