Elasticsearch indices are now gibberish

Hello,

I am upgrading the Elastic Stack from 2.4 to 5.6. I came across some bumps, but nothing I couldn't handle until now. It appears that all the indices are now a bunch of random characters. I am not sure if I messed something up or what. I could not get the migration tool to work (I have a thread open for that), so I proceeded with what I could find in the documentation to the best of my ability.

Those indices should be named firewall, logstash, etc. Do you have any advice? Maybe Elasticsearch isn't done doing its thing? I see these logs in ES:

[2018-02-13T13:47:26,254][INFO ][o.e.c.m.MetaDataMappingService] [lDTtoAj] [firewall3-2018-02-13/_8oF-lTeS-2FjXkwQje-yQ] update_mapping [firewall]

[2018-02-13T13:47:38,631][INFO ][o.e.m.j.JvmGcMonitorService] [lDTtoAj] [gc][4936] overhead, spent [254ms] collecting in the last [1s]
[2018-02-13T13:48:02,699][INFO ][o.e.m.j.JvmGcMonitorService] [lDTtoAj] [gc][4960] overhead, spent [250ms] collecting in the last [1s]

with a lot more of these...

[2018-02-13T13:48:17,765][INFO ][o.e.m.j.JvmGcMonitorService] [lDTtoAj] [gc][4975] overhead, spent [270ms] collecting in the last [1s]
[2018-02-13T13:48:19,815][INFO ][o.e.m.j.JvmGcMonitorService] [lDTtoAj] [gc][4977] overhead, spent [280ms] collecting in the last [1s]
[2018-02-13T13:48:26,841][INFO ][o.e.m.j.JvmGcMonitorService] [lDTtoAj] [gc][4984] overhead, spent [289ms] collecting in the last [1s]

Logstash is also throwing all kinds of errors saying the indices can't be found, but the data is sent anyway, and according to Kibana it is in the firewall index.

Many thanks.

You'll find more info here. In short, indices have their own IDs (UUIDs), which Elasticsearch uses internally for its on-disk data structures, but you shouldn't be concerned about that. You still refer to indices by name in the API, which is the only way you should be interacting with ES.
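For example (assuming Elasticsearch is reachable on localhost:9200; adjust the host as needed), the _cat/indices API lists each index name next to its UUID, and the name is what you keep using in requests. Taking the index from your own log line:

curl 'localhost:9200/_cat/indices?v'
# health status index                uuid                   pri rep docs.count ...
# green  open   firewall3-2018-02-13 _8oF-lTeS-2FjXkwQje-yQ   5   1        ...

curl 'localhost:9200/firewall3-2018-02-13/_search?size=1'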

The log lines containing JvmGcMonitorService are normal; they report garbage collection (GC) activity.


Ok. It's just strange going from 2.4, where I see all the index names, to 5.6, where they look random.

I guess my next problem is this (it might be best asked on the Logstash side): Logstash is throwing an error per event saying it cannot query Elasticsearch for previous events in the index.

{:timestamp=>"2018-02-13T15:11:54.391000-0500", :message=>"Failed to query elasticsearch for previous event", :index=>"firewall-%{+YYYY.MM.dd}"

and at the end of every entry is this...

 @metadata_accessors=#<LogStash::Util::Accessors:0x70ccbbea @store={}, @lut={}>, @cancelled=false>, :error=>#<Faraday::ConnectionFailed>, :level=>:warn}

Every one of these errors contains the entire string of data from the log event being sent. I had Logstash running for about 20 seconds and it logged one of these errors for every event, so as you can guess, the log file filled up rather quickly.

I am not sure if this is normal when ES is on 5.6 while Logstash is still on 2.4. I am just noting some observations from my experience of the upgrade process.

Thanks

I'm far from being a LS expert, but it would help much more if you could describe exactly what you are trying to accomplish, and also share the relevant LS configuration. If your hypothesis that an error is shown for every event holds, then I believe it's either something related to the configuration you are using or something wrong in the LS-ES communication.
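For what it's worth, Faraday::ConnectionFailed generally means the HTTP connection to Elasticsearch could not be established at all. As a quick sanity check (the host and port below are placeholders; use whatever your Logstash config points at), you can verify from the Logstash machine that ES answers:

curl -v 'http://your-es-host:9200'
curl 'http://your-es-host:9200/_cat/indices?v'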

I'm upgrading the lab Elastic environment. I've read that Elasticsearch and Kibana have to be on compatible versions, but the Logstash upgrade can be put off.

I opened a thread on Logstash. Thank you for your help :smiley:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.