Elasticsearch recovering indices


(Raamee) #1

Hello,

I'm new to the ELK stack. I have installed ELK on a virtual machine with 1GB of RAM, running CentOS 7. It was working great, but recently I've been getting high load spikes, and when I searched the logs (/var/log/elasticsearch/elasticsearch.log), I found these entries:

[2015-10-04 23:24:43,885][INFO ][node                     ] [Battlestar] version[1.7.2], pid[3359], build[e43676b/2015-09-14T09:49:53Z]
[2015-10-04 23:24:43,885][INFO ][node                     ] [Battlestar] initializing ...
[2015-10-04 23:24:44,139][INFO ][plugins                  ] [Battlestar] loaded [marvel], sites [marvel]
[2015-10-04 23:24:44,335][INFO ][env                      ] [Battlestar] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [5.5gb], net total_space [7.5gb], types [rootfs]
[2015-10-04 23:25:06,325][INFO ][node                     ] [Battlestar] initialized
[2015-10-04 23:25:06,325][INFO ][node                     ] [Battlestar] starting ...
[2015-10-04 23:25:06,637][INFO ][transport                ] [Battlestar] bound_address {inet[/127.0.0.1:9301]}, publish_address {inet[/127.0.0.1:9301]}
[2015-10-04 23:25:06,725][INFO ][discovery                ] [Battlestar] elasticsearch/13EQuiYzQ4qtr1ts35k_Kw
[2015-10-04 23:25:10,574][INFO ][cluster.service          ] [Battlestar] new_master [Battlestar][13EQuiYzQ4qtr1ts35k_Kw][localhost.localdomain][inet[/127.0.0.1:9301]], reason: zen-disco-join (elected_as_master)
[2015-10-04 23:25:11,045][INFO ][http                     ] [Battlestar] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[/127.0.0.1:9200]}
[2015-10-04 23:25:11,046][INFO ][node                     ] [Battlestar] started
[2015-10-04 23:25:11,976][INFO ][gateway                  ] [Battlestar] recovered [265] indices into cluster_state
[2015-10-04 23:25:11,976][INFO ][cluster.service          ] [Battlestar] added {[logstash-localhost.localdomain-2361-11632][4RKgSsBxTs-_3r0xxN-oag][localhost.localdomain][inet[/192.168.122.112:9300]]{data=false, client=true},}, reason: zen-disco-receive(join from node[[logstash-localhost.localdomain-2361-11632][4RKgSsBxTs-_3r0xxN-oag][localhost.localdomain][inet[/192.168.122.112:9300]]{data=false, client=true}])
[2015-10-04 23:26:17,582][ERROR][marvel.agent.exporter    ] [Battlestar] error sending data to [http://127.0.0.1:9200/.marvel-2015.10.04/_bulk]: SocketTimeoutException[Read timed out]
[2015-10-04 23:27:12,931][WARN ][monitor.jvm              ] [Battlestar] [gc][young][98][54] duration [1.3s], collections [1]/[2s], total [1.3s]/[6.4s], memory [394.8mb]->[341.4mb]/[1015.6mb], all_pools {[young] [65.4mb]->[7.7mb]/[66.5mb]}{[survivor] [8.3mb]->[6.6mb]/[8.3mb]}{[old] [321.1mb]->[327mb]/[940.8mb]}
[2015-10-04 23:27:27,452][WARN ][monitor.jvm              ] [Battlestar] [gc][young][103][55] duration [9.9s], collections [1]/[10.4s], total [9.9s]/[16.3s], memory [383.5mb]->[354.7mb]/[1015.6mb], all_pools {[young] [49.9mb]->[6.7mb]/[66.5mb]}{[survivor] [6.6mb]->[8.3mb]/[8.3mb]}{[old] [327mb]->[339.6mb]/[940.8mb]}
[2015-10-04 23:27:43,404][WARN ][monitor.jvm              ] [Battlestar] [gc][young][113][57] duration [5.4s], collections [1]/[5.9s], total [5.4s]/[22.4s], memory [412.6mb]->[369.7mb]/[1015.6mb], all_pools {[young] [57.1mb]->[1.2mb]/[66.5mb]}{[survivor] [8.3mb]->[8.1mb]/[8.3mb]}{[old] [347.1mb]->[360.4mb]/[940.8mb]}
[2015-10-04 23:28:20,550][WARN ][monitor.jvm              ] [Battlestar] [gc][young][136][65] duration [1.9s], collections [1]/[4.7s], total [1.9s]/[25.9s], memory [443.8mb]->[183.2mb]/[1015.6mb], all_pools {[young] [42.2mb]->[43.6mb]/[66.5mb]}{[survivor] [8.3mb]->[7.1mb]/[8.3mb]}{[old] [393.2mb]->[132.4mb]/[940.8mb]}
[2015-10-04 23:29:33,485][WARN ][monitor.jvm              ] [Battlestar] [gc][young][200][93] duration [1s], collections [1]/[1.1s], total [1s]/[29.6s:

Can anyone tell me what these are? Is it recovering indices? I have 265 indices in total. It isn't completing, and it's causing high load on my VM.

Thank you.


(Srinath C) #2

There is a lot of GC activity. With 1GB of RAM you should have allocated around 512MB as heap, but it looks as though it's set to 1GB; fix that first.
How much data is held in these 265 indices? Looking at the timestamps, it seems you just restarted Elasticsearch, and it may simply be overloaded while initializing all of those indices.
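As a sketch of the heap change, assuming the RPM install on CentOS 7 (paths may differ for other install methods), Elasticsearch 1.x reads the heap size from the ES_HEAP_SIZE environment variable:

```shell
# Cap the JVM heap at roughly half of the 1GB of RAM, leaving the rest
# for the OS page cache. On an RPM install this goes in the sysconfig file:
echo 'ES_HEAP_SIZE=512m' | sudo tee -a /etc/sysconfig/elasticsearch

# Restart the service so the new heap setting takes effect
sudo systemctl restart elasticsearch
```

A 1GB heap on a 1GB VM leaves nothing for the OS, which drives the long young-GC pauses shown in the log.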


(Mark Walkom) #3

How many shards in those indices?
Because if you have kept the default of 5, it will be putting pressure on your node, so you should alter the Logstash template accordingly.
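A minimal sketch of overriding the shard count, assuming Elasticsearch is on localhost:9200 and Logstash is using its default `logstash-*` template name (check your own setup before replacing the template, since this overrides the mappings Logstash would otherwise install):

```shell
# Register an index template so that NEW logstash-* indices get 1 shard
# and no replicas instead of the 1.x default of 5 shards / 1 replica.
# Existing indices are unaffected; only indices created afterwards change.
curl -XPUT 'http://localhost:9200/_template/logstash' -d '{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'
```

With 265 indices, 5 shards each means well over a thousand shards on a single 1GB node, and each shard carries fixed memory and file-handle overhead.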


(Raamee) #4

Thanks for the reply. The total data across the indices is around 400MB. Whenever I switched Elasticsearch on I was getting heavy load, so I deleted all the indices and re-created them. Now it's resolved.


(Raamee) #5

Thanks for the heads up. I changed the shards to 1. Although I had already re-created all the indices, this tip helped me.
