First of all, I'm pretty new to ELK, so sorry if my questions seem stupid.
I inherited a crashed Elasticsearch. It was a basic/default ELK setup running on one server and used for demo purposes. There are 3 devices that each send about 1 MB of JSON data per day. ELK was chosen because of the Kibana dashboards. It was configured with the default 5 primary shards and 1 replica, and the Logstash config was creating one index per day per device.
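For reference, even my rough back-of-the-envelope math on that setup gives thousands of shards (the real cluster reports even more unassigned, so maybe it ran longer or there are system indices I'm not counting):

```python
# Rough shard count for the crashed cluster, under my assumptions:
# 3 devices, ~6 months of daily indices, default 5 primaries + 1 replica.
devices = 3
days = 180          # roughly 6 months
primaries = 5       # old Elasticsearch default per index
replicas = 1        # default replica count

shards = devices * days * primaries * (1 + replicas)
print(shards)  # 5400
```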
I think it ran for about 6 months and then it crashed with an out-of-memory error:

java.lang.OutOfMemoryError: Java heap space

The server only has 4 GB of RAM. I tried to restart it, but I can't: if I look at the cluster health, it starts with around 12,000 unassigned shards and slowly turns them into active shards, but at around 6,000 it crashes with out of memory again. I couldn't find any solution on the web to restore/restart it.
My first question: is there anything I can do to recover that data? I can't add more memory to the server so that it completes the shard activation process.
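The only idea I had so far was to raise the heap within the 4 GB the server has, in case it's still at a small default (I haven't checked the actual jvm.options yet, so this is an assumption; the usual guidance I've read is roughly half of physical RAM):

```
# config/jvm.options — assuming the heap is currently below this;
# ~half of the 4 GB of physical RAM
-Xms2g
-Xmx2g
```

Would that be enough to get the shard recovery to finish, or is there a better approach?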
My second question is about configuring a new server for ELK. It is still for demo purposes, so not much activity, but there will be around 30-40 devices sending around 1 MB per day each. The new server has 16 GB of RAM.
After reading a lot on the forum, it seems that saving the data in an ACID database first, before indexing it into Elasticsearch, is highly recommended. Is this correct?
Then, how should I configure the new Elasticsearch? As this is only one server, is the following correct: 1 node, 1 primary shard, 1 replica?
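Concretely, I was thinking of an index template like the one below (the template name and index pattern are just my guesses). I also read that on a single node a replica can never be assigned anyway, so maybe 0 replicas is the right setting there?

```
PUT _index_template/devices
{
  "index_patterns": ["devices-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 0
    }
  }
}
```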
And lastly, is it OK to still have 1 index per day per device, or is it better to have just 1 index per device?
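If one index per device is the better option, I imagine the Logstash output would look something like this (the "device_id" field name is my assumption; I don't have the original conf in front of me):

```
# Logstash pipeline output — one index per device instead of per day
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "devices-%{device_id}"
  }
}
```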