[Solved] Elasticsearch stuck at 213 documents and losing data

I have a single-node Elasticsearch setup with no replicas, sharing a DigitalOcean droplet with a Kibana setup. The droplet has 2GB of RAM and enough CPU. The Elasticsearch JVM is set to use 768MB of RAM (so Kibana can have its share).
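In case it matters, this is roughly how the heap is configured; a minimal sketch, assuming a standard install where the heap is set in config/jvm.options:

    # config/jvm.options (path assumed for a standard Elasticsearch install layout)
    # Pin min and max heap to the same value so the JVM never resizes the heap at runtime
    -Xms768m
    -Xmx768m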

My problem is that I seem to be losing data: the node is stuck at 213 documents, and I have already noticed that some important documents are gone.

I couldn't find documentation on how this works. The only things I found are that more RAM is better when dealing with large amounts of data, and that having a secondary node to store replicas is good practice.

Should I allocate more RAM? How can I tell whether my data is being deleted, so I know when to allocate more? Is this some sort of pagination? Could this be a Kibana problem?

My cluster health:

    {
      "cluster_name" : "***",
      "status" : "yellow",
      "timed_out" : false,
      "number_of_nodes" : 1,
      "number_of_data_nodes" : 1,
      "active_primary_shards" : 20,
      "active_shards" : 20,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 5,
      "delayed_unassigned_shards" : 0,
      "number_of_pending_tasks" : 0,
      "number_of_in_flight_fetch" : 0,
      "task_max_waiting_in_queue_millis" : 0,
      "active_shards_percent_as_number" : 80.0
    }
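For reference, the output above is what the cluster health endpoint returns; a typical request, assuming the node listens on the default localhost:9200, looks like:

    # Ask the cluster for its overall health summary
    curl -s 'http://localhost:9200/_cluster/health?pretty'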

What does _cat/indices show?
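For example, assuming the default bind address of localhost:9200:

    # List every index with its health, document count, and store size (?v adds a header row)
    curl -s 'http://localhost:9200/_cat/indices?v'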

I solved it. It was the document IDs. I was generating them manually, and each time my system updated, the IDs were reset, so newly indexed documents overwrote the old documents that had the same IDs. I'm now researching how to generate IDs correctly.
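For future readers, a minimal sketch of the difference, assuming a recent Elasticsearch version with the _doc endpoint and a hypothetical index called my-index: indexing with an explicit ID overwrites any existing document with that ID, while POSTing without an ID lets Elasticsearch auto-generate a unique one.

    # PUT with an explicit ID: re-running this replaces document 1 instead of adding a new document
    curl -s -X PUT 'http://localhost:9200/my-index/_doc/1' \
      -H 'Content-Type: application/json' \
      -d '{"message": "overwrites whatever document 1 contained"}'

    # POST without an ID: Elasticsearch auto-generates a unique ID, so each call adds a new document
    curl -s -X POST 'http://localhost:9200/my-index/_doc' \
      -H 'Content-Type: application/json' \
      -d '{"message": "always indexed as a new document"}'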

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.