Hi,
I have a 5-node ES cluster (each node has a single CPU core and 4 GB of RAM) that receives data from Metricbeat and Winlogbeat via Logstash. The data amounts to roughly 175 GB and is stored in one index per day.
Even when I search for just one hour of data, the queries take a very long time.
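For reference, a typical search looks roughly like the one below. This is a simplified sketch rather than the exact query: the index name is just an example, and the metricset.name filter and one-minute date_histogram are the sort of thing Kibana sends for Metricbeat data.
GET metricbeat-2017.04.10/_search
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "@timestamp": { "gte": "now-1h", "lte": "now" } } },
        { "term": { "metricset.name": "cpu" } }
      ]
    }
  },
  "aggs": {
    "cpu_over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "1m" }
    }
  }
}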
Below is our config:
cluster.name: clustername
node.name: ${HOSTNAME}
path.data: /apps/elasticsearch-5.2.2/data,/data1,/data2
bootstrap.memory_lock: true
node.data: true
node.master: true
node.ingest: true
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["node1", "node2","node3", "node4", "node5"]
discovery.zen.ping_timeout: 30s
discovery.zen.minimum_master_nodes: 3
thread_pool.bulk.queue_size: 1000
xpack.security.enabled: false
indices.memory.index_buffer_size: 30%
indices.memory.min_index_buffer_size: 512mb
Am I doing something wrong?
Do I need to customise my mappings? Should I store a week's or a month's worth of data per index instead of a day's?
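In case it helps clarify the mapping question, this is the kind of index template change I have been wondering about. It is only a sketch I have not applied: the shard count, replica count, and refresh interval are placeholder values, and the dynamic template simply maps every new string field to keyword instead of analysed text. For weekly or monthly indices I assume I would only need to change the index name pattern in the Logstash elasticsearch output.
PUT _template/metricbeat
{
  "template": "metricbeat-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "refresh_interval": "30s"
  },
  "mappings": {
    "_default_": {
      "_all": { "enabled": false },
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "match_mapping_type": "string",
            "mapping": { "type": "keyword" }
          }
        }
      ]
    }
  }
}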
Thanks