Low disk watermark exceeded

Hello,
I have installed Elasticsearch, Kibana and Filebeat 6.5.4.
I am planning to fetch Windows logs and logs from a custom directory.
I have been successful in doing this but with some issues.

Issue 1: [2019-03-04T09:40:29,629][INFO ][o.e.c.r.a.DiskThresholdMonitor] [SERVERNAME] low disk watermark [85%] exceeded on [5lx750qwSgi6NyP0UXEQng][SERVERNAME][C:\ProgramData\Elastic\Elasticsearch\data\nodes\0] free: 26.3gb[11.3%], replicas will not be assigned to this node

I need a solution for this. I have read other threads on the topic and changed the low and high watermark settings in the config file, but it still seems to be an issue.

Issue 2: I am not able to fetch logs from a custom directory correctly. Logs do appear in Kibana, but only the first few lines of the log file.

Also, while Kibana is running, the free space on my C drive keeps decreasing. When I started Kibana, the available space on the C drive was 32 GB, and after a while it dropped to 26 GB. What is causing this?

Any help would be appreciated.
Thanx.

Are you running the stack as a development environment and not a production environment?

Hi,
I am setting up and testing the stack on my local machine first, and then we would be using it in production (for log management).
Thanx.

How would I run the stack in production? How different would it be from running it as a development environment?
Thanx.

Elasticsearch uses conservative default values to make sure it can correctly allocate replicas of the shards. Some operations on shards require disk space, so Elasticsearch uses these values as guards. It is possible to change the thresholds: define the following in your config/elasticsearch.yml and restart the node.

cluster.routing.allocation.disk.watermark.low
Controls the low watermark for disk usage. It defaults to 85%, meaning that Elasticsearch will not allocate shards to nodes that have more than 85% disk used. It can also be set to an absolute byte value (like 500mb) to prevent Elasticsearch from allocating shards if less than the specified amount of space is available. This setting has no effect on the primary shards of newly-created indices or, specifically, any shards that have never previously been allocated.
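
For example, a minimal sketch of that part of config/elasticsearch.yml, using absolute values (the numbers are only illustrative; with absolute byte values the watermarks refer to free space remaining, so low must be the largest and flood_stage the smallest):

cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 20gb
cluster.routing.allocation.disk.watermark.high: 15gb
cluster.routing.allocation.disk.watermark.flood_stage: 10gb

The node has to be restarted for changes in elasticsearch.yml to take effect.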

Issue 2: I am not able to fetch logs from a custom directory correctly. Logs do appear in Kibana, but only the first few lines of the log file.

For the above I will need more details: are you using Logstash or Filebeat? If so, can you share your configuration file and a bit of information about the environment?
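
If it is Filebeat, the section I am most interested in is the input and output configuration, which for a Filebeat 6.x log input typically looks roughly like this (a minimal sketch; the path and host below are placeholders, not your actual values):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - 'C:\path\to\logs\*.log'

output.elasticsearch:
  hosts: ["localhost:9200"]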

Also, while Kibana is running, the free space on my C drive keeps decreasing. When I started Kibana, the available space on the C drive was 32 GB, and after a while it dropped to 26 GB. What is causing this?

This might be better asked on the Kibana forum, but I think we should fix the log issue first.

Hi, thanx for your reply.
Below are the details of my Elasticsearch instance:

{
  "name" : "...........",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "rybM16R7TWOI7ZvY19gsHQ",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "default",
    "build_type" : "zip",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

I have set the following values in elasticsearch.yml

cluster.name: elasticsearch
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb
cluster.routing.allocation.disk.watermark.low: 20gb
cluster.routing.allocation.disk.watermark.high: 15gb

I am still getting the same disk watermark error.
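
Is there a way to confirm these values are actually being picked up after a restart? I assume something like the cat allocation API would at least show the per-node disk usage (a sketch, assuming the default local port):

GET _cat/allocation?v
(e.g. http://localhost:9200/_cat/allocation?v in a browser)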

I am using Filebeat to fetch Windows logs and logs from a custom directory.

Filebeat.yml
#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - 'C:\Test*.log'

I have stored just a single file in the Test folder, and I can see logs in Kibana.

I have the following concerns:
We need to ingest around 5 GB of data each day.
We need to retain logs for 365 days and perform active searches on them.
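(That works out to roughly 5 GB × 365 ≈ 1.8 TB of raw log data per year, before replicas and indexing overhead.)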

Will this configuration be appropriate for the above scenario?

Thanx.

Hello,
Any updates on this please?

Thanx.

What setup would be appropriate for what? You don't say anything about the hardware you plan to run on in production, so how can anyone make a qualified statement about whether it's enough?

Hi, thanx for the reply.

I figured out what the issue was.
Sorry about the confusion. I have set up Elasticsearch on my local machine, but the production machine would be a server, probably with 1 TB of disk.

Thank you.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.