I'm new to Elastic and I would appreciate any help. I have a problem with the error.log file, whose size has become enormous, and the shards are failing to initialize.
Could you please point me in the right direction to solve the problem, and also suggest precautions so that it doesn't happen again? I am in a development environment right now, but it would be great to take measures to avoid such problems when I go to production.
The first thing to do, IMO, is to look at the first lines of your logs from when it started to fail.
From a sysadmin's point of view, I'd suggest:
- Activate logrotate by size and add logrotate to an hourly cron job
- Separate the log partition from the data partition
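The size-based rotation suggested above could look something like the following logrotate config; the path and the 1G threshold are assumptions here, so adjust them to your setup:

```
# Hypothetical /etc/logrotate.d/elasticsearch
/var/log/elasticsearch/*.log {
    size 1G        # rotate only once the file exceeds 1G
    rotate 30      # keep up to 30 rotated archives
    compress       # gzip rotated files
    missingok
    notifempty
}
```

Since logrotate normally runs from /etc/cron.daily, a size rule is only checked once a day; putting an invocation into /etc/cron.hourly makes the check hourly, as suggested.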
First I compressed the error.log file and the shards started working. Then I set up logrotate by size, and I also added a file with the cleanup command below:
find /var/log/nginx/elasticsearch -name "error.log.*.gz" -mtime +30 -delete
and as far as I can see it works perfectly. It compresses the error.log file if its size is greater than 1G, and this check is done every hour. Thank you very much Vitaly_il!!
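One caveat worth noting: the find command above does not compress anything itself; it only deletes already-compressed rotated logs older than 30 days (the compression comes from logrotate). A small self-contained demonstration against a temporary directory (the real log path is replaced by a mktemp dir; `touch -d` is GNU coreutils):

```shell
# Scratch directory standing in for /var/log/nginx/elasticsearch
dir=$(mktemp -d)

touch "$dir/error.log.1.gz"                  # fresh archive: should survive
touch -d '31 days ago' "$dir/error.log.2.gz" # old archive: should be deleted

# Same pattern as the cleanup command, pointed at the scratch dir;
# -mtime +30 matches files last modified more than 30 days ago
find "$dir" -name "error.log.*.gz" -mtime +30 -delete

ls "$dir"   # only error.log.1.gz remains
```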
My pleasure; but I strongly suggest checking the reason for such big logs: it seems to be either a serious Elastic issue or too high a log level.
It seems that every time I make a request to my site, error.log grows by 200-400 KB, and when I read the log I see that 97% of the messages are of "debug" type. Should I stop logging these debug messages, and how would I do that? Are they important/useful?
I suggest decreasing the log level to "INFO" or even "WARN".
Usually it's configured in /etc/elasticsearch/log4j2.properties; see https://www.elastic.co/guide/en/elasticsearch/reference/5.6/settings.html for more.
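For example, lowering the verbosity usually comes down to the root logger line in that file; this is a sketch, and the exact logger entries in your file may differ:

```
# /etc/elasticsearch/log4j2.properties (excerpt)
# Raise the threshold from debug to info so debug messages are dropped
rootLogger.level = info
```

After editing the file, restart the Elasticsearch node so the new level takes effect. Elasticsearch can also change logger levels dynamically via the cluster settings API, which avoids a restart.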
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.