Elasticsearch - Not able to see the latest logs

Hi,

I am trying to find out why new entries are no longer being written to the log file.
I did the following sanity checks:

  • Checked the owner of the process
  • Verified there are enough resources, such as memory and disk space
  • Confirmed the log file is generated at the path set by path.logs in elasticsearch.yml

Am I missing anything to validate?

Note: I am running Elasticsearch on a Windows machine.

Thanks,

Hi @chintushah46

Which logs are you talking about: the Elasticsearch logs, or some other logs?

If the Elasticsearch logs, can you post your elasticsearch.yml? Please format it with the </> button.

How did you install it: the zip or the MSI?

Are there any logs at all, or just no new ones?

Hi Stephen, thanks for your response.

Sorry for not being clear enough on my initial question.

Yes, the Elasticsearch logs.
Here is the content of the elasticsearch.yml file:

network.host: 0.0.0.0
cluster.routing.allocation.disk.watermark.low: 1gb
cluster.routing.allocation.disk.watermark.high: 512mb
cluster.routing.allocation.disk.watermark.flood_stage: 512mb
cluster.name: domain
node.name: N1
discovery.zen.ping.unicast.hosts: ["N1","N2.domain.com","N3.domain.com"]
path.logs: C:\ProgramData\COMP\YSearch\

I am using the OSS version of Elasticsearch, so I just downloaded the zip file, unzipped it, created the required folders, and started the service from the bin folder.

Yes, there were logs initially, and then it suddenly stopped adding new ones. I restarted the Elasticsearch service a few times, and each time it generated logs for a few hours and then stopped again.

Note: Elasticsearch version: 6.3.1

Let me know if you need any more details.

A couple of things:

Why are you setting those watermarks so small? Those are incredibly small values for disk usage. They are also in the wrong order: low should be the lowest setting, not higher than high and flood_stage.
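
For reference, if I remember the 6.x defaults correctly, the watermarks are percentages of used disk, so low triggers first:

cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%

Note that absolute byte values such as 1gb are interpreted as free space remaining rather than space used, so check the docs for the ordering that applies to your units before reordering.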

Also, a question: when the logs stop, is Elasticsearch still running? Can you still run

curl http://localhost:9200

and get a result?
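
If the node is up, you should get back a small JSON document along these lines (trimmed; the values shown here are illustrative):

{
  "name" : "N1",
  "cluster_name" : "domain",
  "version" : {
    "number" : "6.3.1"
  },
  "tagline" : "You Know, for Search"
}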

Are you running out of disk space in general?

6.3.1 is a very old release, but that should not be the issue.

For QA I have set lower numbers; for production I have higher ones. Thanks for pointing out the order; I'll correct it.

Yes, the Elasticsearch service is running fine; indexing, search, and other operations all work perfectly. It's just that new logs are not being added to the log files.

Not really.

What log level do you have set?
How are you looking at the logs? They do rotate...
Are you looking at the new log file after it rotates?
Perhaps there is a misconfiguration in the log rotation settings?

Also, Elasticsearch does not print a lot of logs when things are just static... so are you sure you are actually not getting new logs?
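
One quick way to tell a quiet node from broken logging is to temporarily raise a logger's verbosity through the cluster settings API and watch whether anything new lands in the file. A sketch; the logger name here is just an example:

curl -X PUT "http://localhost:9200/_cluster/settings" -H "Content-Type: application/json" -d "{\"transient\": {\"logger.org.elasticsearch.cluster\": \"TRACE\"}}"

Set the value back to null afterwards to restore the default level.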

It's set to the debug level.

Yes, they do rotate, based on size or time.
Since this is on Windows, I am checking the most recent log file.

What could the misconfiguration be? It would be great if you could help me with this.
Here are the contents of log4j2.properties:

status = error

logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}MobiControlSearch.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}CSSearch-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = CSSearch-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB

Yes, I am pretty sure about the issue: the average size of a log file used to be around 100 MB, and now it's just 2 KB.

Let me know if you need any more details.

Apologies, but I cannot debug the whole logging setup. What I would suggest is going back to the default log4j2.properties, checking that it works as expected, and then adjusting from there.

I also already noticed that you hardcoded the cluster name:

appender.rolling.strategy.action.condition.glob = CSSearch-*

instead of

appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*

So I would start from the defaults and adjust from there...
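
For comparison, the stock 6.x log4j2.properties keys both the file names and the delete glob off the cluster name throughout. Roughly (double-check against the copy shipped with your version):

appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*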
