ES 6.3.0 log file ballooned to over 100,000 KB

Hello:

I am running an indexing process that builds an index from a relational database table containing over 26,000 rows.

After this process runs, I see my log file has ballooned to over 100,000 KB! Here is just a tiny snippet of the messages that are continuously being written to the log file:

2018-06-28 13:26:15 DEBUG wire:54 - http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
2018-06-28 13:26:15 DEBUG wire:54 - http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]"
2018-06-28 13:26:15 DEBUG wire:54 - http-outgoing-0 << "content-length: 44576[\r][\n]"
2018-06-28 13:26:15 DEBUG wire:54 - http-outgoing-0 << "[\r][\n]"
2018-06-28 13:26:15 DEBUG wire:68 - http-outgoing-0 << "{"took":112,"errors":false,"items":[{"index":{"_index":"xxx","_type":"resource","_id":"13141048","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2153,"_primary_term":1,"status":201}},{"index":{"_index":"xxx","_type":"resource","_id":"14306911","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2154,"_primary_term":1,"status":201}},{"index":{"_index":"xxx","_type":"resource","_id":"12744619","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2256,"_primary_term":1,"status":201}},{"index":{"_index":"xxx","_type":"resource","_id":"8884190","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":2141,"_primary_term":1,"status":201}}
and on and on and on....

and:

2018-06-28 13:26:15 DEBUG InternalHttpAsyncClient:292 - [exchange: 44] Connection can be kept alive indefinitely
2018-06-28 13:26:15 DEBUG MainClientExec:385 - [exchange: 44] Response processed
2018-06-28 13:26:15 DEBUG InternalHttpAsyncClient:233 - [exchange: 44] releasing connection
2018-06-28 13:26:15 DEBUG ManagedNHttpClientConnectionImpl:190 - http-outgoing-0 127.0.0.1:51219<->127.0.0.1:9200[ACTIVE][r:r]: Remove attribute http.nio.exchange-handler
2018-06-28 13:26:15 DEBUG PoolingNHttpClientConnectionManager:285 - Releasing connection: [id: http-outgoing-0][route: {}->http://localhost:9200][total kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 30]
2018-06-28 13:26:15 DEBUG PoolingNHttpClientConnectionManager:299 - Connection [id: http-outgoing-0][route: {}->http://localhost:9200] can be kept alive indefinitely
2018-06-28 13:26:15 DEBUG ManagedNHttpClientConnectionImpl:154 - http-outgoing-0 127.0.0.1:51219<->127.0.0.1:9200[ACTIVE][r:r]: Set timeout 0
2018-06-28 13:26:15 DEBUG PoolingNHttpClientConnectionManager:305 - Connection released: [id: http-outgoing-0][route: {}->http://localhost:9200][total kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 30]
2018-06-28 13:26:15 DEBUG RestClient:59 - request [POST http://localhost:9200/_bulk?timeout=1m] returned [HTTP/1.1 200 OK]
2018-06-28 13:26:15 DEBUG tracer:83 - curl -iX POST 'http://localhost:9200/_bulk?timeout=1m' -d '{"index":{"_index":"xxx","_type":"resource","_id":"13141048"}}

I am trying to shut off all of these messages so that my log file does not get slammed, but I have not been successful so far. Can anybody provide the log4j.properties entries I can use to stop my log file from being flooded like this?

Thank you for your time.
Gary

It looks like someone has enabled DEBUG logging somewhere. What does your logging config file look like?

Hello.
Thank you for responding.
Yes, the root logger in our log4j.properties file is set to DEBUG. However, I understand you should be able to turn off logging for specific packages, as we have done here with org.springframework:

# Root logger option

log4j.rootLogger=DEBUG, file, stdout

# Stop flooding the log with Spring framework messages:
log4j.logger.org.springframework=WARN

log4j.logger.org.apache.commons.httpclient=OFF
log4j.logger.org.apache.http.impl.nio.client=OFF
log4j.logger.org.apache.http.impl.nio.conn=OFF
log4j.logger.org.apache.http.impl.execchain=OFF

# Direct log messages to a log file

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=c:/NRD/logs/nrdapp.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=20
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

# Direct log messages to stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

These:
log4j.logger.org.apache.commons.httpclient=OFF
log4j.logger.org.apache.http.impl.nio.client=OFF
log4j.logger.org.apache.http.impl.nio.conn=OFF
log4j.logger.org.apache.http.impl.execchain=OFF

... are what I have entered in an attempt to turn off logging for the packages/classes that are slamming the log file.

So, we want to keep DEBUG in place for our custom Java code, but shut all logging off for those ES and Apache classes, and any and all dependent classes.
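For what it's worth, here is a sketch of entries that should cover the loggers visible in the output above. The logger names are taken from the log lines themselves; in particular, `tracer` is the name the Elasticsearch low-level REST client uses for its curl-style request logging, so treat this as a starting point to verify against your own output:

```properties
# Parent of the wire:, headers:, impl.nio.client, impl.nio.conn and
# impl.execchain loggers; turning off the parent silences all of them
log4j.logger.org.apache.http=OFF

# Elasticsearch REST client (the "RestClient:59" lines) and its
# curl-style request tracer (the "tracer:83" lines)
log4j.logger.org.elasticsearch.client=OFF
log4j.logger.tracer=OFF
```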

Thanks again
Gary

I managed to get rid of a lot of the DEBUG messages appearing in the log file by adding this static initializer block to my classes:

        // Requires Log4j 1.x on the classpath:
        // import org.apache.log4j.Level;
        // import org.apache.log4j.Logger;
        static {
            Logger.getLogger("org.apache.http").setLevel(Level.OFF);
            Logger.getLogger("org.apache.http.wire").setLevel(Level.OFF);
            Logger.getLogger("org.apache.http.headers").setLevel(Level.OFF);
            Logger.getLogger("org.apache.http.impl.conn").setLevel(Level.OFF);
            Logger.getLogger("org.apache.http.impl.nio.conn").setLevel(Level.OFF);
            Logger.getLogger("org.apache.http.impl.nio.client").setLevel(Level.OFF);
            Logger.getLogger("org.elasticsearch.client").setLevel(Level.OFF);
            Logger.getLogger("jdk.internal.instrumentation").setLevel(Level.OFF);
        }

However, all of the
"{"_index":"xxx","_type":"resource","_id":"13141048","_version":1,"result":"created","_shards"...." messages that are generated by a
curl -iX POST 'http://localhost:9200/_bulk?timeout=1m' -d
command, which appears to be issued somewhere deep within the BulkProcessor's overridden afterBulk methods, continue to fill up the log file quite extensively.
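The leftover messages are consistent with two loggers the static block does not reliably cover: the logger literally named `tracer`, which the low-level REST client uses to log each request in curl form and which is not under any of the packages switched off above, and `org.apache.http.wire`, whose programmatic shutoff only takes effect once the static initializer has actually run. Setting both in log4j.properties instead is more reliable (a sketch; verify the logger names against your own log lines):

```properties
# Request/response wire dumps (the "wire:54 - http-outgoing-0 << ..." lines)
log4j.logger.org.apache.http.wire=OFF

# curl-reproducible request lines (the "tracer:83 - curl -iX POST ..." lines)
log4j.logger.tracer=OFF
```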

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.