Is there a solution available for the issue mentioned in "153152"?


(GPK) #1

Hi Team,

Has the issue mentioned in the discussion below been solved? If so, could you let me know in which version? I am still seeing this issue in the latest GA version, 6.5.4.

Thanks in advance

GPK


(Guy Boertje) #2

Are you running Logstash in docker?


(GPK) #3

No, it's a standalone setup on Windows 10.

Using the CLI, I am running "logstash -conf <logstash_extract_dir>\config\MyFile.conf".
I did NOT install anything as a service [ELK Stack]. I downloaded and extracted the GA version and run it from the Windows CLI.

The same setup works with 5.6.14/6.2.4, but the issue is observed in 6.5.2/6.5.4. It might be specific to Windows.


(Guy Boertje) #4

OK.

We included some metrics collectors for Docker container stats. You are seeing a log message that the collector emits at DEBUG level when it can't find the files to read; in some containers the files are in odd places, so we log a debug message if we can't find them.

Unfortunately, we have not yet added the ability to disable the cgroup collectors when you know there is no cgroup info.

The workaround is probably a good thing to know in general: make your debug logging more targeted at the area or component of interest, so that the metrics collectors are not logging at DEBUG level.

Logstash can set logging levels at a finer granularity, i.e. you can DEBUG-log the grok filter while logging everything else at INFO. This means the log files contain only the debug messages you need. In addition, this can be done dynamically through the REST API or permanently in the log4j2 config files.
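As a rough sketch of the permanent route (the component name here is illustrative, not a recommendation), the same per-component levels can be set in config/log4j2.properties, e.g.:

# keep everything quiet by default
rootLogger.level = warn

# but DEBUG-log one component of interest
logger.discoverer.name = filewatch.discoverer
logger.discoverer.level = debug

Unlike the API calls below, this survives a restart of Logstash.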

This is what I've been pasting into various support channels:


The logging API allows for different levels of logging for different components in LS.

First, run curl -XGET 'localhost:9600/_node/logging?pretty'
You will see something like this:

{
  "host" : "Elastics-MacBook-Pro.local",
  "version" : "6.4.0",
  "http_address" : "127.0.0.1:9600",
  "id" : "8789409b-7126-4034-9347-de47e6ce12a9",
  "name" : "Elastics-MacBook-Pro.local",
  "loggers" : {
    "filewatch.discoverer" : "DEBUG",
    "filewatch.observingtail" : "DEBUG",
    "filewatch.sincedbcollection" : "DEBUG",
    "filewatch.tailmode.handlers.createinitial" : "DEBUG",
    "filewatch.tailmode.processor" : "DEBUG",
    "logstash.agent" : "DEBUG",
    "logstash.api.service" : "DEBUG",
    "logstash.codecs.json" : "DEBUG",
    ...
    "logstash.filters.date" : "DEBUG",
    "logstash.inputs.file" : "DEBUG",
    ...
    "logstash.outputs.stdout" : "DEBUG",
    "logstash.pipeline" : "DEBUG",
    ...
    "slowlog.logstash.codecs.json" : "TRACE",
    "slowlog.logstash.codecs.rubydebug" : "TRACE",
    "slowlog.logstash.filters.date" : "TRACE",
    "slowlog.logstash.inputs.file" : "TRACE",
    "slowlog.logstash.outputs.stdout" : "TRACE"
  }
}

Using the API
Turn TRACE on for one component:

curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
    "logger.filewatch.discoverer" : "TRACE"
}
'

Turn TRACE off again:

curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
    "logger.filewatch.discoverer" : "WARN"
}
'

Or reset all loggers to their configured defaults:

curl -XPUT 'localhost:9600/_node/logging/reset?pretty'

NOTE: it might be a good idea to start LS with logging set to WARN in the logstash.yml so other logging is less verbose.
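For example (assuming the default file layout), a minimal line in config/logstash.yml:

log.level: warn

With that as the baseline, only the components you explicitly raise via the API or log4j2 config will emit DEBUG/TRACE output.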