I have 5 questions and issues I have faced with Logstash.
-
Do I really need to configure logstash.conf with input and output plugins? I did not configure anything inside logstash.conf, yet I am still getting logs in JSON.
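For context, my understanding is that a minimal logstash.conf normally declares at least one input and one output. This is a sketch of what I expected to need; the Beats port and the Elasticsearch host are assumptions, not my actual setup:

```
# Hypothetical minimal logstash.conf -- port and host are assumptions
input {
  beats {
    port => 5044                     # assumed Filebeat/Beats port
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]      # assumed Elasticsearch address
  }
  stdout { codec => rubydebug }      # echo events to the console for debugging
}
```

Since I left the file empty and still see JSON, I am not sure whether something like this is actually required.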
-
My application also uses Docker. Should I configure Logstash inside Docker? (I am new to Docker, and the term "image" in particular confuses me.)
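In case it matters, this is roughly how I imagined running Logstash as a container; the image tag, mounted path, and port here are assumptions rather than my real configuration:

```yaml
# Hypothetical docker-compose.yml fragment -- image tag and paths are assumptions
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:6.1.2
    volumes:
      # mount a local pipeline config into the container
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044"   # assumed Beats input port
```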
-
What I observe is that the stack trace captured in the stack_trace attribute (at ERROR level) does not appear line by line; it is all on a single line. The \n escapes inside the JSON do not seem to be rendered as line breaks in the error logs:
{ "@timestamp": "2018-01-27T14:52:36.708+04:00", "@version": 1, "message": "Invalid Request Exception:", "logger_name": "com.json.logging.demo.web.controller.RestErrorHandler", "thread_name": "http-nio-3009-exec-1", "level": "ERROR", "level_value": 40000, "stack_trace": "com.json.logging.demo.exception.PullEmployeeDataException: null\n\tat com.json.logging.demo.web.controller.admin.EmployeeCodeController$$FastClassBySpringCGLIB$$15b35cb5.invoke(<generated>)\n\tat org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)\n\tat org.springframework.aop.framework.CglibAopProxy$ glibAopProxy.java:721)\n\
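To check whether the \n characters are really lost or merely escaped, I parsed a cut-down version of the event (the class names below are shortened placeholders, not my real ones):

```python
import json

# A cut-down log event using the same \n\t escapes that appear in my
# real output; the exception and class names are placeholders.
event = json.loads(
    '{"level": "ERROR", '
    '"stack_trace": "com.example.DemoException: null'
    '\\n\\tat com.example.Controller.invoke(Controller.java:42)'
    '\\n\\tat org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)"}'
)

# After parsing, the \n escapes become real newlines, so the trace
# prints line by line even though the raw JSON is a single line.
for line in event["stack_trace"].split("\n"):
    print(line)
```

So it may be that the single-line output is just the normal JSON string encoding, and any consumer that parses the JSON gets the line breaks back.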
-
I get the SLF4J message below at the bottom of the logs, shown in red. I want to provide JSON logging in our live application, which generates thousands of log entries daily; does this message override/overwrite any ERROR or DEBUG logs?
SLF4J: A number (51) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay
-
Why do some parts of the logs not come out in JSON format?
20:21:31,215 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-STDOUT] - Worker thread will flush remaining events before exiting.
20:21:31,215 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-STDOUT] - Queue flush finished successfully within timeout.
20:21:31,216 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-FILE] - Worker thread will flush remaining events before exiting.
20:21:31,216 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-FILE] - Queue flush finished successfully within timeout.
20:21:31,218 |-WARN in Logger[org.graylog2.gelfclient.transport.GelfSenderThread] - No appenders present in context [default] for logger
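From what I can tell, these |-INFO / |-WARN lines are Logback's own internal status messages, not application log events, so the JSON encoder never sees them. I assume something like the following in logback.xml would silence them, though I have not verified this is the right approach:

```xml
<configuration>
  <!-- Suppress Logback's internal status output (the |-INFO / |-WARN lines) -->
  <statusListener class="ch.qos.logback.core.status.NopStatusListener" />
  <!-- ... existing appenders and loggers unchanged ... -->
</configuration>
```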