Queries and Issues faced with Logstash

I have five questions and issues I have faced with Logstash.

  1. Do I really need to configure logstash.conf with input and output plugins? I did not configure anything inside logstash.conf, yet I am still getting logs in JSON.

  2. My application also uses Docker. Should I configure Logstash inside Docker? (I am new to Docker, and the concept of an "image" is still confusing to me.)

  3. I observe that the error stack trace captured in the ERROR-level event does not come line by line; it is all on a single line. The JSON format does not seem to render the \n characters inside the error logs:

     {
       "@timestamp": "2018-01-27T14:52:36.708+04:00",
       "@version": 1,
       "message": "Invalid Request Exception:",
       "logger_name": "com.json.logging.demo.web.controller.RestErrorHandler",
       "thread_name": "http-nio-3009-exec-1",
       "level": "ERROR",
       "level_value": 40000,
       "stack_trace": "com.json.logging.demo.exception.PullEmployeeDataException: null\n\tat com.json.logging.demo.web.controller.admin.EmployeeCodeController$$FastClassBySpringCGLIB$$15b35cb5.invoke(<generated>)\n\tat org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)\n\tat org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:721)\n..."
     }

  4. I get the SLF4J message below at the bottom of the logs, shown in red. I want to provide JSON logging in our live application, which generates thousands of log entries daily. Does this message override or overwrite any error or debug logs?

    SLF4J: A number (51) of logging calls during the initialization phase have been intercepted and are
    SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
    SLF4J: See also http://www.slf4j.org/codes.html#replay
    
  5. Why do some parts of the logs not come in JSON format?

    20:21:31,215 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-STDOUT] - Worker thread will flush remaining events before exiting.
    20:21:31,215 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-STDOUT] - Queue flush finished successfully within timeout.
    20:21:31,216 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-FILE] - Worker thread will flush remaining events before exiting.
    20:21:31,216 |-INFO in ch.qos.logback.classic.AsyncAppender[ASYNC-FILE] - Queue flush finished successfully within timeout.
    20:21:31,218 |-WARN in Logger[org.graylog2.gelfclient.transport.GelfSenderThread] - No appenders present in context [default] for logger

> Do I really need to configure logstash.conf with input and output plugins? I did not configure anything inside logstash.conf, yet I am still getting logs in JSON.

I don't understand the question. "logstash.conf" is not a standardized configuration filename, so you don't need that particular file at all, but you obviously need inputs and outputs in some file that Logstash reads if you want it to actually do something.
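
For completeness, here's a minimal sketch of what such a pipeline file could look like. The log path and the Elasticsearch host below are illustrative assumptions, not values from your setup:

    # minimal-pipeline.conf -- a sketch, not a drop-in config
    input {
      file {
        path => "/var/log/myapp/app.json"   # hypothetical path to your JSON log file
        codec => "json"                     # parse each line as a JSON event
      }
    }
    output {
      elasticsearch {
        hosts => ["localhost:9200"]         # hypothetical Elasticsearch endpoint
      }
    }

You would point Logstash at it with `bin/logstash -f minimal-pipeline.conf`, or place it in whatever pipeline directory your installation reads from.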

> My application also uses Docker. Should I configure Logstash inside Docker? (I am new to Docker, and the concept of an "image" is still confusing to me.)

I recommend storing the log files produced by the application persistently (either in a volume or in a directory mounted from the host) and running Logstash in a separate container (or directly on the host) to process those files.
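
As a sketch of that layout in docker-compose (the service names, application image, and paths are illustrative assumptions), using a named volume shared between the two containers:

    version: "3"
    services:
      myapp:
        image: myapp:latest                          # your application image
        volumes:
          - app-logs:/var/log/myapp                  # the app writes its JSON logs here
      logstash:
        image: docker.elastic.co/logstash/logstash:6.1.1
        volumes:
          - app-logs:/var/log/myapp:ro               # same logs, mounted read-only
          - ./pipeline:/usr/share/logstash/pipeline  # pipeline configs from the host
    volumes:
      app-logs:

The named volume `app-logs` is what makes the logs persistent and shared; either container can be rebuilt or restarted without losing them.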

> Why do some parts of the logs not come in JSON format?

This isn't the best place to ask why Logback doesn't behave as you expect.
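
That said, the `|-INFO in ...` lines you pasted are Logback's internal status messages rather than log events, so they never pass through your JSON encoder. If they are just noise for you, Logback can suppress them with a status listener in logback.xml; a sketch (keep your existing appenders and loggers as they are):

    <configuration>
      <!-- Suppress Logback's internal status output entirely; use
           OnConsoleStatusListener instead if you want to keep it visible. -->
      <statusListener class="ch.qos.logback.core.status.NopStatusListener" />
      <!-- appenders and loggers as before -->
    </configuration>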
