Problem with indexes - Logstash doesn't create Filebeat indexes

I am not able to create indexes using Filebeat on Windows. Apparently Logstash is not even receiving the logs.

I disabled firewalld and SELinux on the server running Logstash + Elasticsearch, and I can see the traffic arriving via tcpdump, but the indices are not created. I also disabled the firewall on the Windows machine that is sending the logs, but that did not help either.

Filebeat configuration:

filebeat.modules:

filebeat.prospectors:
- input_type: log

  paths:
    - C:\Program Files\filebeat\daily-server.json

  fields:
  fields_under_root: true
  document_type: json
  json.keys_under_root: true
  json.overwrite_keys: true


output.logstash:
  enabled: true
  hosts: ["192.168.1.130:5001"]
  ssl.enabled: true
  ssl.certificate_authorities: ['C:\cert\logstash.crt']

Logstash configuration:

input {

  tcp {
    type => "daily"
    port => "5007"
    codec => "json"
  }

    beats {
        port => 5001
        codec => "json_lines"
        ssl => true
        ssl_certificate => "/etc/logstash/logstash.crt"
        ssl_key => "/etc/logstash/logstash.key"
    }

}
filter {

}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "dailyserver-%{+YYYY.MM.dd}"
#   document_type => "dailyserver"
  }
}

The configuration above uses SSL, but I have also tested it without SSL.
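
For the no-SSL test, the Filebeat output section looked roughly like this (same host and port, just with the ssl.* lines removed):

output.logstash:
  enabled: true
  hosts: ["192.168.1.130:5001"]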

I made a video showing the problem. https://youtu.be/2vhpNZC1LSo

Can someone help?

Try removing the json_lines codec from the beats input in Logstash.
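
Something roughly like this, letting the beats input fall back to its default codec:

    beats {
        port => 5001
        ssl => true
        ssl_certificate => "/etc/logstash/logstash.crt"
        ssl_key => "/etc/logstash/logstash.key"
    }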

Hi @andrewkroh, that worked, Logstash is now receiving the logs, but it is not separating the JSON log fields, as can be seen in the Kibana screenshot.

I would like the fields to be split out.

There's something strange going on with your config. I think the empty fields: setting is causing issues (or maybe there's an indentation issue). If you are not using it, just remove it or set it to null (fields: ~). As you can see, you are getting those fields.* keys showing up in Elasticsearch. This is causing Filebeat to never honor the JSON decoding that you enabled.
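
The prospector would then look roughly like this (same settings as yours, with the empty fields settings removed):

filebeat.prospectors:
- input_type: log

  paths:
    - C:\Program Files\filebeat\daily-server.json

  document_type: json
  json.keys_under_root: true
  json.overwrite_keys: true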

Okay @andrewkroh, I commented out the fields lines and it worked. But I have another problem: the logs come from a Java application, and when a log entry contains a stack trace it gets a _jsonparsefailure tag and the JSON log fields cannot be split.

Example:
In this log the fields were split correctly:

{"level":"INFO","timestamp":"2017-06-29T14:28:44,358","thread":"noup-3-5","file":"Abstndler.java", "line":"192","message":"Sess at -> io.netty.channel.text@527a8fc","throwable":""}

In this log the fields were not split:

{"level":"INFO","timestamp":"2017-06-29T13:58:22,440","thread":"pool-22-thread-1","file":"con.java", "line":"101","message":" ms.TimeoutException","throwable":"br.com. ms.exception.TimeoutException: ms.TimeoutException\n at br.com. ms.transport.nio.bytes.NioPeer.read(NioPeer.java:155) ~[ core.jar:?]\n at br.com. ms.transport.nio.bytes.NioChannel.read(NioChannel.java:65) ~[ core.jar:?]\n at br.com. phast.ms.transport.dec.read(dec.java:101) ~[ddd.jar:?]\n at br.com. b.readMessage(b.java:360) ~[ core.jar:?]\n at br.com.b.read(b.java:240) ~[ core.jar:?]\n at br.com. SingleSessionConnection.handleInputConnection(SingleSessionConnection.java:98) ~[ core.jar:?]\n at br.com. SingleSessionConnection.handle(SingleSessionConnection.java:72) ~[ core.jar:?]\n at br.com. bManager.connect(bManager.java:169) ~[ core.jar:?]\n at br.com. BaseConnector.connect(BaseConnector.java:35) ~[ core.jar:?]\n at br.com. ms.transport.BaseBinder.bind(BaseBinder.java:58) ~[ core.jar:?]\n at br.com. ms.transport.nio.bytes.NioPeer.bind(NioPeer.java:196) ~[ core.jar:?]\n at br.com. ms.transport.nio.netty.ServerHandler$1.run(java:52) ~[ core.jar:?]\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.0]\n at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.0]\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.0]\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.0]\n at java.lang.Thread.run(Thread.java:745) [?:1.0]\n"}

The developer added a \n (marked in the image) to try to break the line in the throwable field, which is where the stack trace goes, but it did not work. It only works if the \n is placed together with the string where the next line would start, but the developer cannot do that.

Do you have any ideas that might help @andrewkroh ?

I do not know if that was clear; my English is a bit rough. :slight_smile:
