Problem with indexes - Logstash doesn't create Filebeat indices

I am not able to create indices using Filebeat on Windows. Apparently Logstash is not even receiving the logs.

I disabled firewalld and SELinux on the server that runs Logstash + Elasticsearch, and I can see the traffic arriving via tcpdump, but the indices are not created. I also disabled the firewall on Windows, which is the machine sending the logs, but that did not help either.

Filebeat configuration:


filebeat.prospectors:
- input_type: log
  paths:
    - C:\Program Files\filebeat\daily-server.json
  fields:
  fields_under_root: true
  document_type: json
  json.keys_under_root: true
  json.overwrite_keys: true

output.logstash:
  enabled: true
  hosts: [""]
  ssl.enabled: true
  ssl.certificate_authorities: ['C:\cert\logstash.crt']

Logstash configuration:

input {
  tcp {
    type => "daily"
    port => "5007"
    codec => "json"
  }

  beats {
    port => 5001
    codec => "json_lines"
    ssl => true
    ssl_certificate => "/etc/logstash/logstash.crt"
    ssl_key => "/etc/logstash/logstash.key"
  }
}

filter {
}

output {
  elasticsearch {
    hosts => ""
    index => "dailyserver-%{+YYYY.MM.dd}"
#   document_type => "dailyserver"
  }
}

The configuration above uses SSL, but I have also already tested without SSL.

I made a video showing the problem.

Can someone help?

Try removing the json_lines codec from the beats input in Logstash.
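A minimal sketch of the suggested change, reusing the port and certificate paths from the config above: with the codec option removed, the beats input falls back to its default codec and lets Filebeat's own framing deliver the events.

```
  beats {
    port => 5001
    ssl => true
    ssl_certificate => "/etc/logstash/logstash.crt"
    ssl_key => "/etc/logstash/logstash.key"
  }
```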

Hi @andrewkroh, that worked; Logstash is now receiving the logs, but it is not splitting the JSON log fields, as can be seen in the Kibana screenshot.

I would like the fields to be split out.

There's something strange going on with your config. I think the empty fields: setting is causing issues (or maybe there's an indentation issue). If you are not using it, just remove it or set it to null (fields: ~). As you can see, those fields.* keys are showing up in Elasticsearch, and this is preventing Filebeat from ever honoring the JSON decoding that you enabled.
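For reference, a sketch of the prospector section with the empty fields: setting dealt with as suggested (either removed entirely or set to null); indentation here follows the Filebeat 5.x reference config:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - C:\Program Files\filebeat\daily-server.json
  # fields: ~    <- either set it to null like this, or omit the line entirely
  fields_under_root: true
  document_type: json
  json.keys_under_root: true
  json.overwrite_keys: true
```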

Okay @andrewkroh, I commented out the fields lines and it worked. But I have another problem: the logs come from a Java application, and when a log entry contains a stack trace it gets _jsonparsefailure, and the JSON log fields cannot be split.

In this log the fields were split correctly:

{"level":"INFO","timestamp":"2017-06-29T14:28:44,358","thread":"noup-3-5","file":"", "line":"192","message":"Sess at ->","throwable":""}

In this log the fields were not split:

{"level":"INFO","timestamp":"2017-06-29T13:58:22,440","thread":"pool-22-thread-1","file":"", "line":"101","message":" ms.TimeoutException","throwable":" ms.exception.TimeoutException: ms.TimeoutException\n at ~[ core.jar:?]\n at ~[ core.jar:?]\n at ~[ddd.jar:?]\n at b.readMessage( ~[ core.jar:?]\n at ~[ core.jar:?]\n at SingleSessionConnection.handleInputConnection( ~[ core.jar:?]\n at SingleSessionConnection.handle( ~[ core.jar:?]\n at bManager.connect( ~[ core.jar:?]\n at BaseConnector.connect( ~[ core.jar:?]\n at ms.transport.BaseBinder.bind( ~[ core.jar:?]\n at ms.transport.nio.bytes.NioPeer.bind( ~[ core.jar:?]\n at ms.transport.nio.netty.ServerHandler$ ~[ core.jar:?]\n at java.util.concurrent.Executors$ ~[?:1.0]\n at ~[?:1.0]\n at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.0]\n at java.util.concurrent.ThreadPoolExecutor$ [?:1.0]\n at [?:1.0]\n"}

The developer tried putting a \n (marked in the image) in the throwable field, which is where the stack trace goes, to break the line, but it did not work. It only works if the \n is placed together with the string where the next line would start, but the developer cannot do that.
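To illustrate why the stack trace triggers the parse failure (a sketch in Python, independent of the Beats/Logstash stack): an escaped \n inside a JSON string keeps the whole event on one physical line, so line-oriented JSON decoding succeeds; a literal newline splits the event across several physical lines, and each partial line then fails to parse on its own, which is what produces the _jsonparsefailure tag.

```python
import json

# Escaped "\n" inside the JSON string: the whole event is still one
# physical line, so line-oriented JSON decoding succeeds.
single_line = '{"level":"ERROR","throwable":"TimeoutException\\n\\tat Foo.bar(Foo.java:1)"}'
event = json.loads(single_line)
print(repr(event["throwable"]))  # the escapes decode to real newline/tab characters

# Literal newline characters inside the string: the event now spans two
# physical lines, and neither line is valid JSON on its own.
multi_line = '{"level":"ERROR","throwable":"TimeoutException\n\tat Foo.bar(Foo.java:1)"}'
for line in multi_line.splitlines():
    try:
        json.loads(line)
        print("parsed OK")
    except json.JSONDecodeError:
        print("parse failure")  # both lines fail here
```

So the fix belongs on the logging side: the application must emit each event as a single line with the newlines escaped inside the JSON string, exactly as in the first example above.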

Do you have any ideas that might help @andrewkroh ?

I do not know if that was clear; my English is a bit bad. :slight_smile:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.