Filebeat can't send data to Logstash

I have a problem sending data from *.log files to Logstash. This is my Filebeat configuration:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/centos/logs/*.log  
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: ["10.206.81.234:5044"]

This is my Logstash configuration (logstash.yml):

path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/*.conf
path.logs: /var/log/logstash
xpack.monitoring.elasticsearch.url: ["10.206.81.236:9200", "10.206.81.242:9200", "10.206.81.243:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: logstash
queue.type: persisted
queue.checkpoint.writes: 10

And this is my pipeline in /etc/logstash/conf.d/test.conf

input {
    beats {
        port => "5044"
    }
    file{
        path => "/home/centos/logs/mylogs.log"
        tags => "mylog"
    }
    file{
        path => "/home/centos/logs/syslog.log"
        tags => "syslog"
    }
}
filter {
}
output {
    if [tag] == "mylog" {
        elasticsearch {
            hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
            user => "Test"
            password => "123456"
            index => "mylog-%{+YYYY.MM.dd}"
        }
    }

    if [tag] == "syslog" {
        elasticsearch {
            hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
            user => "Test"
            password => "123456"
            index => "syslog-%{+YYYY.MM.dd}"
        }
    }
}

I tried to have two separate outputs, one for mylog and one for syslog. At first it worked like this: everything was written to the mylog-%{+YYYY.MM.dd} index, even events from syslog. So I tried changing the second if statement to else if. That did not work, so I changed it back. Now my Filebeat is not able to send data to Logstash and I am receiving these errors:

2018/01/20 15:02:10.959887 async.go:235: ERR Failed to publish events caused by: EOF
2018/01/20 15:02:10.964361 async.go:235: ERR Failed to publish events caused by: client is not connected
2018/01/20 15:02:11.964028 output.go:92: ERR Failed to publish events: client is not connected

My second test was to change my pipeline like this:

input {
    beats {
        port => "5044"
    }
    file{
        path => "/home/centos/logs/mylogs.log"
    }
}
filter {
    grok{
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}
output {
    elasticsearch {
        hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
        user => "Test"
        password => "123456"
        index => "mylog-%{+YYYY.MM.dd}"
    }
}
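For context, the lines in mylogs.log look like 12.4.14.27 abcd /index.html 138 0.23, and the grok pattern above splits them into client, method, request, bytes and duration fields.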

If I add some lines to the mylogs.log file, Filebeat prints the same ERR lines, but the data is passed to Logstash and I can see it in Kibana. Could anybody explain why this does not work? What do those errors mean?

Hi dorinand,

I checked your config: you used the tags option in the file input plugin to tag the events.

input {
    beats {
        port => "5044"
    }
    file{
        path => "/home/centos/logs/mylogs.log"
        tags => "mylog"
    }
    file{
        path => "/home/centos/logs/syslog.log"
        tags => "syslog"
    }
}

You need to reference [tags] in the condition of the output plugin, like:

if [tags] == "mylog" {
    elasticsearch {
        ## es config
    }
}

By the way, tags is an array type, so I suggest you use the condition below:

if "mylog" in [tags] {
    elasticsearch {
        ## es config
    }
}
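One more thing: if your events actually reach Logstash through the beats input (and not through the Logstash file inputs), the tags configured on the file inputs will never be applied to them. In that case you could set the tags on the Filebeat side instead. A rough sketch for Filebeat 6.x, untested, with your paths assumed:

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/centos/logs/mylogs.log
  tags: ["mylog"]
- type: log
  enabled: true
  paths:
    - /home/centos/logs/syslog.log
  tags: ["syslog"]

Then "mylog" in [tags] should match on the Logstash side.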

You can try it. Thanks.

Hello, I don't see any changes in the input {} section, but I found a typo in my output section: I used [tag] without the 's', and it should be [tags]. I expected this to be the problem, but it still does not work. What should my config file look like when I want to separate multiple log files? I also tried this output:

output {
#    if [tag] == "mylog" {
    if "mylog" in [tags] {
        elasticsearch {
            hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
            user => "Test"
            password => "123456"
            index => "mylog-%{+YYYY.MM.dd}"
        }
    }

#    if [tag] == "syslog" {
    else if "syslog" in [tags] {
        elasticsearch {
            hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
            user => "Test"
            password => "123456"
            index => "syslog-%{+YYYY.MM.dd}"
        }
    }
}

But it does not matter whether I add lines to mylog or syslog; everything is added to both indices. So how should I separate the logs into different indices?

I suspect that the tags field on those events has two values, like ['mylog', 'syslog'].

Have you tried it with the else if in place for both of them, to see whether it separates the two logs into different indices?

By the way, you can check your index sizes:

curl -XGET "http://<your-es-host>:9200/_cat/indices/?v"
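The output looks roughly like this (the columns are what 6.x prints; the values below are only illustrative):

health status index            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   mylog-2018.01.22 xxxxxxxxxxxxxxxxxxxxxx   5   1       1203            0      1.1mb          1.1mb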

I can see two indices in Kibana. The first is mylog-%{+YYYY.MM.dd} and the second is syslog-%{+YYYY.MM.dd}. Every event ends up in both indices. Let's assume that tags in the output plugin does have two values. So I tried id instead, which should be a unique identifier.

input {
    beats {
        port => "5044"
    }
    file{
        path => "/home/centos/logs/mylogs.log"
        id => "mylog"
    }
    file{
        path => "/home/centos/logs/syslog.log"
        id => "syslog"
    }
}
filter {
}
output {
    if [id] == "mylog" {
        elasticsearch {
            hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
            user => "Test"
            password => "123456"
            index => "mylog-%{+YYYY.MM.dd}"
        }
    }

    else if [id] == "syslog" {
        elasticsearch {
            hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
            user => "Test"
            password => "123456"
            index => "syslog-%{+YYYY.MM.dd}"
        }
    }
}

And it does not work: every event is passed to both indices. Do you have any idea where the problem is? I am not able to separate the input files into different outputs.

I also tried changing the index name, to see whether there is any tags or id value on the events.
Before:
index => "mylog-%{+YYYY.MM.dd}"
After:
index => "mylog-%{+YYYY.MM.dd}-%{[tags]}"
or
index => "mylog-%{+YYYY.MM.dd}-%{[id]}"

Nothing works. It looks like there are no tags or id fields.

Could you provide one record as shown in Kibana? Preferably the detailed JSON or the key/value pairs. That should show whether tags/id has a value or not.
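One more thought: as far as I know, the id option on an input is only a plugin identifier used for monitoring and logging; it is not written to the event itself, so a condition like [id] == "mylog" can never be true. If you want a per-input marker that actually ends up on the event, use an option that creates a field, for example (a sketch):

file {
    path => "/home/centos/logs/mylogs.log"
    type => "mylog"    # stored on the event as [type], unlike id
}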

Hello,

I am trying to understand what you are trying to achieve: you want to send your events to different indices based on the kind of events you are getting, correct?

You can either use tags, as @lauea suggested, or type, like in the following example. Note that you can use a field reference in the index option; this is preferable to defining multiple Elasticsearch outputs because it makes batching more efficient.

input {
    beats {
        port => "5044"
    }
    file{
        path => "/home/centos/logs/mylogs.log"
        type => "mylog"
    }
    file{
        path => "/home/centos/logs/syslog.log"
        type => "syslog"
    }
}
filter {
  if [type] == "mylog" || [type] == "syslog" {
    mutate {
      add_field => { "[@metadata][index]" => "%{[type]}" }
    }
  } else {
    mutate {
      add_field => { "[@metadata][index]" => "%{[@metadata][beat]}" }
    }
  }
}
output {
  elasticsearch {
    hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
    user => "Test"
    password => "123456"
    index => "%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
}

The example above will do the following:

  • create an index for "mylog"
  • create an index for "syslog"
  • create an index for each kind of Beat connected to Logstash.
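Note that anything under [@metadata] stays internal to the Logstash pipeline: it is not part of the document that gets indexed into Elasticsearch, which is what makes it a good place to carry routing values like the index name.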

If you want to know what your event looks like before it is sent to Elasticsearch, you can use the stdout output with the rubydebug codec, something like this:

output {
  stdout {
    codec => rubydebug {
      metadata => true
    }
  }
}

@lauea this is one record from Kibana, from the mylogs.log file, from yesterday:

@timestamp         January 22nd 2018, 10:04:40.799
@version           1
_id                HwYcHWEB0LbLyxQz0EnW
_index             syslog-2018.01.22
_score             2
_type              doc
beat.hostname      filebeat-prod-fileabeat1.myname.osdc1.company.local
beat.name          filebeat-prod-fileabeat1.myname.osdc1.company.local
beat.version       6.1.1
bytes              138
client             12.4.14.27
duration           0.23
host               filebeat-prod-fileabeat1.myname.osdc1.company.local
message            12.4.14.27 abcd /index.html 138 0.23
method             abcd
offset             379,156
prospector.type    log
request            /index.html
source             /home/centos/logs/mylogs.log
tags               beats_input_codec_plain_applied

It looks like there is no change with type or id.

@pierhugues
Yes, I am trying to route each log file to a specific index based on, for example, the path of the log file.

When I use your configuration, I receive these errors:

2018/01/23 09:04:11.163183 async.go:235: ERR Failed to publish events caused by: EOF
2018/01/23 09:04:11.166425 async.go:235: ERR Failed to publish events caused by: client is not connected

Thank you for your help, guys.

These messages are from Filebeat; are there any errors on the Logstash side when you start it?

After systemctl restart logstash I grepped the /var/log/logstash/logstash-plain.log file for the current time and errors, and this is the only error that occurs:
[2018-01-23T14:52:25,643][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.

Interestingly, only 2 of the 3 nodes show this error. The third one doesn't have any errors.

Maybe it is worth mentioning that I have 3 Logstash servers, with HAProxy doing round-robin in front of them. My Filebeat sends all data to HAProxy, and HAProxy forwards it to one of the Logstash servers.
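Could the proxy be related to the EOF errors? Beats connections are long-lived, and I guess a TCP load balancer could cut them on an idle timeout. I am also wondering whether I should drop HAProxy and let Filebeat balance across the Logstash nodes itself, something like this (the host names are placeholders for my three Logstash servers):

output.logstash:
  hosts: ["logstash1:5044", "logstash2:5044", "logstash3:5044"]
  loadbalance: true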

@dorinand Yes, your record from Kibana shows no related tag (like mylog or syslog) in the tags field.
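That would also explain why your routing never works: the record shows prospector.type log and only the beats_input_codec_plain_applied tag, so the event reached Logstash through the beats input, and the file inputs where you set tags and id never saw it. If you want to keep the routing on the Logstash side, you would have to condition on something the beats event actually carries, for example the source field. A sketch, untested:

output {
    if [source] == "/home/centos/logs/mylogs.log" {
        elasticsearch {
            ## es config for the mylog index
        }
    }
    else if [source] == "/home/centos/logs/syslog.log" {
        elasticsearch {
            ## es config for the syslog index
        }
    }
}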
