Multiple Filebeat inputs with logstash output

Hi,
I am using Filebeat 6.3 with the configuration below; however, multiple inputs in the Filebeat configuration with a single Logstash output are not working.

Filebeat configuration:

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.


- type: log
  paths:
    - /opt/eureqa/TEST_filebeat/TE.log
  tags: ["TE"]
  fields: {log_type: te}

- type: log
  paths:
    - /opt/eureqa/TEST_filebeat/TMRS.log
  tags: ["TMRS"]
  fields: {log_type: tmrs}


  # Change to true to enable this input configuration.
  enabled: true
  reload.enabled: true
  reload.period: 10s
#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.0.159:5601"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.159:5998"]
#  hosts: ["192.168.0.159:5998"]

Logstash Configuration:

input {
  beats {
    port => 5998
   }
}

filter {
  if [tag] == "TE" {
    grok {
      match => { message => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}] %{LOGLEVEL:log-level}%{DATA:class}- %{GREEDYDATA:message}" }
    }

    kv {
      source => "message"
      remove_field => "kv"
      field_split => " "
      value_split => ":"
      include_brackets => "false"
      remove_char_key => "{,"
      recursive => "true"
    }
  }

  else if [tag] == "TMRS" {
    grok {
      match => { message => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}] %{LOGLEVEL:log-level}%{DATA:class}- %{GREEDYDATA:message}" }
    }

    kv {
      source => "message"
      remove_field => "kv"
      field_split => " "
      value_split => ":"
      include_brackets => "false"
      remove_char_key => "{,"
      recursive => "true"
    }
  }
}


output {
  elasticsearch {
    hosts => ["192.168.0.159:9200"]
    manage_template => false
    index => "%{tag}-index"
  }
}

I am not able to create the indices with the above configuration. Please let me know whether the above configuration has any mistakes.

BR,

Ramesh

Do you get any error message?

There is no field named tag. In Logstash, conditions like [tag] == "TE" and [tag] == "TMRS" will fail (always be false).

The event field is named tags. tags is not a single entry, but an array of strings. Logstash processors might even add tags to the tags field.

In Filebeat you configured the log_type field. When sending to Logstash, you will get an event like:

{
  ...
  "fields": {"log_type": "tmrs"},
  "tags": ["TMRS"],
  ...
}
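
Given that event, a conditional in the Logstash filter would have to reference either the tags array or the nested field, roughly like this (a minimal sketch based on the event above):

filter {
  # membership test against the tags array
  if "TMRS" in [tags] {
    # ... filters for TMRS events ...
  }
  # or compare the nested log_type field set in Filebeat
  if [fields][log_type] == "tmrs" {
    # ... filters for TMRS events ...
  }
}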

Why do you need tags and the log_type field? These seem quite redundant to me.

The format string on the index setting doesn't look correct either. Assuming log_type is what you want, it should be index => "%{[fields][log_type]}-index".

Do you use aliases and some kind of rollover strategy? If not, consider adding a timestamp to the index name, so you can delete/archive old data after a given retention period.

Hi,
I am using tags to differentiate my logs. My ELK stack is on one server and the Filebeat configuration is on another server (where my log files are generated).
I have logs from 5 components that need to be parsed via this Logstash configuration.
With the above configuration, the index name that shows up in the Kibana dashboard (the _index field) is the literal string %{[fields][log_type]-index.
Can you help me find where my mistake is?
Also, my grok pattern and kv filter are not working when I use the above configuration.

There might be many issues. The most obvious ones I see are:

 if [tag] == "TE" {
   ...
}

The field [tag] does not even exist, so the grok filter will never be executed because of this condition.

else  if [tag] == "TMRS" {
    ...
}

Same, [tag] does not exist.

But you are writing tags. Changing the condition to if "TE" in [tags] or to if [fields][log_type] == "te" should fix the conditionals.
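
For example, the filter section could look roughly like this (a sketch only; the grok and kv options are taken from your original config, only the conditionals are changed):

filter {
  if "TE" in [tags] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}] %{LOGLEVEL:log-level}%{DATA:class}- %{GREEDYDATA:message}" }
    }
    kv {
      source => "message"
      remove_field => "kv"
      field_split => " "
      value_split => ":"
      include_brackets => "false"
      remove_char_key => "{,"
      recursive => "true"
    }
  }
  else if "TMRS" in [tags] {
    # same grok/kv as above, or component-specific patterns
  }
}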

output {
  elasticsearch {
    hosts => ["192.168.0.159:9200"]
    manage_template => false
    index => "%{tag}-index"
  }
}

The index setting is wrong because:

  • field tag does not exist
  • syntax is wrong. For field access it should say index => "%{[tag]}-index"
  • if you were to use tags, this would not be a string, but a list, generating an invalid index name

As you already have fields.log_type configured in filebeat, I assume you want:

output {
  elasticsearch {
    hosts => ["192.168.0.159:9200"]
    manage_template => false
    index => "%{[fields][log_type]}-index"
  }
}

Please note, index names without a timestamp are not recommended. You will have a hard time deleting old data when you are about to run out of disk space.
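
For example, a date-based index name could look like this (a sketch; pick a date pattern that matches your retention needs):

output {
  elasticsearch {
    hosts => ["192.168.0.159:9200"]
    manage_template => false
    # one index per log type and day; old indices can then be deleted by name
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
}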

Having log_type, I don't see why you need to configure tags in filebeat, but it doesn't really hurt to do so.

No idea about the grok and kv filters; I can't tell just by reading them, but have you tried a grok debugger, like the one that comes with Kibana?
