Filebeat unable to send logs

Hi,
I have installed ELK 5.6.8 on a Red Hat server and Filebeat on another Red Hat server.
I am using Logstash to collect data from Filebeat.
I changed the default port from 5044 to 5045 in the filebeat.yml file because port 5044 is already used by Metricbeat. I am sending my Filebeat logs:

2018-06-05T16:27:17+05:30 DBG Run prospector
2018-06-05T16:27:17+05:30 DBG Start next scan
2018-06-05T16:27:17+05:30 DBG Prospector states cleaned up. Before: 0, After: 0
2018-06-05T16:27:17+05:30 DBG Flushing spooler because of timeout. Events flushed: 0
2018-06-05T16:27:22+05:30 DBG Flushing spooler because of timeout. Events flushed: 0

logstash.conf file

input {
  beats {
    port => 5045
  }
}

output {
  elasticsearch {
    hosts => ["ipaddress:9200"]
    index => "filebeat"
  }
  stdout { codec => rubydebug }
}
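(With this output, every event from Filebeat is written to a single Elasticsearch index literally named filebeat; there is no date suffix.)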

Hi @mamta,

You can use a single Logstash instance for all your Beats, but then you need to build the index name from the event metadata, like in the example from the documentation:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" 
    document_type => "%{[@metadata][type]}" 
  }
}
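With that pattern, Filebeat events land in daily indices such as filebeat-2018.06.21 and Metricbeat events in metricbeat-2018.06.21, so one pipeline on one port can serve both Beats.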

Could you also share your Filebeat configuration? (When sharing logs and configuration files, please post them as preformatted text.)

Hi,
Sorry for the late reply; I was on leave. I am sending you my filebeat.yml configuration.

esbapp:/etc/filebeat # cat filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the -setup CLI flag or the setup command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.10.73.228:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the output.elasticsearch.hosts and
# setup.kibana.host options.
# You can find the cloud.id in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the output.elasticsearch.username and
# output.elasticsearch.password settings. The format is <user>:<pass>.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.10.73.228:5045"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

Hi,

I've checked your config file and found that you did not enable the prospector configuration below.

# Change to true to enable this prospector configuration.
enabled: false

Please set that parameter to true, restart the Filebeat service, and let me know if you are still facing the issue.
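For reference, the relevant part of your filebeat.yml should then look like this (a minimal sketch, reusing the path glob from your own config):

filebeat.prospectors:
- type: log
  # Must be true, otherwise this prospector is ignored and no files are harvested
  enabled: true
  paths:
    - /var/log/*.log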

Regards,

Thank you.

Let me try now.

Hey,

The ELK version is 5.6.8 and the Filebeat version is 6.1.2.
I am getting the error below after changing the setting to true and restarting Filebeat. Is this a version compatibility error?

esbapp:/var/log/filebeat # tail -50 filebeat
2018-06-20T11:38:41+05:30 DBG [log] Disable stderr logging
2018-06-20T11:38:41+05:30 INFO Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-06-20T11:38:41+05:30 DBG [beat] Beat metadata path: /var/lib/filebeat/meta.json
2018-06-20T11:38:41+05:30 INFO Beat UUID: 4708e448-9d38-4113-aed7-7ec16e004a2f
2018-06-20T11:38:41+05:30 INFO Setup Beat: filebeat; Version: 6.1.2
2018-06-20T11:38:41+05:30 DBG [beat] Initializing output plugins
2018-06-20T11:38:41+05:30 INFO Metrics logging every 30s
2018-06-20T11:38:41+05:30 DBG [processors] Processors:
2018-06-20T11:38:41+05:30 DBG [publish] start pipeline event consumer
2018-06-20T11:38:41+05:30 INFO Beat name: esbapp
2018-06-20T11:38:41+05:30 INFO Kibana url: http://10.10.73.228:5601
2018-06-20T11:38:41+05:30 CRIT Exiting: Error importing Kibana dashboards: fail to create the Kibana loader: Error creating Kibana client: fail to get the Kibana version:fail to unmarshal the response from GET http://10.10.73.228:5601/api/status: json: cannot unmarshal string into Go struct field kibanaVersionResponse.version of type struct { Number string "json:"number""; Snapshot bool "json:"build_snapshot"" }. Response: {"name":"esbjs","version":"5.6.8","buildNum":15616,"buildSha":"f5df7657dd0477ab65412f2841fa5470a012459f","uuid":"15d64833-cef4-4a67-8077-9b27c7f28e1b","status":{"overall":{"state":"green","title":"Green","nickname":"Looking good","icon":"success","si... (truncated)

Hi,
Which version of Kibana are you using?

Regards,

Hey,

Kibana version 5.6.8

Hi,

It looks like there are backward compatibility problems between older versions of Kibana and the dashboard setup in Filebeat 6.1.2.

You have to upgrade your Kibana version to make it compatible.
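If you cannot upgrade right away, a possible workaround (an assumption on my side, not something I have tested against your setup) is to skip the dashboard import entirely, since the crash happens while Filebeat queries the Kibana version during dashboard loading. Make sure Filebeat is not started with the -setup flag, and set this in filebeat.yml:

# Do not load the sample Kibana dashboards on startup, so Filebeat
# never calls the Kibana API for the version check
setup.dashboards.enabled: false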

Regards,

Hey,
Should I upgrade Kibana or downgrade Filebeat?

It's better to upgrade Kibana.

Hey,

Then do I have to upgrade Kibana, Logstash, and Elasticsearch as well?

Yes, that would be better; the newer versions also bring features and functionality that will help you in the future.

Regards,

Thank you.

Let me try it. Will let you know.

Hey,
I have upgraded my ELK stack and Filebeat to version 6.2.3, but I am still getting the same error and am not able to create an index.

Hi,

Could you please share your config file and debug logs?

Regards,

Hey,

The config file is too large for the post size limit. May I send only the changes I made?

Yes please.

Filebeat.yml configuration

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "10.10.73.228:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the output.elasticsearch.hosts and
# setup.kibana.host options.
# You can find the cloud.id in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the output.elasticsearch.username and
# output.elasticsearch.password settings. The format is <user>:<pass>.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.10.73.228:5045"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

filebeat log

2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:501 File didn't change: /var/log/pbl.log
2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:361 Check file for harvesting: /var/log/snapper.log
2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:447 Update existing file for harvesting: /var/log/snapper.log, offset: 3352
2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:501 File didn't change: /var/log/snapper.log
2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:361 Check file for harvesting: /var/log/zypper.log
2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:447 Update existing file for harvesting: /var/log/zypper.log, offset: 1478788
2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:501 File didn't change: /var/log/zypper.log
2018-06-21T11:57:23.890+0530 DEBUG [prospector] log/prospector.go:168 Prospector states cleaned up. Before: 6, After: 6

Hi,

I've checked the details above; I can no longer see that error, and there are no errors in the Filebeat logs either.

Could you please share the Logstash config and logs so we can understand the issue?
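In the meantime, you can also verify the Filebeat side of the connection; assuming Filebeat 6.x (which ships the test subcommand) and the default package paths, the following checks that the Logstash output on 10.10.73.228:5045 is reachable:

esbapp:/etc/filebeat # filebeat test output -c /etc/filebeat/filebeat.yml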

Regards,