Filebeat .csv file is not updating on Kibana

Hi,
There are 2 .conf files under conf.d on the server where ELK is hosted: one .conf file for Jenkins logs and another one for Filebeat. Filebeat is hosted on another server. So, which .conf file is taken as the config file?
Also, in the pipelines.yml file, the following path is mentioned:
path.config: "/etc/logstash/conf.d/*.conf"

Your question is a little confusing. Could you explain in more detail what your issue is?

Consider there are 2 servers, x and y. ELK is installed on server x, and Filebeat is installed on server y to pass CSV data to the Logstash on server x. On server x, under Logstash, there are 2 config files in the conf.d folder: one for the Jenkins log and another for the Filebeat data. But only Jenkins logs are updating in the Kibana index. So, which config file is considered when starting Logstash?

Both. All .conf files in the conf.d directory get concatenated together into one big pipeline. So if the files are not structured properly, the behavior may not be what you expect; see the sketch below for one way to keep the two streams apart.
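With a concatenated pipeline, every event passes through every filter and output unless you guard them with conditionals. As a minimal sketch (not your actual files; the tag name here is hypothetical), each stream can be tagged at its input and the filters and outputs guarded accordingly:

input {
  beats {
    port => 5044
    tags => ["filebeat_csv"]
  }
}

filter {
  # Only apply the csv filter to events that came from the tagged input
  if "filebeat_csv" in [tags] {
    csv { separator => "," }
  }
}

output {
  # Route tagged events to their own index so they do not mix with the Jenkins logs
  if "filebeat_csv" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "filebeat-csv-%{+YYYY.MM.dd}"
    }
  }
}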

If you want them to act as separate pipelines see here
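As a rough sketch of what that would look like (the pipeline ids and file names below are assumptions, adjust them to your layout), pipelines.yml on the ELK server would define one entry per config file instead of the single *.conf glob:

- pipeline.id: jenkins
  path.config: "/etc/logstash/conf.d/logstash-jenkins.conf"

- pipeline.id: filebeat-csv
  path.config: "/etc/logstash/conf.d/logstash-filebeat.conf"

Each pipeline then gets its own inputs, filters, and outputs, so the two streams cannot interfere with each other.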

If the CSV has already been read and no new lines have been added, it will not be read again, because Filebeat keeps track of what it has already processed.
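If you need Filebeat to re-ship the same CSV for testing, one option is to stop Filebeat and delete its registry so it forgets its read offsets. This is a sketch assuming a deb/rpm package install; verify the registry path on your system, and note that previously shipped lines will be ingested again:

systemctl stop filebeat
# The registry (read-offset bookkeeping) lives under /var/lib/filebeat on package installs
rm -rf /var/lib/filebeat/registry
systemctl start filebeat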

Thanks for the reply.
I am getting the following error from the command systemctl status filebeat.service -l:

ERROR pipeline/output.go:100 Failed to connect to failover(backoff(async(tcp://<ip_address>:5046)),backoff(async(tcp://<ip_address>:5046)),backoff(async(tcp://<ip_address>:5046))): dial tcp <ip_address>:5046: connect: no route to host

Just as it says: Filebeat cannot connect to port 5046 on that host.

I suspect you are trying to connect to Logstash on port 5046.

Are you sure you have Logstash running on that port? If so, most likely you have a connectivity issue.

From the Filebeat box, try:

telnet ip port

If everything is OK it should connect; if not, you have a connectivity issue.
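"No route to host" in particular often means a firewall is rejecting the connection rather than Logstash being down. Assuming a RHEL/CentOS-style host running firewalld (an assumption, adjust for your distro), you could check both ends like this:

# On the Logstash server: is anything actually listening on the port from the error?
ss -tlnp | grep 5046

# Is firewalld blocking it? If the port is missing from the list, open it:
firewall-cmd --list-ports
firewall-cmd --permanent --add-port=5046/tcp
firewall-cmd --reload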

Thanks, that error is gone now, but I am getting a new error in the Logstash log file:
[io.netty.channel.DefaultChannelPipeline][main][2323a3678316342c7d85d258d984048dc5675cf3576758228f54389ceed37de5] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

If you would like help, please post the Logstash conf files properly formatted, along with more of the Logstash log; what you provided is not enough information for us to help.

Please find below my filebeat.yml:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /Kibana_latest.csv

  exclude_files: ['.log$']
  scan_frequency: 10s
  harvester_buffer_size: 163840000
  max_bytes: 10485760000

  multiline.pattern: ^\d+[A-Za-z0-9]+
  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 10000000000
  multiline.timeout: 10s

  harvester_limit: 1
  close_inactive: 5s
  close_renamed: true
  close_removed: true
  close_eof: true
  clean_removed: true

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the output.elasticsearch.hosts and
# setup.kibana.host options.
# You can find the cloud.id in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the output.elasticsearch.username and
# output.elasticsearch.password settings. The format is <user>:<pass>.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The Logstash hosts
  hosts: ["<ip_address>:5044"]

  # Number of workers per Logstash host.
  worker: 3

  # If enabled only a subset of events in a batch of events is transferred per
  # transaction. The number of events to be sent increases up to bulk_max_size
  # if no error is encountered.
  slow_start: true

  # The maximum number of seconds to wait before attempting to connect to
  # Logstash after a network error. The default is 60s.
  backoff.max: 30s

  # Optional index name. The default index name is set to filebeat
  # in all lowercase.
  index: "logstash-filebeat-%{+YYYY.MM.dd}"

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: info

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# Logging to rotating files. Set logging.to_files to false to disable logging to
# files.
logging.to_files: true
logging.files:
  # Configure the path where the logs are written. The default is the logs directory
  # under the home path (the binary location).
  path: /var/log/filebeat-fireworks

  # The name of the files where the logs are written to.
  name: filebeat

  # Number of rotated log files to keep. Oldest files will be deleted first.
  keepfiles: 2

  # The permissions mask to apply when rotating log files. The default value is 0600.
  # Must be a valid Unix-style file permissions mask expressed in octal notation.
  permissions: 0644

#============================== X-Pack Monitoring ===============================

# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch monitoring cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

And here is logstash-filebeat.conf:

# INPUT: get messages from Filebeat at the host machine on port 5044
# For more information, take a look at the output configuration of Filebeat in the filebeat.yml file (host)
input {
  beats {
    port => 5044
    add_field => { "input_type" => "filebeat" }
  }
}

filter {
  csv {
    separator => ","
    columns => ["PROJECT_NAME","TESTSUITE_NAME","TEST_GROUP_NAME","TEST_NAME","TEST_STATUS","FAILURE_REASON","JENKINS_SLAVE_NAME","JENKINS_JOB_NAME","JENKINS_BUILD_ID"]
    add_field => { "event_type" => "filebeat_log" }
  }
}

# OUTPUT: send messages to Elasticsearch (VM port 9200)
output {
  elasticsearch {
    hosts => [ "<ip_address>:9200" ]
    index => "logstash-filebeat-%{+YYYY.MM.dd}"
    ilm_enabled => true
    ilm_policy => "logstash-policy"
  }
  stdout {}
}

Please check the above config. Thanks in advance.

Please edit your post and format your configs / code; it is unreadable.

Select the code and push the </> button at the top of the editor.

We will take a look after you do that.
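In the meantime, two quick sanity checks you can run yourself. This is a sketch assuming package installs with the default paths; adjust if your layout differs:

# On the Filebeat server: validate filebeat.yml and test the connection to the Logstash output
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml

# On the Logstash server: check a pipeline config for syntax errors without starting it
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash-filebeat.conf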

Thanks for the reply @stephenb.

Filebeat is working now, but there is one issue with the Jenkins log.
I am sending the Jenkins log using the plugin, with the post-build action Send console log to Logstash and Max_lines = -1.

But when the Jenkins log is large, the logs do not show up in Kibana. Can you please share a solution for this?

Please open a separate thread for this and provide more complete information. There are not enough details here to understand the issue.

Also, people will probably not help if you do not format your code as I requested. It takes one minute to format your code.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.