I'm trying to collect multiple logs with filebeat and send them to logstash. I would like to use logstash's filter to process each log file individually, but I'm having trouble. For example, first I want to rename the index.
In filebeat.yml I have a configuration that collects nginx log files and sends them to logstash. In logstash I receive the logs from beats (filebeat), try to set the index name for nginx-access.log to nginx-access, and send the events to elasticsearch. Contrary to my expectation, this creates an index called "metricbeat-" in elasticsearch.
What I have tried is to use the tags and fields set in filebeat inside the filter in logstash, but all attempts fail to rename the index as described above. Do you have a better idea?
Here are the configuration files.
# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  tags: ["nginx-access"]
  exclude_files: ['.gz$']
  fields: {
    log_type: "nginx_access"
  }
- { type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log*
  tags: ["nginx-access"]
  exclude_files: ['.gz$']
  fields: {
    log_type: "nginx_error"
  }
#- type: log
#  enabled: true
#  paths:
#    - /var/log/messages
output.logstash:
  hosts: ["http://***.***.***.***:5044"]
filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
# cat /etc/logstash/logstash-sample.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}
filter {
  if ([fields][logtype] == "nginx-access") {
    mutate {
      replace => {
        "%{type}" => "nginx-access"
      }
    }
  }
  } else {
    mutate {
      replace => {
        "%{type}" => "other"
      }
    }
  }
  if ("nginx-access" in [tags]) {
    mutate {
      replace => {
        "%{type}" => "nginx-access"
      }
    }
  }
  } else {
    mutate {
      replace => {
        "%{type}" => "other"
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://***.***.***.***:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
The two snippets below overlap with the config above, but I am including them to show both of the approaches I tried. I'm hoping for some new suggestions.
if ([fields][logtype] == "nginx-access") {
  mutate {
    replace => {
      "%{type}" => "nginx-access"
    }
  }
}
} else {
  mutate {
    replace => {
      "%{type}" => "other"
    }
  }
}
if ("nginx-access" in [tags]) {
  mutate {
    replace => {
      "%{type}" => "nginx-access"
    }
  }
}
} else {
  mutate {
    replace => {
      "%{type}" => "other"
    }
  }
}
leandrojmp (Leandro Pereira), March 3, 2021, 3:34am
Your configuration has a few mistakes. First you need to fix your filebeat inputs; they should look something like this:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  tags: ["nginx-access"]
  exclude_files: ['.gz$']
  fields:
    log_type: "nginx_access"
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log*
  tags: ["nginx-access"]
  exclude_files: ['.gz$']
  fields:
    log_type: "nginx_error"
There are no curly brackets.
Also, you are creating a field named [fields][log_type], but your conditional is testing against a field named [fields][logtype]; if this wasn't a typo, you need to fix the field name in your conditional.
The replace action inside the mutate filter replaces the value of a field, but that is not what you are trying to do. It seems you are trying to add a new field named [type], so you should use the add_field action.
mutate {
  add_field => { "type" => "nginx-access_or_other" }
}
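Putting this together with the earlier conditional, a minimal sketch of the whole filter (my illustration, assuming the filebeat inputs set [fields][log_type] to "nginx_access" / "nginx_error" as above):

```
filter {
  if [fields][log_type] == "nginx_access" {
    # note the plain field name "type" as the key, not the sprintf form "%{type}"
    mutate { add_field => { "type" => "nginx-access" } }
  } else {
    mutate { add_field => { "type" => "other" } }
  }
}
```

With this in place, an output option such as index => "%{type}-%{+YYYY.MM.dd}" interpolates the field's value into the index name.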
Thank you for your comment.
First, I removed the curly brackets. And [fields][logtype] was a typo on my part, sorry about that.
Regarding the add_field example you gave:
mutate {
  add_field => { "type" => "nginx-access_or_other" }
}
Is it safe to assume that this affects the %{type} reference used in the index option? I would like to determine the index for each log file.
leandrojmp (Leandro Pereira), March 3, 2021, 12:09pm
This mutate filter will add a new field named type to your document with the value specified. If you have something like this:
mutate {
  add_field => { "type" => "other" }
}
Then after this filter you will have a field named type with the value other. When you use %{type} anywhere in your pipeline after that filter, logstash will replace it with the value of that field. If your output is something like this:
elasticsearch {
  hosts => ["http://host:9200"]
  index => "%{type}-%{+YYYY.MM.dd}"
}
Your index name will be other-2021.03.03.
Thank you. That is the result I am hoping for.
However, when I look at elasticsearch, no indices seem to have been added. Is the problem something other than the config? The logstash-plain.log is as follows:
# tail -n 20 /var/log/logstash/logstash-plain.log
[2021-03-04T10:52:47,601][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2021-03-04T10:52:48,568][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/etc/logstash/conf.d/*.conf"}
[2021-03-04T10:52:48,589][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2021-03-04T10:52:48,844][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-03-04T10:52:50,898][ERROR][logstash.agent ] Internal API server error {:status=>500, :request_method=>"GET", :path_info=>"/_node/pipelines", :query_string=>"graph=true", :http_version=>"HTTP/1.1", :http_accept=>nil, :error=>"Unexpected Internal Error", :class=>"LogStash::Instrument::MetricStore::MetricNotFound", :message=>"For path: pipelines. Map keys: [:reloads]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:241:in `block in get_recursively'",
... snip ...
:in `block in spawn_thread'"]}
[2021-03-04T10:52:50,914][ERROR][logstash.agent ] API HTTP Request {:status=>500, :request_method=>"GET", :path_info=>"/_node/pipelines", :query_string=>"graph=true", :http_version=>"HTTP/1.1", :http_accept=>nil}
[2021-03-04T10:52:51,079][ERROR][logstash.agent ] Internal API server error {:status=>500, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"vertices=true", :http_version=>"HTTP/1.1", :http_accept=>nil, :error=>"Unexpected Internal Error", :class=>"LogStash::Instrument::MetricStore::MetricNotFound", :message=>"For path: events. Map keys: [:reloads]", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:241:in `block in get_recursively'",
... snip ...
:in `block in run'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/puma-4.3.7-java/lib/puma/thread_pool.rb:134:in `block in spawn_thread'"]}
[2021-03-04T10:52:51,091][ERROR][logstash.agent ] API HTTP Request {:status=>500, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"vertices=true", :http_version=>"HTTP/1.1", :http_accept=>nil}
[2021-03-04T10:52:53,900][INFO ][logstash.runner ] Logstash shut down.
[2021-03-04T10:52:53,910][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.13.0.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.13.0.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:89) ~[?:?]
The cause of the "No configuration found in the configured sources." error might be pipelines.yml. I found the following in the default pipelines.yml:
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
By commenting this out, the error is gone, but the problem of logs not being sent to elasticsearch persists. The logstash-plain.log now looks like this:
# tail -n 20 /var/log/logstash/logstash-plain.log
[2021-03-04T16:26:31,882][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2021-03-04T16:26:32,396][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.13.0.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.13.0.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:89) ~[?:?]
Do you have any clue how to solve this problem? I have very little to go on.
Please help me.
Badger, March 4, 2021, 5:00pm
Enable log.level debug and see if you get a more informative message when it exits.
@Badger Thank you for your comment.
Do you mean the logger.slowlog.level setting in log4j2.properties? I set it as follows.
# cat /etc/logstash/log4j2.properties
... snip ...
#logger.slowlog.level = trace
logger.slowlog.level = debug
... snip ...
However, the log contents do not change.
Badger, March 5, 2021, 2:35am
No. Use --log.level debug on the command line, or change log.level in logstash.yml.
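In logstash.yml this is a one-line setting; a sketch of the relevant line:

```
# /etc/logstash/logstash.yml
log.level: debug
```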
@Badger
Thank you for pointing this out.
When I ran the following command, it output more logs than I can post here.
/usr/share/logstash/bin/logstash -f /etc/logstash/logstash-sample.conf --log.level debug
However, I cannot find any description of the cause of the error; it even seems to be working well. For example, there are messages like the following:
# /usr/share/logstash/bin/logstash -f /etc/logstash/logstash-sample.conf --log.level debug
[INFO ] 2021-03-05 11:42:21.905 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
... snip ...
[DEBUG] 2021-03-05 11:42:24.041 [Converge PipelineAction::Create<main>] registry - On demand adding plugin to the registry {:name=>"beats", :type=>"input", :class=>LogStash::Inputs::Beats}
... snip ...
[DEBUG] 2021-03-05 11:42:24.247 [Converge PipelineAction::Create<main>] elasticsearch - config LogStash::Outputs::ElasticSearch/@index = "test-%{+YYYY.MM.dd}"
... snip ...
[DEBUG] 2021-03-05 11:42:24.800 [[main]-pipeline-manager] PoolingHttpClientConnectionManager - Connection [id: 0][route: {}->http://***.***.***.***:9200] can be kept alive indefinitely
... snip ...
[DEBUG] 2021-03-05 11:42:25.632 [Ruby-0-Thread-10: :1] CompiledPipeline - Compiled output
P[output-elasticsearch{"hosts"=>["http://***.***.***.***:9200"], "index"=>"test-%{+YYYY.MM.dd}"}|[file]/etc/logstash/logstash-sample.conf:90:3:```
... snip ...
[INFO ] 2021-03-05 11:42:25.754 [[main]-pipeline-manager] beats - Starting input listener {:address=>"0.0.0.0:5044"}
... snip ...
[DEBUG] 2021-03-05 11:45:30.603 [Ruby-0-Thread-43@puma threadpool 001: :1] service - [api-service] start
[DEBUG] 2021-03-05 11:45:30.606 [Ruby-0-Thread-43@puma threadpool 001: :1] agent - API HTTP Request {:status=>200, :request_method=>"GET", :path_info=>"/", :query_string=>"", :http_version=>"HTTP/1.1", :http_accept=>nil}
... snip ...
[DEBUG] 2021-03-05 11:45:30.610 [Ruby-0-Thread-43@puma threadpool 001: :1] service - [api-service] start
[DEBUG] 2021-03-05 11:45:30.618 [Ruby-0-Thread-43@puma threadpool 001: :1] agent - API HTTP Request {:status=>200, :request_method=>"GET", :path_info=>"/_node/stats", :query_string=>"vertices=true", :http_version=>"HTTP/1.1", :http_accept=>nil}
... snip ...
What do these tell you?
Badger, March 5, 2021, 3:24am
its-ogawa:
What do these tell you?

Nothing worth discussing. I would expect the useful DEBUG message to be very close to the
[2021-03-04T16:26:32,396][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
message. You could post a larger (and more focused) fragment of the DEBUG log at gist.github.com. Be aware that the "gist" sub-site of github is equally public.
Thanks for the advice.
The whole debug log looks like this
debug.log
Using bundled JDK: /usr/share/logstash/jdk
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-03-05 11:42:21.905 [main] runner - Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[DEBUG] 2021-03-05 11:42:21.913 [main] scaffold - Found module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[DEBUG] 2021-03-05 11:42:21.914 [main] registry - Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0xa353dff @directory="/usr/share/logstash/modules/fb_apache/configuration", @module_name="fb_apache", @kibana_version_parts=["6", "0", "0"]>}
[DEBUG] 2021-03-05 11:42:21.915 [main] scaffold - Found module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[DEBUG] 2021-03-05 11:42:21.915 [main] registry - Adding plugin to the registry {:name=>"netflow", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x1050fcff @directory="/usr/share/logstash/modules/netflow/configuration", @module_name="netflow", @kibana_version_parts=["6", "0", "0"]>}
[DEBUG] 2021-03-05 11:42:22.311 [LogStash::Runner] runner - -------- Logstash Settings (* means modified) ---------
[DEBUG] 2021-03-05 11:42:22.312 [LogStash::Runner] runner - node.name: "ITS-ELS-01"
[DEBUG] 2021-03-05 11:42:22.312 [LogStash::Runner] runner - *path.config: "/etc/logstash/logstash-sample.conf"
Badger, March 5, 2021, 4:07pm
As you say, the FATAL error has gone away, so you changed something else that fixed it.
I'm happy to report that I was able to get filebeat and logstash to communicate with each other using the following configuration.
# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/messages
  tags: ["messages"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log*
  tags: ["nginx", "access"]
  exclude_files: ['.gz$']
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log*
  tags: ["nginx", "error"]
  exclude_files: ['.gz$']
- type: log
  enabled: true
  paths:
    - /var/log/secure
  tags: ["secure"]
output.logstash:
  hosts: ["***.***.***.***:5044"]
# cat /etc/logstash/first-pipeline.config
input {
  beats {
    port => "5044"
  }
}
filter {
  if ([tags][0] == "messages") {
    grok {
      ... snip ...
    }
    date {
      ... snip ...
    }
  }
  ... snip ...
}
output {
  if ([tags][0] == "messages") {
    elasticsearch {
      hosts => ["http://***.***.***.***:9200"]
      index => "messages-%{+YYYY.MM.dd}"
    }
  }
  else if ([tags][0] == "nginx") {
    if ([tags][1] == "access") {
      elasticsearch {
        hosts => ["http://***.***.***.***:9200"]
        index => "nginx-access-%{+YYYY.MM.dd}"
      }
    }
    else if ([tags][1] == "error") {
      elasticsearch {
        hosts => ["http://***.***.***.***:9200"]
        index => "nginx-error-%{+YYYY.MM.dd}"
      }
    }
  }
  else if ([tags][0] == "secure") {
    elasticsearch {
      hosts => ["http://***.***.***.***:9200"]
      index => "secure-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => ["http://***.***.***.***:9200"]
      index => "other-%{+YYYY.MM.dd}"
    }
  }
}
The cause of the error is not clear, but I believe it was due to the firewall settings; opening port 5044 fixed it. My only concern is that this writing style is very verbose.
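One way to reduce that repetition (a sketch of an alternative, not taken from the thread: it maps the tags to a single type field in the filter and keeps one elasticsearch output):

```
filter {
  if "messages" in [tags] {
    mutate { add_field => { "type" => "messages" } }
  } else if "nginx" in [tags] and "access" in [tags] {
    mutate { add_field => { "type" => "nginx-access" } }
  } else if "nginx" in [tags] and "error" in [tags] {
    mutate { add_field => { "type" => "nginx-error" } }
  } else if "secure" in [tags] {
    mutate { add_field => { "type" => "secure" } }
  } else {
    mutate { add_field => { "type" => "other" } }
  }
}
output {
  elasticsearch {
    hosts => ["http://***.***.***.***:9200"]
    # the field value picks the index, so no per-index output blocks are needed
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
```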
After reading the "multiple pipelines" section of the official documentation, I decided I wanted to reorganize this config file. For example, I created the following files, but with them filebeat and logstash do not communicate properly.
# cat /usr/share/logstash/config/pipelines.yml
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/{01_input_filebeat, logstash-syslog}.conf"
- pipeline.id: nginx
  path.config: "/etc/logstash/conf.d/{01_input_filebeat, logstash-nginx}.conf"
- pipeline.id: secure
  path.config: "/etc/logstash/conf.d/{01_input_filebeat, logstash-secure}.conf"
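For comparison, the examples in the linked multiple-pipelines documentation write the brace globs without internal spaces; the same three pipelines in that style would be:

```
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/{01_input_filebeat,logstash-syslog}.conf"
- pipeline.id: nginx
  path.config: "/etc/logstash/conf.d/{01_input_filebeat,logstash-nginx}.conf"
- pipeline.id: secure
  path.config: "/etc/logstash/conf.d/{01_input_filebeat,logstash-secure}.conf"
```

Note also that if all three pipelines load the same 01_input_filebeat.conf, each will try to start its own beats listener on port 5044, which can only be bound once; the reusable-pipelines blog post linked above avoids this with pipeline-to-pipeline communication.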
# cat /etc/logstash/conf.d/01_input_filebeat.conf
# https://www.elastic.co/jp/blog/how-to-create-maintainable-and-reusable-logstash-pipelines
input {
  beats {
    port => 5044
  }
}
# cat /etc/logstash/conf.d/logstash-syslog.conf
#input {
#  beats {
#    port => 5044
#  }
#}
filter {
  if ([tags][0] == "messages") {
    grok {
      ... snip ...
    }
    date {
      ... snip ...
    }
  }
}
output {
  if ([tags][0] == "messages") {
    elasticsearch {
      hosts => ["http://***.***.***.***:9200"]
      index => "messages-%{+YYYY.MM.dd}"
    }
  }
}
Is splitting the config into files like this the right way to keep it readable? Why do these files give errors?
The logstash log at this time looks like the following.
# tail -n 50 /var/log/logstash/logstash-plain.log
[2021-03-11T20:16:17,923][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2021-03-11T20:16:18,566][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-03-11T20:16:19,538][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 9, column 1 (byte 290) after ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:184:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:367:in `block in converge_state'"]}
[2021-03-11T20:16:19,822][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-03-11T20:16:24,728][INFO ][logstash.runner ] Logstash shut down.
[2021-03-11T20:16:24,739][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.13.0.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.13.0.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:89) ~[?:?]
[2021-03-11T20:16:38,808][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.11.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2021-03-11T20:16:39,267][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-03-11T20:16:39,917][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"input\", \"filter\", \"output\" at line 9, column 1 (byte 290) after ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:184:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:367:in `block in converge_state'"]}
[2021-03-11T20:16:40,104][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
There is no pipeline_id:main in my config file, so I don't know why the error refers to pipeline_id:main. Is pipeline_id:main always necessary?
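As background on where main comes from: the WARN line above ("Ignoring the 'pipelines.yml' file because modules or command line options are specified") indicates that when logstash is started with -f, pipelines.yml is skipped and the single command-line pipeline gets the default id main. For the ids in pipelines.yml to take effect, logstash must be started without -f, and on package installs the file is read from /etc/logstash/pipelines.yml rather than /usr/share/logstash/config/. A minimal sketch with an explicit id (the id beats-sample is just an example name):

```
# /etc/logstash/pipelines.yml
- pipeline.id: beats-sample
  path.config: "/etc/logstash/logstash-sample.conf"
```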
system (system), April 8, 2021, 11:28am
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.