Logstash forwarder and indexer

Hi,
I wanted to know whether it is possible to put another filter inside the Logstash output section.

The output section can only contain outputs. Filters go in the filter section.

If you ask the real, underlying question instead, you'll get more relevant help.

Actually, I have a Logstash forwarder and a Logstash indexer at separate locations (say, separate VMs), but now I am trying to combine the forwarders and the indexer into one, so that the flow of my configuration would look like this:

input {
  path
}
filter {
  match (path)
}
output {
  filter (match for indexer)
  elasticsearch output
}

Is this possible?

If you want to apply different filters for different events, you can use conditionals.

https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html
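For example, a minimal sketch (the type values and grok patterns here are placeholders, not your config):

filter {
  if [type] == "app" {
    # filters for app events go here
    grok { match => { "message" => "%{GREEDYDATA:msg}" } }
  } else if [type] == "web_access" {
    # filters for access-log events go here
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  }
}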

I will elaborate on my question a bit.
What I want is to combine my forwarder and indexer into one. As of now my forwarder runs on a different VM and sends its data to Redis, and my indexer pulls the data from the Redis queue.
But now I want to combine the indexer and the forwarder, and I do not want Redis in between; that is what my question is about. The combined config file would look like this: its input plugin will have the input of the forwarder, its filter plugin will contain the filter of the forwarder, and its output will have the filter and output of the indexer. Is it possible to do this?

Yes. In fact, it sounds like a very simple Logstash setup with an input that injects events that are processed by filters and eventually sent to an output. I don't see the complication.
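As a minimal sketch of such a combined pipeline (the path and host are placeholders):

input {
  file {
    # the forwarder's input
    path => "/path/to/logs/*.log"
  }
}
filter {
  # the forwarder's filters followed by the indexer's filters
}
output {
  # the indexer's output
  elasticsearch {
    hosts => "localhost:9200"
  }
}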

Currently we have four forwarders for four different types of logs: one for app logs, one for access logs, one for status logs, and one for everything else; and similarly there are four indexers.
There are multiple types of apps and I want to feed the data from all of them into one pipeline, so the path becomes something like path => "remote/user/gaurav/cnt/app_*/.log" to pull all the apps' logs. Then I'd have a filter running on the path, and based on that filter, another filter matching the tags; that is roughly what I have in mind (see the sketch below).
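Something like this, as a rough sketch (the grok pattern on the path and the tag value are placeholders, not my real config):

input {
  file {
    path => "remote/user/gaurav/cnt/app_*/.log"
  }
}
filter {
  grok {
    # extract the app name from the file path (hypothetical pattern)
    match => { "path" => "app_(?<app>[^/]+)/" }
  }
  if [app] == "fom" {
    mutate { add_tag => [ "fom" ] }
  }
}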
But it is not working for me.

So what does your configuration look like so far? What does the input look like? What does Logstash's output look like?

input {
  file {
    path => "/..../..../logs/gaurav/03a/aa.log"
    exclude => [
      "access*.log*",
      "fe*.log*",
      "stat*.log*",
      "gc*.log*",
      "*dump*.log*"
    ]
    sincedb_path => "/dev/null"
    type => "app"
    codec => multiline {
      pattern => "^%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}"
      negate => true
      what => "previous"
    }
  }
}
This is how the input looks.

[gagarwal3@nceapttls13]~/tmp% docker run -h logstash --name logstash --link elasticsearch:elasticsearch -it --rm -v "$PWD":/remote/backup/logs/APT/CINT logstash -f /remote/backup/logs/APT/CINT/new1.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
06:55:33.180 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
06:55:33.185 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
06:55:33.191 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
06:55:33.193 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
06:55:33.217 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"9d38957d-f52c-47af-814f-7f124621df51", :path=>"/var/lib/logstash/uuid"}
06:55:33.468 [LogStash::Runner] ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, {, } at line 77, column 33 (byte 2110) after output {\r\nstdout{ codec => rubydebug}\r\n\tif [app] == "fom" or [app] == "sepl" {\r\n\t\telasticsearch {\r\n\t\t\thosts => "nceapttlnet:9200"\r\n\t\t\tindex => "apt-%{+YYYY.MM.dd}""}
[gagarwal3@nceapttls13]~/tmp% docker run -h logstash --name logstash --link elasticsearch:elasticsearch -it --rm -v "$PWD":/remote/backup/logs/APT/CINT logstash -f /remote/backup/logs/APT/CINT/new1.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
06:56:31.309 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
06:56:31.314 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
06:56:31.321 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
06:56:31.322 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
06:56:31.352 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"426f7c96-9e43-48f9-828c-b3f898e09ddf", :path=>"/var/lib/logstash/uuid"}
06:56:32.511 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://nceanet:9200/]}}
06:56:32.512 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://nceapttnet:9200/, :path=>"/"}
06:56:32.619 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>"http://ncea.net:9200/"}
06:56:32.938 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
06:56:32.942 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
06:56:32.961 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//nce.net:9200"]}
06:56:32.982 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://nceanet:9200/]}}
06:56:32.983 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://nce.net:9200/, :path=>"/"}
06:56:33.000 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>"http://nce.net:9200/"}
06:56:33.008 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
06:56:33.010 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
06:56:33.023 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//nce.net:9200"]}
06:56:33.157 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
06:56:33.466 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
06:56:33.575 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

Can you let me know what the error is?

Cannot create pipeline {:reason=>"Expected one of #, {, } at line 77, column 33 (byte 2110) after output {\r\nstdout{ codec => rubydebug}\r\n\tif [app] == "fom" or [app] == "sepl" {\r\n\t\telasticsearch {\r\n\t\t\thosts => "nceapttlnet:9200"\r\n\t\t\tindex => "apt-%{+YYYY.MM.dd}""}

You haven't shown us that part of the configuration, so we can't help.
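As an aside, you can check a configuration's syntax without starting the pipeline via --config.test_and_exit (available in Logstash 5.x); the mount path here is a placeholder:

docker run --rm -v "$PWD":/config logstash -f /config/new1.conf --config.test_and_exit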

input {
  file {
    path => "/remote/backup/logs/APT/CINT/httpd/*"
    exclude => [
      "access*.log*",
      "fe*.log*",
      "stat*.log*",
      "gc*.log*",
      "*dump*.log*"
    ]
    sincedb_path => "/dev/null"
    type => "app"
    codec => multiline {
      pattern => "^%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND}"
      negate => true
      what => "previous"
    }
  }
}
filter {
  grok {
    match => {
      "path" => "/logs/(?<env>[^/]+)/(?<instance>[^/]+)/[^/]+"
    }
    overwrite => [ "host" ]
  }

  if [type] == "web_access" {
    grok {
      match => {
        "message" => "%{APT_APACHE_ACCESS:logline}"
      }
      named_captures_only => false
      patterns_dir => "/remote/backup/logs/APT/CINT/pattern"
      break_on_match => true
      remove_field => [ "message", "HOUR", "MINUTE", "MONTH", "MONTHDAY", "SECOND", "TIME", "YEAR" ]
      overwrite => [ "port" ]
    }
  }

  if [type] == "app" {
    grok {
      match => {
        # "message" => "%{APT_TIMESTAMP:timestamp} \| %{HOSTNAME:hostname} \| %{DATA:application} \| %{APT_LOGLEVEL:loglevel} *\| 1-%{DATA:thread:int} \| %{DATA:class} *\| %{DATA:correlationId} \| %{GREEDYDATA:msg}"
        "message" => "%{APT_TIMESTAMP:timestamp} \| \S+ \| \S+ \| %{APT_LOGLEVEL:loglevel} *\| 1-%{DATA:thread:int} \| %{DATA:class} *\| %{DATA:correlationId} \| %{GREEDYDATA:msg}"
      }
      patterns_dir => "/remote/backup/logs/APT/CINT/pattern"
      remove_field => [ "message" ]
    }
  }

  if "_grokparsefailure" not in [tags] {
    date {
      match => [ "timestamp", "YYYY/MM/dd HH:mm:ss.SSS", "YYYY/MM/dd HH:mm:ss,SSS", "dd/MMM/YYYY:HH:mm:ss +0000", "EEE MMM dd HH:mm:ss YYYY" ]
      timezone => "UTC"
    }
    if [type] == "web_access" or [type] == "web_error" {
      mutate {
        gsub => [
          "referrer", "\"", "",
          "agent", "\"", "",
          "JSESSIONID", "\"", "",
          "APT_SESSIONID", "\"", "",
          "correlationId", "\"", "",
          "transactionOriginator", "\"", "",
          "customerId", "\"", ""
        ]
        remove_field => [ "logline", "timestamp", "BASE10NUM", "INT", "HOSTNAME", "IPV4", "day", "month", "monthday", "time", "year" ]
      }
    }
  }
}

output {
  stdout { codec => rubydebug }
  if [app] == "fom" or [app] == "sepl" {
    elasticsearch {
      hosts => "nce13"
      index => "apt-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => "nce13"
      index => "apt-%{+YYYY.MM.dd}"
    }
  }
}

This is what the whole config file looks like.

[gagarwal3@nceapttls13]~/tmp% docker run -h logstash --name logstash --link elasticsearch:elasticsearch -it --rm -v "$PWD":/remote/backup/logs/APT/CINT logstash -f /remote/backup/logs/APT/CINT/new1.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
06:56:31.309 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
06:56:31.314 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
06:56:31.321 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
06:56:31.322 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
06:56:31.352 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"426f7c96-9e43-48f9-828c-b3f898e09ddf", :path=>"/var/lib/logstash/uuid"}
06:56:32.511 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://nceanet:9200/]}}
06:56:32.512 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://nceapttnet:9200/, :path=>"/"}
06:56:32.619 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>"http://ncea.net:9200/"}
06:56:32.938 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
06:56:32.942 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
06:56:32.961 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//nce.net:9200"]}
06:56:32.982 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://nceanet:9200/]}}
06:56:32.983 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://nce.net:9200/, :path=>"/"}
06:56:33.000 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>"http://nce.net:9200/"}
06:56:33.008 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
06:56:33.010 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
06:56:33.023 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//nce.net:9200"]}
06:56:33.157 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
06:56:33.466 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
06:56:33.575 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

Can you let me know what the error is?

Okay, now the error message is gone. So Logstash still isn't working as expected? In what way?

As far as I can tell, it is not pulling any data.

Is there a typo in the filename pattern in the file input? Or does the user that Logstash runs as lack permission to access the files?

If you increase the log level you can see what files (if any) the filename pattern expands to.
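For example, with the config file path from your docker run command (the --log.level option is available in Logstash 5.x):

logstash -f /remote/backup/logs/APT/CINT/new1.conf --log.level=debug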

I hosted it on Docker and made a compose file; with a few changes it is now working fine.
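Roughly like this, in case it helps others (a minimal sketch; the image tags, service names, and paths are placeholders, not my exact file):

version: "2"
services:
  elasticsearch:
    image: elasticsearch:5
  logstash:
    image: logstash:5
    links:
      - elasticsearch
    volumes:
      # mount the pipeline config into the container (placeholder path)
      - ./new1.conf:/config/new1.conf
    command: logstash -f /config/new1.conf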

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.