Custom log file not getting pushed to Elasticsearch via logstash

Hi,

I am trying to push a custom log file generated by my program into Elasticsearch (5.3.2) via Logstash (5.3.2).

I don't get any errors, but the index is also not getting created... Am I missing something? I am pretty new to the ELK stack and am trying things out on my own by reading the ELK documentation.

Here is the output:
C:\Data\ELK\logstash-5.3.2\logstash-5.3.2\bin>logstash -f AE-log.conf
Could not find log4j2 configuration at path /Data/ELK/logstash-5.3.2/logstash-5.3.2/config/log4j2.properties. Using default config which logs to console
13:06:15.598 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
13:06:15.604 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@localhost:9200/, :path=>"/"}
13:06:15.827 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>#<URI::HTTP:0x48d8acfb URL:http://elastic:xxxxxx@localhost:9200/>}
13:06:15.829 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
13:06:16.209 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
13:06:16.221 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x327ae81d URL://localhost:9200>]}
13:06:16.499 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
13:06:17.440 [[main]-pipeline-manager] INFO logstash.pipeline - Pipeline main started
13:06:17.568 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

Here is the config file:

input {
  file {
    path => [ "C:\Data\ELK\input\AElogs\AssistEdge_SE.log" ]
    start_position => "beginning"
    type => "log"
  }
}

filter {
  kv {
    value_split => "~"
    field_split => " "
  }

  grok {
    match => { "message" => "%{NUMBER:id} %{WORD:level} %{NUMBER:priority} %{WORD:srcmodule} %{WORD:method} %{WORD:message} %{WORD:description} %{WORD:userid} %{IP:client}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "elastic"
    password => "XXXXXX"
    action => "index"
    index => "assistedge_se"
  }
  stdout { }
}

Here is the log file content for reference:

instid~1 level~Info priority~1 srcmodule~Utilities.Logging method~Logging message~Loaded logUserID from app.config. Value is : True description~NA userid~L3\john.kam ipaddress~10.1.99.121
instid~1 level~Info priority~1 srcmodule~Utilities.Loggnig method~Logging message~Loaded logErrorDetails from app.config. Value is : True description~NA userid~L3\john.kam ipaddress~10.1.99.121

Logstash is probably tailing the file. Set sincedb_path to "nul" or delete the sincedb file.
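
For example, a minimal sketch based on your existing input block ("nul" is the Windows equivalent of /dev/null):

input {
  file {
    path => [ "C:\Data\ELK\input\AElogs\AssistEdge_SE.log" ]
    start_position => "beginning"
    # Don't persist the read position, so the file is re-read from the
    # beginning on every run (handy while testing).
    sincedb_path => "nul"
    type => "log"
  }
}

Note that start_position => "beginning" only applies to files Logstash hasn't seen before; once a position has been recorded in the sincedb, Logstash resumes from there, which is why the file appears to be ignored.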

Thanks Magnus.

I did that, but it didn't produce the desired results.

Hence, I changed the input to stdin and started sending individual log file records into Logstash.

See below for the stdin input and the output I got...

You'll notice that I am now getting a _grokparsefailure tag:

instid~1 level~Info priority~10 srcmodule~Utilities.CommonServices.AppListGenerator method~GenerateApplicationListConfig message~Completed generation of applicationlist.xml configuration file description~NA userid~LEVEL3\karir.amit ipaddress~10.1.47.180
{
      "ipaddress" => "10.1.47.180",
         "method" => "GenerateApplicationListConfig",
          "level" => "Info",
      "srcmodule" => "Utilities.CommonServices.AppListGenerator",
    "description" => "NA",
        "message" => "Completed generation of applicationlist.xml configuration file",
       "priority" => "10",
         "userid" => "LEVEL3\karir.amit",
           "tags" => [
        [0] "_grokparsefailure"
    ],
         "instid" => "1",
     "@timestamp" => 2017-05-12T15:03:15.394Z,
       "@version" => "1",
           "host" => "US-HPELNVV5J"
}

Here is the updated config file; I am not sure what I am doing wrong here...

input {
  stdin { }
}

filter {
  kv {
    value_split => '~'
    field_split => ' '
  }

  grok {
    match => { 'message' => '%{NUMBER:instid} %{WORD:level} %{NUMBER:priority} %{WORD:srcmodule} %{WORD:method} %{WORD:message} %{WORD:description} %{WORD:userid} %{IP:ipaddress}' }
  }
}

output {
  stdout { codec => rubydebug }
}

What are you trying to accomplish with the grok filter? The preceding kv filter already extracts the fields for you.
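
The grok pattern is written against bare, space-separated values, but each token in the raw line is actually key~value (e.g. instid~1), so the pattern never matches and the event gets tagged _grokparsefailure. Since kv already produces every field, a filter along these lines should be enough (a sketch, assuming nothing else needs to be extracted from the message):

filter {
  kv {
    # Split the line on spaces into key~value tokens, then split each
    # token on "~" into a field name and a value.
    value_split => "~"
    field_split => " "
  }
}

Combined with the original file input and elasticsearch output, the parsed events should then land in the assistedge_se index.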

Thanks Magnus, I am able to make it work now. Thanks for your valuable input.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.