Data not getting indexed in Elasticsearch

[root@ukmr0xtmd01 conf.d]# /usr/share/logstash/bin/logstash -f siae.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-04-16 15:19:35.812 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-04-16 15:19:35.818 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-04-16 15:19:36.333 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-04-16 15:19:36.530 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.3"}
[INFO ] 2018-04-16 15:19:36.809 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601}
[INFO ] 2018-04-16 15:19:38.093 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-04-16 15:19:38.356 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[INFO ] 2018-04-16 15:19:38.360 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
[WARN ] 2018-04-16 15:19:38.449 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[INFO ] 2018-04-16 15:19:38.632 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2018-04-16 15:19:38.636 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-04-16 15:19:38.644 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-04-16 15:19:38.647 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-04-16 15:19:38.672 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]}
[INFO ] 2018-04-16 15:19:38.978 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x2c2e84dc@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[INFO ] 2018-04-16 15:19:39.007 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}

OK, so it started normally. What does the configuration look like? If you add a stdout output, do you see any events?
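
For example, an unconditional debugging output appended to the config would print every event that reaches the outputs, regardless of type or tags. This is a minimal sketch; the rest of your pipeline stays as-is:

output {
  # Print every event to the console so we can see whether
  # anything makes it through the filters at all
  stdout { codec => rubydebug }
}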

And what's in siae.conf?

That is the name of the config file. Here are its contents:

input {
  file {
    path => "/data/input/TXN_NEMs/SIAE_NSM5UX/**/*"
    start_position => "beginning"
    type => "siae"
    sincedb_path => "/dev/null"
  }
}

filter {
  if [type] == "siae" {
    grok {
      patterns_dir => ["/etc/logstash/conf.d/patterns"]
      match => { "message" => "%{CISCOTIMESTAMP:update_time} %{WORD:name} %{NOTSPACE:id}: %{GREEDYDATA:msg}" }
    }

    if "sshd" in [id] {
      mutate { add_tag => "shouldexist" }
    }

    date {
      match => ["update_time", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss"]
      locale => "en"
    }

    if "_grokparsefailure" in [tags] {
      drop {}
    }
  }
}

output {
  if [type] == "siae" {
    if "shouldexist" in [tags] {
      elasticsearch {
        hosts => ["127.0.0.1"]
        index => "siae-%{+YYYY.MM.dd}"
      }

      stdout { codec => rubydebug }
    }
  }
}

No, I don't see any events.

I suspect the grok is not matching, so you are dropping the message. Can you comment out the drop {} and show us rubydebug output for one of the messages?
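
For reference, that grok pattern expects syslog-style lines. A made-up example that should match it (the host name, PID, and message text are hypothetical) would be:

Apr 16 15:19:35 ukmr0xtmd01 sshd[2341]: Accepted password for root from 10.0.0.1 port 22 ssh2

That would produce update_time => "Apr 16 15:19:35", name => "ukmr0xtmd01", id => "sshd[2341]" (so the "sshd" in [id] test would add the shouldexist tag), and msg => the remainder of the line. If your actual lines don't look like that, grok adds a _grokparsefailure tag and your drop {} silently discards every event.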

How do I do that? Can you help me with it?

Put a # at the start of the line that says drop{} and then re-run logstash.
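
That is, while testing, the conditional in your filter would look something like this (an empty conditional block should be accepted by the Logstash config syntax):

if "_grokparsefailure" in [tags] {
  # drop {}
}

If events then show up in the rubydebug output carrying a _grokparsefailure tag, the grok pattern is the problem.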
