Logstash not reading more than 2 files

Hi,
I am new to ELK. I am trying to read the logs for the IBM dmgr, node agent, and application. Logstash reads the first two file inputs and creates their indices, but the third file is never read and no index is created for it. Could you please help me?
Below is my Logstash config file.

input {
  file {
    path => [ "/opt/WebSphere/AppServer8.5.5/profiles/x.x.x.xManager/logs/dmgr/SystemOut.log" ]
    start_position => "beginning"
    type => "websphere1"
    # important! logstash reads only logs from files touched in the last 24 hours
    # 8640000 seconds = 100 days
    ignore_older => "8640000"
    codec => multiline {
      pattern => "^\s"
      what => "previous"
    }
  }
  file {
    path => [ "/opt/WebSphere/AppServer8.5.5/profiles/x.x.x.x/logs/nodeagent/SystemOut.log" ]
    start_position => "beginning"
    type => "app1"
    # 8640000 seconds = 100 days
    ignore_older => "8640000"
    codec => multiline {
      pattern => "^\s"
      what => "previous"
    }
  }
  file {
    path => [ "/opt/WebSphere/AppServer8.5.5/profiles/x.x.x.x/logs/ActiveVOS-v01/SystemOut.log" ]
    start_position => "beginning"
    type => "websphere"
    # 8640000 seconds = 100 days
    ignore_older => "86400"
    codec => multiline {
      pattern => "^\s"
      what => "previous"
    }
  }
}
output {
  if [type] == "websphere1" {
    elasticsearch {
      hosts => ["x.x.x.x:9200"]
      index => "x.x.x.x_dmgr1_%{+YYYY.MM.dd}"
    }
  }
  if [type] == "app1" {
    elasticsearch {
      hosts => ["x.x.x.x:9200"]
      index => "x.x.x.x_node_%{+YYYY.MM.dd}"
    }
  } else if [type] == "websphere" {
    elasticsearch {
      hosts => ["x.x.x.x:9200"]
      index => "x.x.x.x_ActiveVOS-v01_%{+YYYY.MM.dd}"
    }
  } else {
    stdout {
      codec => rubydebug
    }
  }
}

Thanks in advance…

If you change the type in the 3rd file config to, say, "websphere2", it will match none of the conditionals and fall through to the final else branch, so you should then see its events in the console via the stdout output.
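Incidentally, the first conditional in your output block is a standalone if rather than part of the else-if chain, so "websphere1" events are sent to Elasticsearch and then also fall into the final else and get printed by stdout. Here is a minimal sketch of the output section as a single chain (hosts and index names carried over from your config; note that Elasticsearch rejects index names containing uppercase letters, so "ActiveVOS-v01" inside an index name is itself worth checking as a cause of the missing third index):

output {
  if [type] == "websphere1" {
    elasticsearch {
      hosts => ["x.x.x.x:9200"]
      index => "x.x.x.x_dmgr1_%{+YYYY.MM.dd}"
    }
  } else if [type] == "app1" {
    elasticsearch {
      hosts => ["x.x.x.x:9200"]
      index => "x.x.x.x_node_%{+YYYY.MM.dd}"
    }
  } else if [type] == "websphere" {
    elasticsearch {
      hosts => ["x.x.x.x:9200"]
      # elasticsearch index names must be lowercase, so the original
      # "ActiveVOS-v01" is written here as "activevos-v01"
      index => "x.x.x.x_activevos-v01_%{+YYYY.MM.dd}"
    }
  } else {
    stdout { codec => rubydebug }
  }
}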

Are you familiar with what the sincedb does? https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#_tracking_of_current_position_in_watched_files

If you have read the file once before, the sincedb records the point in the file that it has read up to. This prevents older, previously seen data from being read again.

If you are testing or developing, just delete the sincedb before each run.
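For example (treat these paths as assumptions: the sincedb location is version-dependent, and in Logstash 5.x the file input writes it by default into the home directory of the user running Logstash):

# each sincedb line records: inode, major device, minor device, byte offset
cat ~/.sincedb_*

# stop logstash, delete the sincedb, then restart to re-read files from scratch
rm ~/.sincedb_*
bin/logstash -f test.conf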

I want to create an index for each of the 3 logs and see all of them in Kibana.

I ran Logstash and it is not showing any permission issue either.
bin/logstash -f test.conf --verbose
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /opt/WebSphere/logstash-5.5.0/logs which is now configured via log4j2.properties
[2017-08-01T14:59:24,030][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://x.x.x.x:9200/]}}
[2017-08-01T14:59:24,035][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://x.x.x.x:9200/, :path=>"/"}
[2017-08-01T14:59:24,131][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x3894a327>}
[2017-08-01T14:59:24,132][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-08-01T14:59:24,182][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-08-01T14:59:24,188][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x622cc5ca>]}
[2017-08-01T14:59:24,200][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://x.x.x.x:9200/]}}
[2017-08-01T14:59:24,201][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://x.x.x.x:9200/, :path=>"/"}
[2017-08-01T14:59:24,212][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x27af150e>}
[2017-08-01T14:59:24,213][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-08-01T14:59:24,219][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-08-01T14:59:24,224][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0x4c299a87>]}
[2017-08-01T14:59:24,235][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://x.x.x.x:9200/]}}
[2017-08-01T14:59:24,236][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://x.x.x.x:9200/, :path=>"/"}
[2017-08-01T14:59:24,240][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<Java::JavaNet::URI:0x4e6333f8>}
[2017-08-01T14:59:24,241][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-08-01T14:59:24,248][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-08-01T14:59:24,252][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<Java::JavaNet::URI:0xd71b619>]}
[2017-08-01T14:59:24,256][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-08-01T14:59:24,440][INFO ][logstash.pipeline ] Pipeline main started
[2017-08-01T14:59:24,597][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

As @guyboertje says, the problem could very well be that Logstash thinks it has already processed the third file. You can peek into the sincedb file to see what position in the file Logstash thinks it's at. You can also set the file inputs' sincedb_path option to /dev/null to effectively disable the sincedb.
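For example, added to a file input alongside its other options (a sketch using the path and type from your third input):

  file {
    path => [ "/opt/WebSphere/AppServer8.5.5/profiles/x.x.x.x/logs/ActiveVOS-v01/SystemOut.log" ]
    start_position => "beginning"
    type => "websphere"
    # discard position tracking entirely; every restart re-reads the file
    sincedb_path => "/dev/null"
  }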

Thank you for responding. I will try sincedb_path.

I have tried setting sincedb_path to /dev/null, but it is still not reading the running application's logs. Is there any way to configure Logstash to read application logs that are 2 days old? The application had already started and generated logs before we started Logstash.
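One detail worth checking in the config above (an observation, not a confirmed diagnosis): the third file input sets ignore_older => "86400", i.e. one day in seconds, even though its comment says 8640000 = 100 days and the other two inputs use 8640000. The file input skips any file whose modification time is older than ignore_older, so reading two-day-old logs would need the larger value on that input, e.g.:

  file {
    path => [ "/opt/WebSphere/AppServer8.5.5/profiles/x.x.x.x/logs/ActiveVOS-v01/SystemOut.log" ]
    start_position => "beginning"   # only applies to files with no sincedb entry yet
    type => "websphere"
    ignore_older => "8640000"       # 100 days in seconds, matching the other two inputs
    codec => multiline {
      pattern => "^\s"
      what => "previous"
    }
  }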

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.