Hi,
I installed ELK on a CentOS 7 machine and had it working with more than 10 GB of data. After two weeks I stopped Logstash and deleted all the data and indices from Kibana. It took three more weeks before I started it again, with more memory on the machine. Now I'm running a test for my new grok pattern, but it doesn't work as expected: Logstash runs correctly, yet I can't find my indices in Kibana.
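For reference, I understand I can also list the indices that actually exist in Elasticsearch directly (assuming the default endpoint on localhost:9200), e.g.:
curl -s 'http://127.0.0.1:9200/_cat/indices?v'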
Here is my .conf file:
input {
  file {
    path => "/home/admin/envoirments/mytests/02/messages.csv*"
    sincedb_path => "/dev/null"
    mode => "read"
    ignore_older => "37 d"
    file_completed_action => "delete"
  }
}

filter {
  grok {
    match => { "message" => "I'M NOT GOING TO SHOW MY GROK BECAUSE IT'S UNNECESSARY AND CONTAINS WORK INFORMATION." }
  }
}

output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "mytest_02"
  }
}
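To check whether events actually make it past the grok filter, I suppose I could temporarily add a stdout output with the rubydebug codec next to the elasticsearch output, something like:
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "mytest_02"
  }
  # temporary debug output: print every event to the Logstash console
  stdout { codec => rubydebug }
}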
And here are my Logstash logs:
[2019-08-09T12:05:09,926][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-08-09T12:06:37,669][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-08-09T12:06:37,826][INFO ][filewatch.observingread ] QUIT - closing all files and shutting down.
[2019-08-09T12:06:38,677][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>"main"}
[2019-08-09T12:06:38,930][INFO ][logstash.runner ] Logstash shut down.
[2019-08-09T12:07:02,721][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-08-09T12:07:09,604][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2019-08-09T12:07:09,837][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2019-08-09T12:07:09,889][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-08-09T12:07:09,893][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-08-09T12:07:09,924][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2019-08-09T12:07:10,035][INFO ][logstash.outputs.elasticsearch] Using default mapping template
[2019-08-09T12:07:10,150][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2019-08-09T12:07:10,324][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2019-08-09T12:07:10,329][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x45e78d92 run>"}
[2019-08-09T12:07:10,946][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-08-09T12:07:11,034][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-08-09T12:07:11,208][INFO ][filewatch.observingread ] START, creating Discoverer, Watch with file and sincedb collections
[2019-08-09T12:07:11,648][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I've read that many people solved similar problems by changing sincedb_path => "NUL" (which, as I understand it, is the Windows equivalent of the "/dev/null" I'm already using), as happened here and here (both thanks to Badger). That did not solve my problem.
Any recommendations?