Hi Magnus,
On the Discover page in Kibana, I couldn't see any documents for the index I had created.

I did see some entries with errors, so I deleted that index and tried to create a new one, but this time it seems the index wasn't created at all.
Here is my conf file:
input {
  s3 {
    access_key_id => "accesskey"
    secret_access_key => "secretkey"
    bucket => "gtologs"
    prefix => "dummy/"
    region => "us-east-1"
  }
}
filter {
  csv {
    columns => ["id","name","age","money"]
  }
}
output {
  elasticsearch {
    hosts => ["ec2-10-10-10-10.compute-1.amazonaws.com:9200"]
    index => "gto"
    document_id => "%{id}"
  }
}
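For context on what this pipeline is expected to do: the csv filter parses each event's message field into the named columns, and document_id => "%{id}" uses the parsed id column as the Elasticsearch _id. A minimal Python sketch of that per-line mapping (column names taken from the config above; this is only an illustration, not the plugin's implementation):

```python
import csv
import io

# Column names from the csv filter in the config above
COLUMNS = ["id", "name", "age", "money"]

def parse_line(line):
    """Mimic what the csv filter does to one event's message field:
    split the line on commas and zip the values with the column names."""
    values = next(csv.reader(io.StringIO(line)))
    return dict(zip(COLUMNS, values))

doc = parse_line("1234,jyoti,38,200000")
# document_id => "%{id}" would then use doc["id"] ("1234") as the _id
```

So each line read from S3 should become one document in the "gto" index, keyed by its id value.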
The CSV file:
[root@elkserver conf.d]# cat test.csv
id,name,age,money
1234,jyoti,38,200000
5678,rannjan,58,4000000
7890,panda,68,8000000
8904,jyoti ranjan panda,88,980000000
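One thing worth checking while debugging: the csv filter parses every line it receives, so unless the header line is dropped somewhere, "id,name,age,money" is parsed like any other row and a document with _id "id" can also show up in the index. A hypothetical pre-check in Python (the file path is an assumption, just to show the skip-header idea):

```python
import csv

def rows_without_header(path):
    """Read the CSV and skip the first (header) row, mirroring the
    filtering the pipeline would need, since the csv filter itself
    parses every incoming line including the header."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # drop the header line ("id,name,age,money")
        return [row for row in reader]
```

With the sample file above, this would return the four data rows and exclude the header.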
Output:
[root@elkserver conf.d]# /usr/share/logstash/bin/logstash -f s3.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-06-11 10:16:32.812 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-06-11 10:16:32.821 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-06-11 10:16:33.384 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-06-11 10:16:33.587 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-06-11 10:16:33.750 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-06-11 10:16:46.621 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-06-11 10:16:46.993 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/]}}
[INFO ] 2018-06-11 10:16:46.996 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/, :path=>"/"}
[WARN ] 2018-06-11 10:16:47.123 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/"}
[INFO ] 2018-06-11 10:16:47.351 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2018-06-11 10:16:47.351 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-06-11 10:16:47.355 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-06-11 10:16:47.358 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-06-11 10:16:47.365 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//ec2-10-10-10-10.compute-1.amazonaws.com:9200"]}
[INFO ] 2018-06-11 10:16:47.374 [[main]-pipeline-manager] s3 - Registering s3 input {:bucket=>"gtologs", :region=>"us-east-1"}
[INFO ] 2018-06-11 10:16:47.540 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xdca3909@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[INFO ] 2018-06-11 10:16:47.561 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
[INFO ] 2018-06-11 10:16:48.693 [[main]<s3] s3 - Using default generated file for the sincedb {:filename=>"/usr/share/logstash/data/plugins/inputs/s3/sincedb_0eb55c321eb3e531487f435a77836115"}
Could you tell me what went wrong here? Thanks for your ongoing support.
Regards,
Panneer S