Not able to create an index for files stored in S3

Hi Team,

I am able to create an index in Kibana when I keep the *.csv file on my local machine, but not when I keep it in an S3 bucket. Below are the config details.

input {
  s3 {
    access_key_id => "Access KEY"
    secret_access_key => "Secret Access Key"
    bucket => "gtologs"
    prefix => "test/" # (No folder has been created in the bucket. The file name is test.csv.)
    region => "us-east-1"
  }
}

filter {
  csv {
    columns => ["id","name","age","money"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "gto"
    document_id => "%{id}"
  }
  stdout { codec => rubydebug }
}

Can someone please help me fix this issue?

Regards,
Panneer S

prefix => "test/" # (No folder has been created in the bucket. The file name is test.csv.)

But test/ is not a prefix of test.csv.
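For what it's worth, prefix is a plain string match against the start of the S3 object key, so a key of test.csv can only match prefixes such as "test" or "te". A minimal sketch, assuming the file really sits at the bucket root as test.csv:

input {
  s3 {
    access_key_id => "Access KEY"
    secret_access_key => "Secret Access Key"
    bucket => "gtologs"
    # "test" is a prefix of the key "test.csv"; "test/" is not,
    # because the key contains no slash.
    prefix => "test"
    region => "us-east-1"
  }
}

Alternatively, move the file into a test/ folder in the bucket so its key becomes test/test.csv.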

Hi Magnus,

Thanks for the reply. I have made the necessary changes but I am still having the same issue.

Please find below the input config and output details for your reference.

Input conf:

input {
  s3 {
    access_key_id => "accesskey"
    secret_access_key => "secretkey"
    bucket => "gtologs"
    prefix => "dummy/test.csv"
    region => "us-east-1"
  }
}

filter {
  csv {
    columns => ["id","name","age","money"]
    separator => ","
  }
}

output {
  elasticsearch {
    hosts => ["ec2-10-10-10-10.compute-1.amazonaws.com:9200"]
    index => "ctsgto"
    document_id => "%{id}"
  }
  stdout { codec => rubydebug }
}

Output details:
[root@elkserver conf.d]# /usr/share/logstash/bin/logstash -f s3.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-06-11 06:23:21.912 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-06-11 06:23:21.919 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-06-11 06:23:22.453 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-06-11 06:23:22.643 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-06-11 06:23:22.834 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-06-11 06:23:35.910 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-06-11 06:23:36.284 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/]}}
[INFO ] 2018-06-11 06:23:36.286 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/, :path=>"/"}
[WARN ] 2018-06-11 06:23:36.424 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/"}
[INFO ] 2018-06-11 06:23:36.649 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2018-06-11 06:23:36.650 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-06-11 06:23:36.653 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-06-11 06:23:36.656 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-06-11 06:23:36.663 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//ec2-10-10-10-10.compute-1.amazonaws.com:9200"]}
[INFO ] 2018-06-11 06:23:36.672 [[main]-pipeline-manager] s3 - Registering s3 input {:bucket=>"gtologs", :region=>"us-east-1"}
[INFO ] 2018-06-11 06:23:36.742 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x64036b97@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}
[INFO ] 2018-06-11 06:23:36.774 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}

Please confirm: will the index take time to be created since the logs are in an S3 bucket?

I have been facing this issue for the past two weeks, so I kindly request your guidance to fix it.

Things to try:

  • Increase Logstash's log level (see the command sketch below).
  • Remove the prefix option altogether.
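
For the first point, a sketch of running the same s3.conf at debug level (the --log.level option is standard in Logstash 6.x):

[root@elkserver conf.d]# /usr/share/logstash/bin/logstash -f s3.conf --log.level debug

Also note that the s3 input only polls the bucket periodically (its interval option, 60 seconds by default), so a new object can take up to a minute to be picked up.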

Hi Magnus,

I removed the prefix and ran it, but it threw an error.

Then I changed the prefix to prefix => "dummy/" and ran it again. That worked and the index was created as expected, but I am unable to view any of the information from the *.csv file.

On the index page, I couldn't see any of the information from the Excel sheet. Kindly help.

Please find the logstash.log link below, for your reference.
https://s3.amazonaws.com/gtologs/logs/logstash-plain.log

Regards,
Panneer S

On the index page,

Which index page?

I couldn't see any of the information from the Excel sheet.

So what do you see?
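
If Discover shows nothing, it is also worth checking directly in Elasticsearch whether any documents were indexed at all, e.g. with a _count request (a sketch, assuming the host and index name from your config above):

[root@elkserver conf.d]# curl 'http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/ctsgto/_count?pretty'

A count of 0 means the events never reached Elasticsearch; a non-zero count points at the Kibana index pattern instead.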

Hi Magnus,

  1. Under the Discover page, I couldn't see any information for the created index.

  2. I saw some info with errors, so I deleted that index and tried to create a new one, but this time I couldn't even create an index.

Here is my conf file:
input {
  s3 {
    access_key_id => "accesskey"
    secret_access_key => "secretkey"
    bucket => "gtologs"
    prefix => "dummy/"
    region => "us-east-1"
  }
}

filter {
  csv {
    columns => ["id","name","age","money"]
  }
}

output {
  elasticsearch {
    hosts => ["ec2-10-10-10-10.compute-1.amazonaws.com:9200"]
    index => "gto"
    document_id => "%{id}"
  }
}

*.csv file:
[root@elkserver conf.d]# cat test.csv
id,name,age,money
1234,jyoti,38,200000
5678,rannjan,58,4000000
7890,panda,68,8000000
8904,jyoti ranjan panda,88,980000000
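
For reference, a row from this file passed through the csv filter above should come out as an event roughly like this (a sketch of rubydebug-style output for the first data row; metadata fields such as @timestamp are omitted, and csv values are strings by default):

{
        "id" => "1234",
      "name" => "jyoti",
       "age" => "38",
     "money" => "200000",
   "message" => "1234,jyoti,38,200000",
  "@version" => "1"
}

Note that the header line id,name,age,money will itself be parsed as an event (producing a document with id => "id") unless it is filtered out.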

Output:
[root@elkserver conf.d]# /usr/share/logstash/bin/logstash -f s3.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-06-11 10:16:32.812 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-06-11 10:16:32.821 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-06-11 10:16:33.384 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-06-11 10:16:33.587 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.4"}
[INFO ] 2018-06-11 10:16:33.750 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-06-11 10:16:46.621 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-06-11 10:16:46.993 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/]}}
[INFO ] 2018-06-11 10:16:46.996 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/, :path=>"/"}
[WARN ] 2018-06-11 10:16:47.123 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://ec2-10-10-10-10.compute-1.amazonaws.com:9200/"}
[INFO ] 2018-06-11 10:16:47.351 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>6}
[WARN ] 2018-06-11 10:16:47.351 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[INFO ] 2018-06-11 10:16:47.355 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[INFO ] 2018-06-11 10:16:47.358 [[main]-pipeline-manager] elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[INFO ] 2018-06-11 10:16:47.365 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//ec2-10-10-10-10.compute-1.amazonaws.com:9200"]}
[INFO ] 2018-06-11 10:16:47.374 [[main]-pipeline-manager] s3 - Registering s3 input {:bucket=>"gtologs", :region=>"us-east-1"}
[INFO ] 2018-06-11 10:16:47.540 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xdca3909@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[INFO ] 2018-06-11 10:16:47.561 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}
[INFO ] 2018-06-11 10:16:48.693 [[main]<s3] s3 - Using default generated file for the sincedb {:filename=>"/usr/share/logstash/data/plugins/inputs/s3/sincedb_0eb55c321eb3e531487f435a77836115"}

Can you tell me what went wrong here? Thanks for your ongoing support.

Regards,
Panneer S

I saw some info with errors, so I deleted that index and tried to create a new one, but this time I couldn't even create an index.

Did you also delete the sincedb file where Logstash's current position in the S3 files is stored?
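
Your log above shows the default location ("Using default generated file for the sincedb"). A minimal sketch of clearing it, with Logstash stopped first (the hash suffix in the file name will differ per configuration):

[root@elkserver conf.d]# rm /usr/share/logstash/data/plugins/inputs/s3/sincedb_*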

Hi Magnus,

Yes. I deleted all the sincedb files and created a new index, and I am able to see the documents under the Discover page, but with the below error:

12 Visualize: Zero or negative time interval not supported

Error: Request to Elasticsearch failed: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Zero or negative time interval not supported"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"filebeat-6.2.4-2018.06.08","node":"KbysAteuSbaOVVsL-IktGA","reason":{"type":"illegal_argument_exception","reason":"Zero or negative time interval not supported"}}]},"status":400}
at http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/commons.bundle.js?v=16627:1:712440
at Function.Promise.try (http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/commons.bundle.js?v=16627:1:503538)
at http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/commons.bundle.js?v=16627:1:502926
at Array.map (<anonymous>)
at Function.Promise.map (http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/commons.bundle.js?v=16627:1:502884)
at callResponseHandlers (http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/commons.bundle.js?v=16627:1:712018)
at http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/commons.bundle.js?v=16627:1:701368
at processQueue (http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/vendors.bundle.js?v=16627:58:132456)
at http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/vendors.bundle.js?v=16627:58:133349
at Scope.$digest (http://ec2-10-10-10-10.compute-1.amazonaws.com:5601/bundles/vendors.bundle.js?v=16627:58:144239)

Regards,
Panneer S

Please find the link to the error screenshot.

Regards,
Panneer S

Hi Magnus,

Now I am able to read the S3 logs with Logstash and view the indexes in Kibana without issues.

Regards,
Panneer S

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.