Unable to see indexes in Logstash

Hi Team,

I am able to run the following configuration and get the output shown below, but no index is being created via Logstash. Could you kindly assist and let me know if I am missing anything here?

Also, could you please share a link to documentation on how to process *.gz files?
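On the *.gz question: as a quick local sanity check (a minimal sketch, independent of Logstash, using a generated sample file rather than a real S3 object), you can verify that a gzipped CSV parses into the same columns the csv filter expects using only Python's standard library:

```python
import csv
import gzip
import os
import tempfile

# Create a small sample .gz file (a stand-in for an object from the S3 bucket).
path = os.path.join(tempfile.mkdtemp(), "sample.csv.gz")
with gzip.open(path, mode="wt", newline="") as fh:
    fh.write("1,alice,30,100\n2,bob,25,200\n")

# Read it back and map each row to the same columns used in the csv filter.
columns = ["id", "name", "age", "money"]
with gzip.open(path, mode="rt", newline="") as fh:
    for row in csv.reader(fh):
        print(dict(zip(columns, row)))
        # → {'id': '1', 'name': 'alice', 'age': '30', 'money': '100'} (first row)
```

If this prints the expected rows, the data itself is fine and the problem is in the pipeline, not the files.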

input {
  s3 {
    access_key_id => "AccessKey"
    secret_access_key => "SecretKey"
    bucket => "gtologstash"
    prefix => "test/"
    interval => 60
    codec => "plain"
    region => "us-east-1"
  }
}

filter {
  csv {
    columns => ["id","name","age","money"]
    #separator => "\t"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dummy"
    document_id => "%{id}"
    codec => rubydebug {
      metadata => true
    }
  }
}

Output:
[root@elkserver conf.d]# /usr/share/logstash/bin/logstash -f s3.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/confe defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.pr
[INFO ] 2018-06-04 07:23:59.711 [main] scaffold - Initializing module {:module_n
[INFO ] 2018-06-04 07:23:59.718 [main] scaffold - Initializing module {:module_n
[WARN ] 2018-06-04 07:24:00.278 [LogStash::Runner] multilocal - Ignoring the 'pi
[INFO ] 2018-06-04 07:24:00.568 [LogStash::Runner] runner - Starting Logstash {"
[INFO ] 2018-06-04 07:24:00.750 [Api Webserver] agent - Successfully started Log
[INFO ] 2018-06-04 07:24:13.670 [Ruby-0-Thread-1: /usr/share/logstash/vendor/buneline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.b
[INFO ] 2018-06-04 07:24:14.068 [[main]-pipeline-manager] elasticsearch - Elastimpute-1.amazonaws.com:9200/]}}
[INFO ] 2018-06-04 07:24:14.071 [[main]-pipeline-manager] elasticsearch - Runnin//ec2-52-104-156-9.compute-1.amazonaws.com:9200/, :path=>"/"}
[WARN ] 2018-06-04 07:24:14.206 [[main]-pipeline-manager] elasticsearch - Restor00/"}
[INFO ] 2018-06-04 07:24:14.449 [[main]-pipeline-manager] elasticsearch - ES Out
[WARN ] 2018-06-04 07:24:14.450 [[main]-pipeline-manager] elasticsearch - Detectnt _type {:es_version=>6}
[INFO ] 2018-06-04 07:24:14.464 [[main]-pipeline-manager] elasticsearch - Using
[INFO ] 2018-06-04 07:24:14.467 [[main]-pipeline-manager] elasticsearch - Attemp "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"dynamg", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@ti"properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitud
[INFO ] 2018-06-04 07:24:14.483 [[main]-pipeline-manager] elasticsearch - New El46-92.compute-1.amazonaws.com:9200"]}
[INFO ] 2018-06-04 07:24:14.498 [[main]-pipeline-manager] s3 - Registering s3 in
[INFO ] 2018-06-04 07:24:14.591 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bunsfully {:pipeline_id=>"main", :thread=>"#<Thread:0x183f614b@/usr/share/logstash/
[INFO ] 2018-06-04 07:24:14.627 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bun>1, :pipelines=>["main"]}

Remove the codec setting from your elasticsearch output.
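For reference, the output section without the codec would look roughly like this (a sketch based on the configuration above; host and index names unchanged):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dummy"
    document_id => "%{id}"
  }
}
```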

Hi Magnus,

Thanks for your reply. I removed the codec setting from the ES output and ran it again, but I am still facing the same issue.
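One way to narrow this down (a debugging sketch, not a fix): temporarily add a stdout output alongside the elasticsearch one, so you can see whether any events are actually being read from S3 at all. If nothing prints, the problem is on the input side (credentials, bucket, or prefix) rather than in the Elasticsearch output:

```
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dummy"
    document_id => "%{id}"
  }
}
```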

Regards,
Panneer S

Hi,

Can anyone please help me on this issue.

Regards,
Panneer S

Please share logstash.log and elasticsearch.log.

Hi Vishal,

Thanks for your reply. Please find the link to the logs below, for your reference.

https://s3.amazonaws.com/gtologs/Logs.7z

Please note, I was able to create indexes in Kibana when I had the *.csv files on the local machine, but it does not happen when they are in S3. I hope this helps you understand the current issue and provide a resolution.

Regards,
Panneer S

Hi Team,

Can anyone please help me on this issue.

Regards,
Panneer S

Hi Vishal,

It would be very helpful and allow me to move forward if you could share your findings once you have finished analyzing the log files.

Thanks for your ongoing support.

Regards,
Panneer S

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.