Logstash not indexing s3 input

Hi,

I have an issue that is difficult to troubleshoot.

I had set up an architecture with Filebeat as a forwarder on my web APIs, a Logstash instance listening to Filebeat on port 5044 on a dedicated machine, and indexing log messages into an Elasticsearch cluster.

I then decided to update the architecture like this: Filebeat as a forwarder on my web APIs, a Redis broker (managed in AWS), a Logstash indexer on a dedicated machine, and the Elasticsearch cluster to store the indices.
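
To summarize the change:

Before : Filebeat (web APIs) -> Logstash (beats input on :5044) -> Elasticsearch
After  : Filebeat (web APIs) -> Redis broker (AWS) -> Logstash indexer -> Elasticsearch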

On the Logstash indexer I have configured two input sources, Redis and AWS S3, like this:

input {
  # Logs ELB API
  s3 {
    bucket   => "s3.prod.elb.logs.eu-west-1.mydomain"
    prefix   => "elb_api/AWSLogs/653588882345/elasticloadbalancing/"
    interval => 60
    region   => "eu-west-1"
    type     => "elb_access_log"
  }

  # Logs REDIS API
  redis {
    data_type   => "list"
    batch_count => 100
    key         => "filebeat"
    host        => "redis.prod.eu-west-1.mydomain.com"
  }
}
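
For completeness, the output section is nothing special, just a plain elasticsearch output along these lines (the host name here is anonymized, not my real one):

output {
  elasticsearch {
    # host anonymized
    hosts => ["elasticsearch.prod.internal:9200"]
  }
}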

In the first architecture, without the Redis broker, Logstash was indexing my API logs from Filebeat as well as the ELB logs from S3.

But since the change, Logstash has simply stopped indexing the S3 input, without any errors. Now I only have my API logs, indexed from the Redis broker.

I restarted the Logstash service many times without any change; I still only have the API logs in Elasticsearch.

I didn't find any way to debug S3 indexing while Logstash is running as a service/daemon, so I stopped the service and ran it like this:

/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/main.conf --debug

Run this way, I can see Logstash indexing both the S3 and Redis inputs in real time.

When I stop the above foreground command and restart the service in the background, I no longer get any ELB logs in my Elasticsearch indices.

Can you please tell me where the problem might be? Is there a way to activate S3 debugging when Logstash runs as a daemon?
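
The only idea I have (an assumption on my part, I haven't verified that the init script picks it up) is to pass the flag through the package defaults file:

# /etc/default/logstash -- assuming the init script appends LS_OPTS to the command line
LS_OPTS="--debug"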

Thanks for your help.

Regards

Hi,

I still haven't found a solution to my issue.

I removed the Redis input configuration and created a single S3-only input configuration file, and I still get no S3 messages in my indices, nor any error messages.

Can someone please tell me where this behavior might come from?

I've checked carefully, and it's not a matter of AWS rights or credentials: I can get, put, and even delete objects in the S3 bucket from the Logstash instance's command line.
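
These are roughly the checks I ran from the Logstash machine (same bucket as in the config; the object keys below are just examples):

aws s3 ls s3://s3.prod.elb.logs.eu-west-1.mydomain/elb_api/AWSLogs/653588882345/elasticloadbalancing/
aws s3 cp s3://s3.prod.elb.logs.eu-west-1.mydomain/elb_api/AWSLogs/653588882345/elasticloadbalancing/example_object.log /tmp/
aws s3 rm s3://s3.prod.elb.logs.eu-west-1.mydomain/example_test_object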

However, when I use both the Redis and S3 input configurations, I get this mapping parsing exception on one API message field (errorDescription) while indexing API input messages:

{"_index"=>"logstash-2016.12.07", "_type"=>"api_error_log", "_id"=>"AVjZT1Xx8yUuqXGaUBHF", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [errorDescription] tried to parse field [errorDescription] as object, but found a concrete value"}}}, :level=>:warn}

As you can see, it only concerns API messages, not S3 inputs, but do you think this exception can prevent Logstash from indexing messages from other sources/types?

Concerning the mapping error, I tried to ignore the above exception by configuring the index module like this in Elasticsearch:

index:
  mapping:
    ignore_malformed: true

But I still get the mapping parsing exception.
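
A workaround I'm considering on the Logstash side (just a sketch, not tested yet; the target field name is my own invention) is to move the value out of the conflicting field:

filter {
  if [type] == "api_error_log" and [errorDescription] {
    mutate {
      # hypothetical workaround: rename the field so a plain string value no
      # longer collides with the existing object mapping for [errorDescription]
      rename => { "errorDescription" => "errorDescriptionText" }
    }
  }
}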

Do you think the S3 indexing issue could lie between Logstash and Elasticsearch? How can I check this accurately, please?

I'd appreciate some more troubleshooting pointers.

Thanks.

Regards.
