Logstash taking too long to process data

@Christian_Dahlqvist, I used

curl -u user:password -XPUT elasticsearchip1:9200_template/template_1 -d '{"template": "*","settings": {"number_of_shards": 1},"mappings": {"type1": {"_source": {"enabled": false},"properties": {"host_name": {"type": "keyword"},"created_at": {"type": "date","format": "EEE MMM dd HH:mm:ss Z YYYY"}}}}}'

to define a template for all indices that get created automatically based on the source field, but this is not working. Indices are still being created automatically with the shard count set to 5.

Is there another way to get that done?

Now, regarding the hardware change: I will first reduce the shards and see how it behaves, because I don't want to end up buying a new hard drive only to find that the situation doesn't change.

Thanks.

My bad. Instead of

curl -u user:password -XPUT elasticsearchip1:9200_template/template_1 -d '{"template": "*","settings": {"number_of_shards": 1},"mappings": {"type1": {"_source": {"enabled": false},"properties": {"host_name": {"type": "keyword"},"created_at": {"type": "date","format": "EEE MMM dd HH:mm:ss Z YYYY"}}}}}'

I should have run

curl -u user:password -XPUT elasticsearchip1:9200/_template/template_1 -d '{"template": "*","settings": {"number_of_shards": 1},"mappings": {"type1": {"_source": {"enabled": false},"properties": {"host_name": {"type": "keyword"},"created_at": {"type": "date","format": "EEE MMM dd HH:mm:ss Z YYYY"}}}}}'

Notice the / between the port and _template, which was missing in my first command.
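A quick way to sanity-check the fix is to confirm the template is now registered. A minimal sketch, assuming the same host and credentials as in the thread; the URL is only echoed here, and the actual curl call (commented out) needs a live Elasticsearch node:

```shell
# Host and endpoint from the thread. Note the / between the port and
# _template -- without it, "9200_template" is treated as part of the
# host:port and the request never reaches the templates endpoint.
ES="elasticsearchip1:9200"
URL="$ES/_template/template_1"
echo "$URL"
# Against a live cluster:
# curl -u user:password -XGET "$URL"
```

This prints elasticsearchip1:9200/_template/template_1; running the GET against the cluster should return the stored template with "number_of_shards": "1".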

Now Logstash is processing 845299 logs in 176996 ms (≈2.9 minutes), which is acceptable.
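For reference, that throughput works out to roughly 4800 events per second:

```shell
# 845299 logs in 176996 ms -> events per second (integer arithmetic)
echo $((845299 * 1000 / 176996))
# prints 4775
```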

Thanks @Christian_Dahlqvist and @guyboertje, you know your stuff :wink:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.