Logstash taking too long to process data

@Christian_Dahlqvist thanks for the reply.
My indexing rate is ~4,959 documents per second. I'm indexing logs; a typical example of such a document is:

{
  "_index": "myindexLog",
  "_type": "applogs",
  "_id": "AVn9dBqSuCQnz8OHTnYX",
  "_score": null,
  "_source": {
    "@timestamp": "2017-02-02T06:12:34.588Z",
    "offset": 8705,
    "beat": {
      "hostname": "london208",
      "name": "london208",
      "version": "5.1.1"
    },
    "input_type": "log",
    "@version": "1",
    "source": "myindexLog",
    "message": "  myindexLog 2017-02-02 01:12:30 INFO  TaskSetManager:54 - Finished task 3.0 in stage 20086.0 (TID 1352263) in 15 ms on 192.168.0.201 (executor 0) (4/200)",
    "type": "applogs"
  },
  "fields": {
    "@timestamp": [
      1486015954588
    ]
  },
  "sort": [
    1486015954588
  ]
}

The average size of a document is ~372.61 bytes.

Now, regarding the sharding and indexing policy: can you please tell me how I can reduce the number of shards and replicas? My indices are created automatically, based on the source field of each event.
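For context, index-per-source naming like this typically comes from a Logstash elasticsearch output along these lines (the hosts and credentials below are placeholders, not my real config):

```
output {
  elasticsearch {
    hosts    => ["elasticsearchip1:9200"]
    user     => "user"
    password => "password"
    # One index per value of the source field, created on first write
    index    => "%{source}"
  }
}
```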

Because of that, they are created with the defaults of 5 shards and 5 replicas. I tried setting index.number_of_shards and index.number_of_replicas in elasticsearch.yml, but that no longer works with Elasticsearch 5.x, as explained in ES 5 wont start with config setting index.number_of_shards · Issue #18073 · elastic/elasticsearch · GitHub.
But when I run

curl -u user:password -XPUT 'elasticsearchip1:9200/_all/_settings?preserve_existing=true' -d '{
   "index.number_of_shards" : "1"
}'

I get:

{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[london205][elasticsearchip2:9300][indices:admin/settings/update]"}],"type":"illegal_argument_exception","reason":"can't change the number of shards for an index"},"status":400}
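From what I understand, number_of_shards is a static setting that can only be set when an index is created, which would explain the error above. One workaround I'm considering is an index template so that newly created indices get the settings I want (the template name and pattern here are just placeholders):

```
# Create an index template: any NEW index matching "template" is created
# with 1 shard and 1 replica. Existing indices are unaffected; changing
# their shard count would require reindexing (or the _shrink API).
curl -u user:password -XPUT 'elasticsearchip1:9200/_template/default_shards' -d '{
  "template": "*",
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}'
```

If I understand correctly, number_of_replicas (unlike number_of_shards) is dynamic, so it can still be lowered on existing indices through the _all/_settings call I tried above.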

Any clue please?