Logstash is running a pipeline started from the command line with `-w 1` (a single pipeline worker).
After starting successfully, it sends some data to Elasticsearch, but after a while it is no longer able to send data and logs the following error:
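For context, a pipeline launched this way looks roughly like the following; the binary path and config file name are placeholders, only the `-w 1` worker setting is taken from the description above:

```
# Assumed invocation; paths are illustrative.
bin/logstash -f pipeline.conf -w 1
```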
[2019-11-22T00:00:13,210][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://xxxxx:9210/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://xxxxxx:9210/][Manticore::ConnectTimeout] connect timed out"}
Also, the server hosting the Elasticsearch nodes is low on memory (only 270-350 MB of RAM free).
Can anyone tell me how to avoid this issue, so that Logstash can send data to Elasticsearch continuously?
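For reference, this is roughly how my `elasticsearch` output is configured; the retry/timeout values shown are the plugin's documented options with illustrative values, not necessarily what I have set:

```
output {
  elasticsearch {
    hosts => ["http://xxxxx:9210"]
    timeout => 60                 # request timeout in seconds
    retry_initial_interval => 2   # seconds before first retry
    retry_max_interval => 64      # cap on the retry backoff
    resurrect_delay => 5          # how often dead hosts are re-checked
  }
}
```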
I have queried http://xxxxxxx:9200/_nodes/thread_pool and got the following result:
"thread_pool": {
"watcher": {
"type": "fixed",
"min": 20,
"max": 20,
"queue_size": 1000
},
"force_merge": {
"type": "fixed",
"min": 1,
"max": 1,
"queue_size": -1
},
"security-token-key": {
"type": "fixed",
"min": 1,
"max": 1,
"queue_size": 1000
},
"ml_datafeed": {
"type": "fixed",
"min": 20,
"max": 20,
"queue_size": 200
},
"fetch_shard_started": {
"type": "scaling",
"min": 1,
"max": 8,
"keep_alive": "5m",
"queue_size": -1
},
"listener": {
"type": "fixed",
"min": 2,
"max": 2,
"queue_size": -1
},
"ml_autodetect": {
"type": "fixed",
"min": 80,
"max": 80,
"queue_size": 80
},
"index": {
"type": "fixed",
"min": 4,
"max": 4,
"queue_size": 200
},
"refresh": {
"type": "scaling",
"min": 1,
"max": 2,
"keep_alive": "5m",
"queue_size": -1
},
"generic": {
"type": "scaling",
"min": 4,
"max": 128,
"keep_alive": "30s",
"queue_size": -1
},
"rollup_indexing": {
"type": "fixed",
"min": 4,
"max": 4,
"queue_size": 4
},
"warmer": {
"type": "scaling",
"min": 1,
"max": 2,
"keep_alive": "5m",
"queue_size": -1
},
"search": {
"type": "fixed_auto_queue_size",
"min": 7,
"max": 7,
"queue_size": 1000
},
"ccr": {
"type": "fixed",
"min": 32,
"max": 32,
"queue_size": 100
},
"flush": {
"type": "scaling",
"min": 1,
"max": 2,
"keep_alive": "5m",
"queue_size": -1
},
"fetch_shard_store": {
"type": "scaling",
"min": 1,
"max": 8,
"keep_alive": "5m",
"queue_size": -1
},
"management": {
"type": "scaling",
"min": 1,
"max": 5,
"keep_alive": "5m",
"queue_size": -1
},
"ml_utility": {
"type": "fixed",
"min": 80,
"max": 80,
"queue_size": 500
},
"get": {
"type": "fixed",
"min": 4,
"max": 4,
"queue_size": 1000
},
"analyze": {
"type": "fixed",
"min": 1,
"max": 1,
"queue_size": 16
},
"write": {
"type": "fixed",
"min": 4,
"max": 4,
"queue_size": 200
},
"snapshot": {
"type": "scaling",
"min": 1,
"max": 2,
"keep_alive": "5m",
"queue_size": -1
},
"search_throttled": {
"type": "fixed_auto_queue_size",
"min": 1,
"max": 1,
"queue_size": 100
}
}
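A response like the one above can also be inspected programmatically, e.g. to see which pools have a bounded queue (`queue_size != -1`) and could therefore reject work under load. This is a minimal sketch using a hand-copied subset of the pools shown above; the dict would normally come from parsing the full `_nodes/thread_pool` response:

```python
# Subset of the _nodes/thread_pool response above (copied by hand).
thread_pool = {
    "write": {"type": "fixed", "min": 4, "max": 4, "queue_size": 200},
    "search": {"type": "fixed_auto_queue_size", "min": 7, "max": 7, "queue_size": 1000},
    "generic": {"type": "scaling", "min": 4, "max": 128, "keep_alive": "30s", "queue_size": -1},
}

def bounded_pools(pools):
    """Return the names of pools whose queue is bounded (queue_size != -1)."""
    return sorted(name for name, cfg in pools.items() if cfg["queue_size"] != -1)

# Pools with a bounded queue can reject requests once the queue is full.
print(bounded_pools(thread_pool))
```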