Filebeat and Winlogbeat problem

Hi,

For a couple of days now I have had a problem getting logs into Kibana (Filebeat and Winlogbeat).
All of these logs are shipped through Logstash.

Logstash error logs:

[2019-07-11T12:34:12,246][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-7.0.1-2019.07.08][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-7.0.1-2019.07.08][0]] containing [6] requests]"})
[2019-07-11T12:34:12,246][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[filebeat-7.0.1-2019.07.08][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[filebeat-7.0.1-2019.07.08][0]] containing [6] requests]"})
[2019-07-11T12:34:12,246][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>6}
[2019-07-11T12:34:13,470][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://atdevxhv03.emea.nsn-net.net:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://atdevxhv03.emea.nsn-net.net:9200/, :error_message=>"Elasticsearch Unreachable: [http://atdevxhv03.emea.nsn-net.net:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2019-07-11T12:34:13,471][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://atdevxhv03.emea.nsn-net.net:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[2019-07-11T12:34:14,821][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
[2019-07-11T12:34:15,472][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[2019-07-11T12:34:15,735][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://atdevxhv03.emea.nsn-net.net:9200/"}
[root@atdevxhv03 logstash]#
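
Since the 503s say the primary shard is not active, I assume the next thing to check is which shards are unassigned and why. Something like this from Kibana Dev Tools should show it (the _cat column names are what I found for 7.x, so they may need adjusting):

GET _cluster/health?level=indices
GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason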

And this is the output of the Filebeat connectivity test:
[root@atdevxhv03 logstash]# filebeat test output
logstash: atdevxhv03.emea.nsn-net.net:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 10.158.67.175
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK

For some reason, no filebeat or winlogbeat indices are being created.
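
To confirm they really are missing on the Elasticsearch side, I assume something like this from Dev Tools would list whatever filebeat/winlogbeat indices exist:

GET _cat/indices/filebeat-*,winlogbeat-*?v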

Do you have any idea?

Best Regards,
Thanos

Here is the output of the allocation explain API for one of the unassigned shards:

GET /_cluster/allocation/explain

{
  "index" : ".monitoring-es-7-2019.07.09",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "ALLOCATION_FAILED",
    "at" : "2019-07-09T08:50:33.586Z",
    "failed_allocation_attempts" : 5,
    "details" : "failed shard on node [OETgEHqTR9Ku30WwPWADyg]: failed to create shard, failure IOException[failed to obtain in-memory shard lock]; nested: ShardLockObtainFailedException[[.monitoring-es-7-2019.07.09][0]: obtaining shard lock timed out after 5000ms, previous lock details: [shard creation] trying to lock for [shard creation]]; ",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes that hold an in-sync shard copy",
  "node_allocation_decisions" : [
    {
      "node_id" : "14b6D9VCR36schkKD3k74A",
      "node_name" : "xh-fr-elastic-2",
      "transport_address" : "135.238.239.132:9300",
      "node_attributes" : {
        "ml.machine_memory" : "269930721280",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "in_sync" : false,
        "allocation_id" : "G6Psm4HaTyekZGsP1POQ_g"
      }
    },
    {
      "node_id" : "6q5asfwjQ_eoI3xkl2-JXg",
      "node_name" : "xh-gr-elastic-1",
      "transport_address" : "10.158.67.175:9300",
      "node_attributes" : {
        "ml.machine_memory" : "16654884864",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "OETgEHqTR9Ku30WwPWADyg",
      "node_name" : "xh-gr-elastic-2",
      "transport_address" : "10.159.166.9:9300",
      "node_attributes" : {
        "ml.machine_memory" : "269930721280",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "in_sync" : true,
        "allocation_id" : "i2O_it0NRce7ZjGTedPrnw"
      },
      "deciders" : [
        {
          "decider" : "max_retry",
          "decision" : "NO",
          "explanation" : "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2019-07-09T08:50:33.586Z], failed_attempts[5], delayed=false, details[failed shard on node [OETgEHqTR9Ku30WwPWADyg]: failed to create shard, failure IOException[failed to obtain in-memory shard lock]; nested: ShardLockObtainFailedException[[.monitoring-es-7-2019.07.09][0]: obtaining shard lock timed out after 5000ms, previous lock details: [shard creation] trying to lock for [shard creation]]; ], allocation_status[deciders_no]]]"
        }
      ]
    },
    {
      "node_id" : "Vwfbqe-rTeCaWtWG5zlNgA",
      "node_name" : "xh-gr-elastic-3",
      "transport_address" : "10.158.67.107:9300",
      "node_attributes" : {
        "ml.machine_memory" : "17179332608",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "_BIH8swLQz6rJa_ImpYJuA",
      "node_name" : "xh-it-elastic-2",
      "transport_address" : "151.98.17.34:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8186568704",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "in_sync" : false,
        "allocation_id" : "AvYpapXlS2WHslFYceeKrg"
      }
    },
    {
      "node_id" : "isTX9Dk7SMSaP3GARPtU9A",
      "node_name" : "xh-fr-elastic-1",
      "transport_address" : "135.238.239.48:9300",
      "node_attributes" : {
        "ml.machine_memory" : "16654970880",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "uHTHPU56QfmRVfP29OsS0Q",
      "node_name" : "xh-it-elastic-1",
      "transport_address" : "151.98.17.60:9300",
      "node_attributes" : {
        "ml.machine_memory" : "34359738368",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    }
  ]
}
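
If I read the max_retry decider above correctly, the shard is no longer being retried after 5 failed allocation attempts, so I assume the next step is the one the explanation itself suggests, i.e. asking the cluster to retry the failed allocations (though I'm not sure it fixes the underlying shard-lock timeout):

POST /_cluster/reroute?retry_failed=true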
