Add docker metadata not working even after trying all possible combinations


(diya) #1

I am able to visualise the Docker logs in Kibana, but only with the container ID. I want to add the container name to my logs for easy filtering. I have tried add_docker_metadata: ~ but in vain, and I have tried different possible combinations, yet container names are not displayed in Kibana. Is there any other way? Please help me ASAP, as I have been stuck on this for a long time and urgently need to complete the task. Thanks in advance.
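(For context, a minimal filebeat.yml sketch that enables the processor in question, assuming Filebeat 6.x running on the Docker host; the input section is illustrative and not taken from the poster's actual config:)

```yaml
filebeat.inputs:
  - type: docker
    containers.ids:
      - "*"                      # collect logs from every running container

processors:
  - add_docker_metadata: ~       # enriches each event with docker.container.name, image and labels
```

If the processor loads correctly, every event should carry a docker.container.name field that can be used for filtering in Kibana.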


(xeraa) #2
  1. Are you running the beat on the host or also in a container (as a sidecar)?
  2. Please show your configuration and the logs. There is no way for us to guess what is going on.

(diya) #4

Thank you for the reply! Please help me resolve this issue. I want to add the container name to my log files for display in Kibana.


(diya) #5

My Filebeat service is working fine now after I made some indentation changes to filebeat.yml. The only problem that persists is fetching container names. Please help!


(xeraa) #6

Then please provide your current config and log output as text (formatted as a code block).

There should be a log line about the processors you are loading. And it looks like you are not running on a cloud, so remove the add_cloud_metadata first.


(diya) #7

I have removed it and my Docker metadata is working fine. I am able to fetch container names after setting the prospectors to true. But now Filebeat is not fetching logs from all files; it fetches from only 2 containers, even though my log path is /var/log/docker/containers/*/*.log. Previously it was working fine. Can you suggest anything?
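(The path above is normally written with two globs so that every per-container directory is matched. A sketch, keeping the poster's /var/log/docker base path, although Docker's default data root is usually /var/lib/docker:)

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/docker/containers/*/*.log   # one subdirectory per container ID
    json.message_key: log                    # Docker's json-file driver writes JSON log lines
```

If only some containers show up, it is worth confirming the glob actually matches all the container directories, e.g. with ls /var/log/docker/containers/*/*.log on the host.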


(xeraa) #8

Again, we need the logs. There you will find an entry for every file that is being collected.

Taking a wild guess: only those 2 containers have more recent log entries. Remove Filebeat's registry file and restart it to collect everything again.
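(The registry reset suggested above can be sketched as follows, assuming a package install of Filebeat 6.x with the default data path; adjust the paths to your layout:)

```shell
sudo systemctl stop filebeat
sudo rm /var/lib/filebeat/registry   # forget all recorded read offsets
sudo systemctl start filebeat        # every matching file is read again from the beginning
```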


(diya) #10

I removed the registry and restarted, but it is still the same issue.


(xeraa) #11
  1. Please stop the screenshots and paste your outputs. It's a pain to read and nobody running into the same issue will be able to find it since it's not searchable text.
  2. Do you happen to have tried this with an older Filebeat version and are using the index template of that? A GET _template in Console should show you more.
  3. "when i run the filebeat in debug mode am visualising all the logs but they are not showing up in kibana" — not sure how to read this. Is it being shown correctly in Kibana or not?
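(The template check in point 2 can be run from Kibana's Dev Tools Console, or equivalently with curl against Elasticsearch; localhost:9200 is an assumed address:)

```shell
# list installed index templates; look for a filebeat template left over from an older version
curl -s 'http://localhost:9200/_template/filebeat-*?pretty'
```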

(diya) #12

Sorry, I will paste them from now on! Kibana was working fine, but now I am not able to reach it from a remote machine, even though the Kibana service shows as active on the server.
These are the logs I get while fetching the Kibana status:


(diya) #15

{"type":"error","@timestamp":"2018-11-27T11:51:21Z","tags":["warning","stats-collection"],"pid":20736,"level":"error","error":{"message":"[illegal_index_shard_state_exception] CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED], with { index_uuid="qK_E17FQRVS2ck-5AYhCjA" & shard="0" & index=".kibana_1" }","name":"Error","stack":"[illegal_index_shard_state_exception] CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED], with { index_uuid="qK_E17FQRVS2ck-5AYhCjA" & shard="0" & index=".kibana_1" } :: {"path":"/.kibana/doc/kql-telemetry%3Akql-telemetry","query":{},"statusCode":503,"response":"{"error":{"root_cause":[{"type":"illegal_index_shard_state_exception","reason":"CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED]","index_uuid":"qK_E17FQRVS2ck-5AYhCjA","shard":"0","index":".kibana_1"}],"type":"no_shard_available_action_exception","reason":"No shard available for [get [.kibana][doc][kql-telemetry:kql-telemetry]: routing [null]]","caused_by":{"type":"illegal_index_shard_state_exception","reason":"CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED]","index_uuid":"qK_E17FQRVS2ck-5AYhCjA","shard":"0","index":".kibana_1"}},"status":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. 
(/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:138:11)\n at process._tickDomainCallback (internal/process/next_tick.js:218:9)"},"message":"[illegal_index_shard_state_exception] CurrentState[RECOVERING] operations only allowed when shard state is one of [POST_RECOVERY, STARTED], with { index_uuid="qK_E17FQRVS2ck-5AYhCjA" & shard="0" & index=".kibana_1" }"}
Nov 27 05:51:21 kibana[20736]: {"type":"log","@timestamp":"2018-11-27T11:51:21Z","tags":["warning","stats-collection"],"pid":20736,"message":"Unable to fetch data from kql collector"}
Nov 27 05:51:21 kibana[20736]: {"type":"error","@timestamp":"2018-11-27T11:51:21Z","tags":["warning","stats-collection"],"pid":20736,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_search","query":{"ignore_unavailable":true,"filter_path":"aggregations.types.buckets"},"body":"{"size":0,"query":{"terms":{"type":["dashboard","visualization","search","index-pattern","graph-workspace","timelion-sheet"]}},"aggs":{"types":{"terms":{"field":"type","size":6}}}}","statusCode":503,"response":"{"error":{"root_cause":,"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":},"status":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:138:11)\n at process._tickDomainCallback (internal/process/next_tick.js:218:9)"},"message":"[search_phase_execution_exception] all shards failed"}
Nov 27 05:51:21 kibana[20736]: {"type":"log","@timestamp":"2018-11-27T11:51:21Z","tags":["warning","stats-collection"],"pid":20736,"message":"Unable to fetch data from kibana collector"}
kibana[20736]: {"type":"error","@timestamp":"2018-11-27T11:51:21Z","tags":["warning","stats-collection"],"pid":20736,"level":"error","error":{"message":"[search_phase_execution_exception] all shards failed","name":"Error","stack":"[search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_search","query":{"size":1000,"ignore_unavailable":true,"filter_path":"hits.hits._id"},"body":"{"query":{"bool":{"filter":{"term":{"index-pattern.type":"rollup"}}}}}","statusCode":503,"response":"{"error":{"root_cause":,"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":},"status":503}"}\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:308:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:267:7)\n at HttpConnector. (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4949:19)\n at emitNone (events.js:111:20)\n at IncomingMessage.emit (events.js:208:7)\n at endReadableNT (_stream_readable.js:1064:12)\n at _combinedTickCallback (internal/process/next_tick.js:138:11)\n at process._tickDomainCallback (internal/process/next_tick.js:218:9)"},"message":"[search_phase_execution_exception] all shards failed"}