Hi there!
First, my scenario: Logstash ingests from AWS CloudWatch Logs (CWL) and indexes into an AWS Elasticsearch (ES) domain.
We have application logs stored in Amazon CloudWatch Logs.
E.g. under Test-Log-Group there are App1-Log-Stream, App2-Log-Stream, App3-Log-Stream, App4-Log-Stream, and App5-Log-Stream.
These log streams receive logs continuously.
The plugins in use are the logstash-input-cloudwatch_logs input plugin and the logstash-output-amazon_es output plugin.
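For reference, the pipeline config looks roughly like this (hostname redacted as in the error log below; other options are left at their defaults, and exact values may differ):

```
input {
  cloudwatch_logs {
    # Pull all App*-Log-Stream streams under this log group
    log_group => ["Test-Log-Group"]
    region    => "ap-southeast-1"
  }
}

output {
  amazon_es {
    # AWS ES domain endpoint (redacted)
    hosts    => ["search-bbbbbb-.es.amazonaws.com"]
    region   => "ap-southeast-1"
    port     => 443
    protocol => "https"
  }
}
```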
The problem is that some logs are missing in the ES domain when I search for a text value in the Kibana UI.
When I check the Logstash logs, I see:
[2018-01-24T08:43:04,967][ERROR][logstash.outputs.amazones] Attempted to send a bulk request to Elasticsearch configured at '["https://search-bbbbbb-.es.amazonaws.com:443"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["https://search-bbbbbb-.es.amazonaws.com:443"], :region=>"ap-southeast-1", :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil, :headers=>{"Content-Type"=>"application/json"}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS}, :error_message=>"search-bbbbbb-.es.amazonaws.com:443 failed to respond", :error_class=>"Faraday::ClientError", :backtrace=>nil}
[2018-01-24T08:43:05,217][WARN ][logstash.outputs.amazones] Failed to flush outgoing items {:outgoing_count=>1, :exception=>"Faraday::ClientError", :backtrace=>nil}
But logs are still missing even during periods when the above error does not appear.
Has anybody experienced this kind of issue?
Thank you.