Logstash not able to reach Elasticsearch

I am trying to run the ELK stack in three separate Docker containers, but every time I get errors saying that Logstash is not able to reach Elasticsearch. The configuration is very simple; this is my logstash.conf file:

input {
  file {
    path => "/opt/elk/logstash/sample.log" # Sample log file on local machine
    type => "apachelogs"
    start_position => "beginning"
  }
}

filter {
  if [type] == "apachelogs" { # Must match the type set in the file input above
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}

And this is my docker-compose.yml file:

version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:5.3.0
    ports:
      - 5601:5601
    networks:
      - docker_elk

  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.0
    ports:
      - 5044:5044
    volumes:
      - /opt/elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    networks:
      - docker_elk

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    cap_add:
      - IPC_LOCK
    ports:
      - 9200:9200
    networks:
      - docker_elk

networks:
  docker_elk:
    driver: bridge

Everything looks correct to me, but Logstash still cannot connect to Elasticsearch and prints the following logs:

[2017-04-12T02:48:31,436][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"9a6c69b2-4d0a-457b-b0cc-f0da3ab25f01", :path=>"/usr/share/logstash/data/uuid"}
[2017-04-12T02:49:04,923][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s]}}
[2017-04-12T02:49:04,933][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2017-04-12T02:49:06,775][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x46abdaa0 URL:http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2017-04-12T02:49:06,781][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x3fe7f61c URL:http://elasticsearch:9200>]}
[2017-04-12T02:49:06,789][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>2}
[2017-04-12T02:49:06,809][INFO ][logstash.pipeline        ] Pipeline .monitoring-logstash started
[2017-04-12T02:49:06,888][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2017-04-12T02:49:06,893][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elasticsearch:9200/, :path=>"/"}
[2017-04-12T02:49:06,912][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x2a8adde URL:http://elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2017-04-12T02:49:06,938][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-04-12T02:49:06,955][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2017-04-12T02:49:06,959][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elas

[2017-04-12T03:02:04,611][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x78b0071e URL:http://logstash_system:xxxxxx@elasticsearch:9200/_xpack/monitoring/?system_id=logstash&system_api_version=2&interval=1s>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
[2017-04-12T03:02:09,613][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2017-04-12T03:02:10,087][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x5bce79ab URL:http://logstash_system:xxxxxx@elasticsearch:9200/>}
[2017-04-12T03:02:13,454][ERROR][logstash.inputs.metrics  ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}
[2017-04-12T03:02:23,458][ERROR][logstash.inputs.metrics  ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

Could anybody please explain what is wrong in my configuration and why Logstash is not able to reach Elasticsearch, even though I can see both Logstash and Elasticsearch on the Kibana dashboard? I can also attach the Elasticsearch logs if needed.

Thanks!

Elasticsearch log:

[2017-04-12T05:15:14,434][INFO ][o.e.p.PluginsService     ] [uxSyVcL] loaded module [transport-netty4]
[2017-04-12T05:15:14,439][INFO ][o.e.p.PluginsService     ] [uxSyVcL] loaded plugin [x-pack]
[2017-04-12T05:15:39,971][INFO ][o.e.n.Node               ] initialized
[2017-04-12T05:15:39,972][INFO ][o.e.n.Node               ] [uxSyVcL] starting ...
[2017-04-12T05:15:40,939][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 8d:f6:69:ea:4e:f2:bb:fb
[2017-04-12T05:15:41,404][INFO ][o.e.t.TransportService   ] [uxSyVcL] publish_address {172.20.0.2:9300}, bound_addresses {[::]:9300}
[2017-04-12T05:15:41,421][INFO ][o.e.b.BootstrapChecks    ] [uxSyVcL] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-04-12T05:15:48,782][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][young][3][5] duration [6.1s], collections [1]/[6.3s], total [6.1s]/[16.6s], memory [164.8mb]->[112mb]/[1.9gb], all_pools {[young] [121.2mb]->[35.3mb]/[133.1mb]}{[survivor] [16.6mb]->[11.7mb]/[16.6mb]}{[old] [26.9mb]->[64.9mb]/[1.8gb]}
[2017-04-12T05:15:48,857][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][3] overhead, spent [6.1s] collecting in the last [6.3s]
[2017-04-12T05:15:48,900][INFO ][o.e.c.s.ClusterService   ] [uxSyVcL] new_master {uxSyVcL}{uxSyVcLQQQik-8Auid_xBA}{74cgoakYQhS9_X2OWjMjKA}{172.20.0.2}{172.20.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-04-12T05:15:49,089][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [uxSyVcL] publish_address {172.20.0.2:9200}, bound_addresses {[::]:9200}
[2017-04-12T05:15:49,100][INFO ][o.e.n.Node               ] [uxSyVcL] started
[2017-04-12T05:15:50,260][ERROR][o.e.x.m.c.i.IndicesStatsCollector] [uxSyVcL] collector [indices-stats-collector] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
	at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:70) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:47) ~[elasticsearch-5.3.0.jar:5.3.0]
...

[2017-04-12T05:15:50,334][ERROR][o.e.x.m.c.i.IndexStatsCollector] [uxSyVcL] collector [index-stats-collector] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
	at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:70) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:47) ~[elasticsearch-5.3.0.jar:5.3.0]
...

[2017-04-12T05:15:50,367][ERROR][o.e.x.m.c.i.IndexRecoveryCollector] [uxSyVcL] collector [index-recovery-collector] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
	at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:114) ~[elasticsearch-5.3.0.jar:5.3.0]
	at org.elasticsearch.action.admin.indices.recovery.TransportRecoveryAction.checkGlobalBlock(TransportRecoveryAction.java:52) ~[elasticsearch-5.3.0.jar:5.3.0]
...

[2017-04-12T05:15:51,787][INFO ][o.e.g.GatewayService     ] [uxSyVcL] recovered [5] indices into cluster_state
[2017-04-12T05:16:06,154][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][young][14][6] duration [6.3s], collections [1]/[7.2s], total [6.3s]/[23s], memory [202.7mb]->[115.6mb]/[1.9gb], all_pools {[young] [126mb]->[1.1mb]/[133.1mb]}{[survivor] [11.7mb]->[13.3mb]/[16.6mb]}{[old] [64.9mb]->[101.2mb]/[1.8gb]}
[2017-04-12T05:16:06,160][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][14] overhead, spent [6.3s] collecting in the last [7.2s]
[2017-04-12T05:16:06,678][INFO ][o.e.c.r.a.AllocationService] [uxSyVcL] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.monitoring-es-2-2017.04.12][0]] ...]).
[2017-04-12T05:16:19,700][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:16:29,645][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:16:39,648][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:16:49,655][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:16:59,653][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:17:09,654][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:18:13,106][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][young][136][13] duration [5.5s], collections [1]/[5.7s], total [5.5s]/[28.9s], memory [246.4mb]->[116.7mb]/[1.9gb], all_pools {[young] [131.2mb]->[1.3mb]/[133.1mb]}{[survivor] [8.9mb]->[6.4mb]/[16.6mb]}{[old] [106.2mb]->[108.8mb]/[1.8gb]}
[2017-04-12T05:18:13,122][WARN ][o.e.m.j.JvmGcMonitorService] [uxSyVcL] [gc][136] overhead, spent [5.5s] collecting in the last [5.7s]
[2017-04-12T05:18:23,219][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.
[2017-04-12T05:18:33,219][WARN ][o.e.d.r.RestController   ] The Content-Type [application/x-ldjson] has been superseded by [application/x-ndjson] in the specification and should be used instead.

...

It looks like you have X-Pack installed, which enables security by default, but you haven't configured Logstash with credentials to connect to Elasticsearch. The initial "Connection refused" messages are only a startup race (Logstash came up before Elasticsearch finished booting), but the later `401` response shows that the connection itself works and Elasticsearch is rejecting the request because it is unauthenticated.
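
As a minimal sketch, assuming your cluster still uses the 5.x X-Pack default built-in user (`elastic` / `changeme` — replace these with whatever credentials your cluster actually has), you would add `user` and `password` to the `elasticsearch` output in logstash.conf:

```
output {
  elasticsearch {
    hosts    => "elasticsearch:9200"
    user     => "elastic"      # default X-Pack superuser in 5.x; change in production
    password => "changeme"     # default password; change in production
  }
}
```

Alternatively, if you don't need security at all in this setup, you can disable it by adding `xpack.security.enabled=false` to the `environment` section of the `elasticsearch` service in docker-compose.yml, after which no credentials are required.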