Custom Docker image doesn't expose ports

I'm working with a custom image and I want to expose some of the container's ports. But when I run the image in Rancher and execute "nmap" from another service on the same host, it reaches the service but no ports are available. Other services are normal, with their ports exposed just as defined in their Dockerfiles. Logstash itself also runs normally.

I've added the lines below to the service in my docker-compose file, but to no effect:

ports:
    - 5000:5000/tcp
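
For context, here is a minimal sketch of how that mapping sits in a full compose service entry (the service and image names here are placeholders, not from the thread):

```yaml
services:
  logstash:
    image: my-registry/custom-logstash:latest  # placeholder image name
    ports:
      - "5000:5000/tcp"   # host port 5000 -> container port 5000
```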

Here are my Dockerfile and my logstash.yml. What am I doing wrong?

# Dockerfile
FROM docker.elastic.co/logstash/logstash:6.4.2

RUN rm -f /usr/share/logstash/pipeline/logstash.conf
COPY logstash.conf /usr/share/logstash/pipeline/logstash.conf

RUN rm -f /usr/share/logstash/config/logstash.yml
COPY logstash.yml /usr/share/logstash/config/logstash.yml

EXPOSE 5000

# logstash.yml
http.host: "127.0.0.1"
http.port: 9600

Are you sure Logstash is even running on port 5000? Please add your logstash.conf and the relevant section from your logs (when Logstash is starting up).

Hi Rafael,

Please share your logstash.conf so we can see what you are trying to do.

Here are the requested files (with some minor changes after some tests). Thanks in advance!

# LOGS
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-11-14T12:39:31,555][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2018-11-14T12:39:31,569][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2018-11-14T12:39:32,331][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"923f9031-1cb4-4a69-b3a6-3c3b67b05459", :path=>"/usr/share/logstash/data/uuid"}
[2018-11-14T12:39:33,414][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-11-14T12:39:40,750][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-14T12:39:41,716][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[<MY_ELASTIC_SEARCH_HOST_WHICH_IS_CORRECT_IN_THE_LOG>]}}
[2018-11-14T12:39:41,747][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=><MY_ELASTIC_SEARCH_HOST_WHICH_IS_CORRECT_IN_THE_LOG>, :path=>"/"}
[2018-11-14T12:39:42,929][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"<MY_ELASTIC_SEARCH_HOST_WHICH_IS_CORRECT_IN_THE_LOG>"}
[2018-11-14T12:39:43,600][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-11-14T12:39:43,605][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-11-14T12:39:43,648][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["<MY_ELASTIC_SEARCH_HOST_WHICH_IS_CORRECT_IN_THE_LOG>"]}
[2018-11-14T12:39:43,711][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-11-14T12:39:43,790][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-11-14T12:39:44,320][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x174dd0bd run>"}
[2018-11-14T12:39:44,377][INFO ][logstash.inputs.http     ] Starting http input listener {:address=>"127.0.0.1:5000", :ssl=>"false"}
[2018-11-14T12:39:44,412][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-11-14T12:39:44,958][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

# logstash.yml
http.host: "127.0.0.1"
http.port: 9600

# Dockerfile
FROM docker.elastic.co/logstash/logstash:6.4.2
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
COPY logstash.conf /usr/share/logstash/pipeline/logstash.conf
RUN rm -f /usr/share/logstash/config/logstash.yml
COPY logstash.yml /usr/share/logstash/config/logstash.yml
EXPOSE 5000 9600

# logstash.conf (simplified without the "filter" section)
input { 
	http {
		host => "127.0.0.1"
		port => 5000
	}
}
output {
	elasticsearch {
		hosts => "${ELASTIC_SEARCH_HOST}"
		user => "${ELASTIC_SEARCH_USER}"
		password => "${ELASTIC_SEARCH_PASSWORD}"
		index => "${ELASTIC_SEARCH_INDEX}"
	}
}

Cool. Thanks for the extra info.

Your Logstash configuration is explicitly binding the HTTP input to a loopback address. A lot of people assume this means it's bound to the loopback interface of the host system running Docker. It isn't. The container, unless told otherwise, runs in its own network namespace: it has its own loopback interface, and any 127.* address inside the container belongs to it alone. Every other container, and the host system, each has its own completely separate copy of the 127.* address space.

If you remove the explicit bind to 127.0.0.1, then Logstash will listen on all the container's interfaces (0.0.0.0). Then, other containers and the host system will be able to talk to it.
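Concretely, a minimal sketch of the corrected input block (same port 5000 as your config; the `http` input's `host` setting defaults to 0.0.0.0 when omitted):

```
input {
	http {
		# omit "host" entirely, or bind to all interfaces explicitly:
		host => "0.0.0.0"
		port => 5000
	}
}
```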

For more context, consider the interfaces on a very basic container:

» docker run --rm -it alpine ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
879: eth0@if880: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

Just like an actual host, it has a loopback interface and a "real" interface, eth0. Any traffic coming from somewhere else arrives on eth0, not lo, so a service bound only to 127.0.0.1 never sees it.
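The same bind-address distinction can be demonstrated outside Docker. A small Python sketch (not from the thread, just an illustration) showing what each bind address means to the kernel:

```python
import socket

# A listener bound to 127.0.0.1 only accepts connections that arrive on
# the loopback interface of *its own* network namespace.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
loopback_only.listen()

# A listener bound to 0.0.0.0 accepts connections on every interface --
# inside a container that includes eth0, the one other containers can reach.
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))
all_interfaces.listen()

print(loopback_only.getsockname()[0])   # 127.0.0.1
print(all_interfaces.getsockname()[0])  # 0.0.0.0
```

This is exactly why publishing the port in compose isn't enough: the port mapping forwards traffic to the container's eth0, but a 127.0.0.1-bound Logstash is only listening on the container's lo.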

Great! That worked! Thanks for the help!
