Which port is my logstash using?

Hello all

I have a running server where Logstash is working. In my pipelines.yml there is the line path.config: "/etc/logstash/conf.d/*.conf" - this points to the directory where my configuration is.
The directory contains the following files:
filter-nb.conf
grok_patterns - a directory holding a file with grok patterns
input-kafka.conf
output-elasticsearch.conf

The content of output-elasticsearch.conf is as follows:

output {
  elasticsearch {
    user => "elastic"
    password => "*"
    hosts => "*:9200"
    manage_template => false
    index => "benefia-%{[fields][app]}-logs-%{+YYYYMM}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/etc/logstash/klucze/SUBCA1.crt"
    ilm_enabled => "false"
  }
}

As you can see, there is no input. On which port is my Logstash listening?
If there is a need, I can send the content of the other files.

Check input-kafka.conf; the connection details should be there.
If it is a Kafka input, then Logstash connects to Kafka and subscribes as a consumer. So there is no listening port on the Logstash side, except TCP 9600 for the internal Logstash monitoring API, e.g. event statistics: curl -XGET 'localhost:9600/_node/stats/events?pretty'

In the case of Beats, the default port is 5044:

input {
  beats {
    port => 5044
  }
}
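For completeness, a Beats shipper such as Filebeat would then point at that port. A minimal sketch (the host name logstash-host is a placeholder, not from your setup):

```
# filebeat.yml (sketch): send events to the Logstash beats input on 5044
output.logstash:
  hosts: ["logstash-host:5044"]
```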

Hello Pawel,

Logstash allows structuring the pipeline over multiple files: all *.conf files are read and loaded as a single pipeline. Therefore, you can find the input in the file input-kafka.conf. In this case, Logstash will not open a port for listening but will instead connect to a Kafka server and subscribe to a Kafka topic.
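To illustrate (a sketch, not your exact files): Logstash concatenates all files matched by path.config, so your three .conf files behave as if they were one file:

```
# Effectively loaded as one pipeline (sketch):
input  { kafka { ... } }          # from input-kafka.conf
filter { ... }                    # from filter-nb.conf
output { elasticsearch { ... } }  # from output-elasticsearch.conf
```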

Best regards
Wolfram

Thank you for your posts

input-kafka.conf is as follows:

input {
  kafka {
    bootstrap_servers => "*:9092"
    topics => ["csou-logs"]
    codec => json
    group_id => "cluser_logstash"
    security_protocol => "SSL"
    ssl_truststore_location => "/etc/logstash/klucze/kafka.logstash.truststore.jks"
    ssl_truststore_password => "*"
    ssl_keystore_location => "/etc/logstash/klucze/kafka.logstash.keystore.jks"
    ssl_keystore_password => "*"
  }
}

If I log in to the 'bootstrap_servers' host, there is a docker-compose.yml file at '/docker/kafka/docker-compose.yml':

version: '3.2'

services:
    zookeeper:
        image: wurstmeister/zookeeper
        restart: always
        volumes:
            - type: bind
              source: /vol/data/zookeeper
              target: /opt/zookeeper-3.4.13/data
    kafka:
        image: wurstmeister/kafka:2.11-2.0.0
        depends_on:
            - zookeeper
        restart: always
        ports:
            - target: 9092
              published: 9092
              protocol: tcp
              mode: host
        volumes:
            - type: bind
              source: /vol/data/kafka
              target: /kafka
            - type: bind
              source: ./ssl/broker
              target: /opt/kafka/ssl-keys
              read_only: true
            - type: bind
              source: /vol/logs/kafka
              target: /opt/kafka/logs
        environment:
            KAFKA_BROKER_ID: 1
            KAFKA_LOG_DIRS: /kafka/kafka-logs-09d0ef1b7c74
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
            KAFKA_ZOOKEEPER_SESSION_TIMEOUT_MS: 30000
            KAFKA_ZOOKEEPER_CONNECTION_TIMEOUT_MS: 30000
            KAFKA_ADVERTISED_HOST_NAME: *
            KAFKA_ADVERTISED_LISTENERS: INSIDE://:9094,OUTSIDE://*:9092
            KAFKA_LISTENERS: INSIDE://:9094,OUTSIDE://:9092
            KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:SSL
            KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
            KAFKA_SSL_TRUSTSTORE_LOCATION: /opt/kafka/ssl-keys/kafka.server.truststore.jks
            KAFKA_SSL_TRUSTSTORE_PASSWORD: *
            KAFKA_SSL_KEYSTORE_LOCATION: /opt/kafka/ssl-keys/kafka.server.keystore.jks
            KAFKA_SSL_KEYSTORE_PASSWORD: *
            KAFKA_SSL_CLIENT_AUTH: required
            KAFKA_SSL_ENABLED_PROTOCOLS: TLSv1.2,TLSv1.1,TLSv1
            KAFKA_SOCKET_REQUEST_MAX_BYTES: 469296128

Are the messages logged to 'bootstrap_servers':9094?
If I change 'output-elasticsearch.conf' to:

input {
  tcp {
    host => "0.0.0.0"
    port => 5044
    codec => "json"
  }
}
output {
  if [application] =~ "fointe-test" {
    elasticsearch {
      hosts => "https://*:9200"
      index => "fointe-test-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "*"
      cacert => "/etc/logstash/certs/newfile.crt.pem"
      ssl_certificate_verification => false
    }
  }
  else {
    elasticsearch {
      user => "elastic"
      password => "*"
      hosts => "*:9200"
      manage_template => false
      index => "benefia-%{[fields][app]}-logs-%{+YYYYMM}"
      ssl => true
      ssl_certificate_verification => false
      cacert => "/etc/logstash/klucze/SUBCA1.crt"
      ilm_enabled => "false"
    }
  }
}

Will it work?
The point is to leave the current process (production environment) unchanged, except that when an event contains a field named 'application' with the value 'fointe-test', the log should be saved on the Elasticsearch host in the index 'fointe-test-%{+YYYY.MM.dd}'.
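One detail worth double-checking before deploying: in a Logstash conditional, [application] =~ "fointe-test" is a regex match, so any 'application' value merely containing that text would also match. If you want the branch to fire only on the exact value, a sketch of the stricter comparison:

```
output {
  if [application] == "fointe-test" {   # exact string match instead of regex
    # ... fointe-test elasticsearch output here ...
  }
  else {
    # ... existing production elasticsearch output here ...
  }
}
```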