Unable to access secured Elasticsearch using logstash

Hello Team,

I am using an OpenShift environment, in which I deployed the EFK (Elasticsearch, Fluentd, and Kibana) stack using the logging operators. Due to a requirement, I then deployed Logstash as a Docker container in the same OpenShift environment, to collect logs from outside the cluster and send them to the Elasticsearch deployed in OpenShift.

My configuration is below. Please help me understand what configuration is required on the Logstash side to connect to Elasticsearch properly.

Please format your code, logs, or configuration files using the </> icon as explained in this guide. It will make your post more readable.

Or use markdown style like:

```
CODE
```

This is the icon to use if you are not using markdown format:

There's a live preview panel for exactly this reason.

Lots of people read these forums, and many of them will simply skip over a post that is difficult to read, because it's just too large an investment of their time to try and follow a wall of badly formatted text.
If your goal is to get an answer to your questions, it's in your interest to make it as easy to read and understand as possible.
Please update your post.

Thanks, I will share the config and output again with proper formatting. Thanks for the useful information; I am a new user, so I didn't know this previously.

Please find the ConfigMap and Deployment files for Logstash below.

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: openshift-logging
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      date {
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
      geoip {
        source => "clientip"
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch.openshift-logging.svc.cluster.local:9200"]
        ssl => true
        cacert => "/etc/ssl/certs/ca"  # a single path, not an array
        sniffing => false
        user => "fluentd"
        password => "changeme"
      }
    }
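As an aside: since the working curl shown further down authenticates with a client certificate and key rather than a username and password, the elasticsearch output may need client-certificate settings instead of (or alongside) user/password. A hedged sketch of what that output block could look like for this plugin version (logstash-output-elasticsearch 9.1.1, which takes client certs as a Java keystore) — the keystore path and password are illustrative placeholders, not values from the original post:

```
output {
  elasticsearch {
    hosts => ["elasticsearch.openshift-logging.svc.cluster.local:9200"]
    ssl => true
    cacert => "/etc/ssl/certs/ca"
    # Placeholder paths: this plugin version expects client certs bundled
    # into a PKCS12/JKS keystore, not loose PEM files.
    keystore => "/etc/ssl/certs/keystore.p12"
    keystore_password => "changeme"
    sniffing => false
  }
}
```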
[root@localhost logstash]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: openshift-logging
spec:
  replicas: 1
  selector:
    matchLabels:
       app: logstash
  template:
    metadata:
      labels:
        app: logstash
      annotations:
          k8s.v1.cni.cncf.io/networks: test-network-2
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:6.3.0
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
          - name: certs
            mountPath: /etc/ssl/certs
          - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            name: logcollector-token-8bjvh
            readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf
      - name: certs
        secret:
          secretName: curator
          optional: false
      - name: logcollector-token-8bjvh
        secret:
          defaultMode: 420
          secretName: logcollector-token-l4nqh

Please see the successful output:

[root@localhost logstash]# oc exec logstash-deployment-586b96bc85-gc7mq -- curl -s --cacert /etc/ssl/certs/ca --cert /etc/ssl/certs/cert --key /etc/ssl/certs/key https://elasticsearch.openshift-logging.svc.cluster.local:9200/_cat/indices
green open infra-000928                   X38a_VscRIqwdc6eIyLxdQ 3 1 702080 0 901.3mb 449.6mb
green open audit-000919                   oLudAXbXQ36sWto1FFqPPw 3 1      0 0   1.5kb    783b
green open infra-000929                   4Xucr9s4RxCoq_SSwZHAcQ 3 1 277126 0 363.3mb 181.2mb
green open audit-000920                   jJS0RRo7TZKUookexyX3rQ 3 1      0 0   1.5kb    783b
green open audit-000917                   LsqVs-JFR4aeZt1mMSDSOw 3 1      0 0   1.5kb    783b
green open infra-000927                   UeG7h28MQFqvhvRQdzv5dw 3 1 709341 0 908.1mb 453.7mb
green open infra-000924                   SGxFE0aoQvCeMCcZ7SzUqA 3 1 708035 0 906.6mb   453mb
green open app-000936                     dxRDyoWcTCGM4WDPKa76Sg 3 1     14 0 396.1kb   198kb
green open app-000937                     oOm3LaDxSrGVA13UC1PKTg 3 1     22 0 375.3kb 187.6kb
green open infra-000925                   cLFDDGgQRBqSQTkh9GTVmQ 3 1 717281 0 920.5mb 459.8mb
green open audit-000918                   -uAMZIYzTv-m2RTbygZr8A 3 1      0 0   1.5kb    783b
green open infra-000926                   jeT3ZKFnS0eBTRctEqe9Ag 3 1 706895 0 906.1mb 452.5mb
green open .security                      RqxeVXGbQBCxVrgJf7lNKA 1 1      5 1  46.3kb    27kb
green open .kibana_1                      S7eVFWTKTm65mMIxAEgDnQ 1 1      0 0    522b    261b
green open .kibana_-377444158_kubeadmin_1 1Osr_99JQMaXS5cws6sqgQ 1 1      3 0  38.2kb  19.1kb
green open app-000935                     jKHwSGHaS5WKEMWWU6xbUA 3 1     16 0 452.7kb 226.3kb
green open .tasks                         hdLtNI_sT72LbWCZi5jaCQ 1 1      1 0  12.6kb   6.3kb
green open .kibana_-377444158_kubeadmin_2 EkcGFlgrQX2LvLKyz8Tosg 1 1      3 0  38.2kb  19.1kb
And then see the problematic output:
[root@localhost logstash]# oc logs logstash-deployment-586b96bc85-gc7mq
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2020-10-23 14:33:38.509 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2020-10-23 14:33:38.521 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2020-10-23 14:33:38.795 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-10-23 14:33:38.806 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"e1111e93-8c93-4874-bfd6-9c74cce7cdbb", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2020-10-23 14:33:38.917 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.3.0"}
[INFO ] 2020-10-23 14:33:39.936 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2020-10-23 14:33:40.174 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://fluentd:xxxxxx@elasticsearch.openshift-logging.svc.cluster.local:9200/]}}
[INFO ] 2020-10-23 14:33:40.176 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://fluentd:xxxxxx@elasticsearch.openshift-logging.svc.cluster.local:9200/, :path=>"/"}
[WARN ] 2020-10-23 14:33:40.417 [[main]-pipeline-manager] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://fluentd:xxxxxx@elasticsearch.openshift-logging.svc.cluster.local:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '500' contacting Elasticsearch at URL 'https://elasticsearch.openshift-logging.svc.cluster.local:9200/'"}
[INFO ] 2020-10-23 14:33:40.419 [[main]-pipeline-manager] elasticsearch - Using mapping template from {:path=>nil}
[ERROR] 2020-10-23 14:33:40.421 [[main]-pipeline-manager] elasticsearch - Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:31:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:17:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:96:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:26:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:97:in `register'", "org/logstash/config/ir/compiler/OutputDelegatorExt.java:93:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:340:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:351:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:728:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:361:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:248:in `block in start'"]}
[INFO ] 2020-10-23 14:33:40.422 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch.openshift-logging.svc.cluster.local:9200"]}
[INFO ] 2020-10-23 14:33:40.479 [[main]-pipeline-manager] geoip - Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-geoip-5.0.3-java/vendor/GeoLite2-City.mmdb"}
[INFO ] 2020-10-23 14:33:40.929 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2020-10-23 14:33:40.997 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2020-10-23 14:33:40.997 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x354b0eb5@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 sleep>"}
[INFO ] 2020-10-23 14:33:41.024 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-10-23 14:33:41.095 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2020-10-23 14:33:45.428 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://fluentd:xxxxxx@elasticsearch.openshift-logging.svc.cluster.local:9200/, :path=>"/"}
[WARN ] 2020-10-23 14:33:45.467 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://fluentd:xxxxxx@elasticsearch.openshift-logging.svc.cluster.local:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '500' contacting Elasticsearch at URL 'https://elasticsearch.openshift-logging.svc.cluster.local:9200/'"}
[INFO ] 2020-10-23 14:33:50.469 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://fluentd:xxxxxx@elasticsearch.openshift-logging.svc.cluster.local:9200/, :path=>"/"}
[WARN ] 2020-10-23 14:33:50.479 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://fluentd:xxxxxx@elasticsearch.openshift-logging.svc.cluster.local:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '500' contacting Elasticsearch at URL 'https://elasticsearch.openshift-logging.svc.cluster.local:9200/'"}
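One observation from the two outputs above: the curl that succeeded presented a client certificate and key, while the Logstash output only sends basic-auth credentials, which the cluster answers with a 500. If client-certificate auth turns out to be required, the mounted PEM pair could be bundled into a PKCS12 keystore for the elasticsearch output's keystore option. A sketch of the command shape, using throwaway self-signed material in place of the real /etc/ssl/certs/cert and /etc/ssl/certs/key:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in certificate and key; in the pod these would be the mounted
# /etc/ssl/certs/cert and /etc/ssl/certs/key files.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=logstash-demo" -keyout key.pem -out cert.pem 2>/dev/null

# Bundle the PEM pair into a PKCS12 keystore that the elasticsearch
# output's keystore / keystore_password options can consume.
openssl pkcs12 -export -in cert.pem -inkey key.pem \
  -out keystore.p12 -passout pass:changeme

# Sanity-check that the keystore opens with the chosen password.
openssl pkcs12 -info -in keystore.p12 -passin pass:changeme -noout \
  && echo "keystore OK"
```

The resulting keystore.p12 would then need to be added to the mounted secret so Logstash can reach it at the configured path.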

Can anybody from the team please help me?

Do you need additional info, config, or output? Please let me know and I will share it.

Thanks in advance.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.