Pipeline-to-Pipeline Configuration Does Not Work with TLS Cluster

Hi guys,

I have an ECK cluster that receives logs from Logstash. Logstash runs as a Kubernetes Deployment pod next to that cluster. Filebeat (deployed elsewhere) reads logs and ships them to Logstash. Pretty straightforward.

All communication is secured. I use the elasticsearch-certutil tool to issue and manage the Filebeat and Logstash certificates. I'm using the multiple pipelines feature in Logstash to handle the different Filebeat cases. All good.

This was all working fine until I decided to enable pipeline-to-pipeline configuration.
Reasoning: multiple Filebeat sources need to send data to the same beats input plugin and then be routed to their individual conditional cases, so the distributor pattern is needed.
I have the distributor pattern working too, and Logstash starts just fine. I can trace that data is received successfully by Logstash, but Logstash then fails to send it on to the ECK cluster.
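
For reference, this follows the distributor pattern from the Logstash pipeline-to-pipeline documentation. A minimal sketch of the idea (the field, file names, and virtual addresses below are placeholders, not my actual configuration):

    # beats-server.conf -- owns the single beats input and fans events out
    output {
      if [some_field] == "case-a" {
        pipeline { send_to => ["caseA"] }   # internal virtual address
      } else {
        pipeline { send_to => ["caseB"] }
      }
    }

    # case-a.conf -- one of the per-case pipelines
    input {
      pipeline { address => "caseA" }
    }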

Logstash shows this:

    [INFO ] 2020-10-29 11:30:05.663 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@XXX-es-http:9200/]}}
    [DEBUG] 2020-10-29 11:30:05.664 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://elastic:xxxxxx@XXX-es-http:9200/, :path=>"/"}
    [WARN ] 2020-10-29 11:30:06.142 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@XXX-es-http:9200/"}
    [INFO ] 2020-10-29 11:30:06.311 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://XXX-es-http:9200"]}
    [INFO ] 2020-10-29 11:30:06.315 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@XXX-es-http:9200/]}}
    [DEBUG] 2020-10-29 11:30:06.316 [[main]-pipeline-manager] elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://elastic:xxxxxx@XXX-es-http:9200/, :path=>"/"}
    [WARN ] 2020-10-29 11:30:06.374 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@XXX-es-http:9200/"}
    [INFO ] 2020-10-29 11:30:06.410 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://XXX-es-http:9200"]}
     P[output-elasticsearch{"hosts"=>["https://XXX-es-http:9200"], "index"=>"XXX", "document_id"=>"%{ID}_%{Severity}", "ssl"=>"true", "cacert"=>"/etc/logstash/certificates/output/ca.crt", "user"=>"elastic", "password"=>"XXX"}|[file]/usr/share/logstash/pipeline/logstash-appd.conf:97:3:```
        hosts => ["https://XXX-es-http:9200"]
     P[output-elasticsearch{"hosts"=>["https://XXX-es-http:9200"], "index"=>"XXX", "document_id"=>"%{ID}_%{Severity}", "ssl"=>"true", "cacert"=>"/etc/logstash/certificates/output/ca.crt", "user"=>"elastic", "password"=>"XXX"}|[file]/usr/share/logstash/pipeline/logstash-obm.conf:100:3:```
        hosts => ["https://XXX-es-http:9200"]

but at the same time is successfully receiving and processing data from beats:

    [DEBUG] 2020-10-29 11:31:08.393 [nioEventLoopGroup-2-1] SslHandler - [id: 0xc8902d00, L:/10.244.4.69:5044 - R:/10.244.4.1:11325] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    [DEBUG] 2020-10-29 11:31:08.431 [defaultEventExecutorGroup-4-1] plain - config LogStash::Codecs::Plain/@id = "plain_516d2711-397a-4383-a8ae-c1b59f7af4ed"
    [DEBUG] 2020-10-29 11:31:08.431 [defaultEventExecutorGroup-4-1] plain - config LogStash::Codecs::Plain/@enable_metric = true
    [DEBUG] 2020-10-29 11:31:08.431 [defaultEventExecutorGroup-4-1] plain - config LogStash::Codecs::Plain/@charset = "UTF-8"
    [DEBUG] 2020-10-29 11:31:08.471 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.472 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.472 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.473 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.508 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.509 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.510 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.546 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.547 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.548 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.582 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.583 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.599 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Received a new payload
    [DEBUG] 2020-10-29 11:31:08.600 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 1
    [DEBUG] 2020-10-29 11:31:08.661 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 2
    [DEBUG] 2020-10-29 11:31:08.665 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 3
    [DEBUG] 2020-10-29 11:31:08.666 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.666 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.667 [nioEventLoopGroup-2-1] ConnectionHandler - c8902d00: batches pending: true
    [DEBUG] 2020-10-29 11:31:08.668 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 4
    [DEBUG] 2020-10-29 11:31:08.673 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 5
    [DEBUG] 2020-10-29 11:31:08.676 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 6
    [DEBUG] 2020-10-29 11:31:08.676 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 7
    [DEBUG] 2020-10-29 11:31:08.678 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 8
    [DEBUG] 2020-10-29 11:31:08.680 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 9
    [DEBUG] 2020-10-29 11:31:08.681 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 10
    [DEBUG] 2020-10-29 11:31:08.682 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 11
    [DEBUG] 2020-10-29 11:31:08.682 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 12
    [DEBUG] 2020-10-29 11:31:08.684 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 13
    [DEBUG] 2020-10-29 11:31:08.684 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 14
    [DEBUG] 2020-10-29 11:31:08.685 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 15
    [DEBUG] 2020-10-29 11:31:08.686 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 16
    [DEBUG] 2020-10-29 11:31:08.687 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 17
    [DEBUG] 2020-10-29 11:31:08.688 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 18
    [DEBUG] 2020-10-29 11:31:08.689 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 19
    [DEBUG] 2020-10-29 11:31:08.690 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 20
    [DEBUG] 2020-10-29 11:31:08.691 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 21
    [DEBUG] 2020-10-29 11:31:08.691 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 22
    [DEBUG] 2020-10-29 11:31:08.692 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 23
    [DEBUG] 2020-10-29 11:31:08.693 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 24
    [DEBUG] 2020-10-29 11:31:08.694 [defaultEventExecutorGroup-4-1] BeatsHandler - [local: 10.244.4.69:5044, remote: 10.244.4.1:11325] Sending a new message for the listener, sequence: 25

On the ECK side I was initially seeing SSL errors stating that plain text was being sent over TLS. That error is gone now, and sending new data through Logstash produces no useful information in the ECK logs.

My pipeline-to-pipeline configuration is below.

pipelines.yml:

    - pipeline.id: beats-server
      path.config: "/usr/share/logstash/pipeline/beats-server.conf"
    - pipeline.id: receiver-1
      path.config: "/usr/share/logstash/pipeline/receiver-1.conf"
    - pipeline.id: receiver-2
      path.config: "/usr/share/logstash/pipeline/receiver-2.conf"

beats-server.conf:

    input {
      beats {
        port => "5044"
        ssl => true
        ssl_key => '/etc/logstash/certificates/input/tls.key'
        ssl_certificate => '/etc/logstash/certificates/input/tls.crt'
        ssl_certificate_authorities => '/etc/logstash/certificates/input/ca.crt'
      }
    }
    output {
      if "str1" in [fields][doctype] {
        pipeline { send_to => [receiver1] }
      } else if "str2" in [fields][doctype] {
        pipeline { send_to => [receiver2] }
      }
    }

receiver-1.conf:

    input {
      pipeline { address => receiver1 }
    }
    filter {
      if [fields][doctype] == "str1" {
        # Do some processing here
      }
    }
    output {
      elasticsearch {
        hosts => ["https://XXX-es-http:9200"]
        index => "receiver1"
        document_id => "%{XXX}_%{XXX}"
        ssl => true
        cacert => "/etc/logstash/certificates/output/ca.crt"
        user => "elastic"
        password => "XXX"
      }
    }

receiver-2.conf:

    input {
      pipeline { address => beatsappd }
    }
    filter {
      if [fields][doctype] == "str2" {
        # Do some processing here
      }
    }
    output {
      elasticsearch {
        hosts => ["https://XXX-es-http:9200"]
        index => "receiver2"
        document_id => "%{XXX}_%{XXX}"
        ssl => true
        cacert => "/etc/logstash/certificates/output/ca.crt"
        user => "elastic"
        password => "XXX"
      }
    }

This configuration works: Logstash successfully receives data from Beats and processes it according to the conditions in the .conf files. However, the data never makes it to ECK. Once I switch back to using the pipelines individually, everything starts working again.

Is there a known issue with supporting an encrypted channel in a pipeline-to-pipeline setup? Or are there additional TLS/SSL requirements when using it instead of individual pipelines?

Thanks.

I've managed to get this working for a single case. So far I have found the following problems with the syntax of the pipeline-to-pipeline setup:

  • Avoid hyphens in the virtual address name
  • Use double quotes when the virtual address is alphanumeric
  • Only tags have worked so far in my conditional logic. If you have multiple tags coming from a single Filebeat agent, make sure to use `if "name" in [tags] {}` rather than `if [tags] == "name" {}` (see the sketch after this list)
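
To illustrate the address and tag points above, this is the shape the routing block in my beats pipeline takes with those fixes applied (a sketch; the tag and address names are placeholders):

    output {
      # membership check -- this is what worked (quoted virtual address, no hyphens):
      if "receiver1" in [tags] {
        pipeline { send_to => ["receiver1address"] }
      }
      # equality check -- stops matching once an event carries more than one tag:
      # if [tags] == "receiver1" { ... }
    }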

I have yet to figure out why my events are not being indexed, apart from the occasional one or two. The same configuration works just fine without the pipeline-to-pipeline setup.

It appears that both pipelines the beats pipeline forwards to are receiving all the documents. As a result, the Logstash instance throws all kinds of validation errors. Eventually some documents make it through the madness and end up with fields updated by both receiver pipelines.

Leaving only one pipeline running, with the beats pipeline redirecting to it, gives the false impression that everything works, when in reality the distributor pattern is not functional at all and behaves like the forked path pattern instead.

For this test I used Logstash on Kubernetes.
pipelines.yml:

    - pipeline.id: beats-server
      path.config: "/usr/share/logstash/pipeline/beats-server.conf"
    - pipeline.id: receiver1
      path.config: "/usr/share/logstash/pipeline/receiver1.conf"
    - pipeline.id: receiver2
      path.config: "/usr/share/logstash/pipeline/receiver2.conf"

beats-server.conf:

    input {
      beats {
        port => "5044"
        ssl => true
        ssl_key => '/etc/logstash/certificates/input/tls.key'
        ssl_certificate => '/etc/logstash/certificates/input/tls.crt'
        ssl_certificate_authorities => '/etc/logstash/certificates/input/ca.crt'
      }
    }
    output {
      if "receiver1" in [tags] {
        pipeline { send_to => receiver1address }
      } else if "receiver2" in [tags] {
        pipeline { send_to => receiver2address }
      }
    }

receiver1.conf:

    input {
      pipeline {
        address => receiver1address
      }
    }

    filter {
      # Do some processing here
    }

    output {
      if ("_jsonparsefailure" in [tags]) {
        elasticsearch {
          hosts => ["XXX"]
          index => "failure"
          document_id => "receiver1"
          ssl => true
          cacert => "/etc/logstash/certificates/output/ca.crt"
          user => "XXX"
          password => "XXX"
        }
      } else {
        elasticsearch {
          hosts => ["XXX"]
          index => "receiver1"
          document_id => "XXX"
          ssl => true
          cacert => "/etc/logstash/certificates/output/ca.crt"
          user => "elastic"
          password => "XXX"
        }
      }
    }

receiver2.conf:

    input {
      pipeline {
        address => receiver2address
      }
    }

    filter {
      # Do some processing here
    }

    output {
      if ("_xmlparsefailure" in [tags]) {
        elasticsearch {
          hosts => ["XXX"]
          index => "failure"
          document_id => "receiver2"
          ssl => true
          cacert => "/etc/logstash/certificates/output/ca.crt"
          user => "XXX"
          password => "XXX"
        }
      } else {
        elasticsearch {
          hosts => ["XXX"]
          index => "receiver2"
          document_id => "XXX"
          ssl => true
          cacert => "/etc/logstash/certificates/output/ca.crt"
          user => "elastic"
          password => "XXX"
        }
      }
    }

On the Filebeat side I have tried several different iterations (sketched after this list):

  • Using tags as part of processors (add_tags: tags: [value])
  • Using tags as part of input (tags: [value])
  • Using custom fields as part of input (fields: doctype: value)
  • I can't use type, because all my inputs are of type log
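
For completeness, this is roughly what those variations look like in filebeat.yml (a sketch; the path, tag, and field values are placeholders for my real ones):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/app/*.log
        tags: ["receiver1"]        # tags set directly on the input
        fields:
          doctype: "str1"          # arrives in Logstash as [fields][doctype]

    processors:
      - add_tags:                  # alternative: tag events via a processor
          tags: ["receiver1"]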

Logstash doesn't seem to like any of these. In most cases it just keeps logging lines like the ones below:

    [DEBUG] 2020-10-30 13:49:57.710 [nioEventLoopGroup-2-2] ConnectionHandler - 51a29330: batches pending: true
    [DEBUG] 2020-10-30 13:49:57.710 [nioEventLoopGroup-2-2] ConnectionHandler - 51a29330: batches pending: true
    [DEBUG] 2020-10-30 13:49:57.739 [nioEventLoopGroup-2-2] ConnectionHandler - 51a29330: batches pending: true

Meanwhile, no data is ever sent to the ES cluster.

I have followed the documentation, gone through similar issues on this forum, and implemented all the recommendations. Unfortunately, it still doesn't work.

I found that path.config had been left in the logstash.yml configuration; with that setting present, Logstash ignores pipelines.yml and loads all the .conf files it points at into the single main pipeline, which would explain why every document was hitting every output. In addition, I had a typo in my pipelines.yml file name (I had missed the s).

So it boils down to the following:

  • Avoid hyphens in the virtual address name
  • Use double quotes when the virtual address is alphanumeric
  • Use tags on the Filebeat side
  • Do not leave path.config in logstash.yml (see the sketch after this list)
  • Use the correct file name for pipelines.yml
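
For the path.config point, the fix was simply removing that line from logstash.yml. A sketch of what I mean (the commented-out path is just an example; any other settings stay as they are):

    # logstash.yml
    http.host: "0.0.0.0"                             # unrelated settings stay as they are
    # path.config: "/usr/share/logstash/pipeline"    # removed -- with this set, Logstash
                                                     # ignores pipelines.yml and runs all
                                                     # .conf files in one pipeline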

I will test some more scenarios before considering this as solved.
