Cannot send logs from Logstash to Elasticsearch

Hello,

I deployed the ELK Stack to k8s using Helm. In the cluster, Elasticsearch, Kibana and Filebeat are running. I also configured Logstash to send Filebeat logs and logs from an external resource.

My external resource is running on another server, so I created the Logstash service as a NodePort on port 30123. Here is my values.yaml for Logstash.

logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0  
  pipelines.yml: |
    # This file is where you define your pipelines. You can define multiple.
    # For more information on multiple pipelines, see the documentation:
    #   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
    - pipeline.id: logstash
      path.config: "/usr/share/logstash/pipeline/logstash.conf"
    - pipeline.id: devopsdashboard
      path.config: "/usr/share/logstash/pipeline/devopsdashboard.conf"
#  log4j2.properties: |
#    key = value

# Allows you to add any pipeline files in /usr/share/logstash/pipeline/
### ***warn*** there is a hardcoded logstash.conf in the image, override it first
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
    }
    output {
      elasticsearch {
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        hosts => [ "elasticsearch-master:9200" ]
      }
    }
  devopsdashboard.conf: |
    input {
       tcp {
         host => "0.0.0.0"
         port => 30123
         codec => "json_lines"
       }
    }
    filter {
       mutate {
          remove_field => ["host", "port"]
       }
    }
    output {
      elasticsearch {
        index => "logstash-%{+YYYY.MM.dd}"
        hosts => [ "elasticsearch-master:9200" ]
      }
    }
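One note on the devopsdashboard pipeline: as far as I know, the json_lines codec expects newline-delimited JSON, i.e. one JSON object per line, so whatever writes to that port has to terminate each event with a newline, e.g. (field names are just my test data):

{"@message": "python test message", "@tags": ["python", "test"]}
{"@message": "second test event"}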

When I want to create an index in Kibana, I can see the filebeat index but cannot see the logstash one. If I curl Elasticsearch from inside the Logstash pod (see the check below), my index is created, but if I try to send logs over TCP via Logstash, nothing happens. Do you have any idea how I can send logs from Logstash to Elasticsearch?
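The check from inside the pod is roughly the following; the pod name is a placeholder, and it assumes curl is available in the Logstash image:

kubectl exec -it <logstash-pod> -- curl "http://elasticsearch-master:9200/_cat/indices?v"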

I tried a Python script to send logs over TCP from my local machine and I received "400 Bad Request". I do not know what I am doing wrong. Here is my Python script.

import socket
import json
import sys

print("Starting to send data to Logstash")
# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Connect to the port where the Logstash tcp input is exposed
    server_address = ('Cluster_IP', 30123)
    sock.connect(server_address)
    data = {'@test': 'test1', '@message': 'python test message', '@tags': ['python', 'test']}
    # The json_lines codec is newline-delimited, so terminate the event with \n
    sock.sendall((json.dumps(data) + '\n').encode())
    print("Sent")
    print(sock.recv(1024))
except socket.error as e:
    sys.stderr.write(str(e))
finally:
    sock.close()

Thanks a lot!

Can I get an answer please?

If you are getting a "400 Bad Request" error it cannot be from the tcp input, since that does not return an HTTP status. It would have to come from something that speaks HTTP, such as Elasticsearch. In that case you should check the Elasticsearch log to see if there is a more informative message there.

Hi Badger,

Thank you for your response.

I installed a new Filebeat on my external server (Windows) and connected it to Logstash. I also updated the pipelines for the new Filebeat.

logstashConfig:
  logstash.yml: |
    http.host: 0.0.0.0  
  pipelines.yml: |
    # This file is where you define your pipelines. You can define multiple.
    # For more information on multiple pipelines, see the documentation:
    #   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
    - pipeline.id: logstash
      path.config: "/usr/share/logstash/pipeline/logstash.conf"
    - pipeline.id: devopsdashboard
      path.config: "/usr/share/logstash/pipeline/devopsdashboard.conf"
    - pipeline.id: filebeat
      path.config: "/usr/share/logstash/pipeline/filebeat.conf"
      pipeline.workers: 3
#  log4j2.properties: |
#    key = value

# Allows you to add any pipeline files in /usr/share/logstash/pipeline/
### ***warn*** there is a hardcoded logstash.conf in the image, override it first
logstashPipeline:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
    }
    output {
      elasticsearch {
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        hosts => [ "elasticsearch-master:9200" ]
      }
    }
  devopsdashboard.conf: |
    input {
       tcp {
          port => 30124
          codec => "json_lines"
       }
    }
    filter {
       mutate {
          remove_field => ["host", "port"]
       }
    }
    output {
      elasticsearch {
        index => "logstash-%{+YYYY.MM.dd}"
        hosts => [ "elasticsearch-master:9200" ]
      }
    }
  filebeat.conf: |
    input {
      beats {
        port => 30123
      }
    }
    filter {
    }
    output {
      elasticsearch {
        index => "dd-%{+YYYY.MM.dd}"
        hosts => [ "elasticsearch-master:9200" ]
      }
    }

When I check the connection between the two servers, telnet works and the network seems fine, but Filebeat gives an error.

2021-12-13T10:36:22.059+0300	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://target_server:30123)) established
2021-12-13T10:36:22.121+0300	ERROR	[logstash]	logstash/async.go:280	Failed to publish events caused by: read tcp source_server -> target_server:30123 wsarecv: An established connection was aborted by the software in your host machine.
2021-12-13T10:36:22.121+0300	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2021-12-13T10:36:22.121+0300	INFO	[publisher]	pipeline/retry.go:223	  done

2021-12-13T10:42:21.788+0300	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://target_server:30123)) established
2021-12-13T10:42:21.832+0300	ERROR	[logstash]	logstash/async.go:280	Failed to publish events caused by: lumberjack protocol error
2021-12-13T10:42:21.833+0300	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2021-12-13T10:42:21.833+0300	INFO	[publisher]	pipeline/retry.go:223	  done

Here is my filebeat.yml

output.logstash:
  # The Logstash hosts
  hosts: ["target_server:30123"]

When I change output.logstash to output.elasticsearch, I can see the index created in Kibana. What might be wrong with Logstash?

Try any other port, maybe the Filebeat default, 5044.

Logstash is running in the k8s cluster but Filebeat is on another server. That is why I configured Logstash as a NodePort and exposed port 30123.
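The port/targetPort/nodePort mapping of the service can be double-checked with something like the following; the service name is a placeholder for whatever the Helm release created:

kubectl describe svc <logstash-service>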

The Elastic team is probably busy today with the log4j and 7.16.1 stuff; they can better answer when they are available.
Regards.

Any ideas?

Can I please get an answer from the Elastic team?

The error you are getting is normally related to network issues. I do not know much about k8s, but do you have something in front of your Logstash server and port, like a load balancer or anything like that?

When you connect to the Logstash IP and port, do you connect directly to it? Also, can you check the Logstash logs to see if there is anything that gives a hint about the issue?

It is exposed as a NodePort and I can reach Logstash from a browser on the same server where Filebeat runs. When I try telnet, it also connects to k8s, but the Filebeat logs are still the same.

2021-12-31T14:52:47.719+0300	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://k8s_master_server:30123)) established
2021-12-31T14:52:47.774+0300	ERROR	[logstash]	logstash/async.go:280	Failed to publish events caused by: lumberjack protocol error
2021-12-31T14:52:47.774+0300	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2021-12-31T14:52:47.774+0300	INFO	[publisher]	pipeline/retry.go:223	  done

Also, the same server can send logs through Filebeat to Elasticsearch, which is in the same k8s cluster, if I change the output. Something is wrong with Logstash, but I could not figure out what.

Did you check the Logstash logs? You need to check them to see if there is any hint of the issue; the Filebeat logs alone are not enough for troubleshooting.
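In k8s that would be something like the following; the label selector is an assumption, adjust it to whatever your Helm release sets:

kubectl logs -l app=logstash --tail=100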

There was no error in the Logstash log, but I fixed the problem.
I had given the Logstash pipeline the NodePort number, but it should have been the internal container port.
So I changed the Logstash service to serve on port 9500, edited the pipeline to listen on 9500, and it worked.
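For anyone hitting the same problem, the working mapping looks roughly like this sketch; the service and label names are placeholders, not the exact manifest:

apiVersion: v1
kind: Service
metadata:
  name: logstash-tcp          # placeholder name
spec:
  type: NodePort
  selector:
    app: logstash             # assumed pod label
  ports:
    - name: tcp-input
      port: 9500              # service port inside the cluster
      targetPort: 9500        # must match the port the Logstash tcp input listens on
      nodePort: 30123         # external port that clients connect to

and the pipeline listens on the internal port, not the NodePort:

input {
  tcp {
    port => 9500
    codec => "json_lines"
  }
}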
