Logstash cannot send data to Elasticsearch

Hi Community,

I'm trying to push some data from a data directory through Filebeat to Logstash and then on to Elasticsearch. The server runs CentOS 7 with all four components installed. I've pasted below the config file I'm using.

input {
	beats {
		port => 5044
	}
}

filter {
	csv {
		columns => [ "Desc", "time", "util", "data" ]
	}

	mutate { convert => ["time", "float"] }
	mutate { convert => ["util", "float"] }

	if [data] == "data_one" {
		mutate {
			add_field => [ "Desc", "Snap" ]
		}
	}
	else if [data] == "data_two" {
		mutate {
			add_field => [ "Desc", "Snaptwo" ]
		}
	}
	else if [data] == "data_three" {
		mutate {
			add_field => [ "Description", "Snapthree" ]
		}
	}
}

output {
	elasticsearch {
		hosts => ["localhost:9200"]
		index => "test"
	}
}

Filebeat is correctly processing the data. However, when I run the Logstash config file I get the below error:

Attempted to resurrect connection to a dead ES instance, but got an error.

I've checked the Logstash logs and they say:

Couldn't find any input plugin named lumberjack, are you sure this is correct?

I've run this same config file on a Windows installation before and it worked seamlessly. Can anyone point out whether there's a problem with the config file, or what needs to be checked to troubleshoot this error?
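For reference, one check I can run on the server is to confirm the beats input plugin is actually installed (the paths below assume the default RPM layout on CentOS; adjust if yours differs):

# List the installed plugins and look for the beats input
/usr/share/logstash/bin/logstash-plugin list | grep beats

# If it is missing, it can be installed (needs internet access from the server)
/usr/share/logstash/bin/logstash-plugin install logstash-input-beats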

PS: I'm not able to copy text from the logs, hence I've uploaded the console images.

Please don't post pictures of text, they are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them :slight_smile:

@warkolm I'm aware of the difficulty, but the stack is running in a VM, due to which I cannot copy the logs. :frowning: But I've made sure to attach the config file and the host info, which should help with replication. Do let me know if there's any more info you need to replicate this.

Is Elasticsearch listening on localhost on the VM?

@warkolm No, it is on an AWS CentOS 7 server. All of the components are installed on the same server (awselk01). I've tried changing the output to the following, since that was mentioned as one of the potential solutions, but to no avail:

hosts => ["http://awselk01:9200"]

Can you share your elasticsearch.yml contents?

@warkolm Here it is.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of
# master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.audit.enabled: true
xpack.security.enabled: true

xpack:
  security:
    authc:
      realms:
        NativeRealm:
          type: native
          order: 0

Thanks. And Logstash is running on this same VM, right?

Given that Logstash complains about the lumberjack plugin, which is not in the config you shared, is it possible that you have other files in the config directory that are getting picked up? If you have multiple files, Logstash will read all of them and concatenate them into a single pipeline. If the other file(s) had errors, this could cause the pipeline to fail and no data to be indexed.

Are you starting Logstash with a single file as an argument, or using the config directory?
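For example, the two ways of starting it look roughly like this (paths are the RPM defaults; the file name is just illustrative):

# Single file passed as an argument: only this pipeline definition is read
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats-to-es.conf

# Service start (the default): every *.conf file under /etc/logstash/conf.d/
# is read and concatenated into one pipeline
sudo systemctl start logstash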

Yes, Filebeat, Logstash, Kibana and Elasticsearch are all running on this server (awselk01). I was able to run Filebeat, and it's able to communicate with Logstash on port 5044; I could see that in the logs. I've also run the syntax check on the conf file, and it passes with the message 'OK'. It's just that it is not able to send the indexed data to Elasticsearch on port 9200.
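For reference, this is the kind of syntax check I ran (the conf file path is illustrative; --config.test_and_exit validates the pipeline and exits without starting Logstash):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beats-to-es.conf --config.test_and_exit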

@Christian_Dahlqvist Yes, I think there's one more config file in that directory. I'll try removing it and running again, and I'll let you know how it goes.

Also make sure the Elasticsearch output uses localhost.

Yes, it is already set to localhost:9200, as you can see in the conf file I've attached at the top.

@Christian_Dahlqvist I was running with the config file passed as a parameter. Nevertheless, I removed the other conf file and ran it. Still getting the same error.

@warkolm

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ---------------------------------

Should http.port: 9200 be an active (uncommented) line in the .yml file?

Please change localhost to the private IP of the EC2 machine.
You can get it via hostname -I, ifconfig, a curl against the instance metadata endpoint, or from the AWS console.
Please reply if this doesn't fix the issue.
Also please note the security group should open up port 9200 for that private IP.
Also change "localhost" in elasticsearch.yml to the private IP.
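For example (the metadata endpoint below is the standard EC2 IMDSv1 address; if your instance enforces IMDSv2 you will need a token first):

# Private IPv4 from the EC2 instance metadata service
curl http://169.254.169.254/latest/meta-data/local-ipv4

# Or from the OS itself
hostname -I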

@Abhishek_Kumar6 Localhost works in my case, since all of the services are running on a single server. Nevertheless, I found the fix to my problem: disabling the X-Pack authentication, since the services I'm running are the open-source builds.
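For anyone hitting the same thing, the change in elasticsearch.yml was roughly this (plus removing the realms block shown earlier, then restarting Elasticsearch):

# Disable X-Pack security so unauthenticated clients can connect
xpack.security.enabled: false
xpack.security.audit.enabled: false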

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.