Getting a "connection refused" error while trying to send a log file from Filebeat to Elasticsearch on a different server (Elasticsearch port changed to 9201)

(Jeemi Sinha) #1

2017-08-16T08:46:52-04:00 ERR Connecting error publishing events (retrying): Get http://x.x.x.x:9201: dial tcp x.x.x.x:9201: getsockopt: connection refused
2017-08-16T08:47:17-04:00 INFO No non-zero metrics in the last 30s.

Below is my Filebeat configuration. No idea what I am missing:


# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

  # Array of hosts to connect to.
  hosts: ["x.x.x.x:9201"]
  certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
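For reference, a complete minimal filebeat.yml along these lines might look like the sketch below. The output.elasticsearch heading and the ssl.certificate_authorities spelling (the Filebeat 5.x form for the Elasticsearch output) are assumptions, since the snippet above omits its section headers; host, port, and certificate path are taken from the post:

```yaml
filebeat.prospectors:
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

# Assumed section heading: hosts/certificate options belong under an
# output section, not under the prospector.
output.elasticsearch:
  hosts: ["x.x.x.x:9201"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```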

(Jeemi Sinha) #2

My input.conf contents:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9201"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
Do we need to configure anything in input.conf to accept connections from the Filebeat server? Please suggest.

(Steffen Siering) #3

Do you want to send from Filebeat to Logstash, or to Elasticsearch? Filebeat is not a server, but a client to both Elasticsearch and Logstash. In the first post it is the Elasticsearch host machine that is actively refusing the connection, not Filebeat. Is Elasticsearch running and accessible from the machine Filebeat is running on? Try curl http://x.x.x.x:9201 from the machine you want to run Filebeat on.
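That reachability test can be wrapped in a small script. A minimal sketch, where the host and port arguments are placeholders for the ELK server's address:

```shell
#!/usr/bin/env bash
# Probe the Elasticsearch HTTP endpoint from the Filebeat machine.
# Usage: ./probe.sh <elasticsearch-host> <port>   (defaults are placeholders)
HOST="${1:-127.0.0.1}"
PORT="${2:-9201}"

if curl -s --max-time 5 "http://${HOST}:${PORT}" > /dev/null; then
    echo "reachable: Elasticsearch answered on ${HOST}:${PORT}"
else
    # curl exit code 7 means the TCP connection itself was refused,
    # which points at the bind address or a firewall, not at Filebeat.
    echo "unreachable: ${HOST}:${PORT} refused or timed out" >&2
fi
```

If the probe fails from the Filebeat machine but succeeds on the ELK server itself, the problem is the bind address or a firewall between the two hosts.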

(Jeemi Sinha) #4

Thanks steffens for the reply.

No, Elasticsearch is not accessible from the machine Filebeat is running on; I get the error below:

curl http://x.x.x.x:9201
curl: (7) couldn't connect to host

You are correct: "the Elasticsearch host machine is actively refusing the connection".

I copied the certificate from the ELK server to the Filebeat server. Am I still missing any configuration?

(Jeemi Sinha) #5

My Elasticsearch configuration:

# ---------------------------------- Network -----------------------------------
# Set the bind address to a specific IP (IPv4 or IPv6):
# localhost
# Set a custom port for HTTP:
http.port: 9201
# For more information, see the documentation at:
# <>

My logstash configuration

# ------------ Metrics Settings --------------
# Bind address for the metrics REST endpoint
# "localhost"
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
 http.port: 9600-9700
# ------------ Debugging Settings --------------

Is this the proper configuration so that Elasticsearch/Logstash will listen?

I am trying anything possible to make Filebeat send logs to either Elasticsearch or Logstash.

(Steffen Siering) #6

Please properly format logs and configuration files using the </> button. Your recent post is mostly unreadable.

Check the network.host setting. It configures the hostname/interface Elasticsearch is bound to. Setting it to the server's IP (or 0.0.0.0) makes Elasticsearch reachable from other network devices.
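A minimal sketch of the relevant elasticsearch.yml lines, assuming the port from the earlier posts (0.0.0.0 binds all interfaces; the server's own IP works as well):

```yaml
# elasticsearch.yml -- Network section
# Bind to a non-loopback address so remote Filebeat/Kibana can connect.
network.host: 0.0.0.0
http.port: 9201
```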

(Jeemi Sinha) #7

Hi Steffens,

Tried that configuration; I am still getting the same error.

(Steffen Siering) #8

Can you check with the netstat or ss tools which device/IP Elasticsearch is listening on?

If curl/telnet are not working, it's a networking issue. Maybe you have a firewall or something else in place?

(Jeemi Sinha) #9

I have made the configuration changes in Elasticsearch and also in Kibana. After the changes, I get the output below when hitting the Kibana status URL:

Status Breakdown
ID: Status
ui settings: Elasticsearch plugin is red
plugin:kibana@5.0.2: Ready
plugin:elasticsearch@5.0.2: Unable to connect to Elasticsearch at http://x.x.x.x:9201.
plugin:console@5.0.2: Ready
plugin:timelion@5.0.2: Ready

(Steffen Siering) #10

Getting 'red' from Kibana is not a good signal. It seems like Kibana cannot access Elasticsearch?

(Jeemi Sinha) #11

When I change the Elasticsearch configuration (network.host set to the server IP), I am not able to restart Elasticsearch. I tried everything to stop Elasticsearch so that it would pick up the changes; when I revert to network.host: localhost, Elasticsearch works fine.

Kibana issue resolved: modified kibana.yml from elasticsearch.url: "http://ELk_server_IP_Address:9201" to elasticsearch.url: "http://localhost:9201".

Verified from netstat: port 9201 is listening on the ELK server.

(Steffen Siering) #12

Have you checked why Elasticsearch is not starting? Binding to localhost is a kind of development/test mode. When binding to another device, Elasticsearch runs bootstrap checks to verify it will be stable when used in production.

(Jeemi Sinha) #13

After setting network.host to the server IP, Elasticsearch is not starting, and I am also getting the error below.

[2017-08-23T01:51:22,446][WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
at org.elasticsearch.bootstrap.Seccomp.linuxImpl( ~[elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Seccomp.init( ~[elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.JNANatives.trySeccomp( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Natives.trySeccomp( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Bootstrap.setup( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Bootstrap.init( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.init( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.cli.SettingCommand.execute( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.cli.Command.main( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.main( [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.main( [elasticsearch-5.0.2.jar:5.0.2]

[2017-08-23T03:09:02,055][ERROR][o.e.b.Bootstrap ] [Kwck9Rg] node validation exception
bootstrap checks failed
max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

I looked in elasticsearch.yml, but I am not finding where to modify this configuration.

(Steffen Siering) #14

Can you ask about the bootstrap checks in the Elasticsearch forum? I don't know every check, or good solutions for resolving the checks in a stable way.

(Jeemi Sinha) #15

Hi Steffens, thank you so much for the help.

The system call filter issue is resolved by adding bootstrap.system_call_filter: false in elasticsearch.yml, but I am still getting this error:

[2017-08-29T07:08:14,723][ERROR][o.e.b.Bootstrap ] [Kwck9Rg] node validation exception
[1] bootstrap checks failed
[1]: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

(Steffen Siering) #16

These are still Elasticsearch bootstrap checks. See the Elasticsearch docs on important system settings. I guess you are looking for this article.

(Jeemi Sinha) #17

Hi Steffens,

I have added @elasticsearch hard nproc 2048 in /etc/security/limits.conf, but I am still getting the error below:

[2017-08-29T09:27:45,011][ERROR][o.e.b.Bootstrap ] [wS6o9sH] node validation exception
[1] bootstrap checks failed
[1]: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]
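One likely reason the change did not take effect: in /etc/security/limits.conf the @ prefix targets a group, not a user, and the soft limit needs raising as well as the hard one. A hedged sketch of the entries, assuming Elasticsearch runs as the user elasticsearch and is started through a PAM-aware init script (a systemd unit ignores limits.conf and would need LimitNPROC in the unit file instead):

```text
# /etc/security/limits.conf
# "@elasticsearch" would apply to a *group*; use the bare user name.
elasticsearch soft nproc 2048
elasticsearch hard nproc 2048
```

After editing, start a fresh session for the elasticsearch user (or restart the service) and verify the limit with: sudo -u elasticsearch bash -c 'ulimit -u'.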

(Jeemi Sinha) #18

Also, when I run the service elasticsearch status command, I get the output: elasticsearch dead but subsys locked.

(Jeemi Sinha) #19

I have removed the elasticsearch lock file from /var/lock/subsys, but when I restart, it automatically creates the lock again.

(Steffen Siering) #20

Isn't Elasticsearch supposed to create a lock file? To protect itself from multiple instance modifying the same files for example? For these internal Elasticsearch troubleshooting, better ask in the Elasticsearch forums. I've never encountered this error and have no real idea. I'd assume to first ensure no instance is running (no java process) and deleting the file should do the trick.