Getting a connection refused error while trying to send a log file from Filebeat to Elasticsearch on a different server. (Elasticsearch port modified to 9201.)

2017-08-16T08:46:52-04:00 ERR Connecting error publishing events (retrying): Get http://x.x.x.x:9201: dial tcp x.x.x.x:9201: getsockopt: connection refused
2017-08-16T08:47:17-04:00 INFO No non-zero metrics in the last 30s.

Below is my Filebeat configuration. No idea what I am missing.

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["x.x.x.x:9201"]
  certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

My input.conf contents:
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output.conf

output {
  elasticsearch {
    hosts => ["localhost:9201"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

filter.conf

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Do we need to configure anything in input.conf to accept connections from the Filebeat server? Please suggest.

Do you want to send from Filebeat to Logstash or to Elasticsearch? Filebeat is not a server, but a client to both Elasticsearch and Logstash. In the first post, it is the Elasticsearch host machine actively refusing the connection, not Filebeat. Is Elasticsearch running and accessible from the machine Filebeat is running on? Try curl http://x.x.x.x:9201 from the machine you want to run Filebeat on.
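
For example, something like this from the Filebeat machine (just plain connectivity checks; telnet may need to be installed, and x.x.x.x stands for your Elasticsearch host as in your post):

curl -v http://x.x.x.x:9201
telnet x.x.x.x 9201    # simple TCP-level check of the port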

Thanks steffens for the reply.

No, Elasticsearch is not accessible from the machine Filebeat is running on; I am getting the error below:

curl http://x.x.x.x:9201
curl: (7) couldn't connect to host

You are correct, the Elasticsearch host machine is actively refusing the connection.

I copied the certificate from the ELK server to the Filebeat server. Am I still missing any configuration?

My Elasticsearch configuration:

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#
# Set a custom port for HTTP:
#
http.port: 9201
#
# For more information, see the documentation at:
# <https://www.elastic.co/guide/en/elasticsearch/reference/5.0/modules-network.html>

My logstash configuration

# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
 http.host: "localhost"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
 http.port: 9600-9700
# ------------ Debugging Settings --------------
#

Is this the proper configuration so that Elasticsearch/Logstash would listen?

I am trying everything possible to make Filebeat send logs to either Elasticsearch or Logstash.

Please properly format logs and configuration files using the </> button. Your recent post is mostly unreadable.

Check the network.host setting. This one configures the hostname/interface Elasticsearch binds to. Setting it to 0.0.0.0 makes Elasticsearch available on all network interfaces.
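
For example, in elasticsearch.yml (a minimal sketch; a specific IP address of the server also works instead of 0.0.0.0):

network.host: 0.0.0.0
http.port: 9201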

Hi Steffens,

Tried configuring network.host to 0.0.0.0, still getting the same error.

Can you check with netstat or ss tools which device/IP elasticsearch is listening on?

Curl/telnet not working? It's a networking issue. Maybe you have a firewall or something else in place?
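
For example, on the Elasticsearch host (the grep pattern and iptables are only assumptions about your setup):

ss -tlnp | grep 9201        # which address is Elasticsearch listening on?
netstat -tlnp | grep 9201   # alternative on systems without ss
iptables -L -n              # any firewall rules blocking port 9201?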

I made configuration changes in Elasticsearch and also in Kibana; after the changes I am getting the output below when trying to hit the URL:

Status Breakdown
ID                           Status
ui settings                  Elasticsearch plugin is red
plugin:kibana@5.0.2          Ready
plugin:elasticsearch@5.0.2   Unable to connect to Elasticsearch at http://x.x.x.x:9201.
plugin:console@5.0.2         Ready
plugin:timelion@5.0.2        Ready

Getting 'red' from Kibana is not a good sign. It seems Kibana cannot access Elasticsearch either?

When I change the Elasticsearch configuration (network.host: 0.0.0.0), I am not able to restart Elasticsearch. I tried everything to stop Elasticsearch so that it would pick up the changes. When I revert the change to network.host: localhost, Elasticsearch works fine.

Kibana issue resolved: modified kibana.yml from #elasticsearch.url: "http://ELK_server_IP_Address:9201" to #elasticsearch.url: "http://localhost:9201"

Verified from netstat that port 9201 is listening on the ELK server.

Have you checked why Elasticsearch is not starting? Binding to localhost is a kind of development/test mode. When binding to another interface, Elasticsearch runs bootstrap checks to verify it will be stable when used in production.

After setting network.host: 0.0.0.0, Elasticsearch is not starting and I am getting the error below.

[2017-08-23T01:51:22,446][WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:361) ~[elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:630) ~[elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.JNANatives.trySeccomp(JNANatives.java:215) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Natives.trySeccomp(Natives.java:99) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:104) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:158) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:291) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:96) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.cli.Command.main(Command.java:62) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) [elasticsearch-5.0.2.jar:5.0.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) [elasticsearch-5.0.2.jar:5.0.2]

[2017-08-23T03:09:02,055][ERROR][o.e.b.Bootstrap ] [Kwck9Rg] node validation exception
bootstrap checks failed
max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

I checked elasticsearch.yml, but I am not seeing where to modify this configuration.

Can you ask about the bootstrap checks in the Elasticsearch forum? I don't know every check or the best way to resolve each of them in a stable way.

Hi Steffens, thank you so much for the help.

The system call filter issue is resolved by adding bootstrap.system_call_filter: false in elasticsearch.yml, but I am still getting this error:

[2017-08-29T07:08:14,723][ERROR][o.e.b.Bootstrap ] [Kwck9Rg] node validation exception
[1] bootstrap checks failed
[1]: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

Still the Elasticsearch bootstrap checks. See the Elasticsearch docs on important system settings. I guess you are looking for this article.
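
For reference, a sketch of what the limits entries might look like (assuming the service runs as the elasticsearch user; on systemd-based systems the LimitNPROC value in the elasticsearch.service unit takes precedence over limits.conf):

# /etc/security/limits.conf (or a file under /etc/security/limits.d/)
elasticsearch  soft  nproc  2048
elasticsearch  hard  nproc  2048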

Hi Steffens,

I have added @elasticsearch hard nproc 2048 in /etc/security/limits.conf, but I am still getting the error below:

[2017-08-29T09:27:45,011][ERROR][o.e.b.Bootstrap ] [wS6o9sH] node validation exception
[1] bootstrap checks failed
[1]: max number of threads [1024] for user [elasticsearch] is too low, increase to at least [2048]

I am also getting the output elasticsearch dead but subsys locked when running the service elasticsearch status command.

I have removed elasticsearch from /var/lock/subsys, but when I restart it, the lock file is automatically created again.

Isn't Elasticsearch supposed to create a lock file? To protect itself from multiple instances modifying the same files, for example? For this kind of internal Elasticsearch troubleshooting, better ask in the Elasticsearch forums. I've never encountered this error and have no real idea. I'd assume first ensuring no instance is running (no java process) and then deleting the file should do the trick.
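
Something along these lines might be worth trying (just a sketch; the exact lock file path and init-script behaviour depend on your distribution):

ps aux | grep -i elasticsearch        # make sure no stray Java process is still running
rm -f /var/lock/subsys/elasticsearch  # remove the stale subsys lock, if that is the file in question
service elasticsearch start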