Filebeat running on remote server and sending logs to Elasticsearch but not visible in Kibana

Hi, I have Elasticsearch configured as a master and data node on the same machine. Filebeat installed on that machine ships logs, and they are indexed and visible in Kibana. But when I install Filebeat on a remote server and point it at the same Elasticsearch cluster, I don't see the remote server's logs in Kibana, even though the logs are shipped to Elasticsearch.
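For context, a minimal filebeat.yml for shipping from a remote server looks something like this (a sketch only; the paths and hosts below are illustrative, not the poster's actual settings):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log          # illustrative path; adjust to the logs being shipped

output.elasticsearch:
  hosts: ["172.16.7.93:9200"]   # illustrative address of the Elasticsearch HTTP endpoint

setup.kibana:
  host: "172.16.7.93:5601"      # illustrative; lets `filebeat setup` load index patterns and dashboards
```

When documents reach Elasticsearch but nothing shows in Kibana, the usual first things to check are that a matching index pattern exists in Kibana and that the time range in Discover covers the events.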


Please don't post pictures of text; they are difficult to read, and some people may not even be able to see them.

> when I install Filebeat on a remote server

Would you mind posting the filebeat.yml that you are using on this remote server? Please be sure to mask any sensitive information in it.

> I don't see the remote server's logs

Could you elaborate on what type of logs you want to collect? Could you also post the results of `./filebeat modules list`, run on the remote server?
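For reference, run from the Filebeat install directory it looks like this (the `enable` line is just an example of a follow-up step, not something the poster was asked to do):

```sh
./filebeat modules list            # prints the Enabled and Disabled modules
./filebeat modules enable system   # example: enable the system module
```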

Thanks,

Shaunak

data node

```yaml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: newes
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: data-node-1
#
# Add custom attributes to the node:
#
# Allow this node to be eligible as a master node (enabled by default):
#
node.master: false
#
# Allow this node to store data (enabled by default):
#
node.data: true
#
#node.attr.rack: r1
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 172.16.7.86
#
# Set a custom port for HTTP:
#
http.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["172.16.7.93:9300"]
#discovery.seed_hosts: ["172.16.7.86:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["-master-node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
```

So this is what I've done after my earlier post: I want to put the master and data nodes on different instances instead of the same instance as before. What I posted above is the elasticsearch.yml for the data node, and I'm getting errors. I will post the error below:

```
[2019-06-05T20:46:23,581][DEBUG][o.e.a.ActionModule ] [data-node-1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-06-05T20:46:23,947][INFO ][o.e.d.DiscoveryModule ] [data-node-1] using discovery type [zen] and seed hosts providers [settings]
[2019-06-05T20:46:24,996][INFO ][o.e.n.Node ] [data-node-1] initialized
[2019-06-05T20:46:24,996][INFO ][o.e.n.Node ] [data-node-1] starting ...
[2019-06-05T20:46:25,147][INFO ][o.e.t.TransportService ] [data-node-1] publish_address {172.16.7.86:9300}, bound_addresses {172.16.7.86:9300}
[2019-06-05T20:46:25,157][INFO ][o.e.b.BootstrapChecks ] [data-node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-06-05T20:46:35,175][WARN ][o.e.c.c.ClusterFormationFailureHelper] [data-node-1] master not discovered yet: have discovered ; discovery will continue using [172.16.7.93:9300] from hosts providers and [{data-node-1}{US2PX-xWRpyAHt7r5mnDdA}{KBRRQG79S4KmZf3C9jIIzQ}{172.16.7.86}{172.16.7.86:9300}{ml.machine_memory=8054480896, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
[2019-06-05T20:46:45,176][WARN ][o.e.c.c.ClusterFormationFailureHelper] [data-node-1] master not discovered yet: have discovered ; discovery will continue using [172.16.7.93:9300] from hosts providers and [{rsc-data-node-1}{US2PX-xWRpyAHt7r5mnDdA}{KBRRQG79S4KmZf3C9jIIzQ}{172.16.7.86}{172.16.7.86:9300}{ml.machine_memory=8054480896, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
^X^C[2019-06-05T20:46:50,076][INFO ][o.e.x.m.p.NativeController] [data-node-1] Native controller process has stopped - no new native processes can be started
[2019-06-05T20:46:50,082][INFO ][o.e.n.Node ] [data-node-1] stopping ...
[2019-06-05T20:46:50,083][INFO ][o.e.x.w.WatcherService ] [data-node-1] stopping watch service, reason [shutdown initiated]
[2019-06-05T20:46:50,330][INFO ][o.e.n.Node ] [data-node-1] stopped
[2019-06-05T20:46:50,331][INFO ][o.e.n.Node ] [data-node-1] closing
```
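The repeated warning above says the data node has discovered no master and keeps retrying 172.16.7.93:9300. Judging by the log format this is Elasticsearch 7.x, where cluster formation is normally configured with discovery.seed_hosts and cluster.initial_master_nodes rather than the legacy zen setting. A sketch of what that could look like for these two nodes is below (addresses and node names taken from the configs in this thread, but this is an illustration, not the poster's eventual fix). Separately, http.port: 9300 on the data node is worth a second look, since 9300 is the default transport port.

```yaml
# On both nodes: master-eligible nodes are reached on the transport port (9300 by default)
discovery.seed_hosts: ["172.16.7.93:9300"]

# On first startup of a brand-new cluster only, on the master-eligible node:
cluster.initial_master_nodes: ["master-node-1"]
```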

The master node is running fine. Below is the elasticsearch.yml for the master:
```yaml
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: newes
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: master-node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# Allow this node to be eligible as a master node (enabled by default):
#
node.master: true
#
# Allow this node to store data (enabled by default):
#
node.data: false
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 172.16.7.93
#
# Set a custom port for HTTP:
#
http.port: 9200
#
network.bind_host: 172.16.7.93
network.publish_host: 172.16.7.93
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["172.16.7.93:9200","172.16.7.86:9300"]
```
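Once both nodes form a cluster, a quick way to confirm it (illustrative host, assuming the master's HTTP address above) is:

```sh
curl "http://172.16.7.93:9200/_cat/nodes?v"   # should list both master-node-1 and data-node-1
```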

Please format your code/logs/config using the </> button, or markdown-style backticks. It helps to make things easy to read, which helps us help you :slight_smile:

Please explain? I just copied the YAML files and the errors and pasted them here.

Yes, but it's extremely hard to read all of that. YAML is indent-sensitive, so pasting things without making sure the formatting is preserved means we lose the ability to see if something is not indented correctly.
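For example, wrapping a snippet in triple backticks keeps the indentation exactly as it is in the file:

````markdown
```yaml
node.master: false
node.data: true
```
````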

What do you want me to do? Please let me know...

```yaml
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: rsc
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: rsc-data-node-1
#
# Add custom attributes to the node:
#
# Allow this node to be eligible as a master node (enabled by default):
#
node.master: false
#
# Allow this node to store data (enabled by default):
#
node.data: true
#
#node.attr.rack: r1
```

Can I email you?

This has been resolved. The data nodes are working and Filebeat is sending logs from the remote server to ES.


Please share the solution in the thread, it might help someone in the future :slight_smile:


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.