Unable to send or receive log details

Hi Team,

This is a new ELK setup. So far everything has been installed properly: the Kibana dashboard is up and running, and Elasticsearch, Logstash, Filebeat, and nginx have all been installed successfully. However, I am facing an issue while sending logs. This is the output of:

curl http://localhost:9200/_cat/indices
=> green open .kibana_1 aMpvYvH1SG2zK-98zSyjZw 1 0 2 0 8.6kb 8.6kb

curl http://localhost:9200/logstash-*/_search
{"took":0,"timed_out":false,"_shards":{"total":0,"successful":0,"skipped":0,"failed":0},"hits":{"total":0,"max_score":0.0,"hits":[]}}

Here is where it goes WRONG:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
"took" : 0,
"timed_out" : false,
"_shards" : {
"total" : 0,
"successful" : 0,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 0,
"max_score" : 0.0,
"hits" : [ ]
}
}

Even telnet from the client to the ELK server is successful. Please help.

Thank You

Hello @Yashwant_Shettigar,

From what I see you are using Filebeat with Logstash; have you looked at their respective logs?
Usually something is logged when events cannot be sent.

Thanks

Hi Pier,

Actually, there was a silly mistake I had made in the filebeat.yml file. It has been corrected now and is working fine.

Can you please let me know whether the logstash-forwarder.crt file needs to be saved on the Windows machine too? It is not mentioned anywhere in the installation steps, and it is not working for me.

Thank You

@Yashwant_Shettigar Yes, you will need to make it available on the Windows machine. Which documentation are you referring to?
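For illustration, the certificate is referenced from filebeat.yml on Windows the same way as on Linux, just with a Windows path. This is only a sketch; the hostname and the certificate location are assumed example values, not from this thread:

```yaml
# filebeat.yml (Windows) — sketch; host and cert path are assumptions
output.logstash:
  hosts: ["elk-server:5044"]
  ssl.certificate_authorities: ["C:/filebeat/logstash-forwarder.crt"]
```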

Hi Pier,

Below is the link that I'm using :
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html

Do I have to use Winlogbeat for Windows log forwarding?

Also, can you please let me know:

  1. What is the retention period of logs (sent from the client machine via Filebeat)?
  2. In which configuration file can I change the retention period of logs?
  3. When logs are sent from the client machine via Filebeat, where are they saved on the ELK server?
  4. Is there a link that answers these questions?

Hi Guys,

Can someone please help me here?

Thank You

Do I have to use Winlogbeat for Windows log forwarding?

Filebeat takes care of physical files and forwards their content to either Elasticsearch or Logstash.
If you also want to index your Windows event logs, you will have to install Winlogbeat. Accessing those events requires specific Windows APIs, and Filebeat cannot do that.

  1. What is the retention period of logs (sent from the client machine via Filebeat)?

If you are talking about logs inside Elasticsearch: we create daily indices by default, but we do not delete older indices. You can, however, configure Curator to do just that.
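As a sketch, a Curator action file for such a retention policy might look like the following; the 30-day period and the logstash- prefix are assumptions you would adapt to your own setup:

```yaml
# Curator action file — delete daily logstash-* indices older than 30 days
actions:
  1:
    action: delete_indices
    description: Apply a 30-day retention to daily logstash indices
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```

Curator would then be run periodically (e.g. from cron) against this action file.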

  2. In which configuration file can I change the retention period of logs?

See the comment above.

  3. When logs are sent from the client machine via Filebeat, where are they saved on the ELK server?

Logs will be saved to daily indices inside your Elasticsearch cluster. Since you are using Logstash, your indices will follow the pattern "logstash-*". Depending on the settings you use on your indices and on the cluster topology, the log data may or may not be replicated across multiple nodes.

That is simple and clear information, Pier. Great, and thanks a lot!

One more issue occurred today. Two days ago, Filebeat was installed on a client (machine A) and it was working fine. Yesterday I installed Filebeat on one more machine (B), and that also worked fine. But now log forwarding has stopped from machine A while machine B keeps forwarding logs, even though Filebeat's own logs on A show that events are being sent to the ELK server. In the Kibana dashboard I can see old logs for machine A but not the latest ones.

I'm sorry for asking a lot of questions!

@Yashwant_Shettigar I would check the following:

Are there any errors on A in the Filebeat log?
Are you looking at the right time range in Kibana to see the logs?

Hi Pier,

I can't see any errors in the Filebeat log file, but below are its contents:

One thing I need to mention: while setting up machine B (SUSE), I copied the filebeat.yml contents from machine A (Red Hat) directly to machine B and only changed the SSL certificate location; I didn't make any other changes. It seems that as soon as the setup of machine B was completed, log forwarding stopped for machine A.

2018-11-23T02:12:43.087-0500 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3620},"total":{"ticks":8580,"time":{"ms":5},"value":8580},"user":{"ticks":4960,"time":{"ms":5}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"185a8745-6cd1-4929-8153-fe610c79a9e7","uptime":{"ms":70263014}},"memstats":{"gc_next":4194304,"memory_alloc":1566528,"memory_total":695033560}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":5}},"system":{"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}

2018-11-23T02:13:13.087-0500 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3620,"time":{"ms":1}},"total":{"ticks":8590,"time":{"ms":4},"value":8590},"user":{"ticks":4970,"time":{"ms":3}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"185a8745-6cd1-4929-8153-fe610c79a9e7","uptime":{"ms":70293014}},"memstats":{"gc_next":4194304,"memory_alloc":1865152,"memory_total":695332184}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":5}},"system":{"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}
2018-11-23T02:13:43.087-0500 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":3620},"total":{"ticks":8590,"time":{"ms":2},"value":8590},"user":{"ticks":4970,"time":{"ms":2}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":7},"info":{"ephemeral_id":"185a8745-6cd1-4929-8153-fe610c79a9e7","uptime":{"ms":70323014}},"memstats":{"gc_next":4194304,"memory_alloc":2236792,"memory_total":695703824}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":5}},"system":{"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}

While below is the log that I can see in /var/log/logstash/logstash-plain.log file :

[2018-11-20T03:52:51,540][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5044, remote: undefined] Handling exception: javax.net.ssl.SSLHandshakeException: error:1000009c:SSL routines:OPENSSL_internal:HTTP_REQUEST
[2018-11-20T03:52:51,542][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: error:1000009c:SSL routines:OPENSSL_internal:HTTP_REQUEST
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:459) ~[netty-all-4.1.18.Final.jar:4.1.18.Final]

This looks like a problem with the SSL authentication.
I would probably try to make it work without SSL first and add SSL back afterwards.

Hi Pier,

But SSL is required to establish connectivity between Logstash and Filebeat, right?

Hello,

It's not required; it's off by default.

It's always better to use TLS (SSL), but let's first make sure everything works without it.

Do you mean I just need to disable the SSL settings in the logstash.yml file?

In your logstash.conf, in the beats input, you can set it to false using ssl => false.

If you don't define any TLS options in Filebeat, it will just start with plain text.
Let's get the plain-text scenario working first. After that we can move on to fixing TLS; maybe it is a certificate issue.
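As a minimal sketch of the plain-text setup (the port is the Beats default; the hostname is an assumed example):

```conf
# logstash.conf — beats input with TLS disabled
input {
  beats {
    port => 5044
    ssl  => false
  }
}
```

On the Filebeat side, leaving out all ssl.* options under output.logstash (e.g. hosts: ["elk-server:5044"]) should make it connect in plain text.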

Yes Pier, this has stopped generating SSL errors.

Also, log forwarding isn't happening properly from any machine, as mentioned earlier. I tried a workaround: I deleted the registry file from the client machine and then restarted the Filebeat service. It works only for a few seconds; after that no more logs are forwarded. To send logs again I have to delete the registry file and restart the Filebeat service once more, and the story continues in a loop.

Below is the content of filebeat logs :

2018-11-27T15:48:32.140-0500 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50},"total":{"ticks":180,"time":{"ms":3},"value":180},"user":{"ticks":130,"time":{"ms":3}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"cab00d04-b675-4d59-9572-ef7480dcc927","uptime":{"ms":1263016}},"memstats":{"gc_next":4194304,"memory_alloc":2085560,"memory_total":19487712}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":3}},"system":{"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}

2018-11-27T15:49:02.140-0500 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50,"time":{"ms":1}},"total":{"ticks":180,"time":{"ms":3},"value":180},"user":{"ticks":130,"time":{"ms":2}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":8},"info":{"ephemeral_id":"cab00d04-b675-4d59-9572-ef7480dcc927","uptime":{"ms":1293016}},"memstats":{"gc_next":4194304,"memory_alloc":2227872,"memory_total":19630024}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":3}},"system":{"load":{"1":0,"15":0.05,"5":0.01,"norm":{"1":0,"15":0.05,"5":0.01}}}}}}

Hi Pier,

There was a mistake in the configuration. It has been resolved now, and log forwarding is happening regularly. Thanks for your valuable time and help.

There were two files in my Logstash conf directory that were conflicting.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.