I am new to ELK. I have an ELK stack 7.12.0 (Filebeat -> Logstash -> Elasticsearch cluster -> Kibana).
I installed Filebeat on the Nginx server and enabled the Nginx module. These are my configuration files:
It looks like the Filebeat Nginx module is not parsing the access logs correctly at ingest time, so when the document is ingested into Elasticsearch, the entire line shows up in the "message" field and the other fields are not populated.
The Filebeat Nginx module documentation states that it has been tested with Nginx version 1.10. What version of Nginx are you using? Also, can you share a sample of a few lines from your Nginx access log here?
Pay special attention to the pipeline reference. What is probably happening is that the ingest pipeline (the Nginx ingest pipeline) is not being called, so the logs are not parsed correctly.
If you first point Filebeat at Elasticsearch and run filebeat setup -e,
then point Filebeat to Logstash, all the ILM policies, templates, pipelines, dashboards, etc. should work.
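To make that concrete, here is a minimal sketch of the output switch in filebeat.yml (the hosts are placeholders, not your actual addresses):

# Phase 1: Elasticsearch output enabled while you run "filebeat setup -e"
output.elasticsearch:
  hosts: ["http://localhost:9200"]   # placeholder
#output.logstash:
#  hosts: ["localhost:5044"]

# Phase 2: after setup succeeds, flip the two and start Filebeat normally
#output.elasticsearch:
#  hosts: ["http://localhost:9200"]
output.logstash:
  hosts: ["localhost:5044"]          # placeholder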
After some changes, I now get this error when I run filebeat setup -e again:
2021-04-10T11:03:38.921+0430 WARN beater/filebeat.go:178 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2021-04-10T11:03:38.921+0430 ERROR instance/beat.go:971 Exiting: Index management requested but the Elasticsearch output is not configured/enabled
Exiting: Index management requested but the Elasticsearch output is not configured/enabled
I am new to this. Could you help me troubleshoot step by step, or send me some links to study?
I would appreciate it.
I provided my suggestion... but it looks like you are still trying other options.
You are making this more complex than it needs to be. You are trying to run setup with all the individual commands, which can often lead to issues, as opposed to just running the basic command that I supplied. So let's try this again.
Go into Elasticsearch and clean up / delete any existing filebeat indices.
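For example, from Kibana Dev Tools (careful: this removes all Filebeat data, which is the point here):

DELETE filebeat-*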
In your filebeat.yml, configure the output to point to Elasticsearch and comment out the Logstash output. The Kibana setup looks good.
Run this command, no extra parameters, just this command. This will set up everything:
filebeat setup -e
If you're running 7.12 there's actually a little bug: it'll throw some errors at the bottom of that command, but it should be fine; ignore them for now.
Now go back into your filebeat.yml, comment out (or take out) the Elasticsearch output, and put in your Logstash output.
Use the Logstash configuration file that I provided above and start Logstash.
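For anyone following along, the key part of that Logstash config is the pipeline reference in the elasticsearch output, along the lines of the Filebeat documentation (host and port are placeholders):

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    pipeline => "%{[@metadata][pipeline]}"   # hands each event to the module's ingest pipeline
  }
}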
Start Filebeat with no extra parameters:
filebeat -e
Perhaps I didn't provide enough information. There is an Nginx pipeline that's automatically loaded and used to parse the data on the Elasticsearch side. If you don't configure all of this correctly, as I showed above, that pipeline will not be used, so your data will not be processed and the dashboards will not work.
Please try my suggestions, and if they don't work we can take another look. I have set this up many times exactly like this for Nginx logs, and I can load the dashboards and everything.
Can you try to send the data directly from Filebeat to Elasticsearch just to test (skip Logstash)? I suspect that if you go Filebeat -> Elasticsearch it would work.
Can you share what the Nginx documents look like in Elasticsearch?
GET filebeat-*/_search
Can you show
GET _cat/indices/filebeat*/?v
Also curious why you have this; are there other logs besides the Nginx logs?
Also, I just realized something. I may have found it!!
What other conf files do you have in /etc/logstash/conf.d/?
Please do a
ls -l /etc/logstash/conf.d/
And show the results... All conf files in that directory are concatenated together, so if you have other conf files they may be interfering; we have seen that before....
If you want separate pipelines, you define them in pipelines.yml.
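A minimal pipelines.yml sketch, with hypothetical pipeline IDs and file names:

- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/beats.conf"
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"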
2- I will try skipping Logstash and will report back; I think that could be the problem.
3- GET filebeat-*/_search output:
woooow, I don't have any output with /var/log/nginx as the log path. I only have /var/log/messages.
4- GET _cat/indices/filebeat*/?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open filebeat-7.12.0-2021.04.11-000001 2KoY1dA6TiqL4ShZoS8GfA 1 1 28928 0 11.1mb 5.5mb
This is telling.... ^^^^ It looks like Filebeat is not harvesting the Nginx logs.
On the Filebeat server with the Nginx logs....
ls -l /var/log/nginx
What do you see?
I think you may be missing a wildcard; let's look at that directory.
Perhaps it should be
var.paths: ["/var/log/nginx/access.log*"]
Also, I would keep it as one Logstash conf file for now; use the ones I gave you... and you don't need a pipelines file for now, that's advanced. Let's figure this out first.
Perhaps, but this module is used by many people; if it wasn't working we would be getting a lot of reports... but it may not be working for you, so let's figure it out.
I think it is one of two things...
The config is not correct,
or perhaps your Nginx logs have been modified and are not standard.
First, important: do not configure the Nginx path in filebeat.yml, only in the nginx.yml module config, not both.
in filebeat.yml: <!---- TAKE this out... this is overriding the module... and thus not parsing the logs
paths:
  - /var/log/nginx/*
In fact, please disable that input entirely while we debug... take it out or set it to false:
filebeat.inputs:
- type: log
  enabled: false    <!----- HERE or take out entirely
  paths:
    - /var/log/*.log
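And the module config in modules.d/nginx.yml would look roughly like this (the paths assume the default Nginx log location on your box):

- module: nginx
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log*"]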
Request: Can you provide 3 sample raw Nginx log lines? I want to make sure they parse correctly.
You can anonymize the IPs if you wish.
Now you can also try going directly from Filebeat -> Elasticsearch (with only the nginx module enabled).
You will need to clean up the Filebeat registry so it will reload the files; you do this by removing the data directory in Filebeat. The side effect is that it will re-ingest all the logs... if you are OK with that.
so from inside the filebeat directory...
rm -fr ./data
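If Filebeat was installed from a DEB/RPM package rather than a tarball, the data directory usually lives under /var/lib/filebeat instead (the exact path can vary by install):

sudo systemctl stop filebeat
sudo rm -rf /var/lib/filebeat/registry   # same effect as removing ./data in a tarball install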
Now comment out the Logstash output in the filebeat.yml configuration and enable the Elasticsearch output.
Those are custom log formats... not the standard / default format. That is why the logs are not getting parsed; I should have looked at that earlier.
I did ask for this and never got it... that would have shortened this. I should have re-asked; it is why I ask certain questions... my fault.
You have 3 numbers in the sequence; the default has 2.
So first we need to find out what they are. You need to talk to the Nginx team and find out what their log format is. What are the other 2 numbers besides status_code? One is most likely response.body.bytes;
which is the other, and what order are they in? 200 3451 3829
It looks like the error logs have the same issue.
The default format has 2 fields, for example 200 43707.
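For context, Nginx's default "combined" format is defined like this; $status and $body_bytes_sent produce those two numbers, so a third number means a custom log_format directive is in play (only your nginx.conf can confirm which variable it is):

log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';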