Unable to get nginx messages to kibana through filebeat

Hi Guys,

I am unable to get proper messages into Elasticsearch through filebeat. I have an nginx reverse proxy serving at least 50 sites, and this is my first attempt at ELK with filebeat. I am getting messages through, but they are not in proper shape.

Can someone confirm whether shipping nginx logs needs any extra configuration?

What does that mean exactly?

Well, I did receive a few messages, but they are not indexed and I am unable to search them. So does filebeat natively support nginx logs, or is another parser needed?

Also, can I ingest DNS logs with filebeat?

Do you have any configs, logs, indexed documents, or errors you can share with us?

How did you set up/configure the nginx filebeat module?

Here is the filebeat configuration

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
        - /var/log/nginx/xyz.com/*.log
        - /var/log/*.log
        - /var/log/nginx/*/*.log
        - /var/log/messages
        - /var/log/secure

      # Type of the files. Based on this the way the file is read is decided.
      # The different types cannot be mixed in one prospector
      #
      # Possible options are:
      # * log: Reads every line of the log file (default)
      # * stdin: Reads the standard in
      input_type: log

Aug 8 08:21:49 labmumwaf01 filebeat: Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: line 281: did not find expected key. Exiting.
Aug 8 08:21:49 labmumwaf01 systemd: filebeat.service: main process exited, code=exited, status=1/FAILURE
Aug 8 08:21:49 labmumwaf01 systemd: Unit filebeat.service entered failed state.
Aug 8 08:21:49 labmumwaf01 systemd: filebeat.service holdoff time over, scheduling restart.
Aug 8 08:21:49 labmumwaf01 systemd: Stopping filebeat...
Aug 8 08:21:49 labmumwaf01 systemd: Starting filebeat...
Aug 8 08:21:49 labmumwaf01 systemd: filebeat.service start request repeated too quickly, refusing to start.
Aug 8 08:21:49 labmumwaf01 systemd: Failed to start filebeat.
Aug 8 08:21:49 labmumwaf01 systemd: Unit filebeat.service entered failed state.

######################
281 ### Logstash as output
282 logstash:
283 # The Logstash hosts
284 hosts: ["172.16.3.69:5044"]

And here is the snap where kibana shows the filebeat-* index is not found.

You may want to check that line.
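For reference, "did not find expected key" at a given line is usually an indentation problem or a missing parent key. In a 1.x/5.x-style filebeat.yml the logstash section belongs under a top-level output: key, roughly like this sketch (host and port copied from your snippet):

output:
  logstash:
    # The Logstash hosts
    hosts: ["172.16.3.69:5044"]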

I rectified that error and filebeat started successfully. However, my confusion is: since this is an nginx server, is it advisable to use logstash parsers behind filebeat, or can I ingest messages directly from filebeat into elasticsearch?

Well, I tried pushing messages to elasticsearch directly from filebeat. Messages did appear, but it seems they are not parsed: the entire log line landed in the message field instead of being split into source, destination and so on.

What is most advisable then? And is it the same with sysmon or winlogbeat: do those messages need to go through logstash, or can they be pushed directly to elasticsearch?

Either works, but there is an nginx module that should make this simpler - Nginx module | Filebeat Reference [8.11] | Elastic
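If you are on a filebeat version that ships modules (5.3+; they were new in the 5.x series), you can enable the module from the command line on the machine running filebeat. A minimal sketch in the 5.x style, adjust paths to your install:

# enable the nginx module, load the sample Kibana dashboards (-setup),
# and log to stderr (-e) so you can watch for errors
filebeat -e -modules=nginx -setup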

That needs to be installed on the filebeat server or the nginx server, I suppose? Please correct me if I am wrong. Or on the elasticsearch box?

Well, I did install ingest-user-agent and ingest-geoip on the ELK box, but I am not sure where to install the nginx plugin for filebeat. Would you please share the procedure?

Oh, you are not using the nginx filebeat module? The module contains kibana dashboards, filebeat configurations, and the ingest pipeline configuration for elasticsearch. As you don't use the module, you have to configure the parsing yourself, via an Elasticsearch ingest pipeline or logstash.
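To illustrate the do-it-yourself route, here is a minimal sketch of an Elasticsearch ingest pipeline. It assumes nginx writes the default combined access-log format (which the stock COMBINEDAPACHELOG grok pattern matches) and uses a hypothetical pipeline name, nginx-access; the elasticsearch host below is also illustrative:

PUT _ingest/pipeline/nginx-access
{
  "description": "Parse nginx access logs (default combined format)",
  "processors": [
    { "grok": { "field": "message", "patterns": ["%{COMBINEDAPACHELOG}"] } }
  ]
}

Then point filebeat's elasticsearch output at it in filebeat.yml:

output.elasticsearch:
  hosts: ["172.16.3.69:9200"]
  pipeline: nginx-access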

See the nginx module sources for the Ingest Node configs (the <name>/ingest/default.json files). The <name>/config/... files include templates for building the filebeat prospector settings. The nginx/_meta directory contains the kibana dashboards.
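If you only want the module's parsing without running the module itself, you could load that default.json into elasticsearch by hand, something like the following, where the pipeline name, file path, and endpoint are all illustrative:

# upload the module's ingest pipeline definition (ES 5.x ingest API)
curl -XPUT 'http://localhost:9200/_ingest/pipeline/filebeat-nginx-access' \
     --data-binary @nginx/access/ingest/default.json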

Check out the [Module Overview] and [Tutorial] docs to get started with modules. It's much more user-friendly than having to configure everything yourself.

Your prospector also lumps all the different logs together. Consider defining one prospector per log type. This lets you attach additional metadata to each log type and route it to a different processing pipeline in ES Ingest Node or Logstash. This is how modules work: they create specialized prospector configurations in filebeat. See the sketch below.
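A sketch of that split, with hypothetical paths and field names, in filebeat 5.x syntax:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/*/access.log
  # custom metadata to route on in Logstash or an ingest pipeline
  fields:
    log_type: nginx_access
- input_type: log
  paths:
    - /var/log/messages
    - /var/log/secure
  fields:
    log_type: syslog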

I somehow figured out the modules and ran them on the filebeat machine; however, I am getting the below error on the kibana dashboard and am unable to see the dashboards.

Can someone please help?

Error
Saved Visualization Service: Visualization type of "tagcloud" is invalid. Please change to a valid type.

Sounds like the dashboard is not fully compatible with your kibana version.

I am using 5.x; do I need to use anything else?
