Filebeat > Nginx Module

Part 4: Final


Did that output come from this request?

GET /_template/filebeat-7.1.1

If so, further down you will see this, which is correct. The output is very long:

   "source" : {
      "properties" : {
        "geo" : {
          "properties" : {
            "region_iso_code" : {
              "ignore_above" : 1024,
              "type" : "keyword"
            "continent_name" : {
              "ignore_above" : 1024,
              "type" : "keyword"
            "city_name" : {
              "ignore_above" : 1024,
              "type" : "keyword"
            "country_iso_code" : {
              "ignore_above" : 1024,
              "type" : "keyword"
            "country_name" : {
              "ignore_above" : 1024,
              "type" : "keyword"
            "name" : {
              "ignore_above" : 1024,
              "type" : "keyword"
            "region_name" : {
              "ignore_above" : 1024,
              "type" : "keyword"
            "location" : {
              "type" : "geo_point"

In the index pattern you should see these fields, as shown below...

If not, delete that index pattern from the Kibana GUI
(Management -> Index Patterns)
and run ./filebeat setup again.
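The re-setup step can be sketched like this; it assumes filebeat is on your PATH (otherwise run ./filebeat from its install directory):

```shell
# After deleting the stale index pattern in Kibana, re-run setup to
# re-load the index template and the Kibana dashboards.
if command -v filebeat >/dev/null 2>&1; then
  filebeat setup --index-management --dashboards
else
  echo "filebeat not found on PATH"
fi
```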

Apologies, but I don't think I am able to help much more ... to do a clean install ... uninstall and re-install Elasticsearch, and make sure the data directory under the Elasticsearch install is removed before you reinstall.
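A sketch of that clean reinstall for an RPM-based host; the package manager and the /var/lib/elasticsearch data directory are assumptions (check path.data in your elasticsearch.yml first). This is destructive, so the sketch only prints the steps unless CONFIRM=yes is set.

```shell
# Clean-reinstall sketch (RPM install assumed). DESTRUCTIVE: removing
# the data directory deletes all indexed data. With CONFIRM unset it
# only prints each step instead of running it.
run() { if [ "$CONFIRM" = "yes" ]; then "$@"; else echo "would run: $*"; fi; }

run sudo service elasticsearch stop
run sudo yum remove -y elasticsearch
run sudo rm -rf /var/lib/elasticsearch   # remove the old data directory
run sudo yum install -y elasticsearch
```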

Hi, may I check with you: my client side is installing Filebeat 7.3.0 while the ES server's Filebeat is on 7.1.1. Will there be an issue in terms of compatibility and the data format?

Hi @stephenb, I managed to get these fields, but the dashboard still cannot display the data.

What I did was remove the client Filebeat and install the 7.1.1 version.


After that, I noticed I am facing some difficulties installing Logstash on the client server; it's CentOS 6.10 with Java 1.8.0. Will there be any problem connecting these two servers?

Hi, is there anyone who can help troubleshoot this issue? Thanks.

@stephenb, may I know what your index name is for this Nginx setup in Elastic?

I came across a forum post suggesting that the dashboards only recognize nginx-* indices, in order to load those templates.

If you use the Filebeat nginx module with all the default settings, the nginx logs will be indexed into indices named with the pattern filebeat-*.
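You can confirm this by listing the indices matching that pattern; the host and port below are assumptions for a default local cluster.

```shell
# List the indices the Filebeat nginx module writes to; with default
# settings they match filebeat-*, not nginx-*.
ES_URL="${ES_URL:-http://localhost:9200}"
curl -s "$ES_URL/_cat/indices/filebeat-*?v" \
  || echo "Elasticsearch not reachable at $ES_URL"
```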

Hi bro, so this is expected? Are you able to help me with this issue? I have no solution for it.

I found this online and wanted to load this template, but it fails.

Can anyone share why? Is it incompatible?

How can I check the Filebeat dashboards' compatibility, and which version I installed?
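One way to check, assuming filebeat is on your PATH: print the installed version. The dashboards that `filebeat setup` loads are the ones shipped with that binary, so its version should match the Kibana/Elasticsearch stack version.

```shell
# Print the locally installed Filebeat version; compare it with the
# version reported by your Kibana and Elasticsearch instances.
if command -v filebeat >/dev/null 2>&1; then
  filebeat version
else
  echo "filebeat not found on PATH"
fi
```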

Dear all, is there anyone who can help with my question? It's been some time.

Almost 99% of my dashboards cannot display the data collected, even though data collected from the other servers is stored and received in our Elasticsearch DB. I wonder why the default dashboards cannot display it?

I have tried various methods to resolve this, but none of them work.

Next, the indices created by ./filebeat have previously been deleted and reindexed, but still no data is reflected on the dashboards.

Hi, is there any command I can use to check what went wrong?
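Two built-in checks are a reasonable starting point when data is not arriving: validate the config file and test the connection to the configured output. The config path below is the RPM/DEB default and an assumption.

```shell
# Sanity checks when data is not arriving: validate the config file
# and test the connection to the configured output (Elasticsearch or
# Logstash).
if command -v filebeat >/dev/null 2>&1; then
  filebeat test config -c /etc/filebeat/filebeat.yml
  filebeat test output -c /etc/filebeat/filebeat.yml
else
  echo "filebeat not found on PATH"
fi
```

The Filebeat log itself (on CentOS 6, typically under /var/log/filebeat/) usually names the failing component when the service keeps restarting.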

Currently, most solutions I found use Logstash to process the data, but I am facing some challenges getting it to work. May I know whether it is possible to use Filebeat to do the data collection instead?

Hi all, how come after I added these lines, I can't get Filebeat to start properly?

  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

Once I add the lines above, the module listing works, but Filebeat has issues running and keeps restarting. What is the correct way to do it?
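For reference, in the stock filebeat.yml those lines live nested under the filebeat.config.modules key; pasting them at the top level or with different indentation is a common reason Filebeat fails to start. A sketch of the expected placement:

```yaml
# filebeat.yml (fragment) - the module settings must sit under
# filebeat.config.modules, indented exactly like this:
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false
```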

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.