How to get the sum in time of values in Lens

Of course, I included agent.version in nginx.yml here.

# vi /etc/filebeat/modules.d/nginx.yml
# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.11/filebeat-module-nginx.html

- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/access.log"]
    input.index: "filebeat-%{[agent.version]}-else02-httpd-access-%{+yyyy.MM.dd}"

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/error.log"]
    input.index: "filebeat-%{[agent.version]}-else02-httpd-error-%{+yyyy.MM.dd}"

  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

The same problem still occurs even with the minimal configuration you mentioned.

filebeat.config.modules:
  path: /etc/filebeat/modules.d/nginx.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  #index.number_of_replica: 0


setup.kibana:
  #host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]


#logging:
#  level: info
#  to_files: true
#  to_syslog: false

Apologies, I do not know what is not working with your setup.

I just did this ... this is literally all I did.

  1. Completely fresh default install of Elasticsearch / Kibana 7.12.0
  2. Edited and combined filebeat.yml and nginx.yml into a single minimal file, see below.
  3. $ ./filebeat setup -e
  4. $ ./filebeat -e

Result

GET _cat/indices/file*?v

health status index                                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   filebeat-7.12.0-2022.08.02-000001       Lg5TuGtcRwKYc7DN7YeXqQ   1   1          0            0       208b           208b
yellow open   filebeat-7.12.0-nginx-access-2022.08.02 hiiPnYcIRSOe3BoYn1PYMQ   1   1          7            0     36.1kb         36.1kb

This is my entire filebeat.yml (I combined them, which is perfectly valid, to reduce variables).

filebeat.modules:
- module: nginx

  access:
    enabled: true
    input.index: "filebeat-%{[agent.version]}-nginx-access-%{+yyyy.MM.dd}"
    input.tags: ["customer-a"]
    var.paths: ["/Users/sbrown/workspace/sample-data/nginx/nginx-test.log"]

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.elasticsearch:
  hosts: ["localhost:9200"]

You have something going on... another .yml (this has happened to me before), you're not using the .yml you think you are... bad syntax... something... or something is not default with the pipeline, an alias, the cluster, etc.

I can provide my docker compose and test data if you like...

Something is also weird... have you set some odd refresh rate? I see 0 docs even on the index you appear to be writing to... also, one has 1 replica and the other 0... this leads me to believe there is something else going on. Did you create your own templates or something with the same matching patterns? There could be a conflict, or an issue with the order they are applied... something strange is going on.

# curl -X GET "localhost:9200/_cat/indices?v"
health status index                                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   filebeat-7.12.0-else02-httpd-access-2022.08.02 I_rWmCFZTMCr4f8TGcK4zg   1   0          0            0       208b           208b
yellow open   filebeat-7.12.0-2022.08.02-000001              jUGcGI4hTX2pZF643ObQ2Q   1   1          0            0     71.5kb         71.5kb

Technically, looking very closely, there is one issue we would resolve, and that is removing ILM, but that is not the cause of your issue...
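(For reference, turning ILM off is a single root-level setting in filebeat.yml; it comes up again later in this thread.)

setup.ilm.enabled: false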

It looks to me like Filebeat is still writing to the write alias...
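(For reference, the alias info below can be pulled with the _alias API; the index name is taken from the _cat output above.)

GET filebeat-7.12.0-2022.08.02-000001/_alias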

{
  "filebeat-7.12.0-2022.08.02-000001" : {
    "aliases" : {
      "filebeat-7.12.0" : {
        "is_write_index" : true
      }
    },

which means your Filebeat is still writing to the default filebeat-7.12.0 alias; why, I am not sure.

@its-ogawa Think I may have found it!

I don't think this is correct / it is doing nothing.

How did you install?

If you installed via .deb or .rpm, that is not the correct data directory; see here:

data | The location for persistent data files | /var/lib/filebeat

So your rm command is doing nothing, and thus the data is not getting re-loaded.
It should be rm -rf /var/lib/filebeat/*

# rm -rf /var/lib/filebeat/*
# /usr/share/filebeat/bin/filebeat setup -e
# /usr/share/filebeat/bin/filebeat -e

Yes. I did indeed install using the rpm package.

What do you mean by persistent data files?
Is it meta.json?

I have both /var/lib/filebeat/meta.json and /usr/share/filebeat/bin/data/meta.json in my environment.

I have made a step forward.
The single minimal configuration file you created worked.

Current Status. I am able to reproduce the same situation as you.

# curl -X GET "210.148.155.195:9200/_cat/indices/file*?v"
health status index                                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   filebeat-7.12.0                         GHo0AkwMR_metNn97AF00A   1   1          2            0     35.7kb         35.7kb
yellow open   filebeat-7.12.0-nginx-access-2022.08.03 jjrmhSTsQh6RmGiBcUwnSA   1   1          2            0     47.1kb         47.1kb

Here are a few questions.

  • Why is the index filebeat-7.12.0 being created?

    • I am intentionally setting input.index: "filebeat-%{[agent.version]}-nginx-access-%{+yyyy.MM.dd}"!
    • Duplicate indexes are undesirable because they cut in half the number of indexes (number of shards) that can be maintained.
  • Why is the index.number_of_replica setting not enabled?

    • I intentionally choose not to create replicas (index.number_of_replica: 0).
    • This is undesirable because creating a replica halves the number of indexes (number of shards) that can be kept
# curl -X GET "210.148.155.195:9200/_cat/shards/file*?v"
index                                   shard prirep state      docs  store ip              node
filebeat-7.12.0-nginx-access-2022.08.03 0     p      STARTED       2 47.1kb XXX.XXX.XXX.XXX ELSTEST-01
filebeat-7.12.0-nginx-access-2022.08.03 0     r      UNASSIGNED
filebeat-7.12.0                         0     p      STARTED       2 35.7kb XXX.XXX.XXX.XXX ELSTEST-01
filebeat-7.12.0                         0     r      UNASSIGNED

That is fine if you want 0 replicas; the default is 1.
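(And if you want to drop the replica on indices that already exist, rather than only via the template, something like this should work; the index pattern is just an example.)

# curl -X PUT "localhost:9200/filebeat-*/_settings" -H 'Content-Type: application/json' -d '{"index.number_of_replicas": 0}'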

Something is not right with your installation. You shouldn't have both of those. I don't know if it's how you started it one time, etc. I'm not sure if you have more than one installation. I'm not sure if you have one running in the background somewhere. Did you check? I don't know why. The behavior you are seeing is not normal... You're going to have to figure out what's not correct.

If I were you I would try this.

You should clean up all the data files... in both locations, everything in the data directory. All of it.

/var/lib/filebeat/
/usr/share/filebeat/bin/data

Leave only my minimal filebeat.yml...

Make sure all the files in the modules.d directory are disabled.
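(One way to double-check, assuming the rpm layout; the modules CLI ships with Filebeat.)

# ls /etc/filebeat/modules.d/        # files still ending in .yml are active, .disabled are off
# filebeat modules list              # shows Enabled / Disabled sections
# filebeat modules disable nginx     # disable anything that is still enabled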

Then clean up the indices

Then start with systemctl:

systemctl start filebeat

Something is not right with your installation... or the way you're starting it, or something.

I have reinstalled Filebeat.

With the configuration from #34, the problem of not being able to retrieve documents shown in #22 no longer occurs.

However, there are index and replica issues as shown in #37.

What about this?

@its-ogawa Apologies I have no answer.... I can not replicate the issue.
Check your PM.

So I have 1 more idea

Clean up the indices...

Install a local tar.gz dist for a test

$ cd /tmp
$ curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.0-linux-x86_64.tar.gz
$ tar -xvf filebeat-7.12.0-linux-x86_64.tar.gz 
$ cd filebeat-7.12.0-linux-x86_64

cp in my minimal filebeat.yml, then run these commands; make sure you use the ./

$ ./filebeat setup -e 
$ ./filebeat -e

see what happens

Thank you for your reply.

I installed Filebeat using the package for linux (.tar.gz) and tried again.
The result is the same as #37.
In addition to the index with the specified name, a separate index of the form filebeat-%{[agent.version]}-%{+yyyy.MM.dd} is created, which seems to be the default name.

The steps performed are as follows

  • Delete the currently used Filebeat (obtained from rpm)
# rpm -e filebeat
# systemctl daemon-reload
  • Get Filebeat (.tar.gz package for linux)
# curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.0-linux-x86_64.tar.gz
# tar -xvf filebeat-7.12.0-linux-x86_64.tar.gz
# cd filebeat-7.12.0-linux-x86_64
  • Filebeat Setup
# vi filebeat.yml
filebeat.modules:
- module: nginx

  access:
    enabled: true
    input.index: "filebeat-%{[agent.version]}-nginx-access-%{+yyyy.MM.dd}"
    #input.index: "else02-nginx-access-%{+yyyy.MM.dd}"
    input.tags: ["customer-a"]
    var.paths: ["/var/log/nginx/access.log"]

setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replica: 0

#setup.kibana:

output.elasticsearch:
  hosts: ["XXX.XXX.XXX.XXX:9200"] 
  • Running Filebeat
# ./filebeat setup -e
# ./filebeat -e
  • Confirmation of index (before accessing with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana-event-log-7.12.0        VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001 d6WvTHvXR963epPgZJLO6g   1   0          9         1708    257.1kb        257.1kb
green  open   .apm-custom-link                z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration        aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                   EKWvKjbjQqO_z9z5cM3WUg   1   0          8           21     41.4kb         41.4kb
green  open   .kibana_7.12.0_001              6T4UGiR6RBuF_GdTeO4shg   1   0         44            3      2.1mb          2.1mb
  • Confirmation of index (after accessing with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   filebeat-7.12.0-nginx-access-2022.08.04 xPkY5wdxQfyvWDfhCykiSw   1   1          1            0     23.6kb         23.6kb
yellow open   .kibana-event-log-7.12.0                VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001         d6WvTHvXR963epPgZJLO6g   1   0          9         1717    223.8kb        223.8kb
green  open   .apm-custom-link                        z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration                aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                           EKWvKjbjQqO_z9z5cM3WUg   1   0          8           21     41.4kb         41.4kb
yellow open   filebeat-7.12.0-2022.08.04-000001       HXAFD2stRgukHnVBezU98g   1   1          1            0     12.2kb         12.2kb
green  open   .kibana_7.12.0_001                      6T4UGiR6RBuF_GdTeO4shg   1   0         44            3      2.1mb          2.1mb

Of the indexes created, filebeat-7.12.0-nginx-access-2022.08.04 is the one described in filebeat.yml and is the one intended to be created.
However, filebeat-7.12.0-2022.08.04-000001 is created at the same time.

When I check these on Kibana, they both collect the same access logs.

Interesting... and it looks like the default-named index has the message field and the other does not...

Do the documents look the same, or is one parsed with all the fields and the other not?

Did both have the tag with customer-a?
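(One quick way to compare, e.g. in Kibana Dev Tools; the index names are from the _cat output above, and you can check whether tags contains customer-a in each _source.)

GET filebeat-7.12.0-nginx-access-2022.08.04/_search?size=1
GET filebeat-7.12.0-2022.08.04-000001/_search?size=1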

I need to think about this, it is very odd / strange... it is behaving like there are 2 harvesters / collectors running when we only have 1 defined.

Can you add two more settings and try again... clean up everything first?

setup.ilm.enabled: false
setup.template.enabled: false

Take out the other template settings; we will work on them later.

Put them at the root level, right before the output:

setup.ilm.enabled: false
setup.template.enabled: false
output.elasticsearch:
  hosts: ["XXX.XXX.XXX.XXX:9200"] 

I have a few more ideas but let me know about this first...

Thanks for your comment.

The @timestamp is slightly off, but the ip, request, and user agent fields look the same.

As you can see from the presence or absence of the message field, the way the fields are output is very different.

The filebeat-7.12.0-nginx-access-2022.08.04 index has richer field entries, such as source.geo.location and traefik.access.user_agent.os_name (the parsed user agent), indicating that it uses Filebeat's nginx module.

However, the filebeat-7.12.0-2022.08.04 has simpler field entries.
There appears to be no parsed information, only an additional message field.

Thanks for the additional configuration info. I will try it now!


OK, that is good, that means the module is working... it actually pretty much means everything is working; we just need to find out WHY the extra index is getting created / written to!

Actually it is not off... the timestamp is the time of the actual event in the logs, which is exactly correct; I can see that from the message...

We are close, we just need to stop that other index... I am really puzzled where it is coming from... I cannot reproduce it.

OHHHHH I THINK I may have found it!!!! Give me a moment...

Actually, can you share the startup logs from Filebeat?

First 30 or so lines...

Look for a line that says:

2022-08-03T21:32:13.507-0700 INFO [crawler] beater/crawler.go:71 Loading Inputs: 3

I DID FIND it!!!! It is the ^^^^ ... I exactly reproduced it.

TRY this and tell me if it works, then I will explain why... in short, the other filesets are enabled by default and the ingress controller fileset is looking at the same place.

@its-ogawa DARN, apologies, this was my fault... try this, then we can go back to the rest!!!
I introduced this when I simplified.

filebeat.modules:
- module: nginx

  access:
    enabled: true
    input.index: "filebeat-%{[agent.version]}-nginx-access-%{+yyyy.MM.dd}"
    input.tags: ["customer-a"]
    var.paths: ["/Users/sbrown/workspace/sample-data/nginx/nginx-test.log"]

  # Disable other nginx filesets. 
  error:
    enabled: false
  ingress_controller:
    enabled: false

setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]

Excellent!!! It works as expected!

After setting up the above.

  • Confirmation of index (before accessing with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana-event-log-7.12.0        VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001 d6WvTHvXR963epPgZJLO6g   1   0          9         4015    486.9kb        486.9kb
green  open   .apm-custom-link                z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration        aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                   EKWvKjbjQqO_z9z5cM3WUg   1   0          8            0        6kb            6kb
green  open   .kibana_7.12.0_001              6T4UGiR6RBuF_GdTeO4shg   1   0       1962          108      4.7mb          4.7mb
  • Confirmation of index (after accessing with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   filebeat-7.12.0-nginx-access-2022.08.04 hjnYOTS-TQqqXpqYeEJPyA   1   1          0            0       208b           208b
yellow open   .kibana-event-log-7.12.0                VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001         d6WvTHvXR963epPgZJLO6g   1   0          9         4018    486.9kb        486.9kb
green  open   .apm-custom-link                        z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration                aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                           EKWvKjbjQqO_z9z5cM3WUg   1   0          8            0        6kb            6kb
green  open   .kibana_7.12.0_001                      6T4UGiR6RBuF_GdTeO4shg   1   0       1962          108      4.7mb          4.7mb

  • Filebeat logs at startup

2022-08-04T15:06:35.200+0900    INFO    instance/beat.go:660    Home path: [/root/tmp/filebeat-7.12.0-linux-x86_64] Config path: [/root/tmp/filebeat-7.12.0-linux-x86_64] Data path: [/root/tmp/filebeat-7.12.0-linux-x86_64/data] Logs path: [/root/tmp/filebeat-7.12.0-linux-x86_64/logs]
2022-08-04T15:06:35.200+0900    INFO    instance/beat.go:668    Beat ID: 71baa16f-15bc-4024-8b6d-67ac9501790e
2022-08-04T15:06:35.201+0900    INFO    [seccomp]       seccomp/seccomp.go:124  Syscall filter successfully installed
2022-08-04T15:06:35.201+0900    INFO    [beat]  instance/beat.go:996    Beat info       {"system_info": {"beat": {"path": {"config": "/root/tmp/filebeat-7.12.0-linux-x86_64", "data": "/root/tmp/filebeat-7.12.0-linux-x86_64/data", "home": "/root/tmp/filebeat-7.12.0-linux-x86_64", "logs": "/root/tmp/filebeat-7.12.0-linux-x86_64/logs"}, "type": "filebeat", "uuid": "71baa16f-15bc-4024-8b6d-67ac9501790e"}}}
2022-08-04T15:06:35.201+0900    INFO    [beat]  instance/beat.go:1005   Build info      {"system_info": {"build": {"commit": "08e20483a651ea5ad60115f68ff0e53e6360573a", "libbeat": "7.12.0", "time": "2021-03-18T06:16:51.000Z", "version": "7.12.0"}}}
2022-08-04T15:06:35.201+0900    INFO    [beat]  instance/beat.go:1008   Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.15.8"}}}
2022-08-04T15:06:35.202+0900    INFO    [beat]  instance/beat.go:1012   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-02-15T12:23:53+09:00","containerized":false,"name":"ITS-ELSE-02","ip":["127.0.0.1/8","::1/128","202.221.140.169/26","fe80::250:56ff:fe89:f117/64"],"kernel_version":"3.10.0-1160.15.2.el7.x86_64","mac":["00:50:56:89:f1:17"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":9,"patch":2009,"codename":"Core"},"timezone":"JST","timezone_offset_sec":32400,"id":"f90df61f81c0432e8875d3d5489faf19"}}}
2022-08-04T15:06:35.202+0900    INFO    [beat]  instance/beat.go:1041   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/root/tmp/filebeat-7.12.0-linux-x86_64", "exe": "/root/tmp/filebeat-7.12.0-linux-x86_64/filebeat", "name": "filebeat", "pid": 23503, "ppid": 23572, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2022-08-04T15:06:34.210+0900"}}}
2022-08-04T15:06:35.202+0900    INFO    instance/beat.go:304    Setup Beat: filebeat; Version: 7.12.0
2022-08-04T15:06:35.203+0900    INFO    eslegclient/connection.go:99    elasticsearch url: http://XXX.XXX.XXX.XXX:9200
2022-08-04T15:06:35.203+0900    INFO    [publisher]     pipeline/module.go:113  Beat name: ELSE-02
2022-08-04T15:06:35.204+0900    INFO    beater/filebeat.go:117  Enabled modules/filesets: nginx (access),  ()
2022-08-04T15:06:35.205+0900    INFO    [monitoring]    log/log.go:117  Starting metrics logging every 30s
2022-08-04T15:06:35.205+0900    INFO    instance/beat.go:468    filebeat start running.
2022-08-04T15:06:35.205+0900    INFO    memlog/store.go:119     Loading data file of '/root/tmp/filebeat-7.12.0-linux-x86_64/data/registry/filebeat' succeeded. Active transaction id=0
2022-08-04T15:06:35.210+0900    INFO    memlog/store.go:124     Finished loading transaction log file for '/root/tmp/filebeat-7.12.0-linux-x86_64/data/registry/filebeat'. Active transaction id=250
2022-08-04T15:06:35.210+0900    INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 5
2022-08-04T15:06:35.210+0900    INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 1
2022-08-04T15:06:35.211+0900    INFO    log/input.go:157        Configured paths: [/var/log/nginx/access.log]
2022-08-04T15:06:35.211+0900    INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 68902100857948553)
2022-08-04T15:06:35.211+0900    INFO    [crawler]       beater/crawler.go:108   Loading and starting Inputs completed. Enabled inputs: 1



I really like nginx's module functionality because it parses location, user agent, and OS information.

I have a simple question: if I use Filebeat's module functionality, can I register documents with Elasticsearch via Logstash?

When I put both Logstash and Elasticsearch in the output, I get the following error:

error unpacking config data: more than one namespace configured accessing 'output' (source:'filebeat.yml')
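(This was with both outputs defined at the same time in filebeat.yml, roughly like this; the hosts are placeholders. Filebeat only allows one output to be enabled.)

output.elasticsearch:
  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["localhost:5044"]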

In this case we used the module functionality to get the nginx logs, but we have multiple other logs on the same server that we want to collect.
It would be nice if there were modules for those as well, but they are not common log formats like nginx.

So far we have been parsing them with Logstash filters and registering the documents in Elasticsearch.

1st Yay.

2nd, now if you want you can go back to a separate filebeat.yml and nginx.yml and set these there, or you can keep the simple single-file version...

setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0

We should really open a new topic on this ...

You cannot have more than 1 output in Filebeat; that is a limitation of Filebeat right now... either Elasticsearch or Logstash, not both.

So you have 2 choices of architecture

  1. Filebeat -> Elasticsearch
    Probably what I would recommend.
    Use the nginx module for the nginx logs.
    And use a normal file / log input and create an ingest pipeline in Elasticsearch to parse your other logs.
    No Logstash needed.
    If you shared a sample log line and the parsing you are doing in Logstash, we could probably create an ingest pipeline.

  2. Filebeat -> Logstash -> Elasticsearch
    More complicated.
    Use the nginx module for the nginx logs and then point them to Logstash, which will work as a pass-through (a rough sketch follows below).
    Collect the logs and parse them in Logstash and send them to Elasticsearch.
    This will require additional logic in Logstash in the input and output sections.
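(A rough pass-through sketch for option 2, based on the standard Beats-to-Logstash pattern; the port, hosts, and index pattern are placeholders, and the pipeline option is what hands module events to their ingest pipeline in Elasticsearch.)

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # keep a Filebeat-style index name (placeholder pattern)
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    # route module events (e.g. nginx access) to their ingest pipeline
    pipeline => "%{[@metadata][pipeline]}"
  }
}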

Whichever one you would like, you should open a new thread on that.


Thanks for replying.

I will ask a new question about the REPLICA settings in another topic.

I am sure this is true, but I am very disappointed, because I have been using Logstash.

Thanks for the two suggestions.
I will consider them carefully.

I found the geoip and useragent filters in Logstash.
These may help, but I am unclear about geoip.
I hope to clarify this in another topic as well.
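(For reference, a minimal sketch of what those two filters look like in a Logstash pipeline, with made-up source field names client_ip and user_agent.)

filter {
  geoip {
    # adds geo fields (location, country, etc.) based on the IP in client_ip
    source => "client_ip"
  }
  useragent {
    # parses the raw user agent string into browser / OS fields under "ua"
    source => "user_agent"
    target => "ua"
  }
}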