Filebeat nginx module writes to an extra default index alongside the custom index

So I have 1 more idea

Clean up the indices...
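For example, something like this (a sketch; the host placeholder matches the one used later in the thread, and DELETE is destructive, so list first and double-check the pattern):

$ curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices/filebeat-*?v"
$ curl -X DELETE "XXX.XXX.XXX.XXX:9200/filebeat-*"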

Install a local tar.gz dist for a test

$ cd /tmp
$ curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.0-linux-x86_64.tar.gz
$ tar -xvf filebeat-7.12.0-linux-x86_64.tar.gz 
$ cd filebeat-7.12.0-linux-x86_64

Copy in my minimal filebeat.yml, then run these commands. Make sure you use the ./ so the local copy is what runs:

$ ./filebeat setup -e 
$ ./filebeat -e

see what happens

Thank you for your reply.

I installed Filebeat using the Linux package (.tar.gz) and tried again.
The result is the same as #37.
In addition to the index with the specified name, a separate index of the form filebeat-%{[agent.version]}-%{+yyyy.MM.dd} is created, which seems to be the default name.

The steps performed are as follows:

  • Delete the currently used Filebeat (installed from rpm)
# rpm -e filebeat
# systemctl daemon-reload
  • Get Filebeat (.tar.gz package for linux)
# curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.12.0-linux-x86_64.tar.gz
# tar -xvf filebeat-7.12.0-linux-x86_64.tar.gz
# cd filebeat-7.12.0-linux-x86_64
  • Filebeat Setup
# vi filebeat.yml
filebeat.modules:
- module: nginx

  access:
    enabled: true
    input.index: "filebeat-%{[agent.version]}-nginx-access-%{+yyyy.MM.dd}"
    #input.index: "else02-nginx-access-%{+yyyy.MM.dd}"
    input.tags: ["customer-a"]
    var.paths: ["/var/log/nginx/access.log"]

setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0

#setup.kibana:

output.elasticsearch:
  hosts: ["XXX.XXX.XXX.XXX:9200"] 
  • Running Filebeat
# ./filebeat setup -e
# ./filebeat -e
  • Checking the indices (before accessing nginx with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana-event-log-7.12.0        VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001 d6WvTHvXR963epPgZJLO6g   1   0          9         1708    257.1kb        257.1kb
green  open   .apm-custom-link                z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration        aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                   EKWvKjbjQqO_z9z5cM3WUg   1   0          8           21     41.4kb         41.4kb
green  open   .kibana_7.12.0_001              6T4UGiR6RBuF_GdTeO4shg   1   0         44            3      2.1mb          2.1mb
  • Checking the indices (after accessing nginx with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   filebeat-7.12.0-nginx-access-2022.08.04 xPkY5wdxQfyvWDfhCykiSw   1   1          1            0     23.6kb         23.6kb
yellow open   .kibana-event-log-7.12.0                VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001         d6WvTHvXR963epPgZJLO6g   1   0          9         1717    223.8kb        223.8kb
green  open   .apm-custom-link                        z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration                aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                           EKWvKjbjQqO_z9z5cM3WUg   1   0          8           21     41.4kb         41.4kb
yellow open   filebeat-7.12.0-2022.08.04-000001       HXAFD2stRgukHnVBezU98g   1   1          1            0     12.2kb         12.2kb
green  open   .kibana_7.12.0_001                      6T4UGiR6RBuF_GdTeO4shg   1   0         44            3      2.1mb          2.1mb

Of the indices created, filebeat-7.12.0-nginx-access-2022.08.04 is the one defined in filebeat.yml and the one intended to be created.
However, filebeat-7.12.0-2022.08.04-000001 is created at the same time.

When I check these in Kibana, they both contain the same access logs.

Interesting, and it looks like the default-named index has the message field and the other does not...

Do the documents look the same, or is one parsed with all the fields while the other is not?

Did both have the customer-a tag?

I need to think about this; it is very odd / strange... it is behaving like there are 2 harvesters / collectors running when we only have 1 defined.
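One quick way to compare them (a sketch, reusing the index names and host placeholder from the output above) is to pull one document from each index and diff the fields:

$ curl -X GET "XXX.XXX.XXX.XXX:9200/filebeat-7.12.0-nginx-access-2022.08.04/_search?size=1&pretty"
$ curl -X GET "XXX.XXX.XXX.XXX:9200/filebeat-7.12.0-2022.08.04-000001/_search?size=1&pretty"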

Can you add two more settings and try again... and clean up everything first?

setup.ilm.enabled: false
setup.template.enabled: false
Take out the other template settings; we will work on them later.

Put them at the root level, right before the output section:

setup.ilm.enabled: false
setup.template.enabled: false
output.elasticsearch:
  hosts: ["XXX.XXX.XXX.XXX:9200"] 

I have a few more ideas but let me know about this first...

Thanks for your comment.

The @timestamp is slightly off, but the ip, request, and user agent fields look the same.

As you can see from the presence and absence of the message field, the way the fields are output is very different.

The filebeat-7.12.0-nginx-access-2022.08.04 index has richer fields, such as source.geo.location and traefik.access.user_agent.os_name (the parsed user agent), indicating that it uses Filebeat's nginx module.

However, the filebeat-7.12.0-2022.08.04-000001 index has simpler fields.
There appears to be no parsed information, only an additional message field.

Thanks for the additional configuration info. I will try it now!


Ok, that is good; it means the module is working... it actually pretty much means everything is working. We just need to find out WHY the extra index is getting created / written to!

Actually, it is not off... the timestamp is the time of the actual event in the logs. That is exactly correct; I can see that from the message...

We are close; we just need to stop that other index... I am really, really puzzled where it is coming from. I cannot reproduce it.

OHHHHH I THINK I may have found it!!!! Give me a moment...

Actually, can you share the startup logs from Filebeat?

First 30 or so lines...

Look for a line that says:

2022-08-03T21:32:13.507-0700 INFO [crawler] beater/crawler.go:71 Loading Inputs: 3

I DID FIND it!!!! It is the ^^^^ above; I reproduced it exactly.

TRY this and tell me if it works, then I will explain why. In short, the other nginx filesets are enabled by default, and the ingress_controller fileset is looking at the same place.

@its-ogawa DARN, apologies, this was my fault... try this, then we can get back to the rest!!!
I introduced this when I simplified.

filebeat.modules:
- module: nginx

  access:
    enabled: true
    input.index: "filebeat-%{[agent.version]}-nginx-access-%{+yyyy.MM.dd}"
    input.tags: ["customer-a"]
    var.paths: ["/Users/sbrown/workspace/sample-data/nginx/nginx-test.log"]

  # Disable other nginx filesets. 
  error:
    enabled: false
  ingress_controller:
    enabled: false

setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]

Excellent!!! It works as expected!

After setting it up as above:

  • Checking the indices (before accessing nginx with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana-event-log-7.12.0        VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001 d6WvTHvXR963epPgZJLO6g   1   0          9         4015    486.9kb        486.9kb
green  open   .apm-custom-link                z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration        aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                   EKWvKjbjQqO_z9z5cM3WUg   1   0          8            0        6kb            6kb
green  open   .kibana_7.12.0_001              6T4UGiR6RBuF_GdTeO4shg   1   0       1962          108      4.7mb          4.7mb
  • Checking the indices (after accessing nginx with a browser)
# curl -X GET "XXX.XXX.XXX.XXX:9200/_cat/indices?v"
health status index                                   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   filebeat-7.12.0-nginx-access-2022.08.04 hjnYOTS-TQqqXpqYeEJPyA   1   1          0            0       208b           208b
yellow open   .kibana-event-log-7.12.0                VBJPx8d9SPuItMyyIFg_cA   1   1          1            0      6.9kb          6.9kb
green  open   .kibana_task_manager_7.12.0_001         d6WvTHvXR963epPgZJLO6g   1   0          9         4018    486.9kb        486.9kb
green  open   .apm-custom-link                        z8w4mV7ySPWEj4ODSirapw   1   0          0            0       208b           208b
green  open   .apm-agent-configuration                aeIElP-4TVqIe5oDKnMA2Q   1   0          0            0       208b           208b
green  open   .async-search                           EKWvKjbjQqO_z9z5cM3WUg   1   0          8            0        6kb            6kb
green  open   .kibana_7.12.0_001                      6T4UGiR6RBuF_GdTeO4shg   1   0       1962          108      4.7mb          4.7mb

  • Filebeat logs at startup

2022-08-04T15:06:35.200+0900    INFO    instance/beat.go:660    Home path: [/root/tmp/filebeat-7.12.0-linux-x86_64] Config path: [/root/tmp/filebeat-7.12.0-linux-x86_64] Data path: [/root/tmp/filebeat-7.12.0-linux-x86_64/data] Logs path: [/root/tmp/filebeat-7.12.0-linux-x86_64/logs]
2022-08-04T15:06:35.200+0900    INFO    instance/beat.go:668    Beat ID: 71baa16f-15bc-4024-8b6d-67ac9501790e
2022-08-04T15:06:35.201+0900    INFO    [seccomp]       seccomp/seccomp.go:124  Syscall filter successfully installed
2022-08-04T15:06:35.201+0900    INFO    [beat]  instance/beat.go:996    Beat info       {"system_info": {"beat": {"path": {"config": "/root/tmp/filebeat-7.12.0-linux-x86_64", "data": "/root/tmp/filebeat-7.12.0-linux-x86_64/data", "home": "/root/tmp/filebeat-7.12.0-linux-x86_64", "logs": "/root/tmp/filebeat-7.12.0-linux-x86_64/logs"}, "type": "filebeat", "uuid": "71baa16f-15bc-4024-8b6d-67ac9501790e"}}}
2022-08-04T15:06:35.201+0900    INFO    [beat]  instance/beat.go:1005   Build info      {"system_info": {"build": {"commit": "08e20483a651ea5ad60115f68ff0e53e6360573a", "libbeat": "7.12.0", "time": "2021-03-18T06:16:51.000Z", "version": "7.12.0"}}}
2022-08-04T15:06:35.201+0900    INFO    [beat]  instance/beat.go:1008   Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":8,"version":"go1.15.8"}}}
2022-08-04T15:06:35.202+0900    INFO    [beat]  instance/beat.go:1012   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2021-02-15T12:23:53+09:00","containerized":false,"name":"ITS-ELSE-02","ip":["127.0.0.1/8","::1/128","202.221.140.169/26","fe80::250:56ff:fe89:f117/64"],"kernel_version":"3.10.0-1160.15.2.el7.x86_64","mac":["00:50:56:89:f1:17"],"os":{"type":"linux","family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":9,"patch":2009,"codename":"Core"},"timezone":"JST","timezone_offset_sec":32400,"id":"f90df61f81c0432e8875d3d5489faf19"}}}
2022-08-04T15:06:35.202+0900    INFO    [beat]  instance/beat.go:1041   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/root/tmp/filebeat-7.12.0-linux-x86_64", "exe": "/root/tmp/filebeat-7.12.0-linux-x86_64/filebeat", "name": "filebeat", "pid": 23503, "ppid": 23572, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2022-08-04T15:06:34.210+0900"}}}
2022-08-04T15:06:35.202+0900    INFO    instance/beat.go:304    Setup Beat: filebeat; Version: 7.12.0
2022-08-04T15:06:35.203+0900    INFO    eslegclient/connection.go:99    elasticsearch url: http://XXX.XXX.XXX.XXX:9200
2022-08-04T15:06:35.203+0900    INFO    [publisher]     pipeline/module.go:113  Beat name: ELSE-02
2022-08-04T15:06:35.204+0900    INFO    beater/filebeat.go:117  Enabled modules/filesets: nginx (access),  ()
2022-08-04T15:06:35.205+0900    INFO    [monitoring]    log/log.go:117  Starting metrics logging every 30s
2022-08-04T15:06:35.205+0900    INFO    instance/beat.go:468    filebeat start running.
2022-08-04T15:06:35.205+0900    INFO    memlog/store.go:119     Loading data file of '/root/tmp/filebeat-7.12.0-linux-x86_64/data/registry/filebeat' succeeded. Active transaction id=0
2022-08-04T15:06:35.210+0900    INFO    memlog/store.go:124     Finished loading transaction log file for '/root/tmp/filebeat-7.12.0-linux-x86_64/data/registry/filebeat'. Active transaction id=250
2022-08-04T15:06:35.210+0900    INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 5
2022-08-04T15:06:35.210+0900    INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 1
2022-08-04T15:06:35.211+0900    INFO    log/input.go:157        Configured paths: [/var/log/nginx/access.log]
2022-08-04T15:06:35.211+0900    INFO    [crawler]       beater/crawler.go:141   Starting input (ID: 68902100857948553)
2022-08-04T15:06:35.211+0900    INFO    [crawler]       beater/crawler.go:108   Loading and starting Inputs completed. Enabled inputs: 1
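The beater/crawler.go:71 line now reads Loading Inputs: 1, rather than the Loading Inputs: 3 flagged above. A quick way to check this on each restart (a sketch; -e sends the logs to stderr, hence the redirect):

$ ./filebeat -e 2>&1 | grep 'Loading Inputs'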



I really like nginx's module functionality because it parses location, user agent, and OS information.

I have a simple question: if I use Filebeat's module functionality, can I register documents with Elasticsearch via Logstash?

When I put both Logstash and Elasticsearch in as destinations, I get the following error:

error unpacking config data: more than one namespace configured accessing 'output' (source:'filebeat.yml')
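Presumably the config that triggers this has two output sections defined at once, roughly like the following (a hedged reconstruction, not the exact file; 5044 is just the conventional Beats port):

output.elasticsearch:
  hosts: ["XXX.XXX.XXX.XXX:9200"]

output.logstash:
  hosts: ["XXX.XXX.XXX.XXX:5044"]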

In this case we used the module functionality to get the nginx logs, but we have multiple other logs on the same server that we want to collect.
It would be nice if there were modules for those as well, but they are not common log formats like nginx's.

So far we have been parsing them with Logstash filters and registering the documents in Elasticsearch.

1st Yay.

2nd: now, if you want, you can go back to a filebeat.yml plus nginx.yml setup, or you can keep the simple version, and set:

setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0

We should really open a new topic on this ...

You cannot have more than one output in Filebeat; that is a limitation of Filebeat right now. Either Elasticsearch or Logstash, not both.

So you have two choices of architecture:

  1. Filebeat -> Elasticsearch
    Probably what I would recommend.
    Use the nginx module for the nginx logs.
    And use a normal file / log input and create an ingest pipeline in Elasticsearch to parse your other logs.
    No Logstash needed.
    If you share a sample log line and the parsing you are doing in Logstash, we could probably create an ingest pipeline (see the sketch after this list).

  2. Filebeat -> Logstash -> Elasticsearch
    More complicated.
    Use the nginx module for the nginx logs and then point them to Logstash, which will work as a pass-through.
    Collect the other logs and parse them in Logstash and send them to Elasticsearch.
    This will require additional logic in the Logstash input and output sections.
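As a rough illustration of option 1, an ingest pipeline for a custom log could look like this (a sketch; the pipeline name my-app-logs, the grok pattern, and the example log line are made-up placeholders, not your actual format):

$ curl -X PUT "XXX.XXX.XXX.XXX:9200/_ingest/pipeline/my-app-logs" -H 'Content-Type: application/json' -d'
{
  "description": "Parse lines like: 2022-08-04 15:06:35 ERROR something happened",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:app.time} %{LOGLEVEL:log.level} %{GREEDYDATA:app.msg}"]
      }
    }
  ]
}'

A plain log input in filebeat.yml can then point at it with pipeline: my-app-logs.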

Decide which one you would like, and then you should open a new thread on that.


Thanks for replying.

I will ask a new question about the REPLICA settings in another topic.

I am sure this is true, but I am very disappointed, because I have been using Logstash.

Thanks for the two suggestions.
I will consider them carefully.

I found the geoip and useragent filters in Logstash.
These may help, but I am unclear about geoip.
I hope to clarify this in another topic as well.
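For what it is worth, a minimal sketch of those two filters (the source field names clientip and agent are assumptions; they depend on what your grok patterns produce):

filter {
  geoip {
    # Look up the client IP in the bundled GeoLite2 database
    source => "clientip"
  }
  useragent {
    # Parse the raw user agent string into browser / OS fields
    source => "agent"
  }
}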

You can put those back in.

I meant we should open another topic on the Logstash / Filebeat stuff.

The Filebeat-to-Logstash setup will be fine, not too hard.

We can show you...
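As a preview, the pass-through side of option 2 usually looks roughly like this (a sketch; host and port are placeholders, and the [@metadata] fields are attached by Beats so Logstash can route module events to their ingest pipelines):

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["XXX.XXX.XXX.XXX:9200"]
    # Name the index after the originating Beat and run the module's ingest pipeline
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    pipeline => "%{[@metadata][pipeline]}"
  }
}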

I will be away for a while.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.