Discovery problem

Hi,

I am trying to implement a minimal example with autodiscover and modules.

I created two Docker containers:

  • nginx
  • filebeat

I want the filebeat container to pick up logs from the nginx container via autodiscover, parse them with the nginx module, and send them to Elasticsearch.

Here is my filebeat.yml, mounted in my container.

filebeat.modules:
  - module: nginx

filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: nginx
          config:
            - module: nginx
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.docker.container.id}"

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]
It doesn't work, and I am wondering why.

I added labels to the nginx container (docker-compose):

labels:
  co.elastic.logs/module: nginx
  co.elastic.logs/fileset.stdout: access
  co.elastic.logs/fileset.stderr: error
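From my reading of the docs, these labels are hints, and hints are only read when the autodiscover provider sets hints.enabled: true; with a templates-based provider like mine, I believe they are ignored. A minimal hints-based sketch (my assumption, untested):

filebeat.autodiscover:
  providers:
    - type: docker
      # hints.enabled makes the provider read the co.elastic.logs/* labels
      hints.enabled: true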

Any ideas?

Regards

Kindly format your configuration with </> for better clarity.

Any luck on this one? I'm having the same issue right now.

As I see it, this is the relevant part for my Nginx container in docker-compose.yml:

nginx:
  container_name: nginx
  build: ./nginx-vts/.
  labels:
    co.elastic.logs/disable: false
    co.elastic.logs/module: nginx
    co.elastic.logs/fileset.stdout: access
    co.elastic.logs/fileset.stderr: error

filebeat:
  container_name: filebeat
  image: docker.elastic.co/beats/filebeat:7.0.0
  user: root
  volumes:
    - ${MY_DOCKER_DATA_DIR}/filebeat/nginx-access-ingest.json:/usr/share/filebeat/module/nginx/access/ingest/default.json
    - ${MY_DOCKER_DATA_DIR}/filebeat/filebeat.autodiscover.docker.yml:/usr/share/filebeat/filebeat.yml:ro
    - filebeat_data:/usr/share/filebeat/data
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/lib/docker/containers/:/var/lib/docker/containers/:ro
    - /var/log/:/var/log/:ro
  environment:
    - ELASTICSEARCH_HOST=elasticsearch:9200
    - KIBANA_HOST=kibana:5601
  command: ["--strict.perms=false"]
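A side note on the labels, which is an assumption on my part: depending on the Compose file version, label values are expected to be strings, and an unquoted false can be rejected as an invalid type. Quoting makes the intent explicit:

labels:
  # quoted so Compose treats the value as a string, not a boolean
  co.elastic.logs/disable: "false"
  co.elastic.logs/module: nginx
  co.elastic.logs/fileset.stdout: access
  co.elastic.logs/fileset.stderr: error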

My filebeat.yml is reported as Config OK by filebeat test config:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true

filebeat.modules:
- module: nginx

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      default.disable: true

output.elasticsearch:
  hosts: 'elasticsearch:9200'

setup.kibana:
  host: "kibana:5601"

I verified that the nginx module is enabled inside my filebeat container. Data is being sent to Elasticsearch, but I get the following error.message when I inspect a log entry in Kibana:

Provided Grok expressions do not match field value: [my.domain.com 192.168.1.1 - my_user [20/Apr/2019:20:21:37 +0000] \"GET /ocs/v2.php/apps/serverinfo/api/v1/info HTTP/2.0\" 200 764 \"-\" \"Go-http-client/2.0\"]

I've used the Kibana Dev Tools to evaluate the Grok pattern I use for Nginx, which is present in the following file:

/usr/share/filebeat/module/nginx/access/ingest/default.json

My altered Nginx pattern works in the debugger. This is the content of the grok "patterns": [...] entry for my nginx access pipeline:

"\"%{IPORHOST:nginx.access.host} %{IPORHOST:nginx.access.remote_ip_list} - %{DATA:user.name} \\[%{HTTPDATE:nginx.access.time}\\] \"%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} \"%{DATA:http.request.referrer}\" \"%{DATA:user_agent.original}\"\""

The pattern I actually input in the debugger, which works:

%{IPORHOST:nginx.access.host} %{IPORHOST:nginx.access.remote_ip_list} - %{DATA:user.name} \[%{HTTPDATE:nginx.access.time}\] \"%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}\" %{NUMBER:http.response.status_code:long} %{NUMBER:http.response.body.bytes:long} \"%{DATA:http.request.referrer}\" \"%{DATA:user_agent.original}\"

So what I'm looking for is how to debug my setup - or tips on what I'm doing wrong so that the fields are properly "nginx-parsed" when they enter Elasticsearch.
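One generic knob that might help with debugging, offered as a sketch (the selector names below are my guess; "publish" is the one I've seen documented, so adjust as needed):

# verbose Filebeat logging in filebeat.yml
logging.level: debug
# selectors limit debug output to specific components
logging.selectors: ["autodiscover", "docker", "publish"]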

Sorry for hijacking the thread - but I hope OP already found the answer.

I finally solved this!

For anyone finding this post, I'll go ahead and say what solved it for me.

Almost all my configs were spot on. What made the difference was a slight adjustment in my filebeat.yml; I added the following:

setup.template.overwrite: true

Why? The first time you set up filebeat, if you follow the setup procedures, you are supposed to do something like this:

# enable module
./filebeat modules enable nginx
# check that your module list is ok
./filebeat modules list
# set up your output (in my case elastic) and kibana
./filebeat setup -e

The thing is, I re-defined both the enabled modules and the Grok pattern for my Nginx module's access ingest pipeline after I first ran the initial setup procedure. So when I ran the setup command again after changing my module list, it didn't change anything in Elasticsearch, because the default behaviour is to leave the existing index template alone. Setting "setup.template.overwrite: true" made the difference! My Grok pattern "took", and now things are working as expected.

I suggest updating the docs to make this a little more obvious for beginners. Although I caught on to it after a few days of reading the docs up and down, I'm still not completely sure whether I have to reload the templates into Elasticsearch every time I redefine my Grok pattern. I suppose it's needed if the template creates field definitions in Elasticsearch based on what the Grok patterns extract.
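If I understand the pieces correctly, the Grok pattern itself lives in the module's ingest pipeline rather than in the index template, and the pipelines have their own overwrite switch. A hedged sketch of both settings together (verify that filebeat.overwrite_pipelines exists in your Filebeat version):

# re-push the index template to Elasticsearch on setup
setup.template.overwrite: true
# re-push the module ingest pipelines (where the Grok patterns live)
filebeat.overwrite_pipelines: true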
