Filebeat writing to its own index

Apologies, our system only went live a few months ago, so I just assumed we were on a sensibly recent version!

I have reset the config back to a version that previously worked, but now that's not doing anything either. What I cannot work out is what filebeat is trying to do. Is it no longer reading the logs, or is filebeat not sending the data to Elasticsearch? Is anyone able to provide any advice on how to investigate?
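(A few starting points for this kind of investigation, assuming shell access to the host running filebeat and a package install with the config at `/etc/filebeat/filebeat.yml` — adjust paths to your install:)

```
# Validate that filebeat.yml parses and the settings are well formed
filebeat test config -c /etc/filebeat/filebeat.yml

# Check that filebeat can reach and authenticate against the configured output
filebeat test output -c /etc/filebeat/filebeat.yml

# Watch the filebeat logs for harvester and publish activity (systemd installs)
journalctl -u filebeat -f
```

If both tests pass, the next thing to look for in the logs is whether harvesters are actually starting for the configured paths.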

    ###################### Filebeat Configuration #########################

    # You can find the full configuration reference here:
    # https://www.elastic.co/guide/en/beats/filebeat/index.html

    #=========================== Filebeat inputs =============================

    filebeat.inputs:
    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.

    - type: log

      # Change to true to enable this input configuration.
      enabled: true

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
          - /var/log/myindex-app/*.log
   
    # matching on this type 2022-07-20 10:56:29,393
      multiline:
        pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}'
        negate: true
        match: after

    #============================= Filebeat modules ===============================

    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml

      # Set to true to enable config reloading
      reload.enabled: false

      # Period on which files under path should be checked for changes
      #reload.period: 10s

    #==================== Elasticsearch template setting ==========================
    setup.template:
      #name: "myindex-%{[agent.version]}"
      name: "myindex"
      pattern: "myindex-*"
      #pattern: "myindex-%{[agent.version]}-*"
      alias: "myindex"
      overwrite: true
      settings:
      index.number_of_shards: 1
      #index.codec: best_compression
      #_source.enabled: false
    
    

    #==========================  Modules configuration =============================
    filebeat.modules:
    #-------------------------------- Nginx Module --------------------------------
    - module: nginx
      # Access logs
      access:
        enabled: true

        # Set custom paths for the log files. If left empty,
        # Filebeat will choose the paths depending on your OS.
        var.paths: ["/var/log/nginx/access.log"]

        # Input configuration (advanced). Any input configuration option
        # can be added under this section.
        #input:

      # Error logs
      error:
        enabled: true

        # Set custom paths for the log files. If left empty,
        # Filebeat will choose the paths depending on your OS.
        var.paths: ["/var/log/nginx/error.log"]

        # Input configuration (advanced). Any input configuration option
        # can be added under this section.
        #input:

      # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
      #ingress_controller:
      #  enabled: false
      #
      #  # Set custom paths for the log files. If left empty,
      #  # Filebeat will choose the paths depending on your OS.
      #  #var.paths:

    #================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.  
    # ---------------------------- Elasticsearch Output ----------------------------
    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["${ELASTIC_URL}"]

      # Protocol - either `http` (default) or `https`.
      protocol: "https"

      # Certificate for SSL client authentication

      # Client Certificate Key
      
      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      username: ${ELASTIC_USERNAME}
      password: ${ELASTIC_PASSWORD}
      
      # %{[fileset.module]}-%{[fileset.name]} to be added as an option - TBC
      #indices:
      #index: "myindex-%{[agent.version]}-%{+yyyy.MM.dd}"
      index: "myindex"
      #index: "myindex-%{[agent.version]}"
      : {
      "is_write_index": true
                   }
    

    #setup.ilm:
    #  enabled: true
    #  policy_name: "myindex"
    #  overwrite: true
    #  rollover_alias: "myindex-%{[agent.version]}"
    #  pattern: "{now/d}-0000001"
    #  policy_file: "/usr/share/filebeat/config/myindex.policy.json"
      

    #================================ Processors =====================================

    # Configure processors to enhance or manipulate events generated by the beat.

    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~

So this config allows filebeat to write to elastic, but it doesn't write to the index that I have named, just to the default. Is anyone able to tell me what I am doing wrong?

When you use modules like nginx, the index name is set in the module and overwrites the output...

See a sample here / read this; pay special attention to the nginx input settings and explanation.

What's the relevance of nginx here? I was just trying to get the application logs to write to named indexes. nginx logs were something to look at later.

Adding the values in the link makes no difference, although I have not seen any nginx logging in my app so far, so that wasn't unexpected.

I still need to get my application to log into elastic with a named index. I was expecting to spend a couple of hours tweaking config. Currently nearly 3 weeks of trying and counting!

I have now manually created the index, and filebeat is still not writing to it.

So I think

  1. the filebeat config is not writing to the myindex index
  2. the index section of config is not creating the index myindex

As explained in the post I linked, you have a module enabled in your configuration, whether it is important now or not... It overrides the output settings in some cases, but let's put that aside; there are other issues...

Also from the docs here... which is your key issue...

When index lifecycle management (ILM) is enabled, the default index is "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}-%{index_num}" , for example, "filebeat-7.12.1-2022-07-28-000001" . Custom index settings are ignored when ILM is enabled. If you’re sending events to a cluster that supports index lifecycle management, see Index lifecycle management (ILM) to learn how to change the index name.

So without this setting... you will never change the output index.
setup.ilm.enabled: false

OR you have to set ILM all up...

In the above combinations I don't see a valid one.

So it seems like you are struggling a bit. I have a couple of suggestions if you are open to them... it is pretty much back to basics...

So, two approaches...
1. Set it all up in filebeat (when you do this, filebeat will create a default ILM policy for you, which you can later edit)
2. Set up your template, policy, rollover alias etc. in elasticsearch, then use a minimal filebeat config

To be clear, these are NOT snippets of filebeat.yml; they are fully functional without all the extra stuff.

Method 1

This works; this is the minimal filebeat.yml below. Run it with:
filebeat setup -e
filebeat -e

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/myindex-app/*.log

# matching on this type 2022-07-20 10:56:29,393
  multiline:
    pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}'
    negate: true
    match: after

setup.template.enabled: false

setup.ilm:
  enabled: true
  policy_name: "myindex"
  overwrite: true
  rollover_alias: "myindex-%{[agent.version]}"
  pattern: "{now/d}-0000001"

output.elasticsearch:
  hosts: ["localhost:9200"]

results

health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   myindex-7.12.0-2022.08.03-0000001 8rjPeIKlRuCD-BWMHRMQcw   1   1          7            0     14.3kb         14.3kb
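(To confirm setup created the write alias as well as the bootstrap index — the names here assume agent version 7.12.0 — a quick check in Dev Tools:)

```
GET _cat/aliases/myindex-*?v
```

The rollover alias (e.g. myindex-7.12.0) should appear pointing at the -0000001 index with is_write_index true.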

Method 2

Set up your policy and template directly in elasticsearch.
Then, importantly, you have to create a bootstrap index; this creates the alias from the write alias to the concrete index...

PUT myindex-7.12.0-2022.08.03-0000001?pretty
{
  "aliases": {
    "myindex-7.12.0": {
      "is_write_index": true
    }
  }
}
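(You can verify the bootstrap worked before starting filebeat — the alias name here assumes agent version 7.12.0:)

```
GET _alias/myindex-7.12.0
```

This should return the concrete -0000001 index with "is_write_index": true. If it comes back as not found, filebeat will create a plain index with the alias name instead of writing through the alias.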

then this config will work

filebeat setup -e
filebeat -e

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/myindex-app/*.log

# matching on this type 2022-07-20 10:56:29,393
  multiline:
    pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}'
    negate: true
    match: after

setup.ilm.enabled: false
setup.template.enabled: false

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "myindex-%{[agent.version]}"

results

health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   myindex-7.12.0-2022.08.03-0000001 TsXkf6AkQGeVLs25Ico7iw   1   1          7            0     14.1kb         14.1kb

Now you have 2 working options....

Thank you.

I have implemented the config as above. No errors that I can find, but the index is not created and filebeat has stopped sending logs (or at least kibana is unable to see anything).

I guess, based on the above, that the index should be created as part of the deploy; therefore that must be failing, but I cannot find any errors, hence it's only a guess.

The requirement that I have is to get ILM working, so your second method isn't suitable.

I have tried to link the default filebeat index to the default filebeat ILM configuration through the UI, which seemed to work (after I added an alias), but then I got an error overnight. I am going to raise a different question on that so as not to mix different issues.

I have manually created the index based on your second method, but that is erroring with an error message of

illegal_argument_exception: index.lifecycle.rollover_alias [myindex-%{[agent.version]}] does not point to index [myindex-7.0.1-2022.08.03-0000001]

which I cannot argue with!
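(That error usually means the literal placeholder ended up in the index settings: `%{[agent.version]}` is only expanded by filebeat itself, so if the template or policy was applied with the raw string, elasticsearch stores it verbatim. One way to check — the index name here is just the one from the error message:)

```
GET myindex-7.0.1-2022.08.03-0000001/_settings/index.lifecycle.rollover_alias
```

If the returned value is the literal `myindex-%{[agent.version]}` rather than `myindex-7.0.1`, ILM can never match the alias to the index, which would produce exactly this error.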

I guess something else is wrong?

Oddly the error comes and goes, so I am even less sure about what is actually going on. All I know is that documents are not going into the index, although I still have not tracked down any error messages that assist in identifying the problem.

Apologies... I am not following... can you please help me help you by being a little more precise / detailed...

Which method are you referring to?

Method 1 or Method 2?

And did you clean up the filebeat data registry... are you trying to load the same file?

And let's stick to one or the other...

Exactly where does the error show up? In the filebeat logs... when you create the manual alias in Dev Tools?

Is that in the filebeat logs?

If you tried Method 2 and are now trying Method 1, did you clean up the manual index and alias before you tried Method 1 again?

(BTW Method 2 does allow full ILM, in fact more control, although set up manually. I can show you later, but let's stick with Method 1 for now since I think that is what you are using.)

Can you share the method and exact filebeat.yml at this point, and the exact error and where it is occurring...

Let's pick a single method and proceed on one... and please try to be a bit more precise / detailed with your response; it will help me help you.

Sure. I tried Method 1, but the index did not create, so I tried to create the index using Method 2 whilst retaining the Method 1 filebeat.yml file.

The error is in the kibana UI, although it comes and goes. But it is illegal_argument_exception: index.lifecycle.rollover_alias [myindex-%{[agent.version]}] does not point to index [myindex-7.0.1-2022.08.04-0000001]

My current yml file is

###################### Filebeat Configuration #########################

    # You can find the full configuration reference here:
    # https://www.elastic.co/guide/en/beats/filebeat/index.html

    #=========================== Filebeat inputs =============================

    filebeat.inputs:
    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.

    - type: log

      # Change to true to enable this input configuration.
      enabled: true

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
          - /var/log/myindex-app/*.log
   
    # matching on this type 2022-07-20 10:56:29,393
      multiline:
        pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}'
        negate: true
        match: after

    #============================= Filebeat modules ===============================

    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml

      # Set to true to enable config reloading
      reload.enabled: false

      # Period on which files under path should be checked for changes
      #reload.period: 10s

    #==================== Elasticsearch template setting ==========================
    setup.template:
      #name: "myindex-%{[agent.version]}"
      #name: "myindex"
      #pattern: "myindex-*"
      #pattern: "myindex-%{[agent.version]}-*"
      #alias: "myindex"
      #overwrite: true
      settings:
      index.number_of_shards: 1
      #index.codec: best_compression
      #_source.enabled: false
    
    

    #==========================  Modules configuration =============================
    filebeat.modules:
    #-------------------------------- Nginx Module --------------------------------
    - module: nginx
      # Access logs
      access:
        enabled: true

        # Set custom paths for the log files. If left empty,
        # Filebeat will choose the paths depending on your OS.
        var.paths: ["/var/log/nginx/access.log"]

        # Input configuration (advanced). Any input configuration option
        # can be added under this section.
        #input:
        #index: "myindex"
          
      # Error logs
      error:
        enabled: true

        # Set custom paths for the log files. If left empty,
        # Filebeat will choose the paths depending on your OS.
        var.paths: ["/var/log/nginx/error.log"]

        # Input configuration (advanced). Any input configuration option
        # can be added under this section.
        #input:
        #index: "myindex"
      # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
      #ingress_controller:
      #  enabled: false
      #
      #  # Set custom paths for the log files. If left empty,
      #  # Filebeat will choose the paths depending on your OS.
      #  #var.paths:

    #================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.  
    # ---------------------------- Elasticsearch Output ----------------------------
    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["${ELASTIC_URL}"]

      # Protocol - either `http` (default) or `https`.
      protocol: "https"

      # Certificate for SSL client authentication

      # Client Certificate Key
      
      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      username: ${ELASTIC_USERNAME}
      password: ${ELASTIC_PASSWORD}
      
      # %{[fileset.module]}-%{[fileset.name]} to be added as an option - TBC
      #indices:
      #index: "myindex-%{[agent.version]}-%{+yyyy.MM.dd}"
      #index: "myindex"
      #index: "myindex-%{[agent.version]}"
      #: {
      #"is_write_index": true
      #             }
    

    setup.ilm:
      enabled: true
      policy_name: "myindex"
      overwrite: true
      rollover_alias: "myindex-alias-%{[agent.version]}"
      pattern: "{now/d}-0000001"
      policy_file: "/usr/share/filebeat/config/myindex.policy.json"
      

    #================================ Processors =====================================

    # Configure processors to enhance or manipulate events generated by the beat.

    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~

I do typically delete the ILM policy and index.

OK, let's stick to Method 1.

Your filebeat.yml still looks odd to me... not properly indented etc., so I am not sure what settings are taking effect and what are not... you still have the modules enabled etc., which could cause issues with a malformed yaml... so it is really hard for me to debug...

Could you try my simple exact filebeat.yml and let me know the results..

There is a reason I am asking you to do this: to eliminate variables etc... this is how I have helped many people. If you cannot, I understand, but then I don't think I can help more; too many variables / unknowns etc...

Clean up and try this with minimal changes.

Exactly this, with just your credentials and URL added... leave out your policy file etc.

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /var/log/myindex-app/*.log

# matching on this type 2022-07-20 10:56:29,393
  multiline:
    pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}'
    negate: true
    match: after

setup.ilm:
  enabled: true
  policy_name: "myindex"
  overwrite: true
  rollover_alias: "myindex-%{[agent.version]}"
  pattern: "{now/d}-0000001"

output.elasticsearch:
  hosts: ["https://localhost:9200"]
  username: username
  password: password
 ###################### Filebeat Configuration #########################

    # You can find the full configuration reference here:
    # https://www.elastic.co/guide/en/beats/filebeat/index.html

    #=========================== Filebeat inputs =============================

    filebeat.inputs:
    # Each - is an input. Most options can be set at the input level, so
    # you can use different inputs for various configurations.
    # Below are the input specific configurations.

    - type: log

      # Change to true to enable this input configuration.
      enabled: true

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
          - /var/log/myindex-app/*.log
   
    # matching on this type 2022-07-20 10:56:29,393
      multiline:
        pattern: '^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}'
        negate: true
        match: after

    #================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.  
    # ---------------------------- Elasticsearch Output ----------------------------
    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["${ELASTIC_URL}"]

      # Protocol - either `http` (default) or `https`.
      protocol: "https"

      # Certificate for SSL client authentication

      # Client Certificate Key
      
      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      username: ${ELASTIC_USERNAME}
      password: ${ELASTIC_PASSWORD}
      

    setup.ilm:
      enabled: true
      policy_name: "myindex"
      overwrite: true
      rollover_alias: "myindex-alias-%{[agent.version]}"
      pattern: "{now/d}-0000001"

The index is not creating. The ILM policy, however, is.

First... I am not sure if your filebeat.yml is actually malformed or just pasted wrong... all the settings at the root, like output.elasticsearch:, should be all the way to the left:

     output.elasticsearch: <- not like this
output.elasticsearch: <- like this

Not sure what you mean by

Please share

GET _cat/indices/myindex-*?v

Also please share the filebeat startup logs

Did you clean up the filebeat data registry... do you know what that is? Otherwise filebeat will not re-read files it has already processed.
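(For context, and as an assumption since the install method isn't known yet: on deb/rpm installs the registry lives under /var/lib/filebeat/registry, and clearing it forces filebeat to re-read files from the beginning:)

```
sudo systemctl stop filebeat
sudo rm -rf /var/lib/filebeat/registry   # default data path for deb/rpm installs
sudo systemctl start filebeat
```

In other deployments the registry is under whatever path.data is configured to.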

How did you install filebeat?

How are you running filebeat (command line, systemctl)? And are you running setup?

The more information you provide the more I can help....

Also, I am asking all these questions for a reason... if you want my help, please answer them...

The file is correct; it's just my pasting. The fact the ILM policy gets created suggests the file is valid. Also, the file works until I try to create a named index.

GET _cat/indices/myindex-*?v

returns

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

Did you clean up the filebeat data registry... do you know what that is? — No idea what it is; I just delete the index and ILM policy.

How did you install filebeat? — It's centrally managed.

How are you running filebeat (command line, systemctl)? — It's run centrally, so not something I know.

And are you running setup? — It's managed centrally.

I appreciate your help so answering as much as I can.

Sorry no idea where these are held.

What does "centrally managed" mean?
I am confused; how do you make any changes?
Are you logging into the server where filebeat runs or not?