Logstash not indexing data to the Filebeat write index for data stream

Hi everybody.
I've got a three-node ES cluster, with Kibana, one Logstash instance and one Filebeat instance running on one of those three ES nodes, 'ubuntuelk'. Everything is on version 8.2.1.
My goal is to have Filebeat, using the system module, send data to Logstash, and have Logstash send that data on to my ES cluster.

The first thing I did was add these two lines to my filebeat.yml:
setup.ilm.overwrite: true
setup.template.enabled: false
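
For context, the output section of my filebeat.yml points at Logstash rather than Elasticsearch, roughly like this (the host is a placeholder for my setup; the port matches the Beats input below):

output.logstash:
  hosts: ["localhost:5044"]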

As my filebeat.yml is set to point to Logstash instead of ES, according to the documentation I need to follow these three steps:

1st
Load the index template manually

I believe I succeeded in this part, as a data stream got created and the command returned 'Index setup finished' as output:
[screenshot: the data stream in Kibana]
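
The command I ran was the one from the docs for loading the index template manually when the output is not Elasticsearch, along these lines (the hosts value is a placeholder for my cluster):

filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'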

2nd
Load Kibana Dashboards
I also guess this worked well, as it returned 'Loaded dashboards' as output:
[screenshot: Loaded dashboards output]

Besides, from Kibana -> Dashboard I've got four pages with different dashboards, and four of them are tagged as [Filebeat System] ECS.
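
If it helps anyone, the setup command for the dashboards when the output is Logstash looks roughly like this (hosts and Kibana address are placeholders for my setup):

filebeat setup --dashboards -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]' -E setup.kibana.host=localhost:5601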

And 3rd
Load ingest pipelines.

Here I ran the filebeat setup --pipelines --modules system command and it returned:
exiting: module system is configured but has no enabled filesets
Before reaching this point I had already run the filebeat modules enable system command and changed the values in /etc/filebeat/modules/system from false to true (see the snippet below).
I also read that this error is the expected behavior from version 8.x onwards, and I'm using 8.2.1.
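
For reference, the enabled filesets in the module config look something like this (in a default package install the file usually lives under modules.d, e.g. /etc/filebeat/modules.d/system.yml; adjust to your install):

- module: system
  # System syslog messages
  syslog:
    enabled: true
  # Authentication/authorization logs
  auth:
    enabled: true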

Also, following the instructions of the third step/link, I ended up at this last link,
Use ingest pipelines for parsing, where I found a configuration example that reads data from the Beats input and uses Filebeat ingest pipelines to parse data collected by modules.


Compared with this config, I only changed the user/password and added the certificate config, roughly as shown below.
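
The additions were along these lines in the elasticsearch output (credentials and certificate path are placeholders for my setup):

user => "elastic"
password => "changeme"
ssl => true
cacert => "/etc/logstash/certs/http_ca.crt"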

And it worked, but instead of adding the data to the data stream that was created in the first step, all documents are added to a new index:
[screenshot: documents landing in the wrong index]
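
If it matters, the docs example I copied used the older daily-index pattern in its output, something like this (reproduced from memory, so treat it as approximate):

index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"

which would explain why a plain dated index got created instead of the data going into the data stream.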

I tried changing the settings in Filebeat so it sends data directly to ES, and it worked: as you can see in the picture, 437 documents were added and I can finally see data in the dashboards. But once I set it back to point to Logstash, Logstash sends the data to the wrong index and the dashboards stop showing any data apart from those 437 documents.

My goal is for documents to be added to the .ds-filebeat... index so I can see data in the dashboards. Another solution for me would be to learn how to change the dashboards so they search in and show data from a different index.

Could someone please tell me what I am missing?

Thank you very much in advance

Hi @Carlos_T, welcome to the community!

First, please do not post images of text: they are hard to read, cannot be searched, cannot be copied for debugging, and some people cannot even see them. Please post formatted text.

That said, the docs are a bit behind; see the issue here:

Try this...

The explanation is that you want Logstash to write to the "write alias", which follows the format index => "%{[@metadata][beat]}-%{[@metadata][version]}", e.g. filebeat-8.2.1:

input {
  beats {
    port => 5044
  }
}
output {
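  # Events parsed by a Filebeat module carry the ingest pipeline name in
  # @metadata, so route those through that pipeline. In both branches the
  # index is the data stream name (the write alias), and action "create"
  # is required when writing to a data stream.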
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://localhost:9200"
      pipeline => "%{[@metadata][pipeline]}"
      user => "elastic"
      password => "password"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
    }
  } else {
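    # Events without a pipeline hint are indexed without the pipeline option.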
    elasticsearch {
      hosts => "http://localhost:9200"
      user => "elastic"
      password => "password"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
    }
  }
}

Hi Stephen.

Thank you very much for both the welcome and the fast answer.
I've tried modifying the index creation pattern as you suggested, but once I start Logstash and afterwards Filebeat, no data is added into ES.

Nothing new is written in the Logstash log, but I can find the following message in /var/log/syslog about what's happening:

Jun 4 19:15:12 ubuntuelk logstash[3989]: [2022-06-04T19:15:12,504][WARN ][logstash.outputs.elasticsearch][main][b05858163a0a978876ca903a3a166a8816ecaff32f9545ac031ec1f3da86c2e3] Could not index event to Elasticsearch. {:status=>400, :action=>["create", {:_id=>nil, :_index=>"filebeat-8.2.1", :routing=>nil, :pipeline=>"filebeat-8.2.1-system-syslog-pipeline"}, {"@timestamp"=>2022-06-04T17:14:54.229Z, "log"=>{"offset"=>285784599, "file"=>{"path"=>"/var/log/syslog"}}, "service"=>{"type"=>"system"}, "host"=>{"hostname"=>"ubuntuelk", "id"=>"4293b3061fe540d9b3cdd77a03af46c3", "mac"=>["00:0c:29:f5:4e:bf"], "architecture"=>"x86_64", "name"=>"ubuntuelk", "containerized"=>false, "ip"=>["192.168.0.111", "fe80::5d73:dcbe:a241:d705", "fe80::6b28:900b:b61d:b756"], "os"=>{"type"=>"linux", "codename"=>"focal", "family"=>"debian", "name"=>"Ubuntu", "kernel"=>"5.13.0-44-generic", "version"=>"20.04.3 LTS (Focal Fossa)", "platform"=>"ubuntu"}}, "tags"=>["beats_input_codec_plain_applied"], "ecs"=>{"version"=>"1.12.0"}, "agent"=>{"version"=>"8.2.1", "type"=>"filebeat", "id"=>"df054f72-18ee-4022-a6d2-b812ac595b9c", "ephemeral_id"=>"5d42c8a8-c253-4ee3-a09a-ecff5c16025f", "name"=>"ubuntuelk"}, "@version"=>"1", "event"=>{"original"=>"

As I mentioned previously, after following the instructions to load the index template manually, the following index was created: .ds-filebeat-8.2.1-2022.06.04-000001
I've got the feeling that, because of that, ES is expecting to get the data in that precise data stream backing index.

Is it necessary to load the index template as the documentation suggests?

I also want to remark that if I send the data directly from Filebeat to ES, it goes straight into the .ds-filebeat-8.2.1-2022.06.04-000001 index and it works, although I haven't tried it again since.

Thank you in advance for your help.

Carlos T

I think you cut off the log before it got to the good part :) the actual error comes soon after that.

Yes, you need to load the templates etc. Actually, I always suggest running the full setup, not just parts (but that part of the docs is fine), while you have Filebeat configured to point to Kibana and Elasticsearch:

filebeat setup -e

That is correct... the data stream is filebeat-8.2.1, which is what the data gets written to; it is stored in the underlying .ds-filebeat-8.2.1-2022.06.04-000001 backing index.
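
You can see that relationship in Kibana Dev Tools with the data stream API (a quick check, assuming the default data stream name):

GET _data_stream/filebeat-8.2.1

The response lists the .ds-* backing indices behind the filebeat-8.2.1 data stream.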

I would:

  • Clean up the data stream (see the API call after this list).
  • Point Filebeat to Elasticsearch.
  • Run setup: filebeat setup -e
  • Start Filebeat ... observe data getting written.
  • Stop Filebeat.
  • Point Filebeat to Logstash.
  • Start Logstash with the config above.
  • Start Filebeat.
  • Observe data.
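
For the cleanup step, deleting the data stream (along with its backing indices) is a single call in Kibana Dev Tools; a sketch, assuming the default name:

DELETE _data_stream/filebeat-8.2.1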

BTW, I just ran through the process above and it worked.

If it does not, please include the entire log line where it says it cannot index.

Indeed, Stephen.
It worked! The whole process! Honestly, after reading the step-by-step guide you made, I had the feeling I had been complicating things unnecessarily; I'd been stuck on this problem for hours and hours. Your explanation of the relationship between the data stream and the index also let me see much more of the picture. In fact, when you suggested this line:

index => "%{[@metadata][beat]}-%{[@metadata][version]}"

I expected that index to be the final destination of the data, and I started to wonder how I would deal with index rotation in the future and things like that. But now I have a different and less worrying picture of it :)

That said, I can only thank you for your speed, technical support, and explanations of some concepts I still have to learn. Step by step.

Thank you very much Sir.

Hope you have a great week.

Carlos T.
