but I still don't have this index created in Elasticsearch.
I checked the connection and Filebeat is connected to Elasticsearch.
I still don't understand how I can create indices using ingest pipelines.
Maybe I'm not doing this the right way, because in my understanding,
Filebeat has a pipeline name, it points to Elasticsearch, and in the ingest pipeline I have created a grok processor.
In my Logstash environment I have an input (listening to Filebeat), a grok pattern, and an output.
I cannot find anything like an output (to create an index) in ingest pipelines.
Ingest pipelines do not create indices... They are used to transform data before writing it to an index. Ingest pipelines live in Elasticsearch.
The default data stream (a collection of backing indices) from Filebeat will be named something like filebeat-8.17.4
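You can check what actually exists from Kibana Dev Tools; the wildcard patterns here are just an illustration:

# list data streams created by Filebeat
GET _data_stream/filebeat-*

# list the backing indices behind them
GET _cat/indices/.ds-filebeat-*?v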
OK, well, your Filebeat is not configured to point to Logstash; it is pointing to Elasticsearch.
So that is another issue.
The ingest pipeline you showed above is not in Logstash; that is in Elasticsearch.
Ingest pipelines are in Elasticsearch
Logstash pipelines are in Logstash
They both transform data, but they are 2 different architectural patterns.
Why are you using Logstash? Do you need to? It adds complexity if you are just getting started.
So you need to decide if you want
A) Filebeat -> Elasticsearch (with ingest pipeline)
B) Filebeat -> Logstash (logstash pipeline) -> Elasticsearch
C) Filebeat -> Logstash (passthrough) -> Elasticsearch (with ingest pipeline)
If you are just getting started A is the simplest.
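A minimal filebeat.yml for option A could look something like this (the path, host, and pipeline name are placeholders for your own values):

filebeat.inputs:
  - type: filestream
    id: my-logs
    paths:
      - /var/log/myapp/*.log

output.elasticsearch:
  hosts: ["http://localhost:9200"]
  pipeline: "my-ingest-pipeline"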
So what are you trying to do, and why?
Have you looked at the Filebeat quickstart?
There are commands to test the configurations as well.
Also, once Filebeat reads a file it will not read it again, because it keeps track of what it has read. So if you want to test and re-read the same file over again, you will need to clean up the data registry.
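Roughly, that means stopping Filebeat and deleting the registry directory; the exact location depends on how you installed Filebeat (for a tar.gz/zip install it is usually the data directory next to the binary, for deb/rpm it is under /var/lib/filebeat):

rm -rf data/registry        (Linux/macOS, relative to the Filebeat home)
rmdir /s /q data\registry   (Windows)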
I'm happy that I accidentally found that I have data from Filebeat in a data stream.
But I don't know how to name this data stream.
I have "filebeat-8.11.1", which is the version of my Filebeat.
I found that a template is created for it automatically.
How can I name this data stream? Probably from the Filebeat side, but I couldn't find anything.
In Logstash I had indices created weekly or monthly. I don't know if it is possible to use such settings here.
But basically I'm happy to have a data stream from this implementation.
In my Filebeat config I have the pipeline "filebeat-1" and I expected that the data stream would get the same name.
OK, sorry for the late reply, but here is my basic explanation.
I'm not starting my adventure with ELK; I'm quite deeply skilled in ELK,
but my current environments are elastic+logstash+kibana.
My idea is to get rid of Logstash, and I wanted to go with Elasticsearch with ingest pipelines and Kibana.
Someone before me installed this environment, and that person probably didn't have experience with ingest pipelines.
This is my first shot with ingest pipelines and I'm working to adapt them to my environment.
For me, a pipeline in Elasticsearch is much clearer than one in Logstash.
What's more, Logstash is much more complicated when it comes to grok patterns, and in my case it requires a lot of resources.
Right now I'm still learning about ingest pipelines, but they already look much better to me than Logstash.
In my case Logstash is used mostly with flat log files (not JSON files), but when I put in a very simple grok like GREEDYDATA, ingest pipelines also work with that kind of files.
Ingest pipelines do not set the name of the index or data stream... They are used to transform data.
I would read about data streams.
How frequently the backing indices roll over, and how big they get, is governed by the index lifecycle management (ILM) policy applied to the data stream.
There is a default ILM policy for Filebeat.
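If you want behavior similar to your old weekly/monthly Logstash indices, you could apply a custom ILM policy with a time-based rollover; this is only a sketch, with the policy name and thresholds made up for illustration:

PUT _ilm/policy/my-monthly-rollover
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "30d",
            "max_primary_shard_size": "50gb"
          }
        }
      }
    }
  }
}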
Also, 8.11 is pretty old. You should consider upgrading at some point
Use the Kibana UI if that is easier. You can also set only 2-3 fields, a timestamp and a greedy message, just to make sure the grok is parsing. Building and testing the grok pattern outside the ingest pipeline first can be easier.
Make a test index and insert test data
POST /test/_doc/
{
"message": "<your line from the log>"
}
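You can also run a grok processor against a sample line with the simulate API before putting it in your real pipeline; the pattern and field names below are just an example:

POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": ["%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:msg}"]
        }
      }
    ]
  },
  "docs": [
    { "_source": { "message": "2024-05-01T10:00:00 example log line" } }
  ]
}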
Create an index template under Index Management. There is no need to specify fields, ES will do it for you; just set the template name and pattern to match what you set in filebeat.yml.
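If you prefer Dev Tools over the UI, the equivalent would be something like this (the template name and pattern are placeholders; add "data_stream": {} to the body if you want it to create a data stream rather than plain indices):

PUT _index_template/test-template
{
  "index_patterns": ["test-*"]
}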
Test filebeat.yml
filebeat.exe test config
Run it: filebeat.exe -e
If there is an error it will show on screen and in the log. You can set the log level to debug if you wish, but -e is enough.
Create a data view in Kibana; it should point to the test-* index, or whatever you named it.
Hello,
Thanks for your answer.
I solved the issue by deleting its template and recreating the pipeline with another name.
I have a question, because I don't understand this: in one case an index was created, and in another Filebeat implementation a data stream was created.
First, do not add the date to the data stream name; that is an anti-pattern and does not make sense, since the backing indices will have the date.
It should be something like a plain "dpkg-prod".
I believe the reason for your issue is:
output:
  elasticsearch:
    enabled: true
    hosts: ["http://es1:9200"]
    timeout: 60
    index: "dpkg-%{+YYYY.MM}"    <<<< THIS HERE
    pipeline: "filebeat-dpkg"
    action: create
    #manage_template:

setup.template:
  name: "filebeat-dpkg"
  pattern: "dpkg-*"    <<<< MATCHES THIS PATTERN SO TEMPLATE IS APPLIED
But here
output:
  elasticsearch:
    enabled: true
    hosts: ["http://es1:9200"]
    timeout: 60
    index: "sddm-%{+YYYY.MM}"    <<<< THIS HERE
    pipeline: "filebeat-sddm"
    action: create

setup.template:
  name: "filebeat-sddm"
  pattern: "filebeat-sddm-*"    <<<< DOES NOT MATCH THIS PATTERN, SO THE TEMPLATE IS ***NOT*** APPLIED AND A SIMPLE DEFAULT INDEX IS CREATED
  enabled: true
I don't know why it is creating an index for me, not a data stream.
But the funny thing is that when I change it to something like "dpkg-prod", it creates a data stream.
It remembers this dpkg index somewhere, but I don't know where.
I removed the old pipeline for it, and removed Filebeat from ILM because I'm not using it.
I don't know where else to look, so that I get a data stream like the rest instead of an index.
The index name does not match the pattern... they do not match.
Try
index: "dpkg-prod"
The pattern is meant to match the data stream name, not the backing indices... Your pattern has a -*, but the data stream name you're setting does not.
Yes, it's a little confusing that Filebeat still says index, but it's really just the thing you're writing to, which in this case is a data stream.
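So a consistent version of that second config would look something like this, with the index name (really the data stream name) and the template pattern actually matching:

output:
  elasticsearch:
    enabled: true
    hosts: ["http://es1:9200"]
    timeout: 60
    index: "filebeat-sddm"
    pipeline: "filebeat-sddm"
    action: create

setup.template:
  name: "filebeat-sddm"
  pattern: "filebeat-sddm*"
  enabled: true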