OK, here is what you are going to do: you are going to meet your manager's needs (which is fine) and still use best practices and a bunch of built-in capabilities.
You are going to use the new 8.x capabilities.
Eventually you should look at Elastic Agent. However, we will start with Filebeat, which is still a great solution. Your initial filebeat config looks like one from several years ago... which is OK, but much has changed, perhaps much has changed since your manager last worked with Elastic... which is also OK... but I would recommend using the new 8.x capabilities: we will use some of the new built-in agent conventions but still run Filebeat.
I will show you the code and then point you to some docs to read, as I am not going to write out all the reasoning here.
First, upgrade to 8.13.1, as 8.13.0 had some issues.
Then we are going to leverage data streams and all their power, and we are going to follow the Agent data stream naming patterns to pick up all the best practices (naming, data streams, ILM, etc.) with very little work.
The days of naming indices with apigw-%{+yyyy.MM.dd} are over; that is so 6.x / 7.x. Retention is now controlled with an ILM policy. You will get a default one named logs, which you can edit (even though it will warn you not to), or you can create your own. The docs say Agent, but this is really about data streams and ILM.
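If you want to look at or override what you get, you can do it right from Kibana Dev Tools. A minimal sketch (the logs-apigw policy name and the phase values below are illustrative, not a recommendation; adjust them to your retention needs):

GET _ilm/policy/logs

PUT _ilm/policy/logs-apigw
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}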
We will use the data stream naming convention. PLEASE read the naming convention docs so you understand what each part means, and do not use reserved characters (for example, - is not allowed in the dataset):

<type>-<dataset>-<namespace>
In this case the type will always be logs. This enables many, many automatic capabilities, so you will not need to create templates to get started (you can customize later if needed), and it will create a time-based data stream (DO NOT ADD %{+yyyy.MM.dd}; the data stream handles time-based rollover for you). You can use the namespace pretty much how you want; we often see users use it as the environment: prod, qa, etc.
So your data streams will be
logs-apigw-default
or
logs-apigw-prod
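All of this works out of the box because 8.x ships built-in index templates that match the logs-*-* pattern, which is why you do not need to create your own to start. You can verify from Dev Tools (template name as shipped around 8.13; check your version):

GET _index_template/logs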
filebeat.yml
filebeat.inputs:
- type: filestream                    # Use filestream; the log input is deprecated
  id: my-filestream-id
  enabled: true
  paths:
    - /var/log/*.log
  fields_under_root: true
  fields:
    data_stream.type: logs            # These three fields name the data stream
    data_stream.dataset: system_log
    data_stream.namespace: default

- type: filestream
  enabled: true
  id: my-apigw-id
  paths:
    - /opt/apigw/apigw-service/src/service.log
  fields_under_root: true
  fields:
    data_stream.type: logs
    data_stream.dataset: apigw
    data_stream.namespace: prod

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
And here is the Logstash pipeline:

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    data_stream => true               # Enables data streams; uses the data_stream.* naming we set in filebeat
    #user => "elastic"
    #password => "changeme"
  }
}
That is all you need, plus whatever authentication, SSL, etc. your environment requires.
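Once events are flowing, a quick Dev Tools check will confirm the data streams were created and show their backing indices and the ILM policy managing them:

GET _data_stream/logs-apigw-*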
Create a logs-* data view, and you can create a logs-apigw-* one if you only want to see those...
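If you prefer an API over clicking through Kibana, the data views API can create them. A sketch (run against your Kibana host with auth; recent Dev Tools consoles can also send Kibana API calls via the kbn: prefix):

POST kbn:/api/data_views/data_view
{
  "data_view": {
    "title": "logs-apigw-*",
    "name": "apigw logs"
  }
}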
There is A LOT here... try it, get started, and you will be in a decent place.
NOTE: This will not work automatically with filebeat modules; I would need to think about that. I think it can work by just adding the integration (so its assets get installed) and then setting the correct data_stream.dataset.
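A hedged sketch of that idea, using nginx purely as an example: install the Nginx integration assets from the Kibana Integrations page, confirm the template and ingest pipeline landed, then set data_stream.dataset: nginx.access in filebeat so events route through them:

GET _index_template/logs-nginx.access
GET _ingest/pipeline/logs-nginx.access*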