My Beats indices (Filebeat, Auditbeat, and Winlogbeat) are missing fields in Elasticsearch. My current log flow is:
log source hosts >> Kafka >> Logstash >> Elastic Cloud.
As I understand it, I need to load the index templates and ingest pipelines manually whenever the Beats do not ship logs directly to Elasticsearch. So I am considering using two separate machines (one Linux, one Windows) that ship logs directly to Elasticsearch, purely to load the index templates.
If I ingest logs that way, will the extra fields appear in Elasticsearch without my having to do manual setup on every other machine that has Beats installed?
It is a pretty common pattern, when you have a complex or multi-step ingest architecture, to have a "setup" VM where you install the Beats, configure them, point them at Kibana and Elasticsearch, and then run setup. That way all the Beats assets (index templates, ingest pipelines, dashboards, etc.) are loaded into Kibana and Elasticsearch. That VM does not even need real log sources, just connectivity to Kibana and Elasticsearch.
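A minimal sketch of what running setup on such a VM might look like, assuming a standard Filebeat install; the Elastic Cloud host names and credentials here are placeholders you would replace with your own:

```shell
# On the setup VM: point Filebeat at Elasticsearch and Kibana just for setup.
# The connection details can live in filebeat.yml or be passed as -E overrides.
filebeat setup \
  --index-management \
  --pipelines \
  --dashboards \
  -E 'output.elasticsearch.hosts=["https://my-deployment.es.example.com:443"]' \
  -E 'output.elasticsearch.username=elastic' \
  -E 'output.elasticsearch.password=changeme' \
  -E 'setup.kibana.host=https://my-deployment.kb.example.com:443'

# Repeat for the other Beats (auditbeat setup ..., and winlogbeat setup ...
# on the Windows setup machine).
```

On the production hosts the Beats then keep their normal `output.kafka` configuration; they never need direct Elasticsearch access, because the templates and pipelines are already loaded.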