Logstash Indexing Questions


I have been setting up a proof-of-concept Elastic Stack for our company. We had data flowing through the stack without errors when we were shipping the Winlogbeat data directly to Elasticsearch. All of the Winlogbeat dashboards in Kibana were working perfectly because we loaded them according to the Winlogbeat tutorial. When we started putting Logstash in between, things got confusing.

We assumed that all we had to do was set the correct index in the logstash.conf file and everything would keep working, since it was working fine before and we don't tell Logstash to change the data at all. Here is our configuration file:

# The # character at the beginning of a line indicates a comment. Use
# comments to describe your configuration.
input {
  beats {
    port => "5044"
  }
}
# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
# }
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ""
    index => "winlogbeat-7.14.0-2021.08.12000000"
    # data_stream => "true"
  }
}

When we run Logstash with this .conf file, we get this error inside of Kibana:

illegal_argument_exception at shard 0, index winlogbeat-logstash, node S-L5fRJ7SjiKdhrVsWJmhA

    Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [log.level] in order to load field data by uninverting the inverted index. Note that this can use significant memory.
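For context, the error means a Kibana visualization is running an aggregation on log.level, which the Logstash-created index has mapped as text via dynamic mapping. Under the Winlogbeat-installed template the same field is a keyword, which does support aggregations. A query body like the following (index name assumed for illustration) is the kind of request behind the dashboard panel; it succeeds against a keyword mapping and fails with exactly this error against a text mapping:

```
POST winlogbeat-*/_search
{
  "size": 0,
  "aggs": {
    "levels": {
      "terms": { "field": "log.level" }
    }
  }
}
```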

Has Logstash changed our data? Why are the fields messed up? We know this is a gap in our understanding on our part, but any help would be greatly appreciated.

Logstash will install its own template that sets the mappings on the indices it creates, and that is likely what is causing your issues. Also note that the index option is ignored by default, because ILM is enabled by default.
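One way to get back to the mappings the Winlogbeat dashboards expect (a sketch, not the only option) is to keep using the template Winlogbeat itself installs, point Logstash at Winlogbeat's own index naming, and stop Logstash from installing a competing template. This assumes you have already run winlogbeat setup against the cluster; the hosts value is a placeholder for your Elasticsearch URL:

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # assumption: adjust to your cluster
    # Reuse Winlogbeat's index naming so events land where the
    # Winlogbeat-installed template (with its keyword mappings) applies:
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+yyyy.MM.dd}"
    # Don't let Logstash install its own template over Winlogbeat's:
    manage_template => false
    # Remember the index option is ignored unless ILM is turned off
    # or configured explicitly:
    # ilm_enabled => false
  }
}
```

With this in place, new indices pick up the Winlogbeat mappings and fields like log.level are keyword again, so the dashboard aggregations work.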

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.