Unable to make a field aggregatable in Kibana

I have a field called "message". I need to make it aggregatable, but I am unable to do so.
Can someone please suggest how to achieve that?

(screenshot: Kibana index pattern showing the message field as non-aggregatable)

Use the keyword type in your mapping, or enable doc_values for this field.
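For example, a minimal sketch of a 6.x mapping (the index name my-index and the single _doc type are assumptions here) that keeps message searchable as full text while adding an aggregatable message.keyword sub-field:

```
PUT my-index
{
  "mappings": {
    "_doc": {
      "properties": {
        "message": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}
```

You could then aggregate on message.keyword while still running full-text searches on message.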

You can't ask for that on a public forum manned by volunteers.

Read this and specifically the "Also be patient" part.

It's fine to answer on your own thread after 2 or 3 days (not including weekends) if you don't have an answer.

I did this in Dev Tools:

PUT /filebeat
{
  "mappings": {
    "_doc": {
      "properties": {
        "message": {
          "type": "keyword"
        }
      }
    }
  }
}

But the message field is still not aggregatable:

(screenshot: Kibana index pattern still showing message as non-aggregatable)

If my field is message, do you mean I should use message.doc_values?

This is my Painless script to extract the last word from message:
String[] parts = /-/.split(doc['message'].value);
return parts[parts.length - 1];

Can you please explain with an example? I am very new to ES/Kibana.
Thanks

What you did is correct.
If you reload the index pattern in Kibana, it will show that message is aggregatable.

I reloaded the Kibana dashboard and reloaded the index pattern as well, but the message field is still not aggregatable, as shown in the screenshot above.

You need to click the refresh button at the top of the index pattern page.

If it does not work, start again from scratch (delete your existing index in Elasticsearch and the index pattern in Kibana).
If it still does not work, ask in #kibana and explain exactly all the steps you followed.


I already clicked that refresh button; it didn't work.

Then I deleted the index and created it again, but the message field is still not aggregatable.

Start from scratch.
Paste here every command you are running.

Also, check with the GET mapping API what the mapping for your index is, and check the field again.

I tried this command in Dev Tools:
GET /filebeat/_mapping/_doc

I got this result :
{
  "filebeat" : {
    "mappings" : {
      "_doc" : {
        "properties" : {
          "message" : {
            "type" : "keyword"
          }
        }
      }
    }
  }
}
This means the message field now has the keyword type, but it is still non-aggregatable.

What steps are you doing in Kibana?
Can you remove the index pattern in Kibana and add it again?

These are the steps I followed. Please correct me if anything is wrong:
1) Removed the index filebeat-*
2) Restarted the Filebeat service from services.msc and created the index filebeat-* again
3) I did this in Dev Tools:

PUT /filebeat
{
  "mappings": {
    "_doc": {
      "properties": {
        "message": {
          "type": "keyword"
        }
      }
    }
  }
}

Got the resource already exists exception:
{
  "error": {
    "root_cause": [
      {
        "type": "resource_already_exists_exception",
        "reason": "index [filebeat/EDCdViXES3uOGaPJFUMv9A] already exists",
        "index_uuid": "EDCdViXES3uOGaPJFUMv9A",
        "index": "filebeat"
      }
    ],
    "type": "resource_already_exists_exception",
    "reason": "index [filebeat/EDCdViXES3uOGaPJFUMv9A] already exists",
    "index_uuid": "EDCdViXES3uOGaPJFUMv9A",
    "index": "filebeat"
  },
  "status": 400
}

4) The message field is still not aggregatable.
Is there any step I might be missing?

It seems that you are using filebeat.

When you run:

PUT /filebeat
...

You are creating an index named filebeat, which is not the name Filebeat uses by default, which is filebeat-(timestamp).

That's probably why you can't see it in the Kibana index pattern settings, where you have filebeat-* as the index pattern; filebeat does not match that.
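One hedged way to fix the mismatch (the template name is made up, and this assumes a 6.x cluster with the _doc type) is to put the mapping in an index template matching filebeat-*, so every newly created Filebeat index picks it up:

```
PUT _template/filebeat_message_keyword
{
  "index_patterns": ["filebeat-*"],
  "mappings": {
    "_doc": {
      "properties": {
        "message": {
          "type": "keyword"
        }
      }
    }
  }
}
```

Note that a template only applies to indices created after it exists; existing filebeat-* indices keep their old mapping.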

I can definitely help to fix that, but as I now understand you are totally new to the Elastic Stack, I wonder if you really want to run aggregations on the Filebeat message field.

What do you have in this field?
What do you want to aggregate and what do you expect?

Thanks David for the response.

Here is why I am trying to aggregate the message field:
I have 2 fields of string type in filebeat index:

  1. _index = kibana_sample_data_flights
    (This field is marked aggregatable by default)

  2. message = i_want_to_extract_first_word_from_this_string
    (This field is marked non-aggregatable by default)

Now I create a scripted field called firstword (I split the string using underscore as the delimiter and take the first word).

When I try this on the _index field, the Painless script works as expected:

String[] parts = /_/.split(doc['_index'].value);
return  parts[0]

But when I try it on the message field, I get an error (3 of 6 shards failed) when I click on Discover:

String[] parts = /_/.split(doc['message'].value);
return  parts[0]

The only difference I see between _index and message is that _index is aggregatable but message is not, so I wanted to make message aggregatable.

Update: I have been able to make the message field aggregatable, but it did not solve my purpose of splitting the string and getting the first word.

Now when I try it on the message field, I get an error (3 of 11 shards failed) when I click on Discover:

String[] parts = /_/.split(doc['message'].value);
return  parts[0]

As you already know, I am a beginner with the ELK stack. Can you please help me resolve this?

Thanks a lot in advance

I don't think that's the right way to do what you want to achieve.

You would be better off using an ingest pipeline, which will extract the data you need at index time into a dedicated field that you can map as a keyword.
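As a sketch, assuming the underscore-separated message from earlier (the pipeline name firstword and the target field names firstword and rest are made up for illustration), an ingest pipeline with a grok processor could extract the first token at index time:

```
PUT _ingest/pipeline/firstword
{
  "description": "Copy the first underscore-separated token of message into its own field",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{DATA:firstword}_%{GREEDYDATA:rest}"]
      }
    }
  ]
}
```

Documents indexed with ?pipeline=firstword would then carry a firstword field that you can map as keyword and aggregate on, with no scripted field needed at query time.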

1) Can you please guide me on how to do that with a code snippet?
Because all I want to do is split a string; I guess it should not be this complicated.

2) Also, by extracting to a dedicated field, do you mean a scripted field?
(I am not using Logstash.)

3) Also, I think there is something special about the message field, because even this query gives an error (3 of 14 shards failed) when I click on Discover:

return doc['message'].value;

My actual message field is:

message: 2018-12-21 02:31:31,792;INFO ;XSYD.2.5.0.1a5e8-uye1-9d87-8744-5343db306cd8;1;0;;GETCONFPRO;0;

I want to split on ; and get the timestamp.

Then you probably want to use https://www.elastic.co/guide/en/elasticsearch/reference/6.5/dissect-processor.html or
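For the semicolon-separated message above, a hedged sketch of such a pipeline using the dissect processor (the pipeline name and the field names log_timestamp, level, and rest are assumptions):

```
PUT _ingest/pipeline/split-message
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{log_timestamp};%{level};%{rest}"
      }
    }
  ]
}

POST _ingest/pipeline/split-message/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "2018-12-21 02:31:31,792;INFO ;XSYD.2.5.0.1a5e8-uye1-9d87-8744-5343db306cd8;1;0;;GETCONFPRO;0;"
      }
    }
  ]
}
```

The _simulate call lets you test the pattern against a sample document without indexing anything; the leading timestamp would land in log_timestamp.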

In a previous reply you suggested using an ingest pipeline;
can you explain how to do that?

If you just read the README of the latest link I shared, you will find some examples.

Otherwise, start by reading the documentation:

https://www.elastic.co/guide/en/elasticsearch/reference/6.5/ingest.html

For that GitHub link you shared, the setup given is for macOS, and those commands don't run on Windows.

When I run:

.\gradlew clean check

I get the error:

'.\gradlew' is not recognized as an internal or external command

I can't find an installation procedure for Windows there. Can you please check?

Read this part of the documentation.