Kibana visualisation

Hi, I am creating a Kibana visualization using a table in Kibana Lens.
I want to fetch the time a token is first created in the log file. I am able to view its last value, but how can I see the timestamp of the token's first entry in the file?
Here is a sample log:

[09/May/2023:10:58:06 +0530] | 200 | 16 ms | 15670 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/common/images/iconActionAdd.png HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 17 ms | 417 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/common/images/iconActionNewWindow.png HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 17 ms | 866 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/common/images/iconActionAdd.gif HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 18 ms | 1142 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/common/images/iconActionThumbnail-view.png HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 66 ms | 3109 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "POST /3dspace/common/emxFreezePaneGetData.jsp?fpTimeStamp=1683609411004&objectId=&firstTime=true&IsStructureCompare=FALSE HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 3 ms | 8959 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/ENOAEFStructureBrowser/assets/xslt/emxFreezePaneTableFragment.xsl HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 8 ms | 18838 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/ENOAEFStructureBrowser/assets/xslt/emxFreezePaneTreeFragment.xsl HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 2 ms | 3229 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/ENOAEFStructureBrowser/assets/xslt/emxFreezePaneTableHeaderFragment.xsl HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 3 ms | 3718 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/ENOAEFStructureBrowser/assets/xslt/emxFreezePaneTreeHeaderFragment.xsl HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 1 ms | 1335 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/ENOAEFStructureBrowser/assets/xslt/emxFreezePaneToolbarFragment.xsl HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 8 ms | 295 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/common/emxMQLNoticeWrapper.jsp?clearLimitNotice=true HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 2 ms | 2737 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/W3DXNavigationMenu/W3DXNavigationMenu.js HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 3 ms | 2821 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/ENODragAndDrop/ENODragAndDrop.js HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 1 ms | 984 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/W3DXNavigationMenu/W3DXNavigationMenu.css HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 2 ms | 167 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/webapps/W3DXNavigationMenu/W3DXNavigationMenu_en.js HTTP/2.0"
[09/May/2023:10:58:06 +0530] | 200 | 74 ms | 388 B | 172.31.40.179 | - | 957ED931A7CB09A364979C012A7816CB | - | "GET /3dspace/resources/modeler/pno/person?current=true&select=collabspaces HTTP/2.0"

I want to see 09/May/2023:10:58:06 +0530 as the timestamp.
Here is the Kibana visualisation:

In the table's second column I am getting the timestamp when the token last appeared in the file; I want the first appearance.

Wouldn't "Minimum" of @timestamp, sorted ascending, do what you want?

@timestamp is the autogenerated timestamp. Using grok, I have created another timestamp field, but the Minimum function is not getting applied to it.

@Neelam_Zanvar that is because it is not a date field. You will have to change your index template or the source field type so it is recognised as a date field.
Right now it is a text field. I am sorry I missed that in my first answer.

The format of the date in the logs is 09/May/2023:10:59:05 +0530, and I used grok's HTTPDATE pattern for it.

Well, if that is the case, then you need to check your index template, because the field type is not date.
You can use GET /<indexname> in Dev Tools or through curl; it will show you the index field types in effect.
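For example (using the index name that appears later in this thread, and assuming Elasticsearch is reachable on localhost:9200 for the curl variant):

```
GET /3dx_apache2023.05.16

# curl equivalent:
curl -X GET "http://localhost:9200/3dx_apache2023.05.16?pretty"
```

The response includes the `mappings` section, where you can check the type assigned to each field.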

Yes, it is getting saved as text. Can you please help me with how to save it as date? Here is the grok pattern I used in Logstash:

\[%{HTTPDATE:date}\]%{SPACE}\|%{SPACE}%{NUMBER:response}%{SPACE}\|%{SPACE}(?<duration>%{NUMBER}%{SPACE}%{WORD})%{SPACE}\|%{SPACE}(?<bytes>%{NUMBER}%{SPACE}%{WORD}|%{DATA})%{SPACE}\|%{SPACE}%{IP:hostip}%{SPACE}\|%{SPACE}(?<tag>%{NUMBER}%{SPACE}%{WORD}|%{DATA})%{SPACE}\|%{SPACE}(%{WORD:token}|%{DATA:token})%{SPACE}\|%{SPACE}(?<tag1>%{NUMBER}%{SPACE}%{WORD}|%{DATA})%{SPACE}\|%{SPACE}\"(?<method>%{WORD})%{SPACE}(?<url>%{URIPATHPARAM})%{SPACE}(?:HTTP/%{NUMBER:http_version})\"

The log entry is:

[09/May/2023:10:46:27 +0530] | 200 | 48 ms | 5377 B | 172.31.40.179 | - | 6D9AE4C9A7B3BE11A3FC61A8B3E8B7CB | - | "GET /3dspace/common/scripts/emxUIConstants.js HTTP/2.0"

I am not sure which version you are using, but check out the documentation on index templates: Index templates | Elasticsearch Guide [8.7] | Elastic
In my opinion, this is vital knowledge for making efficient use of Elasticsearch.
Before creating an index, make sure you understand what kind and type of data you are storing. Then use this knowledge to create an index template; this will ensure the proper mapping of data types.
You can use the GET /<indexname> command to retrieve the current view of your index with its current mappings, and then use that as a starting point for an index template.
Once you have made the necessary changes, you will have to re-index your data to ensure that all data is correctly mapped and that your indexes use the new index template.

I am not an expert on index templates, but I can help you create a working template.
If you want, you can share the output of GET /<indexname>, your Elasticsearch version, and the sanitized output of GET /<indexname>/_search.

Here is the output of GET /3dx_apache2023.05.16:

{
  "3dx_apache2023.05.16": {
    "aliases": {},
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "@version": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "bytes": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "duration": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "event": {
          "properties": {
            "original": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        },
        "host": {
          "properties": {
            "name": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        },
        "hostip": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "http_version": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "log": {
          "properties": {
            "file": {
              "properties": {
                "path": {
                  "type": "text",
                  "fields": {
                    "keyword": {
                      "type": "keyword",
                      "ignore_above": 256
                    }
                  }
                }
              }
            }
          }
        },
        "message": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "method": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "response": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "tag": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "tag1": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "tags": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "timestamp": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "token": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "url": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    },
    "settings": {
      "index": {
        "routing": {
          "allocation": {
            "include": {
              "_tier_preference": "data_content"
            }
          }
        },
        "number_of_shards": "1",
        "provided_name": "3dx_apache2023.05.16",
        "creation_date": "1684218918820",
        "number_of_replicas": "1",
        "uuid": "jeRI0kulQIOhM6eqN_t2sA",
        "version": {
          "created": "8070099"
        }
      }
    }
  }
}

Also, when I check the field types in the data view, they are text for all fields. Why does the grok pattern save them as text and not in the desired format, e.g. HTTPDATE as date?

@Neelam_Zanvar
As mentioned, it would be nice to know what version of Elasticsearch you are running. My answers are based on version 8.7.
I would love to have the output of GET 3dx_apache2023.05.16/_search, so we can check whether the values in the fields are the ones we are expecting or whether they need to change.
For now, I have created a new grok pattern that should match the component template and index template. I removed a lot of %{SPACE} tags to make it more readable, and it should still work.
In your grok pattern you name the time field "date", but in your index the field is called "timestamp". Are you doing any mutate actions in a filter?

Grok pattern:

\[%{HTTPDATE:timestamp}\] \| %{NUMBER:response} \| %{NUMBER:duration} ms \| %{NUMBER:bytes} B \| %{IP:hostip} \| (?<tag>%{NUMBER}%{SPACE}%{WORD}|%{DATA}) \| (%{WORD:token}|%{DATA:token}) \| (?<tag1>%{NUMBER}%{SPACE}%{WORD}|%{DATA}) \| \"(?<method>%{WORD}) (?<url>%{URIPATHPARAM}) (?:HTTP/%{NUMBER:http_version})\"

My grok pattern will only match successful queries.
This is your original pattern with only the adaptations needed to make it work:

\[%{HTTPDATE:timestamp}\]%{SPACE}\|%{SPACE}%{NUMBER:response}%{SPACE}\|%{SPACE}(?<duration>%{NUMBER})%{SPACE}ms%{SPACE}\|%{SPACE}((?<bytes>%{NUMBER})%{SPACE}B|%{WORD}|%{DATA})%{SPACE}\|%{SPACE}%{IP:hostip}%{SPACE}\|%{SPACE}(?<tag>%{NUMBER}%{SPACE}%{WORD}|%{DATA})%{SPACE}\|%{SPACE}(%{WORD:token}|%{DATA:token})%{SPACE}\|%{SPACE}(?<tag1>%{NUMBER}%{SPACE}%{WORD}|%{DATA})%{SPACE}\|%{SPACE}\"(?<method>%{WORD})%{SPACE}(?<url>%{URIPATHPARAM})%{SPACE}(?:HTTP/%{NUMBER:http_version})\"

First you create a component template:

PUT _component_template/apache_log
{
  "template": {
    "mappings": {
      "properties": {
        "timestamp": {
          "type": "date",
          "format": "dd/MMM/yyyy:HH:mm:ss Z"
        },
        "response": {
          "type": "short"
        },
        "duration": {
          "type": "long"
        },
        "bytes": {
          "type": "long"
        },
        "hostip": {
          "type": "ip"
        },
        "tag": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "token": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "tag1": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "method": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "url": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "http_version": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}

Then you create the index template:

PUT _index_template/3dx_apache_template
{
  "index_patterns": ["3dx_apache*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1,
      "routing": {
        "allocation": {
          "include": {
            "_tier_preference": "data_content"
          }
        }
      }
    },
    "mappings": {
      "_source": {
        "enabled": true
      },
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "@version": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "event": {
          "properties": {
            "original": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        },
        "host": {
          "properties": {
            "name": {
              "type": "text",
              "fields": {
                "keyword": {
                  "type": "keyword",
                  "ignore_above": 256
                }
              }
            }
          }
        },
        "log": {
          "properties": {
            "file": {
              "properties": {
                "path": {
                  "type": "text",
                  "fields": {
                    "keyword": {
                      "type": "keyword",
                      "ignore_above": 256
                    }
                  }
                }
              }
            }
          }
        },
        "message": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "tags": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
  },
  "priority": 500,
  "composed_of": ["apache_log"]
}

The "index_patterns": ["3dx_apache*"] field in the index template makes sure that any index created with a name starting with "3dx_apache" will be created based on this index template and will use these data types.
The existing index will not change; only newly created indexes will be based on this template. So, to be able to use these data types, you will need to create a new index, and to be able to use historical data you will need to reindex your existing data into new indexes.
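As a quick sanity check (the date suffix here is just an example name), you could create an empty index matching the pattern and confirm that its mapping comes from the templates:

```
PUT 3dx_apache2023.05.17

GET 3dx_apache2023.05.17/_mapping
```

If the templates are in effect, the mapping for "timestamp" should now show "type": "date" instead of "text".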

In the component template I changed timestamp to "date" with a format that should match your date format. Next, I changed response to "short"; I use this to filter on HTTP errors like 400, 404, etc. with >= or <=, although catching error log lines would probably need an additional grok pattern. Making duration and bytes of type "long" lets you calculate with these fields, and you can assign the "Bytes" format to the bytes field and "Duration" to the duration field in your Kibana data view. This makes your data look nice :slight_smile:
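For example, once response is mapped as a number, a range filter like this becomes possible (a sketch against the example index pattern):

```
GET 3dx_apache*/_search
{
  "query": {
    "range": {
      "response": { "gte": 400 }
    }
  }
}
```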


You first need to have the index created based on your templates, so creating and modifying your data view is the last step.

This is the syntax to reindex data:

POST _reindex
{
  "source": {
    "index": "3dx_apache2023.05.16"
  },
  "dest": {
    "index": "3dx_apache2023.05.16-000001"
  }
}

Note that these names are examples. Please see Reindex API | Elasticsearch Guide [8.7] | Elastic

I also see that you don't use aliases; they can make your life a lot easier and are worth looking into. They take time to set up properly because they go hand in hand with ILM policies and the way you send data to Elasticsearch.

It is a lot of information, and I assume you will read the available documentation for context. Good luck! :slight_smile:

Can you please tell me where I can create the index template and component template? I went to the

Can't it be done using mutate or something else in the Logstash pipeline?
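For what it's worth, mutate cannot cast a string to a date, but the Logstash date filter can parse it. A minimal sketch, assuming the timestamp field produced by the grok pattern above (note that an already-created index with a text mapping would still need the template changes and a reindex):

```
filter {
  date {
    # parse "09/May/2023:10:58:06 +0530" from the grok-extracted field
    match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"]
    # write the parsed date back into the same field
    # (without "target", the result goes to @timestamp)
    target => "timestamp"
  }
}
```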

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.