Create TimeSeries plots with Timelion

Hi, I am a new user to ELK (ver 6.5.1).
I want to create TimeSeries plots with Timelion on the ELK server, using the fields captured and shipped by Filebeat on the client side.

This log is generated continuously by an application running on the client side:
2018-11-05 00:38:17 bench CDataViewDB::GetData : 0.02ms, overall (n=33): mean 0.2ms, stddev 0.01ms, max 0.04ms, min 0.01, total time [0.00s]

I want to plot "mean", "max", or "stddev" (each on the Y-axis) against "timestamp" (on the X-axis).

After "created index pattern" from Management screen, I can see the fields (min, max, mean, stddev) from filebeat-* from Discover screen.

Nothing is displayed in Timelion when I try to create the plot (stddev on the Y-axis, timestamp on the X-axis) using 'metric=' with this expression:
.es(index=filebeat-*, timefield='@timestamp', metric='avg:stddev')

I can only see them displayed when using the query argument 'q=':
.es(q=mean), .es(q=stddev), .es(q=max), .es(q=min)

Also, the Y-axis then shows "count" instead of the field values:
.es(q=mean:0.04), .es(q=stddev:0.03), .es(q=max:0.16), .es(q=min:0.01)

I also checked the Type of the fields under Management -> filebeat-*, using the Filter box (stddev, mean, etc.).

They all have "string" as their Type. Should they all be "number" instead?
Is this the reason the query .es(index=filebeat-*, timefield='@timestamp', metric='avg:stddev')
cannot find "stddev"? (Please see step 8 at the bottom of this post.)
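For reference, my understanding is that Timelion's metric option runs an Elasticsearch avg aggregation under the hood, so the equivalent check in Dev Tools (my assumption of what Timelion runs per time bucket) would be something like:

GET filebeat-*/_search
{
  "size": 0,
  "aggs": {
    "avg_stddev": {
      "avg": { "field": "stddev" }
    }
  }
}

If stddev is mapped as a string, I would expect this aggregation to fail with an error, which might explain the empty Timelion chart.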

My steps are detailed below. Are there any steps I have missed or done incorrectly?

Thank you very much
BR,
Sam

Steps

(1) Set up ELK on the host server (Steps 1 to 3 in the link) and Filebeat on the client (Step 4)

(2) Log line captured by Filebeat:
2018-11-05 00:38:17 bench CDataViewDB::GetData : 0.02ms, overall (n=33): mean 0.2ms, stddev 0.01ms, max 0.04ms, min 0.01, total time [0.00s]

Grok pattern (tested in Dev Tools - Grok Debugger):
%{TIMESTAMP_ISO8601:logdate} bench %{GREEDYDATA:typeofevent} : %{NUMBER:first_ms}ms, overall \(n=%{NUMBER:n}\): mean %{NUMBER:mean}ms, stddev %{NUMBER:stddev}ms, max %{NUMBER:max}ms, min %{NUMBER:min}, total time \[%{NUMBER:time}s\]

Structured Data
{
  "min": "0.01",
  "max": "0.04",
  "mean": "0.2",
  "logdate": "2018-11-05 00:38:17",
  "time": "0.00",
  "first_ms": "0.02",
  "stddev": "0.01",
  "typeofevent": "CDataViewDB::GetData",
  "n": "33"
}

(3) Entered the grok pattern in the Logstash config file:
elktest@ELKServer:/etc/logstash/conf.d$ cat 10-syslog-filter.conf
filter {
  if [input][type] == "log" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logdate} bench %{GREEDYDATA:typeofevent} : %{NUMBER:first_ms}ms, overall \(n=%{NUMBER:n}\): mean %{NUMBER:mean}ms, stddev %{NUMBER:stddev}ms, max %{NUMBER:max}ms, min %{NUMBER:min}, total time \[%{NUMBER:time}s\]" }
      add_field => {
        "received_at"   => "%{@timestamp}"
        "received_from" => "%{host}"
      }
    }
  }
}

(4) Tested the Logstash configuration with this command:
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Configuration OK

(5) Verified Elasticsearch is receiving data in the Filebeat index with this command:
test@ELKServer:/etc/logstash/conf.d$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
{
  "took" : 9,
  "timed_out" : false,
  "_shards" : {
    "total" : 35,
    "successful" : 35,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 2556058,
    "max_score" : 1.0,
    ...

(6) Accessed the Kibana user interface from the Chrome browser:

  • Click "Create index pattern" under Management -> Kibana -> Index Patterns
  • Type filebeat-* in the Index pattern input box
  • Select @timestamp as the Time filter field name, then click the Create button to finish

(7) Went to Discover and confirmed I can see (min, max, mean, stddev) under filebeat-*

(8) Entered this query in Timelion:
.es(index=filebeat-*, timefield='@timestamp', metric='avg:stddev')
Nothing is displayed

Hey @stckwok, are you indexing documents that themselves contain fields for the current mean, stddev, max, and min, and wanting the values from those individual documents to be displayed in Timelion?

Timelion, like almost all of our visualizations, uses the equivalent of Elasticsearch aggregations against the documents to perform various calculations before they're displayed to the end user, so this likely isn't what you want if you're indexing documents with already-calculated metrics.
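For example, with your data,

.es(index=filebeat-*, timefield='@timestamp')

plots the document count per time bucket, while

.es(index=filebeat-*, timefield='@timestamp', metric='avg:stddev')

plots the average of the stddev field within each time bucket. Each plotted point is an aggregate over a bucket, not an individual document's value.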

Hi Brandon,
Thanks for your prompt reply.
Please see details below
BR,
Sam.

Within debug.log, the log line we are interested in is just plain text (please see filebeat.yml at the bottom of this message).
I made one change to the grok pattern to add "float", and the values are now shown as floats in the debugger; please see below.
But after I put the same change into /etc/logstash/conf.d/10-debuglog-filter.conf, the fields are still displayed with Type = string instead of number.
Could you tell me where I should look, for debugging, to see where the Type is populated after the grok filter?

This log is generated continuously by the application running on the client side, at random time intervals.
Here is an example of two consecutive log lines, 10 seconds apart, with different values of mean, max, and stddev:

2018-11-05 00:38:17 bench CDataViewDB::GetData : 0.02ms, overall (n=33): mean 0.2ms, stddev 0.01ms, max 0.04ms, min 0.01, total time [0.00s]
...
2018-11-05 00:38:27 bench CDataViewDB::GetData : 0.02ms, overall (n=33): mean 0.3ms, stddev 0.02ms, max 0.05ms, min 0.01, total time [0.00s]
...

So if we capture the "mean" values and the others continuously from debug.log over time, can we have a table of two columns to be plotted in Timelion?
I just need to plot the Y-axis (mean) against the X-axis (timestamp). Is this a valid use case for Timelion?

.es(index=filebeat-*, timefield='@timestamp', metric='avg:mean') <<< for first plot
.es(index=filebeat-*, timefield='@timestamp', metric='avg:stddev') <<< for second plot, etc.
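I assume I could also overlay several series on one chart by separating the expressions with commas, e.g.:

.es(index=filebeat-*, timefield='@timestamp', metric='avg:mean').label('mean'), .es(index=filebeat-*, timefield='@timestamp', metric='avg:max').label('max')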

Table example (Mean, Time):
0.2, 00:38:17
0.3, 00:38:27
0.3, 00:38:47
0.4, 00:39:00
... etc.

mean is displayed under "Available fields" on the Kibana Discover tab.
mean should also have Type = number, shown as "# mean", but it is showing up as "t mean":

Available fields
Popular
t message
t _id
t _index
t beat.name
t mean <<<<< should be "# mean"
t max
t min

Pipeline summary:

  1. Filebeat captures the log from /home/myapp/debug.log on the client, IPAddress_1: "111.111.111.001" (debug.log is just plain text)
  2. Filebeat outputs to Logstash running on the ELK host: ["123.122.169.51:5044"]
  3. Logstash grok-filters the input from Beats (port 5044) before forwarding to Elasticsearch; the fields of interest are min, max, stddev, and mean, as shown in the log line
  4. These fields are indexed into Elasticsearch
  5. But their Type is "string" instead of "number"

Regarding step 5 above, I fixed the grok filter by adding "float": from %{NUMBER:mean}ms to %{NUMBER:mean:float}ms (and likewise for the other numeric fields).
Now, when tested in Dev Tools - Grok Debugger, I can see all the values changed to floats (shown without quotes in the Structured Data below).

Updated grok pattern:
%{TIMESTAMP_ISO8601:logdate} bench %{GREEDYDATA:typeofevent} : %{NUMBER:first_ms:float}ms, overall \(n=%{NUMBER:n:int}\): mean %{NUMBER:mean:float}ms, stddev %{NUMBER:stddev:float}ms, max %{NUMBER:max:float}ms, min %{NUMBER:min:float}, total time \[%{NUMBER:time}s\]

Structured Data
{
  "min": 0.01,
  "max": 0.04,
  "mean": 0.2,
  "logdate": "2018-11-05 00:38:17",
  "time": "0.00",
  "first_ms": 0.02,
  "stddev": 0.01,
  "typeofevent": "CDataViewDB::GetData",
  "n": 33
}

After the above grok changes, I restarted Logstash and re-created the index pattern in Management in Kibana:

  1. Update grok filter in the file on ELK host:
    sudo vi /etc/logstash/conf.d/10-debuglog-filter.conf

  2. Restart logstash
    sudo systemctl restart logstash
    sudo systemctl enable logstash

  3. Verified it’s running
    sudo systemctl | grep logstash

  4. Re-created the index pattern in Kibana

  5. Now when I check the fields (min, max, mean, stddev), they are still displayed with Type = string

We configured Filebeat on the client side, with the application running and logging to /home/myapp/debug.log.

/etc/filebeat/filebeat.yml

filebeat:
  prospectors:
    - paths:
        - /home/myapp/debug.log
      input_type: log
      document_type: log

  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["123.122.169.51:5044"]
...
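(I am aware this is the older prospectors-style syntax; if it matters, I believe the Filebeat 6.x equivalent would be roughly:

filebeat.inputs:
  - type: log
    paths:
      - /home/myapp/debug.log

output.logstash:
  hosts: ["123.122.169.51:5044"]

but the pipeline above is what is currently deployed.)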

You've presented a number of questions and issues above, so let me try to break them down.

If we want to perform any calculations on the fields, we need to index the documents with those fields typed as number. If you need help with the grok filter in Logstash, I'd recommend posting over there: https://discuss.elastic.co/c/logstash
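That said, as a rough sketch (untested against your pipeline), a mutate filter placed after your grok filter can force the types, and a temporary stdout output shows exactly what Logstash emits:

filter {
  # after the grok filter: coerce the captured fields to numbers
  mutate {
    convert => {
      "mean"   => "float"
      "stddev" => "float"
      "max"    => "float"
      "min"    => "float"
      "n"      => "integer"
    }
  }
}

output {
  # temporary, for debugging: print each event so you can inspect the field values and types
  stdout { codec => rubydebug }
}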

Once your data is in the correct format, you'll want to refresh your index patterns in Kibana and ensure that the fields are showing up with the correct types. Then you can build your visualizations using Timelion. If you're looking to calculate the average of the mean field represented in the documents, you can do so using Timelion and graph this as a line chart.
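For example, assuming the mean field ends up numeric, something like this should produce the line chart you described:

.es(index=filebeat-*, timefield='@timestamp', metric='avg:mean').lines(width=2).label('avg of mean')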

Hi Brandon,
Thanks for your confirmation. I think (please correct me if not) the fix I put in with "float" in grok is correct, as the fields are now displayed as numbers in the Grok Debugger.

filebeat -> logstash -> Elasticsearch -> Kibana

  • What comes out of Logstash should therefore be correct.
  • The fields are indexed into Elasticsearch, as they are displayed under "Available fields" for filebeat-*.
  • But their Type is displayed as "string", not "number", in Kibana.

Could you tell me where I should look to see where the Type is populated? Is there any log file I should examine?

Thanks again for your help.
BR,
Sam.

@stckwok have you refreshed your index pattern in Kibana? To do so, go to Management -> Index Patterns, select the relevant index pattern, and click the "Refresh field list" button.

Hi Brandon,
I did refresh the index pattern from Management; the Type still shows up as "string".
I also did a refresh from Discover in Kibana.

I also set a filter with "message: bench AND CDataViewDB"; there are no new messages within "Last 24 hours".

I can only see logs using "Last 7 days", and the latest message is from November 23rd 2018, 15:31:50.760.
At that time I had not yet added "float" to the grok filter.

I added the "float" last night. Since client is not sending the interested log line out after Nov 23rd, the index Type" will not be updated with my change?

I will verify on the client side to generate the message.

OR

After the grok changes, if I restart Logstash and re-create the index pattern in Management in Kibana, will the field's Type be updated?

Thanks,
Sam.

@stckwok, we base the index patterns on the mappings for the index. If you run a query similar to the following to look at the index mappings, what type do you see for those fields?

GET your-index-name/_mapping
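For just the fields in question, the field-level mapping API is a quicker check:

GET filebeat-*/_mapping/field/mean,stddev,max,min

A numeric field should come back with something like "type": "float"; a "type" of "text" or "keyword" means it was indexed as a string.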

I ran this in Dev Tools - Console
GET filebeat-6.5.1-2018.11.28/_mapping/

There are 3029 lines in the GET response on the right side.
I can see syslog, system, etc. for many different components.
They come from:
https://www.elastic.co/guide/en/beats/metricbeat/6.5/metricbeat-modules.html

So I just looked for logstash.
I cannot see stddev or mean anywhere in the mappings.
I can only see them in Discover under "Available fields", where their Type is string.

{
  "filebeat-6.5.1-2018.11.28" : {
    "mappings" : {
      "doc" : {
        "_meta" : {
          "version" : "6.5.1"
        },
        ...
        "logstash" : {
          "properties" : {
            "log" : {
              "properties" : {
                "level" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "log_event" : {
                  "type" : "object"
                },
                "message" : {
                  "type" : "text",
                  "norms" : false
                },
                "module" : {
                  "type" : "keyword",
                  "ignore_above" : 1024
                },
                "thread" : {
                  "type" : "text",
                  "norms" : false
                }
              }
            },
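I will also check the index from the day the bench lines were last indexed (Nov 23rd, so presumably an index named as below), since that daily index should be the one actually holding the mean/stddev fields:

GET filebeat-6.5.1-2018.11.23/_mapping/field/mean,stddev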
