Customized Visualization

I have a query as I've started using Kibana and have built a couple of simple visualizations and dashboards. Now I'm trying to generate some more complex visualizations from the data/logs I have.

Basically, we generate a log entry for each update a user performs in our app, and the log lines look something like this: "[User ID : hussain] Got UpdateStatus:SUCCESS".

Currently I'm able to discover the logs and generate a visualization of the total updates performed by all users with a simple query, "UpdateStatus*SUCCESS", which gives the count of successful updates across all users.

However, what I'm trying to get is a visualization that shows the update count against each user, displayed as a pie chart/bar chart etc. I'm finding it difficult to query and extract the data in that form.

Any thoughts or suggestions are appreciated.

Regards,
Hussain

How are you breaking down the message in ES?

Hi Mark, the messages are as I mentioned. I'm not familiar with how to break the messages down in ES.

So you just pass that in directly?

I'd suggest you look at breaking it into separate fields, i.e. one for the user ID and one for the status. You can use Logstash for that.

Mark, Logstash is configured for our app's Tomcat logs, and I believe that is why I'm able to see those logs in Kibana and create dashboards and visualizations.

I believe I'm passing it in directly.
Can you share any examples or methods for how to break it up?

The best place to start is https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
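For a line like yours, a filter along these lines should do it (a sketch — I'm assuming the whole line arrives in the message field, and user_id / update_status are just field names I've picked):

grok {
  match => { "message" => "\[User ID : %{USERNAME:user_id}\] Got UpdateStatus:%{WORD:update_status}" }
}

Once those fields are indexed, your pie or bar chart in Kibana is just a terms aggregation on user_id, filtered on update_status:SUCCESS.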

Thanks Mark for the quick and continued support, much appreciated. I had a look at the page you shared and also did some googling on the same.

A stupid question: is grok part of Logstash, or do I need to ask our DevOps/TechOps team to install something for this?

It's part of Logstash :slight_smile:

Hey Mark,

Currently I have the Kibana interface with my logs (assuming Logstash is installed/configured for them). So where in the Kibana interface should I write the grok filter? :frowning:

Regards,
Hussain

It happens in Logstash, not Kibana, as it needs to be separated before it goes into Elasticsearch.

Hello Mark, Thank you for the support so far.

Can you share some more insight into how we can separate the logs before they go to Elasticsearch/Kibana?

Basically, our production environment is monitored and controlled by a different team, and we don't have full permissions on those servers. We (as the development team) have only read access to the production servers.

Configuration of Logstash is also done by the TechOps/DevOps team, so I need to ask them to do the separation, but we need to give them instructions on what needs to be done.

Sorry, I'm very new to this ELK stack; my questions might be very basic.

Regards,
Hussain

Hello Mark, after doing some research and learning, I've found that our Apache app's custom logs are being parsed and filtered into different fields, which are available in Kibana as follows:
@timestamp
@version
_id
_index
host
message
etc.

So, what I feel is that I need to further parse the message field with another custom filter to get what I'm expecting, isn't it? I've attached a snapshot for your reference; I've wiped out the host name and user ID.

Please share your thoughts.

Hi Hussain,

As Mark said, your logs need to be parsed, with each line being analyzed and broken into fields
before being sent to Elasticsearch; this way your queries will be much more straightforward.

Usually you run a "log shipper" (a Filebeat process, for log files, in the ELK stack) on each production machine; this
shipper sends your logs to a Logstash process running on one of your machines, preferably near your ES cluster.
Filebeat will handle all the Tomcat logs produced by your application: it "remembers" where it stopped, can be restarted, and handles log file rotation.
The Logstash process, on its side, parses all the log data it receives, breaking lines into fields before sending them on to your ES cluster.

So, to recap, log ingestion is two processes: Filebeat on the production machine to ship the logs, and Logstash to receive the logs,
parse them, and send them to the ES cluster.

With each Filebeat process you have a YAML config file like this one (change the paths),
assuming Logstash is configured to listen on port 5044 (see the Logstash config file later).

filebeat.yml

# run with  filebeat-1.2.3-x86_64/filebeat -c filebeat.yml
filebeat:
  prospectors:
    -
      paths:

        - "/xxxFullPathHerexx-app/logs/server.log*"

      multiline:
        # lines that do NOT start with "[" are appended to the previous event,
        # keeping multi-line entries such as stack traces together
        #pattern: '\]$'
        pattern: '^\['
        negate: true
        match: after

      input_type: log
      document_type: beat

  registry: /xxxFullPathHerexx/registry

output:
  logstash:
    hosts: ["MyLogstashComputerHere:5044"]

logging:
  to_files: true
  files:
    path: /xxxFullPathHerexx-app/
    name: filebeat
    rotateeverybytes: 10485760
    level: error 

The Logstash config file should be something like the following
(the grok pattern is for a GlassFish application log, not Tomcat;
you have to change it to suit your log format, which also depends
on the log pattern used with log4j in your application,
and you should change the field names on the match => line).

logs-glassfish.conf

# see the ES template in logs-template.json
# to run it: logstash-2.3.4/bin/logstash -f logs-glassfish.conf
# nb: send logs with netcat:
#   nc localhost 4560 < logs/server.log

input {
  beats {
    type => "engine"
    port => 5044
  }
}

filter {
  # customize the match line to suit your own log format and field names
  grok {
    # the pipes are literal separators in the GlassFish log line, so they are escaped
    match => { "message" => "%{TIMESTAMP_ISO8601:cTime}\|%{LOGLEVEL:logLevel}\|%{DATA:application}\|%{DATA:class}\|_ThreadID=%{NUMBER:threadID};_ThreadName=%{DATA:threadName};\|%{DATA:message}" }
  }
}

output {
  # uncomment to debug (rubydebug) or to see one dot per event (dots)
  #stdout { codec => rubydebug }
  #stdout { codec => dots }
  elasticsearch {
    hosts => "http://elk1:9200"
    index => "logs-%{+YYYY.MM}"
    template => "./logs-template.json"
    template_name => "logs"
    template_overwrite => true
    flush_size => 10000
  }
}

The template.json is the mappings template to use with your logs;
you can remove the template lines if you don't want to overwrite it each time.
The idea is to extract the fields you want (like the user ID) in the match pattern.
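
As an illustration, a minimal logs-template.json could look like this (a sketch for the ES 2.x syntax used here, assuming you extract fields named user_id and update_status; adjust to your own field names):

{
  "template": "logs-*",
  "mappings": {
    "_default_": {
      "properties": {
        "user_id":       { "type": "string", "index": "not_analyzed" },
        "update_status": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}

Making user_id not_analyzed means a terms aggregation in Kibana buckets on the whole ID rather than on its individual tokens, which is what you need for a per-user pie or bar chart.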

HTH

regards,

Alain

After you've succeeded in splitting the users out of the success message:
if you want to display two metrics against each other (like the count of updates per user), I suppose you could use Timelion. This blog helped me visualize the ratio between two metrics in one graph by using the divide chain function.

timelion
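
For example (a sketch, assuming the logs have been parsed into user_id and update_status fields as discussed above), an expression like this draws one series per user, for the top ten users:

.es(q='update_status:SUCCESS', split=user_id:10)

and the divide chain function can plot a ratio in one graph, e.g. the success rate across all updates:

.es(q='update_status:SUCCESS').divide(.es(q='update_status:*'))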

Daan.