Kibana Visualization Basics

Hi Team,
Please help me understand Kibana visualizations; Elasticsearch queries are also new to me.

Here is what I am trying to do. I uploaded the sample data below (Filebeat -> Elasticsearch):

192.0.1.111|"Company-1"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|200|9578|"-"|"Mozilla"|userid-1|count
192.0.1.112|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|400|9578|"-"|"Mozilla"|userid-2|count
192.0.1.115|"Company-1"|SFO|[25/Mar/2016:13:00:01 -0700]|"POST Request***"|200|9578|"-"|"Chrome"|userid-3|count
192.0.1.114|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"PUT Request***"|401|9578|"-"|"Chrome"|userid-4|count
192.0.1.113|"Company-2"|SFO|[25/Mar/2016:13:00:01 -0700]|"PUT Request***"|401|9578|"-"|"IE"|userid-5|count

Now I am trying to write queries:

Query-1: How many users per company

This is what I am doing in Kibana:

Discover:
Query=source: "/data/custom/my-test.log"

Result Set:
March 25th 2016, 11:07:45.724 source:/data/custom/my-test.log @timestamp:March 25th 2016, 11:07:45.724 beat.hostname: count:1 fields: - input_type:log message:192.0.1.115|"Company-1"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|200|9578|"-"|"Chrome"|userid-3|count offset:224 type:log _id:AVOu9fYHr2iUKFBikyIA _type:log _index:filebeat-2016.03.25 _score: -
March 25th 2016, 11:07:45.724 source:/data/custom/my-test.log @timestamp:March 25th 2016, 11:07:45.724 beat.hostname: count:1 fields: - input_type:log message: offset:553 type:log _id:AVOu9fYHr2iUKFBikyID _type:log _index:filebeat-2016.03.25 _score: -
March 25th 2016, 11:07:45.724 source:/data/custom/my-test.log @timestamp:March 25th 2016, 11:07:45.724 beat.hostname: count:1 fields: - input_type:log message:192.0.1.114|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|401|9578|"-"|"Chrome"|userid-4|count offset:335 type:log _id:AVOu9fYHr2iUKFBikyIB _type:log _index:filebeat-2016.03.25 _score: -
March 25th 2016, 11:07:45.724 source:/data/custom/my-test.log @timestamp:March 25th 2016, 11:07:45.724 beat.hostname: count:1 fields: - input_type:log message:192.0.1.112|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|400|9578|"-"|"Mozilla"|userid-2|count offset:112 type:log id:AVOu9fYHr2iUKFBikyH _type:log _index:filebeat-2016.03.25 _score: -
March 25th 2016, 11:07:45.724 source:/data/custom/my-test.log @timestamp:March 25th 2016, 11:07:45.724 beat.hostname: count:1 fields: - input_type:log message:192.0.1.113|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|401|9578|"-"|"IE"|userid-5|count offset:446 type:log _id:AVOu9fYHr2iUKFBikyIC _type:log _index:filebeat-2016.03.25 _score: -
March 25th 2016, 11:07:45.723 source:/data/custom/my-test.log @timestamp:March 25th 2016, 11:07:45.723 beat.hostname: count:1 fields: - input_type:log message:192.0.1.111|"Company-1"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|200|9578|"-"|"Mozilla"|userid-1|count offset:0 type:log _id:AVOu9fYHr2iUKFBikyH- _type:log _index:filebeat-2016.03.25 _score: -

Visualize:
Y-axis: Count
X-axis: Aggregation->Filters->"Company-1"

It gives me count=6 (it should be 2), and filtering on "Company" does not show Company-1 and Company-2 separately in the result set.
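For reference, this is the per-company distinct-user count I expect the visualization to produce, sketched in plain Python over the sample lines above (field positions are taken from the pipe-delimited format, not from any Kibana mapping):

```python
from collections import defaultdict

# the five sample pipe-delimited lines from the uploaded data
lines = [
    '192.0.1.111|"Company-1"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|200|9578|"-"|"Mozilla"|userid-1|count',
    '192.0.1.112|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|400|9578|"-"|"Mozilla"|userid-2|count',
    '192.0.1.115|"Company-1"|SFO|[25/Mar/2016:13:00:01 -0700]|"POST Request***"|200|9578|"-"|"Chrome"|userid-3|count',
    '192.0.1.114|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"PUT Request***"|401|9578|"-"|"Chrome"|userid-4|count',
    '192.0.1.113|"Company-2"|SFO|[25/Mar/2016:13:00:01 -0700]|"PUT Request***"|401|9578|"-"|"IE"|userid-5|count',
]

users_per_company = defaultdict(set)
for line in lines:
    fields = line.split("|")
    company = fields[1].strip('"')   # field 2: company name
    userid = fields[9]               # field 10: user id
    users_per_company[company].add(userid)

counts = {c: len(u) for c, u in users_per_company.items()}
print(counts)  # {'Company-1': 2, 'Company-2': 3}
```

So a correct visualization should show 2 users for Company-1 and 3 for Company-2, not 6.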

Query-2: Overall count of web browsers
Y-axis: Count
X-axis: Filter Mozilla
Filter Chrome
Filter IE

The above shows 3 bars with Mozilla=2, Chrome=2 and IE=1. But how can I make it generic? I want to avoid writing 3 different filters.
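To be clear about the generic result I am after, here it is sketched in plain Python over the sample data (position 9 in the split is the browser field):

```python
from collections import Counter

# the five sample pipe-delimited lines from the uploaded data
lines = [
    '192.0.1.111|"Company-1"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|200|9578|"-"|"Mozilla"|userid-1|count',
    '192.0.1.112|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|400|9578|"-"|"Mozilla"|userid-2|count',
    '192.0.1.115|"Company-1"|SFO|[25/Mar/2016:13:00:01 -0700]|"POST Request***"|200|9578|"-"|"Chrome"|userid-3|count',
    '192.0.1.114|"Company-2"|SEA|[25/Mar/2016:13:00:01 -0700]|"PUT Request***"|401|9578|"-"|"Chrome"|userid-4|count',
    '192.0.1.113|"Company-2"|SFO|[25/Mar/2016:13:00:01 -0700]|"PUT Request***"|401|9578|"-"|"IE"|userid-5|count',
]

# one bucket per distinct browser value, without naming each browser up front
browser_counts = Counter(line.split("|")[8].strip('"') for line in lines)
print(browser_counts)  # Counter({'Mozilla': 2, 'Chrome': 2, 'IE': 1})
```

That is: one bucket per distinct value, without me having to list Mozilla, Chrome, and IE by hand.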

Regards...

Some beats produce structured documents where there is little need for additional processing before the data is inserted into Elasticsearch. Filebeat does very little parsing and puts the entire log message in a single field, which often gives limited flexibility when exploring the data in Kibana. The output from Filebeat is in my experience therefore generally sent to Logstash for further processing and extraction of useful fields before being indexed into Elasticsearch.

If you use Logstash to separate out the components of the message, e.g. using a csv filter or grok filter, I believe it would be much easier to get what you are looking for out of Kibana.
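For this pipe-delimited format, a grok filter along these lines could pull the fields apart (the pattern and field names here are only a sketch, not tested against your data):

```
filter {
  grok {
    match => {
      "message" => '%{IP:client_ip}\|"%{DATA:company}"\|%{WORD:unit}\|\[%{HTTPDATE:logtime}\]\|"%{DATA:request}"\|%{NUMBER:response_code}\|%{NUMBER:bytes}\|"%{DATA:referrer}"\|"%{DATA:browser}"\|%{DATA:userid}\|%{GREEDYDATA:count_field}'
    }
  }
}
```

With a separate browser field indexed, your Query-2 becomes a single Terms aggregation on that field in Kibana instead of three hand-written filters, and Query-1 can use a Terms aggregation on the company field with a Unique Count (cardinality) of the user id field.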

Thanks for the suggestion.

I am working on the Logstash configuration now, but it is throwing an error.

I am sure my understanding of filters is off. If the input is "beats", is this the correct way to filter the file input?

input {
  beats {
    port => 5044
  }
  # the syslog input needs its own port; two inputs cannot both bind 5044,
  # which is one likely source of the startup error (5514 is just an example)
  syslog {
    port => 5514
  }
}
filter {
  # this condition matches only if Filebeat sets document_type: cust-log
  if [type] == "cust-log" {
    csv {
      columns => ["IP","Company","Unit","Create-Date","Request-URL","Response-Code","Zip","line","Browser","User","Count"]
      separator => "|"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://1.2.3.4:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
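As a quick sanity check on the column mapping, the csv filter's split can be mimicked outside Logstash. This is only a rough mimic in plain Python (the real csv filter also handles the quote characters, which this does not):

```python
# same column list and separator as in the csv filter above
columns = ["IP","Company","Unit","Create-Date","Request-URL","Response-Code",
           "Zip","line","Browser","User","Count"]
message = '192.0.1.111|"Company-1"|SEA|[25/Mar/2016:13:00:01 -0700]|"GET Request***"|200|9578|"-"|"Mozilla"|userid-1|count'

# pair each column name with the corresponding split value
event = dict(zip(columns, message.split("|")))
print(event["Browser"], event["User"])  # prints: "Mozilla" userid-1
```

If the names line up with the values like this, the column order in the filter is correct.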

Regards...