Kibana: What's Next?

Hey guys,

Thanks a lot for the great support you've given over the past couple of days; I truly appreciate it.

I was finally able to get a sample filebeat dashboard visible in Kibana.

Now, what steps should I take next to go from this output:

to this result (essentially, many links monitored on the same dashboard, all visible together; when I click one of them, it appears alone, as shown below):

Even though the image is from the SolarWinds Orion web interface, I want the exact same output in Kibana on my localhost server for now, so that I can apply the same steps afterwards on my real server and deploy it in the network.

Looking forward to your great support.

Thanks,
Safa

Hi Safa,

Great to hear you got Kibana running. :thumbsup: Let me try to deduce some details from the screenshots you provided. Please correct me if I got something wrong.

  • You are using Filebeat to parse Linux system logs.
  • You want to create a dashboard with the following properties:
    • It displays a line graph plotting the average packet loss of individual network interfaces over time.
    • It can be filtered to only show the graph of a single interface with a click.

Assuming I got that right, that absolutely sounds possible.

If the packet data are not already indexed in Elasticsearch, monitoring must first be set up to do so, e.g. via Metricbeat:
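As an illustration, a minimal `metricbeat.yml` along these lines would ship per-interface network metrics to a local Elasticsearch. The module choice, period, and hosts here are assumptions about your environment, not a drop-in config:

```yaml
# metricbeat.yml -- minimal sketch; adjust module, period, and hosts to your setup
metricbeat.modules:
  - module: system
    metricsets: ["network"]   # per-interface counters, including dropped packets
    period: 10s

output.elasticsearch:
  hosts: ["localhost:9200"]
```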

However the data got into Elasticsearch, Kibana can then be configured to display the chart:

  • Create an index pattern in Kibana that matches the respective indices.
  • Create a line chart visualization based on that index pattern that...
    • ... uses the "average" aggregation on the packetloss field as a metric on the y-axis
    • ... uses the "date histogram" aggregation on the correct timestamp field on the x-axis
    • ... uses the "terms" aggregation on the field containing the network interface name to split the x-axis buckets
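The aggregations behind such a visualization correspond roughly to the following Elasticsearch request body. The field names `packetloss`, `@timestamp`, and `interface.name` are assumptions — substitute whatever your documents actually use (and on older Elasticsearch versions, `fixed_interval` is spelled `interval`):

```json
{
  "size": 0,
  "aggs": {
    "per_interface": {
      "terms": { "field": "interface.name" },
      "aggs": {
        "over_time": {
          "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" },
          "aggs": {
            "avg_packetloss": { "avg": { "field": "packetloss" } }
          }
        }
      }
    }
  }
}
```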

In the above description I've made several assumptions about your data source and setup. If you can provide more details about the way the data are indexed in Elasticsearch, I would be happy to assist with more concrete hints.

Thanks a lot for the great support!

The thing is, we have so many clients that I think it would be troublesome to install an agent on every single machine.

We do, however, have IPSLA on all clients.

Would that be possible?

I'm not familiar with IPSLA, but a quick glance at the documentation suggests that it supports SNMP. If that is accurate, you might be able to use Logstash's SNMP trap input in combination with the Elasticsearch output to get the network data into Elasticsearch and subsequently display them in Kibana as I described before.
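A minimal Logstash pipeline for that could look like the sketch below. The port, community string, hosts, and index name are all assumptions you'd adjust to your environment:

```
# logstash.conf -- sketch only
input {
  snmptrap {
    port      => 1062        # the plugin's default; binding to 162 requires root
    community => "public"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "snmp-%{+YYYY.MM.dd}"
  }
}
```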

If SNMP is not an option, you might be able to create a pipeline using external tools, log files, and Logstash to index your network data. That heavily depends on your specific environment.

IPSLA test results are available via SNMP, though not as traps.

To fetch IPSLA test results you will need an SNMP poller. There are two tables from which to collect results, a "latest result" table (contains only the latest result of each configured test) and a "history" table (contains the last two hours of results). They each have their advantages and disadvantages.

A solution that should work is to use something like collectd to do the SNMP polling, send the results to Logstash via Redis, and let Logstash do any required processing before sending them to Elasticsearch.
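The Logstash side of such a pipeline might look like this sketch. It assumes collectd (or an intermediate script) pushes JSON events onto a Redis list; the list name `ipsla` and the index name are hypothetical:

```
# logstash.conf -- sketch; assumes JSON events on a Redis list
input {
  redis {
    host      => "localhost"
    data_type => "list"
    key       => "ipsla"      # hypothetical list name
    codec     => "json"
  }
}
filter {
  # any required processing, e.g. renaming fields or computing packet loss
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "ipsla-%{+YYYY.MM.dd}"
  }
}
```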

Rob

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.