IPAM integration

Hi,

Is there anyone out there combining an IPAM with Kibana, ES, or Logstash?

I'm looking for a way to group syslog messages and maybe Beats data based on host or subnet.

We have one or more VLANs per application in an IPAM that I can talk to over a REST API.
There are 1175 subnets, each with a description, VLAN tag, netmask, etc.

For example, out of IPAM I can symlink the subnet to its description on my syslog server:

DRAC_Management -> /opt/syslog-ng/logs/192_168_255_44
ASA_Management -> /opt/syslog-ng/logs/192_168_254_84
VMware_management -> /opt/syslog-ng/logs/192_168_255_0
OpenStack_management -> /opt/syslog-ng/logs/192_168_255_4
Apache_roll_A09_2 -> /opt/syslog-ng/logs/192_168_255_8
Product_X_web -> /opt/syslog-ng/logs/192_168_255_12
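
The symlink step itself is roughly the following (a rough sketch; the /api/subnets endpoint, the auth token, and the description/subnet/mask field names are placeholders for whatever your IPAM actually exposes):

```python
#!/usr/bin/env python3
"""Pull subnets from the IPAM REST API and maintain the description ->
log-directory symlinks shown above. Endpoint and field names are
hypothetical; adjust to whatever your IPAM returns."""
import os
import requests

IPAM_URL = "https://ipam.example.com/api/subnets"   # placeholder endpoint
LOG_ROOT = "/opt/syslog-ng/logs"

resp = requests.get(IPAM_URL,
                    headers={"Authorization": "Bearer <token>"},
                    timeout=30)
resp.raise_for_status()

for entry in resp.json():
    # e.g. {"description": "DRAC_Management", "subnet": "192.168.255.44", "mask": 30}
    target = os.path.join(LOG_ROOT, entry["subnet"].replace(".", "_"))
    link = os.path.join(LOG_ROOT, entry["description"])
    os.makedirs(target, exist_ok=True)
    if not os.path.islink(link):
        os.symlink(target, link)
```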

What's the best way to create these groupings?

I create my own tags to identify pipeline inputs.

For example, all my routers and switches come in on their own dedicated Logstash input. I apply a tag giving it a pipeline name.

I do the same for ESX hosts, firewalls, Web servers, Windows desktop/servers, Wireless controllers, etc.

Currently, I have 22 dedicated inputs created.

They all get output into the same index in Elasticsearch. But, with those unique tags applied at input time, I can track the life of the message and it gives me lots of options for searching.

Sounds OK for a small enterprise, but I have 5000+ servers and even more service instances (like Apache) that could be sending data.

Looking for a programmatic way to handle this without setting up a separate pipeline config for every tag.
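
To make "programmatic" concrete, what I'm picturing is flattening the IPAM data into one subnet -> group dictionary that a single pipeline could consult, rather than a pipeline per tag. A rough sketch, with the endpoint, field names, and output path all placeholders:

```python
#!/usr/bin/env python3
"""Flatten the IPAM data into a single subnet -> group dictionary that one
pipeline (or ingest step) can consult, instead of a pipeline per tag.
Endpoint, field names, and output path are placeholders."""
import ipaddress
import json
import requests

resp = requests.get("https://ipam.example.com/api/subnets",
                    headers={"Authorization": "Bearer <token>"},
                    timeout=30)
resp.raise_for_status()

mapping = {}
for entry in resp.json():
    # Normalise the subnet, e.g. 192.168.255.44/30.
    net = ipaddress.ip_network(f'{entry["subnet"]}/{entry["mask"]}', strict=False)
    mapping[str(net)] = entry["description"]          # e.g. DRAC_Management

# Dump it somewhere a single enrichment step can pick it up.
with open("/etc/logstash/subnet_groups.json", "w") as fh:
    json.dump(mapping, fh, indent=2)
```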

I understand and can see where you are coming from. Mine is very programmatic.

Each location has a standardized syslog-ng server that does all my heavy lifting for syslog messages, and each location has a central Logstash target for Beats and other miscellaneous inputs.

I have the same configs on every server, except for the locally significant configuration items (subnets, city name, etc.).

I am also not suggesting creating a pipeline for every tag. Do it categorically. All my network switches and routers have one, all the ESX hosts have one, etc. The more diverse you are on the front end, the more you have to work with on the back end later.

That makes a little more sense. Let me echo what I think you said and how I
would apply it, to see if we're on the same page.

Scope: 10,000+ log sources, 30-50k EPS.

Syslog-ng is able to split and direct data to different destinations based
on programmatic rules.
Have a separate log destination for every high-level group; say there are
6,000 unique product+instance combinations. That's a Logstash farm in each
DC that needs to support 10,000 or more unique port destinations.

A Redis->Logstash farm could then receive each separate stream, apply
additional tags, and load into ES, with Redis used where data-critical,
high-traffic requirements demand it.

Programmable with Chef, but it seems a bit daunting. I guess what I was
looking for is a backdoor mechanism to link the two documents in ES.

For example, an asset in asset_index, running an OS, sitting on a network in
network_index, and running a product in productX_index, is logging data,
where the asset, network, and product data also exist in ES indices.

The root of the question is how we link data related to the same system but
coming from multiple sources.

It seems the answer needs to be at the point of ingestion, but that has some
scale concerns.
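
To illustrate the kind of linking I mean, here's a rough sketch of enriching one event at ingest time by looking up the related documents. The index names are the hypothetical ones above; the field names, and the assumption that the network docs keep their subnets in an ip_range field, are mine; client calls are elasticsearch-py 8.x style:

```python
#!/usr/bin/env python3
"""Sketch of ingest-time linking: before a log event is indexed, look up the
matching docs in network_index / asset_index and copy their keys onto the
event. Index and field names are hypothetical."""
from elasticsearch import Elasticsearch

es = Elasticsearch("https://elastic.example.com:9200")

def enrich(event: dict) -> dict:
    # Find the network doc whose subnet covers the event's source IP
    # (works if 'subnets' is mapped as an ip_range field).
    net = es.search(index="network_index",
                    query={"term": {"subnets": event["source_ip"]}},
                    size=1)["hits"]["hits"]
    if net:
        event["network_id"] = net[0]["_id"]
        event["group"] = net[0]["_source"].get("description")

    # Same idea for the asset: join on hostname.
    asset = es.search(index="asset_index",
                      query={"term": {"hostname": event["host"]}},
                      size=1)["hits"]["hits"]
    if asset:
        event["asset_id"] = asset[0]["_id"]
    return event

es.index(index="syslog-demo",
         document=enrich({"host": "web01",
                          "source_ip": "192.168.255.13",
                          "message": "example"}))
```

A search per event obviously won't hold up at 30-50k EPS, which is exactly the scale concern; it would have to become a cached lookup or get pushed down into the ingest layer.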

Yeap, now I see what you are referring to.

I had a similar issue with multiple sources from one target.

We have Windows servers with Winlogbeat on them, as well as some applications that do their own syslogging, and finally some Filebeats for "important" log files. I don't have 5000-plus like you; I am more in the range of 900-1000.

In the end, the single device is sending out to three different receivers. In my case, I've configured Winlogbeat and Filebeat to use the same index as my Logstash index. So I have 3 points of ingestion for that one machine, each with its own pipeline for processing, but all end up in the same index for dashboarding, reporting, etc.
