Best way to view large centralized syslog (CentOS) using ELK Docker images

Hello,

This is my first post. I have about 10GB/day of centralized syslog from CentOS systems. The log files can be accessed locally or over an NFS mount. I need at least two fields (timestamp and host) to filter the logs. There are fewer than five users in total.

  1. What kind of setup would you recommend?
    What is the minimum hardware requirement? Multiple ingest/data nodes?
    Will Logstash be the bottleneck?
    How do I scale up if I have more logs in the future?

  2. Since I need to filter data using host and timestamp, I assume Filebeat cannot be used.
    Do I have to use Logstash? If so, is grok the only way to add the timestamp and host fields? (A sketch of what I have in mind follows this list.)
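
For context, here is roughly the pipeline I have in mind - a minimal sketch, where the input path, index name, and Elasticsearch address are placeholders for my setup:

```
input {
  file {
    # Placeholder path: where the centralized syslog files land (local or NFS)
    path => "/var/log/remote/*.log"
    start_position => "beginning"
  }
}

filter {
  # Pull the syslog timestamp and originating host out of each line
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{GREEDYDATA:syslog_message}" }
  }
  # Use the parsed timestamp as the event's @timestamp
  date {
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```

From what I have read, grok is not the only option (there is also the dissect filter, and the syslog input parses the standard header by itself when logs arrive over the network), but is this the usual approach?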

Thanks.

May I suggest you look at the following resource about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

Thank you for the link.
Let's say I want to set the shard size to 25GB and the number of shards per GB of heap to 20, but where do I actually define the shard size and the number of shards per GB?
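
To make the question concrete: am I right that there is no "shard size" setting as such, and that what I actually set is the shard count per index, e.g. via a template like this (a sketch using the 5.x template API; the index pattern is a placeholder)?

```
PUT _template/syslog
{
  "template": "syslog-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
```

So the 25GB would be a target I steer toward by choosing the shard count based on my daily volume (~10GB/day), and the 20 shards per GB of heap would be an upper bound I check against, rather than settings I can configure directly?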
If I use the default settings - 5 primary shards with 1 replica each per index - for my time-based daily indices of centralized logs, should I have:
  1. 3 to 5 nodes with default settings (all with node.master/node.data/node.ingest set to true)
  2. 1 master node, two data nodes, and two ingest nodes (see the sketch below)
  3. some other layout, like 1 coordinating-only node and 3 master-eligible nodes...
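
For layout 2, I assume the roles would be split in each node's elasticsearch.yml roughly like this (a sketch using the 5.x/6.x role settings):

```
# elasticsearch.yml on the dedicated master-eligible node
node.master: true
node.data: false
node.ingest: false

# elasticsearch.yml on each dedicated data node
node.master: false
node.data: true
node.ingest: false

# elasticsearch.yml on each dedicated ingest node
node.master: false
node.data: false
node.ingest: true
```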

Regards,
