Our current setup is Elasticsearch, Filebeat, and Kibana on the same server for dev purposes (we're hoping to move to 3 nodes and a Gold license next year).
We are using the Cisco module (cisco.yml) in Filebeat to receive the ASA logs, and Filebeat outputs to Elasticsearch on localhost.
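For reference, the module config lives in `modules.d/cisco.yml` and looks roughly like this (the listen address and port below are placeholders, not our actual values):

```yaml
# modules.d/cisco.yml - receive ASA syslog over the network
- module: cisco
  asa:
    enabled: true
    var.input: syslog
    # Placeholder listen address/port; ours differ.
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9001
```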
The ASA sends a very high volume of logs, and we've recently found that Filebeat/Elasticsearch can't seem to keep up: logs are missing when searching in Kibana.
We are indexing at a rate of about 250 docs/s, which should probably be much higher. I've tried changing `bulk_max_size` in Filebeat, but to no avail.
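One quick way to sanity-check that rate from Kibana Dev Tools (just a sketch, assuming the default `filebeat-*` index pattern):

```
GET filebeat-*/_stats/indexing
```

Sampling `_all.total.indexing.index_total` twice, sixty seconds apart, and dividing the difference by 60 gives the average docs/s, which is roughly how we arrived at the ~250/s figure.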
We have a 100 Mb/s connection between the ASA and the server (not direct, but no link slower than 100 Mb/s along the way).
ILM: indices are set to roll over at 25 GB.
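In case it matters, the rollover condition is just the hot-phase setting; recreated from memory, the policy looks roughly like this (the policy name here is illustrative, and I've omitted the other phases):

```
PUT _ilm/policy/filebeat
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "25gb"
          }
        }
      }
    }
  }
}
```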
We are still receiving about 18-20 million documents every day, but there should be more. (That works out to roughly 210-230 docs/s on average, which matches the ~250/s indexing rate, so the pipeline looks saturated rather than just bursty.)
Does anything stick out as a bottleneck or is there anything I could try to identify where these logs are being dropped?
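One place I've thought about looking, since the ASA ships syslog over UDP: drops can happen in the kernel's receive buffer before Filebeat ever sees the datagrams. A rough check on Linux (the counters and sysctls below are standard; whether they're the culprit here is an assumption):

```shell
# Kernel UDP counters; a rising RcvbufErrors value means datagrams are
# being discarded because the socket receive buffer filled up.
awk '/^Udp:/ {print}' /proc/net/snmp

# Current kernel cap on socket receive buffer size, in bytes.
cat /proc/sys/net/core/rmem_max
```

If `RcvbufErrors` climbs during busy periods, the loss is happening before Filebeat, and raising the buffer limits would be the next thing to try.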
Here is a condensed version of our filebeat.yml:
```yaml
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: elastic
  password: ########
  bulk_max_size: 500
  workers: 6

# ============================= X-Pack Monitoring ==============================
scan_frequency: 1s
```