I'm running a honeypot in a sandbox environment for testing purposes before I expose it to the Internet. Both the honeypot and Elastic seem to run fine on all fronts. The main issue I'm having is that despite a fair amount of storage space (256 GB), the logs fill it up very quickly (in about 2 days!), and then the honeypot crashes (obviously).
The main culprit is Suricata, so I deleted the eve.json file, which was clogging up the system. Now other eve.json files are filling up quickly as well. I turned off Suricata as a stopgap, but it turns back on by itself unless I disable it completely, which renders the honeypot unusable. Similarly, changing the logrotate settings to dump files after a few hours is not a solution, since I need time to analyze the findings.
So, my question is: how can I minimize the volume of logs being produced so that I can run T-Pot normally without having to dump files so often? Thank you in advance for your help.
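For what it's worth, what I imagine the fix looks like is trimming the event types that Suricata writes to eve.json, roughly like the sketch below. I'm not sure where T-Pot keeps its suricata.yaml or whether the Suricata container even honours edits to it, so treat the file location and the exact list of types as my guesses:

```yaml
# suricata.yaml (location inside T-Pot is an assumption on my part)
outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: eve.json
      types:
        - alert          # keep actual alerts
        # - http         # disabled: per-request logging is very chatty
        # - dns          # disabled
        # - tls          # disabled
        # - flow         # disabled: flow records seem to be the biggest space hog
        # - stats        # disabled
```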
Hey Leandro,
Yeah, the folks behind the honeypot say it's not an issue with the honeypot, and the peeps at Elastic say it's not an issue with Elastic... lol I guess you're both right, as both apps are working as intended. I'm starting to think the problem is that the honeypot is reading all traffic on the network, not just the honeypot's own, hence the exorbitant amount of data.
To answer your question, the honeypot captures attacks that I'm performing and then displays them in Elastic. I also noticed that there are alleged attacks even on a fresh install of the honeypot, without me attacking it at all.
I've tried doing that, but it's not feasible for me to have to delete logs that often just to free up space. In my answer to Leandro, I said that the problem may be that the honeypot is capturing all traffic on the network, not simply what is intended.
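To make that concrete, this is the sort of capture filter I'm picturing in suricata.yaml so that Suricata only looks at traffic to and from the honeypot itself rather than the whole segment. The interface name and address below are placeholders for my setup, and I haven't confirmed where T-Pot exposes this setting or whether its container overrides it:

```yaml
# suricata.yaml capture section (interface and address are placeholders)
af-packet:
  - interface: eth0
    # only capture traffic to/from the honeypot's own address,
    # instead of everything visible on the segment
    bpf-filter: "host 192.0.2.10"
```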