Hi, we want to integrate logs from a Splunk system into Elasticsearch via Logstash. These are the steps we followed:
Create a Network Load Balancer (NLB) URL with Logstash as the backend
Configure the Splunk forwarder to push data to the NLB URL over port 443
The NLB is configured to forward all incoming traffic to the target Logstash server over port 5044.
Logstash is configured to receive all traffic on port 5044 and push it to an Elasticsearch index (a simplified sketch of the pipeline is shown below).
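To illustrate, a simplified sketch of the pipeline, assuming a plain tcp input on 5044; the hostname and index name are placeholders, not our actual values:

```
input {
  # Receive whatever the NLB forwards on port 5044 (assumes a plain tcp input).
  tcp {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.com:9200"]   # placeholder host
    index => "splunk-logs-%{+YYYY.MM.dd}"                 # placeholder index name
  }
}
```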
Findings:
Traffic is flowing from Splunk to the NLB and on to Logstash, but we are only receiving junk/meaningless messages, e.g. message: 'gzip, compressed', message: close, message: 443, message: NLB Health Check, etc., repeated multiple times.
Questions:
Is it possible to achieve this with the setup above? Does the Splunk-forwarded data have to be decrypted? If so, which plugin can help with that?
What sort of output did you use? A tcp forwarder uses a proprietary format that logstash cannot consume. If you use a syslog forwarder then it seems unlikely that port 5044 is appropriate, since that is normally used for beats.
What is the splunk forwarder configuration, and what is the logstash input configuration?
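For illustration, a conventional beats input versus a syslog input in Logstash would look something like the following; the ports shown are just the usual conventions, not necessarily what you have configured:

```
input {
  # Conventional beats input; port 5044 is the usual choice for Filebeat and friends.
  beats {
    port => 5044
  }
  # A syslog forwarder would instead need a syslog (or tcp/udp) input,
  # typically on a different port (5514 here is an arbitrary example).
  syslog {
    port => 5514
  }
}
```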
@Badger Sorry for the delayed reply. We had a dependency on an external team to collect the Splunk forwarder configuration. It turns out that a tcp forwarder is used in the configuration, not the syslog format. Below is the Splunk forwarder configuration, followed by the Logstash configuration. Kindly suggest the changes required to get this data ingested.
As I said, a tcp forwarder uses a proprietary format that logstash cannot consume.
This blog post from elastic talks about exporting data from Splunk to do the migration. It describes doing a manual export, but you could use the API, or perhaps even a REST call, to do it periodically. (You can use an exec input to run /bin/true and add a schedule option to do this. Then, in the pipeline, the empty event would go through an http filter that fetches data from Splunk.)
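A rough sketch of that approach, using the exec input and http filter; the Splunk endpoint, search, and credentials are hypothetical placeholders you would need to adjust:

```
input {
  # Emit an (essentially) empty event on a schedule, just to trigger the pipeline.
  exec {
    command => "/bin/true"
    schedule => "*/5 * * * *"     # cron-style schedule: every five minutes
  }
}
filter {
  # Fetch data from Splunk on each scheduled event.
  # URL, search string and credentials below are placeholders.
  http {
    url => "https://splunk.example.com:8089/services/search/jobs/export?search=search%20index%3Dmain%20earliest%3D-5m&output_mode=json"
    verb => "GET"
    user => "splunk_user"
    password => "${SPLUNK_PASSWORD}"
    target_body => "splunk_response"   # store the Splunk reply in this field
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.com:9200"]   # placeholder
    index => "splunk-export-%{+YYYY.MM.dd}"               # placeholder
  }
}
```

You would still need to parse the Splunk response (e.g. with a split or json filter) before indexing, depending on what the export returns.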
vector is a third-party tool that can take in data from a Splunk http collector (or beats, or syslog, or datadog, etc.) and forward it to elasticsearch (or datadog, or Splunk, etc.). I have never used it and cannot speak to its quality. This would still require configuration of a new sink in Splunk. That said, if your external team is willing to configure an http sink then you may be able to consume it directly into logstash.
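If an http sink were configured on the Splunk side, the Logstash side could be as simple as an http input, along these lines; the port and index name are assumptions:

```
input {
  # Listen for HTTP POSTs from the Splunk-side http sink (port is an assumption).
  # JSON request bodies are decoded by the http input's default content-type handling.
  http {
    port => 8080
  }
}
output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.com:9200"]   # placeholder
    index => "splunk-http-%{+YYYY.MM.dd}"                 # placeholder
  }
}
```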