Splunk data ingestion into Elasticsearch via Logstash behind NLB

Hi, we want to integrate logs from a Splunk system into Elasticsearch via Logstash. These are the steps we followed:

  1. Create a Network Load Balancer (NLB) with Logstash as the backend target.
  2. Configure the Splunk forwarder to push data to the NLB endpoint over port 443.
  3. Configure the NLB to forward all incoming traffic to the target Logstash server on port 5044.
  4. Configure Logstash to listen on port 5044 and push events to an Elasticsearch index.

Findings:
Traffic is flowing from Splunk through the NLB to Logstash, but we are receiving only junk, meaningless messages, repeated multiple times. For example: message: 'gzip, compressed', message: 'close', message: '443', message: 'NLB Health Check', and so on.
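For reference, the health-check probes themselves can be dropped with a Logstash conditional like the sketch below (the matched values are taken from the junk messages above), but the actual Splunk event data still never shows up:

filter {
  # Drop NLB health-check probes and stray connection noise
  if [message] =~ "NLB Health Check" or [message] in ["close", "443", "gzip, compressed"] {
    drop { }
  }
}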

Questions:
Is it possible to achieve this with this setup? Does the data forwarded by Splunk have to be decrypted? If so, which plugin can help with that?

What sort of output did you use? A tcp forwarder uses a proprietary format that logstash cannot consume. If you use a syslog forwarder then it seems unlikely that port 5044 is appropriate, since that is normally used for beats.
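For example, if the Splunk side were changed to a syslog output, the logstash side would normally use a syslog (or plain tcp) input on its own port. A sketch, with an arbitrary port number:

input {
  syslog {
    # 5514 is just an example; 5044 is conventionally left for beats
    port => 5514
  }
}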

What is the splunk forwarder configuration, and what is the logstash input configuration?

@Badger Sorry for the delayed reply. We had a dependency on an external team to collect the Splunk forwarder configuration. It turns out that a TCP forwarder is used in the configuration, not syslog format. Below is the Splunk forwarder configuration, followed by the Logstash configuration. Kindly suggest the changes required to get this data ingested.

Splunk forwarder configuration:

Current outputs.conf:

[tcpout]
forwardedindex.filter.disable = false
forwardedindex.0.blacklist =
forwardedindex.0.whitelist =
forwardedindex.1.blacklist =
forwardedindex.1.whitelist =
forwardedindex.2.whitelist =
forwardedindex.2.blacklist =
#forwardedindex.1.whitelist = test
#indexAndForward = true
maxQueueSize = 32MB
defaultGroup=EXT_cluster_or1,customer-server1

[tcpout:EXT_cluster_or1]
maxQueueSize = 32MB
sslCertPath = $SPLUNK_HOME/etc/auth/uf.pem
sslPassword =
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.crt

# May want to change this
sslVerifyServerCert = false
autoLBFrequency = 60
forceTimebasedAutoLB = true
useACK = false
server = splunk-ext-d.xyz.net:9998

[tcpout:customer-server1]
# Load balancer IPs
server = logstash.abc.com:443,54.123.10.100:443,52.123.86.115:443
sendCookedData = false
disabled = false
clientCert = /opt/splunkforwarder/etc/auth/mycerts/99bd8abc715cb7f7.pem
sslVerifyServerCert = true
server = logstash.abc.com:443

Current Logstash conf:
input {
  tcp {
    port => 5044
  }
}

output {
  file {
    path => "/etc/logstash/conf.d/out.log"
  }
  elasticsearch {
    hosts => ["https://abc-elk-es.abc.com:443"]
    index => "splunkindex"
    user => "elkadmin"
    password => "***********"
    ilm_enabled => false
  }
  stdout { codec => rubydebug }
}

As I said, a tcp forwarder uses a proprietary format that logstash cannot consume.
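If the external team can change that output group to syslog (as mentioned above), the outputs.conf side might look something like the sketch below. The group name and port are placeholders, and I believe syslog output requires a heavy forwarder rather than a universal forwarder:

[syslog]
defaultGroup = logstash_syslog

[syslog:logstash_syslog]
server = logstash.abc.com:5514
type = tcp

That would pair with a syslog input on the logstash side like the one sketched earlier in the thread.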

This blog post from elastic talks about exporting data from Splunk to do the migration. It describes doing a manual export, but you could use the API or perhaps even a REST call to do it periodically. (You can use an exec input to run /bin/true and add a schedule option to do this. Then, in the pipeline, the resulting empty event goes through an http filter that fetches data from Splunk.)
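A sketch of that approach, with a placeholder Splunk REST endpoint, search, schedule, and credentials (none of these are taken from your environment, so adjust them):

input {
  exec {
    # Emits an (essentially empty) event on each scheduled run
    command => "/bin/true"
    schedule => "*/15 * * * *"
  }
}
filter {
  http {
    # Placeholder endpoint and credentials for the Splunk search REST API
    url => "https://splunk.example.com:8089/services/search/jobs/export"
    verb => "GET"
    query => {
      "search" => "search index=main earliest=-15m"
      "output_mode" => "json"
    }
    user => "svc_account"
    password => "changeme"
    target_body => "[splunk][results]"
  }
}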

Vector is a third-party tool that can take in data from a Splunk HTTP Event Collector (or beats, or syslog, or datadog, etc.) and forward it to elasticsearch (or datadog, or Splunk, etc.). I have never used it and cannot speak to its quality. This would still require configuring a new sink in Splunk. That said, if your external team are willing to configure an http sink then you may be able to consume it directly into logstash.
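If they do set up an HTTP output, a minimal logstash http input would be something like this (the port is an assumption, chosen to mirror Splunk HEC's default of 8088):

input {
  http {
    # Assumption: listen where an HEC-style client would post; adjust as needed
    port => 8088
    codec => json
  }
}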
