Confused on LS outputs and data not going to right index

Hey there,

I'm not understanding the "output" section of my config for Logstash > Elasticsearch.

I've been following this guide: How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) on Ubuntu 22.04 | DigitalOcean

When I had JUST the filebeat data/configuration in there, it was all smooth and ALL the data seemed to ship to the "filebeat-*" index...

GREAT!

But then I JUST added my Meraki config (separate config file) and all hell broke loose.

No meraki devices pointing to the LS server, and no data going in, but suddenly the filebeat data started "leaking" over to a new "logstash-*" index...

But I didn't configure it to do this - the filebeat data was already going to ES via its own output, and the new config I added shouldn't have affected it...

I then updated the meraki part of my config to output to a new index called "meraki-syslog".

Once again, the Linux server's Beats output started appearing there.

Clearly, I've misunderstood something in the config and how to arrange my outputs.

Ideally I'd like to have Beats go into just one index, and meraki into another.

Here is my config below:

I set up the Beats input/output as follows:

input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["MY_Elastic_server:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["MY_Elastic_server:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

I also ran this on my Logstash server, prior to any data arriving, to create the indexes and dashboards for filebeat:

filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["MY_Logstash_server:9200"]'

filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['MY_Logstash_server:9200'] -E setup.kibana.host=localhost:5601

And my Meraki config:

input {
  udp {
    port => 5500
    type => syslog_meraki_sa_events
  }
  udp {
    port => 5510
    type => syslog_meraki_sa_flows
  }
}

SOME GROK STUFF HERE...

output {
  elasticsearch {
    hosts => ["MY_Elastic_server:9200"]
    manage_template => false
    index => "meraki-syslog"
  }
}

Can anyone help me understand why my beats are leaking over to a new index?

Okay, so I further updated my config in an attempt to dumb it down and just make it simpler...

But I'm still getting beats stuff leaking into the Meraki index...

input {
  beats {
    port => 5044
  }
}


output {
  elasticsearch {
    hosts => ["MY_Elastic_server:9200"]
    manage_template => false
    index => "linux-server"
    pipeline => "%{[@metadata][pipeline]}"
  }
}
input {
  udp {
    port => 5500
    type => syslog_meraki_sa_events
  }
  udp {
    port => 5510
    type => syslog_meraki_sa_flows
  }
}

SOME GROK STUFF HERE...

output {
  elasticsearch {
    hosts => ["MY_Elastic_server:9200"]
    manage_template => false
    index => "meraki-syslog"
  }
}

If you have both of these config files in the conf.d directory and are running Logstash from that, it merges them into a single pipeline: every event from every input runs through every filter and is sent to every output. That is why you are seeing this "leakage".
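To illustrate what the merged pipeline implies: if you wanted to stay with a single pipeline, you would have to route events to the right output with conditionals. A hedged sketch, assuming the Meraki events carry the `type` values set by the udp inputs above and the Beats events do not:

```conf
# Sketch only: conditional routing inside one merged pipeline.
# Assumes `type` is set by the udp inputs (as in the Meraki config)
# and is absent on Beats events.
output {
  if [type] in ["syslog_meraki_sa_events", "syslog_meraki_sa_flows"] {
    elasticsearch {
      hosts => ["MY_Elastic_server:9200"]
      manage_template => false
      index => "meraki-syslog"
    }
  } else {
    elasticsearch {
      hosts => ["MY_Elastic_server:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```

The same conditionals would also be needed around the filters, which is why separate pipelines are usually the cleaner option.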

If this is how you are running things, then take a look at Multiple Pipelines | Logstash Reference [8.1] | Elastic


Ahh I see - thank you very much warkolm - I'll give that a try today and report back 🙂

Excellent, this worked for me, many thanks!

- pipeline.id: filebeat
  path.config: "/etc/logstash/conf.d/02-beats-input.conf"
  pipeline.workers: 2
- pipeline.id: meraki-syslog
  path.config: "/etc/logstash/conf.d/32-meraki-syslog.conf"
  pipeline.workers: 6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.