Running multiple endpoints on the same node

Big picture: I want to run at least 2 nodes, put a load balancer in front of them, and have requests fire in across the nodes. I'm not sure if there is anything clever I have to do to ensure what I believe is called at-least-once delivery, but that hurdle is a little down the road for me.

For the moment I'm just trying to get it running with the one node, on a proof-of-concept basis.

I've got Logstash set up (5.6.6, so it matches my Elasticsearch version) on a VM. On that VM, in /etc/logstash/conf.d/ I've created 2 configs for endpoints. These are:

    input {
      tcp {
        port => 6000
        type => syslog
      }
      udp {
        port => 6000
        type => syslog
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.100.4:9200"]
        user => "logstashuser"
        password => "password"
        index => "test_b-%{+YYYY.MM.dd}"
        document_type => "b_request"
      }
    }

and

    input {
      tcp {
        port => 5000
        type => syslog
      }
      udp {
        port => 5000
        type => syslog
      }
    }
    output {
      elasticsearch {
        hosts => ["192.168.100.4:9200"]
        user => "logstashuser"
        password => "password"
        index => "test_a-%{+YYYY.MM.dd}"
        document_type => "a_request"
      }
    }

I've then started Logstash as a service.

In keeping with some guidance I found, I've connected to the Logstash box with telnet, one client on port 5000 and the other on port 6000.

In Elasticsearch I've got 2 new indexes: test_a-2018.04.13 & test_b-2018.04.13.

I've typed data into one but not the other, yet the document count goes up for both. This happens regardless of which one I try to interact with.

I've deleted both indexes and entered a single line to each, and that line gets sent to both indexes. The behaviour I'm expecting, though, is that traffic sent to port 5000 only appears in test_a and traffic sent to port 6000 only appears in test_b.

type => syslog was in an example I found online, so I just copied and pasted that.
document_type => "a_request" & document_type => "b_request" were based on a post I found about posting to indexes other than logstash-[date], where they used my_request. Without the document type, I found nothing was posted to the index, so I used a different made-up document type for each. There are no definitions created for either, so my understanding is that Elasticsearch will auto-generate one on the first request.
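For what it's worth, you can check whether a mapping has been auto-generated by querying the index directly. A sketch, reusing the host, credentials, and index name from the configs above (adjust to your own values):

    curl -u logstashuser:password '192.168.100.4:9200/test_a-2018.04.13/_mapping?pretty'

If the index exists, this returns the field definitions Elasticsearch generated from the first documents it received.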

Any assistance gratefully received.

Logstash reads all configuration files in /etc/logstash/conf.d and concatenates them. Therefore you're effectively loading Logstash with two tcp inputs, two udp inputs, and two elasticsearch outputs. If you're going to run two instances of Logstash you're going to want to have two directories with config files.
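If you'd rather stay on a single instance, one common workaround (just a sketch, using your ports and index names from above) is to give each input a distinct type and route events with conditionals in a single output section:

    input {
      tcp {
        port => 5000
        type => "a_request"
      }
      tcp {
        port => 6000
        type => "b_request"
      }
    }
    output {
      if [type] == "a_request" {
        elasticsearch {
          hosts => ["192.168.100.4:9200"]
          index => "test_a-%{+YYYY.MM.dd}"
        }
      } else {
        elasticsearch {
          hosts => ["192.168.100.4:9200"]
          index => "test_b-%{+YYYY.MM.dd}"
        }
      }
    }

With conditionals, the concatenated configuration no longer matters, because each event only reaches the output whose condition it matches.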

Thanks Magnus, that makes sense with what I'm seeing then. I've tried creating
/etc/logstash/conf.d/configa/endpoint5000.conf and /etc/logstash/conf.d/configb/endpoint6000.conf, but that didn't work. I also tried adding both as path.config entries in logstash.yml, but that didn't help either. I've found references to a pipelines.yml file but I can't see where it is. Do you have any information on how I go about creating multiple directories and getting Logstash to recognise them as such?

Also, to reiterate, I'm running 5.6.6 in case that is of relevance.

Multiple pipelines in a single Logstash instance requires 6.0, but I suppose the whole point of the exercise is to run multiple instances.
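On 5.6 that means starting two separate Logstash processes and pointing each at its own config directory with `--path.config`; each process also needs its own `--path.data` so the instances don't clash over the same data directory. A sketch, assuming the directory names from your post (the data paths here are just illustrative):

    /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/configa --path.data /var/lib/logstash-a
    /usr/share/logstash/bin/logstash --path.config /etc/logstash/conf.d/configb --path.data /var/lib/logstash-b

If you start them via a service manager rather than by hand, you'd create two service units, each passing its own pair of flags.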

tried adding both as path.config entries in the logstash.yml

What do you mean? Please don't describe what you're doing when concrete and unambiguous examples are possible.
