Logstash config: Elasticsearch indices not being properly populated from multiple http_poller inputs

Currently, I have two http_poller input blocks in my Logstash config, with URLs for the live and backup instances of our app. In the output section of the configuration, I'm then trying to write to two different Elasticsearch indices: logstash_http_poller and logstash_http_poller_backup. Here's the gist of what I'm doing:

    input {
      http_poller {
        urls => {
          healthendpoint => {
            method => get
            url => "https://${APP_HOST}/.../data"
            headers => {
              Accept => "application/json"
            }
            add_field => { "mode" => "LIVE" }
          }
        }
        schedule => { every => "5s" }
        codec => "json"
      }
      http_poller {
        urls => {
          dataEndpointBackup => {
            method => get
            url => "https://${HOST_BACKUP}/.../data"
            headers => {
              Accept => "application/json"
            }
            add_field => { "mode" => "BACKUP" }
          }
        }
        schedule => { every => "5s" }
        codec => "json"
      }
    }

    filter {
      split {
        field => "value"
      }
    }

    output {
      if [mode] == ["BACKUP"] {
        elasticsearch {
          hosts => ["${ELASTICSEARCH_HOST}"]
          index => "logstash_http_poller_backup"
        }
      }
      if [mode] == ["LIVE"] {
        elasticsearch {
          hosts => ["${ELASTICSEARCH_HOST}"]
          index => "logstash_http_poller"
        }
      }
    }

The problem I'm running into is that the results from both http_poller inputs are being added to each of the indices despite the conditionals. What I want is for each index to only contain logs pertaining to either the live or the backup instance. I'm not sure what's causing this, or whether this is even the best approach. I'd appreciate any input!

Any input on this? I just want to be able to specify multiple http_poller inputs, attach something like a type option to each, and then send the events to the appropriate Elasticsearch index in the output. However, even if I add conditionals in the output checking for a mode or type added in the input phase, each of the Elasticsearch indices still receives ALL the input.

What does an event look like in

    output { stdout { codec => rubydebug } }

I am surprised mode is even present, since those add_field options are at the wrong level (they should be at the same level as codec).
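For illustration, a sketch of what that correction might look like (hedged; the URL is the same truncated one from the question, and this is not the poster's actual config): add_field is an option on the http_poller plugin itself, not on an individual entry in the urls hash, so it would sit alongside codec and schedule:

    input {
      http_poller {
        urls => {
          healthendpoint => {
            method => get
            url => "https://${APP_HOST}/.../data"
            headers => {
              Accept => "application/json"
            }
          }
        }
        schedule => { every => "5s" }
        codec => "json"
        # plugin-level option, at the same level as codec and schedule
        add_field => { "mode" => "LIVE" }
      }
    }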

I actually changed my input to set

    type => "LIVE"

instead of using add_field. However, it still ends up with the same outcome. I added the rubydebug output per your suggestion, and here's a sample of it:

    {
            "status" => 200,
         "timestamp" => 1595371090,
           "request" => {
                "mbean" => "app.status:service=Health",
                 "type" => "exec",
            "operation" => "health()"
        },
          "@version" => "1",
        "@timestamp" => 2020-07-21T22:38:10.052Z,
             "value" => {
            "message" => "81.69999999999999% free disk space",
              "level" => "INFO",
                 "id" => "Disk Usage Status",
            "percent" => 0.817
        }
    }

I don't really see much beyond what's coming directly from the http_poller inputs. I'd also add that if I change my conditionals in the output to something like

    if ..... {
    }
    else if ... {
    }
    else if .... {
    }
    else {
    }

then only the ES index from the first conditional is ever created, even if I've added different fields, modes, or types in the input phase to evaluate later.

That event does not have either [mode] or [type]. I cannot think of any way those events could be written to either elasticsearch output by the configuration you showed, much less to both. Is it possible you are pointing path.config at a directory, and there is another file in it (maybe an old copy of the config) that has no conditionals around the outputs?

That's a good suggestion and I checked it out, but the config file used by the Logstash container (I'm using Docker; I forgot to mention that -.-) is the same one and has all the conditionals on the outputs. And you're right about the first part: given that I don't see the [type] I set on the input in the events when I grep the Logstash logs, I'm not sure how any of those Elasticsearch indices are being populated either, but they are definitely being created and filled.

I would seriously consider that you may not be running the configuration you expect to be running. Can you add

    --config.debug --log.level debug --config.test_and_exit

to the command line?

It is not unusual that docker configs do not mount the directories that folks expect.

Thanks for those pointers. I ran with those flags in the Docker container; the validation result was OK, and the output from the config debug setting matched what is in my logstash.conf file. To check even further, I SSH'd into the container to verify that the volume is mounted correctly, and I can see that the config file being used in the container is identical to what I'm developing. I'm not sure where to proceed from here unless there is something fundamentally wrong with my configuration's layout. I might try downgrading my version to see if that could be causing any issues, but I'm at a loss.

Found a solution. Instead of adding type or tag to the http poller input, I instead added

    metadata_target => "http_metadata"

And then in the output, just checked for

    if [http_metadata][name] == "healthlive" {
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}"]
        index => "logstash_http_poller"
      }
    } else if [http_metadata][name] == "healthbackup" {
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}"]
        index => "logstash_http_poller_backup"
      }
    }
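For completeness, the input side looks roughly like this (a sketch, not my exact config; the url keys healthlive and healthbackup are what ends up in [http_metadata][name], and the URLs are the same truncated ones from earlier in the thread):

    input {
      http_poller {
        urls => {
          healthlive => {
            method => get
            url => "https://${APP_HOST}/.../data"
            headers => {
              Accept => "application/json"
            }
          }
        }
        schedule => { every => "5s" }
        codec => "json"
        # stores request metadata, including the url name, under [http_metadata]
        metadata_target => "http_metadata"
      }
      http_poller {
        urls => {
          healthbackup => {
            method => get
            url => "https://${HOST_BACKUP}/.../data"
            headers => {
              Accept => "application/json"
            }
          }
        }
        schedule => { every => "5s" }
        codec => "json"
        metadata_target => "http_metadata"
      }
    }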

Works like a charm, and the Elasticsearch indices aren't getting mixed together with live/backup events anymore. A bit bummed that it took me a while to figure that out, and I'm still a bit confused as to why tags, type, and add_field weren't working. Regardless, I really appreciate the help and the debugging tips @Badger!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.