Logstash 6.1.1 docker error


(fij29dkvn28) #1

Just trying to create a small ELK instance with the new 6.1.1 docker containers. I'm trying to keep it as vanilla as possible (not creating a new docker build file for example) and secure (a password that can be set in one place, different every time it is run).

Basically my logstash instance is having trouble connecting to ES.

With the docker-compose.yml and pipeline below I get the following errors: https://pastebin.com/GsKAL7KD. However, if I replace the output with stdout in the pipeline, it seems to work fine.

So far I have the following docker-compose.yml

version: '3.0'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "ELASTIC_PASSWORD=password"
      - "discovery.type=single-node"
    networks:
      - elastic-stack


  kibana:
    image: docker.elastic.co/kibana/kibana:6.1.1
    environment:
      - "SERVER_NAME=woodhick_elasticsearch_1"
      - "ELASTICSEARCH_USERNAME=elastic"
      - "ELASTICSEARCH_PASSWORD=password"
      - "XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=true"
    ports:
      - 5601:5601
    networks:
      - elastic-stack


  logstash:
    image: docker.elastic.co/logstash/logstash:6.1.1
    environment:
      - "ELASTICSEARCH_USERNAME=elastic"
      - "ELASTICSEARCH_PASSWORD=password"

      - "XPACK_MANAGEMENT_ENABLED=false"

      - "XPACK_MONITORING_ENABLED=true"
      - "XPACK_MONITORING_ELASTICSEARCH_USERNAME=elastic"
      - "XPACK_MONITORING_ELASTICSEARCH_PASSWORD=password"

      - "LS_OPTS='-r'"
    volumes:
      - ./logstash.conf/enabled/000-basic.conf:/usr/share/logstash/pipeline/logstash.conf
    networks:
      - elastic-stack

volumes:
  esdata:
    driver: local

networks:
  elastic-stack:

I am trying to get the following test pipeline to work:

input {
  http_poller {
    urls => {
      get_time => "http://date.jsontest.com/"
    }
    schedule => { cron => "* * * * * UTC"}
    request_timeout => 30
    codec => "json"
    metadata_target => "http_poller_metadata"
  }
}

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    user => "logstash_internal"
    password => "changeme"
  }
}

(Magnus Bäck) #2

You're not telling the elasticsearch output which ES host to connect to (the hosts configuration option). Since all your containers are in the same Docker network you can use the ES container name as the hostname.
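For example, a minimal output block (a sketch, assuming the Compose service name elasticsearch from the file above and the default HTTP port) would look like:

```conf
output {
  elasticsearch {
    # the Compose service name is resolvable on the elastic-stack network
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "logstash_internal"
    password => "changeme"
  }
}
```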


(fij29dkvn28) #3

Ah yes... To be honest, I'd taken that out during testing as I wasn't sure I had the syntax right (plus it doesn't actually say it's required here: Elasticsearch Output Configuration Options).

So, below is the new Logstash pipeline, and here is the link (logstash errors) to the new error logs. They look the same to me.

Thanks for taking a look!

input {
  http_poller {
    urls => {
      get_time => "http://date.jsontest.com/"
    }
    # Supports "cron", "every", "at" and "in" schedules by rufus scheduler
    schedule => { cron => "* * * * * UTC"}
    # Maximum amount of time to wait for a request to complete
    request_timeout => 30
    # How far apart requests should be
    #interval => 10
    # Decode the results as JSON
    codec => "json"
    # Store metadata about the request in this key
    metadata_target => "http_poller_metadata"
  }
}

output {

#   stdout {
#      codec => "json"
#   }

  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => 'woodhick_elasticsearch_1'
    user => 'logstash_internal'
    password => 'changeme'
  }
}

(fij29dkvn28) #4

Hmmm, just tried changing the hostname in the elasticsearch output plugin. First I changed it to the above, then to 'elasticsearch_1' (as it shows up in the logs), but that didn't work; I still got the host-unreachable error. Then I changed it to just 'elasticsearch' and got a 401 (access denied) error! No more host unreachable! So then I had to provide the elastic user credentials, and it worked...!

So the question now is, why is the host 'elasticsearch' when docker refers to it as 'elasticsearch_1' or even 'woodhick_elasticsearch_1'?!

Just for clarity, this works:

input {
  http_poller {
    urls => {
      get_time => "http://date.jsontest.com/"
    }
    # Supports "cron", "every", "at" and "in" schedules by rufus scheduler
    schedule => { cron => "* * * * * UTC"}
    # Maximum amount of time to wait for a request to complete
    request_timeout => 10
    # Decode the results as JSON
    codec => "json"
    # Store metadata about the request in this key
    #metadata_target => "http_poller_metadata"
  }
}


filter {
  date {
    # date.jsontest.com returns milliseconds, so UNIX_MS (not UNIX) is the right pattern
    match => [ "milliseconds_since_epoch", "UNIX_MS" ]
  }
}


output {

  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => 'elasticsearch'
    user => 'elastic'
    password => 'password'
  }

}

(Magnus Bäck) #5

Ah yes... To be honest, I'd taken that out during testing as I wasn't sure I had the syntax right (plus it doesn't actually say it's required here: Elasticsearch Output Configuration Options)

No, but as with all configuration options you can only omit them if the default value is okay.

Judging by https://github.com/logstash-plugins/logstash-output-elasticsearch/issues/629 the problem is that you have underscores in the hostname (and underscores are invalid in that context). The last configuration you posted works because the name of your ES container is "elasticsearch" so that's the hostname it'll be known as. "woodhick_elasticsearch_1" wouldn't have worked even if the string had been a valid hostname.
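If an underscore-free name other than the service name is ever needed, Compose can attach an extra DNS alias to the service. A sketch against the compose file above (the alias name es-node is made up, not from this thread):

```yaml
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.1
    networks:
      elastic-stack:
        aliases:
          - es-node   # underscore-free alias the other containers can resolve
```

Note that attaching aliases changes the networks key from a list to a mapping; the other services can keep the plain list form.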


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.