Logstash with docker: unknown setting 'protocol', 'host' for elasticsearch

Hello,

I have a strange problem when using the official logstash image version 2.0.

I am testing a basic Logstash-to-ES configuration using the example from the documentation. I run Logstash via Docker and everything works:

docker run -it --rm -p 5000:5000 -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf

Logstash receives data and sends it to ES. :+1:

Then I modify the Dockerfile to copy the configuration file into the container image. When I run the same command, I get:

$ docker run -it --rm -p 5000:5000 --name logstash mylogstash
+ set -e
+ '[' l = - ']'
+ '[' logstash = logstash ']'
+ set -- gosu logstash logstash agent -f /etc/logstash/conf.d/
+ exec gosu logstash logstash agent -f /etc/logstash/conf.d/
Unknown setting 'protocol' for elasticsearch {:level=>:error}
Unknown setting 'host' for elasticsearch {:level=>:error}
Error: Something is wrong with your configuration.

Only the last two lines of the Dockerfile are modified. Using --verbose and --debug does not help me understand the cause. The problem is reproducible.

Dockerfile:

FROM java:8-jre

# grab gosu for easy step-down from root
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN arch="$(dpkg --print-architecture)" \
    && set -x \
    && curl -o /usr/local/bin/gosu -fSL "https://github.com/tianon/gosu/releases/download/1.3/gosu-$arch" \
    && curl -o /usr/local/bin/gosu.asc -fSL "https://github.com/tianon/gosu/releases/download/1.3/gosu-$arch.asc" \
    && gpg --verify /usr/local/bin/gosu.asc \
    && rm /usr/local/bin/gosu.asc \
    && chmod +x /usr/local/bin/gosu

# https://www.elastic.co/guide/en/logstash/2.0/package-repositories.html
# https://packages.elasticsearch.org/GPG-KEY-elasticsearch
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 46095ACC8548582C1A2699A9D27D666CD88E42B4

ENV LOGSTASH_MAJOR 2.0
ENV LOGSTASH_VERSION 1:2.0.0-beta3-1

RUN echo "deb http://packages.elasticsearch.org/logstash/${LOGSTASH_MAJOR}/debian stable main" > /etc/apt/sources.list.d/logstash.list

RUN set -x \
    && apt-get update \
    && apt-get install -y --no-install-recommends logstash=$LOGSTASH_VERSION \
    && rm -rf /var/lib/apt/lists/*

ENV PATH /opt/logstash/bin:$PATH

COPY docker-entrypoint.sh /

ENTRYPOINT ["/docker-entrypoint.sh"]

COPY *.conf /etc/logstash/conf.d/
CMD ["logstash", "agent", "-f", "/etc/logstash/conf.d/"]

logstash.conf:

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
    elasticsearch {
        protocol => "http"
        host => "localhost:9200"
    }
    stdout { }
}

Help appreciated.

Just FYI, this is a Docker official image, not an Elastic one :smile:

But it doesn't like the elasticsearch output part of the config for some reason.

Yet the config works when I run it directly with -f, so I am out of ideas; maybe Docker is doing something funky.

Also, you don't need the agent part.
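In the Dockerfile that would mean roughly the following (a sketch; assuming the rest of your Dockerfile and entrypoint stay as posted above):

```
# "agent" is the default subcommand, so it can be dropped from CMD
CMD ["logstash", "-f", "/etc/logstash/conf.d/"]
```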

The documentation example you're following is for Logstash 1.5, but if you look at the LS 2.0 documentation for the elasticsearch output, you'll notice that

  • host has been renamed to hosts and

  • protocol has been removed (since it's always HTTP).
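So for Logstash 2.0, your output section would look something like this (a sketch adapted from your posted config; assuming Elasticsearch is still reachable at localhost:9200):

```
output {
    elasticsearch {
        # 'host' is now 'hosts' and takes a list; 'protocol' is gone (always HTTP)
        hosts => ["localhost:9200"]
    }
    stdout { }
}
```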


Actually, the host and protocol options have been somewhat revived (the former as recently as yesterday) to at least give a good error message if they're used.

The documentation that the OP followed still refers to the obsolete options in its examples even in the Logstash 2.0 edition. I've filed github.com/elastic/logstash issue #4082 to get this fixed.

That took care of it!
Thanks for the quick response, that was great.