Logstash service is shutting down with Invalid Frame Type 69 and 84

Hi Team,

I am trying to set up a Logstash service on an ECS cluster to push logs to the AWS Elasticsearch service. The Logstash container shuts down frequently with "Invalid Frame Type, received: 69" and "Invalid Frame Type, received: 84" errors.

Any direction on resolving this is highly appreciated.

Dockerfile:

# Pull the official Logstash image

FROM docker.elastic.co/logstash/logstash:6.4.2

COPY --chown=logstash:root customdata /usr/share/logstash

# Move our custom config files into the Logstash root directory in the container

COPY customdata/pipeline/logstash.conf /usr/share/logstash/pipeline/

COPY customdata/config/logstash.yml /usr/share/logstash/config/

COPY customdata/config/template.json /usr/share/logstash/config/

# Download and install the Logstash plugin that allows it to communicate with AWS ES

RUN bin/logstash-plugin install logstash-output-amazon_es

RUN bin/logstash-plugin install logstash-input-beats

RUN pwd

# Document the port the beats input listens on in the conf file below (EXPOSE is informational)

EXPOSE 5044

The logstash.conf file:

input {
  tcp {
    port => 4560
    codec => json_lines
  }
  beats {
    port => 5044
    ssl => false
  }
}
 
output {
  elasticsearch {
    hosts => "https://awsesdomainURL.us-east-1.es.amazonaws.com:443/"
    index => "app-%{+YYYY.MM.dd}"
    ssl => true
  }
  stdout { codec => rubydebug }
}
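For anyone shipping to this input: whatever beat connects must match the `ssl => false` setting on the beats input, i.e. it must connect in plaintext. A minimal filebeat.yml (6.x) output sketch, with a placeholder hostname:

```yaml
# filebeat.yml (6.x) -- "logstash-host" is a placeholder.
# No ssl.* settings here, matching ssl => false on the beats input;
# a beat configured with ssl would send a TLS handshake the beats
# parser cannot read.
output.logstash:
  hosts: ["logstash-host:5044"]
```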

I tried with the AWS Elasticsearch output plugin (logstash-output-amazon_es) as well.
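For the record, that attempt looked roughly like this (a sketch, not the exact config; the region is assumed from the endpoint, and credentials are taken from the task role):

```
output {
  amazon_es {
    hosts  => ["awsesdomainURL.us-east-1.es.amazonaws.com"]  # hostname only; the plugin connects over HTTPS
    region => "us-east-1"                                    # assumed from the domain endpoint
    index  => "app-%{+YYYY.MM.dd}"
  }
}
```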
AWS Elasticsearch version: 6.4

Thanks.

More info about the issue: the ES domain has SSL enabled; Logstash is not using SSL.

Please do not post pictures of text, just post the text.

The errors are being logged by the beats input. Something is connecting to that input that does not speak the expected protocol. It could be a beat that is configured to use SSL. I would check the INFO messages that precede the WARN for the remote address, then look on that host and find what is using the port listed.
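One extra hint (an inference from the error text, not from these logs): the "frame type" the parser reports is just a raw byte it read off the socket, so the reported values decode as ASCII characters. 69 and 84 are 'E' and 'T', which is what you would expect if a plaintext HTTP request (for example a load balancer health check sending `GET`) hit the beats port:

```python
# The beats wire protocol frames each message with a version byte and
# a frame-type byte; when a non-beats client connects, the parser
# reports the raw byte it read. Decoding the reported values as ASCII:
for frame_type in (69, 84):
    print(frame_type, "->", chr(frame_type))
# 69 is 'E' and 84 is 'T' -- the 2nd and 3rd bytes of an HTTP "GET",
# consistent with a plaintext HTTP client probing the port.
```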

Hi Badger,

Thanks for getting back. I looked at the preceding logs but couldn't find anything connecting to the Logstash cluster. The full log output is below.

[2019-04-01T15:15:42,135][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-04-01T15:15:42,156][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-04-01T15:15:42,601][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-04-01T15:15:42,654][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"9a1ff95d-a9a5-4a57-85fb-9600262916f3", :path=>"/usr/share/logstash/data/uuid"}
[2019-04-01T15:15:43,378][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.2"}
2019-04-01 15:15:45,458 Converge PipelineAction::Create<main> WARN The Logger slowlog.logstash.codecs.plain was created with the message factory org.apache.logging.log4j.spi.MessageFactory2Adapter@fb1144 and is now requested with a null message factory (defaults to org.logstash.log.LogstashMessageFactory), which may create log events with unexpected formatting.
[2019-04-01T15:15:46,616][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-01T15:15:47,264][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://xxxx.us-east-1.es.amazonaws.com:443/]}}
[2019-04-01T15:15:47,276][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://xxx.us-east-1.es.amazonaws.com:443/, :path=>"/"}
[2019-04-01T15:15:47,631][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://xx.us-east-1.es.amazonaws.com:443/"}
[2019-04-01T15:15:47,699][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-01T15:15:47,709][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-04-01T15:15:47,745][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://xxx.us-east-1.es.amazonaws.com:443/"]}
[2019-04-01T15:15:47,782][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-04-01T15:15:47,820][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}

[2019-04-01T15:15:47,874][INFO ][logstash.inputs.tcp      ] Starting tcp input listener {:address=>"0.0.0.0:4560", :ssl_enable=>"false"}
[2019-04-01T15:15:48,512][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-04-01T15:15:48,552][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5b1f83b2 run>"}
[2019-04-01T15:15:48,608][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-04-01T15:15:48,631][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-01T15:15:48,891][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}


2019-04-01 15:20:25,886 defaultEventExecutorGroup-7-2 WARN The Logger slowlog.logstash.codecs.plain was created with the message factory org.apache.logging.log4j.spi.MessageFactory2Adapter@fb1144 and is now requested with a null message factory (defaults to org.logstash.log.LogstashMessageFactory), which may create log events with unexpected formatting.
2019-04-01 15:20:25,892 defaultEventExecutorGroup-7-1 WARN The Logger slowlog.logstash.codecs.plain was created with the message factory org.apache.logging.log4j.spi.MessageFactory2Adapter@fb1144 and is now requested with a null message factory (defaults to org.logstash.log.LogstashMessageFactory), which may create log events with unexpected formatting.
[2019-04-01T15:20:25,962][INFO ][org.logstash.beats.BeatsHandler] [local: 172.17.0.2:5044, remote: <ipaddress>:50262] Handling exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2019-04-01T15:20:25,967][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69

The "local: 172.17.0.2:5044" tells you it is the beats handler that is having a problem. You need to log into "remote: <ipaddress>" and find what is connecting to that.
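If it helps to confirm the diagnosis, the error is easy to reproduce from any host that can reach the listener: send a few plaintext bytes to port 5044 and the same WARN should appear in the Logstash log. A hedged sketch (hostname is a placeholder; `probe_beats_port` is just an illustrative helper, not part of any library):

```python
import socket

def probe_beats_port(host: str, port: int = 5044) -> None:
    """Send a plaintext HTTP-style request to a beats input.

    The payload bytes are not valid beats frames, so Logstash
    should log an InvalidFrameProtocolException for them.
    """
    payload = b"GET / HTTP/1.1\r\n\r\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

# The raw byte values Logstash would see at the start of this payload:
payload = b"GET / HTTP/1.1\r\n\r\n"
print(list(payload[:3]))  # 71, 69, 84 -> 'G', 'E', 'T'
```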

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.