An error occurs under heavy load with Logstash

Hi,

An error occurs under heavy load with Logstash.
The configuration file is shown below.

input {
  http {
    host => "0.0.0.0"
    port => "28080"
    codec => es_bulk {}
  }
}
filter {
  mutate {
    remove_field => ["host", "@timestamp", "headers"]
  }
}
output {
  elasticsearch {
    hosts => ["x.x.x.x:yyy1", "x.x.x.x:yyy2"]
    index => "%{[@metadata][_index]}"
    document_type => "%{[@metadata][_type]}"
  }
}

Can I fix this problem by changing settings, before introducing a queue?

What's the error?


When there are too many requests, intermittent errors occur.

[2017-09-28T22:51:02,535][ERROR][logstash.inputs.http     ] unable to process event. {:request=>{"request_method"=>"POST", "request_path"=>"/", "request_uri"=>"/", "http_version"=>"HTTP/1.1", "http_accept"=>"text/plain, application/json, application/*+json, */*", "content_type"=>"application/x-ndjson;charset=UTF-8", "http_accept_charset"=>"***", "content_length"=>"533", "http_host"=>"x.x.x.x:28080", "http_connection"=>"Keep-Alive", "http_user_agent"=>"Apache-HttpClient/4.5.2 (Java/1.8.0_101)", "http_accept_encoding"=>"gzip,deflate", "http_pinpoint_traceid"=>"xxx.yyy^1506582527015^6136", "http_pinpoint_spanid"=>"-590602330179949913", "http_pinpoint_pspanid"=>"-6428203132369051042", "http_pinpoint_flags"=>"0", "http_pinpoint_pappname"=>"xxxx", "http_pinpoint_papptype"=>"1010", "http_pinpoint_host"=>"x.x.x.x:28080"}, :message=>"undefined method `[]=' for 1300:Fixnum", :class=>"NoMethodError", :backtrace=>["/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/logstash-codec-es_bulk-3.0.3/lib/logstash/codecs/es_bulk.rb:36:in `decode'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/logstash-codec-line-3.0.2/lib/logstash/codecs/line.rb:39:in `decode'", "org/jruby/RubyArray.java:1613:in `each'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/logstash-codec-line-3.0.2/lib/logstash/codecs/line.rb:38:in `decode'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/logstash-codec-es_bulk-3.0.3/lib/logstash/codecs/es_bulk.rb:25:in `decode'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/logstash-input-http-3.0.4/lib/logstash/inputs/http.rb:140:in `run'", "org/jruby/RubyProc.java:281:in `call'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/logstash-input-http-3.0.4/lib/logstash/util/http_compressed_requests.rb:27:in `call'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/rack-1.6.8/lib/rack/builder.rb:153:in `call'", 
"/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:557:in `handle_request'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:555:in `handle_request'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:404:in `process_client'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:400:in `process_client'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:270:in `run'", "org/jruby/RubyProc.java:281:in `call'", "/xxx/xxx/logstash-5.4.3_l4_logging/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/thread_pool.rb:106:in `spawn_thread'"]}

or

[2017-09-29T12:05:19,271][ERROR][logstash.inputs.http     ] unable to process event. {:request=>{"request_method"=>"POST", "request_path"=>"/", "request_uri"=>"/", "http_version"=>"HTTP/1.1", "http_accept"=>"text/plain, application/json, application/*+json, */*", "content_type"=>"application/x-ndjson;charset=UTF-8", "http_accept_charset"=>"***", "content_length"=>"379", "http_host"=>"x.x.x.x:28080", "http_connection"=>"Keep-Alive", "http_user_agent"=>"Apache-HttpClient/4.5.2 (Java/1.8.0_101)", "http_accept_encoding"=>"gzip,deflate", "http_pinpoint_traceid"=>"xxxx.xxx^1506579140846^125648", "http_pinpoint_spanid"=>"-5495314904566667604", "http_pinpoint_pspanid"=>"-6202184905719084245", "http_pinpoint_flags"=>"0", "http_pinpoint_pappname"=>"xxxx", "http_pinpoint_papptype"=>"1010", "http_pinpoint_host"=>"x.x.x.x:28080"}, :message=>"string not matched", :class=>"IndexError", :backtrace=>["org/jruby/RubyString.java:3910:in `[]='", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-es_bulk-3.0.3/lib/logstash/codecs/es_bulk.rb:36:in `decode'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-line-3.0.2/lib/logstash/codecs/line.rb:39:in `decode'", "org/jruby/RubyArray.java:1613:in `each'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-line-3.0.2/lib/logstash/codecs/line.rb:38:in `decode'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-es_bulk-3.0.3/lib/logstash/codecs/es_bulk.rb:25:in `decode'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-http-3.0.4/lib/logstash/inputs/http.rb:140:in `run'", "org/jruby/RubyProc.java:281:in `call'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-http-3.0.4/lib/logstash/util/http_compressed_requests.rb:27:in `call'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/rack-1.6.8/lib/rack/builder.rb:153:in `call'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:557:in `handle_request'", 
"/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:555:in `handle_request'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:404:in `process_client'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:400:in `process_client'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/server.rb:270:in `run'", "org/jruby/RubyProc.java:281:in `call'", "/xxx/xxx/logstash/vendor/bundle/jruby/1.9/gems/puma-2.16.0-java/lib/puma/thread_pool.rb:106:in `spawn_thread'"]}

and so on.

Requests spike suddenly; is there a way to deal with this?

What is writing to this Logstash instance? How come you are using an http input with the es_bulk codec?


Configuration consists of :

Application(rest template) → logstash-input-http → logstash-output-elasticsearch

Data sample (ndjson):

{"index":{"_index": "myIndex","_type":"myType"}}\n{"data1":"value1","data2":"value2","data3":"value3"}\n
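For reference, the es_bulk/ndjson format expects every action line to be immediately followed by its source line, with each line terminated by a newline. A minimal sketch of building such a payload (the `make_bulk_payload` helper is hypothetical, for illustration only; it is not part of the application above):

```python
import json

def make_bulk_payload(index, doc_type, docs):
    """Build an es_bulk/ndjson payload: each "index" action line is
    immediately followed by its source line, all newline-terminated."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    # The bulk format requires a trailing newline after the last line.
    return "\n".join(lines) + "\n"

payload = make_bulk_payload("myIndex", "myType",
                            [{"data1": "value1", "data2": "value2"}])
print(payload)
```

A payload whose lines are truncated or split mid-request would not follow this action/source pairing, which is the kind of input the es_bulk codec fails to decode.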

With this configuration, when a large number of requests come in, the errors shown above occur.

help me T.T

How large bulk requests are you sending? How large are your documents? Which version are you on?


The documents are not large.
Sample data:

{
  "_index": "api-2017.10",
  "_type": "~~~",
  "_id": "AV7eEbSHWlITkOw5d6Uw",
  "_score": null,
  "_source": {
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~",
    "@version": "1",
    "~~~": "~~~",
    "~~~": "~~~",
    "~~~": "~~~"
  },
  "fields": {
    "log_time": [
      1506964255861
    ]
  },
  "sort": [
    1506964255861
  ]
}

Since requests arrive in sudden bursts, it is difficult to estimate their exact number.
We are using Logstash 5.4.3.

Is there a Logstash setting that would let it handle these requests reliably?
I am considering introducing a queue such as Kafka in the middle, but first I would like to know whether this problem can be solved with Logstash settings alone.

Thank you for your attention.

Sending lots of small requests can be quite inefficient. The most common way to handle this is to have the application write data to disk and have Filebeat read it locally before forwarding it to Elasticsearch or Logstash. Because the file acts as a buffer and allows Filebeat to batch up data, this approach is less sensitive to issues in the pipeline and can offer better performance. You can also, as you describe, introduce a message broker and log directly to that.
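On the settings question: Logstash 5.x also offers a persistent queue (still beta in the 5.x series), which buffers events on disk between the input and the rest of the pipeline and can absorb bursts; note that it would not prevent errors raised while decoding inside the input plugin itself. A sketch of the relevant logstash.yml settings (the `path.queue` value is a hypothetical example):

```yaml
# logstash.yml -- enable the on-disk (persistent) queue
queue.type: persisted                  # default is "memory"
queue.max_bytes: 4gb                   # cap on disk usage for the queue
path.queue: /var/lib/logstash/queue    # hypothetical path; defaults under path.data
```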


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.