About the integration between Elasticsearch and Logstash

Hi, I am building a log analysis system using Logstash and Elasticsearch.

But I couldn't send any data from Logstash to Elasticsearch. (If I delete the prune filter, the data is sent.)

I found the error message below, but I couldn't find any solution.

I also have one more question: I used the prune filter, but some of the fields I specified are not printed.

I've added my config and the error message below. Please help.

  • Logstash error message
[ERROR] 2023-02-07 11:06:48.220 [Ruby-0-Thread-10: /usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:161] elasticsearch - Failed to install template {:message=>"Failed to load default template for Elasticsearch v6 with ECS v8; caused by: #<ArgumentError: Template file '/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/outputs/elasticsearch/templates/ecs-v8/elasticsearch-6x.json' could not be found>", :exception=>RuntimeError, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `load_default_template'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:24:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/outputs/elasticsearch.rb:583:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/outputs/elasticsearch.rb:349:in `finish_register'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/outputs/elasticsearch.rb:305:in `block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.1-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:167:in `block in after_successful_connection'"]}
  • Logstash config file
input {
        beats {
                port => "5044"
        }
}

filter {
        grok {
                pattern_definitions => { "UA3" => "([^\t]+)" }
                match => { "message" => ["%{TIMESTAMP_ISO8601:time_iso8601}\t%{NUMBER:x_request_start_ms}\t%{NUMBER:msec}\t%{UA3:request_time}\t%{IP:server_addr}\t%{NUMBER:server_port}\t%{USERNAME:requst_host}\t%{WORD:request_method}\t%{WORD:request_method2}\s%{UA3:uri}\s%{UA3:server_protocol}\t%{UA3:server_protocol2}\t%{IP:proxy_add_x_forwarded_for}\t%{UA3:http_user_agent}\t%{WORD:upstream_cache_status}\t%{NUMBER:status}\t%{NUMBER:bytes_sent}\t%{NUMBER:sent_http_content_length}"] }
        }
        geoip {
                source => "proxy_add_x_forwarded_for"
                target => "geoip"
        }
        prune {
                whitelist_names => [ "time_iso8601", "server_addr", "requst_host", "^request_method$", "uri", "^server_protocol$", "proxy_add_x_forwarded_for", "http_user_agent", "upstream_cache_status", "status", "country_iso_code", "lon", "lat", "name" ]
        }
}

output {
        stdout { codec => rubydebug }
        elasticsearch {
                hosts => "http://[IP]:9200"
                index => "xplatform-%{+YYYY.MM.dd}"
        }
        file {
                path => "/root/apach.log"
        }
}

  • Fixed prune filter
        prune {
                whitelist_names => [ "time_iso8601", "server_addr", "requst_host", "^request_method$", "uri", "^server_protocol$", "proxy_add_x_forwarded_for", "http_user_agent", "upstream_cache_status", "status", "country_iso_code", "lon", "lat", "name" ]
        }
  • Printed result (couldn't find the geoip fields "lon", "lat", and "name")
{
                  "requst_host" => "v-live-entry.pstatic.net",
                          "uri" => "/lip2_kr/anmss1137/UiOA18Efu6NcFEK2hGxfrx_3Z1O0RZ2KPeFMA2oC4Kl5MctwUNN68kGqxfGskn                                                                                                                        GAmenVvNuKv2hcag/playlist.m3u8",
        "upstream_cache_status" => "MISS",
    "proxy_add_x_forwarded_for" => "210.89.163.231",
                 "time_iso8601" => "2023-02-07T10:51:00+09:00",
                  "server_addr" => "210.89.163.231",
               "request_method" => "GET",
              "server_protocol" => "HTTP/1.1",
              "http_user_agent" => "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn                                                                                                                        /1.18 libssh2/1.4.2",
                       "status" => "200"
}

That's weird. Which version of Elasticsearch are you running?

Do you have anything else in the logs?

My Elasticsearch version is 6.8.23.
My Logstash version is 8.6.1.

And there are no other logs.

Support for Elasticsearch 6.8.x was dropped after Logstash 6.8. It is documented here.
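The template error in the log above fits this: the logstash-output-elasticsearch 11.x plugin tries to install an ECS v8 index template, but it ships no such template file for Elasticsearch 6.x. One possible workaround (a sketch, not tested against this setup) is to stop Logstash from managing the template, or to fall back to the plugin's pre-ECS defaults, in the elasticsearch output:

        elasticsearch {
                hosts => "http://[IP]:9200"
                index => "xplatform-%{+YYYY.MM.dd}"
                # Don't install an index template at all, so the missing
                # ECS v8 template for ES 6.x is never looked up.
                manage_template => false
                # Alternatively, revert the plugin to legacy (pre-ECS) behaviour:
                # ecs_compatibility => disabled
        }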

Even after I downgraded my Logstash (to 6.8.23), data still isn't sent from Logstash to Elasticsearch when the prune filter is applied.

If I delete the prune filter, the data is sent.

What is your output when using Logstash 6.8 with the prune filter and the stdout output? Do you get any output, or nothing at all?

In the prune filter below you didn't whitelist the geoip field, so you won't get it; I'm not sure what the issue is here.

        prune {
                whitelist_names => [ "time_iso8601", "server_addr", "requst_host", "^request_method$", "uri", "^server_protocol$", "proxy_add_x_forwarded_for", "http_user_agent", "upstream_cache_status", "status", "country_iso_code", "lon", "lat", "name" ]
        }
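One likely explanation for the missing "lon", "lat", and "name" values: the prune filter only matches top-level field names, and the geoip filter writes its results as nested fields under [geoip], so whitelist entries like "lon" never match anything. Two possible fixes, as a sketch (the exact nested paths depend on the Logstash version and ECS mode, so treat the [geoip][location][lon] path below as an assumption to verify against your stdout output):

        filter {
                # Option 1: whitelist the whole geoip object, since prune
                # only matches top-level field names.
                prune {
                        whitelist_names => [ "time_iso8601", "status", "geoip" ]
                }
        }

        filter {
                # Option 2: copy the nested values to top level before pruning,
                # so the existing whitelist entries can match them.
                mutate {
                        copy => { "[geoip][location][lon]" => "lon"
                                  "[geoip][location][lat]" => "lat" }
                }
        }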

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.