Update Existing Log Via Logstash

I want to update an existing document in Elasticsearch based on certain conditions. However, I am facing some errors.

Here is my filter plugin configuration:

filter
{
  json { source => "message" }

  if [baseValueUnitAmount] > 968
  {
   mutate 
   { 
    add_field => {"log" => "update"}
    add_field => {"status" => "this is a updated log"}
   }
   elasticsearch {
      hosts => ["es_host:9200"]
      query => "myrefid:%{[myrefid]}"
      fields => { "_id" => "doc_id" }
      index => "replacement-test"
      ssl => true
      user => 'myuser'
      password => 'mypassword'
   }
  }
}
output
{
  stdout { codec => rubydebug }

  if [log] == "update"
  {
    elasticsearch
    {
      codec => json
      hosts => ["es_host:9200"]
      action => "update"
      document_id => "%{[doc_id]}"
      index => "replacement-test"
      ssl => true
      ssl_certificate_verification => false
      user => 'myuser'
      password => 'mypassword'
    }
  }
  else
  {
    elasticsearch
    {
      codec => json
      hosts => ["es_host:9200"]
      index => "replacement-test"
      ssl => true
      ssl_certificate_verification => false
      user => 'myuser'
      password => 'mypassword'
    }
  }
}

Here is the error I am getting:

[ERROR] 2021-06-03 00:29:25.847 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

Can anyone help?

There should be additional log messages that explain why that happened.

@Badger thanks for responding. I see no other information about the error. Adding the logs here:

root@ip-172-31-2-250:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f endpoint.conf --config.reload.automatic
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2021-06-03 11:22:41.645 [main] runner - Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 25.292-b10 on 1.8.0_292-8u292-b10-0ubuntu1~16.04.1-b10 +indy +jit [linux-x86_64]"}
[WARN ] 2021-06-03 11:22:42.410 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2021-06-03 11:22:47.120 [Converge PipelineAction::Create<main>] Reflections - Reflections took 69 ms to scan 1 urls, producing 22 keys and 45 values 
[WARN ] 2021-06-03 11:23:21.091 [[main]-pipeline-manager] elasticsearch - ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[INFO ] 2021-06-03 11:23:21.663 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@es_host:9200/]}}
[WARN ] 2021-06-03 11:23:22.532 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@es_host:9200/"}
[INFO ] 2021-06-03 11:23:22.785 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-06-03 11:23:22.789 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-06-03 11:23:22.902 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es_host:9200"]}
[WARN ] 2021-06-03 11:23:22.913 [[main]-pipeline-manager] elasticsearch - ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[INFO ] 2021-06-03 11:23:22.998 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@es_host:9200/]}}
[WARN ] 2021-06-03 11:23:23.081 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@es_host:9200/"}
[INFO ] 2021-06-03 11:23:23.090 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-06-03 11:23:23.092 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-06-03 11:23:23.128 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es_host:9200"]}
[INFO ] 2021-06-03 11:23:23.231 [[main]-pipeline-manager] elasticsearch - New ElasticSearch filter client {:hosts=>[{:host=>"es_host:9200", :scheme=>"https"}]}
[ERROR] 2021-06-03 11:23:23.564 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[INFO ] 2021-06-03 11:23:23.956 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[WARN ] 2021-06-03 11:23:29.645 [[main]-pipeline-manager] elasticsearch - ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[INFO ] 2021-06-03 11:23:29.660 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@es_host:9200/]}}
[WARN ] 2021-06-03 11:23:29.738 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@es_host:9200/"}
[INFO ] 2021-06-03 11:23:29.751 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-06-03 11:23:29.752 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-06-03 11:23:29.840 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es_host:9200"]}
[WARN ] 2021-06-03 11:23:29.843 [[main]-pipeline-manager] elasticsearch - ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[INFO ] 2021-06-03 11:23:29.847 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@es_host:9200/]}}
[WARN ] 2021-06-03 11:23:29.942 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@es_host:9200/"}
[INFO ] 2021-06-03 11:23:29.956 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-06-03 11:23:29.957 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-06-03 11:23:30.031 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es_host:9200"]}
[INFO ] 2021-06-03 11:23:30.035 [[main]-pipeline-manager] elasticsearch - New ElasticSearch filter client {:hosts=>[{:host=>"es_host:9200", :scheme=>"https"}]}
[ERROR] 2021-06-03 11:23:30.134 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[WARN ] 2021-06-03 11:23:32.758 [[main]-pipeline-manager] elasticsearch - ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[INFO ] 2021-06-03 11:23:32.768 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@es_host:9200/]}}
[WARN ] 2021-06-03 11:23:32.825 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@es_host:9200/"}
[INFO ] 2021-06-03 11:23:32.839 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-06-03 11:23:32.840 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-06-03 11:23:32.895 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es_host:9200"]}
[WARN ] 2021-06-03 11:23:32.898 [[main]-pipeline-manager] elasticsearch - ** WARNING ** Detected UNSAFE options in elasticsearch output configuration!
** WARNING ** You have enabled encryption but DISABLED certificate verification.
** WARNING ** To make sure your data is secure change :ssl_certificate_verification to true
[INFO ] 2021-06-03 11:23:32.909 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@es_host:9200/]}}
[WARN ] 2021-06-03 11:23:32.999 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"https://elastic:xxxxxx@es_host:9200/"}
[INFO ] 2021-06-03 11:23:33.015 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2021-06-03 11:23:33.017 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2021-06-03 11:23:33.059 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//es_host:9200"]}
[INFO ] 2021-06-03 11:23:33.062 [[main]-pipeline-manager] elasticsearch - New ElasticSearch filter client {:hosts=>[{:host=>"es_host:9200", :scheme=>"https"}]}
[ERROR] 2021-06-03 11:23:33.126 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}

You could try setting log.level to debug; it may enable additional log messages that are helpful. I would suspect a problem with the certificate keys.

Here are the debug logs:

[DEBUG][logstash.javapipeline ][main] Pipeline terminated by worker error {:pipeline_id=>"main", :exception=>java.security.cert.CertificateParsingException: signed fields invalid

That is the issue. It is indeed a problem with the certificate. It could be the wrong certificate format, or as I said, a problem with the key.

What would be the ideal format for the certificate? Will .p12 work?

Also, my CA cert is password protected; is there any way I can pass the CA cert's password?

Looking at the code, I think it is expecting PKCS#7 format for the ca_file, but your configuration does not set ca_file on the elasticsearch filter, so I am not sure why it is going through that code at all.

I do not think that file can be password protected. It should not contain the private key.
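For what it's worth, a PEM CA certificate can usually be bundled into a PKCS#7 file with openssl. This is a sketch; the file names here are examples, so adjust them to wherever your CA certificate actually lives:

```shell
# Bundle an existing PEM CA certificate into a PKCS#7 (.p7b) file
openssl crl2pkcs7 -nocrl -certfile ca.crt -out elastic-stack-ca.p7b

# Inspect the result to confirm the certificate made it in
openssl pkcs7 -in elastic-stack-ca.p7b -print_certs -noout
```

The resulting .p7b contains only certificates, no private key, which is consistent with it not being password protected.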

I changed the filter config to:

filter
{
  elasticsearch
  {
    hosts => ["https://es_host:9200"]
    query => "txid:%{[txid]}"
    ca_file => "/etc/logstash/elastic-stack-ca.p7b"
    ssl => true
    fields => { "_id" => "doc_id" }
    index => "replacement-test*"
    user => 'elastic'
    password => 'mypassword'
  }
}

Now it is giving the error below:

[2021-06-03T23:05:51,440][DEBUG][logstash.javapipeline    ][main] Pipeline terminated by worker error {:pipeline_id=>"main", :exception=>#<Manticore::ResolutionFailure: https>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:274:in `call_once'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:158:in `code'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:262:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/client.rb:131:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-5.0.5/lib/elasticsearch/api/actions/ping.rb:20:in `ping'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.9.0/lib/logstash/filters/elasticsearch.rb:310:in `test_connection!'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.9.0/lib/logstash/filters/elasticsearch.rb:117:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:226:in `block in register_plugins'", "org/jruby/RubyArray.java:1809:in `each'", 
"/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:225:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:560:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:238:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:183:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134:in `block in start'"], "pipeline.sources"=>["/etc/logstash/conf.d/endpoint.conf"], :thread=>"#<Thread:0x22a103eb run>"}
[2021-06-03T23:05:51,450][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil

That's good; it has gotten further. Manticore::ResolutionFailure is a DNS resolution failure: it is failing to do a DNS lookup of "https". Change "https://es_host:9200" to "es_host:9200".
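Applied to the filter config above, that would look something like this (a sketch; on the elasticsearch filter the scheme comes from the ssl option rather than a URL prefix in hosts):

```conf
filter
{
  elasticsearch
  {
    hosts => ["es_host:9200"]   # host:port only; ssl => true selects https
    ssl => true
    ca_file => "/etc/logstash/elastic-stack-ca.p7b"
    query => "txid:%{[txid]}"
  }
}
```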

The resolution failure got resolved, and another one came up:

[2021-06-03T23:42:32,006][DEBUG][logstash.javapipeline ][main] Pipeline terminated by worker error {:pipeline_id=>"main", :exception=>#<Manticore::UnknownException: Host name 'my_es_ip' does not match the certificate subject provided by the peer (CN=elasticsearch)>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:37:in block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:79:in call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:274:in call_once'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.7.0-java/lib/manticore/response.rb:158:in code'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/manticore.rb:84:in block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:262:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/manticore.rb:67:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/client.rb:131:in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-5.0.5/lib/elasticsearch/api/actions/ping.rb:20:in ping'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.9.0/lib/logstash/filters/elasticsearch.rb:310:in test_connection!'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.9.0/lib/logstash/filters/elasticsearch.rb:117:in register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:75:in register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:226:in block in register_plugins'", 
"org/jruby/RubyArray.java:1809:in each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:225:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:560:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:238:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:183:in run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134:in block in start'"], "pipeline.sources"=>["/etc/logstash/conf.d/endpoint.conf"], :thread=>"#<Thread:0x4c47fb4b run>"}

Going through this thread of yours, would I really need to re-configure SSL throughout the stack? :worried:

For an elasticsearch filter, using a name-matched certificate is mandatory. For an elasticsearch output there is an option to disable name matching, but as the documentation notes, it severely compromises security. (The PDF that documentation links to is a really interesting read.)

In general, if you cannot use a name-matched certificate I would suggest turning off TLS, because you are not getting the security that you think you are.

Can you tell me how I can turn TLS off? I mean, from Logstash or from Elasticsearch?

This blog post explains how TLS is enabled on Elasticsearch. Section 2-6-2 has the parameters that turn it on/off.

Once you disable TLS in Elasticsearch, you would set the ssl option on the elasticsearch filter to false.

HOWEVER, you would be much better off if you generated and used name-matched certificates. Generating the certificates is also covered in that blog post.
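For reference, on a 7.x cluster the HTTP-layer TLS toggle lives in elasticsearch.yml. This is only a sketch; the exact settings for a given setup are in the blog post mentioned above:

```conf
# elasticsearch.yml: disable TLS on the HTTP layer (7.x)
xpack.security.http.ssl.enabled: false
```

With that set to false, the matching Logstash change is ssl => false on both the elasticsearch filter and the elasticsearch output.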


Hey @Badger, I managed to replace the certificates with name-matched certificates. Now it is not giving any exception related to certificates.

However, when I send a log with baseValueUnitAmount: 980, it throws two warnings:

1.

[INFO ] 2021-06-04 18:31:34.837 [[main]>worker0] elasticsearch - New ElasticSearch filter client {:hosts=>[{:host=>{:host=>"es_host:9200", :scheme=>"https", :protocol=>"https", :port=>9200}, :scheme=>"https"}]}
[WARN ] 2021-06-04 18:31:35.004 [[main]>worker0] elasticsearch - Failed to query elasticsearch for previous event {:index=>"replacement-test", :error=>"Illegal character in authority at index 8: https://{:host=>\"es_host:9200\", :scheme=>\"https\", :protocol=>\"https\", :port=>9200}:9200/replacement-test%2A/_search?q=txid%my_value&size=1&sort=%40timestamp%3Adesc"}

2.
[WARN ] 2021-06-04 17:45:48.669 [[main]>worker1] elasticsearch - Could not index event to Elasticsearch. {:status=>404, :action=>["update", {:_id=>"%{[doc_id]}", :_index=>"replacement-test", :routing=>nil, :_type=>"_doc", :retry_on_conflict=>1}, #<LogStash::Event:0x1cbd73c2>], :response=>{"update"=>{"_index"=>"replacement-test", "_type"=>"_doc", "_id"=>"%{[doc_id]}", "status"=>404, "error"=>{"type"=>"document_missing_exception", "reason"=>"[_doc][%{[doc_id]}]: document missing", "index_uuid"=>"ggzdlW2nQmuMGERrg1iAcA", "shard"=>"0", "index"=>"replacement-test"}}}}

Showing my filter and output configurations for reference:

if [baseValueUnitAmount] > 968
  {
    mutate
    {
      add_field => { "log" => "update" }
      add_field => { "status" => "this is a updated log" }
    }

    elasticsearch
    {
      hosts => ["es_host:9200"]
      query => "txid:%{[txid]}"
      ca_file => "/etc/logstash/certs_blog/ca/ca.crt"
      ssl => true
      fields => { "_id" => "doc_id" }
      index => "replacement-test"
      user => 'my_user'
      password => 'my_password'
    }
  }
}
output
{
  stdout { codec => rubydebug }

  if [log] == "update"
  {
    elasticsearch
    {
      codec => json
      hosts => ["es_host:9200"]
      action => "update"
      document_id => "%{[doc_id]}"
      index => "replacement-test"
      manage_template => false
      ssl => true
      ssl_certificate_verification => false
      user => 'my_user'
      password => 'my_password'
    }
  }
  else
  {
    elasticsearch
    {
      codec => json
      hosts => ["es_host:9200"]
      index => "replacement-test"
      manage_template => false
      ssl => true
      ssl_certificate_verification => false
      user => 'my_user'
      password => 'my_password'
    }
  }
}

Moreover, when I looked at the event JSON in the debug logs, I found my copied field doc_id is being sent as nil. Not sure why!

In addition, I tried to verify my query from dev tools, and it gave the expected result.

I managed to resolve the above issues by commenting out ssl => true and putting https:// in the hosts section.
Also, the docinfo_fields option helped to copy _id from the old documents.

Note: the fields option will not help to copy the _id field value.
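Putting those fixes together, the working filter looked roughly like this (a sketch reconstructed from the changes described above; docinfo_fields reads document metadata such as _id, while the fields option only copies fields out of _source):

```conf
filter
{
  elasticsearch
  {
    hosts => ["https://es_host:9200"]   # scheme in the URL replaces ssl => true
    query => "txid:%{[txid]}"
    ca_file => "/etc/logstash/certs_blog/ca/ca.crt"
    index => "replacement-test"
    # _id is document metadata, so use docinfo_fields rather than fields
    docinfo_fields => { "_id" => "doc_id" }
    user => 'my_user'
    password => 'my_password'
  }
}
```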

Appreciate your inputs, @Badger!
They were indeed valuable in solving this issue :slight_smile: :raised_hands:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.