Elasticsearch plugin settings in conf are overridden -- why?

tl;dr

Basically, I've specified the elasticsearch plugin in my output block, but the host setting is getting overridden -- anyone have ideas why?

Here's my output:

output {
  elasticsearch {
    document_id => "%{claim_id}"

    # es v5.1
    # hosts => ["https://5758e30cca8bbbd0c4eb5317830f32ba.us-east-1.aws.found.io:9243"]

    # es v5.5
    hosts => ["https://37656d7ac4850603306fc6108576273e.us-east-1.aws.found.io:9243"]

    user => "xxxxxx"
    password => "xxxxx"
    index => "claims-jasmine-v3"
    workers => 1
  }
  stdout {
    # codec => rubydebug
  }
}

But then here's the trace:

{:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>["http://localhost:9200"], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>"false", document_type=>"logstash_stats", sniffing=>"false", user=>"logstash_system", password=>"changeme", ssl=>"false", id=>"710374655e8d374e8e681c853584e11b845f7812-2">}

I've tried disabling X-Pack monitoring, but it hasn't helped: when I run the ingestion, Logstash still insists on swapping the host I specified for localhost.

Longer version:

Hi everyone, I've read quite a few posts on this topic, and the consensus seems to be that:

  1. xpack.monitoring.enabled: true is what causes the override
  2. changing that setting to false fixes it for everyone

Unfortunately, I've done exactly that (see the snippet below) and it doesn't work for me.
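For reference, this is the setting I changed in logstash.yml (the file path varies by install; the setting name is the one from the X-Pack docs):

    # logstash.yml -- disable Logstash's internal X-Pack monitoring pipeline,
    # which is what creates the extra elasticsearch output seen in the trace above
    xpack.monitoring.enabled: false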

The experience is odd. Sometimes the conf runs, but with the following issues:

  1. Not all records are loaded (randomly, 100-150 records fail to upload each time)
  2. Data is not loaded correctly (i.e., some records don't perform the lookups correctly; again, it's random -- most records for a given value look up correctly, and randomly 1-2 won't)

Sometimes it doesn't run at all, and only the error messages show. Rate limiting (with the sleep filter, roughly as sketched below) didn't resolve the issue.
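For what it's worth, the rate limiting I tried looked roughly like this (a sketch; the time/every values are just what I experimented with):

    filter {
      # logstash-filter-sleep: pause 1 second once every 100 events
      sleep {
        time => "1"
        every => 100
      }
    }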

Basically, is there a way to tell Logstash NOT to default to localhost and to use the host I've specified in my output config?
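If monitoring can't simply be disabled, I assume the alternative is to point the monitoring shipper at the hosted cluster instead of its localhost default, with something like this in logstash.yml (these are the Logstash 6.x X-Pack setting names; the logstash_system credentials below are the defaults shown in the trace and would need to be the real ones):

    # logstash.yml -- ship monitoring data to the hosted cluster
    # instead of the default http://localhost:9200
    xpack.monitoring.elasticsearch.url: ["https://37656d7ac4850603306fc6108576273e.us-east-1.aws.found.io:9243"]
    xpack.monitoring.elasticsearch.username: "logstash_system"
    xpack.monitoring.elasticsearch.password: "changeme"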

Here's what I know:

  • Logstash is running on my local machine; Elasticsearch and Kibana are hosted at Elastic.co.

  • The error message says the Elasticsearch instance is dying, but that instance is the one hosted at Elastic.co. I've reached out to Elastic.co support, and they've recommended:

  1. updating the plugin (done: I updated the logstash-output-elasticsearch plugin from version 7.3.7 to 9.0.0)

  2. confirming it's pointing to the right instance, since the error mentions localhost:9200 (confirmed -- I'm pasting our output script below, and again, it sort of works, just not completely; I've also checked the yml file and it doesn't mention localhost:9200 anywhere)

       output {
         elasticsearch {
           document_id => "%{claim_id}"

           # es v5.1
           # hosts => ["https://5758e30cca8bbbd0c4eb5317830f32ba.us-east-1.aws.found.io:9243"]

           # es v5.5
           hosts => ["https://37656d7ac4850603306fc6108576273e.us-east-1.aws.found.io:9243"]

           user => "xxxxxx"
           password => "xxxxx"
           index => "claims-jasmine-v3"
           workers => 1
         }
         stdout {
           # codec => rubydebug
         }
       }
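To rule out the endpoint itself, a direct check like this (assuming curl is available; credentials redacted the same way as in the conf) should return the cluster info JSON if the host is reachable:

    curl -u xxxxxx:xxxxx "https://37656d7ac4850603306fc6108576273e.us-east-1.aws.found.io:9243"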

I've tried the solutions previously described in similar topics (disabling X-Pack monitoring, rate limiting, updating the plugin), with no luck.

Here is the output from the error messages, annotated:

## note that it mentions _xpack, which means Logstash is still hooked into X-Pack monitoring

[2018-01-24T10:19:45,797][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>["http://localhost:9200"], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>"false", document_type=>"logstash_stats", sniffing=>"false", user=>"logstash_system", password=>"changeme", ssl=>"false", id=>"710374655e8d374e8e681c853584e11b845f7812-2">}

## because it's hooked into X-Pack, it resets the pool to the monitoring default URL, which in this case is localhost

[2018-01-24T10:19:48,546][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_system:xxxxxx@localhost:9200/]}}

## then the system runs a health check against Elasticsearch, which has been changed to localhost...

[2018-01-24T10:19:48,546][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}

## ...and as expected, it is not connecting

[2018-01-24T10:19:50,859][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::SocketException] Connection refused: connect"}
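One more check I can think of (assuming the default Logstash API port 9600): listing the loaded pipelines should show whether an internal monitoring pipeline is still being created alongside main, since that pipeline is what owns the localhost output:

    curl -XGET "http://localhost:9600/_node/pipelines?pretty"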

Thanks in advance for your help! I really appreciate it.
