Issue with App Search (download) release -- not quite a connection issue

I'm interested in trying out app-search self hosted, but I'm having trouble getting started.

I can submit an object using paste JSON or upload, and in the logs it is all received correctly.
But App Search keeps telling me that I have 0 documents.

Setup:

  • Elasticsearch 7.2 with a Basic license (installed through the Elasticsearch Kubernetes operator), exposed with authentication over HTTPS on port 443 of my server
  • Kibana works fine
  • App Search is configured with the Elasticsearch hostname, username, and password
  • I can log in to App Search without problems
  • I can submit an object using paste JSON or upload, and the App Search logs show the document(s) are received successfully:
app-server.1 | [2019-07-18T16:14:26.892+00:00][12273][2366][rails][INFO]: [3dd6e664-73b0-48e8-b81f-4f5aa8d9939c] Engine[5d30804df841b462bf4f3c50]: Adding a batch of 2 documents to the index asynchronously
app-server.1 | [2019-07-18T16:14:26.918+00:00][12273][2366][rails][INFO]: [3dd6e664-73b0-48e8-b81f-4f5aa8d9939c] [ActiveJob] Enqueueing a job into the '.app-search-esqueues-me_queue_v1_index_adder' index. {"job_type"=>"ActiveJob::QueueAdapters::EsqueuesMeAdapter::JobWrapper", "payload"=>{"args"=>[{"job_class"=>"Work::Engine::IndexAdder", "job_id"=>"984576c7d5d00b376476f1aba9b863732290f8b3", "queue_name"=>"index_adder", "arguments"=>["5d30804df841b462bf4f3c50", ["5d30813cf841b4e1a64f3c54", "5d30813cf841b4e1a64f3c55"]], "locale"=>:en, "executions"=>1}]}, "status"=>"pending", "created_at"=>1563466466917, "perform_at"=>1563466466917, "attempts"=>0}
app-server.1 | [2019-07-18T16:14:27.074+00:00][12273][2366][active_job][INFO]: [3dd6e664-73b0-48e8-b81f-4f5aa8d9939c] [ActiveJob] [2019-07-18 16:14:27 UTC] enqueued Work::Engine::IndexAdder job (984576c7d5d00b376476f1aba9b863732290f8b3) on `index_adder`
app-server.1 | [2019-07-18T16:14:27.076+00:00][12273][2366][active_job][INFO]: [3dd6e664-73b0-48e8-b81f-4f5aa8d9939c] [ActiveJob] Enqueued Work::Engine::IndexAdder (Job ID: 984576c7d5d00b376476f1aba9b863732290f8b3) to EsqueuesMe(index_adder) with arguments: "5d30804df841b462bf4f3c50", ["5d30813cf841b4e1a64f3c54", "5d30813cf841b4e1a64f3c55"]
app-server.1 | [2019-07-18T16:14:27.087+00:00][12273][2366][action_controller][INFO]: [3dd6e664-73b0-48e8-b81f-4f5aa8d9939c] Completed 200 OK in 723ms (Views: 2.0ms)

But the worker returns the following errors:

worker.1     | [2019-07-18T16:14:28.176+00:00][12274][2364][rails][WARN]: Failed to claim job 984576c7d5d00b376476f1aba9b863732290f8b3, claim conflict occurred
worker.1     | [2019-07-18T16:14:28.176+00:00][12274][2370][rails][WARN]: Failed to claim job 984576c7d5d00b376476f1aba9b863732290f8b3, claim conflict occurred
worker.1     | [2019-07-18T16:14:28.176+00:00][12274][2366][rails][WARN]: Failed to claim job 984576c7d5d00b376476f1aba9b863732290f8b3, claim conflict occurred
worker.1     | [2019-07-18T16:14:28.181+00:00][12274][2368][active_job][INFO]: [ActiveJob] [Work::Engine::IndexAdder] [984576c7d5d00b376476f1aba9b863732290f8b3] Performing Work::Engine::IndexAdder from EsqueuesMe(index_adder) with arguments: "5d30804df841b462bf4f3c50", ["5d30813cf841b4e1a64f3c54", "5d30813cf841b4e1a64f3c55"]
worker.1     | [2019-07-18T16:14:28.546+00:00][12274][2368][rails][INFO]: [ActiveJob] [Work::Engine::IndexAdder] [984576c7d5d00b376476f1aba9b863732290f8b3] Adding document 5d30813cf841b4e1a64f3c54 to index for engine 5d30804df841b462bf4f3c50
worker.1     | [2019-07-18T16:14:29.053+00:00][12274][2368][active_job][INFO]: [ActiveJob] [Work::Engine::IndexAdder] [984576c7d5d00b376476f1aba9b863732290f8b3] Performed Work::Engine::IndexAdder from EsqueuesMe(index_adder) in 864.99ms
worker.1     | [2019-07-18T16:14:29.055+00:00][12274][2368][rails][ERROR]: Retrying Work::Engine::IndexAdder in 300 seconds, due to a StandardError. The original exception was #<Faraday::ConnectionFailed wrapped=#<Manticore::SocketException: Connection refused (Connection refused)>>.
worker.1     | [2019-07-18T16:14:29.058+00:00][12274][2368][rails][INFO]: [ActiveJob] Enqueueing a job into the '.app-search-esqueues-me_queue_v1_index_adder' index. {"job_type"=>"ActiveJob::QueueAdapters::EsqueuesMeAdapter::JobWrapper", "payload"=>{"args"=>[{"job_class"=>"Work::Engine::IndexAdder", "job_id"=>"984576c7d5d00b376476f1aba9b863732290f8b3", "queue_name"=>"index_adder", "arguments"=>["5d30804df841b462bf4f3c50", ["5d30813cf841b4e1a64f3c54", "5d30813cf841b4e1a64f3c55"]], "locale"=>:en, "executions"=>2}]}, "status"=>"pending", "created_at"=>1563466469057, "perform_at"=>1563466769056, "attempts"=>0}
worker.1     | [2019-07-18T16:14:29.079+00:00][12274][2368][rails][INFO]: [ActiveJob] Ignoring duplicate job class=Work::Engine::IndexAdder, id=984576c7d5d00b376476f1aba9b863732290f8b3, args=["5d30804df841b462bf4f3c50", ["5d30813cf841b4e1a64f3c54", "5d30813cf841b4e1a64f3c55"]]
worker.1     | [2019-07-18T16:14:29.080+00:00][12274][2368][active_job][INFO]: [ActiveJob] [2019-07-18 16:14:29 UTC] enqueued Work::Engine::IndexAdder job (984576c7d5d00b376476f1aba9b863732290f8b3) on `index_adder`
worker.1     | [2019-07-18T16:14:29.084+00:00][12274][2368][active_job][INFO]: [ActiveJob] Enqueued Work::Engine::IndexAdder (Job ID: 984576c7d5d00b376476f1aba9b863732290f8b3) to EsqueuesMe(index_adder) at 2019-07-18 16:19:29 UTC with arguments: "5d30804df841b462bf4f3c50", ["5d30813cf841b4e1a64f3c54", "5d30813cf841b4e1a64f3c55"]
worker.1     | [2019-07-18T16:14:29.086+00:00][12274][2368][rails][INFO]: Deleting: {:index=>".app-search-esqueues-me_queue_v1_index_adder", :type=>nil, :id=>"984576c7d5d00b376476f1aba9b863732290f8b3"}

In the Elasticsearch logs I can see that App Search is connected, because it triggers the following deprecation warning every so often. It does not seem related, though:

{"type": "deprecation", "timestamp": "2019-07-18T16:34:50,046+0000", "level": "WARN", "component": "o.e.d.s.a.b.h.DateHistogramAggregationBuilder", "cluster.name": "quickstart", "node.name": "quickstart-es-z92ct6ztpk", "cluster.uuid": "H77aFeQoQUqU7ckCMK_gVg", "node.id": "D7V-shukRdi9WyuxuWdHIQ",  "message": "[interval] on [date_histogram] is deprecated, use [fixed_interval] or [calendar_interval] in the future."  }

I have checked the cluster settings, and they include the following (I'm not sure whether App Search or I set this):

{
  "persistent": {
    "action": {
      "auto_create_index": ".app-search-*-logs-*,-.app-search-*,+*"
    },
    "discovery": {
      "zen": {
        "minimum_master_nodes": "1"
      }
    }
  }
}

The end result is that there are some indices that start with .app-search, but no data in them...!?

help?

Hey there --

That doesn't look good. :smile:

I'm going to try to replicate this on my end.

In the meantime, can you please share your config/app_search.yml?

There is nothing special in my config.

allow_es_settings_modification: true

# Elasticsearch full cluster URL:
elasticsearch.host: https://my.host-name.io

# Elasticsearch credentials:
elasticsearch.username: elastic
elasticsearch.password: the_autogenerated_password

# Elasticsearch SSL settings:
elasticsearch.ssl.enabled: true
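
If the certificate in front of Elasticsearch is signed by a private CA, I assume the SSL section could be extended roughly like this. I have not verified that these exact keys exist in this release, so treat them as an assumption; the path is only a placeholder:

# Assumed additional SSL keys (unverified for this App Search release)
elasticsearch.ssl.certificate_authority: /path/to/my-ca.pem   # placeholder path to the CA certificate
elasticsearch.ssl.verify: true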

So... I've been trying to dig into this further. With the same App Search install I can connect to an Elasticsearch instance running locally in a Docker container, and then everything works. I have the feeling it has something to do with the ingress-nginx that sits in between on the server.

The connection generally works, but then there is one process that fails. So perhaps there is a part of the application (the worker?) that doesn't support Server Name Indication (SNI) or something like that (just guessing here).

OK, last update: I've now deployed a basic Elasticsearch node on the server (without the operator), and can confirm that:

  • When connecting over TLS through ingress-nginx it /does not/ work.
  • When going to the same Elasticsearch node directly (exposed as a NodePort service, roughly as sketched below) it /does/ work.
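
For completeness, that NodePort service is nothing special. Something along these lines, where the names and the node port are placeholders rather than my exact manifest:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-direct      # placeholder name
spec:
  type: NodePort
  selector:
    app: elasticsearch            # placeholder; must match the Elasticsearch pod labels
  ports:
    - name: https
      port: 9200                  # service port
      targetPort: 9200            # port Elasticsearch listens on inside the pod
      nodePort: 30920             # placeholder port in the 30000-32767 NodePort range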

Now the question is: why do some App Search calls work through the TLS proxy, and not others?
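
If it really is a TLS termination / SNI issue, one thing that might help is letting ingress-nginx pass the TLS connection straight through to Elasticsearch instead of terminating it at the proxy. As far as I understand, that would look roughly like the sketch below; the host and service name are guesses based on my setup, and the controller has to be started with --enable-ssl-passthrough:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: elasticsearch-passthrough                  # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Hand the TLS connection to the backend untouched instead of terminating it at nginx.
    # Requires the ingress-nginx controller to run with --enable-ssl-passthrough.
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: my.host-name.io                        # same hostname as in app_search.yml
      http:
        paths:
          - backend:
              serviceName: quickstart-es-http      # guess at the operator-created ES service name
              servicePort: 9200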

Thatcher -- thank you for the updates.

I've tried poking around with different configurations but haven't yet been able to reproduce this. The information you've given is helpful. I'll take some time to digest it and run it by a few teammates; if you try anything else, please do let us know.

Enjoy the weekend,

Kellen
