'Unable to process GET request to elasticsearch URL' in Zammad

Hello all,

I am new to Elastic products, so forgive my ignorance.

I installed Zammad on a Debian server today. Everything went well except the search indexing, even though the Elasticsearch service is up and running.

When I test the site in a browser on the server itself, I get the Apache test page, even though I have disabled it; from a client machine I cannot reach anything. Zammad itself works fine.

When testing in Zammad, I receive the following error:

Unable to process GET request to elasticsearch URL 'http://localhost:9200/zammad_production_ticket/_search'. Check the response and payload for detailed information:

Response:
#<UserAgent::Result:0x00007fbc361326e0 @success=false, @body="<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n<html><head>\n<title>404 Not Found</title>\n</head><body>\n<h1>Not Found</h1>\n<p>The requested URL was not found on this server.</p>\n<hr>\n<address>Apache Server at localhost Port 9200</address>\n</body></html>\n", @data=nil, @code="404", @content_type=nil, @error="No such file http://localhost:9200/zammad_production_ticket/_search, 404!", @header={"date"=>"Fri, 24 Jan 2025 21:42:33 GMT", "server"=>"Apache", "content-length"=>"257", "connection"=>"close", "content-type"=>"text/html; charset=iso-8859-1"}>

Payload:
{"size":0,"query":{"bool":{"must":[{"range":{"created_at":{"from":"2024-12-31T23:00:00Z","to":"2025-12-31T22:59:59Z"}}},{"bool":{"must":[{"bool":{"must_not":[{"term":{"state.name.keyword":"merged"}}]}}]}}]}},"aggs":{"time_buckets":{"date_histogram":{"field":"created_at","calendar_interval":"month","time_zone":"Europe/Budapest"}}},"sort":[{"updated_at":{"order":"desc"}},"_score"]}

Payload size: 0M

The port is open, and I have tried uncommenting network.host in elasticsearch.yml, but with the same results. When I try to run the search indexing, I get the same 'Unable to process...' error with the Apache test page's HTML shown below it.
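
For reference, this is roughly how I checked that the port is open (from memory, so the exact commands may differ slightly):

nc -zv localhost 9200
curl -v http://localhost:9200/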

Something probably distracted me while installing. Can you please tell me how I can fix this? I'm pretty sure the Apache page showing up is not right, but I'm not sure how to get rid of it.

I appreciate your help in advance.

Is Elasticsearch responding to curl? Does

curl http://localhost:9200

get a response?

If so, check what indices you have:

curl http://localhost:9200/_cat/indices
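
If that works, adding ?v should give you column headers, which makes the list easier to read:

curl 'http://localhost:9200/_cat/indices?v'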

"even though Elasticsearch service is up and running"

I've no idea what installation instructions you followed for installing Elasticsearch ... it would be nice to know. But sharing at least the elasticsearch.yml will likely help, as will the log it produces when starting up.
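
On a .deb install those should be in the default locations, something like this (adjust if your packaging differs):

sudo cat /etc/elasticsearch/elasticsearch.yml
sudo ls /var/log/elasticsearch/
sudo journalctl -u elasticsearch --no-pager | tail -n 100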

Hello,

First of all, thank you for your reply.

It was the "Install Zammad Ticketing System on Debian 12" guide by Kifarunix; I can't post URLs here as I'm a newbie.

localhost:9200 responds, but I receive the Apache test page. /_cat/indices gives back a 404 error page.

Checking the status of the service, I don't see any anomalies.

Or should I maybe just reinstall ES? I installed it separately from Zammad, so it probably wouldn't cause any cross-problems.
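
For reference, what I ran to check the service was just:

sudo systemctl status elasticsearch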

Here is elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 127.0.0.1
#http.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 24-01-2025 14:16:06
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["hu0001sapp"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
http.max_content_length: 400mb

And elasticsearch.log:

[2025-01-25T01:30:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] [hu0001sapp] triggering scheduled [ML] maintenance tasks
[2025-01-25T01:30:00,035][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [hu0001sapp] Deleting expired data
[2025-01-25T01:30:00,068][INFO ][o.e.x.m.j.r.UnusedStatsRemover] [hu0001sapp] Successfully deleted [0] unused stats documents
[2025-01-25T01:30:00,068][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [hu0001sapp] Completed deletion of expired ML data
[2025-01-25T01:30:00,069][INFO ][o.e.x.m.MlDailyMaintenanceService] [hu0001sapp] Successfully completed [ML] maintenance task: triggerDeleteExpiredDataTask
[2025-01-25T02:30:00,008][INFO ][o.e.x.s.SnapshotRetentionTask] [hu0001sapp] starting SLM retention snapshot cleanup task
[2025-01-25T02:30:00,011][INFO ][o.e.x.s.SnapshotRetentionTask] [hu0001sapp] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2025-01-25T10:01:28,870][WARN ][o.e.c.c.ClusterBootstrapService] [hu0001sapp] this node is locked into cluster UUID [a7FpKn6qRiyrrIHxKu4IyQ] but [cluster.initial_master_nodes] is set to [hu0001sapp]; remove this setting to avoid possible data loss caused by subsequent cluster bootstrap attempts; for further information see https://www.elastic.co/guide/en/elasticsearch/reference/8.17/important-settings.html#initial_master_nodes

Thank you, RainTown.

You are on HTTPS, not HTTP.

AND you are going to need to account for the self-signed cert AND authentication.

So start with curl and get that working...

curl -k -v -u elastic https://localhost:9200

I would look at this documentation closely

This is the .deb one; if you did the .rpm install you can look at .rpm ... but they are the same here.
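
On a fresh 8.x .deb install the usual blockers are the elastic password and the self-signed CA; something like this should get curl talking to it (paths are the package defaults, adjust if yours differ):

# print/reset the password for the built-in elastic user
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic

# then query it over HTTPS using the auto-generated CA instead of -k
curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200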

Hello Stephen,

A bit of confusion here, so sorry about that.

At first I set it up as HTTPS, but then realized I don't have the needed certificate right now (I'm on holiday, just trying to set up a ticketing system for my org), so for testing purposes I went with plain HTTP. It has definitely messed things up in the .yml file. Do I only need to comment out the xpack lines, or set them to false, and then run the search indexing again?

Thanks a lot.

Sorry, I am lost at this point... if you do not want HTTPS etc. you need to get your system all configured correctly... come back when you are ready... and have the system in the state you want... backing things out... is painful.

Completely agree. If I were you I'd start over from scratch; my feeling is you are not that close to a working setup.

Amongst the many confusing things here is this:

"localhost:9200 responds, but I receive the Apache test page. /_cat/indices gives back a 404 error page."

localhost:9200 should be Elasticsearch (not Apache) responding (9200 is its standard HTTP/HTTPS port), and the response will be something like:

{
  "name" : "some_name",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "aNbOasdgQcuRVMpe8hmoLg",
  "version" : {
    "number" : "8.17.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "2b6a7fed44faa321997703718f07ee0420804b41",
    "build_date" : "2024-12-11T12:08:05.663969764Z",
    "build_snapshot" : false,
    "lucene_version" : "9.12.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

The _cat/indices call should return a list of indices. If you are seeing Apache pages, then you are not talking to Elasticsearch.
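
One quick way to see which process is actually bound to 9200 (assuming ss is available, which it normally is on Debian):

sudo ss -tlnp | grep ':9200'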

I'd suggest getting a working Elasticsearch installation first, then worrying about the rest of whatever Zammad is later.

Or, and please don't take this the wrong way, ask in a Zammad forum.

I got it fixed. For some reason the PID on port 9200 belonged to Apache, so I changed the port in elasticsearch.yml to 9201; now its PID is Java. I could rebuild the indexes, and Reports in Zammad now works perfectly.
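
Roughly what I did, from memory (the Zammad commands are the ones from their docs; adjust the URL/scheme if your security settings differ):

# in /etc/elasticsearch/elasticsearch.yml: set http.port: 9201
sudo systemctl restart elasticsearch

# point Zammad at the new port and rebuild the search index
zammad run rails r "Setting.set('es_url', 'http://localhost:9201')"
zammad run rake zammad:searchindex:rebuild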

Thank you for all the help.

I was completely wrong there then :slight_smile:

Good luck with the tool.