There are no ingest nodes in this cluster, unable to forward request to an ingest node

Kibana version:
7.13.4
Elasticsearch version:
7.13.4
APM Server version:
7.13.4
APM Agent language and version:
NA
Browser version:
NA
Original install method (e.g. download page, yum, deb, from source, etc.) and version:
Helm
Fresh install or upgraded from other version?
Fresh
Is there anything special in your setup? For example, are you using the Logstash or Kafka outputs? Are you using a load balancer in front of the APM Servers? Have you changed index pattern, generated custom templates, changed agent configuration etc.
Nothing special
Description of the problem including expected versus actual behavior. Please include screenshots (if relevant):
We are using the below setup, all installed through Helm charts:

  1. 3 master pods
  2. 2 data pods
  3. 3 remote client pods
  4. 1 Kibana pod
  5. 1 apm-server pod

Steps to reproduce:

  1. Start the apm-server deployment

Errors in browser console (if relevant):
NA
Provide logs and/or server output (if relevant):

```
{"log.level":"info","@timestamp":"2021-07-27T13:52:24.351Z","log.logger":"publisher","log.origin":{"file.name":"pipeline/retry.go","file.line":213},"message":"retryer: send wait signal to consumer","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2021-07-27T13:52:24.351Z","log.logger":"publisher","log.origin":{"file.name":"pipeline/retry.go","file.line":217},"message":"  done","ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2021-07-27T13:52:24.353Z","log.logger":"elasticsearch","log.origin":{"file.name":"elasticsearch/client.go","file.line":224},"message":"failed to perform any bulk index operations: 500 Internal Server Error: {\"error\":{\"root_cause\":[{\"type\":\"illegal_state_exception\",\"reason\":\"There are no ingest nodes in this cluster, unable to forward request to an ingest node.\"}],\"type\":\"illegal_state_exception\",\"reason\":\"There are no ingest nodes in this cluster, unable to forward request to an ingest node.\"},\"status\":500}","ecs.version":"1.6.0"}
```

Is it necessary to have an ingest node for apm-server to work?
Is it advisable to enable the ingest role on the same remote client node?
How do we make apm-server work without ingest nodes if we are using external ingestion like Logstash?

Regards
Nitin

Is it necessary to have an ingest node for apm-server to work?

In its default configuration, yes. Per Configure the Elasticsearch output | APM User Guide [8.11] | Elastic, you can disable the ingest pipeline by configuring apm-server with output.elasticsearch.pipeline: _none. Although it is possible, I wouldn't recommend it -- this might break the UI. We are making increasing use of ingest nodes for processing events as they are indexed.
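For reference, disabling the pipeline would look roughly like this in apm-server.yml (the host is a placeholder; again, this is not recommended):

```
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]  # placeholder host
  # Bypass ingest pipelines entirely -- may break parts of the APM UI
  pipeline: _none
```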

Is it advisable to enable the ingest role on the same remote client node?

I'm afraid I don't understand your question, but you can find information about Elasticsearch node roles at Node | Elasticsearch Guide [8.11] | Elastic
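If by "remote client" you mean a coordinating-only node, then adding the ingest role is a matter of extending node.roles in that node's elasticsearch.yml. A sketch (an empty list would mean coordinating-only):

```
# elasticsearch.yml on the client node
# Adding "ingest" lets this node run ingest pipelines in addition
# to acting as a coordinating node.
node.roles: [ ingest ]
```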

How do we make apm-server work without ingest nodes if we are using external ingestion like Logstash?

If you're using the Logstash output, you can instruct Logstash to pass the pipeline onto Elasticsearch, as described in Use ingest pipelines for parsing | Logstash Reference [7.13] | Elastic. You would need to load the ingest pipeline directly in Elasticsearch.
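Under that approach, the Logstash pipeline passes the ingest pipeline name from the event metadata through to Elasticsearch. A sketch (port and host are placeholders):

```
input {
  beats {
    port => 5044  # placeholder port for the apm-server Logstash output
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]  # placeholder host
    # Forward the ingest pipeline name set in the event metadata, if any
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```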


Hello @axw ,

Thank you for your response. We are completely new to implementing APM Server. Our motive for using APM Server is to collect application API-level metrics via the OpenTelemetry Collector (elastic exporter), pass them to Elasticsearch via APM Server, and then visualize them through Kibana. But we don't know how to set up this whole pipeline. As per your suggestion, we have converted the remote client node role into a "coordinating node + ingest node" role, and the previous error has now vanished. But we are not sure whether to use indices or pipelines to feed into Elasticsearch to achieve our purpose, nor could we find any relevant documentation or articles for such settings. Also, the APM page in Kibana shows no APM services installed.

Could you please give us some pointers so that we can use the correct APM settings? We are very keen on using APM for our use case.

The motive to use APM Server is to collect and visualize application API-level metrics via the OpenTelemetry Collector (elastic exporter), pass them to ES via APM Server, and then visualize them through Kibana.

The elastic exporter in opentelemetry-collector-contrib is deprecated, and has been replaced by native support for OpenTelemetry Line Protocol (OTLP) directly in apm-server. You should either use the otlp exporter in opentelemetry-collector, pointing it at apm-server, or configure your instrumented applications (OpenTelemetry SDKs) to point directly at apm-server.
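For the SDK route, most OpenTelemetry SDKs can be pointed at apm-server via the standard OTLP environment variables. A sketch (endpoint, token, and service name are placeholders):

```
# Standard OpenTelemetry SDK environment variables (values are placeholders)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://apm-server:8200"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <secret_token>"
export OTEL_SERVICE_NAME="my-service"
```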

But we are not sure whether to use indices or pipelines to feed into Elasticsearch to achieve our purpose. Nor could we find any relevant documentation or articles for such settings.

Indices and ingest pipelines are two very different things -- you don't use one in place of the other. You can read about what indices are at Data in: documents and indices | Elasticsearch Guide [8.11] | Elastic, and about ingest pipelines at Ingest pipelines | Elasticsearch Guide [8.11] | Elastic

My recommendations are (unless you specifically need Logstash):

  1. Configure APM Server to output directly to Elasticsearch. This is the default, most common, and simplest configuration.
  2. Ensure you have at least one Elasticsearch ingest node, and run APM Server with its default pipeline configuration, i.e. leave output.elasticsearch.pipeline unspecified.

When you run APM Server in this way, APM Server will install the ingest pipelines itself.


Hello @axw
Thank you for your suggestions. It really helps to understand the flow. So we have tried to configure it in a similar fashion.

You should either use the otlp exporter in opentelemetry-collector, pointing it at apm-server

```
config.yaml: |-
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:55680
          http:
            endpoint: 0.0.0.0:55681
      hostmetrics:
        collection_interval: 1m
        scrapers:
          cpu:
          load:
          memory:
    processors:
      batch: null
    exporters:
      elastic:
        apm_server_url: 'http://apm-server-apm-server.elasticsearch.svc.cluster.local:8200'
        # secret_token: 'XXXXXXXXXXXXXXX'
      logging:
        loglevel: DEBUG
    extensions:
      health_check:
    service:
      pipelines:
        metrics:
          receivers:
            - otlp
            - hostmetrics
          exporters:
            - logging
            - elastic
        traces:
          receivers:
            - otlp
          processors:
            - batch
          exporters:
            - elastic
            - logging
```
  1. Configure APM Server to output directly to Elasticsearch. This is the default, most common, and simplest configuration.
  2. Ensure you have at least one Elasticsearch ingest node, and run APM Server with its default pipeline configuration, i.e. leave output.elasticsearch.pipeline unspecified.
```
apm-server.yml: |
  apm-server:
    host: "0.0.0.0:8200"
  queue: {}
  setup.template.settings:
    index:
      number_of_shards: 1
      codec: best_compression
  setup.dashboards.enabled: false
  setup.kibana:
    host: "http://kibana-kibana.elasticsearch.svc.cluster.local"
  output.elasticsearch:
    hosts: ["http://es-client.elasticsearch.svc.cluster.local:9200"]
```

Could you please review the above configs for OTel and APM respectively and suggest changes? There are no errors in the logs as of now.

Also, how would we be able to see data in APM, Logs & Metrics section under Observability module?

Regards
Nitin G

Could you please review the above configs for OTel and APM respectively and suggest changes? There are no errors in the logs as of now.

Your APM Server config looks right. I suggest changing your opentelemetry-collector config to use the "otlp" exporter, like in OpenTelemetry integration | APM User Guide [8.11] | Elastic. Something like:

```
exporters:
  otlp/elastic:
    endpoint: "apm-server-apm-server.elasticsearch.svc.cluster.local:8200"
    insecure: true # set this if you're not configuring APM Server with TLS
    headers:
      Authorization: "Bearer <secret_token>"
```

Once you've done that, check the apm-server logs to make sure it is receiving requests, and whether there are any errors in them. If you can't see any requests in the apm-server log, then check the opentelemetry-collector log to make sure it is configured correctly.

Also, how would we be able to see data in APM, Logs & Metrics section under Observability module?

If your opentelemetry-collector (and SDKs) and apm-server are configured correctly, then data should just start showing up in the APM app in Kibana. If not, then there's likely an issue in one of the above components and you'll need to look through the logs to debug the configuration.

Note that we don't currently support OpenTelemetry's experimental log data. If you want to see logs for your applications, I would recommend using Filebeat to consume logs from Docker or from log files on disk: Filebeat Reference [8.11] | Elastic
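A minimal Filebeat sketch for Kubernetes container logs might look like this (the log path is the standard Kubernetes location; the output host is taken from the apm-server config above):

```
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log   # standard Kubernetes container log path

output.elasticsearch:
  hosts: ["http://es-client.elasticsearch.svc.cluster.local:9200"]
```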


Thank you so much for all the details and references. This helped me create an agent and collector setup. I have marked one of your answers as the solution. I have more queries, but I will ask them in another thread.


This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.