If the output of APM Server is not Elasticsearch, will the Kibana APM UI be empty?

apm-error.conf
apm-metric.conf
apm-onboarding.conf
apm-span.conf
apm-transaction.conf

I have five conf files with contents similar to the following:

input {
  kafka {
    topics => "apm-transaction"
    group_id => "apm_group"
    client_id => "apm-transaction01"
    bootstrap_servers => "192.168.10.145:9092,192.168.10.146:9092,192.168.10.147:9092"
    codec => "json"
    #max_partition_fetch_bytes => "10485760"
  }
}

filter {
  if [client][ip] {
    geoip {
      source => "[client][ip]"
      target => "geoip"
      fields => ["city_name", "region_name", "country_name"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.10.139:9200","192.168.10.140:9200","192.168.10.141:9200"]
    index => "apm-7.6-transaction-%{+YYYY.MM.dd}"
  }
}
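Since the five files differ only in topic and index name, they could be collapsed into a single pipeline. This is only a sketch (untested): it assumes every event carries the processor.event field that APM Server sets, which is the same field the apm-server.yml topic routing below keys on.

```
input {
  kafka {
    topics => ["apm-error", "apm-metric", "apm-onboarding", "apm-span", "apm-transaction"]
    group_id => "apm_group"
    bootstrap_servers => "192.168.10.145:9092,192.168.10.146:9092,192.168.10.147:9092"
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.10.139:9200","192.168.10.140:9200","192.168.10.141:9200"]
    # e.g. processor.event == "transaction" -> apm-7.6-transaction-2020.03.01
    index => "apm-7.6-%{[processor][event]}-%{+YYYY.MM.dd}"
  }
}
```

Deriving the index name from %{[processor][event]} yields the same apm-7.6-transaction-... names as the per-topic files.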

When the output went directly to Elasticsearch, I could see the data in the APM UI. But after switching the output to Kafka, some data seems to be lost, and the APM UI is now empty. Is there any way to restore the APM UI data?
I have also tried importing the APM Server template into Elasticsearch before Logstash writes to it, but it has no effect.

Hi,

You can have APM Server output to Kafka and then use Logstash to pull the events out of Kafka and index them into Elasticsearch. You should see events in the UI without problems, but with the information you have given I can't pinpoint the problem.

One of the things you need to do first is to point APM Server at Elasticsearch and run apm-server setup -e to save the correct index template.

Maybe this tutorial can help: https://www.elastic.co/blog/how-to-send-data-through-logstash-or-kafka-from-elastic-apm#introduce-kafka

Let me know if it doesn't.

Juan

I have read this blog and have executed apm-server setup --template, but it still has no effect.

Hi,

Can you share your apm-server.yml configuration along with APM Server, Logstash and Kafka logs?

apm-server.yml

apm-server:
  host: "0.0.0.0:8200"
  rum:
    enabled: true
    allow_origins: '*'
    source_mapping.enabled: true

output.elasticsearch:
  enable: true
  hosts: ["192.168.10.141:9200","192.168.10.139:9200","192.168.10.140:9200"]
  username: "elastic"
  password: "elastic"
  worker: 5
  bulk_max_size: 10240

queue.mem.events: 368640
queue.mem.flush.min_events: 2048
queue.mem.flush.timeout: 1s

setup.template.enabled: true
#setup.template.name: "apm-%{[observer.version]}"
#setup.template.pattern: "apm-%{[observer.version]}-*"
#setup.template.fields: "fields.yml"
setup.template.overwrite: true
setup.template.append_fields:
  - name: http.request.headers
    type: group
    dynamic: true
  - name: http.response.headers
    type: group
    dynamic: true
  - name: transaction.custom
    type: group
    dynamic: true
  - name: transaction.page
    type: group
    dynamic: true

apm-server.kibana:
  enable: true
  host: "192.168.10.148:5601"
  protocol: http
  path: /kibana

kafka:
  enable: false
  hosts: ["192.168.10.145:9092","192.168.10.146:9092","192.168.10.147:9092"]
  topics:
    - topic: "apm-span"
      when.contains:
        processor.event: "span"
    - topic: "apm-transaction"
      when.contains:
        processor.event: "transaction"
    - topic: "apm-metric"
      when.contains:
        processor.event: "metric"
    - topic: "apm-error"
      when.contains:
        processor.event: "error"
    - topic: "apm-onboarding"
      when.contains:
        processor.event: "onboarding"
    - topic: "apm-sourcemap"
      when.contains:
        processor.event: "sourcemap"
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  codec.json:
    pretty: true
  worker: 2

I did not keep the Logstash and Kafka logs; I can reproduce the run again if necessary.
No errors were reported in the Logstash or Kafka logs.

Hi,

With the info you provide I can't say what the issue is. There do seem to be some typos in your config: the setting is output.elasticsearch.enabled, not output.elasticsearch.enable, and the same applies to output.kafka.
Also make sure your yml files are indented correctly; indentation is easily lost in copy-paste, and it matters in yml.
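For reference, a corrected fragment might look like this (assuming the Kafka settings are meant to live under output.kafka rather than a top-level kafka key; the rest of the file stays as-is):

```yaml
output.elasticsearch:
  enabled: true   # was: enable

output.kafka:
  enabled: true   # was: enable, under a top-level kafka key
```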

Generally the right approach would be to start from the beginning step by step, with the simplest configuration possible, e.g.:

  • Check that APM Server can connect to Elasticsearch: ./apm-server test output -e -E 'output.elasticsearch.hosts=["http://localhost:9200"]' -E output.elasticsearch.username=admin -E output.elasticsearch.password=changeme

  • Setup templates with the same arguments:
    ./apm-server setup -e -E 'output.elasticsearch.hosts=["http://localhost:9200"]' -E output.elasticsearch.username=admin -E output.elasticsearch.password=changeme

  • Check that the templates exist:
    curl -u admin:changeme http://localhost:9200/_cat/templates/apm\*

  • Now check the connection to Kafka:
    ./apm-server test output -e -E output.elasticsearch.enabled=false -E output.kafka.enabled=true -E 'output.kafka.hosts=["localhost:9092"]' -E output.kafka.topic=apm
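If you want to script the template check from the third step above, here is a small sketch (the helper is hypothetical; in practice you would feed it the plain-text body returned by the curl to _cat/templates/apm*):

```python
def template_names(cat_output: str) -> list[str]:
    """Pull template names (the first column) out of `_cat/templates` plain-text output."""
    return [line.split()[0] for line in cat_output.splitlines() if line.strip()]

# A sample line shaped like a `_cat/templates/apm*` response: name, index patterns, order
sample = "apm-7.6.0  [apm-7.6.0*]  1\n"
print(template_names(sample))
```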

Keep going with a minimal config and ingest some data. You can query Elasticsearch directly to see whether documents have been indexed: curl -u admin:changeme 'http://localhost:9200/apm*/_search'.
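To go beyond eyeballing the _search response, a short sketch that tallies documents per index (the sample body is hypothetical; in practice pass the JSON returned by the curl above):

```python
import json
from collections import Counter

def docs_per_index(search_body: str) -> Counter:
    """Count returned hits per concrete index in an Elasticsearch _search response body."""
    hits = json.loads(search_body)["hits"]["hits"]
    return Counter(hit["_index"] for hit in hits)

# Hypothetical response body, shaped like Elasticsearch's _search output
sample = json.dumps({"hits": {"hits": [
    {"_index": "apm-7.6-transaction-2020.03.01", "_id": "1"},
    {"_index": "apm-7.6-transaction-2020.03.01", "_id": "2"},
    {"_index": "apm-7.6-span-2020.03.01", "_id": "3"},
]}})
print(dict(docs_per_index(sample)))
```

If every event type but one shows up here, that narrows the problem to a single topic or pipeline file.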

Check the apm-server, Kafka, and Logstash logs along the way, making sure there are no errors. https://www.elastic.co/blog/how-to-send-data-through-logstash-or-kafka-from-elastic-apm#introduce-kafka should still be valid and serve as guidance.