Setup doesn't seem to be configuring indexes correctly?


(Kurtis Rainbolt Greene) #1

So I have logstash, elasticsearch, kibana, an apm server, an apm client, and an apm RUM client all running with the correct settings. However, when I click "Setup Kibana" and attempt to view transaction details, a few things go wrong:

First, it defaults to the default index, which is not the APM index. That feels strange, since the system should know the right index.

Second, no results turn up:

Finally, when I expand the search window to 24hrs I get this error message:

I haven't done anything strange to my setup; I'm using the official Docker images. Also, to be clear, I know how to solve each individual issue, but this feels like something I shouldn't have to drop into the internals to fix, since 5.X.Y worked just fine.


(Søren Louv Jansen) #2

Hi Kurtis,

Sorry to hear about the bugs you are running into. What version of Kibana and APM Server are you using, and what steps do you take to resolve the error message that shows up when expanding the window in Discover?


(Søren Louv Jansen) #3

Hi again,

After taking a closer look, I think you are missing the index template.

Loading index template automatically
The index template should have been created automatically but you might have disabled this in the apm-server.yml config file.
Read more about how the index template is automatically loaded: https://www.elastic.co/guide/en/apm/server/current/configuration-template.html
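If automatic loading was disabled, the relevant setting in apm-server.yml would look roughly like this (a sketch; apm-server uses the Beats-style setup.template.* options):

```yaml
# apm-server.yml (excerpt)
# Template loading is enabled by default; make sure it hasn't been turned off:
setup.template.enabled: true
```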

Loading index template manually
To load the index template manually, please run:

apm-server setup --template

Read more about loading the index template manually here:
https://www.elastic.co/guide/en/apm/server/current/_manually_loading_template_configuration.html
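When the Elasticsearch output is disabled (for example because events are shipped through Logstash, as in this thread), the setup command needs to be pointed at Elasticsearch explicitly. A sketch, assuming the elasticsearch:9200 host used elsewhere in this thread:

```sh
apm-server setup --template \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
```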


(Gil Raphaelli) #4

In addition to what @sqren provided, you should find https://www.elastic.co/blog/how-to-send-data-through-logstash-or-kafka-from-elastic-apm useful for guidance on using apm-server with logstash.


(Kurtis Rainbolt Greene) #5

I believe this has solved my issue; however, I'm now getting a "1 of 7 shards failed" error, which I believe is something else. I appreciate the help here!


(Kurtis Rainbolt Greene) #6

To get it truly working, I had to set fielddata: true (which isn't enabled by default) on these fields:

transaction.type
context.service.agent.name
context.service.name
processor.event

For example, for processor.event (the same mapping update applies to the other three fields). @gil @sqren, was there something I missed?

PUT apm-server-*/_mapping/doc
{
  "properties": {
    "processor.event": { 
      "type": "text",
      "fielddata": true
    }
  }
}

(Gil Raphaelli) #7

The APM index template should actually make those keyword fields, so either something isn't lining up with your apm-server setup --template command (probably setup.template.pattern), or you need to recreate those indices after installing the index template, so the mapping is applied when each index is created.
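One way to recreate the indices is to delete them and let the next ingested event rebuild them, so the freshly installed template is applied at index creation time. A sketch for the Kibana Dev Tools console; this is destructive, and it assumes the apm-server-* index name from the posts above:

```
# Destructive: removes existing APM data so the indices
# are recreated with the template's mappings
DELETE apm-server-*
```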


(Kurtis Rainbolt Greene) #8

Hmm, I haven't changed setup.template.pattern.


(Gil Raphaelli) #9

Excellent, then that's probably the issue here. The default index pattern is apm-6.x.y-*, but it appears your indices are being written to apm-server-*, which won't match the default, so the mapping isn't applied when the index is created. Setting setup.template.pattern='apm-server-6.x.y-*' would address that, assuming you have retained the version number in your indices, which we highly recommend you do. If possible, I'd recommend switching to the default indices and index pattern as laid out in the docs and the blog post referenced previously.
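In apm-server.yml, that override would look something like this (a sketch; 6.x.y stands in for the actual version, which should be kept in both the pattern and the index names):

```yaml
# apm-server.yml (excerpt)
setup.template.pattern: "apm-server-6.x.y-*"
```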


(Kurtis Rainbolt Greene) #10

Oh wow, okay, that makes sense.


(Kurtis Rainbolt Greene) #11

Okay, I've searched my instance top to bottom and I can't find anywhere that I've defined it as apm-server-*. Here's my Logstash config:

input {
  udp {
    port => 12200
    codec => json_lines
    add_field => {
      "input" => "udp"
    }
  }

  gelf {
    add_field => {
      "input" => "gelf"
    }
  }

  http {
    add_field => {
      "input" => "http"
    }
  }

  beats {
    port => 5044
  }
}

output {
  if [@metadata][beat] {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  } else if [input] == "gelf" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "docker-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "default-%{+YYYY.MM.dd}"
    }
  }

  if !([input] == "beats") {
    stdout {
      codec => json_lines
    }
  }
}

I just did a fresh install, and it's still creating an index at apm-server-*.


(Kurtis Rainbolt Greene) #12

I also set the configuration you recommended and I'm still having to set fielddata. Here's my ENTIRE configuration: https://gist.github.com/krainboltgreene/cd7afac60a84c92e2eeaee0ba588a11d


(Gil Raphaelli) #13

Sorry for the trouble @krainboltgreene. The apm-server prefix is coming from [@metadata][beat]; this issue tracks the source of that confusion. You can add a conditional in Logstash ([@metadata][beat] == "apm-server") and set the index to apm-%{[@metadata][version]}-%{+YYYY.MM.dd} accordingly.
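Spelled out as a Logstash output block, that conditional would look roughly like this (a sketch assembled from the snippets in this thread; the hosts value is taken from the config above):

```
output {
  if [@metadata][beat] == "apm-server" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "apm-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```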


(Kurtis Rainbolt Greene) #14

No problems at all @gil, this is amazing software and you all work very hard and I appreciate that!


(Kurtis Rainbolt Greene) #15

Okay, I've now modified my Logstash configuration so that it creates the index per the pattern above. I've also renamed the old indexes, and I've rebuilt my index pattern and dashboards. Interestingly, when I look at an individual transaction in the Discover panel (processor.event:"transaction" AND transaction.id:"aa916d668cd6ae8f" AND trace.id:"6baffb6dada15e95e08b5051279c0afc") I get this warning:

Does anyone know what this could mean or how to investigate it?


(Kurtis Rainbolt Greene) #16

After completely rebuilding everything from scratch it's working perfectly. Thanks so much @gil!!!!!


(Gil Raphaelli) #17

@krainboltgreene I missed your last post. I've seen that issue when a document in the APM index is missing a scripted field that is expected. Please let us know if you come across it again and we can try to track it down.

I'm really glad to hear things are working now!


(system) closed #18

This topic was automatically closed 20 days after the last reply. New replies are no longer allowed.