Configure main instance with puppet module

I have "inherited" some Puppet code that uses the elasticsearch-puppet module to manage our ELK stack, and I was trying to use it to create a separate (unrelated) Elasticsearch cluster. What I cannot figure out is why I cannot just configure the "default" instance of Elasticsearch, rather than creating a sub-directory and managing it as an additional instance. That complicates the paths to logs, data, config, etc. I just want one instance of Elasticsearch per VM. The Puppet module appears to be set up to support multiple instances of ES per host, which I thought was discouraged. Why is the Puppet code set up to make everything more complicated than it needs to be? Am I missing something basic about how to configure it as a single instance?

Current Puppet Code: (trimmed for brevity)

      include elasticsearch
      include java

      elasticsearch::instance { $instance_name:
        ensure => present,
        config => $config_hash,
      }

The config hash has lots of conditionals (SSL, AD auth, auditing, etc.), but let's just say it ends up looking like:

      {
        'network.bind_host'                  => '0.0.0.0',
        'network.publish_host'               => $publish_host,
        'node.master'                        => $node_master,
        'node.data'                          => $node_data,
        'node.ingest'                        => $node_ingest,
        'discovery.zen.minimum_master_nodes' => $min_nodes,
        'discovery.zen.ping.unicast.hosts'   => $nodes,
      }
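
For context, here is a rough sketch of how a conditional hash like this might be assembled in Puppet. This is illustrative only: the `$use_ssl` flag and the `xpack.ssl.*` key are hypothetical stand-ins for whatever conditionals the real code has.

```puppet
# Hypothetical sketch: build a base config, then merge in
# conditional settings. Key names are illustrative only.
$base_config = {
  'network.bind_host'                  => '0.0.0.0',
  'network.publish_host'               => $publish_host,
  'discovery.zen.minimum_master_nodes' => $min_nodes,
  'discovery.zen.ping.unicast.hosts'   => $nodes,
}

$ssl_config = $use_ssl ? {
  true    => { 'xpack.ssl.certificate' => "/etc/elasticsearch/${instance_name}/ssl/node.crt" },
  default => {},
}

# Puppet's + operator merges hashes; keys from the right-hand
# operand win on conflict.
$config_hash = $base_config + $ssl_config
```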

... I don't think anything in the $config_hash is relevant to the instance name, which in the "inherited" code was actually `$::fqdn`.

This means the config lands in /etc/elasticsearch/the.systems.fqdn.tld/ (which complicates the SSL setup and the ability to push "scripts" to each server). Additionally, it puts the logs in /var/log/elasticsearch/the.systems.fqdn.tld/ and the data in /path/to/data/the.systems.fqdn.tld/ (etc.). I have experimented with changing that $::fqdn to $instance_name and setting it to "es-01" (which seems to be the documented "norm"). That is slightly better, in that each host is set up exactly the same, but why the es-01? I just want to use /etc/elasticsearch/elasticsearch.yml, /var/log/elasticsearch, /path/to/data/, and so on.
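
For what it's worth, the fixed-name experiment looked roughly like this. It is only a sketch: with this module the files still land under a per-instance sub-directory, just a predictable one.

```puppet
# Sketch: a fixed instance name gives every host identical paths,
# e.g. /etc/elasticsearch/es-01/ and /var/log/elasticsearch/es-01/,
# but it does not eliminate the sub-directory itself.
elasticsearch::instance { 'es-01':
  ensure => present,
  config => $config_hash,
}
```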

I would usually file an issue on GitHub, but that seemed to be discouraged, so I hope I have posted this to the correct forum here.

Thanks in advance,
Tommy

I suppose it's only fair if I include the hieradata too:

#####
# elasticsearch - https://forge.puppet.com/elastic/elasticsearch
elasticsearch::config:
  cluster.name: "des_%{application_environment}"
#elasticsearch::version: 5.6.14
elasticsearch::package_url: https://artifactory.company.com/artifactory/elasticsearch-local/elasticsearch-5.6.14.rpm
elasticsearch::manage_repo: false
elasticsearch::restart_on_change: false
elasticsearch::restart_plugin_change: false
elasticsearch::datadir: /apps/elasticsearch
elasticsearch::java_install: false
elasticsearch::api_basic_auth_username: elastic
elasticsearch::api_basic_auth_password: changeme # Encrypt with eyaml in the application_env yaml
elasticsearch::api_protocol: https
elasticsearch::api_host: "%{facts.fqdn}"
elasticsearch::api_ca_file: "/etc/elasticsearch/%{facts.fqdn}/ssl/ca.crt"
elasticsearch::jvm_options:
  - -Xms31g
  - -Xmx31g
elasticsearch::security_plugin: x-pack
elasticsearch::plugins:
  x-pack:
    url: https://artifactory.company.com/artifactory/X-Pack/x-pack-5.6.14.zip
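
Per the comment on the password above, the eyaml-encrypted value in the application-environment yaml would look something like the following. The ENC blob here is a truncated placeholder, not a real ciphertext.

```yaml
# Sketch: hiera-eyaml stores the ciphertext inline and decrypts it
# at lookup time. The blob below is a placeholder, not real data.
elasticsearch::api_basic_auth_password: ENC[PKCS7,MIIBeQYJKoZIhvcN...]
```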

As you can see in there, an instance name of $::fqdn requires a lot of extra nonsense. :slight_smile:

Also, these servers are not Internet-connected, so I had to put the RPM directly in Artifactory.

@Tommy2 the situation you're describing is one example of why we're probably moving away from supporting multiple instances of Elasticsearch, as it only serves to complicate the installation. It was added some time ago to support some edge cases, but it's non-standard and creates more problems than it's worth.

If you have feedback regarding this functionality, please feel free to provide it in the related issue I created to discuss it.

@tylerjl Thanks for responding. I was hoping that I wasn't just missing something super easy and straightforward. At a minimum I will +1 it so I can follow it. We look forward to this change.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.