Setting up a "lab-as-prod": Beats roles, permissions and tutorial!

(backtracking now that it's all done, I'd like to start by apologizing for the sheer size of this post, and ask that you bear with me)

So, not only am I trying to build it, I'm also trying to document, step by step, how to do it.

I decided to try this with Docker, because learning just the Stack itself didn't feel like enough, and Docker suits the idea of a lab: I can spin it up easily on a light VM, show things around, and so on. So far I've successfully configured 3 Elasticsearch nodes and a Kibana, with TLS and basic user security.

But when I got to the Beats part... I suffered. I guess I should go step by step here and show what I've done:

I ran docker-compose -f create-certs.yml run --rm create_certs to generate the certs - the documentation was clear on that. My instances.yml contains entries for everything so far: the 3 ES nodes and Kibana, along with the Filebeat and Metricbeat hosts.

instances:
  - name: es1-01
    dns:
      - es1-01
      - localhost
    ip:
      - 127.0.0.1

  - name: es1-02
    dns:
      - es1-02
      - localhost
    ip:
      - 127.0.0.1

  - name: es1-03
    dns:
      - es1-03
      - localhost
    ip:
      - 127.0.0.1

  - name: kibana
    dns:
      - kibana
      - localhost
    ip:
      - 127.0.0.1

  - name: filebeat
    dns:
      - filebeat
      - localhost
    ip:
      - 127.0.0.1

  - name: metricbeat
    dns:
      - metricbeat
      - localhost
    ip:
      - 127.0.0.1
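
For reference, the create_certs service is essentially a wrapper around elasticsearch-certutil. A rough sketch of what it runs against instances.yml (the paths are the ones from the official TLS-on-Docker guide, so treat them as assumptions if your create-certs.yml differs):

#generate one cert per entry in instances.yml, then unpack the bundle
bin/elasticsearch-certutil cert --silent --pem \
  --in config/certificates/instances.yml \
  --out /certs/bundle.zip
unzip /certs/bundle.zip -d /certs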

I then manually spin up a temporary Elasticsearch container, with volume and network mappings, to properly configure the passwords:

docker run -ti -v elastic_lab_data1-01:/usr/share/elasticsearch/data --network=elastic_lab_elastic --env discovery.type=single-node --env-file .env --name=iniciar --rm "docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}" /bin/bash

#inside the container

printf "discovery.type: single-node\nxpack.security.enabled: true" >> config/elasticsearch.yml
#because for some reason the Elasticsearch ignores the --env parameters used on the docker run command

runuser -l elasticsearch -c '/usr/share/elasticsearch/bin/elasticsearch' &
#because I'm setting up .security-*, right?

#wait for Elasticsearch's log to spit out the "Active license is now [BASIC]" line and...
echo "y" | runuser -l elasticsearch -c 'bin/elasticsearch-setup-passwords auto' | grep "PASSWORD" | cut -d ' ' -f 2,4
#to start up built-in passwords

This was my previous "save point". After this I'd shut down the temp container, copy the generated kibana password into the KIBANA_PASSWORD entry of the .env file used by docker-compose.yml, run docker-compose up, and all three ES nodes plus Kibana would pop up.
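
In case anyone wants to script that manual copy, a hypothetical host-side helper (it assumes an existing KIBANA_PASSWORD= line in .env; you paste in the password the temp container printed):

#read the password without echoing it, then stamp it into .env
read -r -s -p "kibana password: " KIBANA_PASSWORD && echo
sed -i "s|^KIBANA_PASSWORD=.*|KIBANA_PASSWORD=${KIBANA_PASSWORD}|" .env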

I then spent a few days bashing my head against walls, going from one Beats documentation page to another, until I finally landed on this one, even before @DanRoscigno mentioned it to me on GitHub.

Even following it to the letter, I'd still get some errors here and there. I finally ended up with a working configuration, so I hopped back onto the temp container:

#changed the setup-passwords command to...
echo "y" | runuser -l elasticsearch -c 'bin/elasticsearch-setup-passwords auto' | grep "PASSWORD" | cut -d ' ' -f 2,4 > passwords.txt

#to add newly generated random passwords for the beats_setup and beats_writer users
printf "beats_setup $(cat /dev/urandom | base64 | head -c 42)\nbeats_writer $(cat /dev/urandom | base64 | head -c 42)\n" >> passwords.txt

grep -E "elastic|kibana|beats_setup|beats_writer" passwords.txt
export ELASTIC_PASSWORD="$(grep "elastic" passwords.txt | cut -d ' ' -f 2)"
export KIBANA_PASSWORD="$(grep "kibana" passwords.txt | cut -d ' ' -f 2)"
export BEATSSETUP_PASSWORD="$(grep "beats_setup" passwords.txt | cut -d ' ' -f 2)"
export BEATSWRITER_PASSWORD="$(grep "beats_writer" passwords.txt | cut -d ' ' -f 2)"
#all of this prints the passwords to the console and sets them as variables for later use
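
(As an aside: a slightly sturdier variant of that urandom one-liner, in case base64's line wrapping or the '+' and '/' characters ever bite - purely a suggestion, not what I ran:)

#fixed 48 bytes of entropy, strip padding/newlines/specials, trim to 42 chars
head -c 48 /dev/urandom | base64 | tr -d '=+/\n' | head -c 42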

And here it comes... It's nice and all to have the GUI available, but for the sake of simplicity I thought it best to stick to the command line to create the roles and users for the Beats. Since I'm not very good at cURLing things, I initially set them up in the GUI, then extracted the equivalent cURL calls from Dev Tools.
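
For anyone wanting the same shortcut: once a role exists (however you created it), you can read it back as JSON and reuse the response body in curl, which is essentially what I did:

#read the GUI-created role back; the JSON is what goes into the POST calls below
curl -X GET "localhost:9200/_security/role/beats_setup?pretty" \
  -u elastic:${ELASTIC_PASSWORD} -k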

But here's the thing: doing just the bare minimum the documentation says, I'd still get errors when trying to spin up the Beats! So, after some more research and some talking in the Elastic Slack's #beats channel, I ended up with this configuration for the roles:

#first, I add the beats_setup role:
curl -X POST "localhost:9200/_security/role/beats_setup?pretty" -u elastic:${ELASTIC_PASSWORD} -k -H 'Content-Type: application/json' -d'{
    "cluster" : [
      "monitor",
      "manage_ilm",
      "manage_ml",
      "manage_index_templates",
      "manage_ingest_pipelines",
      "manage_pipeline"
    ],
    "indices" : [
      {
        "names" : [
          "filebeat-*",
          "auditbeat-*",
          "heartbeat-*",
          "metricbeat-*",
          "packetbeat-*",
          "winlogbeat-*",
          "metricbeat*"
        ],
        "privileges" : [
          "read"
        ],
        "allow_restricted_indices" : false
      },
      {
        "names" : [
          "*"
        ],
        "privileges" : [
          "manage"
        ],
        "field_security" : {
          "grant" : [
            "*"
          ]
        },
        "allow_restricted_indices" : false
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : { },
    "transient_metadata" : {
      "enabled" : true
    }
  }'

#then, the beats_writer role
curl -X POST "localhost:9200/_security/role/beats_writer?pretty" -u elastic:${ELASTIC_PASSWORD} -k -H 'Content-Type: application/json' -d'{
    "cluster" : [
      "monitor",
      "read_ilm",
      "cluster:admin/ingest/pipeline/get",
      "cluster:admin/ingest/pipeline/put",
      "cluster:admin/ilm/put"
    ],
    "indices" : [
      {
        "names" : [
          "auditbeat-*",
          "filebeat-*",
          "heartbeat-*",
          "metricbeat-*",
          "packetbeat-*",
          "winlogbeat-*"
        ],
        "privileges" : [
          "create_doc",
          "create_index",
          "view_index_metadata"
        ],
        "allow_restricted_indices" : false
      }
    ],
    "applications" : [ ],
    "run_as" : [ ],
    "metadata" : { },
    "transient_metadata" : {
      "enabled" : true
    }
  }'

#and then I add the beats_setup and beats_writer users, with their roles:
curl -X POST "localhost:9200/_security/user/beats_setup?pretty" -u elastic:$ELASTIC_PASSWORD -k -H 'Content-Type: application/json' -d'{"password":"'$BEATSSETUP_PASS'","roles":["beats_setup","kibana_admin","ingest_admin","beats_admin"],"full_name":"","email":"","metadata":{},"enabled":true}}'

curl -X POST "localhost:9200/_security/user/beats_writer?pretty" -u elastic:$ELASTIC_PASSWORD -k -H 'Content-Type: application/json' -d'{"password":"'$BEATSWRITER_PASS'","roles":["beats_writer"],"full_name":"","email":"","metadata":{},"enabled":true}}'

If you compare, my setup role has four extra cluster privileges (manage_ml, manage_index_templates, manage_ingest_pipelines, manage_pipeline) plus the manage index privilege on * beyond what the documentation said was needed for the setup role, and the writer role additionally has the cluster privileges cluster:admin/ingest/pipeline/get, cluster:admin/ingest/pipeline/put and cluster:admin/ilm/put.

Without this configuration I kept getting different errors; the latest one I could still find in my conversations back on the Elastic Slack was:

Exiting: failed to check for policy name 'metricbeat': (status=403) {"error":{"root_cause":[{"type":"security_exception","reason":"action [cluster:admin/ilm/get] is unauthorized for user [beats_system]"}],"type":"security_exception","reason":"action [cluster:admin/ilm/get] is unauthorized for user [beats_system]"},"status":403}: 403 Forbidden: {"error":{"root_cause":[{"type":"security_exception","reason":"action [cluster:admin/ilm/get] is unauthorized for user [beats_system]"}],"type":"security_exception","reason":"action [cluster:admin/ilm/get] is unauthorized for user [beats_system]"},"status":403}

I reckon this exact snippet is from when I was trying to use the beats_system built-in user (I still fail to grasp why it's there, and what interest we'd have in knowing its password from elasticsearch-setup-passwords auto if we can't use it), but I was getting the same action [cluster:admin/ilm/get] is unauthorized for user when trying the beats_setup role/user as the docs describe it, without the additional cluster permissions.
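
While debugging this, the has_privileges API turned out to be handy for checking what a user can actually do. A sketch, using some of the privilege names from the errors above:

#ask, as beats_setup, whether the cluster privileges I kept tripping on are granted
curl -X GET "localhost:9200/_security/user/_has_privileges?pretty" \
  -u beats_setup:${BEATSSETUP_PASSWORD} -k -H 'Content-Type: application/json' -d'{
  "cluster" : [ "manage_ilm", "manage_index_templates", "manage_ingest_pipelines" ]
}'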

One other thing that bothers me is that the Beats images don't seem to expose their settings through docker-compose.yml environment variables the way Elasticsearch and Kibana do. I tried both styles:

#like elasticsearch
environment:
  - output.elasticsearch.username=beats_writer
  - output.elasticsearch.password=${BEATSWRITER_PASSWORD}

#and like Kibana
environment:
  - OUTPUT_ELASTICSEARCH_USERNAME=beats_writer
  - OUTPUT_ELASTICSEARCH_PASSWORD=${BEATSWRITER_PASSWORD}

But both of those attempts end in an authentication error on the Beat, with no authentication failure logged on Elasticsearch. I then tried using the variables inside the *beat.yml config files and got the same result... but when I hardcoded the passwords there, it worked.
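
The one route that did behave for me is -E on the container command line (it's what the setup run further down uses), so presumably the same would work for the runtime credentials too - untested beyond setup, so take it as a sketch:

#append -E flags to the container command instead of using environment:
docker run --rm --network=elastic_lab_elastic \
docker.elastic.co/beats/metricbeat:7.7.1 \
-E 'output.elasticsearch.hosts=["https://es1-01:9200"]' \
-E output.elasticsearch.ssl.verification_mode=none \
-E output.elasticsearch.username=beats_writer \
-E output.elasticsearch.password="${BEATSWRITER_PASSWORD}"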

I'm aware I should probably use the keystore for these settings, but I'm avoiding that for now.
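
(For the record, the keystore route I'm postponing would look roughly like this - ES_PWD is a placeholder name of my own, and in Docker the keystore would need to live on a volume to survive container recreation:)

#inside the beat container: create a keystore and add the password under ES_PWD,
#then reference it from metricbeat.yml as output.elasticsearch.password: "${ES_PWD}"
metricbeat keystore create
metricbeat keystore add ES_PWD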

Another thing that doesn't seem to work with the Docker images is the strict.perms: false setting. I tried it both in the beats .yml config files and as docker-compose.yml environment variables, and I still need the files owned by UID=0/root when running. In hindsight I suspect the config-file route can't work at all, since the permission check happens while that very config file is being loaded. It makes spinning up this lab kind of a pain: I have to chown, save, chown...
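
If that suspicion is right, the flag form should be the workaround - a sketch mirroring how the official Docker docs pass it:

#pass strict.perms as a command-line flag so it applies before the config
#file's ownership check
docker run --rm docker.elastic.co/beats/metricbeat:7.7.1 \
-e --strict.perms=false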

But I'm sidetracking, back to the environment...

After all those cURLs up there, I can finally Ctrl+D my way out of the temporary container... and start the 3 Elasticsearch nodes and Kibana with docker-compose up -d es1-01 es1-02 es1-03 kibana (they're all based on the official Elastic/Docker documentation with minor changes, including the docker-compose.yml). Then I get to run another two temp containers to set up the Beats:

export BEATSSETUP_PASSWORD="$(cat .env | grep "BEATSSETUP" | cut -d '=' -f 2)"

docker run --rm \
--network=elastic_lab_elastic \
docker.elastic.co/beats/metricbeat:7.7.1 setup \
-E setup.kibana.host=https://kibana:5601 \
-E setup.kibana.ssl.verification_mode=none \
-E setup.ilm.overwrite=true \
-E 'output.elasticsearch.hosts=["https://es1-01:9200"]' \
-E output.elasticsearch.ssl.verification_mode=none \
-E output.elasticsearch.username=beats_setup \
-E output.elasticsearch.password=$BEATSSETUP_PASSWORD

repeat for filebeat changing the docker image (edited for max post length)
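
To confirm the setup runs actually landed, you can check that, e.g., the ILM policy now exists (a stock API call; assumes port 9200 is published to the host):

#should return the metricbeat policy rather than a 404
curl -X GET "https://localhost:9200/_ilm/policy/metricbeat?pretty" \
  -u elastic:${ELASTIC_PASSWORD} -k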

Index setup finished. Dashboards loaded. And docker-compose up -d --no-deps metricbeat filebeat to a fully built Elastic Stack Docker environment, working end to end with Metricbeat and Filebeat coverage monitoring its status! Right?

No. Because of the problem with the *beat passwords in the config files, we have to chown user:user *beat.yml, save the generated BEATSWRITER_PASSWORD into them, then chown root:root *beat.yml, and only then run docker-compose up -d --no-deps --force-recreate filebeat metricbeat - spelled out as commands below. There we go now, ain't that a pretty dashboard?
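
With $USER standing in for whatever your host user is:

#take ownership to edit, paste the beats_writer password into the configs...
sudo chown "$USER:$USER" filebeat.yml metricbeat.yml
#...then hand them back to root so the beats will accept them, and recreate
sudo chown root:root filebeat.yml metricbeat.yml
docker-compose up -d --no-deps --force-recreate filebeat metricbeat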

I have so many questions that I don't really know where to start.

1- why do I need the additional, undocumented privileges for the beats_setup role/user?

2- actually, why can't beats_system, the built-in user, be used here? I'd understand for a real production setup; FWIW, I think the only credential elasticsearch-setup-passwords auto should create for production is elastic/superuser, and changing it should be forced on first GUI login - AFAIK there's already a bootstrap-check process for things like vm.max_map_count when you expose to non-loopback IPs, right? This could just be another item there.

3- why can I use environment variables for Elasticsearch and Kibana, but not for Beats?

I think there's more in the middle of all this, but I'm really beat (pun intended) by now and need to sleep. Thanks for reading, if you've gotten this far!
