Self-hosted on AWS EC2

Hey all, this is part question and part tips from my Elastic experience on EC2. For background, I am running two servers: one with Elasticsearch on an m6gd.large, and the other running Kibana on an m6gd.medium. This is a single node for my use case, as at this point I am still experimenting and cost conscious. Later I could add more machines to the target groups if I need more compute.

The goal: set up Beats on each of my internal servers, with the ability to monitor a few dozen external servers either in another VPC or on-prem elsewhere.

The architecture: I have a three-tier VPC: public, private (with NAT gateways), and secure. Both servers are in the private zone, with an Application Load Balancer (ALB) in the public zone. The secure database zone is not used here.

I have a bunch of other servers on the account, all running behind one ALB, with different listener rules that resolve to specific Target Groups containing the specific server needed. As the ALB is set up with a valid ACM certificate, TLS is already handled there. Every other server is hidden behind the Cognito auth service, which provides our users with a token to access them; all except the Kibana and Elasticsearch servers, which are open to the public with X-Pack security enabled for their auth.

Route 53 subdomain -> ALB 443 -> Target Group HTTPS to 9200 (Elasticsearch) and 5601 (Kibana) -> on to the box over TLS, because it seems to need internally signed TLS to work.

To get to this point, it seems you are required to set up certificates internally on both machines to get them to talk to each other when using X-Pack security. This also means you need to use HTTPS on the target groups so the ALB will communicate with them.

The problems: The whole certificate thing is a journey, and I am struggling to get my head around where I need to configure it, and what the minimum requirement is to get it working with Beats and Elastic Agent. The agents seem to refuse to work if there is no TLS configured on the box; but I have TLS on the ALB, so I don't really need it or its headaches, yet I can't get past it. I've tried setting certificates and paths to them in each Beat's config file, and I am missing something.

I ended up giving up on my first install as my whole stack ground to a halt and queries took three or four minutes to complete in the Hosts or Network dashboard. I'm on a fresh install now and trying to solve this efficiently, preferably with a script that installs all my Beats, which I will probably build later once I get these two machines configured properly.

Question: How can I easily configure all the Beats and Elastic Agents, and do I need certs copied onto every server I want to collect from? What is actually needed here? I tried to follow the documentation, but it's hard to read and doesn't really give a clear answer for using an ALB intermediary, as far as I could tell. Certificates... gah.
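For anyone following along, my current understanding (not fully verified) is that the monitored servers only need to trust whatever certificate terminates their connection: if a Beat connects through the ALB, the ACM certificate should validate against the normal system trust store; if it connects directly to a node's self-signed certificate, only the internal CA file needs to be copied over, not the node keys. A minimal output sketch under that assumption (the paths and the password variable are illustrative):

```yaml
# Sketch of a minimal Beat output for a monitored server.
# Assumes only the internal CA cert (ca.crt) is copied to this host;
# node keys and certs stay on the Elasticsearch server.
output.elasticsearch:
  hosts: ["https://elastic.mydomain.com.au:443"]
  username: "elastic"
  password: "${ES_PASSWORD}"
  ssl:
    enabled: true
    # Verify the chain instead of disabling verification.
    verification_mode: full
    # Only needed when talking to the self-signed node directly;
    # through the ALB, the ACM chain validates via the system store.
    certificate_authorities: ["/etc/metricbeat/certs/ca.crt"]
```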

Feedback: If you have any clear improvement points or suggestions, please drop a comment so others can also learn from this. If you have questions about how I did it or want more specifics, just ask here too, and I'll try to respond.

Elasticsearch server elasticsearch.yml

    node.name: node1
    xpack:
      security:
        enabled: true
        http:
          ssl:
            enabled: true
            verification_mode: none
            key: certs/node1.key
            certificate: certs/node1.crt
            certificate_authorities: certs/ca.crt
        transport:
          ssl:
            enabled: true
            key: certs/node1.key
            certificate: certs/node1.crt
            certificate_authorities: certs/ca.crt
    # seed_hosts takes transport addresses (port 9300), not HTTPS URLs
    discovery.seed_hosts: [ "127.0.0.1:9300" ]
    cluster:
      name: myname
      initial_master_nodes: [ "node1" ]
    path.data: /var/lib/elasticsearch
    path.logs: /var/log/elasticsearch
    bootstrap.memory_lock: true
    network.host: 0.0.0.0
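
Side note while I'm here: since this is a single node, my reading of the docs is that the whole discovery section can be replaced with single-node mode (sketch, assuming no other nodes will ever join this cluster):

```yaml
# Single-node alternative (sketch): replaces discovery.seed_hosts and
# cluster.initial_master_nodes for a one-node cluster.
discovery.type: single-node
```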

Elasticsearch server metricbeat.yml

    metricbeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: true

    setup.template.settings:
      index.number_of_shards: 1
      index.codec: best_compression

    name: Elasticsearch-Metricbeat
    tags: ["tag", "Internal","Elasticsearch"]

    setup.dashboards.enabled: true
    setup.kibana:
      host: "https://kibana.mydomain.com.au:443"

    output.elasticsearch:
      hosts: ["https://elastic.mydomain.com.au:443"]
      protocol: "https"
      pipeline: "geoip"
      username: "elastic"
      password: "mypassword"
      ssl.enabled: true
      ssl.verification_mode: "none"
      ssl.client_authentication: none
    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~
      - add_docker_metadata: ~
      - add_kubernetes_metadata: ~

    logging.level: debug
    logging.selectors: ["*"]
    monitoring.enabled: true
    monitoring.elasticsearch: ~
    instrumentation:
      enabled: true

Kibana server kibana.yml

    server.host: "0.0.0.0"
    server.publicBaseUrl: "https://kibana.mydomain.com.au:443"
    server.name: "myname"
    elasticsearch.hosts: ["https://elastic.mydomain.com.au:443"]
    kibana.index: ".kibana"
    kibana.defaultAppId: "home"
    elasticsearch.username: "kibana_system"
    elasticsearch.password: "mypassword"
    server.ssl.enabled: true
    server.ssl.certificate: /etc/kibana/certs/kibana/kibana.crt
    server.ssl.key: /etc/kibana/certs/kibana/kibana.key

    xpack.security.encryptionKey: mylongkey
    xpack.security.session.idleTimeout: "1h"
    xpack.security.session.lifespan: "30d"
    xpack.encryptedSavedObjects.encryptionKey: myotherlongkey
    xpack.reporting.encryptionKey: mythirdlongkey

Also, does anyone have any idea if it's possible to use the EC2 instance profile to gather data out of AWS, rather than having to make access keys etc.? Why can't Elasticsearch/Beats just use the attached profile, or if it can, what config does it actually need here?
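From what I can tell so far (hedged, I have not verified this end to end), Metricbeat's aws module goes through the AWS SDK default credential chain, so leaving out the key fields entirely should fall back to the EC2 instance profile:

```yaml
# modules.d/aws.yml (sketch) -- assumes the instance profile attached
# to this EC2 host has the CloudWatch/EC2 read permissions the module
# needs.
- module: aws
  period: 300s
  metricsets:
    - ec2
  # No access_key_id / secret_access_key / credential_profile_name:
  # the AWS SDK default credential chain should fall back to the
  # attached instance profile automatically.
```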

Can you define "refuse to work" more clearly? Is there an error message? If so, what is it?

More generally, there are lots of moving parts in your setup, and I get the impression from your post that none of it is working right now, but it's really hard to help if you don't share details of the errors you're encountering. There are millions of ways for things not to work, each with its own remedy, and it's best to work through the problems one by one.

I'm also pretty sure that Elasticsearch itself doesn't require any particular kind of certificate, or even require HTTPS at all (although it's strongly recommended). By "internally signed" do you mean self-signed? If so, no, certificates from an external CA are also fine.

Hi David, I appreciate you taking the time to respond. AWS is my strong suit, and I am only recently trying to get Elasticsearch, Kibana, and Beats to work for me. Apologies if I have misunderstood something here.

After grinding away at this issue over the last few days, I set some SSL verification settings to none (and maybe something else too), and I have had some success with my stack working. Sorry, but I don't have the error messages/stack traces at hand anymore. I do recall a message, from Fleet manager I think, about Elastic Agent not loading without TLS, but not really any more specifics than that. I think it was a message from Elastic Agent, perhaps Endpoint Security(?), that said it required transport security.

Yes, sorry, I do mean self-signed. Elastic Agent and Beats were not playing nice with certs; they complained about my ACM certificate, which is signed by Amazon. I also set this to no verification.

It takes a lot of effort to configure certificates to make Beats and Elastic Agent work, and I get why that is the case, but it's frustrating to configure all of that just so Beats and Elastic Agent are allowed to communicate, and then invalidate all the checks they're supposed to do so that they actually connect and pass data. It could also be that I simply can't follow instructions, although I feel like I read the docs thoroughly during my setup process.

OK, I could believe that. HTTPS is definitely recommended, and I could believe it's mandatory for things like Endpoint Security. Still, any reasonable certificate should work fine; there's no need for it to be self-signed or anything, and none verification is really only for testing and should be avoided in production.

I believe that, since I have a valid certificate on my AWS Application Load Balancer and the client connects to it first, all the certificates configured on the hosts themselves are basically theater. I'm running in a fairly secure VPC, there will not be any public traffic there, and at this point I am not concerned that anyone who shouldn't could listen to that traffic. Again, part of this post is info for anyone else trying to go down this path, and partly for me to validate that I am on something approximating the right path. Thanks for your help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.