Logstash in Docker Container to ES Cloud

Hi there, new so please go easy.

I have Logstash deployed in a Docker container and need it to send logs to our Elasticsearch Cloud account.

In the logstash config, I see where you can specify the output as such:

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

Do I just replace the host with the endpoint of the cloud Elasticsearch cluster for my account? If so, how do I specify the username and password as well?

Where I am getting confused is that the documentation says to use the logstash.yml file and to set the Cloud ID settings there. Do you have to do it in both places, or just one or the other?

Thanks for any/all help.

Jen

The settings in the logstash.yml file specify where monitoring data is sent if you have installed X-Pack and enabled monitoring. For the data flowing through a pipeline, you need to configure the username and password in the Elasticsearch output plugin itself.
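For example, the credentials go on the output block of your pipeline config. This is a minimal sketch with placeholder values; substitute your own cloud endpoint and credentials:

```
output {
  elasticsearch {
    # Cloud endpoint, not localhost; port 9243 is the usual HTTPS port on Elastic Cloud
    hosts    => ["your-cluster-id.us-west-1.aws.found.io:9243"]
    user     => "elastic"
    password => "your-password"
    ssl      => true
  }
}
```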

Yes, sorry, I do realize that. But we are deploying the container using docker-compose.yml. In that file we are using a 'secret' (instead of pointing to a config file), and the contents of the secret are below. Please see both the docker-compose file contents and the 'secret' contents below:

Docker-compose.yml
version: "3.3"
services:
  logstash:
    image: dtr.qcorpaa.aa.com/etds/logstash:1.0
    entrypoint: "logstash -f /etc/logstash/conf.d/inputjen.conf"
    ports:
      - 5000
    networks:
      - etds-logging-jen
    deploy:
      replicas: 1
      labels:
        com.docker.lb.hosts: logstashet2.ecaas.qcorpaa.aa.com
        com.docker.lb.network: etds-logging-jen
        com.docker.lb.port: 8095
        com.docker.lb.ssl_cert: ecaas-bundle-v2
        com.docker.lb.ssl_key: ecaas-key-v1
        com.docker.ucp.access.label: /orgs/etds
      restart_policy:
        condition: on-failure
        delay: 30s
        max_attempts: 10
        window: 300s
    environment:
      METADATA: proxy-handles-tls
    secrets:
      - source: jen
        target: /etc/logstash/conf.d/inputjen.conf

secrets:
  jen:
    external: true

networks:
  etds-logging-jen:
    driver: overlay

Our secret called 'jen':

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "(?<cf_logid>\d{3}) <(?<cf_pri>[0-9]{1,5})>1 (?<cf_time>[^ ]+) (?<cf_host>[^ ]+) (?<cf_msgid>[^ ]+) (?<cf_procid>[^ ]+) - - %{TIMESTAMP_ISO8601:app_timestamp} (?<cf_offset>[\d\s]{1,7}) %{DATA:app_loglevel} %{DATA:app_tranid} %{DATA:app_client_tranid} %{DATA:app_servername} %{DATA:app_version} %{DATA:app_recordLoc} %{DATA:app_className} - %{GREEDYDATA:app_message}" }
    }
    date {
      locale => "en"
      match => [ "logdate", "MMM dd yyyy HH:mm:ss", "MMM d yyyy HH:mm:ss", "ISO8601" ]
      target => "@timestamp"
    }
    mutate {
      convert => { "cf_offset" => "integer" }
      remove_field => ["app_timestamp"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["1663e7cf9c704d7b8768cb050b816993.us-west-1.aws.found.io:9243"]
    manage_template => false
    index => "filebeat-%{+YYYY.MM.dd}"
    user => "elastic"
    password => ""
    ssl => true
  }
  stdout { codec => rubydebug }
}

The question I have is where I can (or whether I even can) also use a logstash.yml file, as apparently I also need to specify that we are using X-Pack, such as:

xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: logstashpassword
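Since the pipeline config is already being delivered as a secret, one option would be to deliver logstash.yml the same way, as a second secret targeted at Logstash's settings path. This is a sketch, not our actual file; the secret name logstash-yml is made up for illustration:

```yaml
# Fragment of the logstash service in docker-compose.yml (assumed layout)
    secrets:
      - source: jen
        target: /etc/logstash/conf.d/inputjen.conf
      - source: logstash-yml
        target: /usr/share/logstash/config/logstash.yml

secrets:
  jen:
    external: true
  logstash-yml:
    external: true
```

The target path for logstash.yml depends on how Logstash was installed in the image, so it may need adjusting.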

Hopefully this makes sense. :frowning:

Jen

I was finally able to figure it out and get connected. We just needed the proxy settings.
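For anyone else hitting this: the Elasticsearch output plugin supports a proxy option. A minimal sketch, with a placeholder proxy URL, since the original settings were not posted:

```
output {
  elasticsearch {
    hosts    => ["1663e7cf9c704d7b8768cb050b816993.us-west-1.aws.found.io:9243"]
    user     => "elastic"
    password => "your-password"
    ssl      => true
    # Route traffic through the corporate forward proxy (placeholder address)
    proxy    => "http://proxy.example.com:3128"
  }
}
```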
