How to make TLS and authorization configurable by environment variables?

Hi,

I use the security module for encryption and authorization on a public dev system in Kubernetes, as it will be in production later.
In my local development environment I have not set up TLS.

Is there any way to configure the elasticsearch output for ingesting via environment variables?

Here is my current attempt:

output
{
	if "${USE_ES_SSL}" == "true"
	{
		# use ssl with authorization
		elasticsearch
		{
			hosts 		=> ["${ES_HOST}:${ES_PORT}"]
			ssl 			=> "${USE_ES_SSL}"
			cacert		=> "${ES_CA_CERT_PATH}"

			# credentials are fetched from environment or logstash-keystore

			user			=> "${LOGSTASH_USER}"
			password	=> "${LOGSTASH_PASSWORD}"

			index			=> "%{[@metadata][indexName]}"
		}
	}
	else
	{
		# no ssl set, so do not use ssl and no authorization
		elasticsearch
		{
			hosts 		=> ["${ES_HOST}:${ES_PORT}"]
			index			=> "%{[@metadata][indexName]}"
		}
	}
}

Although I've set the env var USE_ES_SSL to false, Logstash complains if the environment variables ES_CA_CERT_PATH, LOGSTASH_USER and LOGSTASH_PASSWORD are not set.

For cacert, Logstash also checks whether the file exists, so I can't just set "dummy" as the value; it must at least point to an existing file.

If at all possible I'd like to use the same pipeline as on prod and just configure it via the environment.

If that is not possible, I'll need to change the output files depending on the runtime environment (dev / prod / local).

What is best practice for this use case?

Thanks, Andreas

See this. You cannot reference environment variables directly in conditionals.
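One common workaround is to copy the environment variable into an event field (ideally under `[@metadata]` so it is not indexed) and test that field in the conditional instead. A minimal sketch, assuming your `USE_ES_SSL` env var:

```
filter {
  mutate {
    # copy the env var into event metadata; ${VAR:default} falls back
    # to "false" when USE_ES_SSL is unset
    add_field => { "[@metadata][USE_ES_SSL]" => "${USE_ES_SSL:false}" }
  }
}

output {
  if [@metadata][USE_ES_SSL] == "true" {
    # TLS output here
  }
  else {
    # plain output here
  }
}
```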

Thanks, I tried changing it to the following:

filter
{
	mutate
	{
		add_field => { "[@metadata][USE_ES_SSL]" => "${USE_ES_SSL:false}" }
	}
}
output
{
	if [@metadata][USE_ES_SSL] == "true"
	{
		# use ssl with authorization
		elasticsearch
		{
			hosts 		=> ["${ES_HOST}:${ES_PORT}"]
			ssl 			=> "${USE_ES_SSL}"
			cacert		=> "${ES_CA_CERT_PATH}"

			# credentials are fetched from environment or logstash-keystore

			user			=> "${LOGSTASH_USER}"
			password	=> "${LOGSTASH_PASSWORD}"

			index			=> "%{[@metadata][indexName]}"
		}
	}
	else
	{
		# no ssl set, so do not use ssl and no authorization
		elasticsearch
		{
			hosts 		=> ["${ES_HOST}:${ES_PORT}"]
			index			=> "%{[@metadata][indexName]}"
		}
	}
}

But when I don't set ES_CA_CERT_PATH, LOGSTASH_USER and LOGSTASH_PASSWORD, I still get this error:

[2019-08-29T14:44:49,553][FATAL][logstash.runner          ] The given configuration is invalid. Reason: Cannot evaluate `${LOGSTASH_PASSWORD}`. Replacement variable `LOGSTASH_PASSWORD` is not defined in a Logstash secret store or as an Environment entry and there is no default value given.

I assume this is because the field value could change at runtime, so Logstash wants to check up front that all variables which may be needed are really there.

Or did I make a mistake when implementing your suggestion?

Yeah, Logstash establishes a connection to Elasticsearch during initialization; it does not wait to see whether the output is ever used.
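You can sidestep the fatal "Replacement variable … is not defined" error with Logstash's `${VAR:default}` syntax, which supplies a fallback when the variable is unset. A sketch (the `dummy` values are placeholders of my own choosing):

```
elasticsearch {
  # ${VAR:default} resolves to "dummy" when the env var is missing,
  # so config validation does not abort at startup
  user     => "${LOGSTASH_USER:dummy}"
  password => "${LOGSTASH_PASSWORD:dummy}"
}
```

Note that this does not help for `cacert`, since that default would still have to point to an existing file on disk.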

Then my next idea was this nasty workaround:

filter
{
	mutate
	{
		add_field => { "[@metadata][USE_ES_SSL]" => "${USE_ES_SSL:false}" }
		add_field => { "[@metadata][ES_CA_CERT_PATH]" => "${LS_HOME}/CONTRIBUTORS" }
		add_field => { "[@metadata][user]" => "${LOGSTASH_USER:dummy}" }
		add_field => { "[@metadata][password]" => "${LOGSTASH_PASSWORD:dummy}" }

	}
}

output
{
	if [@metadata][USE_ES_SSL] == "true"
	{

		# use ssl with authorization
		elasticsearch
		{
			hosts 		=> ["${ES_HOST}:${ES_PORT}"]
			ssl 			=> true
			cacert		=> "%{[@metadata][ES_CA_CERT_PATH]}"
			#cacert		=> "${LS_HOME}/CONTRIBUTORS"

			# credentials are fetched from environment or logstash-keystore

			user			=> "%{[@metadata][user]}"
			password	=>  "%{[@metadata][password]}"

			index			=> "%{[@metadata][indexName]}"
		}
	}
	else
	{
		# no ssl set, so do not use ssl and no authorization
		elasticsearch
		{
			ssl 			=> false
			hosts 		=> ["${ES_HOST}:${ES_PORT}"]
			index			=> "%{[@metadata][indexName]}"
		}
	}
}

My idea was to use dummy values as defaults, plus some unimportant file which is always deployed to serve as the CA, since the file's existence is checked.

It is throwing this error:

[2019-08-29T15:31:48,790][ERROR][logstash.outputs.elasticsearch] Invalid setting for elasticsearch output plugin:

  output {
    elasticsearch {
      # This setting must be a path
      # File does not exist or cannot be opened %{[@metadata][ES_CA_CERT_PATH]}
      cacert => "%{[@metadata][ES_CA_CERT_PATH]}"
      ...
    }
  }
[2019-08-29T15:31:48,825][FATAL][logstash.runner          ] The given configuration is invalid. Reason: Something is wrong with your configuration.
[2019-08-29T15:31:48,840][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

If I swap the cacert line with the commented-out one, it works. Is there any trick to use a field as a path?

The next step would be to set the ./CONTRIBUTORS file only as a default value.

Or do you have any other idea in mind for how to make TLS encryption optional?

An update from my side:

In -t (test mode) Logstash only checks whether the file exists. When I actually start it, Logstash reads the certificate at startup; if the file contains no certificate, it fails.

So using ./CONTRIBUTORS is not possible.

I am now evaluating pipeline-to-pipeline communication. I would ship every message to a commonOut queue; then pipelines.yml is the only file I need to change between local dev and prod.

Definitely not as smart as I'd like, but better than changing an output file for each pipeline.
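For reference, a sketch of what I mean, using Logstash's pipeline input/output plugins (the pipeline IDs, file names, and the commonOut address are my own choices, not anything official):

```
# pipelines.yml -- the only file that differs between environments
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/main.conf"
- pipeline.id: common-out
  path.config: "/usr/share/logstash/pipeline/out-ssl.conf"  # out-plain.conf locally

# main.conf: forward all events to the shared output pipeline
output {
  pipeline { send_to => ["commonOut"] }
}

# out-ssl.conf (or out-plain.conf): receive and ship to Elasticsearch
input {
  pipeline { address => "commonOut" }
}
```

That way each environment swaps only which output pipeline is wired up, while every processing pipeline stays identical.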

But better Ideas are still welcome :wink:


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.