Warning 'Relying on default value of `pipeline.ecs_compatibility`' after updating to Logstash 7.16.1

Hello,
I just updated my Logstash to 7.16.1, but I noticed errors in my conf when starting Logstash.
The same Logstash conf works well in version 7.9.1.
How do I fix this warning?
Has anyone else experienced this?
I can't find anything relevant on Google.

[2021-12-15T09:56:00,608][INFO ][org.logstash.ackedqueue.QueueUpgrade] No PQ version file found, upgrading to PQ v2.
[2021-12-15T09:56:00,689][WARN ][deprecation.logstash.codecs.json] Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2021-12-15T09:56:00,770][WARN ][deprecation.logstash.filters.clone] Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2021-12-15T09:56:00,771][WARN ][deprecation.logstash.inputs.http] Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2021-12-15T09:56:00,787][WARN ][deprecation.logstash.codecs.plain] Relying on default value of pipeline.ecs_compatibility, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.


Set the ecs_compatibility option on the codec so that you explicitly choose the mode rather than relying on a default that will change in a future release. Similarly for the inputs and filters.

When the option isn't specified for an individual plugin, it checks the pipeline-level setting. When the pipeline-level setting is also not specified, you get this warning (not an error).
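
For example, a minimal sketch of declaring the mode explicitly on an input and its codec (the http input and port here are just placeholders mirroring the plugins in your log; disabled keeps the current, pre-ECS field behaviour):

input {
  http {
    port => 8080                                      # placeholder port
    ecs_compatibility => disabled                     # explicit mode on the input
    codec => json { ecs_compatibility => disabled }   # explicit mode on its codec
  }
}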

If you want to lock in the current behaviour for all plugins in a pipeline, add pipeline.ecs_compatibility: disabled to its definition in your config/pipelines.yml. If you want to do so globally, for all pipelines, add pipeline.ecs_compatibility: disabled to your config/logstash.yml.
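
A hedged sketch of both placements (the pipeline id and config path below are hypothetical):

# config/pipelines.yml -- per pipeline
- pipeline.id: my-pipeline                     # hypothetical id
  path.config: "/etc/logstash/conf.d/*.conf"   # hypothetical path
  pipeline.ecs_compatibility: disabled

# config/logstash.yml -- global default for every pipeline
pipeline.ecs_compatibility: disabled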


Thank you for your answer... but I noticed another warning: "gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key"... I did not have this type of alert on version 7.9.2.
I am using the Logstash Docker image 7.16.1.

[2021-12-15T20:06:53,819][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2021-12-15T20:06:57,241][INFO ][org.reflections.Reflections] Reflections took 509 ms to scan 1 urls, producing 119 keys and 417 values
[2021-12-15T20:07:08,512][INFO ][logstash.outputs.Elasticsearch][log_output] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>["//localhost:9200"]}
[2021-12-15T20:07:13,258][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_kaas] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2021-12-15T20:07:13,270][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_matchportail] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2021-12-15T20:07:13,268][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_kaas] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2021-12-15T20:07:13,288][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_kaas] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2021-12-15T20:07:13,289][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_kaas] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2021-12-15T20:07:13,295][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_xray] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2021-12-15T20:07:13,297][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_xray] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: send_to. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[2021-12-15T20:07:13,297][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][log_xray] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: se

See Warning [org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] after updating to 7.16.1 - #2 by yaauie

Hi @yaauie,

I've set pipeline.ecs_compatibility: disabled in my config/logstash.yml file, which made some of the warnings go away; however, I still see warnings coming from the codecs (json, plain):

[2021-12-26T11:20:47,500][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2021-12-26T11:20:47,505][WARN ][deprecation.logstash.codecs.json] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.

I would be glad to help you chase that down. What does your pipeline definition look like (especially your input and output sections, since that is where codecs live)? And specifically, which version of Logstash (bin/logstash --version) and which versions of the plugins (bin/logstash-plugin list --verbose) are you running? It is possible that one or more plugins instantiate their own codec outside of the normal flow.
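
For reference, a quick way to collect that information from the Logstash install directory (the grep filter is only an illustration):

bin/logstash --version
bin/logstash-plugin list --verbose
# optionally narrow the plugin list to the ones in question, e.g.:
bin/logstash-plugin list --verbose | grep -E 'codec-json|codec-plain|input-http'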

Hi @yaauie, I have also recently upgraded to 7.16.1 from version 7.4.2, and initially I was getting this error.

"status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}

Then I renamed the host field in a mutate filter like this:

mutate {
    rename => { "host" => "[host][name]" }
}

And then I was getting this.

[WARN ][deprecation.logstash.codecs.jsonlines][vasapi_stream][c11de3d22d41883d169599d7dbe4237718d4bdb77409a8fec4f02c1c5d71d1cc] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.

I have set pipeline.ecs_compatibility: disabled in the logstash.yml file, which makes the warning go away, but the Logstash pipeline is stuck: I cannot see any new logs after the following lines. The ECS warnings used to appear right after these lines.

bidos-logstash_1  | [2021-12-27T02:58:49,492][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
bidos-logstash_1  | [2021-12-27T02:58:49,665][INFO ][logstash.javapipeline    ][vasapi_stream] Pipeline Java execution initialization time {"seconds"=>0.75}
bidos-logstash_1  | [2021-12-27T02:58:49,838][INFO ][logstash.javapipeline    ][vasapi_stream] Pipeline started {"pipeline.id"=>"vasapi_stream"}
bidos-logstash_1  | [2021-12-27T02:58:49,858][INFO ][logstash.inputs.tcp      ][vasapi_stream][b14eb8d9e41a7b7c6a22f1d12a127fca5d104a428505d67aee0732dd3d544c42] Starting tcp input listener {:address=>"0.0.0.0:5959", :ssl_enable=>false}
bidos-logstash_1  | [2021-12-27T02:58:49,957][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :vasapi_stream], :non_running_pipelines=>[]}

Can you please help? My entire Logstash pipeline conf looks like this:

input {
    tcp {
        port => "${VASAPI_TCP_PORT}"
        codec => json_lines
    }
}
filter {
    if ![applicationid] { drop {} }
    # extract duration of finished tasks
    grok {
        match => { "message" => "Finished task %{GREEDYDATA} in %{INT:task_duration_ms:int} ms%{GREEDYDATA}" }
    }
    mutate {
        rename => { "host" => "[host][name]" }
    }
}
output {
    elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        data_stream => "true"
        data_stream_type => "logs"
        data_stream_dataset => "${VASAPI_STREAM_INDEX}"
        data_stream_namespace => "default"
        action => "create"
        user => "${ELASTICSEARCH_INSERT_USERNAME}"
        password => "${ELASTICSEARCH_INSERT_PASSWORD}"
        ssl => "true"
        ssl_certificate_verification => "true"
        cacert => "${LOGSTASH_BASE}/certificates/BOSCH-CA-DE_pem.cer"
    }
}

Does this add anything to the other thread about this here?

Hi @Badger,
one extra detail: after making the changes below in the input plugin:

input {
    tcp {
        port => "${VASAPI_TCP_PORT}"
        ecs_compatibility => v1
        codec => json_lines {
            ecs_compatibility => v1
        }
    }
}

I am getting these messages repeatedly, and Logstash is not writing any data to the data streams.

[INFO ][logstash.codecs.jsonlines][vasapi_stream] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
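
The message itself points at the remedy; a hedged sketch of giving the codec a target, assuming the bundled json_lines codec supports the target option (the field name [api] is only a placeholder):

input {
    tcp {
        port => "${VASAPI_TCP_PORT}"
        ecs_compatibility => v1
        codec => json_lines {
            ecs_compatibility => v1
            target => "[api]"    # placeholder; pick a field that will not clash with ECS
        }
    }
}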

@yaauie It seems many people are experiencing this issue with different codecs, and these log messages are generated in very large quantities. Do you think there is a way to solve this?


experiencing this issue

@Yuval_Yogev what issue, exactly?

When a relevant plugin is instantiated with ecs_compatibility specified, or its pipeline's pipeline.ecs_compatibility is specified (either for the pipeline's entry in pipelines.yml or globally in logstash.yml), the deprecation warning is not logged. The deprecation warning lets you know that the default value is going to change in a future release of Logstash, and it is only emitted by a plugin that is relying on the default.

If this is not the case, then we need details about which codecs, under which inputs/outputs, the warning occurs for, and specifics about how Logstash is being configured.

I am also experiencing the same issue of a large volume of ECS compatibility deprecation messages with my current setup of Logstash 7.16.1, using the Docker image from Bitnami.

input {
  beats {
    port => 5044
    ecs_compatibility => "v8"
    ssl => true
    ssl_certificate_authorities => ["[ca_cert_location]/logstashRootCA.cert.pem"]
    ssl_certificate => "[k8s_secret_location]/tls.crt"
    ssl_key => "[k8s_secret_location]/tls.key"
    ssl_verify_mode => "none"
    tls_min_version => 1.2
    tags => ["beats"]
  }
  tcp {
    port => 5054
    ecs_compatibility => "v8"
    codec => json {
      ecs_compatibility => "v8"
    }
    ssl_enable => true
    ssl_certificate_authorities => ["[ca_cert_location]/logstashRootCA.cert.pem"]
    ssl_cert => "[k8s_secret_location]/tls.crt"
    ssl_key => "[k8s_secret_location]/tls.key"
    ssl_verify => false
    tags => ["kong-tcp"]
  }
}

The above is the content of the logstash.conf deployed as a ConfigMap. Before setting the compatibility to v8, I tried configuring the plugins as "disabled", but no matter which value I tried, both inputs generated the deprecation message. I have not modified the logstash.yml that comes with the image and am wondering whether it is interfering with the plugin-level settings.

@yaauie For my setup, Logstash is version 7.16.1 and the Beats and TCP input plugin versions are 6.2.3. The following is the latest logstash.conf, which still generates the excessive deprecation warning logs. I am assuming that setting the compatibility at the plugin level should override the settings in pipelines.yml or logstash.yml; is that right?

input {
  beats {
    port => 5044
    ecs_compatibility => disabled
    ssl => true
    ssl_certificate_authorities => ["/<root CA cert location>/logstashRootCA.cert.pem"]
    ssl_certificate => "/<cert location>/tls.crt"
    ssl_key => "/<cert location>/logstashServer-pkcs8.key"
    ssl_verify_mode => "full"
    tls_min_version => 1.2
    tags => ["beats"]
  }
  tcp {
    port => 5054
    ecs_compatibility => disabled
    codec => json {
      ecs_compatibility => disabled
    }
    ssl_enable => true
    ssl_certificate_authorities => ["/<root CA cert location>/logstashRootCA.cert.pem"]
    ssl_cert => "/<cert location>/tls.crt"
    ssl_key => "/<cert location>/tls.key"
    ssl_verify => false
    tags => ["tcp"]
  }
}
filter {
}
output {
  if "beats" in [tags] {
    azure_loganalytics {
      customer_id => "${LOG_ANALYTICS_CUST_ID}"
      shared_key => "${LOG_ANALYTICS_KEY}"
      log_type => "uatonpremapplog"
      ecs_compatibility => disabled
    }
  } else if "tcp" in [tags] {
    azure_loganalytics {
      customer_id => "${LOG_ANALYTICS_CUST_ID}"
      shared_key => "${LOG_ANALYTICS_KEY}"
      log_type => "uatapilog"
      ecs_compatibility => disabled
    }
  }
}

Have you tried setting pipeline.ecs_compatibility: disabled for the entire pipeline? If so, are you still getting the deprecation warnings?

I'll have to look into it, but I believe the problem may be with the beats input, which has a codec but doesn't use it, or possibly with another pipeline, or other confs running in the same pipeline. Are you running multiple pipelines (e.g., pipelines.yml or Kibana's Central Management for Logstash), or do you have Monitoring enabled?

No, I have not tried setting it for the whole pipeline, as I am using the default pipeline and logstash.yml from Bitnami. However, I have tried setting the environment variable PIPELINE_ECS_COMPATIBILITY in the values YAML file for the Helm deployment, and that did not solve the issue either.

I will need to look into Bitnami's base image for the other pieces of information. I believe that, the last time I checked, there was only one pipeline and ecs_compatibility was not configured in logstash.yml.

@yaauie May I ask if you have any findings regarding the beats input codec? Currently, my setup does not run multiple pipelines, and I have tried setting the environment variable in the Helm values YAML file, but to no avail.

Using Logstash 7.17.0 and 8.0.0, I am seeing that the default plain codec inside the beats input is correctly able to retrieve the value of ecs_compatibility from the pipeline's settings, and only emits this deprecation warning when the setting is not provided in any of the following:

  • in the codec's definition (codec => plain { ecs_compatibility => disabled })
  • in the pipelines.yml,
  • in the logstash.yml,
  • via command-line flag (--pipeline.ecs_compatibility=v8, which effectively sets the global value as if it were provided in the logstash.yml).

I see that the env2yaml.go from 7.17.0 also correctly defines the pipeline.ecs_compatibility setting, so the docker bridge should handle the environment variable PIPELINE_ECS_COMPATIBILITY=disabled.
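
As a hedged illustration with the official image (the tag and volume mount below are placeholders), the environment variable route looks like:

docker run --rm \
  -e PIPELINE_ECS_COMPATIBILITY=disabled \
  -v "$PWD/pipeline/:/usr/share/logstash/pipeline/" \
  docker.elastic.co/logstash/logstash:7.17.0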

Can you provide the relevant section of your helm values yaml? If you're using the Elastic helm charts as a base, you should likely be using the logstashConfig/logstash.yml key to provide a logstash yaml instead of routing through the docker environment variable bridge.
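
If you do go that route, a minimal sketch of the values file (this assumes the Elastic chart's logstashConfig layout, not Bitnami's):

# values.yaml (assumed Elastic chart layout)
logstashConfig:
  logstash.yml: |
    pipeline.ecs_compatibility: disabled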

My apologies for the late reply. We are using the Logstash 7.16.3 image and the Helm chart from Bitnami.

Regarding the plain codec for the Beats input, if I already have codec => json configured, will adding codec => plain cause any issue? Unfortunately, I am not familiar with configuring the pipelines.yml/logstash.yml files in the Helm files, nor with building them into the image.