Elasticsearch: JNA native support library not loading

So after I ran the git clone and updated the yml files with the correct Artifactory location, I get the results below when I run the master deployment.

Any ideas? :blush: Let me know, thanks.

[lewellf_adm@awva-pclif03001 elasticsearch]$ kubectl get pods -n monitoring-tools
NAME READY STATUS RESTARTS AGE
app-container-monitor-gqvhh 1/1 Running 0 21d
app-container-monitor-ssn2t 1/1 Running 0 21d
app-container-monitor-vkc9n 1/1 Running 0 14d
cluster-performance-prometheus-58796d7b87-vm5gz 1/1 Running 0 21d
clusterinfo-84978b7656-rzldq 1/1 Running 0 21d
container-monitor-796cf5c6f5-7p8px 1/1 Running 0 21d
es-master-0 0/1 CrashLoopBackOff 3 3m12s
es-master-1 0/1 CrashLoopBackOff 3 3m7s
es-master-2 0/1 CrashLoopBackOff 3 3m3s
fluentd-cloudwatch-b2z8l 1/1 Running 0 159d
fluentd-cloudwatch-rtdrm 1/1 Running 0 159d
fluentd-cloudwatch-srsgb 1/1 Running 1 159d
grafana-f659459cd-9dplv 1/1 Running 0 91d
prometheus-server-599cb9f96-65c82 2/2 Running 0 89d

Then I run it again and it shows something different:
[lewellf_adm@awva-pclif03001 elasticsearch]$ kubectl get pods -n monitoring-tools
NAME READY STATUS RESTARTS AGE
app-container-monitor-gqvhh 1/1 Running 0 21d
app-container-monitor-ssn2t 1/1 Running 0 21d
app-container-monitor-vkc9n 1/1 Running 0 14d
cluster-performance-prometheus-58796d7b87-vm5gz 1/1 Running 0 21d
clusterinfo-84978b7656-rzldq 1/1 Running 0 21d
container-monitor-796cf5c6f5-7p8px 1/1 Running 0 21d
es-master-0 1/1 Running 4 4m5s
es-master-1 1/1 Running 4 4m
es-master-2 1/1 Running 4 3m56s
fluentd-cloudwatch-b2z8l 1/1 Running 0 159d
fluentd-cloudwatch-rtdrm 1/1 Running 0 159d
fluentd-cloudwatch-srsgb 1/1 Running 1 159d
grafana-f659459cd-9dplv 1/1 Running 0 91d
prometheus-server-599cb9f96-65c82 2/2 Running 0 89d

And on a third run:
[lewellf_adm@awva-pclif03001 elasticsearch]$ kubectl get pods -n monitoring-tools
NAME READY STATUS RESTARTS AGE
app-container-monitor-gqvhh 1/1 Running 0 21d
app-container-monitor-ssn2t 1/1 Running 0 21d
app-container-monitor-vkc9n 1/1 Running 0 14d
cluster-performance-prometheus-58796d7b87-vm5gz 1/1 Running 0 21d
clusterinfo-84978b7656-rzldq 1/1 Running 0 21d
container-monitor-796cf5c6f5-7p8px 1/1 Running 0 21d
es-master-0 0/1 CrashLoopBackOff 4 4m19s
es-master-1 1/1 Running 4 4m14s
es-master-2 0/1 CrashLoopBackOff 4 4m10s
fluentd-cloudwatch-b2z8l 1/1 Running 0 159d
fluentd-cloudwatch-rtdrm 1/1 Running 0 159d
fluentd-cloudwatch-srsgb 1/1 Running 1 159d
grafana-f659459cd-9dplv 1/1 Running 0 91d
prometheus-server-599cb9f96-65c82 2/2 Running 0 89d
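
For reference, the log excerpts below were taken straight from the crashing pod; the exact command is my assumption, but something along these lines:

  kubectl logs es-master-0 -n monitoring-tools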

Here is what I see in the log for es-master-0:

[2020-08-18T19:52:41,646][WARN ][o.e.b.Natives ] [es-master-0] unable to load JNA native support library, native methods will be disabled.
java.lang.UnsatisfiedLinkError: /tmp/elasticsearch-2730565356642515375/jna--1985354563/jna9538996240127811702.tmp: /tmp/elasticsearch-2730565356642515375/jna--1985354563/jna9538996240127811702.tmp: cannot open shared object file: Operation not permitted
at java.lang.ClassLoader$NativeLibrary.load0(Native Method) ~[?:?]
at java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2452) ~[?:?]
at java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2508) ~[?:?]
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2704) ~[?:?]
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:2637) ~[?:?]
at java.lang.Runtime.load0(Runtime.java:745) ~[?:?]
at java.lang.System.load(System.java:1871) ~[?:?]
at com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:947) ~[jna-4.5.1.jar:4.5.1 (b0)]
at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:922) ~[jna-4.5.1.jar:4.5.1 (b0)]
at com.sun.jna.Native.<clinit>(Native.java:190) ~[jna-4.5.1.jar:4.5.1 (b0)]
at java.lang.Class.forName0(Native Method) ~[?:?]
at java.lang.Class.forName(Class.java:340) ~[?:?]
at org.elasticsearch.bootstrap.Natives.<clinit>(Natives.java:45) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:110) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:178) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) [elasticsearch-cli-7.8.0.jar:7.8.0]
at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) [elasticsearch-7.8.0.jar:7.8.0]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-7.8.0.jar:7.8.0]

And further down in the same log:

[2020-08-18T19:52:59,516][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [es-master-0] PerformanceAnalyzer Enabled: false
[2020-08-18T19:52:59,707][INFO ][o.e.n.Node ] [es-master-0] initialized
[2020-08-18T19:52:59,708][INFO ][o.e.n.Node ] [es-master-0] starting ...
[2020-08-18T19:52:59,911][INFO ][o.e.t.TransportService ] [es-master-0] publish_address {100.64.19.202:9300}, bound_addresses {0.0.0.0:9300}
[2020-08-18T19:53:00,309][INFO ][o.e.b.BootstrapChecks ] [es-master-0] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/logs.log
[2020-08-18T19:53:00,315][INFO ][c.a.o.s.a.s.SinkProvider ] [es-master-0] Closing InternalESSink
[2020-08-18T19:53:00,315][INFO ][o.e.n.Node ] [es-master-0] stopping ...
[2020-08-18T19:53:00,315][INFO ][c.a.o.s.a.s.SinkProvider ] [es-master-0] Closing DebugSink
[2020-08-18T19:53:00,327][INFO ][o.e.n.Node ] [es-master-0] stopped
[2020-08-18T19:53:00,327][INFO ][o.e.n.Node ] [es-master-0] closing ...
[2020-08-18T19:53:00,389][INFO ][o.e.n.Node ] [es-master-0] closed
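
For what it's worth, the UnsatisfiedLinkError above is the usual symptom of /tmp being mounted noexec inside the container: JNA extracts its native library into a temp directory and then has to load it from there, which "Operation not permitted" blocks. The failed bootstrap check is the system call filter one called out in the log. Below is a rough sketch of the knobs involved, assuming the es-master pod template can be edited; the tmp path and the volume name are made up for illustration.

# Pod template fragment (sketch): give Elasticsearch a temp directory that is not on a noexec mount.
# ES_TMPDIR is honored by the standard Elasticsearch startup scripts, and jna.tmpdir is the
# JNA system property for the same thing.
spec:
  containers:
    - name: elasticsearch
      env:
        - name: ES_TMPDIR
          value: /usr/share/elasticsearch/tmp
        - name: ES_JAVA_OPTS
          value: "-Djna.tmpdir=/usr/share/elasticsearch/tmp"
      volumeMounts:
        - name: es-tmp
          mountPath: /usr/share/elasticsearch/tmp
  volumes:
    - name: es-tmp
      emptyDir: {}

For the bootstrap check, the log itself offers the "at your own risk" option of disabling the filter in elasticsearch.yml; the proper fix is a node/runtime setup that allows the seccomp filter to be installed.

# elasticsearch.yml, only if the underlying seccomp support cannot be fixed
bootstrap.system_call_filter: false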

Any help appreciated!!!

FREDDIE2020

What exactly did you clone? What does your pod configuration look like?

@Christian_Dahlqvist

I used git clone to download my Elasticsearch package to my jump server, and it unpacks my YAML files for deployment.

Here's the config I have set up:

# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0

apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch
  namespace: monitoring-tools
  labels:
    app: elasticsearch
data:
elasticsearch.yml: |-
cluster:
  name: ${CLUSTER_NAME}

node:
  master: ${NODE_MASTER}
  data: ${NODE_DATA}
  name: ${NODE_NAME}
  ingest: ${NODE_INGEST}
  max_local_storage_nodes: 1
  attr.box_type: hot

processors: ${PROCESSORS:1}

network.host: ${NETWORK_HOST}

path:
  data: /usr/share/elasticsearch/data
  logs: /usr/share/elasticsearch/logs

http:
  compression: true

discovery:
  zen:
    ping.unicast.hosts: ${DISCOVERY_SERVICE}
    minimum_master_nodes: ${NUMBER_OF_MASTERS}
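# Side note, for illustration only: on 7.x, discovery.zen.ping.unicast.hosts is a deprecated
# alias and minimum_master_nodes is no longer used. The 7.x-style equivalents would be roughly
# (master node names taken from the pods above):
#   discovery.seed_hosts: ${DISCOVERY_SERVICE}
#   cluster.initial_master_nodes: [es-master-0, es-master-1, es-master-2]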

# TLS Configuration Transport Layer
opendistro_security.ssl.transport.pemcert_filepath: elk-crt.pem
opendistro_security.ssl.transport.pemkey_filepath: elk-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-root-ca.pem
opendistro_security.ssl.transport.truststore_filepath: tfs-root-truststore.jks
#opendistro_security.ssl.transport.keystore_filepath: tfs-root-truststore.jks
#opendistro_security.ssl.transport.pemkey_password: ${TRANSPORT_TLS_PEM_PASS}
opendistro_security.ssl.transport.enforce_hostname_verification: false

# TLS Configuration REST Layer
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: elk-crt.pem
opendistro_security.ssl.http.pemkey_filepath: elk-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: elk-root-ca.pem
opendistro_security.ssl.http.truststore_filepath: tfs-root-truststore.jks
#opendistro_security.ssl.http.pemkey_password: ${HTTP_TLS_PEM_PASS}

# Demo Certificate Option Disabled
opendistro_security.allow_unsafe_democertificates: false

opendistro_security.allow_default_init_securityindex: false

opendistro_security.authcz.admin_dn:
  - 'EMAILADDRESS=tfs_dne_cloud@internal.toyota.com,CN=logadmin-dev.tfs.toyota.com,OU=TFS/IDS,O=Toyota Motor Credit Corporation,L=Plano,ST=Texas,C=US'

opendistro_security.nodes_dn:
  - 'CN=log-es-dev.tfs.toyota.com,OU=TFS/IDS,O=Toyota Motor Credit Corporation,L=Plano,ST=Texas,C=US'

opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
opendistro_security.audit.config.disabled_rest_categories: NONE
opendistro_security.audit.config.disabled_transport_categories: NONE

logging.yml: |-
# you can override this by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console
logger:
  # log action execution errors for easier debugging
  action: DEBUG
  # reduce the logging for aws, too much is logged under the default INFO
  com.amazonaws: WARN
appender:
  console:
    type: console
    layout:
      type: consolePattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

config.yml: |-
_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
    authc:
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: intern
      ldap:
        description: "Authenticate via LDAP or Active Directory"
        transport_enabled: true
        order: 2
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: ldap
          config:
             enable_ssl: true
             enable_start_tls: false
             enable_ssl_client_auth: false
             verify_hostnames: false
             hosts:
             - tfsdirectory.tfs.toyota.com:636
             bind_dn: 'CN=srv_unix_adjoin_prd,OU=Service Accounts,OU=Unix,DC=TFS,DC=Toyota,DC=com'
             password: <CHANGE SERVICE ACCOUNT AND PUT ITS PASSWORD>
             userbase: 'DC=TFS,DC=Toyota,DC=com'
             usersearch: '(&(objectClass=user) (sAMAccountName={0}))'
             username_attribute: cn
    authz:
      ldap:
        http_enabled: true
        transport_enabled: true
        authorization_backend:
          type: ldap
          config:
             enable_ssl: true
             enable_start_tls: false
             enable_ssl_client_auth: false
             verify_hostnames: false
             hosts:
             - tfsdirectory.tfs.toyota.com:636
             bind_dn: 'CN=srv_unix_adjoin_prd,OU=Service Accounts,OU=Unix,DC=TFS,DC=Toyota,DC=com'
             password: <CHANGE SERVICE ACCOUNT AND PUT ITS PASSWORD>
             userbase: 'DC=TFS,DC=Toyota,DC=com'
             usersearch: '(&(objectClass=user) (sAMAccountName={0}))'
             username_attribute: cn
             rolebase: 'OU=Domain Security Groups,OU=Domain Groups,OU=TFS NA,DC=TFS,DC=Toyota,DC=com'
             rolesearch: '(member={0})'
             userroleattribute: null
             userrolename: none
             rolename: cn
             resolve_nested_roles: true
             skip_users:
               - kibanaserver
               - admin

internal_users.yml: |-
# This is the internal user database
# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh
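# For example (the path is an assumption for the OpenDistro image; adjust to your install):
#   plugins/opendistro_security/tools/hash.sh -p 'my-new-password'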

_meta:
  type: "internalusers"
  config_version: 2

# Define your internal users here

## Internal default users

admin:
  hash: "$2y$12$zdYWp4iGmzitcLteUcMQVO1swkmbYPc2bl26qJEnw00ziUeed70a."
  reserved: true
  backend_roles:
  - "admin"
  description: "admin user"

kibanaserver:
  hash: "$2y$12$6TO6JkdR.3qVLYym/omb2Od7MbRfSM.43qQ/tFGdobLw8iQ8hIW/W"
  reserved: true
  description: "kibanaserver user"

Thanks for looking into this, Christian.

Freddie2020

I have no experience with OpenDistro or their images, nor with how they are set up, so I will unfortunately not be able to help. I would recommend you ask on the OpenDistro forum.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.