X-Pack Logstash monitoring and management issue

Hi,

I'm new to the ELK Stack. I just installed ELK (7.2.0) from RPM packages on a CentOS 7 VM, with Nginx as a reverse proxy and SELinux disabled. I then started the trial license for the Elastic Stack features and enabled X-Pack in all of the ELK configurations:

/etc/logstash/logstash.yml

node.name: elastic_node
path.data: /var/lib/logstash
path.logs: /var/log/logstash
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: ${ES_USER_ADM}
xpack.monitoring.elasticsearch.password: ${ES_PWD_ADM}
xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
xpack.monitoring.collection.interval: 10s
xpack.monitoring.collection.pipeline.details.enabled: true
xpack.management.enabled: true
xpack.management.elasticsearch.username: ${ES_USER_ADM}
xpack.management.elasticsearch.password: ${ES_PWD_ADM}
xpack.management.elasticsearch.hosts: ["http://localhost:9200"]
xpack.management.logstash.poll_interval: 5s
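
The ${ES_USER_ADM} and ${ES_PWD_ADM} references are placeholders resolved from the Logstash keystore (or from environment variables); assuming the keystore, a minimal sketch of how such entries can be created on an RPM install with the default paths:

sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ES_USER_ADM
sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash add ES_PWD_ADM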

/etc/kibana/kibana.yml

server.port: 5601
server.host: "127.0.0.1"
server.name: "elastic.internal_domain.com"
elasticsearch.hosts: ["http://127.0.0.1:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "encrypted"
xpack.security.encryptionKey: "encrypted"
xpack.security.sessionTimeout: 900000
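
As a quick sanity check that Kibana can reach Elasticsearch with these credentials, its status API can be queried (a generic check only; adjust the user and password as needed):

curl -u elastic http://127.0.0.1:5601/api/status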

/etc/elasticsearch/elasticsearch.yml

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
xpack.security.enabled: true
discovery.type: single-node
xpack.security.authc.realms.native.dm_native.order: 0
xpack.security.authc.realms.active_directory.dm_ru.order: 0
xpack.security.authc.realms.active_directory.dm_ru.domain_name: internal_domain.com
xpack.security.authc.realms.active_directory.dm_ru.url: ldap://10.10.10.10:3268, ldap://10.10.10.11:3268
xpack.security.authc.realms.active_directory.dm_ru.load_balance.type: "round_robin"
xpack.security.authc.realms.active_directory.dm_ru.bind_dn: ldap_elastic@internal_domain.com
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
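
To confirm that security and monitoring are actually active on the Elasticsearch side, the X-Pack info API can be queried (generic check, using the elastic superuser or another admin account):

curl -u elastic 'http://localhost:9200/_xpack?pretty'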

I'm able to manage Kibana and Elasticsearch from the web GUI. I can see Pipelines under Logstash, but new pipelines don't work. I also don't see Logstash in Monitoring; only Kibana and Elasticsearch are listed there.
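
In case it helps with diagnosis, the monitoring indices can also be listed directly; as far as I understand, a .monitoring-logstash-7-* index should appear once Logstash ships metrics (the command below assumes the elastic superuser):

curl -u elastic 'http://localhost:9200/_cat/indices/.monitoring-*?v'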

Please help. Thank you.

Update:

When I start Logstash from the command line, I see these errors:

Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-07-19T16:46:05,511][INFO ][logstash.configmanagement.bootstrapcheck] Using Elasticsearch as config store {:pipeline_id=>["main"], :poll_interval=>"5000000000ns"}
[2019-07-19T16:46:07,364][INFO ][logstash.configmanagement.elasticsearchsource] Configuration Management License OK
[2019-07-19T16:46:07,923][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.2.0"}
[2019-07-19T16:46:09,052][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2019-07-19T16:46:09,053][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2019-07-19T16:46:09,137][INFO ][logstash.configmanagement.elasticsearchsource] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_admin_user:xxxxxx@localhost:9200/]}}
[2019-07-19T16:46:09,146][WARN ][logstash.configmanagement.elasticsearchsource] Restored connection to ES instance {:url=>"http://logstash_admin_user:xxxxxx@localhost:9200/"}
[2019-07-19T16:46:09,153][INFO ][logstash.configmanagement.elasticsearchsource] ES Output version determined {:es_version=>7}
[2019-07-19T16:46:09,153][WARN ][logstash.configmanagement.elasticsearchsource] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-19T16:46:09,214][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2019-07-19T16:46:16,271][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", password=><password>, hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"53c81292ae8ac1011ba30697f6560abdc1e89190308737a636596f3d30aada7a", user=>"logstash_admin_user", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_b28001a5-3d0f-4fea-b8cf-2c3ef0ac302e", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-07-19T16:46:16,473][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_admin_user:xxxxxx@localhost:9200/]}}
[2019-07-19T16:46:16,522][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_admin_user:xxxxxx@localhost:9200/"}
[2019-07-19T16:46:16,533][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-07-19T16:46:16,533][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-07-19T16:46:16,556][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2019-07-19T16:46:16,707][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0xf578ca2 run>"}
[2019-07-19T16:46:16,793][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2019-07-19T16:46:16,904][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
[2019-07-19T16:46:17,410][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2019-07-19T16:46:27,035][ERROR][logstash.inputs.metrics  ] Failed to create monitoring event {:message=>"For path: events. Map keys: [:reloads, :pipelines]", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}
[2019-07-19T16:47:27,125][ERROR][logstash.inputs.metrics  ] Failed to create monitoring event {:message=>"For path: events. Map keys: [:reloads, :pipelines]", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

I also tried the elastic credentials in Logstash, since elastic is a superuser. The result is the same.

I checked "_cluster/settings":

{
  "persistent" : { },
  "transient" : { }
}

I changed them to:

{
  "persistent" : {
    "action" : {
      "auto_create_index" : "true"
    }
  },
  "transient" : { }
}

because I couldn't find any system index like .monitoring-logstash*, which I assume should exist.
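
For completeness, a change like that can be applied through the cluster settings API, roughly like this (sketch only, using whichever admin user is available):

curl -u elastic -X PUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"persistent": {"action.auto_create_index": "true"}}'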
